Murphy's Law in Action Discussion Board
Subject: The Robopocalypse Came And Went
SYSOP    7/20/2014 7:58:42 AM
 
keffler25       7/21/2014 12:03:36 PM
Robots building robots is here. It's not the civilians who are afraid of it; it's the military. War's chaos is bad enough without out-of-control machines adding to it.
 

ker       7/22/2014 10:06:00 AM
Different nations will advance this kind of tech in their own ways. After the nuclear power plant failure in Japan, a push was made to produce a robot that could drive a cart, use human tools, climb a ladder, turn a valve, and replace a pump. If that kind of machine is in mass production and can be reprogrammed by end users demonstrating tasks, rather than by keyboarding computer code, then it becomes the low-cost option for an autoloader in tanks. It can also take part of the tank maintenance load off the crew. Japan has a strong tech base, a contracting pool of military-age manpower, and no draft. Facing off against China, Russia, and North Korea gives Japan motive to use bots. It could be a breakout situation where they have thoughts about quickly fielding something but keep them to themselves. Any R&D work would be non-defense, just like NASA and the Department of Energy in the U.S. It's dual-use tech.
 

WarNerd       7/22/2014 12:29:13 PM
There is a world of difference between a robot that flies waypoints to a human-selected location, matches a static image, and then crashes into it, and a robot that searches for and picks its own targets.  We have some that do the latter, usually static devices called [land or sea] 'mines'.  Sensor-fuzed munitions are also claimed to fall into that category, though the level of human supervision is greater than for mines and the independent action lasts only a few seconds.
 
The problem is that if one of these robotic vehicles makes a mistake, you have to assume all will make the same mistake and pull them from service until new software arrives.  Testing complex software takes time; witness the F-35's problems in this regard.
 

ker       7/22/2014 3:00:40 PM
Not all organizations will use the tech the same way. All kinds of analysis paralysis and fear of bad press will prevent use by some groups. Other groups with different incentives will move forward. They may ask to be paid to slow down their robot weapons programs even if they don't really have much of one. They may make Iranian-style press releases about how they can produce weapons that Western nations can't. In fact, they might be repainting Western industrial machines and strapping guns and video game parts onto them. (Yes, video game tech: the hard robot problem of "simultaneous localization and mapping" was overcome using Kinect video game hardware, which sees how players move their bodies and feeds that in place of game-controller input. Someone discovered it can capture the locations of walls and objects as data as well.) The Iranians could create the illusion that they are ahead in tech when in fact they were just more eager to look scary. For them that could be a win-win. Some armies/mobs still think scary is a feature, not a bug. Others
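The Kinect/SLAM point above can be made concrete. A minimal sketch of the mapping half only, in Python, assuming the robot's pose is already known (real SLAM must estimate pose and map simultaneously); all names and the grid dimensions are hypothetical:

```python
import math

# Toy illustration: project a depth scan (ranges at known bearing
# angles, as a Kinect-style sensor might report) into a 2D occupancy
# grid, marking the cells where beams terminated as "wall/object".

GRID = 20     # grid is GRID x GRID cells
CELL = 0.25   # metres per cell

def scan_to_cells(pose, scan):
    """pose = (x, y, heading_rad); scan = list of (bearing_rad, range_m).
    Returns the set of (row, col) grid cells the beams hit."""
    x, y, heading = pose
    occupied = set()
    for bearing, rng in scan:
        # World-frame coordinates of the point the beam hit.
        wx = x + rng * math.cos(heading + bearing)
        wy = y + rng * math.sin(heading + bearing)
        col, row = math.floor(wx / CELL), math.floor(wy / CELL)
        if 0 <= col < GRID and 0 <= row < GRID:
            occupied.add((row, col))
    return occupied

# A robot at the origin facing +x, seeing a wall about 1 m ahead.
wall = scan_to_cells((0.0, 0.0, 0.0), [(-0.1, 1.0), (0.0, 1.0), (0.1, 1.0)])
print(sorted(wall))
```

The hard part this sketch skips is exactly what made SLAM a research problem: when the pose itself must be inferred from the same noisy scans that build the map.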
 

Reactive       7/23/2014 2:36:33 PM
There is a world of difference between a robot that flies waypoints to a human-selected location, matches a static image, and then crashes into it, and a robot that searches for and picks its own targets.  We have some that do the latter, usually static devices called [land or sea] 'mines'.  Sensor-fuzed munitions are also claimed to fall into that category, though the level of human supervision is greater than for mines and the independent action lasts only a few seconds.
 
 
Why is the duration of AI decision-making important? Surely the important thing is that several weapon systems currently available, which seek targets of opportunity within a given threat library, already make the decision to engage or not engage? Not only that, but they have gained this capability without so much as a peep from anyone; whether it is a loitering system or the terminal phase of a missile is irrelevant.
 
 

WarNerd       7/23/2014 10:45:51 PM
Why is the duration of AI decision-making important? Surely the important thing is that several weapon systems currently available, which seek targets of opportunity within a given threat library, already make the decision to engage or not engage? Not only that, but they have gained this capability without so much as a peep from anyone; whether it is a loitering system or the terminal phase of a missile is irrelevant.
It is very relevant in terms of the source of any accidental deaths. A sensor-fuzed munition is basically a more accurate cluster bomb, with the same footprint. The decision to deploy it, and the location, is entirely human. An ALCM sent to just find and kill something unknown to the firer, "just kill something over there, 500 ± 100 miles", without human confirmation of a valid target, is entirely different.
 

Reactive       7/24/2014 5:20:50 PM

 
 

Reactive       7/24/2014 8:56:41 PM
My post went AWOL.
 
What I said (or thought I'd said) was that it's not just the CBU-97 and similar that have semi-autonomous guidance, i.e. a human chooses release point and direction, and the system selectively engages targets matched to a threat library.
 
• SADARM  
• Brimstone 
• Switchblade 
• Loitering Attack Missile (Cancelled)
• ALARM, in loitering mode (high-altitude parachute), to hit pop-up emitters.
 
And then we have
 
• Dozens of varieties of torpedoes.
• Arguably most currently deployed AShM variants: sent to the vicinity of a suspected target and then either threat-matched or not, depending on the operator.
 
The key to all of this is the ROE that apply at the time of use, threat libraries being tailored to the theatre in question, and the system's sensors and software getting the match correct. We're not yet talking about systems that go off and decide whether or not to start a war, or whether a given person looks suspicious enough to constitute a threat, but rather systems that match known threats against a library and engage using simple machine logic.
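That "match against a library, engage only under ROE" logic really is simple machine logic. A hedged sketch in Python; the signature strings, threat classes, and return values are all illustrative, not drawn from any real system:

```python
# Toy seeker logic: classify a sensed contact against a library of
# known signatures, then let the ROE active for this sortie gate the
# engagement decision. Anything not in the library is never engaged.

THREAT_LIBRARY = {
    "radar_band_X_mobile": "SAM_radar",
    "tracked_vehicle_hot": "armour",
    "small_boat_fast": "FIAC",
}

def decide(contact_signature, roe_released_classes):
    """Return 'engage' only on a library match the ROE has released."""
    threat_class = THREAT_LIBRARY.get(contact_signature)
    if threat_class is None:
        return "ignore"    # unknown signature: not a library match
    if threat_class not in roe_released_classes:
        return "report"    # known threat, but not released for this sortie
    return "engage"

# Operator has released only radar emitters for this mission.
print(decide("radar_band_X_mobile", {"SAM_radar"}))   # engage
print(decide("tracked_vehicle_hot", {"SAM_radar"}))   # report
print(decide("fishing_boat", {"SAM_radar"}))          # ignore
```

Note that every judgment call lives outside the function: the human decides what goes in the library, which classes the ROE release, and where the weapon is sent; the machine only does the lookup.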
 
My point is that you're refusing to acknowledge that many of these systems are "a robot that searches for and picks its own targets", with the caveat that a man in the loop decides that in a given vicinity there is likely to be a threat and that, IF there is, it is permissible to prosecute. Whether that range and duration is 1 km and 10 s or 1,000 km and 10 hours is immaterial.
 

Reactive       7/24/2014 9:26:53 PM
In fairness, you can probably remove torpedoes from that list as long as you are referring to Western arsenals and not systems, such as those sold (or copied) by Russia and China to countries like North Korea and Iran, that simply search for any vessel of appropriate size and try to sink it.
 
