Murphy's Law: The Robopocalypse Came And Went

July 20, 2014: While many civilians fret over the use of “robotic” aircraft, such beasts don’t really exist yet. Then there is the similar fear of robotic weapons being unleashed on the world. That is more than an irrational fear; it is an inaccurate understanding of weapons that have been around for a long time. Robotic killer weapons have existed for over half a century, and no one seems to have noticed.

Meanwhile the troops can’t wait to get truly robotic aircraft. Not just aircraft that can take off, fly around doing something useful, and then land by themselves, but also software that can interpret what the cameras are seeing. None of this is science fiction; it is already happening bit by bit. That’s how technology works, and the troops would like this particular process sped up. Currently there are some smaller UAVs (like the Raven) that can be launched by hand (by throwing them), automatically fly a programmed route at a set speed and altitude, and then return and land (via a controlled crash). In this case the operator mainly acts as an observer, looking at the video for anything the unit commander (often standing nearby or watching the video as well) can use. With these systems the operator can interrupt the automatic flight path and have the UAV circle a spot, or control the UAV himself. But these UAVs are not fully robotic, and they aren’t armed.

What the UAV operators want is more powerful and reliable automatic flight software, as well as pattern analysis software good enough to understand what is being watched. Flight control software has been around for decades and is regularly used by manned aircraft. Some UAVs currently use this software simply to fly from one airfield to another, as well as for automatic takeoffs and landings. What the UAV users want is flight software merged with pattern recognition software (applied to the video cameras) that alerts human controllers if something of interest is spotted below and keeps watching the find until ordered to move on by the human controller.

There is also demand for UAVs that will fire at the enemy on their own. That is on the way. In 2011 an American firm developed software that enabled an armed UAV to seek out, identify, and attack (with a missile) targets, without any human intervention. While this created some alarming headlines, this capability is nothing new and first appeared during World War II. Work on these World War II robotic weapons has continued since then, much to the joy of the occasional journalist looking for a scary story.

Meanwhile the military keeps encouraging research in this area. In 2009 the U.S. Air Force released a report (Unmanned Aircraft Systems Flight Plan 2009-2047) predicting the eventual availability of flight control software that would enable UAVs to seek out and attack targets without human intervention. This alarmed many people, especially those who didn't realize this kind of software has been in service for a long time.

It all began towards the end of World War II, when "smart torpedoes" first appeared. These weapons had sensors that homed in on the sound of surface ships. The torpedo followed the target until a magnetic fuze detected that the torpedo was underneath the ship and detonated the warhead. Acoustic homing torpedoes saw use before the war ended, and even deadlier wake homing torpedoes were perfected and put into service (by Russia) in the 1960s. These “wake homing” torpedoes detected the wake of a ship, followed it to where the ship currently was, and detonated.

Another post-World War II development was the "smart mine." This was a naval mine that lay on the bottom, in shallow coastal waters. The mine had sensors that detected noise, pressure, and metal. With these three sensors the mine could be programmed to detonate only when certain types of ships passed overhead. Thus, with both the smart mines and the torpedoes, once you deploy them the weapons are on their own, seeking out and destroying a target. For over a century, more primitive “contact mines” exploded when a ship ran into them. These weapons were not alarming to the general public, but aircraft that do the same thing are.

Smart airborne weapons have also been in use for decades. The most common is the cruise missile, which is given a target location and then flies off to find and destroy the target. Again, not too scary. But a UAV that uses the same technology as smart mines (sensors that find, and software that selects, a target to attack) is alarming. What scares people is that they don't trust software. Given the experience most of us have with software, that's a reasonable fear.

But the military operates in a unique environment. Death is an ever-present danger. Friendly fire occurs far more often than people realize (or even the military will admit). Combat troops are reluctant to talk about friendly fire (mainly because of guilt and PTSD/combat stress), even among themselves, and the military has had a hard time collecting data on the subject. After considerable effort (made several times after World War II), it was concluded that up to 20 percent of American casualties were from friendly fire. This helps explain why military people and civilians have different attitudes towards robotic killing machines. If smart UAVs bring victory more quickly, then fewer friendly troops will be killed (by friendly or hostile fire). Civilians are more concerned about the unintentional death of civilians or friendly troops. Civilians don't appreciate, as much as the troops do, the need to use "maximum violence" (a military term) to win the battle as quickly as possible.

The U.S. Air Force has good reason to believe that it can develop reliable software for autonomous armed UAVs. The air force, and the aviation industry in general, have already developed highly complex and reliable software for operating aircraft. For example, automatic landing software has been in use for over a decade. Flight control software handles many more mundane functions, like dealing with common in-flight problems. This kind of software makes it possible for military aircraft that are difficult (or, in the case of the F-117, impossible) to fly unaided to be controlled by a pilot. Weapons guidance systems have long used target recognition systems that work with a pattern recognition library, enabling many different targets to be identified and certain ones to be attacked. To air force developers, autonomous armed UAVs that can be trusted to kill enemy troops, and not civilians or friendly ones, are not extraordinary but the next step in a long line of software developments.

What civilians fear, and journalists exploit, are capabilities that weapons are a long, long way from having. These include persistence (the ability to keep at it for more than a few minutes or hours) and replication (robots building robots). Without those two capabilities, robotic weapons are no real threat to mankind. And that’s why there was no general panic when robotic torpedoes, smart mines, and guided missiles showed up over fifty years ago. But now we have selective memory colliding with headline hunting (or clickbait), and the road to the robopocalypse gets a little foggy.
