The Pentagon Doesn’t Trust Its Own Robots

The Pentagon’s science advisers want military robots to operate with far greater autonomy than they do today. Only one problem: there’s a cloud of distrust and misunderstanding hovering over the robots that the Pentagon already has.

That’s an unexpected conclusion in a July study from the Defense Science Board, recently acquired by Steve Aftergood of the Federation of American Scientists. (.pdf) The Board wondered what’s inhibiting the development of autonomous military vehicles and other systems. It found that the humans who have to interact with robots in high-stakes situations often labor under the misimpression that autonomy means the machine can do a human’s job, rather than help a human do her job more efficiently. And some simply don’t have faith that the robots work as directed.
There’s a “lack of trust among operators that a given unmanned system will operate as intended,” the Board found. One major reason: “Most [Defense Department] deployments of unmanned systems were motivated by the pressing needs of conflict, so systems were rushed to theater with inadequate support, resources, training and concepts of operation.” War may spur innovation, but it’s not always the best place to beta-test.
And there’s a deeper, conceptual problem behind the frustration. “Treating autonomy as a widget or ‘black box’ supports an ‘us versus the computer’ attitude among commanders rather than the more appropriate understanding that there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines,” the Board found.
People who study military robotics have started to hear similar frustrations. Top officers have handed robot-makers laundry lists of functionality they want in next-gen 'bots, seemingly unaware that the robots they already have can do things the officers still consider science fiction. "Essentially you are seeing the frustration that comes from the divide between the research folks and the rest of the system – the buyers, the users, the budget and legal folks," says Peter Singer of the Brookings Institution, who wrote an authoritative book about military robotics.
This is an opportune moment to take stock of where the military stands with its unmanned systems as it considers boosting their autonomous functions for a new generation. About a third of the military's entire air fleet is robotic. The drones on deck will take on missions previously reserved for advanced pilots, like flying off aircraft carriers — which the Navy wants to make as autonomous an operation as possible. Meanwhile, the blue-sky researchers at Darpa want to design surveillance cameras with filtration software so analysts aren't stuck sifting through terabytes of irrelevant video feeds.
The Defense Science Board doesn’t single out specific programs for poor user interfaces or dubious functionality. (A slight exception is the Air Force’s Global Hawk surveillance drone, dinged for hogging bandwidth.) Generically speaking, the Board chides existing unmanned vehicles for performing “manned operational functions off-board over a communication link, which often results in cumbersome operator control systems, brittle operations and less robust capability than could otherwise be achieved with onboard processing.” Nor is it clear how the Board drew its conclusions about the military’s distrust of its robots. There’s no poll about how the brass, commanders or frontline operators feel about their mechanized tools or companions.
The human-robot interaction isn’t the only obstacle to furthering robot autonomy. The Board is also concerned about the data glut from increased surveillance; proprietary software; and the possibility of high-stakes failure (like, say, spoofing a drone that’s supposed to fly itself). The Pentagon also tends to buy hardware before perfecting the software that operates it.
Primarily, the Board wants “some military leaders” to stop thinking of “computers making independent decisions and taking uncontrolled action” when they think of the word “autonomy.” Instead, they should think of autonomy as a partnership: “all autonomous systems are joint human-machine cognitive systems,” the Board writes. “It should be made clear that all autonomous systems are supervised by human operators at some level, and autonomous systems’ software embodies the designed limits on the actions and decisions delegated to the computer.”
Brookings’ Singer says he’s seeing a conceptual divide take shape along the lines the Board describes. On the one side are robot-makers. On the other is the brass.
“The developers are frustrated by all the misnomers and misperceptions that surround robotics and autonomy, seeing them connected to a few of the bureaucratic roadblocks being put up to block innovations,” Singer says in an email. “In many ways, the discussion of autonomy is caught between two false extremes, either the idea that it only means some fully conscious C3PO or Terminator making all its own decisions or that it is a fiction and everything has to be remotely operated, forever (ahem, remotely piloted aircraft).
“In the first case,” Singer continues, “you’ll hear people talk about machines as if there is no human role whatsoever left and in the latter you’ll hear talk about things that robots already do now (take off and land on own, air to air refuel, target acquisition, etc.) described as ‘never, ever possible.’ The reality right now is, of course, happening between those false extremes.” (Full disclosure: Singer supervises the editor of this blog at the Brookings Institution.)
Accordingly, the human-robot partnership simply isn’t as good as it could be. The Pentagon’s robots “have not taken full advantage of proven autonomous capabilities in automated take-off and landing, waypoint navigation, automatic return to base upon loss of communications and path planning,” the Board found. And there are several critical military needs that the Board considers ripe for increased autonomous systems, like “situational awareness; multi-agent communication; information/network management; [and] failure anticipation/replanning.”
Some of those failures are conceptual. Others seem more like engineering flaws. The Board finds “poor design” in unnamed military robots that makes them difficult or unreliable for humans to operate. There’s also a tendency for the Pentagon to focus on building a piece of hardware, which cuts against the “primacy of software” necessary for creating an autonomous system. The result is “a lack of trust that the autonomous functions of a given system will operate as intended in all situations.” (Emphasis in the original.)
The Board doesn’t say this, but Singer suspects some of this distrust is motivated by bureaucratic intransigence at top military levels. “Change can be scary, especially if you are top dog now. So they’ll often tag the flaws or limits of the first generation of a technology as reason not to change at all,” Singer says. “It’s the parallel to what Major General John Herr of the Army Cavalry argued to the U.S. Congress in early 1939: ‘We must not be led to our own detriment to assume that the untried machine can displace the proved and tried horse… Not one more horse will I give up for a tank.’ Substitute ‘manned’ and ‘unmanned’ and you’d capture a few leaders’ sentiments today.”
Clearly, the Board isn’t looking for any robotic system that renders humans irrelevant. It’s looking to automate more of the systems the military already uses, so that humans can operate more efficiently. That leads the Board to urge the Pentagon to focus less on hardware and more on software development, and to create science and technology programs that emphasize, among other things, “natural user interfaces and trusted human-system collaboration.” Whether it’s a tablet or a drone that flies off an aircraft carrier, that’s the best way to get a human to trust a machine.