Are We ‘Wired for War’ With Cylons? Part II: The Rise of Moral Challenges


In my last posting, I introduced some issues at the intersection of robotics, artificial intelligence (AI), and morality. While I’ve long been interested in this nexus, the most immediate impetus for the posting was meeting Peter Singer, author of the excellent book "Wired for War" about the rise of unmanned warfare, while simultaneously working for the TV show "Caprica" and a U.S. military research agency that funds some of the work in my laboratory on bio-inspired robotics. "Caprica," for those who don’t know it, is a show about a time when humans invent sentient robotic warriors. "Caprica" is a prequel to "Battlestar Galactica," and as we know from that show, these warriors rise up against humans and nearly drive them to extinction.

Here, I’d like to argue that, as interesting as the technical challenges of building sentient robots like those on "Caprica" are, the moral challenges of building such machines are equally interesting. But “interesting” is too dispassionate a word; I believe we need to begin the conversation on these moral challenges now. Roboticist Ron Arkin has been making this point for some time, and has written a book on how we might integrate ethical decision-making into autonomous robots.

Given that we are hardly at the threshold of building sentient robots, it may seem overly dramatic to characterize this as an urgent concern, but new developments in the way we wage war should make you think otherwise. A telling sign of how things are changing came when I recently tuned in to the live feed of WTOP, the most popular radio station in Washington, DC. The station aired commercial after commercial from iRobot (of Roomba fame), a leading builder of unmanned military robots, clearly targeting military listeners. These commercials reflect how the number of unmanned robots fielded by the military has gone from close to zero in 2001 to more than ten thousand today, with the pace of acquisition still accelerating. For more details, see Singer’s "Wired for War" or the March 2010 Congressional Hearing on the Rise of the Drones.

While we are all aware of these trends to some extent, they have hardly become a significant public concern. We are comforted by the knowledge that the final kill decision is still made by a human. But is this comfort warranted? The weight of that decision changes as both the way war is conducted and the highly processed information supporting the decision become mediated by unmanned military robots.

Some of these trends have been helpful to our security. For example, drones have been effective against the Taliban and Al-Qaeda because they can carry out long-duration monitoring of, and attacks on, sparsely distributed non-state actors. However, in a military context, unmanned robots are clearly the gateway technology to autonomous robots, in which machines may eventually be in the position to make decisions that carry moral weight.

“But wait!” many will say, “Isn’t this the business-as-usual, robotics-and-AI-are-just-around-the-corner argument we’ve heard for decades?” Robotics and AI have long been criticized for promising more than they could deliver. Are there signs that this could be changing? While an enormous amount could be said about the reasons for AI’s past difficulties, it is clear that some of them stem from too narrow a conception of what constitutes intelligence, a topic I’ve touched on in the recent Cambridge Handbook of Situated Cognition.

This narrow conception revolved around what might loosely be described as cognitive processing or reasoning. Newer approaches, such as embodied AI and probabilistic robotics, try to integrate aspects of intelligence beyond symbol processing: sensing the outside world, dealing with the uncertainty in those signals in order to remain highly responsive, and emotional processing. Advanced multi-sensory signal-processing techniques such as Bayesian filtering were in fact integral to the success of Stanley, the autonomous robot that won DARPA’s Grand Challenge by driving, without human intervention, across a challenging desert course.
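To give a flavor of what "dealing with uncertainty" means in practice, here is a minimal, illustrative sketch of the core idea behind Bayesian filtering. It is not taken from Stanley or any real system; all the numbers and function names are made up for the example. The robot's belief about a quantity (here, its position along a line) is a probability distribution that is widened each time the robot moves (motion is uncertain) and narrowed each time a noisy sensor reading arrives.

```python
import random

def predict(mean, var, motion, motion_var):
    """Motion step: shift the belief and widen it, since motion is uncertain."""
    return mean + motion, var + motion_var

def update(mean, var, measurement, meas_var):
    """Measurement step: pull the belief toward the reading and narrow it."""
    gain = var / (var + meas_var)          # how much to trust the sensor
    new_mean = mean + gain * (measurement - mean)
    new_var = (1 - gain) * var
    return new_mean, new_var

# Simulate a robot commanded to move 1 unit per step along a line,
# with a noisy range sensor reporting its true position.
true_pos, mean, var = 0.0, 0.0, 1.0
for step in range(5):
    true_pos += 1.0
    mean, var = predict(mean, var, motion=1.0, motion_var=0.5)
    noisy_reading = true_pos + random.gauss(0, 0.4)
    mean, var = update(mean, var, noisy_reading, meas_var=0.16)
    print(f"step {step}: estimate={mean:.2f} (var={var:.3f}), truth={true_pos:.2f}")
```

Even in this toy form, the filter converges on a good estimate despite noisy motion and noisy sensing; systems like Stanley apply the same predict-and-update logic, at far greater scale and dimensionality, to fuse laser, radar, camera, and GPS data.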

As these technical problems are overcome, autonomous decision-making will become more common, and eventually it will raise moral challenges. One challenge concerns how we should behave toward artifacts, be they virtual or robotic, once they are endowed with a level of AI at which our treatment of them becomes a moral issue. The flip side is how they treat us, especially in military or police contexts. What happens when an autonomous or semi-autonomous war robot makes an error and kills an innocent? Do we place responsibility on the designers of its decision-making systems, on the military strategists who placed machines with known limitations into contexts they were not designed for, or on some other entity?

Both of these challenges concern morality and ethics. But it is not clear whether our current moral framework, a hodgepodge of religious values, moral philosophies, and secular humanist values, is up to the task of responding to them. It is for this reason that the future of AI and robotics will be as much a moral challenge as a technical one. Yet while we have many smart people working on the technical challenges, very few are working on the moral ones.

How do we meet the moral challenge? One possibility is to look toward science for guidance. In my next posting I’ll discuss some of the efforts in this direction, pushed most recently by a new, activist form of atheism, which holds that it is incorrect, even dangerous, to think that we need religion to ground morality. We can instead, they claim, look to the new sciences of happiness, empathy, and cooperation to ground our value system.


Comments

Cartesian philosophy has been developing the concept of the human machine, but I prefer to understand it in relation to things like drones or exoskeletons, because I am not sure that progress lies in the direction of an independent robot.

Let me say this: more and more robotic systems are taking over manual labor. An even greater moral dilemma is that combat drones often cannot tell friend from foe, especially while human control is still necessary.
