Two parts of my life recently collided. Two ironies, really. First, after growing up off the grid, without TV (or school, for that matter, but that's another story), with parents who had only disdain for TV and popular culture, I find myself working for a new cable TV show. The show, called "Caprica," is about a time when humans develop advanced robots for warfare. These robots, the "Cylons," eventually develop minds of their own, begin to resent their enslavement to humans, rise up, and nearly extinguish their makers. This part of the story will be familiar to fans of the TV series "Battlestar Galactica," to which Caprica is a prequel. I work as a technical script consultant for the show, which means I read pre-production scripts with an eye to making the robotics, artificial intelligence, and other such sci-tech in the show more realistic.
Some publicity about my involvement with Caprica led directly to the second irony. Last week, Northwestern University hosted Peter Singer, author of "Wired for War," a recent bestseller about the perils of robotic warfare. I was invited to meet with him for dinner after his talk. The only problem was that his talk fell on the same day as a meeting I had at the headquarters of a military research agency in Washington, DC, which funds some of the research in my laboratory on biologically inspired robotics. I was able to shift to an earlier flight so I could catch the last part of Singer's evening talk and join him for dinner.
Robotic warfare, as we all know from media reports about drones, is of rapidly growing importance. It is based on research funded by a number of US government military research agencies. Singer (a defense analyst at the Brookings Institution, not the controversial ethicist from Princeton) is not calling for an end to the development of such robots. Instead, he wants a conversation to begin about how we deal with the issues of culpability that arise when the robots we develop make an independent, and faulty, decision to end a human life.
This brings me back to Cylons, and Caprica, a show that envisages a time when robots develop the capacity to be self-aware, make independent decisions to kill, and eventually collude to rebel against us. What is the likelihood of something like this scenario actually occurring? Will we one day have to grant moral rights to our inventions, perhaps to avoid such a rebellion? Will our mechanical intelligences supersede us? These are clearly highly speculative questions, more commonly the stuff of science fiction plots than of sober consideration. But with the rapid rise of robotic warfare, and the push to make it ever more autonomous and lethal, they warrant a new look. In the next few posts, I'll explore these quandaries and consider some of the more speculative questions triggered by the conjunction of the real world of robotic warfare and the fictional world of Caprica and its resentful robotic warriors.