MATT SIMON, Wired; Want to Get Along With Robots? Pretend They’re Animals
Robotics ethicist Kate Darling surveys our history with animals—in work, war, and companionship—to show how we might develop similar relationships with robots.
"WIRED: That brings us nicely to the idea of agency. One of my favorite moments in human history was when animals were put on trial—like regularly.
KD: Wait. You liked this?
WIRED: I mean, it's horrifying. But I just think that it's a fascinating period in legal history. So why do we ascribe this agency to animals that have no such thing? And why might we do the same with robots?
KD: It's so bizarre and fascinating—and seems so ridiculous to us now—but for hundreds of years of human history in the Middle Ages, we put animals on trial for the crimes they committed. So whether that was a pig that chewed a child's ear off, or whether that was a plague of locusts or rats that destroyed crops, there were actual trials that progressed the same way that a trial for a human would progress, with defense attorneys and a jury and summoning the animals to court. Some were not found guilty, and some were sentenced to death. It’s this idea that animals should be held accountable, or be expected to abide by our morals or rules. Now we don't believe that that makes any sense, the same way that we wouldn't hold a small child accountable for everything.
In a lot of the early legal conversation around responsibility in robotics, it seems that we're doing something a little bit similar. And, this is a little tongue in cheek—but also not really—because the solutions that people are proposing for robots causing harm are getting a little bit too close to assigning too much agency to the robots. There's this idea that, “Oh, because nobody could anticipate this harm, how are we going to hold people accountable? We have to hold the robot itself accountable.” Whether that's by creating some sort of legal entity, like a corporation, where the robot has its own rights and responsibilities, or whether that's by programming the robot to obey our rules and morals—which we kind of know from the field of machine ethics is not really possible or feasible, at least not anytime soon."