As of now, society does not think of robots as moral agents. They are not “persons” in the philosophical sense, because they lack the liberty we have. In that respect they are no different from animals, blindly following their programming and thus incapable of choosing not to do harm, and we stopped putting animals on trial centuries ago for precisely that reason.

You can’t really view an animal’s “source code,” though. Genes may pump out or regulate proteins, sure, but what they do and how they do it is too complicated to understand in all but the simplest cases. In contrast, programming code is designed to be human readable, hence the existence of coding guidelines. We can verify how robots make decisions based on the environment and prior knowledge. That’s better than we can do for humans, and a good argument for granting them liberty.
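As a concrete (and entirely hypothetical) illustration of that inspectability, here is the kind of decision rule a reviewer could audit line by line. The function name, thresholds, and braking figure are all assumptions made up for this example, not code from any real robot:

```python
# Hypothetical sketch, not any real robot's code: a collision-avoidance
# rule written so an auditor can see exactly which inputs drive which decision.
def choose_action(obstacle_distance_m: float, speed_mps: float) -> str:
    """Return 'brake', 'swerve', or 'continue' from two sensor readings."""
    # Assume braking capability of about 4 m/s^2 for the stopping-distance estimate.
    stopping_distance = speed_mps ** 2 / (2 * 4.0)
    if obstacle_distance_m < stopping_distance:
        return "swerve"      # cannot stop in time, so steer around the obstacle
    if obstacle_distance_m < 2 * stopping_distance:
        return "brake"       # can stop, but only if braking starts now
    return "continue"        # plenty of room, no intervention needed

print(choose_action(obstacle_distance_m=10.0, speed_mps=15.0))  # -> swerve
```

Every branch is legible and every threshold can be questioned, which is exactly what you cannot do with a genome or a human brain.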

And if robots have liberty, they have morality. Increasingly, we’re trying to give robots a moral code.

The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems. […]

Ronald Arkin, an AI expert from Georgia Tech and author of the book Governing Lethal Behavior in Autonomous Robots, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a 2007 research paper. Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.

AI robotics expert Noel Sharkey is a detractor. He’s been highly critical of armed drones in general, and has argued that autonomous weapons systems cannot be trusted to conform to international law.

“I do not think that they will end up with a moral or ethical robot,” Sharkey told Defense One. “For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won’t really care. It will follow a human designer’s idea of ethics.” […]

This week, Sharkey and Arkin are debating the issue of whether or not morality can be built into AI systems before the U.N., where they may find an audience very sympathetic to the idea that a moratorium should be placed on the further development of autonomous armed robots.

This might seem somewhat like counting angels on pinheads; after all, it’s not like private citizens will be able to buy their own murderbots. But these issues will have a big impact on us.

Say, for example, a human-driven car runs a red light and a self-driving car has two options:

  1. It can stay its course and run into that car, killing the family of five inside it.
  2. It can turn right and hit another car in which a single person sits, killing that person.

What should the car do? […]

… can you imagine a world in which, say, Google or Apple places a value on each of our lives, a value that could be used at any moment to steer a car into us to save others? Would you be okay with that?
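To see why that question bites, here is a minimal, purely hypothetical sketch of the “minimize expected harm” rule the scenario imagines a manufacturer encoding. Every class name, count, and probability below is an assumption for illustration, not anyone’s actual system:

```python
# Hypothetical sketch only: a crude "minimize expected fatalities" rule of the
# kind the scenario above imagines. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    people_at_risk: int          # people likely killed if this maneuver is taken
    fatality_probability: float  # assumed likelihood of those deaths

def least_harm(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest expected number of fatalities."""
    return min(options, key=lambda m: m.people_at_risk * m.fatality_probability)

choice = least_harm([
    Maneuver("stay course", people_at_risk=5, fatality_probability=0.9),
    Maneuver("turn right", people_at_risk=1, fatality_probability=0.9),
])
print(choice.name)  # -> turn right
```

Written out this way, the discomfort is obvious: someone has to choose the counts and probabilities, and whoever does is implicitly putting a price on each life involved.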

And so there you have it: though the answer seems simple, it is anything but, which is what makes the problem so interesting and so hard. It is a question that will come up time and time again as self-driving cars become a reality.
