Artificial intelligence will soon raise difficult ethical and legal questions

I’m now on day seven at Singularity University and we’ve just had a workshop on legal and ethical issues that will soon come from artificial intelligence. The backdrop is that computers are getting more intelligent all the time and it won’t be long before we have artificial intelligences (AIs) that do things which upset people. We looked at three examples:

  • A military robot that killed six desert villagers who were carrying guns and failed to respond to a siren alert, but otherwise showed no aggression
  • A domestic robot which helped care for an elderly gentleman who died from liver failure after the robot failed to notice that he was putting vodka in his water glass
  • A company whose AIs did a lot of good, but had one rogue AI that hacked bank accounts, ran drugs and organised a prostitution ring

In all three cases the AI exists to do some good but has screwed up, raising the question of whether the good outweighs the bad. Weighing the general good against individual cases of harm is always tough, and the devil is in the detail. In the first example the military setting makes it relatively straightforward to ask how many lives have been saved and balance that against lives lost. The second and third examples can be analysed in the same way, although the weighing up will be harder. The point is that these discussions are coming and that they will be difficult. They would be difficult if the protagonists were humans, but when they are AIs it will creep people out and additional emotions will come into play.

The other big question that comes up is who should be responsible. Should it be the manufacturer or the person who buys and uses the AI? Or should it be the AI itself? And that takes us to the biggest question of all – should AIs be regarded as alive in their own right, with everything that entails?

These questions are not upon us yet, but they are not far away. I think there is a lot of good to come from these systems, but there is a risk that they will be built without adequate safety mechanisms and that there will be a destructive backlash when the inevitable accidents happen.

UPDATE: We’ve just been looking at a Google self-driving car. The same questions apply. Accidents will inevitably happen; hopefully they will be fewer than if humans were driving, but they will still happen. In that case who is responsible – Google? The car manufacturer? Or the occupant?