Saturday, October 05, 2013

Machine morals?

Most geeks know of Asimov's three laws for robots.

Isaac Asimov's "Three Laws of Robotics"

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
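The laws amount to a strict priority ordering: the First Law outranks the Second, which outranks the Third. As a minimal sketch (my own illustration, not anything from Asimov or the podcast; the Action fields and example names are hypothetical), you could caricature that ordering as a lexicographic preference over candidate actions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def choose(candidates: list[Action]) -> Action:
    # Compare violations in law order; False sorts before True,
    # so "don't harm a human" always outweighs "obey an order",
    # which outweighs "protect yourself".
    return min(candidates,
               key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

# A robot ordered to do something that would injure a human refuses the order:
best = choose([
    Action("obey the harmful order", harms_human=True,
           disobeys_order=False, endangers_self=False),
    Action("refuse the order", harms_human=False,
           disobeys_order=True, endangers_self=False),
])
print(best.name)  # -> "refuse the order"
```

Of course, the hard part is the bit the sketch takes for granted: deciding whether a real-world action "harms a human" in the first place.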

But an ABC Australia podcast discusses how we may need to build morality into robots.
As machines become smarter and more autonomous, they are bound to end up making life-or-death decisions in unpredictable situations. And that will present them with ethical dilemmas. Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? So do we need morals for machines, to enable them to make such choices appropriately, in other words, to tell right from wrong?
