Isaac Asimov's "Three Laws of Robotics"
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
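What makes the laws interesting as a design pattern is that they form a strict priority ordering: the First Law always outranks the Second, which outranks the Third. Below is a minimal, purely illustrative Python sketch of that ordering; the `Assessment` fields, `violation_rank`, and `choose_action` are hypothetical names invented here, not anything from Asimov's stories or the podcast.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    """Hypothetical judgement of one candidate action against the Three Laws."""
    harms_human: bool                    # First Law, active clause
    allows_harm_through_inaction: bool   # First Law, inaction clause
    disobeys_order: bool                 # Second Law
    endangers_self: bool                 # Third Law

def violation_rank(a: Assessment) -> tuple:
    """Lower tuples are better; earlier positions dominate later ones,
    mirroring the First > Second > Third Law ordering."""
    first_law = a.harms_human or a.allows_harm_through_inaction
    return (first_law, a.disobeys_order, a.endangers_self)

def choose_action(candidates: dict) -> str:
    """Pick the candidate whose assessment violates the highest-priority law the least."""
    return min(candidates, key=lambda name: violation_rank(candidates[name]))

# Toy example: obeying an order that would hurt someone loses to refusing the
# order, because the First Law outranks the Second.
options = {
    "obey_order": Assessment(True, False, False, False),
    "refuse_order": Assessment(False, False, True, False),
}
print(choose_action(options))  # -> "refuse_order"
```

Of course, the hard part in practice is filling in those boolean judgements, which is exactly where the ethics comes in.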
But an ABC Australia podcast discusses why we need to put morality into real robots.
As machines become smarter and more autonomous, they are bound to end up making life-or-death decisions in unpredictable situations, and that will present them with ethical dilemmas. Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Do we therefore need morality for machines, so that they can make such choices appropriately and tell right from wrong?