
TED EGE COLLEGE / 8-A

Robots, Ethics and Asimov's I, Robot

Nisan GÜNGEN

Asimov's Three Laws of Robotics, the rules that govern how robots are built and how they behave in his imagined future, give his readers a reassuring picture of the world to come. Yet we still find ourselves in an atmosphere where nothing about artificial intelligence feels certain or safe. Let me remind you of the rules:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I think these laws are important precisely because we assume we are all familiar with them, and so we hardly stop to ask what they really mean. In Asimov's story "Evidence", Susan Calvin observes that the Three Laws of Robotics are "the essential guiding principles of many of the world's ethical systems." With these rules we are assured that robots are not harmful but good for all people in the world.

But there is something about the Laws that almost everyone gets wrong: people think of the Three Laws as software that is simply programmed into a robot's brain, so that you could program the Laws and get a good robot, or leave them out and get an evil robot. Yet look at how Calvin and Peter Bogert discuss the issue in "Little Lost Robot": if you modify the Three Laws, you are left with "complete instability, with no nonimaginary solutions" to the equations that describe an artificial mind. The Laws are not just programs; they are a necessary part of how an artificial brain is built. A positronic brain cannot be constructed without them. So if you leave the Laws out, you do not get an evil but intelligent robot; you get a crazy robot, or just a pile of metal.

So in Asimov's robot stories, the Three Laws are not merely a guarantee that the robots are good. They suggest a connection between goodness and stability or sanity, or even between goodness and intelligence. That is, it is impossible to be truly intelligent unless you are truly good.