Thursday, 27 August 2009

Apparently, this may become real sometime soon. Scientists at the Universidade Nova de Lisboa, in Portugal, and the Universitas Indonesia, in Indonesia, are researching artificial intelligence and the application of computational logic to create machines capable of making decisions. The researchers are investigating prospective logic as a way to program morality into a computer. They say prospective logic can model a moral dilemma and determine the logical outcomes of every possible decision, potentially allowing for machine ethics.

Machine ethics would enable designers to build fully autonomous machines that can be programmed to make judgments based on a human moral foundation. The researchers say it could also help psychologists and cognitive scientists find new ways of understanding moral reasoning in people, or extract fundamental principles from complex situations to help people decide what is right and wrong.

The researchers have already developed a system capable of working through the "trolley problem," an ethical dilemma proposed by British philosopher Philippa Foot in the 1960s: a runaway trolley is about to hit five people tied to the track, but the subject can hit a switch that will send the trolley onto another track where only one person is tied down. The prospective logic program considers each possible outcome under different scenarios and demonstrates logically what the consequences of its decisions might be. The next step is to give each outcome a moral weight, so that a prototype can make the best judgment about whether to flip the switch.
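To make that last step concrete, here is a minimal Python sketch of weighing trolley-problem outcomes. The decision names, outcome attributes, and numeric weights are invented for illustration; the researchers' actual system is built on computational logic, not a procedural script like this.

```python
# Hypothetical sketch of weighing trolley-problem outcomes.
# All names and weights are illustrative assumptions, not the
# researchers' actual prospective logic program.

DECISIONS = {
    "do_nothing":  {"deaths": 5, "agent_intervened": False},
    "flip_switch": {"deaths": 1, "agent_intervened": True},
}

def moral_weight(outcome):
    """Assign a (purely illustrative) moral cost to an outcome."""
    cost = outcome["deaths"] * 10        # each death is heavily penalised
    if outcome["agent_intervened"]:
        cost += 1                        # small penalty for actively causing harm
    return cost

def best_decision(decisions):
    """Enumerate every possible decision and pick the lowest moral cost."""
    return min(decisions, key=lambda d: moral_weight(decisions[d]))

for name, outcome in DECISIONS.items():
    print(name, "-> moral cost", moral_weight(outcome))
print("chosen:", best_decision(DECISIONS))
```

Under these made-up weights the program flips the switch; change the weights and the judgment changes with them, which is exactly why assigning the moral weights is the hard part.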
Read more here
Tuesday, 4 August 2009
Responsible Robots? How about responsible humans first?
They say charity begins at home, and this should be the case in the world of robotics as well, at least according to two researchers who envision a new set of laws different from what Asimov gave us. Ohio State University professor David Woods and Texas A&M University professor Robin Murphy have drafted three laws of responsible robotics that they claim are more realistic and applicable to the future than science fiction writer Isaac Asimov's laws. Woods believes Asimov intended the laws as a literary device and never expected engineers to design robots that follow them to the letter. "We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways," he says.

Woods and Murphy's three laws place responsibility on the shoulders of human beings rather than robots, and target the human organizations that develop and deploy robots; the researchers sought ways to guarantee high safety standards. The first law decrees that humans may not deploy a robot unless the human-robot work system meets the highest legal and professional standards of safety and ethics. The second law dictates that a robot must respond to humans as appropriate for their roles. The third law requires that a robot be given enough situated autonomy to protect its own existence, provided such protection allows a smooth transfer of control and does not conflict with the first two laws. "You don't want a robot to drive off a ledge, for instance--unless a human needs the robot to drive off the ledge," Woods says. "When those situations happen, you need to have smooth transfer of control from the robot to the appropriate human."
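The third law's "smooth transfer of control" can be sketched in a few lines of Python. Everything here, the class, the method names, the string actions, is an invented illustration of the idea, not part of any system Woods and Murphy describe.

```python
# Illustrative sketch of the third law of responsible robotics:
# the robot protects itself only while no human has taken control.
# All names are assumptions made for this example.

class Robot:
    def __init__(self):
        self.human_in_control = False

    def request_human_control(self):
        """A human operator takes over; self-protection yields."""
        self.human_in_control = True

    def next_action(self, hazard_ahead):
        if self.human_in_control:
            return "defer to human command"   # smooth transfer of control
        if hazard_ahead:
            return "stop before ledge"        # situated autonomy: self-protection
        return "continue task"

robot = Robot()
print(robot.next_action(hazard_ahead=True))   # stop before ledge
robot.request_human_control()
print(robot.next_action(hazard_ahead=True))   # defer to human command
```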
Just to highlight this further, here are Asimov's laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
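Note the strict precedence built into these laws: each one yields to those before it. A toy Python encoding makes the ordering explicit; the action attributes are made up for the example and gloss over everything that is actually hard about judging "harm."

```python
# Toy encoding of the precedence in Asimov's laws. The boolean
# attributes are invented for illustration only.

def permitted(action):
    # First Law (highest priority): never allow harm to a human.
    if action["harms_human"]:
        return False, "violates First Law"
    # Second Law: obey human orders, already screened by the First Law.
    if action["ordered_by_human"]:
        return True, "obeying order (Second Law)"
    # Third Law: otherwise, avoid self-destruction.
    if action["destroys_self"]:
        return False, "violates Third Law"
    return True, "permitted"

# An order that costs the robot its existence is still obeyed
# (Second Law outranks Third)...
print(permitted({"harms_human": False, "ordered_by_human": True, "destroys_self": True}))
# ...but no order can make it harm a human (First outranks Second).
print(permitted({"harms_human": True, "ordered_by_human": True, "destroys_self": False}))
```

Woods and Murphy's point is that this tidy hierarchy lives inside the robot, whereas in practice the decisive checks belong with the humans and organizations that build and deploy it.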
For more info see this.