Husband-and-wife researchers Michael Anderson (University of Hartford) and Susan Anderson (University of Connecticut) are working on machine ethics, a field that combines their respective expertise in computer science and philosophy. It’s a relatively new field (the Andersons organized the first international symposium on machine ethics in 2005), but one that will have a tremendous impact on social humanoid robots moving forward. Artificial agents that work closely with people, especially robots, will face a dizzying number of scenarios made all the more complicated by their inherent moral ambiguity. Recently, scientist and author Sam Harris suggested in his book “The Moral Landscape” that science should help us determine what is moral, a controversial notion in a debate dominated by philosophy and religion for millennia.
For now, the researchers are using a machine learning algorithm to determine a robot’s course of action when reminding someone to take their medication. In Japan, NEC’s communication robot PaPeRo has already undergone field trials with hospital patients performing this sort of role. The Andersons are using Aldebaran Robotics’ NAO as their platform, a role that could later be filled by Romeo, a full-scale humanoid being purpose-built for just such tasks. The robot brings the medication to the patient and reminds them to take it. To be effective, it needs to know when to respect the patient’s personal autonomy if they refuse, and when to contact a human caregiver or doctor to step in if the patient is in danger.
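To make the trade-off concrete, here is a toy sketch of the kind of decision the robot faces. The duty names (preventing harm, doing good, respecting autonomy) come from the bioethics vocabulary the machine-ethics literature draws on, but the scales, thresholds, and function names below are purely illustrative assumptions, not the Andersons’ actual learned principle:

```python
# Illustrative sketch only: the Situation fields, the 0-2 scales, and the
# thresholds are invented for this example, not taken from the Andersons' work.
from dataclasses import dataclass

@dataclass
class Situation:
    harm_if_skipped: int   # 0 = none .. 2 = serious harm from a missed dose
    benefit_of_dose: int   # 0 = none .. 2 = large benefit from taking it
    patient_refused: bool  # has the patient already declined the reminder?

def decide(s: Situation) -> str:
    """Return 'remind', 'accept_refusal', or 'notify_caregiver'."""
    if not s.patient_refused:
        return "remind"
    # The patient refused: weigh the duty to prevent harm against autonomy.
    if s.harm_if_skipped >= 2:
        return "notify_caregiver"  # danger justifies overriding autonomy
    if s.benefit_of_dose >= 2 and s.harm_if_skipped >= 1:
        return "notify_caregiver"
    return "accept_refusal"        # low stakes: respect the refusal

# A refusal with serious harm at stake escalates to a human caregiver.
print(decide(Situation(harm_if_skipped=2, benefit_of_dose=2,
                       patient_refused=True)))  # → notify_caregiver
```

The interesting part is not the arithmetic but the structure: the robot only escalates to a human when the stakes cross a threshold, rather than either always deferring to the patient or always overriding them.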
Susan Anderson believes that ethical robots will not only expedite society’s acceptance of partner robots, but might also reinforce ethical behavior in people. Some have suggested that a hospital ward run by robots would be nothing short of nightmarish, yet all too often we hear of extreme cases of abuse or neglect at nursing homes by human staff. I would argue that it is also possible that people will become frustrated by a robot’s seemingly perfect behavior and begin to associate ethical decision-making with the artificial. It’s a murky subject, but certainly food for thought.