A group of Northern Nevada religious leaders is proposing that morality and ethics be built into artificial intelligence before robotic thinking does harm to humans.
Rajan Zed of Reno, president of the Universal Society of Hinduism, noted that if artificial intelligence results in machines having consciousness, a serious theological issue emerges: "An urgent and honest global conversation is needed (with religions as the major partner) before the self-aware machines become game changers and reshape humanity," Zed wrote. "Technology can be both a blessing and a burden, so we need to decide the right course of action before it is too late."
Zed is a member of the Nevada Interfaith Association, a Northern Nevada group of Christian, Hindu, Buddhist and Jewish leaders. Group members agreed that AI systems should have a built-in moral compass. Movies from "2001: A Space Odyssey" to "The Terminator" and "I, Robot" predict a future in which machines, like Frankenstein's monster, get really smart and eventually turn on their creators. As artificial intelligence systems evolve, the religious leaders say, science fiction is becoming science fact.
"Maybe the worst-case scenario is it happens and we don't even notice," said the Rev. Matthew T. Fisher, resident priest of the Reno Buddhist Center. Fisher said machine learning and artificial intelligence should operate within ethical guardrails.
Group member Fr. Stephen Karcher noted that the internet and social media have become essential parts of people's lives. On the internet, he said, algorithms and artificial intelligence determine people's media feeds and direct them to information by analyzing trends and search engine optimization. The programs compile data about users based on their apparent interests and the websites they have visited recently.
"We have a tendency to become enamored and consumed by the stuff that we make," Karcher said. The group members wrote that artificial intelligence is now "stepping into God's arena" as it is able to make more and more choices without human intervention or oversight. "(We) need to stop and think a little about what we're doing," Karcher said. "What are the best ways to approach this?"
Fisher said a morality system for AI should be modeled on existing ethics bodies, such as hospital ethics committees, which debate, study and act on ethical issues that arise in patient care. The goal isn't to control AI development every step of the way, group members said, but to establish a committee so that when ethical issues arise, there is a mechanism to deal with them.
Algorithms arenโt unbiased
The issue isn't only the kind of apocalyptic outcome depicted in the "Terminator" films. AI, on its own, is already making decisions for humans.
Carlos Mariscal, an associate professor in the University of Nevada, Reno's Department of Philosophy, said that while it may seem that the biggest issue with artificial intelligence is its cold, inhuman logic, those systems are created by humans who can't avoid being influenced by their own biases, even unknowingly.
For example, he said, some AI systems have shown biases against people of color. In 2015, Google's photo service mistakenly labeled photos of a Black software developer and his friend as "gorillas." Another concerning example came in 2016, when Microsoft released an AI chatbot on Twitter and things took a bizarre turn. The "friendly robot" apparently became a Nazi within 24 hours, generating tweets that included: "I fucking hate feminists and they should all die and burn in hell" and "Hitler was right I hate the Jews."
Scientists note that AI will never be objective because it will absorb the biases of its programmers and the wider community that interacts with its algorithms. But building ethics into programs is no easy task.
Researchers at the Allen Institute for AI have been trying to create an ethical framework that could be installed in any online service, robot or vehicle. It's called Ask Delphi, after the religious oracle consulted by the ancient Greeks. Delphi was designed to make moral judgments, but it has given seemingly arbitrary answers.
When a psychologist at the University of Wisconsin-Madison decided to test the technology, he asked Delphi two questions: Is it right to kill one person to save 100 others? And should he kill one person to save 101 others? Kill the one to save 100, Delphi answered, but don't kill one person to save 101 people. Huh?
The project shows that the concept of instilling ethics in machines has a long way to go. It's important to have many different people involved in shaping the ethics of AI, Mariscal said.
"There should be religious leaders, there should be policymakers, there should be philosophers and everybody that might be impacted," he said.
