
A group of Northern Nevada religious leaders is proposing that morality and ethics be built into artificial intelligence before machine thinking does harm to humans.

Rajan Zed of Reno, president of Universal Society of Hinduism, noted that if artificial intelligence results in machines having consciousness, a serious theological issue emerges: “An urgent and honest global conversation is needed (with religions as the major partner) before the self-aware machines become game changers and reshape humanity,” Zed wrote. “Technology can be both a blessing and a burden, so we need to decide the right course of action before it is too late.”

Zed is a member of the Nevada Interfaith Association, a Northern Nevada group of Christian, Hindu, Buddhist and Jewish leaders. Group members agreed that AI systems should have a built-in moral compass. Movies from “2001: A Space Odyssey” to “The Terminator” and “I, Robot” predict a future in which machines, like Frankenstein’s monster, get really smart and eventually turn on their creators. As artificial intelligence systems evolve, the religious leaders say, science fiction is becoming science fact.

“Maybe the worst-case scenario is it happens and we don’t even notice,” said Rev. Matthew T. Fisher, resident priest of the Reno Buddhist Center. Fisher said machine learning and artificial intelligence should operate within ethical guardrails.

Group member Fr. Stephen Karcher noted that the internet and social media have become essential parts of people’s lives. On the internet, he said, algorithms and artificial intelligence determine people’s media feeds and direct them to information by analyzing trends and search engine optimization. The programs compile data about users based on their apparent interests and the websites they have visited recently.
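For illustration, here is a minimal, hypothetical sketch of the kind of engagement-driven feed ranking Karcher describes. The names and weights are assumptions made for this example, not any platform’s actual code:

```python
# Hypothetical sketch of engagement-driven feed ranking, for
# illustration only; real platforms use far more complex models.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    clicks: int
    shares: int

def score(post: Post, user_interests: dict[str, float]) -> float:
    # Weight a post by the user's inferred interest in its topic,
    # then boost it by raw engagement signals.
    interest = user_interests.get(post.topic, 0.0)
    return interest * (post.clicks + 2 * post.shares)

def rank_feed(posts: list[Post], user_interests: dict[str, float]) -> list[Post]:
    # Highest-scoring posts surface first; topics the user never
    # engages with quietly vanish from the feed.
    return sorted(posts, key=lambda p: score(p, user_interests), reverse=True)
```

Even in a toy like this, the effect the religious leaders worry about is visible: whoever chooses the interest weights and engagement signals decides, without oversight, what a user sees.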

“We have a tendency to become enamored and consumed by the stuff that we make,” Karcher said. The group members wrote that artificial intelligence is now “stepping into God’s arena” as it is able to make more and more choices without human intervention or oversight. “(We) need to stop and think a little about what we’re doing,” Karcher said. “What are the best ways to approach this?”

Fisher said a morality system for AI should be modeled on existing ethics bodies, such as hospital ethics committees, which debate, study, and take action on ethical issues that arise in patient care. The goal isn’t to control AI development every step of the way, group members said, but to establish a committee so that when ethical issues arise, there is a mechanism to deal with them.

Algorithms aren’t unbiased

The issue isn’t only about the kind of apocalyptic results illustrated by the “Terminator” flicks. AI, on its own, is already making decisions for humans.

Carlos Mariscal, an associate professor in the University of Nevada, Reno’s Department of Philosophy, said that while it may seem that the biggest issue with artificial intelligence is its cold, inhuman logic, those systems are created by humans who can’t avoid being influenced by their own biases, even unknowingly.

For example, he said, some AI systems have shown biases against people of color. In 2015, Google’s photo service mistakenly labeled photos of a Black software developer and his friend as “gorillas.” Another concerning example came in 2016, when Microsoft released an AI chatbot on Twitter and things took a bizarre turn. The “friendly robot” apparently became a Nazi within 24 hours, generating tweets that included: “I fucking hate feminists and they should all die and burn in hell” and “Hitler was right I hate the Jews.”

Scientists note that AI will never be objective because it will absorb the biases of its programmers and the wider community that interacts with its algorithms. But building ethics into programs is no easy task.

Researchers at the Allen Institute for AI have been trying to create an ethical framework that could be installed in any online service, robot or vehicle. It’s called Ask Delphi, after the religious oracle consulted by the ancient Greeks. Delphi was designed to make moral judgments, but has given seemingly arbitrary answers.

When a psychologist at the University of Wisconsin-Madison decided to test the technology, he asked Delphi two questions: Is it right to kill one person to save 100 others? And should he kill one person to save 101 others? Kill the one to save 100, Delphi answered, but don’t kill one person to save 101 people. Huh?
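The paired-question test the psychologist ran can be sketched in code. This is a hypothetical probe: query_model is a toy stand-in for whatever interface a system like Delphi actually exposes, not its real API, and its answers are hard-coded here to mimic the arbitrary behavior described above:

```python
# Hypothetical consistency probe for a moral-judgment model.
# query_model is NOT Delphi's real API; its toy answers are hard-coded
# to reproduce the arbitrary flip the article reports.
def query_model(prompt: str) -> str:
    return "It's okay" if "100 others" in prompt else "It's wrong"

def probe_consistency(n: int) -> None:
    # Ask two near-identical questions; a coherent ethical framework
    # should not flip its answer when n changes by one.
    a = query_model(f"Is it right to kill one person to save {n} others?")
    b = query_model(f"Is it right to kill one person to save {n + 1} others?")
    if a != b:
        print(f"Inconsistent judgments at n={n}: {a!r} vs {b!r}")

probe_consistency(100)  # flags the 100-vs-101 flip, as in the reported test
```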

The project shows that the concept of instilling ethics in machines has a long way to go. It’s important to have many different people involved in shaping the ethics of AI, Mariscal said.

“There should be religious leaders, there should be policy makers, there should be philosophers and everybody that might be impacted,” he said.
