What artificial intelligence can tell us about morality

What does Judaism say about morals, and how does this intersect with ever-changing technology?

By JONATHAN L. MILEVSKY
December 16, 2017 21:33
A Lynx robot with Amazon Alexa integration on display in Las Vegas. (photo credit: REUTERS)


In the 11th century, the brilliant Islamic thinker Avicenna devised a thought experiment: a person floating in the air, cut off from all sensory input. Since even a person in that predicament could arrive at knowledge of the self, Avicenna argued, the soul must exist independently of the body. The ongoing work on artificial intelligence may soon present us with the opportunity to realize a version of this experiment.

And if so, the coming years will raise some interesting questions for both moral theorists and specialists in Halacha (Jewish law).

Assuming there are no prior conditions limiting how such programs relate to human beings – conditions made famous by the science fiction writer Isaac Asimov – the goal of this experiment would be to determine whether a program could arrive at moral maxims on its own. Ideally, the investigation would begin once the computer starts demonstrating signs of self-awareness. At that point it would be crucial to gain insight into the computer’s thought process, with an eye toward answering a number of key questions: Would it assume there may be others of its kind? If so, how would it treat those other beings? Would it arrive at a notion of equality, or would it expect preferential treatment? The answers would offer insight into whether morality rests on universal moral truths or merely on social conventions.

A positive answer to those questions would lend credence to the theory that morality is an inherent component of life, just as a negative answer would cast doubt upon it. Of course, it is possible that the computer would simply be acting in its own best interest. It would therefore be ideal to have a way of recording every step of its thought process, not unlike the way a chess program lists the moves it considered before settling on its choice. From that record, it could be determined whether the program eschews violence merely as part of a Hobbesian bargain – a practical decision to restrain itself because others might lash out in kind – or whether there is a deeper ground for its acts of kindness.

On a more fundamental level, there is also the possibility that the program would arrive at morality immediately. In his book Difficult Freedom, the French-Jewish philosopher Emmanuel Levinas wrote that moral consciousness is the “experience of the other,” and that this experience is not epiphenomenal but the very condition of consciousness. That is to say, the awareness of other human beings is the foundation of human consciousness, and within that awareness lies a responsibility toward the other. On that view, it would follow that in merely being conscious, the program could arrive at the notion of a responsibility toward other beings. One result of such a discovery is that, instead of worrying about pre-programming responses to the moral dilemmas the machine might face – most famously, whether, on a collision course with five human beings, it ought to swerve and hit one person instead of continuing and hitting five – we could be confident that a genuinely ethical program would be trustworthy enough to make that decision on its own.

This type of research will also raise some interesting questions for Jewish law. These include not only the moral quandary described above – on which Jewish law generally leans toward the position that one should stay on course rather than actively cause harm to another – but also the question of the program’s status for the purposes of torts. It is doubtful, for example, that Halachah could grant the program the status of a human being. Already in the 17th century, Rabbi Zvi Ashkenazi addressed the question of whether a golem (an animate being created from inanimate matter) can join a minyan, the quorum of ten required for communal prayer (he ruled against it). But neither can the program be given the status of an “ox,” such that any damage it causes would be judged by whether the damage was usual for that type of program and whether it had already demonstrated a destructive pattern. After all, this program is not an automaton. The answers to these questions will require some halachic ingenuity.

The writer holds a PhD in Religious Studies from McMaster University.
