OpenAI Funds Research on Moral AI at Duke University

OpenAI, a non-profit organization, is supporting a research project at Duke University in the USA. The project, titled “Research AI Morality”, is part of a larger grant from the AI company to the university: OpenAI is funding a professorship focused on the development of moral AI with one million US dollars over three years, concluding in 2025.

According to TechCrunch, little is known about the research. The project's lead researcher, Walter Sinnott-Armstrong, an ethics professor at Duke University, reportedly stated that he is not allowed to discuss the work. Together with researchers Jana Schaich Borg and Vincent Conitzer, he has edited a book on how AI can serve as a moral compass and help people make better decisions. This includes algorithmic decisions, such as determining who should receive a donated kidney. The researchers also examine differences between China and the USA in how AI is used, particularly with regard to moral aspects. This information comes from a press release by the university, which also announced OpenAI's investment.

Jana Schaich Borg describes herself on her official website as an expert in social cognition, empathy, and moral AI. Her work aims to discover how AI can act in alignment with human values. A quote from Ruth Chang suggests that without this alignment, AI may not be of much use: “If we cannot make AI respect human values, then the next best thing is to truly accept that AI might be of limited use to us.”

The research funded by OpenAI also aims to explore how moral decisions can be predicted in fields such as medicine, law, and business.

Current generative AI can make decisions within a narrow framework, but it is also highly susceptible to misuse. Such systems make decisions based on past human decisions, learning from them and deriving probabilities. AI models are not yet robust enough to rule out misuse entirely: numerous attack scenarios exist, and ultimately humans remain responsible for decisions. In their book, Schaich Borg, Sinnott-Armstrong, and Conitzer also address how to make AI safe and fair.
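The idea of deriving probabilities from past human decisions can be illustrated with a minimal toy sketch. This is purely hypothetical and is not the method used by OpenAI or the Duke researchers: it simply counts how often past (invented) cases with the same features were judged a certain way and turns those counts into probabilities.

```python
from collections import Counter

# Hypothetical past human judgments: (scenario, judgment) pairs.
# The scenarios and labels are invented for illustration only.
past_judgments = [
    ("break promise to help stranger", "wrong"),
    ("break promise to help stranger", "wrong"),
    ("break promise to save a life", "permissible"),
    ("break promise to save a life", "permissible"),
    ("break promise to save a life", "wrong"),
]

def judgment_probabilities(scenario):
    """Estimate a probability for each judgment from past matching cases."""
    counts = Counter(j for s, j in past_judgments if s == scenario)
    total = sum(counts.values())
    return {judgment: count / total for judgment, count in counts.items()}

probs = judgment_probabilities("break promise to save a life")
# probs == {"permissible": 2/3, "wrong": 1/3}
```

Real systems learn far richer statistical patterns from vastly more data, but the underlying principle is the same: the model's output reflects the distribution of past human decisions, which is also why biases in that data carry over.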

Defining morality precisely enough for an AI to make decisions within those boundaries is no simple task. Philosophers have debated for centuries what is moral or ethically correct.