Dr. Jana Schaich Borg is trying to build morality into a machine. Through her work at the Social Science Research Institute at Duke University, she has become an expert in the moral psychology, neuroscience, and data science of decision-making. She has realised that Artificial Intelligence is merely a lens: the problems we need to solve in algorithms are the problems we should be solving in society.

“My lab is trying to understand what human connection is”

herCAREER: Tell me about your research. What are the questions you want to answer?

Jana Schaich Borg: I am a neuroscientist, working closely with data scientists, and my whole career I have been trying to understand how we make social decisions, both consciously and unconsciously. Today, in my lab, we focus on two overarching goals: How could we build morality into AI systems – so they interact with society in a way that we feel is in line with our values? That also raises the question of how we, as a society, use AI in line with our values.

The other part of my research is in some ways the flip side of that. What do we need from humans – what is special in and about our interactions with each other? My lab is trying to understand: What is human connection? Because we want to understand how the development of AI technology could interfere with our human-to-human connections.

herCAREER: How is AI impacting our interactions already?

Jana Schaich Borg: This field certainly hasn’t been studied enough. But I am concerned because people are spending hours upon hours interacting with chatbots, instead of interacting with humans. We are heading towards a famine of social interaction.

But there are also areas that don’t look so bad, where human-to-machine interaction will be really helpful. For example, some people on the autistic spectrum find interacting with AI much more comfortable. They feel heard in ways they believe humans can’t hear them. There is also evidence that people with PTSD can be more comfortable disclosing things to an AI because they don’t feel judged.

herCAREER: In my understanding, an AI system will always be as biased as the data you feed into it. What are the implications for moral or fair decision-making in such a system?

Jana Schaich Borg: There will likely be all kinds of biases built into AI systems. But I think you could counteract that technically: you could use different training data, and you could correct an algorithm. The real problem is that we don’t.

But there is another, bigger problem: In order to fix a bias in the system, we have to define what we are trying to fix. There are more than 20 official definitions of fairness – it’s going to be very hard to reach a consensus on which of those definitions should be built into a system. So this is where it gets really complicated.

herCAREER: So when there is hope that with AI in recruiting or maybe loan applications, we could control biases toward women or marginalised genders…

Jana Schaich Borg: …we would have to ask ourselves: What would gender fairness look like? Are we looking for equal access? Are we looking to compensate for discrimination in the past and give women a leg up? What does fair look like? That is the hard work we have to do. In the end, AI is just a lens. It is not enough to do this work in AI, we need to ask these questions in society, too.

herCAREER: If we are not capable of reaching a consensus on fairness, on equal access, diversity and inclusion – how can we actually usefully regulate AI? I am referring to the EU AI Act and other regulations underway.

Jana Schaich Borg: I think it would be naive to think that regulation will resolve the ethical problems with AI. It is always going to be too slow and it is never going to be sufficient. But without regulation, there is no chance. If companies and AI creators don’t feel like they are going to be regulated and monitored, I think things are going to go off the rails. It absolutely needs to be a piece of the puzzle.

herCAREER: So what you’re saying is, AI is magnifying the problems of society, and only if those are addressed can we build moral AI. What is the way forward?

Jana Schaich Borg: As we are trying to navigate this future with AI, we need to care. We need to get in there. Everybody needs to get involved. But we need to try to view it from a standpoint of our own moral growth, and not as policewomen.

About Jana Schaich Borg

Dr. Jana Schaich Borg is an Associate Research Professor at the Social Science Research Institute at Duke University. She uses neuroscience, computational modeling, and new technologies to study how we make social decisions that influence or are influenced by other people. She collects data as a neuroscientist and analyses it as a data scientist in interdisciplinary teams.

Dr. Schaich Borg’s current research projects focus on developing moral artificial intelligence and understanding social bonding, empathy, and human decision-making processes.

Building on her research, she helps devise practical strategies for the ethical development of artificial intelligence. She is skilled at breaking down the implications of complex analytical problems and communicating them to broad audiences in an understandable way.
Together with Walter Sinnott-Armstrong and Vincent Conitzer, she wrote the book “Moral AI – And How We Get There”. The chapters deal with questions such as: What is AI? Is there safe AI? Can AI be fair? And: Can AI incorporate human morality?

The interview was conducted by Kristina Appel.