
Philosopher on AI: "I’m Concerned About More Powerful Technology We Don’t Understand"

The breakthroughs in artificial intelligence are coming so fast that it can sometimes be difficult to keep up. That’s what philosopher and associate professor Rune Nyrup says. He researches ethical issues with a focus on artificial intelligence at the Department of Mathematics. Fortunately, however, many people are aware of the challenges, and that is "super positive," he says.

Rune Nyrup is a philosopher and associate professor at the Department of Mathematics and a researcher in artificial intelligence and ethics. Photo: Roar Lava Paaske

NEW YEAR'S SERIES

At the turn of the year, Omnibus asked a number of AU researchers across faculties what they considered to be the most remarkable developments in their field in 2025. And we’ve spoken with them about their hopes and fears as they look ahead to 2026. 

In this section, you’ll meet Rune Nyrup, a philosopher and associate professor at the Department of Mathematics, where he researches artificial intelligence and ethics.

Artificial intelligence has suddenly become part of everyday life and working life for many of us, and developments in this area show no signs of slowing down. Rune Nyrup, philosopher and associate professor at the Department of Mathematics, teaches philosophy of science and ethics to computer science students and researches ethical and epistemological issues with a focus on artificial intelligence. In this section of Omnibus' New Year’s series, you can read about the most noteworthy events of a whirlwind year in his field.

What are your current interests? 

"AI is a very big topic right now, and many different things are happening at the moment that I am interested in. But what concerns me most is what is known as opacity or lack of transparency.


When you have a highly complex AI system that has been trained on a large amount of data, and the system has found patterns in that data and encoded them into a model, it can suddenly become very difficult for humans to understand how that model works. If we don’t understand how the models work, it becomes hard to know when we can trust them, and hard to predict when they will work and when they won’t."

What is the most remarkable thing that has happened within your field this year?

"It's probably the language models. They’re developing at full speed, and some very powerful models are starting to appear. Several of my colleagues in computer science now use language models as coding partners: they have the model produce a draft and then correct the code afterwards. And some of my colleagues at the Department of Mathematics, who are interested in AI for mathematics, are beginning to see models that could potentially solve research problems in mathematics that mathematicians themselves haven’t been able to solve. The prediction is that we will see examples of this published over the course of the next year. It's one thing to apply AI at a low level to automate processes we actually know how to do manually, so to speak. It's quite another to have systems that can solve problems at the level of fundamental research, problems we didn’t know how to solve before.

This raises the question of whether it could undermine human understanding. If there are algorithms that can solve mathematical problems at research level, then they can certainly also solve the kinds of problems one encounters when learning mathematics. It suddenly becomes a question of how we teach our university students, upper secondary school students, and primary and lower secondary school pupils to use AI without it preventing them from developing their own skills. Many teachers are scratching their heads because they don't know how to solve this challenge. To my knowledge, no one has come up with any particularly good solutions yet."

What worries you most when you look ahead to 2026?

"I am concerned that there will be even more breakthroughs. The amounts being invested in this technology are truly enormous, and it would be surprising if they didn’t yield any return. But I worry that there will suddenly be a whole host of new breakthroughs that we haven't even begun to think about, because we are still dealing with the last generation of AI. It’s difficult to put into words exactly what such a breakthrough could be, but I am concerned that we are acquiring increasingly powerful technology that we don’t really understand and don’t know how to integrate into our society in a beneficial way."

What gives you reason for hope and optimism?

"It’s encouraging that there are many people within the technical sciences and technology who are aware of these challenges and concerned about them. In the history of science, there have been times when scientists have washed their hands of responsibility and said, ‘We just make the tools; it’s up to others to decide how they’re used.’ That is much less the case here. There are major areas of research within computer science that deal with how to design AI in a safe way that we can understand. I experience this in my own work as well. It’s not always the case that scientists want to talk to a pessimistic philosopher, but in this case they are very keen to talk and collaborate with me. That's super positive."

This text is machine translated and post-edited by Lisa Enevoldsen.