REPLY: Maybe we don't disagree that much - just on one point
Dean Maja Horst and Vice-Dean Niels Lehmann respond to criticism in an opinion piece written by Professor of History Mary Hilson.
In a debate, it’s often rewarding to sharpen viewpoints. The risk, however, is that you end up framing things so starkly that you create more distance between the positions than actually exists. With our column about Generative AI (GAI) in Politiken, we may have been guilty of this.
Mary Hilson's opinion piece suggests that some of our formulations may give rise to misunderstandings, which it is important for us to correct. To us, the disagreement seems far smaller than Hilson considers it to be. As far as we can see, it can be reduced to a single point. We will return to that, but first, we'll highlight what we do not disagree on.
OPINION: "I don't accept the premise that implementation of AI is predetermined and unstoppable"
A specific task
Hilson claims that we’ve suggested that “history students shouldn’t do analyses of sources from scratch.” We do not recognise this characterisation, because we couldn’t dream of preventing teachers from “reading human-written texts with the students”. Of course, classical hermeneutic studies must still be conducted in a humanities subject. It’s in no way about replacing humanistic skills with digital ones, but rather about helping students acquire both.
This is no easy task, as we must invent new ways to conduct in-depth studies in a time of pressure, when the temptation to hand them over to artificial intelligence is great. If we succeed in striking this balance, however, it may be that we humans become better at hermeneutic work. Not only do we consider it a binding task to ensure the relevance of our graduates in a labour market where AI is gaining an ever stronger foothold; GAI also holds enormous potential for improving learning processes.
Critical thinking and digital competencies
Hilson rightly points to a number of risks associated with GAI, including ethical challenges, sustainability, and the excessive power of tech giants. We too can easily spot pitfalls. To Hilson's list one should add, for example, the risk that democratic dialogue is undermined.
Therefore, in the article, we emphasise the importance of critical thinking. In an AI era, it’s more important than ever, because we will be flooded with AI-generated communication. If we understand Hilson correctly, she sees us as pessimistic technology determinists who understand the “implementation of AI” as “predetermined and unstoppable.”
Indeed, we don’t believe that GAI can be rolled back. No matter how critical one is of the implementation of AI in all sectors of society, we believe that GAI is here to stay. But for us, it's not about just sitting back and letting technology take over. Quite the contrary. It’s important to work on how to integrate GAI so that we get the most out of it and avoid as many negative impacts as possible. The way to do that is to interact with the technology, use it, and reflect on its effects. We don’t believe that either students or teachers can practice critical, reflective use without having acquired a broad set of digital competencies.
Here we disagree
We’ve now reached the point on which we truly seem to disagree: how transformative AI will be for society as a whole, and for the education system more specifically. Hilson finds that we overestimate the quality of the outputs GAI can deliver, and she doesn't buy the premise that the technology has become unstoppable.
Here we look at the matter quite differently. We are regularly surprised by how advanced GAI actually is and how high the quality of its outputs can be, at least when you make an effort with your prompts. A few years ago, it was probably true, as Hilson claims, “that AI-generated texts are inherently generic and superficial.” But today, AI is a basic technology that will soon be found in almost all the technologies we use in everyday life.
Therefore, we believe that we are failing our students if we don’t prepare them to work with GAI. The technology has already made its way into the job market, and many of the tasks that our students previously did in their first jobs are well on their way to being transferred to GAI. Our graduates must therefore gain the skills that enable them to compete with GAI and to collaborate with the technology by contributing what only humans can deliver.
We must find the solutions together
Like Hilson, we would like to be able to hurry slowly, while also dealing with all the other intricate issues that late modernity confronts us with. Therefore, in this article, we ask the public to understand that it takes time to work on the problems. But if the future is one in which GAI can create products at (at least) the level we can produce ourselves, that future seems to be now.
There are no simple answers to these challenges. And the answers hardly look the same across the Faculty of Arts' very wide range of subjects. In the feature article, we offer only some relatively general recommendations. There’s a huge need for us to find solutions together – both those that can be used across disciplines and those that are entirely specific to individual disciplines. The latter require disciplinary insight and are therefore best found in local subject groups.
We’ve tried to get the ball rolling. Thank you, Hilson, for picking it up and giving us the chance to clarify our starting points.
This text is machine translated and post-edited by Lisa Enevoldsen.