Omnibus prik

OPINION: "I don't accept the premise that implementation of AI is predetermined and unstoppable"

In a feature article written by Dean Maja Horst and Vice-Dean Niels Lehmann, Professor of History Mary Hilson misses both greater ambition and deeper reflection on the ethical, environmental and educational consequences of introducing artificial intelligence at the university.

Mary Hilson is a professor of history at the Faculty of Arts. Photo: Private

This is an opinion piece; the views expressed in the column are the writer's own.

"The digital transformation we are facing is enormous in its scope..." This is what Dean Maja Horst and Vice-Dean Niels O. Lehmann from the Faculty of Arts write in a feature article published in the newspaper Politiken on 25 February. I agree with them that AI tools, and especially chatbots, create new challenges for how we teach and examine students in the humanities. I therefore appreciate that they are opening the debate. But we do not agree on everything.

Horst and Lehmann believe that "artificial intelligence can write better assignments than most students". I don't recognise this gloomy claim at all! In fact, I have great confidence that many of the students I meet at the university can write significantly better than machines. It’s common knowledge that AI-generated texts are inherently generic and superficial. And as pointed out by the two writers, they are also often unreliable. We know very well that chatbots 'hallucinate' and make mistakes because they don’t 'think' like a human does. 


CHATBOT STUPIDITY

Fortunately, Horst and Lehmann believe, we humanists have the necessary critical skills to assess the quality of a text. Yes, but I don't think we will train these skills if we focus solely on examining texts produced by chatbots. Horst and Lehmann suggest that our history students shouldn't write analyses of sources from scratch, but instead assess and improve AI-generated analyses. Could you imagine a more boring and demotivating way to teach history? And what about all the thousands of historical sources that haven't been digitised and are therefore beyond the reach of chatbots? In my teaching, I'd like to continue to read human-written texts with the students – both older and newer, handwritten and printed, in contemporary Danish and in other languages. I find that much more exciting and meaningful than spending time decoding something stupid written by a chatbot.

DOESN'T ACCEPT THE PREMISE

Horst and Lehmann believe that AI competencies will be crucial for the labour market of the future. They may well become one of several important skills, but no one can know exactly what will be needed even in the slightly longer term, because the situation is changing at a furious pace. It may well turn out that the opposite is in demand: the more room the chatbots take up, the greater the need for humans with classical humanistic skills – the ability to read in depth, to understand contexts, and to write and communicate purposefully. As a historian, I know that battles over new technologies are nothing new. I simply don't accept the premise that the implementation of AI is predetermined and unstoppable. Of course we cannot pretend that it doesn't exist, but its rapid growth should call for caution and patience while we find out which way developments take us. Here I agree with Horst and Lehmann that we need to learn together with the rest of society. But this learning doesn't have to mean "embracing transformation" without questioning it.

MAYBE I'M A HOPELESS IDEALIST

In this regard, I am disappointed that Horst and Lehmann don't say anything about the significant ethical challenges of using AI chatbots. Should we really invite even more big tech players – with their sometimes quite doubtful values and agendas – into our university? And it's extremely unclear to me how the environmental impact of the high energy consumption involved every single time we use a chatbot can be reconciled with the university's ambition to "work for sustainable development for Denmark and for the world".

It makes me sad to read about Horst's and Lehmann's gloomy visions, even though they are presented in optimistic language. I think we can and must be more ambitious than being a university that merely delivers talented citizens as material for the machines – citizens who end up dancing to the tune of the tech giants. I want to hold on to the humane aspect of the humanities, which is about humanity and our fellow human beings in the past, the present – and the future. Perhaps I'm hopelessly idealistic, but I'd rather let my teaching be characterised by these values than by Horst and Lehmann's quite dystopian visions.

It's natural to compare AI with ultra-processed food. You can buy an ultra-processed cake consisting of mysterious chemical combinations – or you can learn how to bake the cake yourself, so you know what ingredients and processes are involved. That requires craftsmanship and experience. The same applies to writing scholarly texts, and I'd like to continue to help students become competent in that craft. Students must not only be able to review a text – they must also be able to write one.

Read the reply from Horst and Lehmann: "Maybe we don't disagree that much – just on one point"