Zach*, a teaching assistant at a London-based Russell Group university, first heard about AI-generated material in his department in 2024.

Two fellow PhD candidates responsible for teaching undergraduates had used AI to generate most of their teaching slides. They adopted it after a friend in the corporate world sang its praises.

The rumour did not disturb Zach, who sometimes uses ChatGPT to create memes for his students. It did, however, spur him to ask the university about its AI policy.

“The policy was basically no policy. They just said please check with your department,” said Zach.

Credit: Joshua Hoehne, Unsplash

Artificial Intelligence v. Academic Indifference

Zach’s teaching experience in higher education is not unusual. Many universities have created guidelines around AI use for students, but few have offered the same guidance for their teaching staff.

A 2024 survey of 33 universities by JISC data analytics found that nearly a quarter of teachers were using AI tools. However, only 18 per cent had been offered training in AI use, and just 13 per cent had been given AI tools directly by their institution.

Dr Dan McQuillan, a leading critical AI researcher at Goldsmiths, called the trend “unsurprising” under the UK’s “AI-pilled government”.

He criticised higher education providers for embracing the hype around a technology whose workings and impact are, in his view, a “disservice” to learning.

“We’re in the midst of a giant social experiment that pivots around a technology whose inner workings are unpredictable and opaque,” he said during a seminar at the Goldsmiths Centre for Philosophy and Critical Thought.

The risks of this experiment are growing. Earlier this month, students at the University of Staffordshire expressed outrage at a lecture that had been entirely created and delivered by AI.

James, a student on the course, confronted the lecturer mid-lecture and told him: “I know these slides are AI-generated, I know that everyone in this meeting knows these slides are AI-generated… I do not want to be taught by ChatGPT.”

The University of Staffordshire defended the practice, saying that teachers could use “a variety of tools” so long as “learning outcomes” were maintained.

Caption: Anonymous complaints about AI from RateMyProfessors.com

In the United States, where academic AI use has reached as high as 40 per cent, The New York Times found faculty members using AI for everything from feedback to lectures, much to students’ displeasure.

While universities justify AI-powered teaching on the grounds of preparing students for an AI-powered workplace, lecturers like Dr Talia Hussain at Loughborough University link it to the financial pressures facing faculty members.

“Preparing a module could be really time-consuming. For every taught hour, you could be spending four or five hours designing that taught hour,” she told Raven.

That preparation time is largely unpaid: most contracts compensate for just one to two hours of prep work, far less than it takes to create an engaging lesson.

Part-time lecturers also face the prospect of being unable to reuse their work should the contract end, leading to wasted effort.

“Before the casualisation of lecturing work, a lecturer would have a full-time job. They would teach the class year after year so the time investment in preparing coursework pays off,” she said.

However, AI threatens more than the classroom experience. Assistant Professor Lin Hongxuan of the National University of Singapore worries that its growing use will undermine the entire peer-review structure in universities.

“Double-blind peer review is a massive unpaid and unrewarded service. People do it because they are asked to, or as a sort of noblesse oblige,” he said.

Since the burden of peer review often falls to postdocs, junior faculty, PhD students and other vulnerable members of the academic workforce, AI use is becoming an increasingly common solution for the time-strapped.

Recent scandals have uncovered LLM prompts hidden within journal articles, suspiciously ChatGPT-like responses from academic publishers, and dozens of journal articles suspected of having little human input.

Caption: Hidden prompts written in white text, designed to sway peer review carried out by AI.

This creates a serious problem within the ‘living, breathing heart’ of academia, explained Professor Lin.

“Peer review is everything. Without it, we are Penguin, we are Bloomsbury. It’s just opinion,” he said. “And it is in this very fragile part of the academic ecosystem where AI is making its presence felt.”

“Over time, it is very hard to say how this will distort research in science,” he added.

Dr McQuillan concurs. He maintains that academia’s critical edge is just as fragile, especially in a marketised higher education landscape, with jobs and promotions at stake.

“Performative radicalism is a paper tiger, and managerial metrics can affect careers,” he said. “In this, many academics are no different from bureaucrats.”

Welcome to the Uber-versity

Perhaps the greatest irony is that many academics find joy in using AI tools. Most of those interviewed recognise its utility and creative potential.

Dr Hussain has experimented with Adobe Express, using it to extrapolate patterns from images of bird plumage. She also uses AI to communicate with art students who speak English as a second language.

Over in Singapore, Professor Lin is collaborating with a computational linguist, using an open-source LLM to analyse revolution-era newspapers en masse and better understand the anti-colonial struggle in Indonesia.

“This sort of distant reading is very hard to do as an individual, so I am actually very excited,” he said.

Their joy in ‘dabbling’ with AI is undercut by the economic realities of the higher education system. Amid what Dr McQuillan calls “relentless and ongoing precarisation”, universities often deploy these tools to optimise assessment rather than to engage or connect.

“Given the overlapping effects of AI and increasingly authoritarian politics,” he said, “I imagine the near future for higher education will involve an algo-cultural suppression of unwelcome topics and promotion of spurious ones within a fully Uber-ised employment structure.”

It’s also a system that, in Dr Hussain’s view, entirely misses the point of education.

“You know what they need from the person marking,” she said. “It is not a summary of feedback, it is that engagement, guidance and connection.”

Or as one anonymous university student put it after receiving AI-prompted feedback on their final project: “I understand the pressures on lecturers right now that may force them to use AI, but it just feels disheartening.”

* Name has been changed to protect the interviewee’s identity.
**Raven reached out to the PhD students cited in this article, but they declined to be interviewed.

Updated 9 Dec 22:10.