The anti-AI performance at Cambridge
Abril Duarte González examines how resistance to AI is less about preserving academic integrity and more a case of plain denial
At Cambridge, I’ve noticed a pattern: compared to friends from home, school, and even my own family, people here can be almost desperately anti-AI. There’s a kind of puritanical aversion to anything AI-generated. I’ve had course mates tell me they don’t even know what the chat interface of ChatGPT looks like. Whether that’s true or just a performance of academic snobbery isn’t really the point. What’s interesting is that in an institution like this, people are so resistant to something that is very obviously not going away.
Now, to be clear about my stance: technology (like any kind of change) comes with risks. Cheating and plagiarism are the obvious ones, but they’re only one part of the picture. You could, in theory, go through an entire degree outsourcing work to AI. There are also more extreme concerns: stories of AI-induced psychological issues, the rise of ‘AI slop’ in art, and the fear that it erodes critical thinking or genuine creative intention. And then there’s the environmental cost, which is hard to ignore: the sheer electricity demand of large language models, the associated carbon emissions, and the enormous amounts of water required to cool the hardware are real, material consequences.
“Burying your head in the sand doesn’t protect you from what’s coming”
So yes, there are good reasons to care whether something was written by an LLM, but I do think that part of the aversion usually stems from AI being perceived as a threat. Some of that is simple: people don’t like change. There have always been people like this – those who would have resisted the shift from manuscript to print, from letters to email, or from landlines to mobile phones. But there’s also a deeper layer. For some, AI threatens what they do, and the value of it, whether that’s a hard-earned Cambridge degree or an aspirational future career. That anxiety is understandable. If copying and pasting from an LLM is easier than reading, writing, or thinking, then of course it raises questions about effort and fairness. People who didn’t have access to these tools may expect others to go through the same intellectual labour they did. In that sense, the aversion is often a reaction to something legitimate: a frustration with cheaters and free riders.
But boycotting AI altogether, or dismissing anything it produces purely because it’s machine-generated, doesn’t really solve that problem. Burying your head in the sand doesn’t protect you from what’s coming. I’m not going to list all the ways AI is helpful, but some people quietly admit to using it to interrogate their essays before supervisions, asking it to simulate that familiar Socratic questioning. Others use it for SPAG (spelling, punctuation and grammar) checks, emails, LinkedIn posts, even recipes.
“The issue isn’t the technology; it’s the user’s refusal to exercise judgement”
More broadly, technology has never straightforwardly undermined creativity in the way people fear. Factory automation didn’t eliminate work, photography didn’t kill painting, and the internet didn’t kill print. These arguments assume that human creativity is a fixed quantity, when in reality it tends to expand and adapt. There’s a contradiction in fearing AI as the end of creativity when it is itself a product of human creativity. So maybe the question is less about the tool and more about how people choose to use it.
Personally, I don’t think the answer is abstinence – I think it’s discipline. AI, like any tool, requires skill. It rewards thoughtful use and punishes laziness. You can see that in the output: overuse leads to generic phrasing, stylistic repetition, that slightly hollow tone we’ve all learned to recognise. At that point, the issue isn’t the technology; it’s the user’s refusal to exercise judgement. The mind behind the prompt still sets the standard. Which is why the real danger might not be AI itself, but intellectual passivity and overreliance without reflection. Even then, what if AI doesn’t create passivity so much as it reveals it?
There’s also, I think, a certain kind of snobbery at play: a reluctance to engage with machine learning tied up with a strong sense of intellectual self-reliance or of knowledge as something personal and individually curated. A complete refusal to even consider how it might be used productively starts to look like a form of gatekeeping: an insistence that academics alone, through lectures, supervisions, and monographs, are the only legitimate source of knowledge. AI could challenge that academic scarcity, and with it, a certain kind of authority.
Some of the backlash in faculty newsletters or warnings before coursework submissions, suggesting students can’t be trusted or that teaching is no longer worth it, feels less like principled resistance and more like a mixture of moral panic and wounded prestige. At the same time, there’s something oddly contradictory in how we talk about AI. On one hand, we reduce it to a set of practical possibilities; on the other, we inflate it into something almost mythical: a looming, hyper-intelligent force with vaguely malicious intent. Both framings miss something. We shouldn’t ignore the environmental, psychological, and educational implications, but we also don’t need to collapse into panic or a sense of inevitable decline. If we approach AI with nothing but contempt and fear, we risk missing what it might actually offer. And more importantly, we avoid the harder question – not whether to use it, but how to use it well.