Reframing the anti-AI ‘performance’
Ruby Livingston re-examines the opposition to AI, arguing that it’s rooted in legitimate political concern
The effective accelerationist movement (e/acc) wants to let humanity deregulate itself down a posthuman superhighway towards a singularity, the point where AI begins to improve itself beyond human control. e/acc holds that the universe operates as an optimisation process, seeking the conversion of free energy into entropy at accelerated rates. In this view, guardrails on technology and markets are not only pointless but unnatural – intelligences which lack our fleshly inefficiencies are inevitable, and should be welcomed. “Stop fighting the thermodynamic will of the universe,” says one of the movement’s proponents, who for some reason goes by @BasedBeffJezos.
I’m definitely not suggesting that Abril Duarte González subscribes to e/acc, a movement championed post-ironically by a weird mix of niche San Francisco substackers and Silicon Valley elites like Marc Andreessen. But e/acc came to mind for me, reading González’s description of AI’s blameless scientific inevitability. “Technology (like any kind of change) comes with risks,” she writes, “burying your head in the sand doesn’t protect you from what’s coming.” The puzzle of González’s article is why “people are so resistant to something that is very obviously not going away.”
“The issue at stake is AI’s entangled relationship with systems of social domination which it upholds and relies on”
I am sympathetic to González’s critique of a “puritanical” aversion to all potential uses of AI. It reminds me of the way journalists co-opted the phrase ‘Trump derangement syndrome’ to skewer the wholesale, uncritical rejection of any policy decision made by the first Trump administration. AI has boosted cancer detection rates in the UK by eight percentage points, and it has allowed sufferers of motor neurone disease to once again speak with their own voices. It could improve forecasting of the extreme weather events induced by climate change, and it can improve the prevention of agricultural disease outbreaks by registering early indicators invisible to the human eye. Clearly, not every single application of AI is bad.
I think the issue is the measure by which we criticise AI. Arguing against its usefulness, however unreliable the models may be at the moment, is ultimately a fool’s errand. González is right that AI is probably going to get better and it’s probably not going to go away. This is why, I think, she sees opposition to AI among the bright young minds of Cambridge as a trendy pretence which must conceal some deeper rationale. I would suggest alternatively that people are weighing up a less technocratic calculus.
Apart from the big doomer threat of the singularity, people generally don’t boycott AI purely on the grounds of what it can do. González’s concern with AI’s capacity to enable free riding is, in my experience, remote from the reasons people dislike it. Rather than its novel functional power, the issue at stake is AI’s entangled relationship with systems of social domination which it upholds and relies on in turn.
“AI doesn’t operate within a vacuum, and it is social conditions which determine both the likelihood, scale and nature of these risks”
González is quick to dutifully caveat her argument with a list of “good reasons” someone might be opposed to AI: you can cheat with it, it might send you into psychosis, it makes bad art, and it uses a lot of water. Yet she brushes past these harms as though they are mere lines used to justify a meaner, simpler impulse, a kind of reflexively defensive ‘academic snobbery’ which cannot abide the democratisation of legitimate knowledge. In this telling, the Luddites took a hammer to the shearing frame compelled by a selfish animal fear of technological progress, rather than a recognition that the relations of production were irrevocably altering, and not in their favour.
It is true that new technology produces new risks and new opportunities alike. But obviously AI doesn’t operate within a vacuum, and it is social conditions which determine both the likelihood, scale and nature of these risks, and who benefits from these opportunities. AI’s capacity to distil useful insights from unfathomably huge quantities of data has genuinely exciting possibilities, but we live in a world which incentivises its use for mass surveillance of the US population over fighting climate change.
“Fuelled by a drive as blind and sure as the one which powered the transition from archaea to eukaryote”
Likewise, AI doesn’t just use a lot of water – the data centres required to sustain it leave tap water undrinkable in the poorest parts of Virginia and Georgia. AI doesn’t just flood the Instagram explore page with softcore slop – in 2023, 98% of all deepfakes were pornographic images, 99% of which targeted women and girls. AI agents covertly stereotype African American English (AAE) speakers more negatively than any human stereotype about African Americans ever experimentally recorded, and the negative mental health effects of AI are most pronounced in those already at psychological risk. LLMs are trained by scraping massive amounts of copyrighted creative work acquired without consent or compensation, only to put those same creative workers at risk of displacement. Appallingly, the commercial AI models which write the rich world’s emails have also identified strike targets for the Israeli military in Gaza; OpenAI’s opportunistic new deal with the Pentagon has opened the door for similar experiments in warfare in Iran.
By all accounts, AI is set to disproportionately enrich a set of high-income individuals while leaving everyone else at risk of a life in the “permanent underclass.” AI capital circles a closed loop between Nvidia, Microsoft, OpenAI, Oracle, Anthropic and CoreWeave, skewing incentives and threatening dire economic consequences for the rest of us when the bubble bursts. The UN Development Programme now warns of an impending “Next Great Divergence” in global wealth distribution. The pattern here is clear – the use of AI supercharges existing inequalities.
I don’t want to get evangelical about it – use AI as much or as little as you want, that’s your decision. My point is that for many people, making that decision involves weighing up more factors than whether AI is capable of writing a better essay than an academic. The terrain of the debate is not the space of pure utility González describes. The e/acc vision of technocapital’s propulsive forward momentum, fuelled by a drive as blind and sure as the one which powered the transition from archaea to eukaryote, is not interested in the conditions of human social organisation. I am. I think most people are too, and I think reframing the debate this way makes opposition to AI seem less like some kind of virtue-signalling performance, and more like a realist approach. Given that the world works the way it does, the march of progress cannot be politically innocent. Are you prepared to hitch your wagon to the consequences unleashed along the way?