AI isn’t worth the environmental costs
Nessa Yip argues that AI’s harm to the climate outstrips its potential benefits
ChatGPT has become a household name in record time. Just a week ago, Varsity reported on the University’s official guidelines on the use of Generative AI (GenAI). Reading these guidelines, and those of other Russell Group universities, one senses a tentative, quasi-welcoming attitude. Their statements are appreciably nuanced and reflect the ethical and legal grey territory GenAI is mired in, advising students and staff to adhere to more department-specific recommendations.
Interestingly, while the University generally welcomes GenAI, albeit with quivering arms, some departments have turned and walked the other way – the HSPS department advised against the use of GenAI in 2024, a decision I agree with.
I’ve deliberated on the extent to which my aversion to GenAI was justified, and have talked to students who use it to varying degrees. Perhaps some will say I shouldn’t knock it until I’ve tried it, but that’s the thing – I don’t want to try it.
This reluctance has shocked, and sometimes frustrated, me, as someone who centres their work on an ethos of “working smarter, not harder”. I kept questioning whether I was stubbornly refusing to “keep up with the times” out of principle, while the world moved at an ostensibly faster pace. But after weighing up different arguments and doing my research, I’ve come to a conclusion, at least for now. I think we’ve accepted this technology far too hastily and indiscriminately, without carefully considering the ramifications, especially the environmental ones, all for the sake of efficiency.
Perhaps the biggest reason some environmentalists, myself included, oppose GenAI is its energy-guzzling development pipeline. We’ve heard about large data centres – buildings that house computing hardware – which require cooling, essentially by being bathed in water, because of the power-intensive processes of training models and computing inferences. What’s more, developers are keen to churn out ever newer models, a turnover rate that sees the energy invested in each previous model go to waste. The large language model (LLM) market is cutthroat, with each company racing to build a better model; naturally, strategies to improve energy efficiency aren’t at the top of their priorities.
As with most environmental issues, it may be unfair to assign individual blame to users of GenAI, given that a select few tech companies are the active perpetrators of unsustainable technology development. Regardless, I strongly believe we should keep these massive carbon and water footprints in mind. It’s not strictly a numbers game to me – yes, using GenAI on an individual level probably isn’t going to change much in the grand scheme of things. But we should remain aware of the bigger picture.
Besides its environmental burden, ChatGPT might not be as efficient as it is lauded to be. ChatGPT, or GenAI more generally, repackages content it has seen during training. It doesn’t analyse, nor does it fact-check. Using GenAI can therefore double the workload for anyone aiming to produce critically analysed work with verified sources. That seems a small return for a hefty environmental price.
I’m also worried about the potential erosion of my attitude towards learning as a result of this technology. One example is employing GenAI to generate code, a practice I’ve seen grow in frequency in the research industry. The cycle of learning, failing and improving is a tenet I’ve tried to actively instil throughout university, and especially in the face of GenAI. Of course, some may argue that GenAI makes coding accessible to beginners and to those unversed in it – a point I can empathise with.
Using GenAI to generate “inspiration” for work that requires original creative thought seems problematic to me. The University has highlighted GenAI’s potential to “suggest sentences or paragraphs which are used to inspire a student’s own line of thought”, while advising that credit to GenAI isn’t required for work that doesn’t include a significant contribution from its usage. An obvious question follows: how does one define a significant contribution?
Given GenAI’s reputation for fallible citation, unwittingly plagiarising another author’s thought is not an unimaginable scenario. Just as credit is given where credit is due in academia, where “inspiration” is arguably taken all the time, shouldn’t the same apply to “inspiration” taken from GenAI? To rush headfirst into this technology without consideration is to find ourselves enmeshed in grey areas. And as we attempt to navigate through them, data centres continue to drain expanses of water and take up ever more land. Climate change is a ticking time bomb – it doesn’t wait for us to resolve our quandaries over GenAI, or to realise its environmental impacts.
Ultimately, where you stand on the issue is your choice – my hope is to reassert the importance of making decisions that align with your learning ethos, and of weighing the perceived benefits of GenAI against its environmental cost.