The world is buzzing with talk of artificial intelligence (AI). Get to grips with platforms like ChatGPT and you suddenly have access to a whole host of shortcuts, and even job opportunities. Few places feel this more keenly than a university, where thousands of us labour to find new research gaps or write our next essay. Indeed, a Varsity survey at the start of Easter Term found that over 60% of Cambridge students have used AI to help with their work, up 14% from two years ago. Many faculties have yet to catch up with this pattern. In the meantime, the writers here share their views on the place of this new technology in our learning, and our lives. Daisy Stewart Henderson and Luca Chandler debate the extent of AI’s impact on human creativity, whilst Teymour Taj and Elsie McDowell focus on the need to wake up to its effects on the study of STEM and on the environment. If open-source AI models are the tools of the present, rather than the future, it is time we too opened up about them.
Daisy Stewart Henderson: AI cannot replace the humanities, nor the human
Recent surveys have found that 92% of university students in the UK make use of AI. There is no doubt that this will impact the study of the humanities. But, ultimately, these subjects remain pretty much what it says on the tin. They are the study of people, by other people, seeking to understand the human experience beyond their own lives. So why are we all so worried about AI’s encroachment?
The term ‘humanities’ comes from fifteenth-century Latin, referring to, as one might expect, the study of humanity as opposed to divinity. Its early use by the Renaissance humanists starkly puts our concerns into perspective. In theory, a machine may mesh together an encyclopaedic knowledge of emotions, religious beliefs, or conceptions of virtue. But when you imagine ChatGPT as a humanist, painting the Mona Lisa or writing Utopia, the notion that we are on the cusp of an AI Renaissance is comical.
“I can’t see a world in which ChatGPT steals the jobs of philosophers”
My fellow History students and I are regularly reminded that true objectivity in our discipline is impossible. Ultimately, and unavoidably, we all view the lives of others through the lens of ourselves. The details that stay with us, that move us and that evoke our empathy, are highly individual, shaped by personal experiences and values that cannot be divorced from our studies. And nor should they be.
I can’t see a world in which ChatGPT steals the jobs of philosophers, or historians, or novelists. And to be brutally honest, if all the essays you write, or the poems you read out, are written by AI, I’m not sure your services will be a great loss to society, anyway. After all, their humanity is the reason we engage with these works.
AI can’t replace the humanities, because AI can’t laugh or grieve, dream or doubt. It can’t look at a child’s drawing and cry. It can’t change its life after reading a poem. The humanities are about being human – and that is something no machine will ever be.
The above paragraph is what ChatGPT had to say when I asked it to write an article with the same title as this one. Even it is self-aware. To be honest, it took the words right out of my mouth. But ultimately, this article could only have been written by me. Because when I think about the fact that AI can’t laugh or grieve, I think about these not as concepts, but as people, and experiences. I remember the giddy hilarity of a specific moment with friends, or the weight of a specific funeral. I think of a specific drawing, and a specific poem. Even the most mundane of supervision essays is a mosaic of the elements that make us who we are, that shape how we interpret the world, and, in one way or another, determine the words we write. Maybe I’m overthinking it, or getting too emotionally invested. But isn’t that the whole point?
Luca Chandler: We cannot let the machines take over our creativity
In E.M. Forster’s prescient 1909 story The Machine Stops, humanity lives underground, nourished by a vast and intelligent system that satisfies every need, including the need to think. Human contact is replaced by glowing screens; original thought is replaced by recycled ideas. The catch? When the machine inevitably breaks down, so does everything else.
From a contemporary perspective, its message rings eerily clear. Forster holds up a cautionary mirror to the age of interdependence and hyperconnectivity. As you may have already guessed, I read its parallels in relation to AI’s influence on our creative capacities. Today, AI tools are being integrated into our creative processes at breakneck speed. For many of us (myself included) they’re part of how we study, brainstorm, and even write.
There’s an obvious appeal to AI-powered creativity. It removes the dreaded blank page. It offers endless prompts, polish, and pseudo-wisdom. It can imitate style, mimic originality, and even generate poetry that passes for decent. Especially for students under pressure, it’s a tempting shortcut or a comforting co-writer.
Yet the more we rely on the machine to ‘create’ for us, the less we engage in the messy, vulnerable work of making something new. Creativity is about the process of fashioning and sculpting, not just the outcome. The frustration and doubt that eat away at our patience are fundamentally productive. We cannot build these critical skills if we outsource our creative labour.
“There’s an obvious appeal to AI-powered creativity”
Forster’s Machine didn’t fail because it was badly built. It failed because people forgot how to live without it. That’s the risk we run when AI becomes our first stop instead of our last resort. We risk trading originality for efficiency, and insight for coherence. While the output might look convincing, it can lack essential subjectivity.
Call me a romantic, but I still believe subjectivity is the beating heart of human creativity. AI can shuffle what’s already been said, repackage beauty, and mimic wit, but it can’t feel anything it generates. As Daisy rightly points out, machines cannot suffer or laugh, and we should keep this insight at the heart of our everyday interactions with AI.
As we welcome these tools into our lives, we risk mistaking their fluency for something deeper. If we start aligning our own processes with theirs, constantly optimising and streamlining, we begin to hollow out the very things that make us irreplaceable: intuition, empathy, imagination, care.
The danger isn’t that AI will surpass us. It’s that we’ll forget what it means to be more than efficient. That we’ll dull our instincts, outsource our insights, and gradually become as mechanistic as the tools we build. So yes, AI can’t feel. But this should not comfort us; it should warn us. The more we rely on it, the more fiercely we must protect our capacity to feel.
Teymour Taj: The University’s fear of AI shouldn’t get in the way of our learning
First, the calculator was going to destroy our maths ability. Then, the Internet was going to end academic integrity. And today, if you believe the doubters, AI will doom our critical thinking skills forever. Don’t get me wrong, I do think that if AI is used lazily with the aim of getting it to do your work for you, it can be detrimental to education. However, treating it simply as an effective tool can make us all much better students.
What does the University have to say? Well, not much. On its website, the sections on a potential “Generative AI literacy course” and “AI and Assessment Policy Framework” are marked “coming soon”. The page on academic misconduct simply carries a one-sentence warning: “A student using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct, unless explicitly stated otherwise in the assessment brief.” Hardly a ringing endorsement of educational AI use. I believe the University should encourage its students to use AI, letting us study more efficiently in a way that works for us.
AI allows students to adapt their studies to their own learning style, which may not overlap with the way their course is taught. As an Astrophysics student, I have access to past exam papers dating back twelve years, but no answers to go with them. I would wager that most STEM students, like myself, learn best from marking their own work and understanding where they went wrong. Yet despite repeated requests, course organisers have not budged. ChatGPT has been a lifeline, generating model answers with surprisingly high accuracy and allowing me to mark my own work and revise effectively.
“AI can make us all much better students”
AI also provides instant feedback, helping me get ‘unstuck’ on a problem sheet question or wrap my head around a tricky concept quickly, rather than waiting weeks for a supervision. With the University paying rates of almost £20 per head for an hour of supervision, it would be far more efficient to encourage self-marking assisted by AI. Supervisions could then be refocused towards stretching students’ understanding of the subject, rather than correcting small mistakes on question sheets.
If the University does not start promoting AI in education, the consequences could be dire. AI is being integrated into the workplace at breakneck speed: unfamiliarity with the new technology could make the prospect of hiring Cambridge grads less appealing. But the problem starts before students have even left Cambridge. The current guidelines are unclear and unrealistic. If one student interprets ‘generated by artificial intelligence’ as any use of AI software, while another thinks that it only refers to material copy-and-pasted from a chatbot, clearly the first student will be at a disadvantage.
With every Google search now providing an ‘AI summary’ and tools like grammar checking and code auto-complete being integrated as a default setting into software, AI-generated content is becoming impossible to avoid. It is high time for the University to step up to the plate and embrace the benefits of AI, for the sake of its students and its future.
Elsie McDowell: Cutting corners will hurt the planet
AI is the latest incarnation of ‘sexy tech’ that is supposed to revolutionise our daily lives. I do not dispute that it has the potential to be a force for good in some instances – think improving the mapping of the projected impact of future wildfires, or streamlining the detection of methane emissions. That said, AI often replaces simpler, less carbon-intensive actions with unnecessarily high-tech ones.
Take the vast quantities of water and energy needed to sustain our individual AI usage, for instance. Just 100 words generated by ChatGPT uses the equivalent of a small bottle of water, and enough electricity to power 14 LED light bulbs for an hour.
Yet we are increasingly normalising the use of generative AI, especially ChatGPT, for the most mundane and simple of tasks. Writing emails, coming up with essay titles, even checking the weather forecast: all of these would take tiny amounts of time and energy if we used the tools already available to us.
“The energy required to train generative AI is enormous”
Of course, no online activity is completely without environmental impact. A single Google search produces approximately 0.11g of CO2, and the 376 billion emails projected to be sent each day in 2025 will account for around 0.3% of global emissions. And, as with any discussion of individual environmental impact, it is important to note that our scope for action will always be limited by the choices of corporations and governments.
That said, I do think that our individual usage of AI is one of the clearest examples of an individual choice with environmental ramifications that are completely within our control. The energy required to train generative AI is enormous, as are the quantities of water needed to cool the hardware that sustains it. Not only are new data centres needed to power our usage of such programs, but the power requirements of AI are increasing the amount of energy used by pre-existing data centres too.
AI is set to be the defining technological change of this century, and it has the potential to transform our lives and economies for the better. Hopefully, the chatbots of the future will be more energy-efficient and less water-intensive than those of today. But to achieve these lofty aims, we need to use AI wisely. If and when we choose to use it, it should not be for a task that could easily have been completed in a far less resource-intensive way.
Discussions around individual energy usage are always difficult. But, in a world that is already feeling the effects of climate change, the last thing we need is yet another reason to accelerate our energy consumption.
Maddy Browne: Conclusion
As Teymour points out, we have been scaremongering about technology since the start of the Industrial Revolution. None of the fears these writers raise is new. Yet they do point to a growing sense that, as our technology accelerates, AI presents a bigger, bolder threat, something unique in our history. The latest models can be vital in recognising patterns we are still learning to grasp, from identifying cancers to marking problem sheets. That is, when they are not accelerating our destruction of the planet, as Elsie reminds us. The bigger problem is whether AI starts not only to replicate but to damage the way humans think. Luca and Daisy both warn, to varying degrees, that human creativity is at stake if we do not learn the risks of AI. So, regardless of when and how we use these tools, we must push for comprehensive regulation at university and governmental levels. Artificial intelligence threatens to make us less intelligent and intuitive, so long as we submit to normalising its everyday use. Perhaps every time we ask ChatGPT to write an essay title, it gets a little harder to do it ourselves.