"We would partner with companies whose reputations are less than virtuous"Jordan Inglis for Varsity

Joshua –

If the University of Cambridge seeks to remain a modern, forward-looking institution, then yes: AI usage should not just be allowed, but encouraged. There is no better way to ensure equal participation than providing a market-leading AI tool like ChatGPT-5 free to all students.

Whether you like it or not, AI usage is now commonplace within universities, but our current relationship with it remains confusing at best. There are seemingly endless AI tools available, and yet a widespread lack of understanding of how they work. Sprinkle in understandable distrust of their outputs, and you have a divided society. Many supervisors and lecturers will shame you for even contemplating its usage, while others actively use AI as a learning tool. The result is that some students are AI literate while others, afraid to use it, risk falling behind. Let’s embrace AI: a strong central policy with a recommended AI tool would help all students, regardless of subject, to benefit.

Access to AI tools risks becoming yet another dividing line between the haves and the have-nots. Considering that the University seeks to provide a consistent education to all students regardless of background, wealthy students’ ability to pay for productivity-boosting AI tools will only build upon the inequalities that students arrive with. With ChatGPT subscriptions ranging from £20 to £200 a month, the University must strive to prevent this disparity from distorting academic outcomes.

Students are not the only ones who would benefit from AI access: from college gardeners planning future flower beds to alumni offices estimating event costs, the range of tasks AI can support is seemingly limitless. For researchers, the value is even greater. AI tools that can summarise research – or even curate ‘living’ evidence databases – could make evidence synthesis faster, cheaper and more accessible. Given that such reviews can sometimes cost upwards of £100,000, investing in these tools is essential.

AI usage is also increasing in the workplace, with many companies encouraging it and often offering their own company-specific AI tools. Being AI literate is growing in importance in the job market, but using AI responsibly is not trivial. It takes time and experience to understand when an output can be trusted, and knowing how to write an appropriate prompt is a skill in itself. Students should have the chance to develop these skills in a safe, controlled environment, where any over-reliance on AI only temporarily affects their own learning and progress – making university the perfect place to practise using AI responsibly.

“Access to AI tools risks becoming yet another dividing line between the haves and the have-nots”

It’s reasonable to worry that encouraging AI usage will lead to its use as an answer sheet or essay supplier, relieving students of the requirement to think. But all Engineering students know that self-control is both necessary and possible. For years, we’ve had access to CamCribs, a website with solutions to almost every possible supervision question. The site is easy to misuse, yet one probing question in a supervision quickly exposes over-reliance. The same applies to AI: it’s easy to abuse, but within the existing framework of supervisions, any over-dependent students can be identified and guided to improve their learning habits.

Providing AI tools to all students is not about replacing learning, but about updating how we learn to reflect modern skills and opportunities. Fair access to AI will ensure all students are prepared both for their time at Cambridge and for the wider world ahead.

Jasper –

I am no neo-Luddite; I use AI regularly, and it is irrefutably valuable if used correctly. But if Cambridge were to follow Oxford’s lead, it would mark a troubling race to the bottom. Rather than supporting students’ critical thinking or helping those in need, we would be partnering with companies whose reputations are less than virtuous. This is not an upgrade, but a dire decision which should shock all of us.

Research already suggests that overreliance on AI can lobotomise the very skills that a Cambridge education is built upon: creative and critical thinking. While I am sure that most students could exercise some self-restraint, the University should not be funding or promoting something which can erode our most critical faculties and lead to sloppy work and sloppier marking. It would continue a pattern we have already witnessed with earlier technologies: our growing inability to detach from our phones, and now AI, has steadily weakened our capacity to think deeply and engage meaningfully. Not simply endorsing, but paying for AI is like injecting it into a petri dish of poor academic habits, where it is more likely to spread and infect than to aid.

Giving AI to all students isn’t a serious attempt at levelling the playing field: it’s a display of frivolous spending at a time of financial hardship. Last year, the University ran a deficit of over £50 million. How could another expense on AI possibly be justified? Rather than funding AI masquerading as mobility, the University could spend the money on support for students with learning difficulties, on access for disadvantaged groups, on library resources, or even on repairs to accommodation. This spending would be more meaningful, and its impact tangible, for the students who actually need support. AI is not a solution for inequality; it’s not even a bandage. We need to invest in our people, not in programmes.

“Overreliance on AI can lobotomise the very skills that a Cambridge education is built upon”



Finally, we need to consider the nature of such an agreement. This is more than just a financial transaction; it’s a data pipeline through which the University would hand over thousands of hours of research to companies infamous for their data-scraping tactics, potentially without the weight of imposed guardrails. Linking student accounts and the reputation of the University to any company in such an open manner risks the integrity of our research, the security of our data, and the privacy of everyone involved.

The constant demand to remain innovative is understandable in an exceptionally competitive education environment. This, however, would be a profound misstep. Undoubtedly, AI will shape our futures, but it is not the University’s role to further its adoption. At a time when our University is in financial strife, when we deeply need to cultivate human ingenuity, and when our proposed partners are ethically and legally questionable, our course is clear. Oxford can chart its own ill-fated course, but here we must invest in people, not programmes. We need to empower students to define the future of this University, not AI.