De-coding AI with Dr Alexander Carter
Sam Bernald-Ross sits down with the Cambridge lecturer trying to understand how AI thinks

It’s hard to walk through a library in Cambridge without catching a glimpse of the black background and white text of ChatGPT, flooded with panicked instructions in the midst of a last-ditch effort to scrape together an essay. Earlier this year, the Financial Times reported that nine out of ten students use AI to aid their academic work, a figure that had risen by 40% in a year. Cambridge is far from exempt, with a Varsity survey in April finding that over 60% of students here are doing so too. So, with the release of GPT-5 last month, one question comes to the fore for all of us as students: where’s the line for AI use?
Maybe to understand this, we need to understand what AI really is. This is where Dr Alexander Carter comes in: fellow and lecturer in philosophy at Fitzwilliam College, and academic director of philosophy and interdisciplinary studies at the Cambridge University Institute for Continuing Education. His research focuses on understanding AI’s creative and thinking processes – or, more importantly, its lack thereof. He puts it to me simply: “AI doesn’t think like us; it thinks like we think we think. AI is our thinking reflected back at us through a kind of polished surface”. This leads us to his current research into the view that truly creative AI is impossible: “It’s to imagine I could gain something creative by looking in a mirror”. For Carter, all an AI can do is imitate, meaning the inimitable things about us as humans can’t be reproduced – and one of those is creativity. It’s “exactly our biological natures” that allow us to be truly creative – bio-logical, he points out, not logical-logical, the foundations of artificial intelligence. “The genius for ambiguity that we have is this capacity to exist in a world that is neither one nor zero. An AI cannot do this”.
“I’m not a Luddite” – and AI is “quintessentially technology”
Dr Carter is quick to dispel the idea that he is ‘anti-technology’ in the old-fashioned sense – “I’m not a Luddite”. And AI is, he notes, “quintessentially technology […] putting strategy out into the world to precede anything that we might do”. But that’s the “biggest problem […] what it’s preceding now is our thinking”. In using these AI tools to work or learn for us, we exacerbate what Dr Carter labels “the race to the middle” – bringing human and artificial intelligence to the same level by, in essence, making humans worse whilst the AI gets a little better.
And these aren’t just theoretical musings for the armchair intellectual, but genuine considerations for how we go about our AI use. Our capacities for ambiguity and spontaneity, things central to our creativity, cannot be found in an AI “so bound up with thought and being logical that once imposed on education it kind of nullifies the non-logical, non-rational stuff that education is all about”. It’s a seemingly odd point Dr Carter puts forward here: that real learning sometimes involves not thinking quite so much, whilst an AI, by definition, is only ever thinking. And this is reflected in his own academic approach. “Philosophers often think too much,” he tells me – an interesting contrast to the stereotype of the chin-stroking Oxbridge philosophy don.
So how should we be using these tools as students? Is it alright to use them to plan an essay, but not write it; to just summarise an article or two?
“AI doesn’t think like us; it thinks like we think we think”
His answer is perhaps surprising, and antithetical to the quick-fix nature of common student ChatGPT use: AI, he argues, should make problems for us, not solve them. “If you’re using it to target a solution, to simplify the thought processes and clarify something, STOP. If you’re using it to mystify, to problematise, to point you in multiple different directions and give you more to do, great”.
AI can be excellent, Dr Carter argues, when using it improves how we think. A major issue with our AI use is the instant satisfaction it offers: “The whole point is that making mistakes is what leads to learning as well… getting the correct or a satisfactory answer every single time… that’s the issue”.
He stresses that he doesn’t want to entirely discourage students from using genAI tools, acknowledging that, given the likelihood of an AI-driven future, it would be “absurd” to discard them from education altogether. AI isn’t really causing our problems, but is itself “a symptom of a much deeper problem… we have been teaching our education algorithmically for decades”. He challenges you to find a truly creative essay written by a student, saying that it would take some digging out. And that is a sign that “we need to be teaching people different sets of skills to what we currently teach”. He paints a quite radical picture: “We have to completely re-envisage what education is about”.
And perhaps this is exactly the point about our use of AI as students. Without the motivation to be truly creative in our work, we turn to the instantly satisfying solution for the sake of sheer expediency. But that comes at a price, “fundamentally eroding” our capacity for thought. It might make life faster for us, but, as Dr Carter puts it, your favourite meal will not be improved by “putting it in a blender and drinking it”.