
We’ve probably read and heard enough about the revolutionary ChatGPT.

For many of us, it has already been integrated seamlessly into our daily lives. Setting aside the fact that OpenAI’s website somehow, coincidentally, chooses to operate at full capacity an hour before our assignment deadlines, it is arguably pretty helpful. From summarising long texts and writing emails that stop us sounding like pushovers, to drafting internship applications, ChatGPT has plenty of roles to fill in a university student’s life.

Almost half of Cambridge students had used ChatGPT to complete university work just five months after its launch on the OpenAI platform. The University has taken a harsh stance against using ChatGPT in Tripos examinations, considering the use of its content “academic misconduct” and thereby implicitly attesting to how competently it can replace certain assessed skills. Meanwhile, the pro-vice-chancellor for education has expressed his disagreement with blanket bans on AI software like ChatGPT, arguing instead that we should adapt our “learning, teaching and examination processes so that we can continue to have integrity” while acknowledging the use of AI tools.

“Having a search engine that directly answers our questions is dangerous if used as a replacement for our independent thinking”

While students continue to use ChatGPT to aid their studies, numerous experiments with the platform have suggested that it provides unreliable information and gives rise to, among other things, a lack of professionalism: one lawyer was found citing non-existent cases in court after using ChatGPT to prepare his filings. However, beyond plagiarism and professional misconduct (yes, there are worse things in life), there is an arguably more insidious, long-term effect of having such a powerful AI search engine.

If you’ve ever heard of the “Google effect”, you can likely predict that something similar arises with ChatGPT. The Google effect refers to digital amnesia: the tendency to forget information that can readily be looked up online. The phenomenon was first described and studied in 2011, when researchers concluded that people tend not to remember information they believe can easily be found later. If the information is saved somewhere, people are far more likely to remember where it is stored than the information itself. Interestingly, we tend to remember either the fact or the location, but rarely both.

As we use our smartphones more, they serve as “extensions of our brain” to which we can outsource information. Let’s be honest: how many of us rely on Snapchat to remind us of our friends’ birthdays? And how many of us could recite our closest friends’ phone numbers without looking at our contacts? Human beings have been outsourcing memory to other avenues for centuries (diaries and books, for instance), but modern technology takes it a step further, as online search engines provide an endless stream of information. In fact, research by George Miller in 1956 suggested that the average person can hold only around seven items in their working memory; more recent research puts that figure closer to four.

“Perhaps one day ChatGPT will be able to generate convincing essays that score at least a 2:2 in the Cambridge Tripos”

Now, these concerns have been aired since the rise of Google and the widespread accessibility of the internet, but we give them little thought today (even I am guilty of Googling “effects of ChatGPT on human memory” for this article). In light of this, some argue that our dependence on ChatGPT is not materially different from our reliance on Google, and that the concerns and controversy surrounding it will fade with time.

Regardless, there is an important distinction to be recognised between advanced AI models like ChatGPT and our mundane search engines: when we use Google, we search for relevant information to address a specific question or task; when we use ChatGPT, we are handed the answer itself. Having a search engine that directly answers our questions is dangerous if used as a replacement for our independent thinking, which is why I instinctively limit my use of ChatGPT. Overreliance on a bot’s highly specific knowledge and analytical skills may eventually breed the assumption that solutions are merely a chat message away, gradually eroding our problem-solving abilities. What will happen, then, when we do not have our devices to hand?


However, not all hope is lost for ChatGPT enthusiasts and last-minute assignment completers. ChatGPT is not all-powerful, and it often oversimplifies complex problems, given its innate inability to think critically. As OpenAI continues to roll out updates to the platform, perhaps one day ChatGPT will be able to generate convincing essays that score at least a 2:2 in the Cambridge Tripos. Nevertheless, we should stay actively aware of the impact of using it. Allowing our minds to rely on themselves to recall information and facts will prevent our memory from deteriorating, and memory is an important avenue for us to, well, function adequately as human beings.

Most importantly, memory skills are the foundation of our ability to think creatively and express our ideas. While AI can produce convincing imitations of human writing and expression, it can never truly be as emotionally and psychologically complex as we are. Only by using ChatGPT to supplement our learning, rather than replace it, can we embrace technology as an enabler of our creative thinking and problem-solving skills, and not an inhibitor.