The University of Cambridge has released new guidance for students and staff on the use of Generative AI (GenAI), which states that students may use AI in their personal work.

While noting that AI may “provide useful responses,” the guidance outlines a set of general principles for the use of Generative AI, intended to mitigate the “legitimate ethical concerns” associated with its use.

Additionally, it acknowledges that the wide variety of study at the University means that “appropriate use is better defined locally by Department, Faculty or College depending on the context”.

The newly released guidance states: “Students are permitted to make appropriate use of GenAI tools to support their personal study, research and formative work.” However, it advises that students should acknowledge the use of AI when it has contributed a significant, unchanged portion of the work.

Individuals should “always check with a member of staff” or “check specific usage guidance with relevant areas of the University.” For staff, the guidance reads similarly: “Staff are permitted to make appropriate use of GenAI tools to support their own work”.

The new guidance largely reflects the University’s previous stance on AI, after the pro-vice-chancellor for education told Varsity in 2023 that ChatGPT should not be banned, as “we have to recognise that this is a new tool that is available”.

In the principles for usage outlined, the University emphasises that the data and information used by GenAI are not sufficiently referenced, which can lead to both “inaccuracies” and “difficulties citing valid evidence”.

It also warns that AI may draw on data containing “social biases and stereotypes”, and that using AI carries privacy risks, as the tools may use or store individuals’ own information. The guidance advises individuals to “not share anything personal or sensitive”.

The guidelines also point to the “severe environmental impact” of GenAI, noting its intensive power and water usage. As a result, they advise students to use the tools only “as and when necessary and be as efficient as possible to mitigate these effects”.

A more detailed set of guidance reminds students of the statement in the Plagiarism and Academic Misconduct guidelines, released in 2023, that “using any unacknowledged content generated by artificial intelligence within a summative assessment as though it is their own work constitutes academic misconduct,” unless the assessment brief specifically states otherwise.

Meanwhile, examiners may use AI “in the processing and formulation of their own comments and feedback,” such as consolidating their notes and rephrasing comments. However, they are not permitted to submit student work to GenAI under any circumstances, and must not use the tools to analyse student work or to produce feedback on it.

In response to the new guidelines, Cambridge Climate Society told Varsity: “We are concerned that the University is engaging in a similar attempt of shifting the onus of environmental action onto students and staff while eliding its own role in the development and financing of environmentally and socially harmful technologies”.


They added: “We appreciate that the University administration is mindful of the serious environmental and social impacts that extensive AI usage can engender, and we broadly support the inclusion of advice which is aimed at discouraging inefficient usage practices. That said, the current framing perpetuates the narrative that sustainability is an individual burden, a narrative employed by carbon majors to displace their responsibilities.”

Earlier this year, a Varsity survey found that over 60% of Cambridge students had used ChatGPT in their work, a 14% increase from 2023.

This follows the introduction of an ‘AI’ category for academic misconduct for the first time in 2023/24, when examiners reported three cases of AI-linked academic misconduct.

It also comes after the University launched “AI clinics” earlier this year to assist academics and students in using AI effectively in their research, claiming the tools can help enable a “new wave of scientific discovery”.