Cambridge University: simultaneously the 2nd, 12th and 34th best uni?

When you were applying to university, chances are you looked at a league table. With so many universities and course offerings out there, it’s only natural; how else would you decipher what’s ‘good’? Sure, there’s Oxbridge and the Russell Group, and a handful of other reputable universities ordained by word of mouth. But without the Internet’s trusty legion of league tables, how else are you, a bright-eyed yet busy 17-year-old, supposed to sort the Haves from the Have-Nots from the Have-Beens?

To be quite honest, I’m not entirely sure, but applicants seem to have managed just fine up until about twenty years ago.

In 2003, Shanghai Jiao Tong University in China compiled the first international university ranking, listing what it considered to be the 500 best universities in the world: the Academic Ranking of World Universities, or simply the Shanghai Ranking.

“League tables don’t even try to pretend that they’re objective or steadfast, because they know that consumers no longer expect it from them”

The following year, Times Higher Education took notice and followed suit, announcing its top 200. In due course, so did many others: the Leiden ranking, Webometrics, the U-Multirank. As a result, in the two decades since, journalists, chancellors and students have all spent hundreds of hours dissecting, deliberating and debating these results, despite the vast majority not knowing or even questioning how they were formulated.

University ranking has become a business, and a very profitable one at that, with universities and students alike taking notice. However, with more and more rankings competing to co-exist within the HE market, there’s dwindling incentive for these metrics to be objective, so long as they generate clicks and stimulate university investment. In fact, it’s more profitable for them to be different, to use diverging methodologies and create confusion, because difference is what allows their otherwise conflicting products to co-exist at all.

For example, the Times Higher Education rankings and the QS World University Rankings were originally produced in partnership with one another, until they split in 2009 and QS was replaced by Thomson Reuters until 2014. The league-table powers-that-be agreed upon rankings until suddenly they didn’t – or, more likely, until the market was big enough and profitable enough for them not to.

League tables don’t even try to pretend that they’re objective or steadfast, because they know that consumers no longer expect it from them. As a result, they’re all different. For example, Sorbonne University ranks in the top 250 according to Webometrics, yet the top 100 according to Times Higher, and even the top 50 according to Shanghai. Dartmouth College, one of the US’s prestigious Ivy League colleges, is relegated to the top 400 by Shanghai and the top 250 by QS, yet manages to make the top 150 in Times Higher.

When it comes to Cambridge, the QS World University Rankings proclaim us 2nd, while the Leiden Ranking goes with 34th. It’s 12th according to Webometrics and joint 5th over at Times Higher. So of course Cambridge will splash QS all over its social media, while the Leiden Ranking – an equally rigorous statistical endeavour – is brushed under the carpet.

Every ranking system claims to have the best methodology, yet every ranking system remains subject to criticism. For example, the Shanghai Ranking gets flak for its favouritism of English-language research journals, leaving non-English-speaking universities at a disadvantage. However, both QS and Times Higher incorporate a reputational survey into their ranking method, which perhaps turns the whole scheme into a popularity contest. Similarly, U-Multirank, produced by the European Commission, was initially accused of being a means to favour European institutions – although a lot of this criticism was driven by Times Higher Education, who aren’t exactly neutral on the matter.

“It seems that methodology improvement has become a convenient euphemism for people-pleasing”

Meanwhile, the Times are using last year’s National Student Survey results in their Good University Guide 2024, which seems like something that should be better advertised than it is – perhaps alongside the guide itself, rather than in a separate article. After all, last year’s cohort of students represents an entirely different life cycle of industrial action and Covid-19 restrictions from what incoming freshers can expect.

However, even less advertised is the fact that when it comes to Oxford and Cambridge, the Sunday Times is using NSS scores from all the way back in 2016. Even when the data has been adjusted using averages, this surely seems dubious. Pre-Covid, pre-strikes, pre-cost-of-living crisis. Completely outdated when it comes to university today and how students feel about it. This is on account of the ongoing, years-long boycott of the National Student Survey by Oxford and Cambridge students, which itself tells a whole story about how graduates feel towards the datafication of their university experience. After all, to what extent can you quantify three whole years of learning and living, conveniently reduced to a quick-glance table?


But what the rankers really hope their methodology is best at is telling the market, the profit drivers, what it wants to hear. So when LSE wasn’t included in QS’s top 40, QS changed its whole methodology to ensure that it was, abandoning its previously favourable treatment of strong STEM provision.

It seems that methodology improvement has become a convenient euphemism for people-pleasing. What Cambridge really comes second at is convincing the world, or at least QS, that it is the world’s best university. Maybe we should take our reputation, or at least the flashy numbers attached to it, with a pinch of salt.