Artificial intelligence is becoming ever more widespread in recruiting (Louis Ashworth)

When Carla Hill, a final-year English undergraduate at Selwyn, recently applied to Unilever’s ‘Future Leaders’ graduate programme, she was troubled by the use of online games in the application.

“I don’t always feel you are being taken seriously,” she told Varsity.

She says she “doesn’t particularly trust” the scientific backing, which “seems very generalised”.

Unilever’s Future Leaders programme receives 250,000 applications annually for only 800 places, and the company fields around 1.8 million applications a year globally for just 30,000 jobs. Unilever claims that its use of AI in recruitment saved 100,000 hours of candidate-assessment time and $1 million last year.

But her concerns are common at a time when artificial intelligence (AI) – broadly, the ability of computers to perform tasks that normally require human intelligence – is becoming ever more widespread in recruitment.

A report [behind paywall] by the Institute of Student Employers last summer, surveying 153 global companies, found that 59% now use psychometric tests and 10% use gamified assessments.

The top five sectors Cambridge students graduate into are health and research, banking, manufacturing and marketing, teaching, and service industries, according to the latest data available.

David Ainscough, Deputy Director of the Cambridge Careers Service, says that “particularly with big recruiters, managing the volume of applications is one argument for automating the process”, as is the removal of human error.

“One of the major complaints we have is firms just don’t get back to students quickly enough … the fact you can get a very quick response [with AI] is generally well regarded”.

But he cautions “the lack of familiarity coupled with the uncertainty of what being tested for … have led to quite a lot of concerns”.

Proponents of AI argue that it can make the process up to three times more efficient, as well as improving retention rates and the application experience.

One example of a firm using ‘gamified’ assessments is accounting giant PwC, which hired over 38,000 graduates globally last year and, in the UK, whittles down around 40,000 applicants a year to 1,500 [behind paywall].

After the CV and initial screening, candidates play games, officially termed ‘behaviour-based assessments’, which test for suitable personality traits.

Robert Newry, CEO and Co-Founder of Arctic Shores, which runs the tests, explained that the firm identifies the 5-10 “most important” traits from existing employees and managers – “typically” sampling over 100 – and matches them against around 12,000 data points taken from each candidate during the assessment.

Juliet Merelie, a student from Newnham who secured an internship last year, praised the flexibility of using an app and said it “felt less formal and pre-rehearsed” than normal assessments, while adding that “even expert psychometric assessments aren’t proven to be able to accurately capture personality”.


Concerns have been raised that the use of AI may neglect individuality, accuracy and data rights – and lack transparency. One student at King’s said: “I don’t know if it produces any accurate metrics about how good an employee someone will be. People who are used to playing computer games … are at a much greater advantage of doing better”.

She added such assessments can also be “more stressful than normal tests [due to] the games testing reaction times, which got a bit overwhelming”.

Another at Selwyn argued “applicants have little room to demonstrate individuality, by possessing a trait which isn’t coded into the system, and also means companies limit themselves to a cognitively homogeneous set of successful applicants”.

Mr Newry insisted “the role of our experienced occupational psychologists is to review the data for its context, suitability and breadth”, stating his firm “conducted 18 months of research and validation on all the tasks … to the quality standards of organisations like the British Psychological Society”.

The Selwyn student also worried “organisations can very quickly accumulate information and create a complex applicant profile” based on the assessments, a concern shared by Ahmed Zaidi, Chief Technology Officer and Co-Founder of Catalyst AI, a Cambridge-based AI consultancy.

“The raw data is something definitely needing to be discussed. Who owns that data, how it’s sorted and used is a question that is relevant and pertinent,” he said.

PwC’s Senior Manager of media relations, Ellie Raven, said she “can understand the concerns” but stressed that PwC is “whiter than white” on the relevant EU law, the General Data Protection Regulation (GDPR).

She added, “We use some of it for things like seeing if we are getting the [desired] diverse range of applicants … it’s definitely not transferred to any third parties or anything”.

GDPR imposes no fixed time limit for deleting graduate applicants’ data, and PwC’s website states that it retains the data of successful applicants – and keeps that of unsuccessful applicants “for a reasonable period of time to deal with any matter which may arise in connection with your application … and for our legitimate business purposes (for example, to make sure we do not contact an individual about a role they have already applied for)”.

Further still, a recent report found that only 28% of companies are fully GDPR-compliant.

Zaidi also commented that “we won’t know until a few years down the line when someone bothers to do a controlled experiment on whether these games even work” in the longer term.

“Unless academics decide to venture down this path and research it, it’s very unlikely that you’ll see any research,” he argues.

“Companies will say it’s proprietary and will not reveal any of the information on these models”, meaning we may only know about any issues as models become outdated.

Students are equally worried about this. One recent applicant to a ‘Big Three’ consultancy firm said he’d “really appreciate more transparency in the feedback process.”