The LCFI has previously researched the discrimination of AI systems towards older adults

Researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI) have been awarded €1.9m (£1.6m) by German philanthropic foundation Stiftung Mercator to develop a better understanding of how AI can undermine “core human values”.

The donation received by the LCFI is part of a wider package of €3.8m (£3.2m) that will allow the centre and its partners to work with the AI industry to develop anti-discriminatory design principles. The LCFI team will create toolkits and training for AI developers to prevent existing structural inequalities, including those of gender, class, and race, from becoming entrenched within emerging technology.

The new research project, “Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures”, comes as the European Commission negotiates its Artificial Intelligence Act, which will require AI systems to be assessed for their impact on fundamental rights and values.

Dr Stephen Cave, Director of the LCFI, said: “There is a huge knowledge gap. No one currently knows what the impact of these new systems will be on core values, from democratic rights to the rights of minorities, or what measures will help address such threats.”

He continued: “AI technologies are leaving the door open for dangerous and long-discredited pseudoscience,” pointing to facial recognition software that claims to identify “criminal faces”. He argued that such assertions are akin to Victorian phrenology and its associated scientific racism.

The LCFI team will include Cambridge researchers Dr Kerry Mackereth and Dr Eleanor Drage. Mackereth will be working on a project that explores the relationship between anti-Asian racism and AI, while Drage will be looking at the use of AI for recruitment and workforce management.


Drage said: “AI tools are going to revolutionise hiring and shape the future of work in the 21st century. Now that millions of workers are exposed to these tools, we need to make sure that they do justice to each candidate, and don’t perpetuate the racist pseudoscience of 19th-century hiring practices.”

Cave added: “It’s great that governments are now taking action to ensure AI is developed responsibly, but legislation won’t mean much unless we really understand how these technologies are impacting on fundamental human rights and values.”

Other work carried out by the LCFI has included research into the discrimination of AI systems against older adults, as well as helping the BBC commission “less clichéd, more accurate and more representative” AI imagery.