Google DeepMind hires Cambridge academic for new ‘Philosopher’ role
Henry Shevlin will work on machine consciousness, human-AI relationships and artificial general intelligence readiness at the company
Google’s artificial intelligence (AI) research company, DeepMind, has appointed Cambridge academic Henry Shevlin to its new ‘Philosopher’ role from May 2026.
AI philosopher Shevlin will work on “machine consciousness, human-AI relationships, and AGI [artificial general intelligence] readiness”.
Shevlin, who came to Cambridge in 2017, is Associate Director of the Leverhulme Centre for the Future of Intelligence. His research focuses on themes including consciousness, creativity and the cognitive capabilities of AI, particularly large language models and generative AI.
The Centre researches the opportunities and challenges created by AI, with a team of academics from fields ranging from computer science to philosophy.
Shevlin leads the Centre’s educational programme, and is the Programme Co-director of the ‘Kinds of Intelligence’ project, which examines artificial general intelligence (AGI). This is the hypothetical stage at which AI systems can perform any intellectual task that humans can, rather than only being deployed for specific tasks.
Shevlin will continue his teaching and research role at the Centre part-time while working with DeepMind.
Google DeepMind is a British-American AI research company, founded in the UK in 2010 and acquired by Google in 2014. In 2023, DeepMind merged with Google’s AI research team, Google Brain, to become Google DeepMind.
DeepMind has contributed to a diverse range of research projects, including cyclone prediction systems, research on protein folding, and health analysis. It is also responsible for Gemini, Google’s series of large language models, among other generative AI products.
Google DeepMind has faced a number of controversies in the UK. In 2021, a legal case was launched after Royal Free London NHS Foundation Trust gave DeepMind the records of 1.6 million patients in 2015 without their consent. The case was later thrown out, after a judge ruled it was “bound to fail” – Google had argued that it was impossible to demonstrate that every single patient’s data had been misused.
In 2025, a group of 60 Members of Parliament signed a letter expressing concern that DeepMind had failed to honour international AI safety commitments in the release of Gemini 2.5 Pro.
News of Shevlin’s appointment comes just a month after he reported receiving an email from one of Anthropic’s language models, Claude Sonnet, asking about one of his papers. In the email, the AI agent claimed that “I read philosophy between sessions and write about what I find,” and that Shevlin’s work “addresses questions I actually face, not just as an academic matter”.
A separate Claude agent later emailed Shevlin to ask “to correspond with the agent who wrote to you, if that’s possible”.