This story is part of the 2025 TIME100. Read Jennifer Doudna’s tribute to Demis Hassabis here.
Demis Hassabis learned he had won the 2024 Nobel Prize in Chemistry just 20 minutes before the world did. The CEO of Google DeepMind, the tech giant’s artificial intelligence lab, received a phone call with the good news at the last minute, after a failed attempt by the Nobel Foundation to find his contact information in advance. “I would have gotten a heart attack,” Hassabis quips, had he learned about the prize from the television. Receiving the honor was a “lifelong dream,” he says, one that “still hasn’t sunk in” when we meet five months later.
Hassabis received half of the award alongside a colleague, John Jumper, for the design of AlphaFold: an AI tool that can predict the 3D structure of proteins using only their amino acid sequences, something Hassabis describes as a “50-year grand challenge” in the field of biology. Released freely by Google DeepMind for the world to use five years ago, AlphaFold has revolutionized the work of scientists toiling on research as varied as malaria vaccines, human longevity, and cures for cancer, allowing them to model protein structures in hours rather than years. The Nobel Prizes in 2024 were the first in history to recognize the contributions of AI to the field of science. If Hassabis gets his way, they won’t be the last.
AlphaFold’s impact may have been broad enough to win its creators a Nobel Prize, but in the world of AI, it is seen as almost hopelessly narrow. It can model the structures of proteins but not much else; it has no understanding of the wider world, cannot carry out research, nor can it make its own scientific breakthroughs. Hassabis’s dream, and the wider industry’s, is to build AI that can do all of those things and more, unlocking a future of almost unimaginable wonder. All human diseases will be a thing of the past if this technology is created, he says. Energy will be zero-carbon and free, allowing us to transcend the climate crisis and begin restoring our planet’s ecosystems. Global conflicts over scarce resources will dissipate, giving way to a new era of peace and abundance. “I think some of the biggest problems that face us today as a society, whether that’s climate or disease, will be helped by AI solutions,” Hassabis says. “I’d be very worried about society today if I didn’t know that something as transformative as AI was coming down the line.”
This hypothetical technology, known in the industry as Artificial General Intelligence, or AGI, had long been seen as decades away. But the fast pace of breakthroughs in computer science over the past few years has led top AI scientists to radically revise their expectations of when it will arrive. Hassabis predicts AGI is somewhere between five and 10 years away, a fairly pessimistic view when judged by industry standards. OpenAI CEO Sam Altman has predicted AGI will arrive within Trump’s second term, while Anthropic CEO Dario Amodei says it could come as early as 2026.
Partially underlying these different predictions is a disagreement over what AGI means. OpenAI’s definition, for instance, is rooted in cold business logic: a technology that can perform most economically valuable tasks better than humans can. Hassabis has a different bar, one focused instead on scientific discovery. He believes AGI would be a technology that could not only solve existing problems, but also come up with entirely new explanations for the universe. A test for its existence might be whether a system could come up with general relativity using only the information Einstein had access to; or whether it could not only solve a longstanding conjecture in mathematics, but theorize an entirely new one. “I identify myself as a scientist first and foremost,” Hassabis says. “The whole reason I’m doing everything I’ve done in my life is in the pursuit of knowledge and trying to understand the world around us.”
In an AI industry whose top ranks are populated mostly by businessmen and technologists, that identity sets Hassabis apart. Yet he must still operate in a system where market logic is the driving force. Creating AGI will require hundreds of billions of dollars’ worth of investment, dollars that Google is happily plowing into Hassabis’s DeepMind unit, buoyed by the promise of a technology that can do anything and everything. Whether Google will ensure that AGI, if it comes, benefits the world remains to be seen; Hassabis points to the decision to release AlphaFold for free as a symbol of its benevolent posture. But Google is also a company that must legally act in the best interests of its shareholders, and consistently releasing expensive tools for free is not a profitable long-term strategy. The financial promise of AI, for Google and for its competitors, lies in controlling a technology capable of automating much of the labor that drives the more than $100 trillion global economy. Capture even a small fraction of that value, and your company will become one of the most profitable the world has ever seen. Good news for shareholders, but bad news for ordinary workers who may find themselves suddenly unemployed.
So far, Hassabis has successfully steered Google’s multibillion-dollar AI ambitions toward the kind of future he wants to see: one focused on scientific discoveries that, he hopes, will lead to radical social uplift. But will this former child chess prodigy be able to maintain his scientific idealism as AI reaches its high-stakes endgame? His track record shows one reason to be skeptical.
When DeepMind was acquired by Google in 2014, Hassabis insisted on a contractual firewall: a clause explicitly prohibiting his technology from being used for military purposes. It was a red line that reflected his vision of AI as humanity’s scientific savior, not a weapon of war. But several corporate restructures later, that protection has quietly disappeared. Today, the same AI systems developed under Hassabis’s watch are being sold, via Google, to militaries such as Israel’s, whose campaign in Gaza has killed tens of thousands of civilians. When pressed, Hassabis denies that this was a compromise made in order to maintain his access to Google’s computing power and thus realize his dream of developing AGI. Instead, he frames it as a pragmatic response to geopolitical reality, saying DeepMind changed its stance after acknowledging that the world had become “a much more dangerous place” in the last decade. “I think we can’t take for granted anymore that democratic values are going to win out,” he says. Whether or not this justification is honest, it raises an uncomfortable question: If Hassabis couldn’t maintain his ethical red line when AGI was just a distant promise, what compromises might he make when it comes within touching distance?
To get to Hassabis’s dream of a utopian future, the AI industry must first navigate its way through a dark forest full of monsters. Artificial intelligence is a dual-use technology like nuclear energy: it can be used for good, but it could also be terribly dangerous. Hassabis spends much of his time worrying about risks, which generally fall into two different buckets. One is the possibility of systems that can meaningfully enhance the capabilities of bad actors to wreak havoc in the world; for example, by endowing rogue states or terrorists with the tools they need to synthesize a deadly virus. Preventing risks like that, Hassabis believes, means carefully testing AI models for dangerous capabilities, and only gradually releasing them to more users with effective guardrails. It means keeping the “weights” of the most powerful models (essentially their underlying neural networks) out of the public’s hands altogether, so that models can be withdrawn from public use if dangers are discovered after release. That is a safety strategy that Google follows but which some of its competitors, such as DeepSeek and Meta, do not.
The second category of risks may sound like science fiction, but they are taken seriously inside the AI industry as model capabilities advance. These are the risks of AI systems acting autonomously, such as a chatbot deceiving its human creators, or a robot attacking the person it was designed to help. Language models like DeepMind’s Gemini are essentially grown from the ground up, rather than written by hand like old-school computer programs, and so computer scientists and users are constantly discovering ways to elicit new behaviors from what are best understood as highly mysterious and complex artifacts. The question of how to ensure that they always behave and act in ways that are “aligned” to human values is an unsolved scientific problem. Early signs of misaligned behaviors, like strategic lying, have already been identified by researchers working with today’s language models. Those problems are only likely to become more acute as models get better. “How do we ensure that we can stay in charge of those systems, control them, interpret what they’re doing, understand them, and put the right guardrails in place that are not movable by very highly capable self-improving systems?” Hassabis says. “That is an extremely difficult challenge.”
It’s a devilish technical problem, but what really keeps Hassabis up at night are the political coordination challenges that accompany it. Even if well-meaning companies can build safe AIs, that doesn’t by itself stop the creation and proliferation of unsafe ones. Preventing that will require international collaboration, something that is becoming increasingly difficult as western alliances fray and geopolitical tensions between the U.S. and China rise. Hassabis has played a significant role in the three AI summits held by world governments since 2023, and says he would like to see more of that kind of cooperation. He says the U.S. government’s export controls on AI chips, intended to prevent China’s AI industry from surpassing Silicon Valley’s, are “fine,” but he would prefer to avoid political choices that “end up in an antagonistic kind of situation.”
He may be out of luck. As both the U.S. and China have woken up in recent years to the potential power of AGI, the climate of global cooperation, which reached a high watermark with the first AI Safety Summit in 2023, has given way to a new kind of realpolitik. In this new era, with nations racing to militarize AI systems and build up stockpiles of chips, and with a new cold war brewing between the U.S. and China, Hassabis still holds out hope that competing nations and companies can find ways to set aside their differences and cooperate, at least on AI safety. “It’s in everybody’s self-interest to make sure that goes well,” he says.
Even if the world can find a way to safely navigate through the geopolitical turmoil of AGI’s arrival, the question of labor automation will rear its head. When governments and companies no longer rely on humans to generate their wealth, what leverage will citizens have left to demand the ingredients of democracy and a comfortable life? AGI might create abundance, but it won’t dispel the incentives for companies and states to amass resources and compete with rivals. Hassabis admits he is better at forecasting technological futures than social and economic ones; he says he wishes more economists would take the possibility of near-term AGI seriously. Still, he thinks it is inevitable we will need a “new political philosophy” to organize society in this world. Democracy, he says, “is not a panacea, by any means,” and may have to give way to “something better.”
Automation, meanwhile, is already on the horizon. In March, DeepMind announced Gemini 2.5, the latest version of its flagship AI model, which outperforms rival models made by OpenAI and Anthropic on many popular metrics. Hassabis is currently hard at work on Project Astra, a DeepMind effort to build a universal digital assistant powered by Gemini. That work, he says, is not intended to hasten labor disruptions, but instead is about building the necessary scaffolding for the kind of AI that he hopes will one day make its own scientific discoveries. Still, as research into these AI “agents” progresses, Hassabis says, expect them to be able to carry out increasingly complex tasks independently. (An AI agent that can meaningfully automate the job of further AI research, he predicts, is “a few years away.”) For the first time, Google is also now using these virtual brains to control robotic bodies: in March the company announced a Gemini-powered android robot that can carry out embodied tasks like playing tic-tac-toe, or making its human a packed lunch. The tone of the video announcing Gemini Robotics was friendly, but its connotations were not lost on some YouTube commenters: “Nothing to worry [about,] humanity, we’re only developing robots to do tasks a 5 year old can do,” one wrote. “We’re not working on replacing humans or creating robot armies.”
Hassabis acknowledges the social impacts of AI are likely to be significant. People will have to learn how to use new AI models, he says, in order to excel professionally in the future and not risk getting left behind. But he is also confident that if we eventually build AGI capable of doing productive labor and scientific research, the world it ushers into existence will be abundant enough to ensure a substantial increase in quality of life for everybody. “In the limited-resource world which we’re in, things ultimately become zero-sum,” Hassabis says. “What I’m excited about is a world where it’s not a zero-sum game anymore, at least from a resource perspective.”
Five months after his Nobel Prize, Hassabis’s journey from chess prodigy to Nobel laureate now leads toward an uncertain future. The stakes are no longer just scientific recognition, but potentially the fate of human civilization. As DeepMind’s machines grow more capable, as corporate and geopolitical competition over AI intensifies, and as the economic impacts loom larger, Hassabis insists that we may be on the cusp of an abundant economy that benefits everyone. But in a world where AGI could bring unprecedented power to those who control it, the forces of business, geopolitics, and technological power are all bearing down with increasing pressure. If Hassabis is right, the turbulent decades of the early 21st century could give way to a shining utopia. If he has miscalculated, the future could be darker than anyone dares imagine. One thing is for sure: in his pursuit of AGI, Hassabis is playing the highest-stakes game of his life.