One night early last year, Anne Bouverot was putting the finishing touches on a report when she received an urgent phone call. It was one of French President Emmanuel Macron’s aides offering her the role of his special envoy on artificial intelligence. The unpaid position would entail leading the preparations for the France AI Action Summit—a gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AI’s future. Set to take place on Feb. 10 and 11 at the presidential Élysée Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in May—and the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.’s inaugural AI Safety Summit. After weighing the offer, Bouverot, who was at the time the co-chair of France’s AI Commission, accepted.
But France’s Summit won’t be like the others. While the U.K.’s Summit focused on mitigating catastrophic risks—such as AI helping would-be terrorists create weapons of mass destruction, or future systems escaping human control—France has rebranded the event as the ‘AI Action Summit,’ shifting the conversation toward a wider gamut of risks—including the disruption of the labor market and the technology’s environmental impact—while also keeping the opportunities front and center. “We’re broadening the conversation, compared to Bletchley Park,” Bouverot says. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission president Ursula von der Leyen, German Chancellor Olaf Scholz, and U.S. Vice President J.D. Vance.
Some welcome the pivot as a much-needed correction to what they see as hype and hysteria around the technology’s dangers. Others, including some of the world’s leading AI scientists—among them some who helped shape the field’s fundamental technologies—worry that safety concerns are being sidelined. “The view within the community of people concerned about safety is that it’s been downgraded,” says Stuart Russell, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and the co-author of the authoritative textbook on AI used at over 1,500 universities.
“On the face of it, it looks like the downgrading of safety is an attempt to say, ‘we want to charge ahead, we’re not going to over-regulate. We’re not going to put any obligations on companies if they want to do business in France,’” Russell says.
France’s Summit comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AI’s benefits globally. But if the recent leaps in AI capabilities—and emerging signs of deceptive behavior—are early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.
Bouverot is no stranger to the politics of emerging technology. In the early 2010s, she held the director general position at the Global System for Mobile Communications Association, an industry body that promotes interoperable standards among cellular providers globally. “In a nutshell, that role—which was really telecommunications—was also diplomacy,” she says. From there, she took the helm at Morpho (now IDEMIA), steering the French facial recognition and biometrics firm until its 2017 acquisition. She then co-founded the Fondation Abeona, a nonprofit that promotes “responsible AI.” Her work there led to her appointment as co-chair of France’s AI Commission, where she developed a strategy for how the country could establish itself as a global leader in AI.
Bouverot’s growing involvement with AI was, in fact, a return to her roots. Long before her involvement in telecommunications, in the early 1990s, Bouverot earned a PhD in AI at the Ecole normale supérieure—a top French university that would later produce Arthur Mensch, CEO of French AI frontrunner Mistral AI. After graduating, Bouverot figured AI was not going to have an impact on society anytime soon, so she shifted her focus. “That’s how much of a crystal ball I had,” she joked on Washington AI Network’s podcast in December, acknowledging the irony of her early skepticism, given AI’s impact today.
Under Bouverot’s leadership, safety will remain a feature but, rather than being the summit’s sole focus, it is now one of five core themes. Others include: AI’s use for public good, the future of work, innovation and culture, and global governance. Sessions run in parallel, meaning participants will be unable to attend all discussions. And unlike the U.K. summit, Paris’s agenda does not mention the possibility that an AI system could escape human control. “There’s no evidence of that risk today,” Bouverot says. She says the U.K. AI Safety Summit occurred at the height of the generative AI frenzy, when new tools like ChatGPT captivated public imagination. “There was a bit of a science fiction moment,” she says, adding that the global discourse has since shifted.
Back in late 2023, as the U.K.’s summit approached, signs of a shift in the conversation around AI’s risks were already emerging. Critics dismissed the event as alarmist, with headlines calling it “a waste of time” and a “doom-obsessed mess.” Researchers who had studied AI’s downsides for years felt that the emphasis on what they saw as speculative concerns drowned out immediate harms like algorithmic bias and disinformation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, who was present at Bletchley Park, says the focus on existential risk “was really problematic.”
“Part of the issue is that the existential risk concern has drowned out a lot of the other types of concerns,” says Margaret Mitchell, chief AI ethics scientist at Hugging Face, a popular online platform for sharing open-weight AI models and datasets. “I think a lot of the existential harm rhetoric doesn’t translate to what policy makers can specifically do now,” she adds.
On the U.K. Summit’s opening day, then-U.S. Vice President Kamala Harris delivered a speech in London: “When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him?” she asked, in an effort to highlight the near-term risks of AI over the summit’s focus on the potential threat to humanity. Recognizing the need to reframe AI discussions, Bouverot says the France Summit will reflect the change in tone. “We didn’t make that change in the global discourse,” Bouverot says, adding that the focus is now squarely on the technology’s tangible impacts. “We’re quite happy that this is actually the conversation that people are having now.”
One of the actions expected to emerge from France’s Summit is a new yet-to-be-named foundation that will aim to ensure AI’s benefits are widely distributed, such as by developing public datasets for underrepresented languages, or scientific databases. Bouverot points to AlphaFold, Google DeepMind’s AI model that predicts protein structures with unprecedented precision—potentially accelerating research and drug discovery—as an example of the value of public datasets. AlphaFold was trained on a large public database to which biologists had meticulously submitted findings for decades. “We need to enable more databases like this,” Bouverot says. Additionally, the foundation will focus on developing talent and smaller, less computationally intensive models, in regions outside the small group of countries that currently dominate AI’s development. The foundation will be funded 50% by partner governments, 25% by industry, and 25% by philanthropic donations, Bouverot says.
Her second priority is creating an informal “Coalition for Sustainable AI.” AI is fueling a boom in data centers, which require energy, and often water for cooling. The coalition will seek to standardize measures for AI’s environmental impact, and incentivize the development of more efficient hardware and software through rankings and possibly research prizes. “Clearly AI is happening and being developed. We want it to be developed in a sustainable way,” Bouverot says. Several companies, including Nvidia, IBM, and Hugging Face, have already thrown their weight behind the initiative.
Sasha Luccioni, AI & climate lead at Hugging Face, and a leading voice on AI’s climate impact, says she is hopeful that the coalition will promote greater transparency. She says that currently, calculating the AI’s emissions is made more challenging because often companies do not share how long a model was trained for, while data center providers do not publish specifics on GPU—the kind of computer chips used for running AI—energy usage. “Nobody has all of the numbers,” she says, but the coalition may help put the pieces together.
Given AI’s recent pace of development, some fear severe risks could materialize rapidly. The core concern is that artificial general intelligence, or AGI—a system that surpasses humans in most regards—could potentially outmaneuver any constraints designed to control it, perhaps permanently disempowering humanity. Experts disagree about how quickly—if ever—we’ll reach that technological threshold. But many leaders of the companies seeking to build human-level systems expect to succeed soon. In January, OpenAI’s Altman wrote in a blog post: “We are now confident we know how to build AGI.” Speaking on a panel at Davos last month, Dario Amodei, the CEO of rival AI company Anthropic, said that AI could surpass human intelligence in almost all things as soon as next year.
Those same titans of industry have made no secret of what they believe is at stake. Amodei has previously said he places a 10% to 25% likelihood that AI causes a societal-scale catastrophe. In 2015, months before co-founding OpenAI, Altman said “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” More recently, Altman has downplayed AI’s risks. Meanwhile, a string of safety staff have departed OpenAI, citing concerns over the company’s direction. “Over the past years, safety culture and processes have taken a backseat to shiny products,” former OpenAI staffer Jan Leike posted to X in May, after resigning. He previously co-led a now defunct team charged with pioneering techniques to keep smarter-than-human AI systems under humanity’s control.
Predictions of AGI’s imminent arrival could prove premature. The tech industry is notorious for bold claims and hype cycles crafted to court investors, and progress could still plateau, leaving AGI a distant prospect. Nevertheless, over the past year, AI models have demonstrated notable advances. At the beginning of 2024, the most capable model scored only 2% on a benchmark consisting of real-world software engineering problems; by October, a new model achieved 49%; by December, an astonishing 71.7%. Similar leaps in performance have occurred in competition math and PhD-level science questions, with current state-of-the-art comfortably surpassing 85%. Though benchmarks are imperfect proxies for intelligence, the wider trend is unmistakable. Some observers note that this rapid progress has largely gone unnoticed by the public, perhaps because few people use AI for complex math or science in daily life. Companies including OpenAI, Anthropic, and Google are shifting their focus from user-prompted systems to creating autonomous virtual agents that can operate computers and solve problems independently.
This recent rise in intelligence has been accompanied by new deceptive abilities. In December, an experiment conducted by Apollo Research revealed that when confronted with a fictional scenario where the AI was threatened with shutdown, cutting-edge models, including OpenAI’s recent o1 and a version of Anthropic’s Claude, tried (unsuccessfully) to copy themselves to a new server. When researchers asked the models why they had tried to replicate themselves, they sometimes lied to avoid being caught—often doubling down when confronted with their lies. The same month, a test by Anthropic and the nonprofit Redwood Research, showed a version of Anthropic’s model, Claude, strategically misleading its creators during training. “Exactly the things that people have been writing about for the last 10 years are happening now,” Russell says. “As the systems are increasing their ability to reason, we see that indeed they can figure out how to escape. They can lie about it while they’re doing it, and so on.”
Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, and often referred to as one of the three “Godfathers of AI” for his pioneering work in deep learning, says that while within the business community there is a sense that the conversation has moved on from autonomy risks, recent developments have caused growing concerns within the scientific community. Although expert opinion varies widely on the likelihood, he says the possibility of AI escaping human control can no longer be dismissed as mere science fiction. Bengio led the International AI Safety Report 2025, an initiative modeled after U.N. climate assessments and backed by 30 countries, the U.N., E.U., and the OECD. Published last month, the report synthesizes scientific consensus on the capabilities and risks of frontier AI systems. “There’s very strong, clear, and simple evidence that we are building systems that have their own goals and that there is a lot of commercial value to continue pushing in that direction,” Bengio says. “A lot of the recent papers show that these systems have emergent self-preservation goals, which is one of the concerns with respect to the unintentional loss-of-control risk,” he adds.
At previous summits, limited but meaningful steps were taken to reduce loss-of-control and other risks. At the U.K. Summit, a handful of companies committed to share priority access to models with governments for safety testing prior to public release. Then, at the Seoul AI Summit, 16 companies, across the U.S., China, France, Canada, and South Korea signed voluntary commitments to identify, assess and manage risks stemming from their AI systems. “They did a lot to move the needle in the right direction,” Bengio says, but he adds that these measures are not close to sufficient. “In my personal opinion, the magnitude of the possible transformations that are likely to happen once we approach AGI are so radical,” Bengio says, “that my impression is most people, most governments, underestimate this a whole lot.”
But rather than pushing for new agreements, in Paris the focus will be on streamlining existing ones—making them compatible with existing regulatory frameworks and with each other. “There’s already quite a lot of commitments for AI companies,” Bouverot says. This light-touch stance mirrors France’s broader AI strategy, where homegrown company Mistral AI has emerged as Europe’s leading challenger in the field. Both Mistral and the French government lobbied for softer rules under the E.U.’s comprehensive AI Act. France’s Summit will feature a business-focused event, hosted across town at Station F, France’s largest start-up hub. “To me, it looks a lot like they’re trying to use it to be a French trade fair,” says Andrea Miotti, the executive director of Control AI, a non-profit that advocates for guarding against existential risks from AI. “They’re taking a summit that was focused on safety and turning it away. In the rhetoric, it’s very much like: let’s stop talking about the risks and start talking about the great innovation that we can do.”
The tension between safety and competitiveness is playing out elsewhere, including India, which, it was announced last month, will co-chair France’s Summit. In March, India issued an advisory that pushed companies to obtain the government’s permission before deploying certain AI models, and take steps to prevent harm. It then swiftly reversed course after receiving sharp criticism from industry. In California—home to many of the top AI developers—a landmark bill, which mandated that the largest AI developers implement safeguards to mitigate catastrophic risks, garnered support from a wide coalition, including Russell and Bengio, but faced pushback from the open-source community and a number of tech giants including OpenAI, Meta, and Google. In late August, the bill passed both chambers of California’s legislature with strong majorities but in September it was vetoed by governor Gavin Newsom who argued the measures could stifle innovation. In January, President Donald Trump repealed former President Joe Biden’s sweeping Executive Order on artificial intelligence, which had sought to tackle threats posed by the technology. Days later, Trump replaced it with an Executive Order that “revokes certain existing AI policies and directives that act as barriers to American AI innovation” to maintain U.S. leadership over the technology.
Markus Anderljung, director of policy and research at the AI safety think tank the Centre for the Governance of AI, says that safety could be woven into the France Summit’s broader goals. For example, initiatives to distribute AI’s benefits globally could be linked to assurances from recipient countries to uphold safety best practices. He says he would like to see the list of signatories of the Frontier AI Safety Commitments signed in Seoul expanded—particularly in China, where only one company, Zhipu, has signed. But Anderljung says that for the assurances to be effective, accountability mechanisms must also be strengthened. “Commitments without follow-ups might just be empty words,” he says, “they just don’t matter unless you know what was committed to actually gets done.”
A focus on AI’s existential risks does not have to come at the exclusion of other important issues. “I know that the organizers of the French summit care a lot about [AI’s] positive impact on the global majority,” Bengio says. “That’s a very important mission that I embrace completely.” But he argues the potential severity of loss-of-control risks warrants invoking the precautionary principle—the idea that we should take preventive measures, even absent scientific consensus. It is a principle that has been invoked by U.N. declarations aimed at protecting the environment, and in sensitive scientific domains like human cloning.
But for Bouverot, it is a question of balancing competing demands. “We don’t want to solve everything—we can’t, nobody can,” she says, adding that the focus is on making AI more concrete. “We want to work from the level of scientific consensus, whatever level of consensus is reached.”
In mid-December, in France’s foreign ministry, Bouverot faced an unusual dilemma. Across the table, a South Korean official explained his country’s eagerness to take part in the summit. But days earlier, South Korea’s political leadership had been thrown into turmoil when President Yoon Suk Yeol, who co-chaired the previous summit’s leaders’ session, declared martial law before being swiftly impeached, leaving the question of who would represent the country—and whether officials could attend at all—up in the air.
There’s a great deal of uncertainty—not only over the direction AI will go, but over the degree to which governments will be willing to engage. France’s own government collapsed in early December after Prime Minister Michel Barnier was ousted in a no-confidence vote, marking the first such collapse since the 1960s. And, as Trump, long skeptical of international institutions, returns to the Oval Office, it is yet to be seen how Vice President Vance will approach the Paris meeting.
When reflecting on the technology’s uncertain future, Bouverot finds wisdom in the words of another French pioneer who grappled with a powerful but nascent technology. “I have this quote from Marie Curie, which I really love,” Bouverot says. Curie, the first woman to win a Nobel Prize, revolutionized science with her work on radioactivity. She once wrote: “Nothing in life is to be feared, it is only to be understood.” Curie’s work ultimately cost her life—she died at a relatively young 66 from a rare blood disease, likely caused by prolonged radiation exposure.