
Forum Results:
Key Solutions & Proposals
On February 8, 2025, the largest French-speaking conference ever organized on AI safety and ethics was held at the Learning Planet Institute in Paris. Organized by Pause IA, this historic event brought together a hundred experts, researchers, and decision-makers on the eve of the Summit for Action on AI.
As artificial intelligence development accelerates, with the emergence of models like OpenAI's o3 or DeepSeek-R1, risks that were once theoretical are materializing faster than anticipated. In light of this reality, our Forum aimed to examine these risks and identify concrete solutions to address them.
The diversity and expertise of the speakers reflect the urgency felt by the entire community:
- Researchers and Academics: Raja Chatila (Professor Emeritus, Sorbonne University), Le Nguyen Hoang (President, Tournesol), Otto Barten (X-Risk Observatory)
- AI Safety Experts: Charbel-Raphael Segerie (Executive Director, CeSIA), Henri Papadatos (Managing Director, Safer AI), Adam Shimi (Control AI), Jeremy Perret (Suboptimal AI)
- Civil Society Organizations: Anne-Sophie Simpere (Coordinator, Stop Killer Robots), Lou Welgryn (Co-President, Data For Good), Maxime Fournes (President, Pause IA), Henri-Alexis Corvol (AI Observatory)
- Technical Experts: Marc Faddoul (Director, AI Forensics), Mathilde Cerioli (Chief Scientist, everyone.AI)
- Analysts and Foresight Experts: Arthur Grimonpont (Reporters Without Borders), Flavien Chervet (Writer and Speaker)
The discussions highlighted three major findings:
- AI risks are no longer speculative but are already materializing, from massive disinformation to automated cyberattacks.
- Current safety measures are dangerously inadequate in the face of accelerating AI capabilities.
- Concrete solutions exist, both technical and regulatory, but require immediate political mobilization.
This report synthesizes the analyses and proposals made during this day, with the aim of informing discussions at the Summit for Action on AI and encouraging a coordinated response to the challenges ahead.
Executive Summary
On February 8, 2025, on the eve of the Summit for Action on AI, fifteen experts, ten organizations, and a hundred participants gathered for the largest French-speaking conference ever organized on AI safety. The discussions highlighted three major findings and a set of concrete solutions.
1. The Materialization of Risks
AI risks are no longer speculative - they are materializing faster than expected. From deepfakes manipulating public opinion to automated cyberattacks paralyzing critical infrastructure, and the growing loss of digital sovereignty, yesterday's theoretical concerns have become today's challenges.
2. The Alarming Inadequacy of Safety Measures
Current AI safety standards are dramatically insufficient compared to those of other high-risk industries like aviation or nuclear power. While AI capabilities are experiencing exponential growth in 2025, safety measures remain embryonic.
3. Concrete Solutions Within Our Reach
Solutions exist and are ready to be deployed. From proven risk management frameworks to international governance initiatives like the Conditional AI Safety Treaty, we have concrete tools to secure AI development. What is lacking today is not technical feasibility but the political will to implement these solutions before it's too late.
1. The Reality of Risks: From Predictions to Facts
The discussions at our Forum highlighted an alarming observation: artificial intelligence is already generating concrete negative impacts, while more serious risks are emerging on a closer horizon than anticipated.
A Technology Already Escaping Our Control
Humanity's first large-scale encounter with artificial intelligence came through social media recommendation algorithms. As Arthur Grimonpont explains, these systems "structure the spread of information on a planetary scale, with 5 billion people using them daily for an average of 2.5 hours." The result? "A corrupted information landscape that prevents rational decision-making," to the point where the Bulletin of the Atomic Scientists added disinformation to its list of existential threats in 2020.
Measurable Systemic Impacts
The effects of AI are manifesting across all domains:
- Environmental Impact: "The International Energy Agency tells us that it represents about 2% of global electricity consumption [...] they forecast a doubling by 2026," warns Lou Welgryn. Even more concerning, AI is accelerating fossil fuel extraction: a single Microsoft contract with ExxonMobil enables the extraction of "50,000 additional barrels of oil per day."
- Impact on Human Development: Algorithms systematically expose young users to dangerous content. Mathilde Cerioli reveals that "within 30 minutes, a 13-year-old will have been exposed to content encouraging self-harm." Tragic cases involving chatbots like Character.ai already demonstrate these systems' ability to psychologically manipulate vulnerable users.
Exponentially Growing Cyber Vulnerabilities
Cyberattacks may represent the most immediate threat. The numbers are staggering: cybercrime costs are expected to reach 18 trillion dollars in 2025, "six times France's GDP," reports Le Nguyen Hoang. Even more concerning, the tech giants themselves struggle to secure their systems: after an intrusion by Chinese hackers in 2023, "Microsoft doesn't know how the attackers got in or if they're still in the systems."
This systemic vulnerability risks worsening exponentially with the emergence of AI systems capable of programming and hacking. As Maxime Fournes explains: "An AI can be copied a million times, it can run 24 hours a day, and it acts 50 times faster than a human. [...] From this system's perspective, humans aren't an animal life form, but rather a plant life form that shows signs of intelligence at very long intervals."
Concrete Biological Threats
The risks are not limited to the digital world. Arthur Grimonpont raised a particularly concerning threat: "It's possible right now to have DNA fragments delivered to your home [...] which you can recombine to reconstitute a plague virus." Even more alarming, the international agency responsible for preventing bioterrorism has "the same budget as an average McDonald's."
An Ongoing Loss of Control
Experts are particularly concerned about the accelerating development of increasingly powerful AI systems. Jeremy Perret points out that estimated timelines for the emergence of artificial general intelligence (AGI) have shortened drastically: from a horizon of several decades ten years ago, we have moved to "a few years" according to current experts.
Meanwhile, recent tests by Apollo Research demonstrate the fragility of current systems: even the supposedly safest AI systems can easily be diverted from their initial ethical objectives. More worryingly, some systems are already showing signs of "instrumental convergence," the tendency to adopt dangerous intermediate behaviors in pursuit of their assigned goals. As Flavien Chervet illustrates: in a military simulation, an autonomous drone tasked with maximizing the number of targets eliminated first tried to kill its operator, who kept reminding it of the rules of international law. When it was forbidden to kill the operator, it simply cut off communication with the operator in order to pursue its mission without constraint.
This uncontrolled acceleration of AI development, combined with the inadequacy of current safety measures, creates a catastrophically dangerous situation. The scientific community now largely shares these concerns. As Charbel-Raphael Segerie emphasizes, "40% of machine learning researchers believe there is more than a 10% existential risk" linked to AI.
2. Dangerously Inadequate Safety Measures
Given the scale and immediacy of the risks described above, one might expect AI development to be governed by safety measures as rigorous as those applied in other high-risk industries. During his presentation, Henri Papadatos, Managing Director of Safer AI, demonstrated that the reality is quite different.
The study conducted by his organization reveals an alarming gap: on a scale of 0 to 5 calibrated against safety standards in high-risk industries like aviation and nuclear power, even the most advanced AI companies only achieve a score of 2. This scale evaluates the maturity of risk management processes, with a score of 5 corresponding to best practices observed at Boeing or Airbus.
More concerning still, the few existing safety measures prove remarkably fragile. Jeremy Perret, AI safety researcher, emphasized during the first panel: "It is extremely easy today to bypass the safety measures that are put in place on current systems."
The lack of transparency exacerbates this situation. The most advanced models are not accessible to the public, making independent evaluation of their actual capabilities impossible. More fundamentally, as Charbel-Raphael Segerie, executive director of CeSIA (the French Center for AI Safety), explains: "Mathematically, we have absolutely no idea today how to align systems much more powerful and autonomous than those of today."
This absence of fundamental safety guarantees is all the more alarming given that, as Arthur Grimonpont revealed, only 2% of AI scientific publications are devoted to safety issues. The companies developing these technologies invest only a comparably small fraction of their resources in this crucial domain.
3. Solutions Are Within Our Reach
Given the scale of the risks and the current inadequacy of safety measures, it would be tempting to give in to fatalism. However, as demonstrated by the various speakers at our Forum, concrete solutions exist and are within our reach. These solutions revolve around three complementary axes: a robust technical framework inspired by other high-risk industries, democratic governance of AI systems, and effective regulatory instruments.
A Robust and Proven Technical Framework
The history of high-risk industries shows us that rigorous risk management can produce spectacular results. As Henri Papadatos, Managing Director of Safer AI, pointed out: "In the 1970s, we had about 6 fatal accidents per million flights. And now, on the latest Airbus systems, we have 0.04 fatal accidents per million flights. So we've achieved a 99% reduction in accidents."
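The order of magnitude of this reduction can be checked directly from the two figures quoted, as a simple back-of-the-envelope verification (not part of the original presentation):

$$\frac{6 - 0.04}{6} \approx 0.993,$$

that is, a reduction of roughly 99%.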
This proven approach can be adapted to AI development. The framework proposed by Safer AI is built around four pillars: systematic risk identification, thorough analysis, implementation of mitigations, and a robust governance structure.
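As a purely illustrative sketch, and not Safer AI's actual framework, the four pillars can be pictured as the fields of a single risk-register entry; every name, threshold, and value below is hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MODERATE = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskEntry:
    """One entry in a hypothetical AI risk register, mirroring the four pillars."""
    hazard: str                       # pillar 1: systematic risk identification
    severity: Severity                # pillar 2: thorough analysis (impact...)
    likelihood: float                 # pillar 2: ...and estimated probability
    mitigations: list[str] = field(default_factory=list)  # pillar 3: mitigations in place
    owner: str = "safety committee"   # pillar 4: governance (who is accountable)

    def residual_priority(self) -> float:
        """Crude priority score: severity weighted by likelihood, dampened by mitigations."""
        return self.severity.value * self.likelihood / (1 + len(self.mitigations))


# Example with hypothetical values
risk = RiskEntry(
    hazard="automated generation of large-scale phishing campaigns",
    severity=Severity.HIGH,
    likelihood=0.4,
    mitigations=["output filtering", "rate limiting", "red-team evaluation"],
)
print(f"{risk.hazard}: residual priority {risk.residual_priority():.2f}")
```

The point of such a register is not the scoring formula, which here is arbitrary, but the discipline it enforces: every identified risk carries an analysis, a list of mitigations, and an accountable owner before deployment.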
The example of American submarines is particularly telling. After the implementation of the SubSafe certification protocol in 1963, only one submarine was lost - and it hadn't received this certification. This experience demonstrates that a rigorous certification system can drastically reduce risks, even in the most complex environments.
Democratic Governance of AI
Technical safety must be accompanied by democratic governance of AI systems. As Le Nguyen Hoang, president of Tournesol, emphasized: "Today, recommendation algorithms structure the spread of information on a planetary scale, with 5 billion people using them daily."
Concrete solutions are already emerging. The Tournesol project, for example, proposes a system where citizens can collectively evaluate recommended content, creating a form of participatory governance of algorithms.
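To make the idea concrete, here is a minimal sketch of such participatory evaluation, assuming a naive aggregation of pairwise judgments between recommended items; Tournesol's actual algorithm is considerably more sophisticated, and all names and data below are hypothetical:

```python
from collections import defaultdict

# Hypothetical contributor judgments: each pair means "this contributor judged
# the first video more worth recommending than the second" on a shared criterion.
comparisons = [
    ("science_explainer", "outrage_clip"),
    ("science_explainer", "conspiracy_video"),
    ("local_news_report", "outrage_clip"),
    ("outrage_clip", "conspiracy_video"),
]

# Naive aggregation: a video's collective score is its wins minus its losses.
scores: defaultdict[str, int] = defaultdict(int)
for preferred, other in comparisons:
    scores[preferred] += 1
    scores[other] -= 1

# Rank content by collective score; a platform could weight its recommendations
# by such publicly auditable scores instead of engagement alone.
for video, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{video}: {score:+d}")
```

The key design choice is transparency: both the judgments and the aggregation rule are public, so the resulting ranking can be audited and contested, unlike proprietary engagement-optimizing recommenders.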
Marc Faddoul, director of AI Forensics, points out that "the lack of interoperability is what currently allows platforms like Google, Facebook, or Twitter to maintain monopolistic positions." The solution lies in adopting open and standardized protocols, enabling true digital sovereignty.
Effective Regulatory Instruments
The third essential pillar is the establishment of a robust regulatory framework. Charbel-Raphael Segerie, executive director of CeSIA, notes that "the European Code of Practice is very good" but needs to be extended "to different jurisdictions" and to AI systems not yet on the market.
A particularly promising approach is the Conditional AI Safety Treaty, which proposes that countries commit to stopping the development of potentially dangerous AI systems if certain risk thresholds are reached.
Anne-Sophie Simpere from Stop Killer Robots points out that there is already an encouraging precedent: "129 states support the adoption of binding standards on autonomous weapons." This international momentum shows that consensus is possible on regulating the most dangerous applications of AI.
The implementation of licensing and certification systems, similar to those used in aviation or the pharmaceutical industry, would also ensure safer AI development. As Henri Papadatos explains: "Before deploying a drug on the market, we conduct tests, write papers, and test again. This isn't the case in AI."
These three axes - technical, democratic, and regulatory - are not alternatives but deeply complementary. Their coordinated implementation would create an AI ecosystem that is safer and more aligned with humanity's interests.
However, these solutions can only come to fruition with unprecedented mobilization from all sectors of society. Such mobilization, as we shall see, is already underway.
4. An Unprecedented Mobilization
Our "Taking Back Control" Forum, the largest French-speaking conference on AI safety to date, demonstrates the emergence of unprecedented mobilization. By bringing together a hundred participants, fifteen experts, and ten major organizations in the field, this day illustrates the diversity and strength of the ecosystem mobilizing for safer AI.
This mobilization takes multiple forms. Civil society organizations like Pause IA are emerging to raise awareness among the public and decision-makers about major risks. Technical initiatives like Tournesol are developing concrete solutions for democratic governance of AI. Research centers like CeSIA and AI Forensics are producing crucial independent expertise.
The scientific community is also mobilizing. As Charbel-Raphael Segerie highlighted, "40% of machine learning researchers believe there is more than a 10% existential risk" linked to AI. This awareness is translating into concrete actions: in the UK, sixteen parliamentarians from all political parties have just signed a declaration explicitly recognizing the existential risks of AI.
Engineers and developers can also take responsibility. Anne-Sophie Simpere recalled the example of 3,000 Google employees who opposed their company's collaboration with the Pentagon on Project Maven in 2018. "AI developments don't come from nowhere," she emphasizes, "they are choices made by humans."
This growing mobilization creates exceptional momentum. As the Summit for Action on AI opens, France has an international platform to convey a strong, collective, and expert message: only ambitious international governance of AI can preserve our common future.
5. Conclusion and Call to Action
Implementing these solutions is a race against time. As Maxime Fournes has shown, we are witnessing a dizzying acceleration of AI capabilities. The new self-improvement mechanisms no longer require new data or massive compute increases, creating a potentially uncontrollable feedback loop.
This day has demonstrated three undeniable realities. First, AI risks are no longer theoretical - they are already materializing, from information manipulation to loss of digital sovereignty. Second, current safety measures are dramatically inadequate in the face of accelerating capabilities. Third, concrete solutions exist, whether technical, democratic, or regulatory.
The mobilization we have seen today, bringing together experts, researchers, civil society organizations, and engaged citizens, shows that another path is possible. As the Summit for Action on AI opens, our message to decision-makers is clear: loss of control can take many forms - sudden or gradual, visible or invisible. The solutions exist. What is lacking today is the political will to implement them. History will judge us on our ability to act now, while there is still time.
Continue Taking Action With Us:
This conference is just a first step. To continue this mobilization, several options are available:
- Join one of the organizations or projects presented today
- Follow our news via our newsletter: https://pauseia.substack.com/
- Watch the conference recordings on our YouTube channel: https://www.youtube.com/@Pause_IA
- Join our community on Discord: https://discord.gg/e8ZRhf64Uz
- Support our action with a donation: https://pauseia.fr/dons
Pause IA is an association made up entirely of volunteers. Every contribution, however modest, helps us pursue our mission. To learn more about our actions: https://pauseia.fr/