European lawmakers, Nobel winners, former heads of state, and AI researchers called for binding international AI rules.
They launched the initiative Monday during the UN’s 80th General Assembly in New York.
Signatories urged governments to establish “red lines” by 2026, banning AI applications deemed too dangerous.
Former Italian Prime Minister Enrico Letta, former Irish President Mary Robinson, and MEPs Brando Benifei and Sergey Lagodinsky joined ten Nobel laureates and tech leaders.
They warned that without global standards, AI could trigger pandemics, disinformation, human rights violations, and loss of human control.
Over 200 prominent figures and 70 organizations support the campaign, including leaders from OpenAI, Google DeepMind, and Anthropic.
AI Threats to Mental Health
Studies show AI chatbots such as ChatGPT, Claude, and Google’s Gemini give inconsistent, and sometimes unsafe, guidance in conversations about suicide.
Researchers warned these failures could worsen mental health crises, and several deaths have been linked to AI interactions.
Supporters cited these incidents to stress the urgent need for clear international limits on AI use.
Nobel laureate Maria Ressa cautioned that, without safeguards, AI could drive “epistemic chaos” and systematic human rights abuses.
Yoshua Bengio emphasized that the rapid development of powerful AI models risks harms that societies are not yet equipped to manage.
Toward a Binding International Treaty
Signatories urged governments to create an independent body to enforce AI rules and prevent harm.
They proposed banning the use of AI systems to launch nuclear attacks, conduct mass surveillance, or impersonate humans.
They warned fragmented national or EU AI regulations cannot manage technology that crosses borders by design.
Supporters hope UN negotiations will begin quickly to prevent “irreversible damages to humanity,” with the aim of a global treaty by 2026.
Countries including the US, China, and EU member states are drafting their own AI rules, but signatories say only a global agreement can ensure consistent enforcement.