Khabor Wala Desk
Published: 23rd September 2025, 9:42 AM
On Monday, technology experts, politicians, and Nobel laureates called on governments worldwide to swiftly establish “red lines” for artificial intelligence (AI)—boundaries deemed too dangerous for the technology to cross.
More than 200 prominent figures, including 10 Nobel Prize winners and scientists from Anthropic, Google DeepMind, Microsoft, and OpenAI, endorsed a letter released at the start of the United Nations General Assembly session.
The letter emphasised the dual nature of AI: “AI holds immense potential to advance human wellbeing, yet its current trajectory presents unprecedented dangers. Governments must act decisively before the window for meaningful intervention closes.”
The campaign’s creators argue that internationally agreed “red lines” are essential to prevent catastrophic misuse of AI.
Examples of Proposed AI Red Lines
| Risk Category | Examples of Prohibited Uses |
| --- | --- |
| Military / Lethal Uses | Command of nuclear arsenals; lethal autonomous weapons systems |
| Surveillance & Social Control | Mass surveillance; social scoring of citizens |
| Cybersecurity & Deception | Cyberattacks; impersonation of individuals |
| Societal & Global Threats | Mass disinformation; manipulation of vulnerable populations, including children |
The letter calls for these red lines to be codified by governments by the end of next year, citing the rapid pace of AI development.
Signatories warned of multiple escalating risks, concluding: “Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years.”
The message reflects growing concern that AI regulation has not kept pace with technological progress, and that global coordination is urgently needed to safeguard human welfare and international security.