The lack of coordinated strategies for managing the risks of advanced artificial intelligence is a pressing concern. Nuclear arms control treaties offer a possible framework for AI governance: they provide hard-won lessons in managing technologies with existential implications.
Analogies to nuclear arms control suggest the need for global frameworks that discourage dangerous AI development, much as the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) helped entrench a 'nuclear taboo'. Emphasis should fall on reciprocal risk reduction, giving major powers a pragmatic starting point for managing AI safety. This approach involves stigmatising unethical AI practices and establishing norms through multilateral treaties, potentially fostering international cooperation on AI safety and control.




