Published Feb 11, 2024
Humans, unsurprisingly, aren’t great at predicting technology more than a few years out. The best we can still do is extrapolate from current trends, learn from past mistakes, and adapt. Making accurate predictions about the future is hard, but making predictions about AI is harder. Why? That is a question I have yet to explore.
Much of the conversation and concern around AI focuses on potentially dangerous capabilities, such as sophisticated deception and situational awareness, that have not yet been demonstrated in AI systems. Can we evaluate and predict such harmful capabilities in existing AI systems, to build a foundation for policy and further technical work?
What impact did US nuclear strategists have on nuclear strategy in the early Cold War? What kinds of experts advised on US nuclear strategy? How, and in what ways, did they affect state policymaking on nuclear weapons from 1945 through the end of the 1950s (and possibly beyond)? How could they have had a larger impact? Through what pathways have technical experts been able to shape state policymaking on this critical technology? Knowing whether nuclear strategists had any impact should update us on the extent to which we might be able to influence AI governance (or the development of other crucial technologies, for that matter).
Today, nine countries possess nuclear arsenals: the United States, Russia, France, China, the United Kingdom, Pakistan, India, Israel, and North Korea. In total, the global nuclear stockpile is close to 13,000 weapons. source This stands in contrast to many other technologies that emerged out of fear and desperation during WWII and the Cold War, most of which spread across the world. Nuclear weapons did not.
Was that outcome inevitable? Hardly. Knowledge and technologies, even patented ones, are quick to escape boundaries. The US devised many forms of international oversight of nuclear technology to maintain its monopoly, yet it failed. In a foreign policy paper, “Predicting Proliferation: The History of the Future of Nuclear Weapons” source, Yusuf traces how experts repeatedly predicted that widespread nuclear proliferation was inevitable.
As for its implications for society, AI is in the same murky situation that nuclear science was in during the 1930s. Leading nuclear scientists knew that nuclear technology posed long-term harms, but never predicted the horrific form those harms would take in Hiroshima and Nagasaki, or in the disaster at Chernobyl.
After a point, it became painfully clear that global organizations and treaties (like the NPT) could make proliferation harder, but not impossible. Even poorer countries were willing to let their people go hungry while remaining committed to testing their bombs.
The world (largely the Western world, since the Global South and Asia-Pacific were still struggling to stay afloat after independence from colonialism) was pessimistic about this during the early nuclear age, and it cost us world peace. Big tech is again pessimistic about AI harm, and it might cost us something bigger this time.
I would read in the Hindustan Times about the upheaval caused in Indian politics and society at large when India became a nuclear state under the BJP government and was immediately placed under sanctions. But did those sanctions really work? Did US sanctions on Russia in 2022 really work? Europe bore the economic and demographic brunt of that conflict, second only to Ukraine, and sanctions made it worse. That is a stark reminder that we need to rethink multilateral collaboration and technology regulation, especially as the Global South emerges as a major leader and rightfully demands its seat at the table.
Just as powerful nuclear technology diffused across borders, the proliferation of AI in a world as interconnected as ours has catalyzed a paradigm shift across industries and societies, promising unprecedented advances and efficiencies. With those advances comes an ethical imperative to ensure the equitable distribution of benefits, safeguards against discriminatory practices, and the responsible deployment of AI systems.
The dialogue on AI ethics, regulation, and responsibility has garnered significant attention globally, including among developing nations of the Global South. The technology and its problems cross national borders, yet most regulatory structures operate at the national level; that needs to change. At the same time, AI oversight needs to be culturally specific.
Despite its flaws and vulnerabilities, the nuclear nonproliferation regime may provide useful guidance as new technologies like AI come into their own. The conventional wisdom is that powerful technologies will eventually leak out, despite all surveillance. Diffusion is the seemingly inevitable fate of coveted technology.
“Technology happens because it is possible”
J. Robert Oppenheimer, Father of the Atomic Bomb
Last year, OpenAI wrote that “we are likely to eventually need something like an IAEA for superintelligence efforts.” source (The IAEA is the International Atomic Energy Agency, which promotes the peaceful use of nuclear energy.) The Center for AI Safety issued a statement saying: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” source
As with the nuclear puzzle, norms remain important because AI governance relies so heavily on state intent. Putting a leash on transformative AI that has gone berserk feels less tangible than controlling nuclear proliferation, but there are interesting parallels to be drawn.
As with nuclear technology in the 1940s, AI and its surrounding innovation first flourished under little to no regulation. That hands-off approach gave scientists and entrepreneurs space to dream big and push past norms. With increasing digitalisation and big data, however, healthy governance and wise, calibrated interventions are what make AI tenable: they standardize the rules of the game and create a level playing field. We need this risk-reward tradeoff. Applied at the right time and in the right manner, regulation can boost institutions and businesses while ensuring public safety.
Another challenge is the global nature of AI. AI systems can be developed in one country, deployed in another, and used by individuals or organizations in a third. This raises complex jurisdictional issues that can make it difficult to enforce AI governance policies, much like enforcing nuclear regulations.
The taboo around nuclear technology arose once the harms became evident in 1945, with the human suffering in Hiroshima and Nagasaki. In contrast, AI has widespread applications that benefit humanity and civil society, so blocking AI development is undesirable, if it is even possible in the first place. What we can do is build norms for developing AI with guardrails (“interventions”) against clear misuse and against systems likely to escape human control.
For nuclear weapons, the limiting factor is highly enriched uranium and plutonium. For AI, it is likely computational resources. Frontier models cost hundreds of millions of dollars to train, and the requisite server clusters can be located and observed. Moreover, the chips used to train advanced AI are scarce, costly, and trackable, which opens one avenue for AI governance (“chips-first regulation”).
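To make the compute-as-limiting-factor idea concrete, here is a minimal back-of-the-envelope sketch of how a compute-threshold rule under “chips-first regulation” might flag large training runs for reporting. The 6 × parameters × tokens approximation of training FLOPs and the 1e26 FLOP threshold are illustrative assumptions on my part, not a real policy mechanism or API.

```python
# Hypothetical sketch: estimate a training run's compute and check it
# against an assumed reporting threshold. Numbers are illustrative only.

def estimate_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * num_parameters * num_tokens


def exceeds_reporting_threshold(flops: float, threshold: float = 1e26) -> bool:
    """Check whether a run crosses the assumed compute-reporting threshold."""
    return flops >= threshold


if __name__ == "__main__":
    # Hypothetical frontier-scale run: 1 trillion parameters, 20 trillion tokens.
    flops = estimate_training_flops(num_parameters=1e12, num_tokens=2e13)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Reportable under threshold rule:", exceeds_reporting_threshold(flops))
```

The point of the sketch is simply that, unlike diffuse software capabilities, compute at this scale is countable: a regulator who can observe chip sales and data-center capacity can, in principle, estimate who is capable of crossing such a threshold.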
Can past cases of software failures in military systems shed light on this issue? The stakes of this question are high. Accidents in technological systems generated many “near-miss” nuclear crises during the Cold War. In the future, analysis of AI-linked accidents could provide another lens for thinking about the risks associated with artificial general intelligence (AGI).
We could also explore how organizations learn from past incidents to reduce the risks of faulty human-machine interactions. For instance, some have pointed out that there has not been another similar incident with Aegis systems in the thirty years since the Vincennes (Scharre 2018). How have risky human-machine interactions been handled in that time? Similarly, how have major nuclear disasters remained so rare since Chernobyl, and how did the legal and regulatory reckoning that followed it alert nations worldwide and shape subsequent policy work?