Published Jan 30, 2024
Scientists in the 1980s believed that transformative nanotech might be imminent and might lead to the extinction of humanity if managed poorly. These concerns also seem to have spread into popular culture to some extent, and to have been at least a peripheral presence in policy discussions. Some early work on aligned superintelligence (e.g. by the Singularity Institute) was partly motivated by concern about nanotech risk.
Some potential AI governance research questions that could be worth addressing:
Some interesting research questions that come to mind, drawing on genomics and synthetic biology, are:
The goal of exploring the larger universe captured the public imagination while catalyzing many science and engineering breakthroughs. Arguably, AI is now at a similar stage of development as space exploration was in 1958, when the UN formed its Committee on the Peaceful Uses of Outer Space, which led to the Outer Space Treaty (initially proposed by the US, UK, and Soviet Union in 1967, and since ratified by 107 countries). This treaty has been instrumental in providing the impetus and principles underpinning national guidelines and legislation in countries that have invested in developing their own space programs, covering a range of matters including “planetary protection” measures to prevent contamination of celestial bodies and Earth by foreign organisms. Space treaties and international collaborations can teach us about global scientific cooperation and inclusive technological progress. India’s landmark Chandrayaan-3 landing near the Moon’s south pole in 2023 marked a new feat for the entire global south. The NASA-led Artemis Accords set shared goals for collective and collaborative space R&D and technology sharing. Not only is AI a vital component of space research, but space research and policy can teach us about healthy AI governance too.
Compute is a very promising node for AI governance. Why? Powerful AI systems in the near term are likely to need massive amounts of compute, especially if the scaling hypothesis proves correct. Furthermore, compute seems more easily governable than other inputs to AI systems (talent, ideas, data, algorithmic innovation), because it is more easily detectable (it requires energy, takes up physical space, etc.) and because its supply chain is highly concentrated, which enables monitoring and governance (source).
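To make the “massive amounts of compute” point concrete, here is a rough back-of-the-envelope sketch in Python using the widely cited 6·N·D approximation for training FLOPs (N parameters, D training tokens). The model sizes, accelerator throughput, and utilization figures below are illustrative assumptions on my part, not figures from any particular lab:

```python
# Back-of-the-envelope training compute, via the common approximation
# FLOPs ~= 6 * N * D, where N = parameter count and D = training tokens.
# Hardware and utilization numbers are illustrative assumptions.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs with the 6*N*D rule of thumb."""
    return 6 * params * tokens

def gpu_years(flops: float,
              peak_flops_per_sec: float = 312e12,  # assumed A100-class BF16 peak
              utilization: float = 0.4) -> float:   # assumed realized utilization
    """Convert a FLOP budget into GPU-years on the assumed hardware."""
    seconds = flops / (peak_flops_per_sec * utilization)
    return seconds / (365 * 24 * 3600)

# Two hypothetical training runs, small and large:
for name, n_params, n_tokens in [
    ("1B params, 20B tokens", 1e9, 20e9),
    ("70B params, 1.4T tokens", 70e9, 1.4e12),
]:
    f = training_flops(n_params, n_tokens)
    print(f"{name}: {f:.2e} FLOPs ~ {gpu_years(f):.1f} GPU-years")
```

On these assumptions, the larger run works out to hundreds of GPU-years concentrated in one data center, which is exactly the kind of physically detectable, supply-chain-bottlenecked footprint the governance argument above relies on.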