Published Jan 05, 2024
The discourse on AI ethics, regulation, and responsibility has garnered significant attention, but it is predominantly driven by stakeholders from the Global North, the US in particular, excluding the other, and larger, half of global users. Much of the tech reporting centers on Silicon Valley: its profits and fluctuations, its innovations and research, and their repercussions. AI systems and algorithms may be developed in Silicon Valley, but much of the training labor and the consumer market base lies in the “other” nations: trained in third-world countries, deployed in fourth-world ones. It is therefore imperative that conversations and solutions concerning AI safety and global governance not only include, but are led by, voices from these “other” nations. India’s emerging leadership in the Indo-Pacific, for instance, hints at a new world order, and the West needs to align itself with the Global South before it is too late.
Countries that have less power (some former colonies, some modern-day colonies) have less sovereignty over their domestic economic and tech policies and data privacy regulations, and thus bear the brunt of massive data collection and weaponization. They provide cheaper labor and so get saddled with menial data-labeling work. Even the biggest tech players open up shop in those third-world countries, hire their best minds at sub-par salaries, deploy those minds to implement projects commissioned by the higher-ups in California, and then release those systems onto third-world populations, with revenues reaching the owners, not the makers, thus detaching the products from the producers. The effect shows up as poor AI governance.
Much like the nuclear trade, third-world nations are unable to own such systems and technologies because they lack the resources to develop their own AI (and even first-world nations only sell the product rather than share the technology, to deter self-sufficiency), and so must cope with AI that was not built for them.
Meanwhile, countries with more power (former colonial powers or today’s hegemonies) disproportionately reap the technology’s economic rewards.
For a long time, third-world countries have been treated only as a market, not as an equal collaborator. Much like US big pharma or the packaged-food industry, which make and patent their products at home but test, experiment, and sell in third-world countries, tech companies such as Facebook lobby to alter policies in order to monopolize markets in those countries.
AI governance that goes beyond borders and is ethical in the truest sense must include the voices of all nations and races. And to secure their committed participation, those nations must be engaged diplomatically.
In this whole dialogue, as a researcher, I’m interested in questions like: Do states promote their national interest by attempting to create national AI champions? How does one country having a national AI champion affect the national interests of other states? These questions can help inform us how strongly incentivised states will be to pursue ambitious AI industrial-policy approaches, and whether there is a strong national-interest case against doing so.
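To make that incentive structure concrete, here is a minimal sketch, with entirely hypothetical payoff numbers, that models two states’ choice over subsidizing a national AI champion as a 2x2 game and searches for pure-strategy Nash equilibria:

```python
# A minimal sketch: two states choosing whether to subsidize a "national
# AI champion", modeled as a 2x2 normal-form game. All payoff numbers are
# hypothetical, chosen only to illustrate a prisoner's-dilemma-like structure.

from itertools import product

ACTIONS = ["subsidize", "abstain"]

# payoffs[(a_action, b_action)] = (payoff_to_A, payoff_to_B)
# Illustrative story: a unilateral subsidy captures market share from the
# other state, while mutual subsidy is a costly race for both.
payoffs = {
    ("subsidize", "subsidize"): (1, 1),
    ("subsidize", "abstain"):   (4, 0),
    ("abstain",   "subsidize"): (0, 4),
    ("abstain",   "abstain"):   (3, 3),
}

def is_nash(a_act: str, b_act: str) -> bool:
    """Check that neither state gains by unilaterally switching its action."""
    pa, pb = payoffs[(a_act, b_act)]
    best_a = all(payoffs[(alt, b_act)][0] <= pa for alt in ACTIONS)
    best_b = all(payoffs[(a_act, alt)][1] <= pb for alt in ACTIONS)
    return best_a and best_b

for a_act, b_act in product(ACTIONS, ACTIONS):
    if is_nash(a_act, b_act):
        print(f"Nash equilibrium: A={a_act}, B={b_act}, "
              f"payoffs={payoffs[(a_act, b_act)]}")
```

Under these illustrative payoffs, the only equilibrium is mutual subsidy, even though both states would prefer mutual restraint; whether real-world payoffs actually have this prisoner’s-dilemma shape is precisely the empirical question posed above.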
Contemporary AI systems are collective, data-intensive systems that rely on massive aggregated input data from many people, and in turn are likely to affect a great number of people when deployed. Yet outcomes are decided by a very small group of people (developers and policymakers) under high uncertainty. Even if there were a way to make the “best” choice for the world, this setup of disproportionate power does not confer enough legitimacy or trust. Furthermore, public input has already proved necessary for resolving normative questions such as “what principles should guide ChatGPT’s behavior,” and it is a critical, albeit underappreciated, input into most safe AI deployment pathways. To ensure that the best and most socially beneficial decisions are made, more focus needs to be placed on developing the best processes for incorporating public input and public will into decision-making on AI systems at advanced AI labs.
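As one toy illustration of such a process, the sketch below aggregates ranked public preferences over candidate behavior principles using a Borda count. The principles and ballots are hypothetical placeholders; a real process (surveys, citizens’ assemblies, deliberative polling) would be far richer:

```python
# A minimal sketch of one possible public-input mechanism: a Borda count
# over candidate principles for a model's behavior. The principles and
# ballots below are hypothetical placeholders, not real survey data.

from collections import defaultdict

principles = [
    "refuse to impersonate real people",
    "explain refusals to the user",
    "defer to local law where it conflicts with policy",
]

# Each ballot ranks all principles, most-preferred first.
ballots = [
    ["refuse to impersonate real people",
     "explain refusals to the user",
     "defer to local law where it conflicts with policy"],
    ["explain refusals to the user",
     "refuse to impersonate real people",
     "defer to local law where it conflicts with policy"],
    ["explain refusals to the user",
     "defer to local law where it conflicts with policy",
     "refuse to impersonate real people"],
]

scores = defaultdict(int)
for ballot in ballots:
    # A principle ranked r-th (0-indexed) earns (n - 1 - r) points.
    for rank, principle in enumerate(ballot):
        scores[principle] += len(principles) - 1 - rank

for principle, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score}: {principle}")
```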
From an economic perspective, among the greatest challenges that transformative AI may pose are increases in inequality: if advanced AI systems can perform work far more efficiently than humans, workers may no longer be able to earn a living wage, whereas entrepreneurs and the owners of corporations may see their incomes rise significantly.
Inequality has been rising, and has been a significant challenge for policymakers, for decades. But increases in inequality are not an unavoidable by-product of technological progress. As long as humans remain in control, whether progress leads to greater inequality or to greater shared prosperity is our collective choice. Ensuring that transformative AI leads to broadly shared increases in living standards is the most important economic dimension of the AI alignment problem.
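To see the mechanism, here is a minimal sketch assuming a toy CES production function in which AI capital and labor are gross substitutes; every parameter value is hypothetical and the point is purely qualitative:

```python
# A toy CES-production sketch, not a forecast. All parameter values are
# hypothetical. rho in (0, 1) makes AI capital and labor gross substitutes
# (elasticity of substitution sigma = 1 / (1 - rho), here 2.5).

K, L = 1.0, 1.0          # fixed stocks of AI capital and labor
alpha, rho = 0.4, 0.6    # capital weight and substitution parameter

def economy(ai_productivity: float):
    """Return (output, wage, capital_income, labor_share) in the toy model."""
    eff_k = (ai_productivity * K) ** rho     # effective AI-capital input
    eff_l = L ** rho                         # effective labor input
    output = (alpha * eff_k + (1 - alpha) * eff_l) ** (1 / rho)
    # Competitive wage = marginal product of labor.
    wage = (1 - alpha) * L ** (rho - 1) * output ** (1 - rho)
    labor_share = wage * L / output
    capital_income = output - wage * L
    return output, wage, capital_income, labor_share

for a in [1, 4, 16, 64]:
    y, w, k_inc, share = economy(a)
    print(f"AI productivity {a:>2}x: output={y:6.2f}  wage={w:5.2f}  "
          f"capital income={k_inc:6.2f}  labor share={share:5.1%}")
```

In this toy run, a 64x rise in AI productivity boosts output roughly 17x and capital income roughly 38x, while wages only triple and the labor share falls from 60% to about 11%; whether such gains are broadly shared then depends on policy choices such as redistribution or ownership structures, not on the technology itself.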
Different streams of research can arise in this pursuit: