Published Jan 05, 2024
I first became fascinated by the debate over bias mitigation in AI systems in 2017, when I stumbled upon a tech-inspired art project (source) by Joy Buolamwini, founder of the Algorithmic Justice League. It was through that project that she discovered the algorithm behind her software was rooted in implicit biases and, even worse, created the potential for direct and lasting harm.
One of my early encounters with ethics and fairness in AI as a real, large-scale discourse came when I joined Google, around the time Timnit Gebru was fired for criticizing some of the large language models the company was building.
As I got increasingly absorbed in the dialogue, I noticed that all the major media outlets, The New Yorker, the Financial Times, The Washington Post, had published essays arguing that AI represents a catastrophic threat to humanity. It got a cover story in Time. AI pioneers and Turing Award winners Geoffrey Hinton and Yoshua Bengio publicly stated their concern. UK Prime Minister Rishi Sunak called AI an “existential risk.” Our parents were scared. We were (and are) worried that AI will shake the foundations of a stable, democratic society. Since the pandemic, AI-driven misinformation campaigns, autonomous devices, and racial profiling have continued to alarm governments.
Even Elon Musk, a normally blustery personality, appeared cautious about the future of artificial intelligence during a talk at MIT back in 2014. The then-nascent technology was, according to Musk, humanity’s “biggest existential risk,” and his investments in AI firms sought to control the risks in its development.
I was, admittedly, one of the people who viewed the wave of concerns around AI as transient hysteria. Ever since I started reading about AI, I would mostly, if not only, come across two extremes: one over-pumped with “AI to better humanity,” the other alarmingly frightened of “existential risk from AGI.” That’s all. And I, given my helpless and often exhausting habit of skepticism, wasn’t convinced by either. I wasn’t on board the one-way train to doomsville, nor was I over-optimistic about AI imitating and eventually replacing human intelligence. If history has taught us anything, it is that the universe eventually restores its equilibrium. Every emerging technology has its own set of limitations if we care to dig deep enough; however, those limitations are not an excuse to escape progress.
I wanted to find a middle ground, which is when I realized I could not rely on publishers, media houses, and the ideas propagated by big-tech conglomerates. How can we trust people whose careers and livelihoods depend on investing in AI, or on investing in protecting us from AI, not to exaggerate what the technology is capable of? What about the opportunity costs?
Thus, I started exploring independent newspapers and independent researchers more. I realized that AI, if governed and tamed well, can be powerful without being terrifying.
Over the decades, driven by a desire for rigor, we have been surprised to uncover complex cognition across the animal kingdom: first in our closest primate relatives, then in more distant creatures like crows and parrots, and most recently in invertebrates like the octopus and the honey bee. The progression from an overly cautious denial of complex mentality to a more sophisticated understanding of animal minds is one of the great accomplishments of 20th century science. And it holds lessons for how humanity can approach the most critical intelligence explosion since the Paleolithic — that is, AI. An entirely different form of intelligence.
The study of AI lacks coherent methods. AI capabilities are superhuman in some ways and dangerously limited in others. And no one is yet sure what to make of something so human but alien at the same time.
What lessons does the past century of research in animal cognition hold for how to think about today’s AI systems? One lesson is the value of innate structure: much as animal cognition builds on pre-wired, pre-learnt connections, AI systems can use transfer learning, reusing pre-trained representations to aid and enhance learning on new tasks.
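To make the analogy concrete, here is a minimal transfer-learning sketch in Python, assuming PyTorch and torchvision are installed; the backbone, class count, and learning rate are illustrative choices, not a prescription.

```python
# A minimal transfer-learning sketch (assumes PyTorch + torchvision).
# The idea mirrors the animal-cognition analogy: keep the "innate",
# pre-trained backbone frozen and learn only a small task-specific head.
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet (the "pre-learnt connections").
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pre-trained weights so only the new head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a head for a hypothetical 10-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```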
AGI and “superintelligent AI” have always felt like technical jargon to me. As Yann LeCun says, even humans don’t have general intelligence. One of the major logicians and philosophers of all time, Kurt Gödel, once said: “The more I think about language, the more it amazes me that people ever understood each other.” So I believe that we inflated the idea of machine intelligence to escape accountability for not being proactive enough to anticipate the possible use cases early. The “complexity” of the AI system is not the bottleneck; it is the national technological development priorities and systems that fall short.
LLMs can make a developer’s job easier and faster. As these technologies continue to improve, understanding their impact on the labor market will be essential. Forecasting the trajectory of developer roles provides early insight into the future of work. We can use historical analogies and economic insights as a rough guide to answer the question on everyone’s mind: Will LLMs lead to mass displacement across developer roles?
I have to agree that current LLMs don’t live up to the hype, and I am yet to be sold on any particular story of certain doom. That said, it is undoubtedly worth studying how AI can be harmful, and from there, how to make it more sensitive and “safe.”
And who should be doing that work? Well, everybody: government bodies and policymakers, private corporations and AI developers, individual users, and civil society organizations.
Every standard book on artificial intelligence defines it as machine intelligence as opposed to human intelligence. In contrast, I believe AI machines reflect and imitate human intelligence more than we’d like, the good parts and the bad parts alike. There has been a lack of science in criminal legal systems across the world. Risk-prediction algorithms are used in criminal investigations to assign an offender a “score” that estimates the probability of a repeat offense and how dangerous that individual is. These algorithms have been found to be biased against minorities as well as lower-income groups. Face-detection algorithms have been found to discriminate against non-Caucasian groups, including people of Muslim and Sikh faiths and of African, South Asian, and Latin American descent. In the criminal legal space, we have an ethical obligation to build recognition algorithms, DNA-evidence-matching technology, identification algorithms, and risk-assessment instruments that are fair, inclusive, and representative of the whole spectrum of the human race.
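As a rough illustration of what auditing such a risk score can look like, here is a minimal group-fairness sketch in Python using only NumPy; the data is synthetic and the function names are hypothetical, not part of any real audit toolkit.

```python
# A minimal sketch of a group-fairness audit for a risk-scoring model.
# Assumes hypothetical arrays: y_true (re-offense labels), y_score
# (model risk scores), and group (a protected attribute per person).
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT re-offend but were flagged high risk."""
    negatives = y_true == 0
    return float(np.mean(y_pred[negatives])) if negatives.any() else float("nan")

def audit_by_group(y_true, y_score, group, threshold=0.5):
    """Compare false positive rates across protected groups."""
    y_pred = (y_score >= threshold).astype(int)
    return {g: false_positive_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)}

# Toy illustration with synthetic data (not real criminal-justice data).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = rng.random(1000)
group = rng.choice(["A", "B"], size=1000)
print(audit_by_group(y_true, y_score, group))
```

A large gap between the per-group rates is one concrete signal that a scoring instrument treats groups differently.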
India’s religious AI chatbots are speaking in the voice of god and condoning violence (source). Claiming wisdom based on the Bhagavad Gita, the bots frequently go way off script, deeply offending India’s religious sentiments.
“If you have a face, you have a place in the conversation about AI.”
Dr. Joy Buolamwini, MIT researcher.
Thus, naturally, we need laws and policymakers in place to govern this machine form of human intelligence: a technical guardrail. AI governance is a socio-technical system, a nuanced interface where technology and society meet, intertwine, and co-evolve. It naturally brings its own set of ethical dilemmas and complexities, and a whole range of issues such as accountability, transparency, and legality.
If you consider “AI Governance” as the root problem statement, I see two child nodes: AI Safety and AI Ethics, both of which are precursors to AI Governance.
From a technical standpoint, the term “safety” is best defined in terms of an AI agent that consistently takes actions leading to the desired outcomes, whatever those desired outcomes may be. Essentially, making a “safe” AI agent should not be conflated with making an “ethical” AI agent; the two terms are talking about different things.
AI Safety, then, is a technical problem: building a safe agent is largely independent of what “safe” means, because a large part of the problem is how to build an agent that reliably does something, no matter what that thing is, in such a way that the method continues to work even as the agent under consideration becomes more and more capable.
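A toy sketch of that separation, in Python: the names SafeAgent, allowed_actions, and the objectives below are all hypothetical, introduced only to show how the objective-agnostic machinery can be kept apart from whatever the objective rewards.

```python
# A toy illustration of separating "reliably pursue an objective" from
# "which objective": the action filter stays the same for ANY objective.
from typing import Any, Callable, Iterable


class SafeAgent:
    """Picks the best allowed action for whatever objective it is handed.

    The constraint machinery (the allowed-action filter) does not depend
    on the objective, mirroring the point that 'safe' and 'ethical' are
    separate questions."""

    def __init__(self, allowed_actions: Iterable[Any]):
        self.allowed_actions = list(allowed_actions)

    def act(self, objective: Callable[[Any], float]) -> Any:
        # Only pre-approved actions are ever considered, regardless of objective.
        return max(self.allowed_actions, key=objective)


# The same agent machinery works unchanged for two very different objectives.
agent = SafeAgent(allowed_actions=["recommend", "warn", "do_nothing"])
print(agent.act(objective=lambda a: len(a)))        # toy objective 1
print(agent.act(objective=lambda a: a.count("n")))  # toy objective 2
```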
We need to understand AI Governance the way one would understand the abolitionist argument against the American criminal justice system, the one made by Angela Yvonne Davis, a famous figure in America’s civil rights movement: the system is not broken, it is working exactly how it was meant to work. “How it was meant to work” is the problem.
Thus, knowing what to fix is the first step.
Years ago, I attended a talk on AI Governance and x-risk by Haydn Belfield, a Research Assistant at the University of Cambridge’s CSER (Centre for the Study of Existential Risk). More than being just a general-purpose technology, AI could also be transformative in the sense that it can change the course of our civilization in more dramatic ways than one. Much like the Industrial and Commercial Revolutions, AI could potentially save humanity from extinction or re-engineer large-scale demographics; AlphaFold in biotechnology is only one such example. Now that we have realized that AI could be a big deal, governing how AI is incorporated into our society is also a big deal.
| Categorical Concerns | Accident | Misuse | Structural Risks |
|---|---|---|---|
| Short-term | Concrete Problems in AI | Malicious use of AI | Flash crash → flash war (such as in stock markets or military defense systems) |
| Long-term | AI goals aligned with humans | Superintelligence | Race for hegemony |
As intellectually superior as AI systems may seem, even the most advanced ones are limited and narrow in scope. They may be able to defeat their human counterparts at a game or a computation, but in truth that is the only task they excel at. AlphaGo, for example, is the best Go player in the world, but that is all it can do. It can’t move unfettered from one task to another the way a human can. That’s why AGI is a far-off goal.
If used right and realized to its full potential, technology has the ability to benefit humanity in ways more than one. Marshall Burke, an economist at Stanford University, predicts that AGI systems would ultimately be able to create large-scale coordination mechanisms to help eradicate some of our most pressing problems in society, such as poverty and food insecurity.
Every machine learning model, by its very nature, learns from its mistakes and improves iteratively. But self-improving AI systems go far beyond traditional ML mechanisms in that they morph into a different design and create new AI agents in the process of learning, recursively. That is a dramatic development, because it is hard to trace, tame, and govern. The system gets smarter, and becomes increasingly better at making itself smarter: exponential growth.
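A purely illustrative toy model of that dynamic, in Python, under the made-up assumption that each step improves capability in proportion to current capability; the rate and number of generations are arbitrary.

```python
# Toy model of recursive self-improvement: if improvement per step is
# proportional to current capability, capability follows c_n = c0 * (1 + rate)^n,
# i.e. exponential growth.
def capability_trajectory(c0: float = 1.0, rate: float = 0.1, generations: int = 50):
    c = c0
    trajectory = [c]
    for _ in range(generations):
        c += rate * c          # smarter systems make larger improvements
        trajectory.append(c)
    return trajectory

traj = capability_trajectory()
print(traj[10], traj[50])      # growth accelerates with each generation
```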
How do we trust that all transformations of such a system are safe? How do we know the new AI agents it creates are reliable? Is it too convoluted to predict ahead of time what those transformations could be? How do we regulate such systems without forbidding self-learning altogether? How do we build AI agents that are trustworthy enough to build reliable and safe AI agents, and can we delegate that decision-making to the parent AI yet? All of these investigations require a deeper understanding of what made these systems safe or unsafe in the first place, and of how earlier (presumably) safe systems gave birth to new seed systems.
I don’t have the answers, nor a literature review to back them, but the key is to make assumptions explicit and, for the sake of explaining it to others, to be clear about the connection to real-world safe-AI problems.