Typification
Policy instruments come in three types:
- (Preemptive) Remunerative powers (market interventions): economic interventions or incentives that use markets to reward actors for taking desired actions. Traditional examples include subsidies, tax credits, and economic property rights. These instruments have played a major role in modern policymaking, from reducing emissions through carbon taxes to the development of AI platforms. But they give firms a great deal of leeway with little accountability, which leads to unpredictable results.
- (Retributive) Coercive powers (regulatory measures): legal instruments issued by governments or legislative bodies. Failure to comply with mandated requirements can result in sanctions, whether criminal, civil, or administrative.
- Normative powers (voluntary actions): rather than using legal or economic coercion to achieve outcomes, governments operating under these regimes attempt to reach their goals by influencing and motivating actors through communication and persuasion. This is the most lenient form of AI governance.
References: Some noteworthy pieces of AI governance legislation include Idaho’s law requiring that all documents, data, and records used to build or validate pre-trial court risk assessment AI tools “shall be open to public inspection, auditing, and testing”.
A popular right-wing perspective is that ‘human rights prevent innovation’ and undermine a ‘move fast and break things’ ethos. I believe, on the contrary, that human rights provide an appropriate basis for international standards and processes. That said, human rights norms are very loosely defined and thus too vague to govern AI systems, a problem to which I have yet to find a convincing answer.
Even when actors are sincerely motivated to behave ethically, abstract principles (transparency, fairness, non-maleficence, responsibility, and privacy) give little if any guidance to governments or regulated entities about what, in practice, to do to ensure those principles are met. They are not actionable.
Governance tools like regulatory sandboxes and soft law approaches (as used for Wi-Fi and LEED) have been implemented with some success in other fields and are thought to show potential for AI governance as well. There is no shortage of extremely intelligent ideas.
A framework for building reliable AI safety cases | Self-Governance
Whenever a pharmaceutical company rolls out a drug, it is obligated to provide a safety case: a well-documented case, backed by clinical-trial evidence, that the drug is acceptably safe. The same applies to nuclear reactors, automobile manufacturers, and so on. In effect, a certification.
For the 2020 conference, the NeurIPS committee introduced a requirement that authors include in their papers a section reflecting upon the broader impact of their work. The idea was to push researchers to consider potential negative societal impacts of AI research. See e.g. Abuhamad & Rheault 2020.
For 2021, this requirement was changed to a checklist that authors must answer when submitting a paper.
Provable AI safety (paving a way for AI governance): if we could encode a detailed world model and express human values and preferences in a formal language, it might be possible to formally demonstrate and verify that a (miniaturized) AI system will not take actions leading to catastrophe, creating proof certificates for small-scale demonstrations of AI safety.
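A minimal sketch of what such a proof certificate could look like at toy scale, assuming a hand-coded world model with an enumerable state space (the states, actions, and `catastrophic` predicate below are purely hypothetical): exhaustively explore every state reachable under the agent's policy and certify that none of them is catastrophic.

```python
from collections import deque

# Toy world model: states are (position, reactor_on) pairs on a 4-cell track.
# "Catastrophe" = the reactor is on while the agent is at position 3.
# Everything here is a hypothetical miniature example, not a real framework.
ACTIONS = ["left", "right", "toggle"]

def step(state, action):
    pos, reactor_on = state
    if action == "left":
        return (max(pos - 1, 0), reactor_on)
    if action == "right":
        return (min(pos + 1, 3), reactor_on)
    return (pos, not reactor_on)           # "toggle"

def catastrophic(state):
    pos, reactor_on = state
    return reactor_on and pos == 3

def policy_allows(state, action):
    # The (toy) AI policy: only toggle the reactor at position 0,
    # and never step into position 3 while the reactor is on.
    pos, reactor_on = state
    if action == "toggle":
        return pos == 0
    if action == "right":
        return not (reactor_on and pos == 2)
    return True

def certify(initial_state):
    """Exhaustively check every state reachable under the policy.
    Returns the explored set as a small 'proof certificate' if safe,
    or raises with a counterexample if a catastrophic state is reachable."""
    seen, frontier = {initial_state}, deque([initial_state])
    while frontier:
        state = frontier.popleft()
        if catastrophic(state):
            raise AssertionError(f"counterexample: {state}")
        for action in ACTIONS:
            if policy_allows(state, action):
                nxt = step(state, action)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen  # every reachable state, each checked to be non-catastrophic

certificate = certify((0, False))
print(f"Verified {len(certificate)} reachable states, none catastrophic.")
```

At real-world scale this exhaustive search is intractable, which is exactly why formal world models and machine-checkable proofs are proposed instead; the toy version only shows what such a certificate would be certifying.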
These developments around the NeurIPS broader impact requirement could be used as a case study to learn about the pressures that shape institutional change within the AI research community.
Relevant Research Questions
- What events and decisions led to the creation of the broader impacts initiative? What was the motivation for designing the requirement as it was?
- Can we develop a checklist/handbook that helps AI developers successfully and reliably build AI safety cases for their models’ training and development, and iteratively improve those cases? (A rough sketch of what such a case might look like follows this list.)
- Did criticism of the initiative by AI researchers threaten its continued existence?
- Was the initiative successful in pushing AI researchers to consider the possible negative impacts of their work?
- Are conference organizers well positioned to bring about institutional changes within the AI research community?
- We often see that major social movements result in broader mission statements from companies or research organizations (usually the ones that have a stake in politics), which then spread into popular culture and push for policy discussions. We saw this with gender-rights movements in the Western world demanding DEI (diversity, equity, inclusion) in STEM workplaces. What can we learn from the introduction and evolution of these statements? Have they been inspired by a force for justice, or have they themselves inspired just, fair, and socially compatible research? Do they really work beyond being a paper formality?
- What can we learn from the self-governance mechanisms put in place by the AI community and AI companies?
- If you were in control of e.g. Google, what corporate self-governance mechanisms would you put in place to ensure the company behaves in a socially responsible way in the face of radical technological change? Did companies that adopted such mechanisms become less competitive, or was adoption rational even from a profit-seeking perspective?
- Will self-governance provide an example for regulators to learn from, thus improving future regulation? Understanding the relationship between self-governance and regulation may help us to understand where to target our efforts.
- Should we push hard for responsible corporate self-governance first or would better regulation (or the threat of it) improve self-governance anyway?
- Identifying deceptive self-governance (e.g. the tobacco industry’s obfuscation of the link between cigarettes and lung cancer: in this paper, researchers show how conflicts of interest with big tech/pharma cloud academics’ judgment and work, giving space for bias to shape the research and leading to a loss of academic integrity).
- How did regulation and self-governance play off one another to contribute to responsible governance (or the failure to achieve it)? Were there feedback loops involved between the two?
- Does self-governance genuinely incentivise corporations to behave responsibly, and if so, at what cost?
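As a rough illustration of the checklist/handbook question above, here is one hypothetical way a safety case could be represented so that it can be iteratively improved: a tree of claims, each backed by evidence, where building the case means closing every unsupported leaf. All names and fields below are my own sketch, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str          # e.g. "red-team evaluation report, v3"
    artefact_uri: str         # where the underlying document/data lives (illustrative path)

@dataclass
class Claim:
    statement: str                                        # what is being asserted
    evidence: list[Evidence] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def unsupported(self) -> list["Claim"]:
        """Checklist helper: return every claim with no evidence and no
        sub-claims, i.e. the gaps a developer still has to close."""
        if not self.evidence and not self.subclaims:
            return [self]
        gaps = []
        for sub in self.subclaims:
            gaps.extend(sub.unsupported())
        return gaps

# Example use: a (toy) safety case for a model release.
case = Claim(
    statement="The model is acceptably safe to deploy",
    subclaims=[
        Claim("Dangerous-capability evaluations found no critical capability",
              evidence=[Evidence("eval suite results", "reports/evals.pdf")]),
        Claim("Misuse mitigations are in place and tested"),   # still a gap
    ],
)
for gap in case.unsupported():
    print("Missing evidence for:", gap.statement)
```

Running it prints the one claim still lacking evidence, which is the kind of gap a handbook could direct developers to close, and re-check, before release.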