Policy Instruments and Self-Governance Model

Published Jan 26, 2024

Typification

Policy instruments fall into three broad types.

References: Some noteworthy pieces of AI governance legislation include Idaho’s law requiring that all documents, data, and records used to build or validate pre-trial court risk assessment AI tools “shall be open to public inspection, auditing, and testing.”

A popular right-wing perspective is that ‘human rights prevent innovation’ and undermine a ‘move fast and break things’ ethos. I believe, on the contrary, that human rights provide an appropriate basis for international standards and processes. That said, human rights norms are loosely defined and thus arguably too vague to govern AI systems on their own. This is a tension for which I have yet to find a convincing resolution.

Even when governments or regulated entities are sincerely motivated to behave ethically, abstract principles (transparency, fairness, non-maleficence, responsibility, and privacy) give little, if any, guidance about what to do in practice to ensure those principles are met. They are not actionable.

Governance tools like regulatory sandboxes and soft-law approaches (such as Wi-Fi certification and LEED) have been implemented with some degree of success in other fields and are thought to show potential for AI governance as well. There is no shortage of extremely intelligent ideas.

A framework for building reliable AI safety cases | Self-Governance

Whenever a pharmaceutical company rolls out a drug, it is obligated to provide a safety case: a well-documented clinical trial, backed with evidence, showing that the drug is acceptably safe. The same holds for nuclear reactors, automobile manufacturers, and so on. In effect, a certification.
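To make the idea concrete, here is a minimal sketch of how a safety case might be represented as a claim–argument–evidence structure (in the spirit of Goal Structuring Notation). The class names, fields, and the example report path are illustrative assumptions, not part of any specific certification framework.

```python
# Minimal sketch of a claim-argument-evidence safety case structure.
# Class names and example content are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Evidence:
    description: str   # e.g. a clinical trial report or a test log
    source: str        # where the evidence can be inspected


@dataclass
class Claim:
    statement: str                                   # what is asserted to be true
    argument: str                                    # why the evidence supports the claim
    evidence: List[Evidence] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim is supported if it has direct evidence
        or all of its subclaims are supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.is_supported() for c in self.subclaims)


# Toy example: a top-level safety claim decomposed into one subclaim.
case = Claim(
    statement="The system is acceptably safe for deployment",
    argument="Safety follows if all identified hazards are mitigated",
    subclaims=[
        Claim(
            statement="Hazard H1 is mitigated",
            argument="Mitigation verified by testing",
            evidence=[Evidence("Test report for H1", "reports/h1.pdf")],  # hypothetical path
        ),
    ],
)
print(case.is_supported())  # True
```

The point of the structure is that every top-level safety claim must eventually bottom out in inspectable evidence, which is what regulators in pharma, nuclear, and automotive domains actually audit.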

For the 2020 conference, the NeurIPS committee introduced a requirement that authors include in their papers a section reflecting upon the broader impact of their work. The idea was to push researchers to consider potential negative societal impacts of AI research. See e.g. Abuhamad & Rheault 2020.

For 2021, this requirement was changed to a checklist that authors must answer when submitting a paper.

Provable AI safety (paving a way for AI governance): if we could encode a detailed world model and express human values and preferences in a formal language, it might be possible to formally verify that a (miniaturized) AI system won’t take actions leading to catastrophe, producing proof certificates for small-scale demonstrations of AI safety.
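A toy sketch of what such a proof certificate could look like at miniature scale, assuming the world model is a small finite-state transition system and the AI’s policy picks one action per state; the state names, actions, and policy here are entirely hypothetical. Exhaustive reachability analysis plays the role of the formal verifier, and the returned set of reachable states serves as the certificate that no catastrophic state can be entered.

```python
# Toy sketch: exhaustively verify that a policy over a tiny finite-state
# world model can never reach a catastrophic state. The world model,
# policy, and state names are hypothetical illustrations.

# World model: state -> available action -> next state
WORLD_MODEL = {
    "idle":        {"monitor": "monitoring", "act": "operating"},
    "monitoring":  {"report": "idle"},
    "operating":   {"shutdown": "idle", "overdrive": "catastrophe"},
    "catastrophe": {},
}

# The (miniaturized) AI system's policy: one chosen action per state.
POLICY = {
    "idle": "monitor",
    "monitoring": "report",
    "operating": "shutdown",
}

CATASTROPHIC = {"catastrophe"}


def verify(start: str) -> set:
    """Enumerate every state reachable under POLICY from `start`.

    Returns the reachable set, which acts as a proof certificate that
    no catastrophic state is reachable; raises if one is found."""
    reachable, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state in reachable:
            continue
        reachable.add(state)
        if state in CATASTROPHIC:
            raise AssertionError(f"catastrophic state reachable: {state}")
        action = POLICY.get(state)
        if action is not None:
            frontier.append(WORLD_MODEL[state][action])
    return reachable


certificate = verify("idle")
print(certificate)  # {'idle', 'monitoring'} -- 'operating' is never entered under this policy
```

The hard open problem, of course, is that real-world models and value specifications are nothing like this finite table; the sketch only illustrates what “a proof certificate that catastrophe is unreachable” means in the smallest possible setting.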

These developments could be used as a case study to learn about the pressures that shape institutional change within the AI research community.

Relevant Research Questions