
Google Policy Agenda unveils AI regulation wish list
Google has published an AI Policy Agenda paper outlining a vision for the responsible use of AI and suggestions for how governments should regulate and encourage the industry.
Google AI Policy Agenda
Google announced the publication of an AI policy agenda with proposals for responsible AI development and regulations.
The paper notes that governments around the world are shaping their AI policies independently, and calls for a coherent AI agenda that balances protection against harmful consequences with not stifling innovation.
Google writes:
“Getting AI innovation right requires a policy framework that ensures accountability and enables trust.
We need a holistic AI strategy that focuses on:
(1) unlocking opportunities through innovation and inclusive economic growth;
(2) ensuring accountability and building trust; and
(3) protecting global security.
A coherent AI agenda must advance all three goals—and not one at the expense of the others.”
Google’s AI policy agenda has three main goals:
- Opportunity
- Responsibility
- Security
Opportunity
This part of the agenda calls on governments to encourage AI development by investing in:
- Research and development
- A frictionless legal environment that clears the way for AI development
- Educational support to train an AI-ready workforce
In short, the agenda calls on governments to get out of the way and throw their support behind AI to move the technology forward.
The policy agenda states:
“Countries have historically excelled at maximizing access to technology and using it to achieve important public ends, rather than trying to constrain technological advances.”
Responsibility
Google’s policy agenda argues that the responsible use of AI depends on a mix of government legislation, corporate self-regulation, and input from non-governmental organizations.
The policy agenda recommends:
“Some challenges can be addressed through regulation to ensure that AI technologies are developed and deployed in line with responsible industry practices and international standards.
Others require fundamental research to better understand AI’s benefits and risks and how to manage them, and to develop and deploy new technical innovations in areas such as interpretability and watermarking.
And others may require new organizations and institutions.”
The agenda also recommends:
“Encourage the adoption of common approaches to AI regulation and governance and a common lexicon, based on the work of the OECD.”
What is the OECD?
The OECD is the Organisation for Economic Co-operation and Development; its OECD.AI Policy Observatory is supported by corporate and government stakeholders.
The Observatory’s government stakeholders include the US Department of State and the US Department of Commerce.
Corporate stakeholders include organizations like the Patrick J. McGovern Foundation, whose leadership team is made up of Silicon Valley investors and technology executives with a vested interest in how technology is regulated.
Google advocates less corporate regulation
Google’s policy recommendation on regulation is that less regulation is better, and that corporate transparency requirements could stifle innovation.
It recommends:
“Regulatory focus on the highest-risk applications can also discourage innovation in the highest-value applications where AI can provide the greatest benefits.
Transparency, which can promote accountability and equity, can come at the expense of accuracy, security, and privacy.
Democracies must carefully consider how to strike the right balance.”
Later, it recommends weighing efficiency and productivity in the balance:
“Call on regulators to consider trade-offs between various policy goals, including increasing efficiency and productivity, transparency, fairness, privacy, security and resilience.”
There has always been, and always will be, a tug-of-war between companies fighting oversight and government regulators trying to protect the public.
AI can solve humanity’s toughest problems and offer unprecedented benefits. Google is right that the interests of the public and of businesses need to be balanced.
Useful recommendations
The document makes useful recommendations, such as suggesting that existing regulators develop guidelines specifically for AI and consider adopting the new ISO standards currently under development (e.g. ISO 42001).
The political agenda recommends:
“a) Direct sectoral regulators to update existing oversight and enforcement regimes to apply to AI systems, including how existing authorities apply to the use of AI and how an AI system’s compliance with existing regulations can be demonstrated using international consensus multi-stakeholder standards such as the ISO 42001 series.
b) Instruct regulators to produce periodic reports identifying capacity gaps that make it harder both for covered companies to comply and for regulators to conduct effective oversight.”
In a way, these recommendations state the obvious: it is a given that authorities will develop guidelines so regulators know how to regulate.
Tucked into that statement is the recommendation of ISO 42001 as a model for what AI standards should look like.
It should be noted that the ISO 42001 standard was developed by the ISO/IEC committee on artificial intelligence, which is chaired by a twenty-year Silicon Valley technology executive, with other members drawn from the technology industry.
AI and security
This is the part that addresses the real dangers of AI: malicious use to create disinformation and misinformation, and cyber-based harms.
Google outlines challenges:
“Our challenge is to maximize the potential benefits of AI to global security and stability while preventing threat actors from using this technology for malicious purposes.”
And then offers a solution:
“Governments must simultaneously invest in research and development and accelerate public and private adoption of AI, while controlling the proliferation of tools that could be abused by malicious actors.”
Recommendations for governments to counter AI-based threats include:
- Develop ways to detect and prevent election interference
- Share information about security vulnerabilities
- Develop an international trade control framework for dealing with companies engaged in research and development of AI that threatens global security
Reduce bureaucracy and increase government adoption of AI
Next, the paper advocates streamlining government adoption of AI, including increased investment in it.
“Reform government acquisition policies to leverage and promote world-leading AI…
Examine institutional and bureaucratic roadblocks that prevent governments from breaking down data silos and adopting best-in-class data stewardship to unlock the full potential of AI…
Harness data insights through human-machine collaboration, building nimble teams with the skills to quickly build, customize, and deploy AI systems that no longer require a computer science degree…”
Google’s AI policy agenda
The policy agenda offers thoughtful suggestions for governments around the world to consider when formulating AI regulations.
AI is capable of many positive breakthroughs in science and medicine, breakthroughs that can provide solutions to climate change, cure diseases, and extend human life.
In a way, it’s a shame that the first AI products made available to the world are the comparatively trivial ChatGPT and DALL-E applications, which contribute very little to the benefit of mankind.
Governments are trying to understand and regulate AI as these technologies are deployed around the world.
Strangely, open-source AI, arguably the most consequential form of the technology, is mentioned only once.
The only context in which open source is addressed is recommendations for dealing with misuse of AI:
“Clarify the potential liability for misuse of both general and specialized AI systems (including open source systems where appropriate) by various participants – researchers and authors, developers, implementers and end users.”
Given Google’s alleged fear that open-source AI has already won, it’s odd that open-source AI is mentioned only in the context of misuse of the technology.
Google’s AI Policy Agenda reflects legitimate concerns about over-regulation and inconsistent rules around the world.
But Silicon Valley insiders abound in the organizations tasked with helping shape the policy agenda and develop the industry standards and regulations, which raises the question of whose interests those standards and regulations reflect.
The policy agenda successfully communicates the need and urgency to develop sensible and fair regulations to prevent harmful consequences while allowing useful innovations to continue to develop.
Read Google’s blog post about the policy agenda:
A Policy Agenda for Responsible AI Progress: Opportunity, Responsibility, Security
Read the AI policy agenda itself (PDF):
A Policy Agenda for Responsible AI Progress: Opportunity, Responsibility, Security
Featured image from Shutterstock/Shaheerrr