Global Regulation of AI: The State of Play
By Onur Bakiner - November 10, 2023
It was not too long ago that “regulation stifles innovation” was the tech industry’s mantra. The sector was capable of fixing its own problems, the argument went. Besides, who wants gerontocrats to regulate something as complex as artificial intelligence? Have you not watched Mark Zuckerberg field ridiculous questions at the 2018 U.S. Senate hearing?
What a long way we must have come for the Sam Altmans of the world to be asking for legal regulation of AI, after a decade of research and activism calling for exactly that fell on deaf ears. Talk of AI legislation is everywhere now, but is it actually happening? This post summarizes global legal developments concerning the regulation of the risks, harms, and threats arising from AI applications. AI-specific legislation has not advanced far around the world, but AI-adjacent concerns around privacy, online content, and business practices are being addressed by lawmakers in a number of countries.
What is at Stake?
Computer systems using AI components are deployed in fields as varied as medicine, retail, policing, and education. Accompanying their benefits are potential risks and threats, and in some cases actual harms. Algorithms that expedite medical discoveries can also be used to design chemical or biological weapons. As the recent controversies around chatbots make abundantly clear, AI tools may mislead users by generating inaccurate content. Inaccurate or biased outputs translate into wrongful arrests in policing, disinformation in science and journalism, and malpractice in mental health counseling. Online hate speech is replicated and amplified via AI tools. Recommendation algorithms optimized to maximize user engagement on online platforms disseminate inaccurate, violent, or hateful content at great speed and to many more people than conventional media can reach. Such algorithms can facilitate the formation and growth of hate groups and violent mobs. The collection, storage, and analysis of large amounts of data to power AI tools create incentives to violate data subjects’ privacy rights. In the absence of workplace protections, AI can replace some jobs – whether this necessarily leads to a net job loss, and whether AI performs cognitive and affective tasks as competently as humans, are subjects of ongoing debate, of course. Broadly speaking, the technologies that make our lives better, healthier, and more fun can also make it easier to surveil us, take away our rights, and threaten our livelihoods.
The Global State of AI-Specific Regulation
Most countries have thus far focused their policies on encouraging the development of AI rather than regulating it. AI strategy documents envision competitive advantage for the few countries that have historically attracted technology investment, while others hope not to be left behind. Geostrategic competition, especially between the United States and China, permeates much of the AI discourse. Efforts to address risks, harms, and threats are few in number. The European Union’s (EU) AI Act is undoubtedly the most advanced piece of legislation that addresses AI specifically. Proposed by the European Commission in April 2021, the draft legislation passed the European Parliament in mid-2023. If the three-way negotiations between the Commission, the Parliament, and the Council of the EU reach a conclusion, the Act could be signed into law before the end of 2023.
Beyond the EU, however, AI regulation has not found much legislative success. Canada’s Artificial Intelligence and Data Act (AIDA) generated societal debate in late 2022 and early 2023; it is currently before the House of Commons Standing Committee on Industry and Technology. Elsewhere in the world, the picture looks even less encouraging. The United States, home to a record number of proposed bills since 2017, has yet to bring comprehensive AI legislation to a vote in the House of Representatives or the Senate. Senators have developed an interest in the subject, as evidenced by high-profile tech leaders’ recent visits to the chamber, but they prefer to take things slowly. Legislative bills mentioning AI regulation have also been introduced in Brazil, Chile, China, Colombia, Mexico, and Venezuela; it is not clear whether any of them has progressed.
Legislation Addressing AI-Adjacent Concerns
As discussed in the section on AI risks, harms, and threats, many of the concerns around AI have to do with its amplification of existing violations of privacy, civil-political rights, and consumer rights. Some AI-adjacent concerns, such as privacy, content moderation, and rights protections, can therefore be addressed even in the absence of AI-specific national legislation. The General Data Protection Regulation, in effect since 2018, provides a framework for data privacy and protection in the EU, and data protection authorities in member states have been busy supervising its implementation and investigating alleged violations. India’s Digital Personal Data Protection Act (2023) and the United Kingdom’s Data Protection Act (2018) pursue broadly similar goals, as do Canada’s Personal Information Protection and Electronic Documents Act, in effect since 2000, and the Consumer Privacy Protection Act, proposed in 2022. The United States, infamous for the absence of a federal digital privacy law, has seen state efforts to fill the gap: following California’s Consumer Privacy Act (in effect since 2020), Colorado, Connecticut, Indiana, Iowa, Montana, Tennessee, Texas, and Virginia have enacted data privacy or consumer protection laws. A 2021 report lists Arkansas, California, Colorado, Illinois, Louisiana, New York, Oregon, and Washington as states with protections for biometric data.
Online content has long been outside the purview of direct legal control. In the United States, Section 230 of the Communications Decency Act (1996) gives companies broad exemptions from liability for content posted on their platforms, a few exceptions, such as online sex trafficking, notwithstanding. The EU’s Digital Services Act (2022) aims to change that by making what is illegal offline also illegal online on designated ‘very large online platforms’ (VLOPs), whether they are based in the EU or merely do business there. As more countries rush to regulate online content, controversies abound: the United Kingdom’s Online Safety Bill, which passed Parliament in mid-September 2023 and received royal assent in late October, has drawn criticism for empowering government agencies to surveil messages otherwise protected by end-to-end encryption. Others worry that online content laws risk becoming tools of censorship in the hands of authoritarian and semi-authoritarian regimes, or of democratic ones with increasingly authoritarian practices.
Some of the recent legislation aims to regulate not only algorithmic decision-making or online content but also the business models that have long characterized the tech industry. Big Tech has long been criticized for its monopolistic tendencies and for refusing to pay content providers and intermediaries adequately. The EU’s Digital Markets Act (2022) aims to regulate competition in digital markets by making it illegal for large platforms to abuse their market power. In the United States, the Federal Trade Commission and a number of states sued Amazon in September 2023 for inflating online prices, two weeks after the Department of Justice’s antitrust suit against Google went to trial. Australia’s News Media Bargaining Code (2021) and Canada’s Online News Act (2023) require technology companies to compensate news publishers for the content made available on their platforms. The laws have provoked retaliation by Facebook, which has blocked news content in both countries; negotiations with Google are underway in Canada.
Conclusion
A few final remarks are in order to conclude this summary of global trends in AI law.
The legal regulation of AI is here to stay, as leaders in more and more countries realize its importance.
Early movers, such as the EU, will enjoy the advantage of setting the tone for what that regulation looks like, as future bills elsewhere are likely to emulate pioneering legislation.
Some future laws, such as the EU’s AI Act, will address the specific challenges resulting from automated decision-making, but AI-related concerns are, and will continue to be, handled through privacy, consumer protection, online content, and anti-trust laws as well.
The major unknown is the implementation phase, in part because technology legislation tends to use intentionally vague and flexible definitions, and in part because there exists no blueprint or precedent for using law to eliminate or mitigate the risks, threats, and harms arising from AI.