The field of artificial intelligence (“AI”) is at a tipping point. Governments and industries are under increasing pressure to forecast and guide the evolution of a technology that promises to transform our economies and societies. In this series, our lawyers and advisors provide an overview of the policy approaches and regulatory frameworks for AI in jurisdictions around the world. Given the rapid pace of technological and policy developments in this area, the articles in this series should be viewed as snapshots in time, reflecting the current policy environment and priorities in each jurisdiction.
We start this series with a look at how the European Union is approaching the governance of AI.
Future of AI Policy in Europe
As the summer doldrums recede into memory and lawmakers return to work, it is an apt time to reflect on the future of AI policy in the European Union. The EU sees itself as the global leader in regulating artificial intelligence, with a draft EU AI Act nearing adoption and a draft EU AI Liability Directive in the works. These initial steps will help shape the wider AI governance structure currently emerging across the world.
I. Policy Vision & Approach
The EU’s AI legislative initiatives are part of an overall policy vision of “technological sovereignty,” which it implements through regulations such as the Digital Markets Act and the Digital Services Act. The EU model is likely to be influential in many important markets across the world, given the so-called “Brussels effect,” whereby EU regulations often become global rules: the EU is a large market that is often a first mover on regulation, and it can be more efficient for international firms to adopt a single compliance standard.
Yet, when the EU AI Act was first proposed two years ago, some viewed it as putting the cart before the horse: focusing on control rather than capability, or, in a twentieth-century analogy, seeking to excel at stop signs rather than at producing cars. Notwithstanding the perception among some in Europe that the EU is in a race with the United States on tech and AI, the real competition is between the U.S. and China, with Europe lagging significantly behind in the development of cutting-edge AI and related technology.
Nathaniel Fick, the U.S. ambassador-at-large for cyberspace and digital policy, recently made the same argument, suggesting that EU AI regulations could hamper the technology’s development. Likewise, France’s Digital Minister Jean-Noël Barrot criticized the European Parliament’s draft text of the EU AI Act as “too stringent” and potentially stifling European innovation. President Macron has also sought to focus on the need to build underlying AI technologies, pledging over €7 billion to fund AI research and development. More recently, over 150 European CEOs and tech experts voiced concern about the EU AI Act’s potential overreach and urged the EU to become “part of the technological avant-garde.”
Although the EU AI Act is nearly finalized, it is only the first step in a wider regulatory infrastructure emerging in Europe—and globally—that will need to keep competing policy objectives in mind: balancing control with capability, and risk with innovation. Whether Europe becomes the tip of the spear on AI, or a global laggard, will depend at least to some degree on the policy and regulatory choices it makes, which we turn to next.
II. Major Policy & Regulatory Initiatives
The EU is currently in the final stages of negotiating landmark legislation on artificial intelligence (the EU AI Act and the related EU AI Liability Directive), which it seeks to complete before next year’s elections for the European Parliament and the selection of a new European Commission.
A. EU AI Act
Proposed by the European Commission in April 2021, the draft EU AI Act is an ambitious piece of legislation that seeks to regulate “high-risk” AI systems, impose transparency obligations on providers of certain non-high-risk AI systems, and prohibit certain AI practices (such as social scoring that leads to detrimental treatment, and the use of subliminal techniques to distort behavior). Notably, it could impose substantial administrative costs on providers of high-risk AI systems, covering compliance, oversight, and verification, which may amount to as much as 10 percent of a system’s underlying value.
The AI Act also proposes so-called “regulatory sandboxes”: controlled environments in which developers can test new technologies for a limited period of time, with a view to complying with the regulation. Spain, which holds the rotating Presidency of the Council of the EU until the end of December, is hosting one such sandbox, enabling companies and regulators to test procedures and compliance mechanisms against the standards of the proposed regulation.
The EU AI Act is nearing adoption: the Council of the EU adopted its “general approach” in December 2022, and the European Parliament adopted its compromise text in June 2023. The Parliament’s text was based on a draft, incorporating over 3,000 amendments, that its Internal Market and Civil Liberties Committees had approved the month before.
Negotiations on the final text (called “trilogues”) have begun among the three EU institutions—the Council of the EU, the European Parliament, and the European Commission—and should conclude over the next couple of months. There are several matters at issue in the final negotiations, including whether to ban all facial recognition used in public places; the regulation of large language models; and whether to treat certain generative AI models as high risk.
The Spanish government has committed to finalizing an agreement on the legislative text during its Council Presidency this year. If, however, the Act is not adopted before the June 2024 elections for the European Parliament and the selection of a new Commission to take office in late 2024, the legislation is likely to be delayed by six months to a year, and the new Parliament and Commission may have different priorities for it. Once adopted, the AI Act will become applicable across the EU two to three years later, depending on which institution’s text prevails in the negotiations.
B. EU AI Liability Directive and Product Liability Directive
In September 2022, the European Commission proposed a new directive on adapting non-contractual fault-based civil liability rules to AI. The proposal establishes rules that would govern the preservation and disclosure of evidence in cases involving high-risk AI (as defined under the AI Act), as well as rules on the burden of proof and corresponding rebuttable presumptions.
If adopted as proposed, the draft AI Liability Directive will apply to damage that occurs two years or more after the Directive enters into force. Five years after its entry into force, the Commission will consider the need for rules on no-fault liability for AI claims. Alongside the AI Liability Directive, the European Commission proposed updates to the Product Liability Directive to harmonize rules for no-fault liability claims by persons who suffer physical injury or damage to property caused by defective products. Software, including AI systems, is explicitly named as a “product” under the proposal, meaning that an injured person can claim compensation for damage caused by a defective AI system.
Stakeholders and academics are questioning, among other things, the adequacy and effectiveness of the proposed liability regime, its coherence with the EU AI Act currently under negotiation, its potentially detrimental impact on innovation, and the interplay between EU and national rules. Once the EU AI Act is finalized, focus will turn to completing these two legislative files.
III. Other Policy Initiatives
Beyond the EU AI Act and associated initiatives, the EU has also been active in shaping the direction of AI policy through engagement with industry and international partners.
A. AI Code of Conduct / Pact
Amid the flurry of media attention over the past few months on the pace of AI developments, particularly generative AI and large language models, the two European Commissioners with overall charge of digital policy, Executive Vice President Margrethe Vestager and Commissioner Thierry Breton, each signaled an intention to pursue a voluntary code of conduct with private industry. The precise terms of such a pact or pacts have yet to be made public. Meanwhile, there has been latent competition between Vestager and Breton for primacy over EU digital policy. Ultimately, it appears that Vestager’s approach will have global scope, building on her discussions within the G7 (as discussed further below), whereas Breton’s will focus on accelerating the de facto applicability of the EU AI Act within Europe, even before the legislation formally takes effect two to three years after adoption.
On September 5, Vestager took an unpaid leave of absence from the Commission to run for the presidency of the European Investment Bank, with the selection taking place sometime in the fall and the winner assuming office in January 2024. Vice-President Věra Jourová, the architect of the EU-U.S. Data Protection Umbrella Agreement and the Privacy Shield, has taken on Vestager’s digital portfolio in the interim. If Vestager is appointed to the EIB role and resigns from the European Commission, Jourová may retain some of those responsibilities until the end of this Commission’s mandate next autumn, depending on who replaces Vestager as Denmark’s Commissioner. As Vice-President for Values and Transparency, Jourová has already been engaged in the AI policy debate, recently calling for AI-generated content to be watermarked and identifiable.
B. U.S.-EU Trade and Technology Council
Over the past two years, the EU and the U.S. have held ongoing regulatory dialogue on AI within the U.S.-EU Trade and Technology Council (TTC). In December 2022, the TTC’s working group on tech standards issued a new joint roadmap for trustworthy AI and risk management. The Roadmap aims to (i) advance shared terminologies and taxonomies by way of a common repository, (ii) share approaches to AI risk management and trustworthy AI in order to advance collaborative approaches related to AI in international standards bodies, (iii) establish a shared hub of metrics and methodologies for measuring AI trustworthiness, risk management methods, and related tools, and (iv) develop knowledge-sharing mechanisms to monitor and measure existing and emerging AI risks.
Both sides agree on a risk-based approach to AI and on the need to develop trustworthy AI, but differ significantly on the necessary regulatory frameworks, the allocation of responsibility for risk assessment, and the balance between mandatory and voluntary measures. Relatedly, on June 21, a bipartisan group of Congressmen wrote a letter to President Biden expressing concern about the EU’s digital policies and their impact on U.S. firms.
At the last TTC meeting in Sweden on May 30-31, the two sides committed to continue to focus on seizing the opportunities and mitigating the risks of AI, particularly in light of rapid developments in generative AI. They launched three dedicated expert groups that focus on: (i) AI terminology and taxonomy, (ii) cooperation on AI standards and tools for trustworthy AI and risk management, and (iii) monitoring and measuring existing and emerging AI risks. The closing statement of the May meeting confirms that the EU and U.S. will “continue to consult and be informed by industry, civil society, and academia.”
C. G7 Hiroshima AI Process—and Beyond
The EU is also taking the lead in shaping AI policy through the G7. At their last summit in Hiroshima, G7 leaders pledged to “advance international discussions on inclusive artificial intelligence (AI) governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.” It appears that the EU and U.S. are spearheading this effort and plan to present a joint proposal for a voluntary AI code of conduct to G7 leaders for their endorsement. Italy holds the next G7 presidency and will host the summit in Puglia in June 2024.
The UK is also seeking to take a leading role in this multilateral push to develop common standards and approaches to mitigating risks associated with AI. In November 2023, UK Prime Minister Sunak will host an AI Safety Summit, which will be attended by both AI researchers and policymakers. Indeed, European Commission President von der Leyen, U.S. Vice-President Harris, French President Macron, and Canadian Prime Minister Trudeau are all expected to attend.
The U.N. Secretary-General, António Guterres, announced in July that he would also convene a high-level meeting to examine options for the global governance of AI. Guterres intends for this group to build on the recommendations in the July 2023 New Agenda for Peace policy brief, which calls on member states to develop common norms and national strategies on the development, design, and deployment of AI, as well as a global framework for the use of AI and similar data-driven technologies in counterterrorism. In her State of the European Union speech in September, European Commission President von der Leyen endorsed Guterres’ approach, calling for a process similar to the UN’s Intergovernmental Panel on Climate Change that brings “scientists, tech companies and independent experts all around the table,” building on the G7 Hiroshima Process. Von der Leyen also proposed that these experts “develop a fast and globally coordinated response” to AI’s “risks and … benefits for humanity.”
* * *
Policymakers in Europe have made significant efforts to keep pace with these rapid technological developments and have already gained extensive technical and regulatory expertise. Yet, as the landscape keeps evolving, thought leadership, together with engagement from industry, civil society, and academia, will be essential to identifying both the opportunities and risks of new technological frontiers in AI and to developing corresponding policy and regulatory frameworks.
IV. Thought Leadership
Our regulatory and public policy teams closely track and contribute to the discussion around AI policy in Europe. Below is a sampling of related articles on our public-facing blogs:
- EU and US Lawmakers Agree to Draft AI Code of Conduct (June 12, 2023)
- EU Parliament’s AI Act Proposals Introduce New Obligations for Foundation Models and Generative AI (May 24, 2023)
- A Preview into the European Parliament’s Position on the EU’s AI Act Proposal (March 28, 2023)
- EU AI Policy and Regulation: What to look out for in 2023 (February 2, 2023)
- European Commission Publishes Directive on the Liability of Artificial Intelligence Systems (October 12, 2022)
- European Parliament Votes in Favor of Banning the Use of Facial Recognition in Law Enforcement (October 12, 2021)
- European Commission Proposes New Artificial Intelligence Regulation (May 24, 2021)