A letter to the EU’s future AI office

Keywords: Newsroom, Policy

Once the EU’s AI Act becomes law, the EU will face a long journey to successful implementation. We have a message for the artificial intelligence office that will likely be created to help along the way, as well as for others involved in the implementation process.

Authors: HADRIEN POUGET, JOHANN LAUX, October 3, 2023

Dear (future) members of the (future) AI Office,

We, as observers of AI policy development in Europe, eagerly await your establishment. While the document that will bring you into existence, the AI Act, is still being written, we believe we cannot address one of your crucial roles in the EU’s regulatory ecosystem too early. While you will likely have limited decisionmaking authority, you will play a central role in advising and coordinating decisionmakers. You will help manage the uncertainty surrounding the implementation of the AI Act. We hope to draw your attention to three areas where concrete decisions will need to crystallize in the face of this uncertainty and where your watchful eye could be especially valuable: in the development of harmonized standards, in amendments to the AI Act, and ultimately, in courts.

The role you are set to play may be even more important than that of comparable regulatory bodies. The EU’s attempt to capture a wide range of evolving technologies and applications means it has inevitably left some significant decisions to the implementation phase of the AI Act. The EU bodies currently drafting the act have given you an important role in this phase, especially in coordinating efforts to understand AI risks.1

This will not be straightforward. Rather than assigning uniform requirements to all systems, the EU has chosen to pursue risk-based regulation, assigning greater responsibilities to those developing or using systems that are more likely to cause harm. To this end, the AI Act sorts uses of AI into four categories of risk—unacceptable, high, limited, and minimal—and requires open-ended risk assessments. This approach offers flexibility, but also implicitly assumes that AI’s risks are predictable, knowable, and measurable.

Yet no one today knows all the ways AI will be used, how it will function (or not), the impact it may have on people’s rights, and where lines should be drawn when negative impacts are too great. That does not mean, as some have suggested, that we should do nothing until significant harms have materialized. But it does represent a challenge for those attempting to comply with or enforce the act—and you will need to act as a guide.

Harmonized standards

As with other EU product legislation, technical standards made to complement the AI Act will aim to refine its requirements. So-called “harmonized” standards are primarily being developed by the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC). By offering concrete tests, metrics, and best practices, they will help reduce uncertainty and translate legal requirements into clear expectations for providers and deployers of AI systems. Monitoring standards’ development and execution should be a priority for your office. They are an important source of clarity, but some signs indicate they may not be the panacea they have been for past legislation.

The EU’s 2006 Machinery Directive offers an example of the role standards could play for the AI Act. The directive covers a diverse swath of products (“machinery”) with generic requirements (such as “eliminate or reduce risks as far as possible”). To support these requirements, over a thousand harmonized standards have been developed by CEN and CENELEC, ranging from high-level principles down to half a dozen standards for steel wire ropes.2 The standards cover “compressors and vacuum pumps,” “paper making and finishing machines,” “machines and plants for the manufacture, treatment and processing of flat glass,” and many others. In short, the legal requirements have been translated and adapted to the unique considerations of a dizzying array of machines. This approach has the benefit of providing requirements much clearer than the high-level ones described in legislation—but recreating it may not be straightforward for the AI Act.

Creating effective standards often requires practical experience, of which we have relatively little regarding AI. This experience helps identify common use cases, anticipate problems, and evaluate the effectiveness of mitigations. For example, AI systems intended to “make or materially influence” employment decisions are high-risk under the AI Act,3 but they can be used for anything from generating summaries of interview transcripts to conducting entire interviews. The different uses give rise to different considerations, and it is unclear, and difficult to assess, how effective existing systems are and how they should be improved.

AI also raises more complex normative questions than technical standards typically tackle. The AI Act requires identifying and mitigating risks to fundamental rights. In the EU, these include non-discrimination and equality between men and women, for example; how does this translate into accuracy and bias requirements for an employment decision tool? And should standards even attempt to make this translation? Machinery standards largely deal with physical safety and face a relatively straightforward tradeoff between safety and costs. By contrast, the impacts of AI can carry complex ethical and legal dimensions and are difficult for anyone to handle, let alone industry-led standardization organizations. Currently, the AI Act gives no guidance on how fundamental rights can be protected through technical standards. Past standardization efforts have avoided addressing contested normative questions in a substantive manner, essentially kicking the can down the road for industry to decide for itself.

Finally, there is significant time pressure. Policymakers hope AI standards will be ready by the time the AI Act comes fully into effect, likely in late 2025 (two years after passing). The Machinery Directive came into force three years after passing, and the 485 accompanying harmonized standards were a combination of new and pre-existing standards. Since the directive came into force, 413 standards have been withdrawn, but even more have been added, bringing the total to 901 harmonized standards in place today.4 Standards for the AI Act have less time and fewer preexisting standards to draw from than the Machinery Directive did—like the directive, it seems likely they will be the subject of continual iteration even after 2025.

Recommendations

First, we hope you can contribute to the effort to iterate on standards and move toward increasing specificity. For all the reasons outlined above, it seems likely that standards will initially remain more procedural, outlining high-level steps that should be taken rather than precise technical requirements. This is in line with the state of international standardization and the United States’ AI Risk Management Framework. The AI Office can encourage this move toward specificity by identifying priority use cases for standardization and pushing for them within standardization bodies. Without a push, important use cases may not be covered—the “bottom-up” spirit of standard-setting organizations like CEN and CENELEC means much work is only done if someone in the committee happens to press the issue.5

In addition, you can help communicate to policymakers, judges, regulators, and others the level of faith they should place in standards throughout this process. Adherence to standards should not be conflated with trustworthiness, especially for high-level standards.

Second, we hope you can help determine which normative decisions are appropriate to include in standards. It may be that some aspects of compliance with the act should be left out of standards, to be picked up by courts or third-party auditors. The Medical Devices Regulation, for example, relies heavily on the involvement of auditors and has relatively few (seventeen) associated harmonized standards. Similarly, issues that have extensive and nuanced treatment in courts, such as bias in employment decisions, are likely best adjudicated there. As an alternative, standards could require “ethical disclosure by default” and focus on a minimum set of metrics rather than setting thresholds, informing stakeholders and allowing them to decide what’s acceptable.

Third, we hope you can help inform normative judgements when they must be made. Though they are developing harmonized standards for the AI Act, CEN and CENELEC are not technically part of the EU. The content of harmonized standards was therefore relatively isolated from EU institutions, but a 2016 court decision brought harmonized standards under the jurisdiction of the Court of Justice of the European Union (CJEU). In practice, this gives the court more authority over the standards, and more cases concerning standards are therefore expected to reach it. The AI Office can aim to issue opinions and reviews on AI safety and fundamental rights that inform legal procedures and are taken up in decisions by the CJEU.6

The decision has also led the European Commission to become more involved in drafting the standards. A 2022 report by the European Commission suggests that the consultants responsible for evaluating harmonized standards frequently find them inadequate at upholding EU law. Working with these consultants could provide an opportunity for you to ensure normative questions are handled well during drafting, perhaps by coordinating with civil society.

While some inclusion of civil society is already mandated in the development of harmonized standards, there remain significant obstacles to meaningful participation, including time, money, and expertise. When relevant issues arise, you could translate them into simple language so that those without AI expertise can engage.

When identifying potential impacts of AI on fundamental rights, you should also resist the temptation to focus only on the issues with the highest public and political visibility, such as bias and discrimination, while neglecting other rights and freedoms. Individual rights, such as the protection of an individual’s health data, may come into tension with society’s interest in effective healthcare. Balancing these involves thorny normative judgements.

Amendments to the act

As the AI Act is implemented and its efficacy reevaluated, the European Commission will be empowered to adopt so-called delegated acts. These are fast and flexible amendments to existing legislation that do not have to go through the full legislative procedure. Delegated acts could be used to update which techniques fall under the definition of AI, the classification of high-risk AI systems, the requirements for technical documentation, and other aspects of the AI Act.

It seems likely that delegated acts will need to be used. For instance, to determine whether an AI system is high-risk, and therefore subject to most of the act’s requirements, the AI Act relies on relatively complex interactions with other EU legislation (as outlined in Article 6(1)). In essence, if an AI system is subject to a third-party assessment required under another piece of EU legislation, then it is considered high-risk under the AI Act. However, it can be tricky to integrate the AI Act with other pieces of legislation that employ slightly different definitions and assumptions. For example, an AI system used in a robot to identify people and avoid harming them may not be considered high-risk because of the Machinery Directive’s differing definition of “safety component”7—an issue that has largely been addressed in the Machinery Regulation set to replace the Machinery Directive in 2027.8

It seems likely that similar issues will continue to surface, especially as EU legislation is created and updated. More generally, the AI Act is a complex document covering a rapidly shifting industry, and updating the act will be an important part of keeping it relevant.

Recommendations

First, as the AI Act is put into practice, you should pay close attention to these especially complex relationships with other legislation. These interactions may call for revisions not only of the AI Act, but also of the other legislation involved. Such loopholes may not be obvious while they are quietly being exploited.

Second, we hope that if and when you make recommendations to the European Commission on delegated acts, you are careful to maintain your independence. The European Parliament’s proposal for the AI Act goes to great lengths to ensure that the AI Office includes a balanced collection of stakeholders, which we welcome. When the European Commission installed its High-Level Expert Group on Artificial Intelligence (HLEG AI) to initially assess the need for AI legislation, the heavy influence that industry interests exerted on its fifty-two members (many of whom were industry representatives) led to the removal of the term “red lines,” that is, banned applications of AI, from the HLEG AI’s final document. While a set of prohibited practices did make it into the AI Act, this experience should serve as a warning. Advisory groups like the AI Office represent tempting targets for regulatory capture.

Courts

Lastly, enforcement of the AI Act will inevitably play out in courts, both nationally and on the EU level. As mentioned, the CJEU will likely end up reviewing harmonized standards, even if only in limited ways. Moreover, the relationship of the AI Act with other EU legislation such as the Machinery Directive may need to be settled by the CJEU in order to keep the interpretation of EU law coherent across member states. For you—the AI Office—monitoring legal proceedings involving AI systems will be important, as court decisions will reduce both normative and empirical uncertainty around AI.

The uncertainty-reducing effect of judicial decisions will not be limited to cases based on the AI Act. For example, in its 2020 SyRI judgment, the Hague District Court struck down a Dutch law governing the use of SyRI, an algorithm designed to identify potential welfare fraud. The court ruled that SyRI interfered disproportionately with the right to private life delineated in Article 8(2) of the European Convention on Human Rights. The ruling shed light on the interplay of EU data protection law, international human rights law, and demands for transparency of automated decisionmaking systems. SyRI was opaque by design, as the Dutch government feared that citizens would otherwise learn how to “game” the welfare system. This inhibited the court’s ability to answer legal questions such as whether the risk of discrimination was sufficiently neutralized. With transparency being a core governance principle of the AI Act, such decisions highlight which justifications for opacity—whether based on fears of gaming the system or on IP rights and trade secrets—will be normatively acceptable to the judiciary.

At the time of writing, it is unclear whether individuals will have rights to remedies under the AI Act at all and, if so, to what extent. Even if the European Parliament gets its way and individual remedies are included, not every infringement of the AI Act will confer a right to be compensated.9 A core function of the AI Act is, however, to safeguard fundamental rights. Whether brought to court via the AI Act or through existing legislation like the Directive on Equal Treatment in Employment and Occupation, individuals’ complaints against AI-driven decisions will reveal information about normative pressure points and their judicial resolution.

Recommendations

First, we expect that by following court proceedings involving AI you will be able to identify gaps in the protection of the fundamental rights that the AI Act seeks to guarantee. Part of this monitoring should involve a continuous reevaluation of whether current technical standards suffice to satisfy the normative demands established by courts. Judges will often end up weighing legal interests against each other, such as AI developers’ economic interests, states’ interest in maintaining their social welfare systems, and consumers’ and citizens’ interest in not being discriminated against. The balances struck may require changing the legal status quo—for example, by classifying as high-risk AI systems that have not been so classified before or by amending technical standards that now appear ill-suited to meeting normative demands.

Legal proceedings will not only reduce normative uncertainty. Courts will also play a role in identifying AI’s risks. The prospect of liability can incentivize AI developers (and operators) to generate information about the risks they create and take measures to reduce them.

In September 2022, the European Commission proposed a new AI Liability Directive and a revision of the Product Liability Directive. When product liability laws were introduced decades ago, manufacturers were usually assumed to be best positioned to bear the liability for harms caused by defective products; harms often materialized immediately and with calculable probability. With AI, uncertainties are much greater, for consumers as well as for developers and users. This resembles situations of scientific uncertainty encountered in liability cases concerning toxic chemicals: harms materialize after longer periods of time, and the causal links between exposure and harm are not well understood. The foreseeability of harm therefore presents a difficult policy choice: strict liability for unforeseeable behavior of an AI may curb developers’ appetite for placing the AI on the market, thus forfeiting the social benefits that may come from the technology. If developers’ liability were instead limited to risks that can be predicted based on state-of-the-art technology and standard practices, your expertise on this line of defense would become important in cases where risk predictability is contested. The AI Office will regularly be better placed than the CJEU to judge whether an AI system was developed and used according to the latest scientific and technical knowledge.

Second, we hope that by monitoring the disclosure of technological evidence in legal proceedings, you will help the judiciary understand which harms are predictable and which are not. You could flag situations in which following current best practices can still be expected to result in unpredictable behavior from an AI system. If a certain type of harm keeps occurring even though current best practices cannot predict it, that would call for the AI Office to publish an opinion. By providing such information publicly, you could in turn incentivize AI developers to push the technological status quo further toward safety and fundamental rights protection instead of hiding behind a “state-of-the-art” defense. This is especially important in domains where the technological knowledge about certain AI systems and their risks lies almost entirely with developers.

The long road ahead

Passing the AI Act will be but the first step on a long road of decisions that will shape the EU’s regulatory ecosystem for AI. Implementation decisions will abound, in technical standards, in amendments to the act, and in courts—and it seems the AI Act will leave significant uncertainty as to how those decisions should be made. We believe that you, at the AI Office, are poised to play a key role in monitoring the effectiveness of the act with input from all stakeholders, consolidating information and experience as AI’s risks and benefits materialize, and feeding resulting insights into important decisions.

Sincerely,

Hadrien Pouget, Associate Fellow, Carnegie Endowment for International Peace

Johann Laux, British Academy Postdoctoral Fellow, Oxford Internet Institute, University of Oxford

Notes

1 The European Commission’s initial April 2021 draft of the AI Act, the Council of the European Union’s proposed edits from December 2022, and the European Parliament’s proposed edits from June 2023 can be compared here under “AIA – Trilogue – Mandates.” Parliament and the council are now negotiating a joint position with the European Commission’s involvement.

2 “Directive 2006/42/EC [Machinery Directive] – Summary List of Harmonised Standards,” European Commission, accessed September 28, 2023, https://ec.europa.eu/docsroom/documents/55577. Including standards that have been withdrawn, 1,314 standards are listed.

3 From the European Parliament’s June 2023 proposal. See note 1.

4 European Commission, “Machinery Directive.”

5 Standardization requests to CEN and CENELEC from the European Commission can help force agendas through. One has already been made for AI, and another will be made specifically for the AI Act. However, it seems unlikely the European Commission will be able to anticipate and include all high-risk uses of AI in its request.

6 The European Data Protection Board—whose role in the implementation of the General Data Protection Regulation (GDPR) is somewhat comparable to that of the AI Office under the AI Act—issued, among other publications, guidelines and a review report that have been cited in several opinions by the CJEU’s advocates general.

7 The robot is likely covered by the Machinery Directive but would not require a third-party conformity assessment, so an AI system integrated into the robot would not be labeled high-risk for that reason. The AI system could still be high-risk under the act if (1) the AI system itself were covered by the machinery directive, and (2) the system were required to undergo a third-party conformity assessment. (1) is possible if the AI system is considered, under the Machinery Directive’s definitions, a safety component of the robot. However, that requires among other things that the AI system be “independently” placed on the market, and a guidance document clarifies that “software . . . is not considered a ‘safety component’” but “physical components incorporating software” can be, leaving room for doubt. Even if (1) holds, for (2) only some types of safety component would require third-party conformity assessments, and only if a company does not apply relevant harmonized standards.

8 Regulation (EU) 2023/1230, the Machinery Regulation, now requires third-party conformity assessments for “Machinery that has embedded systems with fully or partially self-evolving behaviour using machine learning approaches ensuring safety functions that have not been placed independently on the market…”, and admits software as a type of safety component.

9 At least, the CJEU’s case law on Article 82 of the GDPR (compensation for material or nonmaterial damages) suggests as much. In UI v. Österreichische Post AG, the CJEU clarified that an infringement of the GDPR by itself does not necessarily confer a right to compensation. At the same time, the judgment rejected a member state’s threshold of seriousness for a nonmaterial damage, allowing emotional harm potentially to entitle a party to compensation.
