Artificial intelligence is no longer the world’s darling; no longer the “15 trillion dollar baby.” Mounting evidence that AI applications can cause harm and pose risks to communities and citizens has put lawmakers under pressure to come up with new regulatory guardrails.
While the US government is deliberating on how to regulate big tech, all eyes are on the unbeaten valedictorian of technology regulation: the European Commission. This past Wednesday, April 21, the Commission released wide-ranging proposed regulation that would govern the design, development, and deployment of AI systems. The proposal is the result of a tortuous path that involved the work of a high-level expert group (full disclosure: one of us was a member), a white paper, and a comprehensive impact assessment.
The proposal has already elicited both enthusiastic and critical comments and will certainly be amended by the European Parliament and the Council in the coming months, before becoming a final piece of legislation. It is, however, the first of its kind, and marks an important milestone. In particular, it sends a signal to regulators in the US that they will have to address AI as well, especially since the proposal underscores the need for AI risk assessment and accountability for both material and immaterial damage caused by AI — a major concern for both industry and society.
The main ideas
The proposed regulation identifies prohibited uses of AI (for example, using AI to manipulate human behavior to circumvent users’ free will, or allowing “social scoring” by governments), and specifies criteria for identifying “high-risk” AI systems, which fall under eight areas: biometric identification, critical infrastructure management, education, employment, access to essential services (private and public, including public benefits), law enforcement, migration and border control, and administration of justice and democratic processes. Whether or not an AI system is classified as “high-risk” depends on its intended purpose and its modalities of use, not just the function it performs.
When an AI system is “high-risk,” it will need to undergo a pre-deployment conformity assessment and be registered in a to-be-established EU database.
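For readers who want to see the triage logic in concrete form, here is a minimal sketch in Python of how an organization might map its own systems onto the proposal’s tiers. The area and purpose strings paraphrase the proposal, but the `AISystem` record and `triage` function are our own illustrative constructs, not anything defined in the legal text; a real classification would of course require legal review of intended purpose and modalities.

```python
from dataclasses import dataclass

# The eight high-risk areas enumerated in the proposal (paraphrased).
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure management",
    "education",
    "employment",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice and democratic processes",
}

# Prohibited practices named in the proposal (abridged).
PROHIBITED_PURPOSES = {
    "subliminal behavioral manipulation",
    "government social scoring",
}

@dataclass
class AISystem:
    """Illustrative record of a system under review (not a legal term)."""
    name: str
    intended_purpose: str
    area: str

def triage(system: AISystem) -> str:
    """Rough first-pass triage against the proposal's tiers.

    Classification in the proposal turns on intended purpose and
    modalities of use, so this only flags the likely track.
    """
    if system.intended_purpose in PROHIBITED_PURPOSES:
        return "prohibited"
    if system.area in HIGH_RISK_AREAS:
        return "high-risk: conformity assessment + EU database registration"
    return "lower-risk: transparency obligations may still apply"

print(triage(AISystem("resume screener", "candidate ranking", "employment")))
# -> high-risk: conformity assessment + EU database registration
```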
The focus on transparency in the proposed regulation is laudable and will change industry practice. Specifically, the new regulations would require thorough technical documentation, including a record of each system’s intended purpose and underlying assumptions.
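As a rough illustration of what machine-readable documentation of this kind might look like, here is a minimal sketch, loosely modeled on model-card practice; the field names and example values are ours, not terms from the regulation.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TechnicalDocumentation:
    """Minimal documentation record (field names are illustrative)."""
    system_name: str
    intended_purpose: str
    assumptions: list = field(default_factory=list)       # e.g., about input data
    known_limitations: list = field(default_factory=list)
    training_data_summary: str = ""

doc = TechnicalDocumentation(
    system_name="resume screener",
    intended_purpose="rank job applicants for human review",
    assumptions=["resumes are in English",
                 "labels reflect past hiring decisions"],
    known_limitations=["may encode historical hiring bias"],
    training_data_summary="10 years of anonymized applications",
)

# Serialize so the record can travel with the system across audits.
print(json.dumps(asdict(doc), indent=2))
```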
But the strategy of pre-classifying risk has a blind spot. It leads the Commission to miss a crucial feature of AI-related risk: that it is pervasive and emergent, often evolving in unpredictable ways after a system has been developed and deployed. Imposing strict procedures on a subset of AI systems and checking them mostly while they are still “in the lab” may not capture the risks that emerge from the interaction between AI systems and the real world, or from the evolution of their behavior over time. The Commission’s proposal contains provisions for post-market surveillance and monitoring, but these provisions appear weaker than the pre-deployment ones.
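To make the concern about emergent risk concrete, here is a minimal sketch of one kind of post-market check: comparing a deployed system’s recent output distribution against a baseline captured before deployment. The statistic and threshold here are illustrative assumptions on our part, not anything the proposal specifies.

```python
from collections import Counter

def distribution(outcomes):
    """Relative frequency of each predicted label."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {label: n / total for label, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two label distributions."""
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in labels)

# Baseline captured during the pre-deployment conformity assessment...
baseline = distribution(["approve"] * 70 + ["deny"] * 30)
# ...versus what the system is doing in the field months later.
live = distribution(["approve"] * 45 + ["deny"] * 55)

DRIFT_THRESHOLD = 0.15  # illustrative; would need domain-specific calibration
if total_variation(baseline, live) > DRIFT_THRESHOLD:
    print("behavioral drift detected: trigger re-assessment")
```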
As it stands, the Commission’s proposal relies heavily on the development of algorithmic auditing practices by so-called “notified bodies” and by the private sector as a whole. Auditing practices, ideally, should be consistent across the markets and geographies where an AI system is deployed; they should also be oriented toward the main requirements of so-called “trustworthy AI” and grounded in principles of equity and justice.
The need for consistency across markets
The spotlight is now on US regulators, as well as industry leaders. If they cannot deliver comparably consistent auditing in US markets, the resulting fragmentation will weaken the whole AI ecosystem.
Instead of playing regulatory ping-pong across the pond, leaders on both sides of the Atlantic would benefit from initiating a research- and stakeholder-led dialog to create a transnational ecosystem focused on maximizing the impact of AI risk identification and mitigation approaches. At the moment, such a transnational approach is hindered by different cultural approaches to regulation, strong tech lobbying, lack of consensus on what constitutes AI risk assessments and AI auditing, and very different litigation systems.
All these barriers can be overcome, and we can reap the real benefits of AI, if the European Commission’s proposal is taken as a cue to harmonize approaches across borders for the maximum protection of citizens. This dialog should focus on equity and impact, outline optimal procedures for effective risk and audit documentation, and identify what is needed from governments, civil society, and higher education to build and maintain a transnational ecosystem of AI risk assessment and auditing.
The benefits are obvious. Strong regulation would be matched by a strong technology research landscape. Rather than reconciling approaches after the fact, co-developing the regulatory approach from the outset and creating the preconditions for mutual learning would be far more effective. The renewed prospects for an enlightened transatlantic dialog on digital issues are a one-time opportunity to make this happen.
Mona Sloane is an Adjunct Professor at NYU’s Tandon School of Engineering and Senior Research Scientist at the NYU Center for Responsible AI.
Andrea Renda is Senior Research Fellow and Head of Global Governance, Regulation, Innovation & Digital Economy at the Centre for European Policy Studies.