European Commission publishes White Paper on Artificial Intelligence
10th March 2020
Throughout 2019, we saw a growing body of guidance, consultations and new laws emerging across the world. As this regulatory landscape begins to take shape, there is growing debate as to whether Artificial Intelligence (“AI”) should be regulated and, if so, how. The European Commission’s recently published White Paper on Artificial Intelligence (the “AI White Paper”) is arguably one of the strongest forays into widespread regulation and has certainly accelerated the conversation.
The AI White Paper outlines the European Commission’s position on AI developments by articulating the policy options that are being considered for the purpose of achieving the twin objectives of promoting the uptake of AI, while at the same time addressing the associated risks.
The AI White Paper is an important contribution to the global discussion regarding AI regulation. The introductory sections leave little doubt that the Commission intends to pursue an ambitious agenda with the goal of setting a global AI regulatory standard (perhaps comparable to the international impact of GDPR as a benchmark for data protection), and to promote a future of trustworthy and secure developments in AI.
We have summarised some of the key points from the AI White Paper below.
A common European approach
The White Paper expressly seeks to avoid the fragmentation of AI regulation within the single market. In so doing, it takes particular note of national initiatives such as the German Data Ethics Commission, the Danish Data Ethics Seal and the Maltese voluntary certification system for AI. The Commission’s position is clearly that such diverging national initiatives would endanger the common approach needed to achieve sufficient scale to influence the evolving regulatory discussions at a global level, not to mention increasing legal uncertainty for domestic innovation. Whilst the ambition to create a cohesive set of laws is clear, domestic legislation is already being brought into force, so this is potentially over-ambitious and, at best, any EU law will sit alongside some of its members’ laws. The likelihood, however, is that these EU laws will be far-reaching and will affect how AI is brought into use in Europe, both by EU member states and by countries wanting to do business in the EU, including the US, China and the UK.
The European data strategy
At the outset, it is important to note that the regulatory approach to AI is not intended to be a standalone issue and the White Paper is clear to position the question within the context of the broader European Data strategy (published on the same day as the AI White Paper). The policy therefore addresses two distinct issues:
- Enabling innovation through an “ecosystem of excellence” – including strategic funding, a focus on education and upskilling the workforce, and an increased focus on research centres to attract and retain international talent; and
- Fostering a future regulatory framework in compliance with European rules and rights through an “ecosystem of trust” that addresses both the concerns of citizens and the legal uncertainty faced by companies.
The overall strategy is clearly intended both to protect and to drive economic development in the emerging data economy. As such, the policy options are intended to address issues arising, in part, from the comparatively weak position in which Europe finds itself as a result of the low level of home-grown consumer applications and online platforms. This shortfall, in turn, results in a competitive disadvantage in terms of access to data to fuel innovation (particularly when compared with jurisdictions such as the US and China).
The goal here is clear: Europe intends to craft an AI strategy that allows it to benefit from developments as a creator and producer, not just a user. The Commission is therefore placing a notable focus on data relating to industrial and professional markets, where Europe enjoys a stronger position.
For the remainder of this note, we will focus on the proposed regulatory framework for AI, but it is interesting to consider the changing direction of regulatory objectives when framed by these economic and policy drivers.
Proposed framework for AI regulation
The AI White Paper builds on earlier efforts by incorporating findings from the AI Strategy published in 2018 and the respective High-Level Expert Groups’ reports on trustworthy AI and liability for Artificial Intelligence, both published in 2019.
The proposed framework for AI regulation contemplates both changes to existing legislation, required to ensure that AI is not excluded from the scope of standard legislation, and the implementation of AI-specific requirements in certain scenarios, as outlined below.
Liability regimes and adjustments to EU legislative framework
There currently exists an extensive body of legislation in the EU that addresses safety and liability issues, including sector specific rules, and while these remain (in principle) fully applicable irrespective of the technology used, the AI White Paper stresses the need to assess whether adjustments are required to account for AI specific risks. These include:
- Effective enforcement of existing legislation – the lack of transparency in AI systems makes it difficult to identify and prove breaches of law. Adjustments may therefore be needed in certain areas, for example to liability regimes.
- Limitations of existing legislation – it is uncertain whether product safety legislation applies to standalone software outside sectors with specific rules. Similarly, general safety legislation applies only to products and not to services based on AI technology. Both may therefore need to be reviewed to account for developments in AI applications.
- Changing functionality of AI systems – existing legislation focuses on safety concerns at the time a product is placed on the market and it does not anticipate the dynamic nature of many AI infused solutions where modifications, software updates and machine learning may give rise to new risks not present at the time the product was initially put on the market.
- Allocation of responsibility in the supply chain – the current regime places significant liability on the producer of a product placed on the market (including components such as AI systems). The rules do not, however, address scenarios where AI is added to products after they have been placed on the market.
- Changes to the concept of safety – the Commission notes that AI could give rise to risks that are simply not anticipated under current legislation. These may be linked to cyber threats, personal security risks and risks that arise as a result of loss of connectivity, among others. As mentioned above, such risks may evolve over time and are not necessarily present at the time the product is first placed on the market.
While the White Paper deals to a considerable extent with the risks associated with safety and liability regimes (as outlined above) there is, perhaps rather surprisingly, very little discussion of how GDPR is intended to fit within the anticipated regulatory environment for AI.
Risk-based approach and categories of risk
The AI White Paper highlights that the proposed regulatory framework should achieve its objectives without putting a disproportionate burden on small and medium-sized enterprises. The Commission has therefore proposed a risk-based approach to ensure regulation proportionate to the two broad categories of risk identified in the AI White Paper, namely:
- Risk to fundamental rights – including privacy, data protection and non-discrimination; and
- Risks relating to safety considerations and questions concerning the effective functioning of the liability regime.
To achieve a proportionate, risk-based approach to regulation, the White Paper proposes a two-step cumulative test to assess whether an AI application should be considered “high-risk” and thereby subject to more stringent requirements:
- First, the AI application is in a sector where “significant risk can be expected to occur”. These sectors would be identified and exhaustively listed; the paper mentions healthcare, transport, energy, and public sector areas such as asylum, migration, border controls, social security, and the judiciary; and
- Second, the AI application in the listed sector is used in a manner that gives rise to significant risks. In other words, AI applications in the listed sectors are not necessarily “high-risk” unless they are used in a manner that leads to significant risks. The paper gives examples such as AI applications that pose a risk of injury or death.
In addition to the above, the AI White Paper also contemplates exceptional circumstances where, due to the risks involved and regardless of the sector concerned, the AI application will always be considered “high-risk”. It calls for particular debate around facial recognition, for example.
Importantly, the purpose of the above test is to identify the applications of AI that should be subject to mandatory requirements; in principle, only these types of AI would be subject to the new regulatory framework for AI.
Mandatory requirements for high-risk AI
For the future regulatory framework for high-risk AI, the Commission anticipates a set of requirements (possibly further specified through dedicated standards) that take into account some of the guidelines from the High-Level Expert Groups, including, for example, requirements in relation to:
- the use and content of training data;
- record keeping in relation to documentation of programming and data used for training and testing;
- obligations in respect of information provision for deployment scenarios, as well as in interacting with citizens;
- technical robustness and accuracy;
- human oversight and autonomy; and
- specific requirements in respect of biometric information.
The White Paper anticipates a formal structure for conformity assessments against the mandatory requirements for high-risk AI applications. This process would broadly be comparable with the conformity assessments carried out on physical products made available on the EU market and would likely include procedures and measures such as certification, inspection and testing. Identified shortcomings in the assessments could then be remedied on a case-by-case basis.
AI applications that do not qualify as high-risk would not be subject to the mandatory requirements discussed above. However, the White Paper outlines the foundation for a voluntary labelling scheme that would allow operators to subject their AI solutions to similar or identical requirements to those governing the high-risk category, in order to signal higher standards of compliance.
The Commission clearly anticipates that the future AI regime should allocate obligations to the “actor(s)” best placed to address the potential risks. Liability may therefore flow between parties depending on the stage in the lifecycle of the AI application. This is without prejudice to liability towards end-users under the current product liability regime. Importantly, it is clear that the future regulatory framework is intended to have extraterritorial effect: it would apply to all operators offering AI-enabled solutions in the EU, regardless of whether they are established in the EU.
To conclude, the European approach to AI regulation aims to promote innovation and ethical AI across the single market. Although the AI White Paper has a long way to go before the contemplated regulation is ready for implementation, it signals a strong willingness to act and, following the global influence of GDPR on data protection developments, it is not a stretch to imagine that Europe may well succeed in setting a global standard.
The White Paper is currently open for public consultation, which closes on 19 May 2020.
For more information please contact Charlotte Walker-Osborn or Erica Werneman Root.