The EU AI Act: Between Scylla and Charybdis? – Nick Haddad

This article explores the framework for artificial intelligence regulation put in place by the EU AI Act.

Introduction

The race to harness, and some may say exploit, Artificial Intelligence (“AI”) is gathering pace globally. This drive comes in tandem with the rise of public recognition and interest in AI. The European Union (“EU”) is no doubt using its significance in the international economy to mould the digital space. Its ‘Digital Strategy’ has witnessed the introduction of new laws, the latest of which is the EU AI Act.

The following analysis will attempt to show that the EU, and other significant economies, are balancing a myriad of public-policy and economic considerations when regulating AI.

This article will proceed in four parts. Part I provides the context in which the AI Act fits within the EU’s Digital Strategy, offering comparisons with regulatory efforts in the United States. Part II will outline the key provisions of the AI Act. Part III will explore the consequences and criticisms of the AI Act. Part IV will offer concluding remarks, noting the opportunities presented to the UK either to integrate the provisions of the AI Act or to diverge from the EU regulatory framework. The analysis presented will demonstrate that the EU AI Act contains shortfalls for the consumer, and that the UK should diverge from its stultifying regulatory framework in favour of a sector-specific, pro-innovation model.

Regulators are walking a legal and economic tightrope with regards to AI. The framework offered in the EU’s AI Act is one option, whereas the UK could diverge by implementing business- and consumer-friendly rules. 

Part I: CONTEXT

The EU’s AI Act forms part of its ‘Digital Strategy’, providing a new regulatory framework for the digital and IT sectors of the economy. Notably, the GDPR, applicable from 2018, was the genesis of an attempt to provide comprehensive digital regulation. The GDPR set the global benchmark for data protection – the UK retained this law after exiting the EU.

The Commission President, Ursula von der Leyen, proposed an approach to the “human and ethical” consequences of AI development in 2019. Whilst the draft text has been subject to debate and consultations, the EU has passed other laws regarding the digital economy. The Digital Services Act (“DSA”) and the Digital Markets Act (“DMA”) of 2022 are widely viewed as innovative pieces of legislation to regulate digital marketplaces as well as online search engines.

The USA, the country most influential in AI development, has a patchier legal framework. Its proposed Algorithmic Accountability Act represents, comparatively, a light-touch approach and defers to existing agency regulatory authority. It is notable that the EU takes a more ‘risk-based approach’ to the regulation of AI and technology. Further, the legislative efforts of the EU, particularly the DSA and DMA, reflect a broader European consensus regarding search engines and e-commerce, aspects that the US Congress has barely debated.

Legislative responses from lawmakers across the globe are microcosms of the wider academic, economic and moral debates regarding AI. For some, AI bore the mark of Cain from the beginning: threatening mass surveillance, economic dislocation, and political destabilisation. For others, it presents a great opportunity, being the vehicle of a new industrial revolution. The more moderate view aims to harness the opportunities of AI whilst managing potential risks. Regarding these potential risks, Mark Coeckelbergh notes the material risks of AI development, whereas Jack Balkin discusses the immaterial risks, such as the impact of AI on free speech.

Part II: CONTENT AND FURTHER CONTEXT

The EU’s AI Act, derived from Article 114 of the Treaty on the Functioning of the European Union (TFEU), creates a new framework for the use and development of AI in the EU. The Act seeks to create a regulatory framework that can couch AI within the pre-existing system of rights and protections for consumers and citizens, involving the protection of free speech, the right to privacy and various rule-of-law considerations.

Defining AI

The AI Act adopts, more or less, the OECD definition of AI. Article 3(1) provides the five requirements of an AI system, sketched schematically after the list:

 [1] a “machine-based system”;
 [2] designed to operate with varying levels of autonomy;
 [3] [that] may exhibit adaptiveness after deployment, and that,
 [4] for explicit or implicit objectives, infers, from the input it receives, how to generate outputs; [and]
 [5] that can influence physical or virtual environments.
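
Read cumulatively, limbs [1], [2], [4] and [5] operate as conditions, whereas limb [3]’s “may exhibit adaptiveness” is permissive rather than mandatory. The following Python sketch illustrates that reading; the attribute names are hypothetical and purely illustrative, not drawn from the Act.

```python
# A sketch of one reading of the Article 3(1) definition. Attribute
# names are hypothetical; this is an illustration, not legal advice.

def is_ai_system(s) -> bool:
    return (
        s.machine_based                    # [1] a machine-based system
        and s.varying_autonomy             # [2] operates with varying autonomy
        and s.infers_outputs_from_input    # [4] infers outputs for objectives
        and s.influences_environments      # [5] physical or virtual environments
    )
    # [3] adaptiveness after deployment "may" be exhibited; on this
    # reading it is not a precondition of the definition.
```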

The terminology is deliberately broad in order to cast the widest net. These provisions seek to encompass ‘Predictive Artificial Intelligence’ (“PredAI”) and ‘Generative Artificial Intelligence’ (“GAI”). The former is commonly used in various industries to improve efficiencies, whereas the latter is used to create new media by synthesising pre-existing data (OpenAI’s ChatGPT is a well-known example of GAI).

The legislation provides diverging regulatory frameworks according to the categorisation of the AI system. There is a four-fold taxonomy:

• Prohibited AI practices;
• High-risk AI systems;
• Certain AI systems with a transparency risk;
• Minimal or no risk systems.

1. Prohibited AI Practices

Taking each in turn, prohibited AI practices as stipulated in Article 5(1) of the AI Act concern eight key scenarios, where an AI system possesses or employs:

• subliminal techniques;
• a capability to exploit a natural person’s vulnerabilities;
• characteristics of ‘social scoring’;
• the classification of persons for crime prediction “based solely on profiling”;
• capability for expanding facial recognition databases;
• an ability to infer emotions in the workplace or educational institutions;
• an ability to categorise persons biometrically so as to infer sensitive characteristics; or
• real-time biometric identification in publicly accessible places for purposes of law enforcement;

then it will be prohibited from the EU.

Realistic examples of prohibited practices would be if streaming services used advanced data analytics to generate highly personalised advertisements or emotionally charged content that could manipulate the user’s feelings. Similarly, an employer or organisation analysing job applicants, or assessing their behaviour, on factors unrelated to job performance would fall foul.

2. High-risk AI systems

Second, if a system is deemed high-risk, its continued use in the EU entails essential requirements. Article 6(1) provides that these regulations apply to AI in two situations: first, where the system is itself a product, or the safety component of a product, covered under harmonised rules on health and safety; second, where an AI system falls solely within fixed areas such as the operation of critical infrastructure, education and vocational training, or law enforcement. AI systems under the high-risk category are exempted from additional regulatory requirements where, per Article 6(3), the system does not pose “a significant risk of harm to the health, safety or fundamental rights of natural persons”.
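
Following the summary above, the Article 6 classification can be sketched as a two-limb gateway subject to the Article 6(3) carve-out. A minimal Python sketch, assuming hypothetical boolean fields and compressing the statutory conditions considerably:

```python
# Simplified sketch of Article 6 high-risk classification, mirroring
# the two situations summarised above. Field names are hypothetical
# and the statutory conditions are heavily compressed.

def is_high_risk(system) -> bool:
    gateway = (
        system.product_or_safety_component_under_harmonised_rules  # situation 1
        or system.within_fixed_area   # situation 2, e.g. critical infrastructure
    )
    # Article 6(3) carve-out: no "significant risk of harm to the health,
    # safety or fundamental rights of natural persons".
    return gateway and not system.poses_no_significant_risk
```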

To comply with the requirements of the AI Act, high-risk AI systems must undertake the following steps:
• comply with standards regarding ‘accuracy, robustness and cybersecurity’ (Article 15) – the provision merely states that the Commission shall co-operate with benchmarking authorities;
• establish a ‘quality management system’ (Article 17);
• maintain documentation to show compliance (Article 18);
• undergo a conformity assessment to determine whether the additional requirements have been met and conform to criteria of various, unspecified ‘harmonised standards’ (Articles 32 and 40); and
• in some circumstances, particularly the use of real-time biometric identification in publicly accessible spaces, complete a fundamental rights impact assessment (FRIA).

3. Transparency risk systems

Third, systems deemed to have a ‘transparency risk’ are required to comply with EU requirements on transparency. Article 50 of the AI Act provides that a user must be ‘informed’ that he is interacting with an AI system and whether the output has been generated or manipulated. The key exception is for output used lawfully for the prevention and detection of criminal offences.

4. Minimal or no risk systems

Fourth, systems that do not fall under the previous three categories are deemed minimal or no risk, entailing no obligations under the AI Act. The guidance suggests merely that deployers could adopt additional, voluntary codes of conduct. The idea, and ideal, behind this provision is that most AI systems will fall under this category. Notable examples include spam filters and AI-enabled recommender systems. The practical distinction between a minimal- or no-risk AI system and the other categories ostensibly concerns the input values used within its function; the former does not use highly personalised information.
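
Taken together, the four tiers map onto escalating regulatory consequences. A minimal sketch of that mapping follows; the labels and one-line summaries are my own compression of the provisions discussed above, not the Act’s wording.

```python
# The AI Act's four-tier taxonomy, compressed to one headline
# consequence per tier. Labels and summaries are illustrative.

RISK_TIERS = {
    "prohibited":        "banned from the EU (Article 5)",
    "high_risk":         "conformity assessment, quality management system, "
                         "documentation, possibly a FRIA (see list above)",
    "transparency_risk": "users must be informed they are interacting with AI "
                         "and that output is generated/manipulated (Article 50)",
    "minimal_or_no":     "no obligations; voluntary codes of conduct",
}

for tier, consequence in RISK_TIERS.items():
    print(f"{tier}: {consequence}")
```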

Enforcement mechanisms

Aside from the key provisions, the AI Act provides a new method of enforcement, a notable development from the GDPR, whose enforcement was criticised as ostensibly feeble. At Union level, the AI Office shall develop expertise and capability in the field, whilst the EU AI Board will contain representatives of each Member State to enforce the AI Act. Further, each Member State must establish a market surveillance authority and a notifying authority per Article 70(1).

Sanctions

Under Article 99, the AI Act provides a tiered system of penalties for infringements of its provisions. Though obliged to take account of the AI Act’s guidelines, each Member State will establish a penalty system, the latest date of notification for which is 2 August 2026.

The potential maximum penalties contained in the AI Act are higher than those in other EU laws forming part of the Digital Strategy. Per Article 99(3), the most severe penalty is an administrative fine of up to €35 million or seven per cent of total worldwide annual turnover for the preceding financial year, whichever is higher; by comparison, the GDPR carries a maximum of €20 million or four per cent of worldwide annual turnover.

Broadly, the AI Act provides tiers for the maximum administrative fine for particular infringements:
• For prohibited AI practices, the most severe fine may be administered;
• A maximum of €15 million or three per cent of annual turnover for breaches of the obligations of providers (Article 16), authorised representatives (Article 22), importers (Article 23), distributors (Article 24), deployers (Article 26), and notified bodies (Articles 31, 33 and 34);
• €7.5 million or one per cent of annual turnover for the supply of incorrect or misleading information to notified bodies and national competent authorities in response to a request;
• For EU institutions, agencies and bodies, up to €1.5 million for non-compliance with the prohibition of AI practices, or €750,000 for non-compliance with any other requirement or obligation under the Act.

The key criterion binding all Member States in the design of their domestic penalty systems is the need for them to be “effective, proportionate and dissuasive”, per Article 99(1).
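
The ‘whichever is higher’ mechanics of these caps can be made concrete with a short worked sketch; the tier names and the function are hypothetical, while the figures are the statutory maxima quoted above.

```python
# Worked sketch of the Article 99 fine caps for undertakings: the fixed
# amount or the turnover percentage, whichever is higher. Tier names
# are illustrative; figures are the maxima quoted above.

FINE_CAPS = {
    "prohibited_practices":   (35_000_000, 0.07),  # Article 99(3)
    "operator_obligations":   (15_000_000, 0.03),  # Article 99(4)
    "misleading_information": (7_500_000,  0.01),  # Article 99(5)
}

def max_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
    fixed_cap, pct = FINE_CAPS[tier]
    return max(fixed_cap, pct * worldwide_annual_turnover_eur)

# Example: an undertaking with EUR 1bn turnover committing a prohibited
# practice faces a cap of max(EUR 35m, 7% of EUR 1bn) = EUR 70m.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0
```

Note that for SMEs, Article 99(6) applies the lower of the two amounts rather than the higher.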

Part III: CONSEQUENCES AND CRITICISM

There are certain aspects of the AI Act that put the consumer and the everyman front and centre.

The category of prohibited AI practices sets a benchmark for governments and political organisations globally: the prohibited list imposes obligations on the state as to how it may deploy AI on its citizens. The AI Act seeks to respond to macro-level considerations for society, particularly security and surveillance. The additional obligations on high-risk systems and the transparency obligations on those posing a transparency risk seek to curtail potential abuses against the user. The regulations provide an important correction to the asymmetry between the developer and the user. Furthermore, the importance of complying with the obligations is reinforced by the heavy fines for violating the provisions of the AI Act.

There are, however, pertinent criticisms that one can make of the AI Act.

First, there is a lacuna in the legislation regarding biometric systems; whilst the AI Act bans real-time biometric identification for law enforcement, it does not prevent EU Member States selling biometric data to oppressive regimes, nor does it ban the use of AI in post-factum biometric systems.

Second, the transparency requirements for AI systems are seen as relatively cautious, particularly from an online safety point of view. Oxford legal academic Sandra Wachter suggests that the AI Act refrains from placing obligations on deployers of ‘high-risk’ systems; rather, most oversight responsibility is placed on market authorities and the large-scale providers. Further, she notes that the requirements for systems containing a ‘transparency risk’ are relatively light touch, particularly as GAI systems can propagate unsavoury content.

Third, the exact scope of harmonised technical standards for AI product liability remains unclear. This is a pertinent criticism given that there are no guarantees as to what ‘standard’ will be set. The AI Act merely provides that the Commission will issue requests for standardisation in accordance with Regulation (EU) No 1025/2012 (Article 40, AI Act); this provides that European Standardisation Organisations, AI providers and businesses agree technical specifications. Currently, the EU is formulating the contents of the Product Liability Directive and the Artificial Intelligence Liability Directive. Hence, the scope and substance of the standards to be adopted may take years to clarify. Substantively, this provision appears to reflect an increasing trend in the EU to delegate regulatory decision-making. Unless the Commission were to provide common specifications, private bodies will, in all likelihood, set the standards.

Fourth, there is the broader criticism of the EU’s Digital Strategy: that it focuses heavily on potential risks and less on innovation. The EU has taken the decision to create a distinctive regulatory approach to AI: a tiered regulatory framework to combat perceived material and monetary damage from AI. As stipulated above, the EU has employed macro-level policy considerations as part of a broader Digital Strategy. Investment and deployment decisions will be taken as a result of the EU’s approach; as noted by Nitzberg and Zysman, early regulatory frameworks create enduring trajectories of technological and political-economy development. There are fears that the AI Act renders the EU uncompetitive in the race to develop AI, compared with laissez-faire jurisdictions like the UAE.

Aside from the holistic approach provided within the AI Act at present, a sector-specific regulatory approach could be an alternative, balancing the opportunities for innovation with the need to protect consumers and ensure their right to privacy is protected. By developing a corpus of legislation that specifies regulations for different sectors, each sector may establish codes of conduct, ethical principles and technical standards. In the UK, for instance, the different regulatory objectives of healthcare regulators and the Financial Conduct Authority emerge in their approaches to AI: the former are keen to implement AI, ethically, in the diagnostic process, whereas the latter wishes to combat the potential for disadvantaging consumers, given the asymmetric data use between sellers and consumers. The disadvantage of a piecemeal regulatory approach is that it leaves omissions and discrepancies in the legal framework, potentially undermining the objectives of regulators. A hybrid model between the two regulatory approaches is perhaps preferable.

Notably, the AI Act does not appear, prima facie, to harmonise between the obligations of the GDPR and the AI Act. To name a few instances:
• Whereas under the GDPR any processing of personal data likely to result in a high risk to individuals requires a data protection impact assessment (DPIA), the AI Act requires only that high-risk operators conduct the more onerous FRIA;
• Unlike the GDPR, where consumers (‘data subjects’) have explicit enforcement mechanisms and regulations regarding the control of their data, AI users are not provided rights under the AI Act;
• Perhaps most strikingly, the AI Act does not contain explicit obligations for data minimisation or requirements for assessing high-risk AI systems to enhance the rights of users.

Factually, the obligations of data protection will likely overlap with the use of AI, whose systems have unprecedented potential to gather and acquire data. Given the GDPR’s presence, it could be assumed that many GAI platforms have developed in accordance with it. Brown, Truby and Ibrahim note, however, that the inherent properties of many AI systems contradict the principles of the GDPR, specifically the use of big data analytics and the GDPR’s requirement for consent. To articulate and enforce the obligations under the AI Act and the GDPR harmoniously, regulators will need to rethink the enforcement of the GDPR.

At its best, the law could be used to reap labour-market benefits and assess risks. By drawing a line around the invasive aspects of GAI, the AI Act could reinforce particular liberties that are vulnerable, such as free speech, data protection and the right to private life.

PART IV: THE UK AND CONCLUSION

The UK, now outside of the EU, has no obligation to implement the provisions of the AI Act into domestic law. So far, the UK government’s stated approach is explicitly pro-innovation, with light-touch and sector-specific regulation. At the time of writing, there is no intention to introduce a general AI Act into Parliament in the near future.

A light-touch approach has its benefits: preventing the accretion of unnecessary rules in order to provide the climate for innovation. It would, however, see important macro policy considerations neglected, including the use of data from biometric systems, security, surveillance and the non-material harms of AI content. Further, it is unlikely that the UK would want to diverge too greatly from a common framework; the EU AI Act will provide a blueprint for more detailed regulations.

If the UK government wishes to develop a distinct regulatory framework for AI, it has several challenges going forward:

1) Determining the exact balance in its priorities; and
2) Walking the tightrope between a pro-innovation approach and risk minimisation.

The above analysis shows that the utility and efficacy of AI regulation is determined by a complex series of policy considerations. The UK government could improve on the omissions of the AI Act to maximise the benefits of AI for consumers; it has the freedom to create an AI strategy with regulations addressing sector-specific concerns. This appears to be the best way to maximise the benefits of AI, whilst placing consumers at the forefront. The EU AI Act, in its current format, has not struck the right balance for all stakeholders concerned.

ICO issues provisional view to fine Clearview AI Inc over £17 million

The Information Commissioner’s Office (“ICO”) has announced its provisional intent to impose a fine of over £17m on Clearview AI Inc.

The BBC reports that the firm’s database holds over 10bn images. The ICO has issued a provisional notice to stop further processing of the personal data of people in the UK and to delete any such data, following alleged serious breaches of the UK’s data protection laws.

In a joint investigation with the Australian Information Commissioner (“AIC”), the ICO concluded that the data, some scraped from the internet, was in some instances being processed unlawfully in the case of UK persons.

Clearview AI Inc’s services were being used on a free trial basis by some law enforcement agencies. This has been confirmed to no longer be the case.

The ICO’s preliminary view is that Clearview AI Inc appears to have failed to comply with UK data protection laws in several ways including by:

  • failing to process the information of people in the UK in a way they are likely to expect or that is fair;
  • failing to have a process in place to stop the data being retained indefinitely;
  • failing to have a lawful reason for collecting the information;
  • failing to meet the higher data protection standards required for biometric data (classed as ‘special category data’ under the GDPR and UK GDPR);
  • failing to inform people in the UK about what is happening to their data; and
  • asking for additional personal information, including photos, which may have acted as a disincentive to individuals who wish to object to their data being processed.

Information Commissioner Elizabeth Denham commented:

“I have significant concerns that personal data was processed in a way that nobody in the UK will have expected. It is therefore only right that the ICO alerts people to the scale of this potential breach and the proposed action we’re taking. UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people’s legal protections are respected and complied with.

Clearview AI Inc’s services are no longer being offered in the UK. However, the evidence we’ve gathered and analysed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people’s information without their knowledge. We therefore want to assure the UK public that we are considering these alleged breaches and taking them very seriously.”

This is one of the largest fines proposed under the UK GDPR to date. Clearview now has the opportunity to respond, both in the UK and Australia (the AIC has found breaches of Australian privacy laws).

It is unsurprising that its database, said to include images scraped from social media, has drawn the attention of regulators. Facial recognition services have been at the forefront of recent scrutiny of data analytics and data protection enforcement.

The ICO press release can be found here and the AIC press release here.

The previous statement of the ICO on the conclusion of the joint investigation can be found here.

Citation: 5RB: European Court of Human Rights upholds Article 8 privacy breach in relation to reputation of a dead person

In a case that builds upon pre-existing case law on the rights of the deceased, the European Court of Human Rights has found an Article 8 breach in relation to news articles published about a deceased Roman Catholic priest.

ML v Slovakia 34159/17 concerned a number of articles published by three Slovakian newspapers about the historic sex offence convictions of the applicant’s son.

The Court found that the articles were inaccurate and sensationalist, citing: “However, it follows from what has been said above that the domestic courts failed to carry out a balancing exercise between the applicant’s right to private life and the newspaper publishers’ freedom of expression in conformity with the criteria laid down in the Court’s case-law.”

Concluding, the Court stated, applying Article 8:

“…dealing appropriately with the dead out of respect for the feelings of the deceased’s relatives falls within the scope of Article 8 of the Convention”.

Furthermore, the Court stated a clear and concise view on the journalistic integrity of the reporting: “Although the journalists must be afforded some degree of exaggeration or even provocation, the Court considers that the frivolous and unverified statements about the applicant’s son’s private life must be taken to have gone beyond the limits of responsible journalism” (para. 47).

5RB has an excellent case comment.

The Schrems II case – EU-US data transfers left in question

The European Court of Justice has handed down its highly anticipated ruling in the Schrems II case. The case considered the validity of the EU-US Privacy Shield and the efficacy of Standard Contractual Clauses (“SCC”) as data transfer protection mechanisms.

In this landmark case it was found that the EU Commission’s adequacy decision on the EU-US Privacy Shield framework was invalid. This leaves the mechanism for conducting EU-US data transfers in question. The matter may be covered by recent discussions between the UK and US around entering into a separate data sharing agreement. However, in the interim a transitional mechanism is sorely needed, alongside guidance for data processors, to clarify how data sharing between the countries can be regulated and data subjects’ rights safeguarded.

The SCC regime was affirmed as valid; however, it was suggested that companies and regulators undertake a case-by-case analysis of risk. In particular, it was highlighted that such an assessment should take place where government access to data is mandated. This is a highly topical issue in the US given current efforts to put in place a federal data protection regime.

For more details on the Schrems II case, see:

The IAPP

INFORRM

Law firm Bird & Bird

The ICO’s press release

The right to be forgotten does not apply to search engine results globally

On 24 September 2019 the European Court of Justice (“ECJ”) handed down judgment in Google v CNIL C-507/17. The effect of the case is that right to be forgotten requests need only be applied to the domain names of Member States, not globally. The case therefore has implications for the processing and effectiveness of right to be forgotten requests, particularly for requestors who seek de-listing of search results across multiple non-EU jurisdictions. Notably, the ruling limits the administrative burden upon search engine operators.


Revisiting the right to be forgotten, the NT1 and NT2 case

The right to be forgotten, or right to erasure, under data protection legislation, established in the Google Spain case, affords individuals significant protection over information about them. In this post, we consider the seminal case of NT1 and NT2, which is illustrative of this fact.

Look out for the new incoming ePrivacy Regulation and its GDPR integration

The European Data Protection Board issued a statement on 13 March 2019 urging the European Authorities to implement the new ePrivacy Regulation (the “Regulation”).

The Regulation itself sits alongside the existing GDPR framework and focuses on email marketing and cookies consent.

Debate has been generated around the extent to which the Regulation and the GDPR practically sit alongside each other, to ensure that the now onerous data protection regime does not duplicate obligations. The Panopticon Blog has an excellent post covering this issue from Robin Hopkins.