This article explores the framework for artificial intelligence regulation put in place by the EU AI Act.
Introduction
The race to harness, and some may say exploit, Artificial Intelligence (“AI”) is gathering pace globally. This drive comes in tandem with the rise of public recognition and interest in AI. The European Union (“EU”) is no doubt using its significance in the international economy to mould the digital space. Its ‘Digital Strategy’ has witnessed the introduction of new laws, the latest of which is the EU AI Act.
The following analysis will attempt to show that the EU, and other significant economies, are balancing a myriad of public-policy and economic considerations when regulating AI.
This article proceeds in four parts. Part I sets out the context in which the AI Act fits within the EU’s Digital Strategy, offering comparisons with regulatory efforts in the United States. Part II outlines the key provisions of the AI Act. Part III explores the consequences and criticisms of the AI Act. Part IV offers concluding remarks, noting the opportunities presented to the UK either to integrate the provisions of the AI Act or to diverge from the EU regulatory framework. The analysis will demonstrate that the EU AI Act contains shortfalls for the consumer, and that the UK should diverge from its stultifying regulatory framework in favour of a sector-specific, pro-innovation model.
Regulators are walking a legal and economic tightrope with regard to AI. The framework offered in the EU’s AI Act is one option, whereas the UK could diverge by implementing business- and consumer-friendly rules.
Part I: CONTEXT
The EU’s AI Act forms part of its ‘Digital Strategy’, providing a new regulatory framework for the digital and IT sectors of the economy. Notably, the GDPR, which took effect in 2018, was the genesis of the attempt to provide comprehensive digital regulation. The GDPR provided the global benchmark for data protection – the UK retained this law after exiting the EU.
The Commission President, Ursula von der Leyen, proposed an approach to the “human and ethical” consequences of AI development in 2019. Whilst the draft text was subject to debate and consultation, the EU passed other laws regulating the digital economy. The Digital Services Act (“DSA”) and the Digital Markets Act (“DMA”) of 2022 are widely viewed as innovative pieces of legislation regulating digital marketplaces as well as online search engines.
The USA, the country most influential in AI development, has a patchier legal framework. Its proposed Algorithmic Accountability Act represents, comparatively, a light-touch approach built around existing agency regulatory authority. It is notable that the EU takes a more ‘risk-based approach’ to the regulation of AI and technology. Further, the legislative efforts of the EU, particularly the DSA and DMA, reflect a broader European consensus regarding search engines and e-commerce, aspects that the US Congress has barely debated.
Legislative responses from lawmakers across the globe are microcosms of the wider academic, economic and moral debates regarding AI. For some, AI bore the mark of Cain from the beginning: threatening mass surveillance, economic dislocation, and political destabilisation. For others, it presents a great opportunity, being the vehicle of a new industrial revolution. The more moderate view aims to harness the opportunities of AI whilst managing potential risks. Regarding these potential risks, Mark Coeckelbergh notes the material risks of AI development, whereas Jack Balkin discusses the immaterial risks, such as the impact of AI on free speech.
Part II: CONTENT AND FURTHER CONTEXT
The EU’s AI Act, which takes its legal basis from Article 114 of the Treaty on the Functioning of the European Union (TFEU), creates a new framework for the use and development of AI in the EU. The Act seeks to create a regulatory framework that can couch AI within the pre-existing system of rights and protections for consumers and citizens, involving the protection of free speech, the right to privacy and various rule-of-law considerations.
Defining AI
The AI Act adopts, more or less, the OECD definition of AI. Article 3(1) sets out the five elements of an AI system, namely:
[1] a “machine-based system”;
[2] designed to operate with varying levels of autonomy;
[3] [that] may exhibit adaptiveness after deployment, and that,
[4] for explicit or implicit objectives, infers, from the input it receives, how to generate outputs; [and]
[5] that can influence physical or virtual environments.
The terminology is deliberately broad in order to cast the widest net. These provisions seek to encompass ‘Predictive Artificial Intelligence’ (“PredAI”) and ‘Generative Artificial Intelligence’ (“GAI”). The former is commonly used in various industries to improve efficiencies, whereas the latter is used to create new media by synthesising pre-existing data (OpenAI’s ChatGPT is a prominent example of GAI).
The legislation provides diverging regulatory frameworks according to the categorisation of the AI system. There is a four-fold taxonomy:
• Prohibited AI practices;
• High-risk AI systems;
• Certain AI systems with a transparency risk;
• Minimal or no risk systems.
1. Prohibited AI Practices
Taking each in turn, the prohibited AI practices stipulated in Article 5(1) of the AI Act concern eight key scenarios. Where an AI system possesses or employs:
• subliminal techniques;
• a capability to exploit a natural person’s vulnerabilities;
• characteristics of ‘social scoring’;
• the classification of persons for crime prediction “based solely on profiling”;
• capability for expanding facial recognition databases;
• an ability to infer emotions in the workplace or educational institutions;
• an ability to categorise persons on the basis of biometric data so as to infer sensitive characteristics; or
• real-time biometric identification in publicly accessible places for purposes of law enforcement;
then it will be prohibited in the EU.
Realistic examples of prohibited practices would be a streaming service using advanced data analytics to generate highly personalised advertisements or emotionally charged content designed to manipulate the user’s feelings. Similarly, an employer or organisation analysing job applicants, or assessing employee behaviour, on factors unrelated to job performance would fall foul of these provisions.
2. High-risk AI systems
Second, if a system is deemed high-risk, essential requirements must be met for its use to continue in the EU. Article 6 provides that these requirements apply to AI in two situations: first, where the system is a product, or the safety component of a product, covered by harmonised rules on health and safety; second, where the system is intended for use in fixed areas such as the operation of critical infrastructure, education and vocational training, or law enforcement. AI systems under the high-risk category are exempted from the additional regulatory requirements where, per Article 6(3), the system does not pose “a significant risk of harm to the health, safety or fundamental rights of natural persons”.
To comply with the requirements of the AI Act, providers of high-risk AI systems must take the following steps:
• comply with standards regarding ‘accuracy, robustness and cybersecurity’ (Article 15) – the provision merely states that the Commission shall co-operate with benchmarking authorities;
• put in place a ‘quality management system’ (Article 17);
• maintain documentation to show compliance (Article 18);
• undergo a conformity assessment to determine whether the additional requirements have been met and conform to criteria of various, unspecified ‘harmonised standards’ (Articles 32 and 40); and
• in some circumstances, particularly the use of real-time biometric identification in publicly accessible spaces, undergo a fundamental rights impact assessment (FRIA).
3. Transparency risk systems
Third, systems deemed to have a ‘transparency risk’ are required to comply with EU requirements on transparency. Article 50 of the AI Act provides that a user must be informed that they are interacting with an AI system, and that output which has been artificially generated or manipulated must be disclosed as such. The key exception is for output used lawfully for the prevention and detection of criminal offences.
4. Minimal or no risk systems
Fourth, systems that do not fall under the previous three categories are deemed to be of minimal or no risk, entailing no obligations under the AI Act. The guidance suggests merely that deployers could adopt additional, voluntary codes of conduct. The idea, and ideal, behind this provision is that most AI systems will fall under this category. Notable examples of AI systems with minimal or no risk include spam filters and AI-enabled recommender systems. The practical distinction between a minimal- or no-risk system and the other categories ostensibly concerns the inputs used within its function; the former does not use highly personalised information.
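To draw the four categories together, the sketch below is an illustrative, non-authoritative summary in Python of the tiered structure and the broad character of the obligations attached to each tier. The tier names and obligation descriptions are simplified paraphrases of the provisions discussed above, not the text of the Act, and the sketch is not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified paraphrase of the AI Act's four-fold taxonomy."""
    PROHIBITED = "prohibited AI practices"            # Article 5
    HIGH_RISK = "high-risk AI systems"                # Article 6 and following
    TRANSPARENCY_RISK = "transparency-risk systems"   # Article 50
    MINIMAL_RISK = "minimal or no risk systems"

# Illustrative mapping from tier to the broad obligations discussed above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: [
        "may not be placed on the market or used in the EU",
    ],
    RiskTier.HIGH_RISK: [
        "accuracy, robustness and cybersecurity standards (Art. 15)",
        "quality management system (Art. 17)",
        "documentation demonstrating compliance (Art. 18)",
        "conformity assessment against harmonised standards (Arts. 32, 40)",
        "fundamental rights impact assessment in some circumstances",
    ],
    RiskTier.TRANSPARENCY_RISK: [
        "inform users they are interacting with AI; label generated or manipulated output (Art. 50)",
    ],
    RiskTier.MINIMAL_RISK: [
        "no mandatory obligations; voluntary codes of conduct only",
    ],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the simplified obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}:")
        for duty in obligations_for(tier):
            print(f"  - {duty}")
```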
Enforcement mechanisms
Aside from the key provisions, the AI Act provides a new method of enforcement; a noticeable development from the GDPR, which was criticised for its ostensibly feeble enforcement. At Union level, the AI Office shall develop expertise and capability in the field, whilst the EU AI Board will contain representatives of each Member State to enforce the AI Act. Further, each Member State must establish a market surveillance authority and a notifying authority per Article 70(1).
Sanctions
Under Article 99, the AI Act provides a tiered system of penalties for infringements of its provisions. Though obliged to take account of the AI Act’s guidelines, each Member State will establish a penalty system, the latest date of notification for which is 2 August 2026.
The potential maximum penalties contained in the AI Act are higher than those in other EU laws forming part of the Digital Strategy. Per Article 99(3), the most severe penalty is an administrative fine of up to €35 million or seven per cent of total worldwide annual turnover for the preceding financial year, whichever is higher; by comparison, the GDPR’s ceiling is €20 million or four per cent of worldwide annual turnover.
Broadly, the AI Act provides tiers for the maximum administrative fine for particular infringements:
• For prohibited AI practices, the most severe fine may be administered;
• A maximum of €15 million or three per cent of worldwide annual turnover for breaches of the obligations of providers (Article 16); authorised representatives (Article 22); importers (Article 23); distributors (Article 24); deployers (Article 26); and notified bodies (Articles 31, 33 and 34);
• €7.5 million or one per cent of worldwide annual turnover for the supply of incorrect or misleading information to notified bodies and national competent authorities in response to a request;
• For EU institutions, bodies and agencies, up to €1.5 million for engaging in prohibited practices, or €750,000 for non-compliance with any other requirements or obligations under the Act.
The key criterion binding all Member States in the design of its domestic penalty system is the need for it to be “effective, proportionate and dissuasive”, per Article 99(1).
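As a rough illustration of how these tiers scale with the size of the undertaking, the short Python sketch below computes the maximum administrative fine for each tier as the higher of the fixed cap and the relevant percentage of worldwide annual turnover. The turnover figure is hypothetical, and the calculation deliberately ignores refinements in Article 99 (for example, the distinct treatment of smaller enterprises).

```python
def max_fine_eur(fixed_cap: float, turnover_pct: float, worldwide_turnover: float) -> float:
    """Maximum administrative fine for a tier: the higher of the fixed cap and the
    stated percentage of total worldwide annual turnover (simplified from Art. 99)."""
    return max(fixed_cap, turnover_pct * worldwide_turnover)

# Hypothetical provider with EUR 2bn worldwide annual turnover (illustrative only).
turnover = 2_000_000_000

print(max_fine_eur(35_000_000, 0.07, turnover))   # prohibited-practices tier -> 140,000,000
print(max_fine_eur(15_000_000, 0.03, turnover))   # operator-obligations tier  -> 60,000,000
print(max_fine_eur(7_500_000, 0.01, turnover))    # misleading-information tier -> 20,000,000
```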
Part III: CONSEQUENCES AND CRITICISM
There are certain aspects of the AI Act that put the consumer and the everyman front and centre.
The category of prohibited AI practices sets a benchmark for governments and political organisations globally: the prohibited list constrains how the state may deploy AI on its citizens. The AI Act seeks to respond to macro-level considerations for society, particularly security and surveillance. The additional obligations on high-risk systems and the transparency obligations on transparency-risk systems seek to curtail potential abuses against the user. The regulations provide an important correction to the asymmetry between the developer and the user. Furthermore, the importance of complying with the obligations is reinforced by the heavy fines for violating the provisions of the AI Act.
There are, however, pertinent criticisms that one can make of the AI Act.
First, there is a lacuna in the legislation regarding biometric systems; whilst the AI Act bans the use of real-time biometric identification for law enforcement, it does not prevent EU Member States selling biometric data to oppressive regimes, nor does it ban the use of AI in retrospective (‘post-factum’) biometric systems.
Second, the transparency requirements for AI systems are seen as relatively cautious, particularly from an online safety point of view. The Oxford legal academic Sandra Wachter suggests that the AI Act refrains from placing obligations on deployers of ‘high-risk’ systems; rather, most oversight responsibility is placed on market authorities and the large-scale providers. Further, she notes that the requirements for systems carrying a ‘transparency risk’ are relatively light-touch, particularly as GAI systems can propagate unsavoury content.
Third, the exact scope of harmonised technical standards for AI product liability remains unclear. This is a pertinent criticism given that there are no guarantees as to what ‘standard’ will be set. The AI Act merely provides that the Commission will issue requests for standardisation in accordance with Regulation (EU) No 1025/2012 (Article 40, AI Act); this provides that European Standardisation Organisations, AI providers and businesses agree technical specifications. Currently, the EU is formulating the contents of the Product Liability Directive and the Artificial Intelligence Liability Directive. Hence, the scope and substance of the standards to be adopted may take years to clarify. Substantively, this provision reflects an increasing trend in the EU towards delegating regulatory decision-making. Unless the Commission were to provide common specifications, private bodies will, in all likelihood, set the standards.
Fourth, there is the broader criticism of the EU’s Digital Strategy: that it focuses heavily on potential risks and far less on innovation. The EU has taken the decision to create a distinctive regulatory approach to AI: a tiered regulatory framework to combat perceived material and monetary damage from AI. As noted above, the EU has employed macro-level policy considerations as part of a broader Digital Strategy. Investment and deployment decisions will be taken as a result of the EU’s approach; as noted by Nitzberg and Zysman, early regulatory frameworks create enduring trajectories of technological and political-economy development. There are fears that the AI Act renders the EU uncompetitive in the race to develop AI, compared with laissez-faire jurisdictions such as the UAE.
Aside from the holistic approach taken by the AI Act, a sector-specific regulatory approach could be an alternative, balancing the opportunities for innovation with the need to protect consumers and safeguard their right to privacy. By developing a corpus of legislation that specifies regulations for different sectors, each sector may establish its own codes of conduct, ethical principles and technical standards. In the UK, for instance, the different regulatory objectives of healthcare regulators and the Financial Conduct Authority emerge in their approaches to AI: the former are keen to implement AI, ethically, in the diagnostic process, whereas the latter wishes to combat the potential for disadvantaging consumers, given the asymmetric use of data between sellers and consumers. The disadvantage of a piecemeal regulatory approach is that it leaves omissions and discrepancies in the legal framework, potentially undermining the objectives of regulators. A hybrid of the two regulatory approaches is perhaps preferable.
Notably, the AI Act does not appear, prima facie, to harmonise its obligations with those of the GDPR. To name a few instances:
• Compared with the GDPR, under which machine-learning systems processing personal data will commonly require a data protection impact assessment (DPIA), the AI Act requires only that operators of high-risk systems conduct the more onerous FRIA;
• Unlike the GDPR where consumers (‘data subjects’) have explicit enforcement mechanisms and regulations regarding the control of their data, AI users are not provided rights under the AI Act;
• Perhaps most strikingly, the AI Act does not contain explicit obligations for data minimisation or requirements for assessing high-risk AI systems to enhance the rights of users.
Factually, data protection obligations will likely overlap with the use of AI, whose systems have unprecedented potential to gather and acquire data. Given the presence of the GDPR, it could be assumed that many GAI platforms have developed in accordance with it. Brown, Truby and Ibrahim note, however, that the inherent properties of many AI systems contradict the principles of the GDPR, specifically the use of big data analytics and the GDPR’s requirement for consent. To articulate and enforce the obligations under the AI Act and the GDPR harmoniously, regulators will need to rethink the enforcement of the GDPR.
At its best, the law could be used to reap labour-market benefits and manage risks. By drawing a line around the invasive aspects of GAI, the AI Act could reinforce particular liberties that are vulnerable, such as free speech, data protection and the right to a private life.
Part IV: THE UK AND CONCLUSION
The UK, now outside the EU, has no obligation to implement the provisions of the AI Act into domestic law. So far, the UK government’s stated approach is explicitly pro-innovation, with light-touch and sector-specific regulation. At the time of writing, there is no intention to introduce a general AI Act into Parliament in the near future.
A light-touch approach has its benefits: preventing the accretion of unnecessary rules in order to provide the climate for innovation. It would, however, see important macro policy considerations neglected, including the use of data from biometric systems, security, surveillance and the non-material harms of AI content. Further, it is unlikely that the UK would want to diverge too greatly from a common framework; the EU AI Act will provide a blueprint for more detailed regulations.
If the UK government wishes to develop a distinct regulatory framework for AI, it faces two key challenges going forward:
1) Determining the exact balance in its priorities; and
2) Walking the tightrope between a pro-innovation approach and risk minimisation.
The above analysis shows that the utility and efficacy of AI regulation is determined by a complex series of policy considerations. The UK government could improve on the omissions of the AI Act to maximise the benefits of AI for consumers; it has the freedom to create an AI strategy with regulations addressing sector-specific concerns. This appears to be the best way to maximise the benefits of AI, whilst placing consumers at the forefront. The EU AI Act, in its current format, has not struck the right balance for all stakeholders concerned.
Privacy
Top 10 Privacy and Data Protection Cases 2023: a selection – Suneet Sharma
Inforrm covered a wide range of data protection and privacy cases in 2023. Following my posts in 2018, 2019, 2020, 2021 and 2022, here is my selection of notable privacy and data protection cases across 2023. TPP is delighted to repost its annual article on this topic after a hiatus.
1. Stoute v News Group Newspapers Ltd [2023] EWHC 232 (KB)
Having secured the United Kingdom’s most lucrative government contract for PPE during the Covid-19 pandemic, worth £2 billion, a married couple sought an emergency injunction at first instance (and again on appeal) to prevent the publication of photographs of them walking along a public beach, fully dressed (her in a knee-length kaftan, him in board shorts and a polo shirt), on their way to a family lunch at a beach restaurant frequented by celebrities (and paparazzi). The court denied the couple’s application to prevent publication in The Sun of the photographs, over which the court said the couple had no reasonable expectation of privacy.
The Court of Appeal upheld the finding that there was simply no reasonable expectation of privacy in the photographs in the circumstances, with some considerable interest placed on the “performative” manner in which the couple arrived at the beach with their larger party by way of loud jet skis from their luxury yacht moored just off-shore.
2. WFZ v BBC [2023] EWHC 1618 (KB)
The applicant, a high-profile man arrested for sexual offences against two women but not charged, sought an interim injunction pending trial to prevent the BBC from publishing his name as part of a broader story concerning the failings of the industry concerned in dealing properly with such allegations. The applicant had not yet been named by the mainstream media. The basis for the injunction application was misuse of private information and contempt of court (a novel claim for a private individual to bring).
The High Court held that the applicant had a reasonable expectation of privacy in his arrest, indicating that courts are likely to restrain information about arrests as well as investigations (following the Supreme Court’s decision in ZXC) until the suspect is charged. Additionally, though controversially, the court found that having been arrested, publication of the man’s name would likely give rise to a contempt of court such as to justify restraint.
3. Prismall v Google
In the latest attempt to open the floodgates for group data privacy claims, a representative claimant brought a misuse of private information claim against DeepMind and Google on behalf of 1.6 million people arising from the transfer of their NHS medical records.
The claim was struck out by the High Court for failing to show that, on the lowest common denominator basis, all claimants would be able to establish a reasonable expectation of privacy in the data shared and were entitled to more than nominal damages. The claim would have been a means of getting around the finding in Lloyd v Google that there were no recoverable damages in data claims for loss of control of data. The Court of Appeal has granted permission to appeal.
See the comment from the Panopticon Blog.
4. Baroness Lawrence & Ors v Associated Newspapers Ltd [2023] EWHC 2789 (KB)
A judgment on a summary judgment application in claims where the claimants alleged that the Daily Mail, the Mail on Sunday and MailOnline acquired private or confidential information through unlawful methods, including voicemail interception, eavesdropping on calls, deception and the use of private investigators. This information was then allegedly published online by the outlets.
The defendant applied to challenge the claims on two grounds: limitation, on the basis that the claims were brought more than six years after the misconduct occurred; and an objection to the use of ledgers from the Leveson Inquiry, in respect of which three orders were in place.
It was held that each of the claimants had a real prospect of success in relying on section 32 of the Limitation Act 1980. In relation to the orders, it was found that the approach needed to be regularised, which could be achieved in three ways: (a) by the defendant voluntarily disclosing the Ledgers; (b) by the relevant government Minister varying the order; or (c) by amending the Particulars of Claim to remove the material from the Ledgers.
There was a 5RB case comment on the case.
5. Duke of Sussex v MGN Ltd [2023] EWHC 3217 (Ch).
Fancourt J held that phone hacking had been habitual and widespread at The Daily Mirror, The Sunday Mirror and The People newspapers from 1998 until 2006, and had continued extensively but on a reducing basis from 2007 until 2011. The editors and in-house legal departments knew it was being used, and the group legal director and CEO had known about or turned a blind eye to it. Although claims by the Duke of Sussex and others for damages for loss caused by publication of their private information obtained by phone hacking and/or other unlawful means were statute-barred, some of their claims for damages for misuse of private information succeeded. When assessing damages, losses flowing from publication of their private information were recoverable as damages for the original unlawful information gathering. 5RB News has a comment.
6. VB v. Natsionalna agentsia za prihodite (C‑340/21)
A case which clarified the concept of non-material damage under Article 82 of the EU General Data Protection Regulation (“GDPR”) and the rules governing burden of proof under the GDPR.
Following a cyber attack against the Bulgarian National Revenue Agency (the “Agency”), one of the more than six million affected individuals brought an action before the Administrative Court of Sofia claiming compensation. In support of that claim, the affected individual argued that they had suffered non-material damage as a result of a personal data breach caused by the Agency’s failure to fulfill its obligations under, inter alia, Articles 5(1)(f), 24 and 32 of the GDPR. The non-material damage claimed consisted of the fear that their personal data, having been published without their consent, might be misused in the future, or that they might be blackmailed, assaulted or even kidnapped.
In its judgment, the CJEU takes the view that the mere fact that a personal data breach occurred does not mean that the Agency did not implement appropriate technical and organizational measures to comply with Articles 24 and 32 of the GDPR. The EU legislator’s intent, as explained by the CJEU, was “to ‘mitigate’ the risks of personal data breaches, without claiming that it would be possible to eliminate them.” National courts should assess the measures implemented “in a concrete manner, by taking into account the risks associated with the processing concerned and by assessing whether the nature, content and implementation of those measures are appropriate to those risks.”
That said, the CJEU further notes that the fact that an infringement results from the behaviour of a third party (cyber criminals) does not exempt the controller from liability and that, in the context of an action for compensation under Article 82 of the GDPR, the burden of proving that the implemented technical and organizational measures are appropriate falls on the controller and not on the individual.
Finally, building on its Österreichische Post judgment, the CJEU indicates that the fear experienced by individuals with regard to a possible misuse of their personal data by third parties as a result of an infringement of the GDPR may, in itself, constitute non-material damage. In this respect, the national court is required to verify that the fear can be regarded as well founded, in the specific circumstances at issue for the concerned individual.
7. Delo v Information Commissioner [2023] EWCA Civ 1141
A case which considered the approach to be taken by the Information Commissioner to complaints made by data subjects. Mr Delo made a data subject access request to Wise Payments Limited, to which Wise responded that it was exempt from providing much of the information requested. Upon Mr Delo complaining to the Information Commissioner, he was advised that Wise had declined to provide the information sought in keeping with its obligations.
Mr Delo escalated his request by bringing a claim for judicial review and suing Wise.
In finding that Wise had complied with its obligations, the Court of Appeal clarified two matters as being in the public interest:
(1) Is the Commissioner obliged to reach a definitive decision on the merits of each and every complaint, or does he have a discretion to decide that some other outcome is appropriate?
(2) If the Commissioner has a discretion, did he nonetheless act unlawfully in this case by declining to investigate or declining to determine the merits of the complaint made by the claimant?
The Court answered both questions in the negative.
Panopticon Blog has an excellent summary of the case.
8. Ali v Chief Constable of Bedfordshire [2023] EWHC 938 (KB)
A informed the police that her husband was a cocaine dealer and a danger to her family; she indicated that she was providing the information on the basis that she would not be identified as a source.
The police informed the local council social services department. However, a malicious council employee informed A’s husband of what A had said.
Whilst the council was not held vicariously liable for the criminal acts of its employee, A’s action against Bedfordshire Police succeeded for breaches of the GDPR, misuse of private information and contravention of Article 8 of the ECHR.
For a summary of the case see the Panopticon Blog.
9. Hurbain v Belgium
In 2008 the Belgian newspaper Le Soir placed on its website an electronic version of its archives dating back to 1989, including an article naming Dr G in connection with a series of fatal car accidents. In 2010 Dr G contacted Le Soir, requesting that the article be removed from the newspaper’s electronic archives or at least anonymised. The request mentioned his profession and the fact that the article appeared among the results when his name was entered in several search engines. The newspaper refused to remove the article.
In 2012 Dr G sued Mr Hurbain (in his capacity as editor of Le Soir) to obtain the anonymisation of the article. His action was founded on the right to private life, which (under Belgian law) encompassed a right to be forgotten. Ultimately, the Grand Chamber found that there had been no violation of Article 10: the interference with the right had been necessary and proportionate.
10. FGX v Gaunt [2023] EWHC 419 (KB)
The covert recording of naked images of the claimant and their publication on a pornographic website gave rise to this claim for (i) intentionally exposing the claimant to a foreseeable risk of injury or severe distress which resulted in injury; (ii) infringement of the claimant’s privacy; and (iii) breach of the claimant’s confidence.
Said to be the first case of its kind in England and Wales, the case resulted in an award of damages in total of £97,041.61.
Quotes from caselaw 7: Driver v CPS [2022] EWHC 2500 (KB) – a departure from the starting point of a reasonable expectation of privacy in criminal investigations pre-charge on “special facts”, and low-value data breaches
This case is illustrative of a set of facts on which the legitimate starting point of a reasonable expectation of privacy in respect of a criminal investigation at the pre-charge stage under English law can be departed from:
Whilst a reasonable expectation of privacy in relation to a police investigation is the starting point, on the particular and somewhat special facts of this case, I am unable to conclude that by June 2019 such an expectation subsisted in relation to the information that the CPS were considering a charging decision in relation to the Claimant.
at p.147, Knowles J.
Note reference by the judge to the “special facts” of the case. For the special facts this case turns on in relation to the article 8 grounds see p.148-151.
The case concerned the passing of a file from the police to the CPS and the disclosure of that fact to a third party. This was objected to by the claimant on data protection and privacy grounds.
Whilst the disclosure did not include the name of the claimant, it was found that “personal data can relate to more than one person and does not have to relate exclusively to one data subject, particularly when the group referred to is small.”- p.101
In this case, the operation in question, Operation Sheridan, concerned only eight suspects, of which the claimant was one.
Accordingly, in finding for the claimant, it was considered that “this data breach was at the lowest end of the spectrum. Taking all matters together in the round, I award the Claimant damages of £250. I will also make a declaration that the Defendant breached the Claimant’s rights under Part 3 of the DPA 2018.” – at p.169
However, in relation to the claim for breach of article 8, as p.147 reflects, the claim was unsuccessful. This was because the judge considered that the case turned on “special facts” in relation to the application of article 8, meriting departure from the starting point of there being a reasonable expectation of privacy in criminal investigations at the pre-charge stage (in particular, see p.148-151).
Such “special facts” included, in combination: an investigation ongoing for many years; the Claimant’s own waiver of his right to privacy by making details of the case public at the pre-charge stage (including to media outlets); and further proceedings after that initial disclosure, including the Claimant’s arrest in 2017 and the further passing of police files to the CPS in 2018 in relation to the same Operation Sheridan.
This case is illustrative of how privacy cases in light of ZXC fall within a spectrum, allowing for circumstances in which the legitimate starting point it established can be departed from, albeit this case turned on “special facts” which are, in this instance, clearly narrow and unusual. It also clarifies what facts are considered to give rise to a data breach “at the lowest end of the spectrum”, and that the value of such breaches is reflected in nominal damages awards – in this case £250 and a declaration.
This case was number 2 on my Top 10 Data Protection and Privacy Law Cases 2022.
Top 10 Privacy and Data Protection Cases 2022
Inforrm covered a wide range of data protection and privacy cases in 2022. Following my posts in 2018, 2019, 2020 and 2021 here is my selection of notable privacy and data protection cases across 2022.
1. ZXC v Bloomberg [2022] UKSC 5
This was the seminal privacy case of the year, decided by the UK Supreme Court. It considered whether, in general, a person under criminal investigation has, prior to being charged, a reasonable expectation of privacy in respect of information relating to that investigation.
Festive wishes from TPP
We would like to thank all our readers and subscribers for visiting TPP over the past year. Many thanks also to our contributors across the past year for their insight and expertise.
We are currently working on getting more informative pieces on privacy to you, including a series on what privacy law is like to practise as a professional (if you would like to contribute, be sure to let us know) and our traditional Top 10 cases of the year across defamation, privacy law and data protection, in association with the esteemed International Forum for Responsible Media Blog.
In the meantime, if any of our readers would like to guest write for us, we encourage you to get in touch – we always welcome the opportunity to work with you.
Our case quote of the year is from the seminal case that was heard before the UK Supreme Court, ZXC v Bloomberg [2022] UKSC 5, finding that, as a legitimate starting point, criminal suspects have a reasonable expectation of privacy in the fact of an investigation at pre-charge stage:
[And, as such:]
at p.144 and 146 from Lord Hamblen and Lord Stephens
See our comment on the case for more information.
A very happy Christmas and New Year to you all.
The Privacy Perspective Founder and Editor, Suneet Sharma
The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Part VI, Updates to the Bill
PART VI: UPDATES
Any commentary upon legislation in progress risks rapidly becoming outdated: an occupational hazard to which this piece is by no means immune.
Ahead of the OSB’s return to Parliament, the Government issued a press release on 28 November 2022 noting a number of important developments to the amended Bill.
Citation: The Guardian: Privacy laws could be rolled back, government sources suggest – A rebuttal
The Guardian has a piece suggesting, following the judgment of the UK Supreme Court this week in ZXC, that privacy laws could be rolled back by replacements to the Human Rights Act.
Following the judgment in ZXC a government spokesperson has stated: “A free press is one of the cornerstones of any democracy. The government recognises the vital role the media plays in holding people to account and shining a light on the issues which matter most. We will study the implications of the judgment carefully.”
Whilst political sources are usually careful not to criticise judges, the balance between freedom of expression and privacy rights of individuals is a contentious area, drawing critical voices from both sides of the debate. TPP advocates balance between the two competing rights.
Citation: Aljazeera: Police usage of facial recognition software to be challenged in Indian Courts
In Hyderabad, Telangana, a case has been brought against the police concerning the use of facial recognition software, Aljazeera reports.
Top 10 Privacy and Data Protection Cases of 2021: A selection – Suneet Sharma
Inforrm covered a wide range of data protection and privacy cases in 2021. Following my posts in 2018, 2019 and 2020, here is my selection of the most notable privacy and data protection cases across 2021:
1. Lloyd v Google LLC [2021] UKSC 50
In the most significant privacy law judgment of the year, the UK Supreme Court considered whether a class action for breach of s.4(4) of the Data Protection Act 1998 (“DPA”) could be brought against Google for breach of its obligations as a data controller arising from its application of the “Safari Workaround”. The claim for compensation was made under s.13 DPA 1998. The amount claimed per person advanced in the letter of claim was £750. Collectively, given the number of people affected by the processing, the potential liability of Google was estimated to exceed £3bn.
Lord Leggatt handed down the unanimous judgment in favour of the appellant, Google LLC:
“the claim has no real prospect of success. That in turn is because, in the way the claim has been framed in order to try to bring it as a representative action, the claimant seeks damages under section 13 of the DPA 1998 for each individual member of the represented class without attempting to show that any wrongful use was made by Google of personal data relating to that individual or that the individual suffered any material damage or distress as a result of a breach of the requirements of the Act by Google.”
The case has been heralded for its central importance in determining the viability of data protection class actions. The case drew wide coverage from Pinsent Masons, Hill Dickinson, Clifford Chance, Bindmans and Stewarts.
2. HRH The Duchess of Sussex v Associated Newspapers Limited [2021] EWHC 273 (Ch) and [2021] EWCA Civ 1810
In February 2021 Meghan, Duchess of Sussex, won her application for summary judgment against the Mail on Sunday. Warby LJ said there were “compelling reasons” for it not to go to trial over its publication of extracts of a private letter to her estranged father, Thomas Markle. He entered judgment for the Duchess in misuse of private information and copyright. There was a news piece on Inforrm and a piece by Dominic Crossley.
Associated Newspapers was granted permission to appeal and the appeal was heard on 9 and 11 November 2021, with judgment handed down on 2 December 2021. The Court (Sir Geoffrey Vos MR, Sharp P and Bean LJ) unanimously dismissed the appeal on all grounds, stating:
“Essentially, whilst it might have been proportionate to disclose and publish a very small part of the Letter to rebut inaccuracies in the People Article, it was not necessary to deploy half the contents of the Letter as Associated Newspapers did. As the Articles themselves demonstrate, and as the judge found, the primary purpose of the Articles was not to publish Mr Markle’s responses to the inaccurate allegations against him in the People Article. The true purpose of the publication was, as the first 4 lines of the Articles said: to reveal for the first time [to the world] the “[t]he full content of a sensational letter written by [the Duchess] to her estranged father shortly after her wedding”. The contents of the Letter were private when it was written and when it was published, even if the claimant, it now appears, realised that her father might leak its contents to the media.” [106]
The case has been analysed on INFORRM by Brian Cathcart.
3. Australian Competition and Consumer Commission v Google LLC (No 2) [2021] FCA 367
The Federal Court of Australia found that Google misled some users about the personal location data it collected through Android devices between January 2017 and December 2018.
The Court found that the option “Don’t save my Location History in my Google Account” represented to some reasonable consumers that they could prevent their location data being saved to their Google Account. In fact, users needed to change an additional, separate setting to stop their location data being saved to their Google Account.
4. Hájovský v Slovakia [2021] ECHR 591
Mr Hájovský placed an anonymous advert in a national newspaper offering payment to a woman in return for giving birth to his child. An investigative reporter posed as a candidate interested in surrogacy, replied to the advert and secretly filmed the ensuing meetings. These were later compiled into a documentary. A national tabloid also covered the story, using stills of the footage and taking a critical stance towards the applicant’s actions. Both stories revealed the applicant’s identity. This prompted the applicant to bring an action against the media groups for violation of his privacy under Slovakian law.
The Slovakian courts dismissed the application on the basis that the article contributed to a matter of public interest – the debate around surrogacy for payment – and that, in any event, the publishing of the advert had brought a private matter, the applicant’s wish to have a child, into the public domain. The ECtHR found in favour of the applicant. In doing so it reiterated the well-established balancing approach between privacy and freedom of expression as per Von Hannover and Axel Springer. In this instance the court found that the applicant’s right to privacy had been violated and that the Slovakian courts had erred in their approach to balancing the competing rights. In doing so the court made key observations about the privacy implications of photographs.
5. Warren v DSG Retail Ltd [2021] EWHC 2168 (QB)
This case concerned the viability of claims for breach of confidence and misuse of private information against data controllers who have suffered cyber-attacks. In dismissing the claims for breach of confidence and misuse of private information, Saini J found that both causes of action require some form of “positive conduct” by the defendant, which is lacking where the cause of the private information being leaked is a cyber-attack.
6. ES v Shillington 2021 ABQB 739
In this case the Court of Queen’s Bench of Alberta awarded damages under the new “public disclosure of private facts” tort. The case concerned the making public of images of the claimant engaging in sex acts with the defendant; these had been shared during a romantic relationship between 2005 and 2016, during which the parties had two children together. The parties had a mutual understanding that the images would not be shared or published anywhere. However, the defendant then proceeded to share the images online, including those involving the sexual assault of the claimant.
Delivering judgment for the claimant, Inglis J accepted the claimant’s submissions that a new “public disclosure of private facts” tort should be recognised as a cause of action distinct from those already existing at common law.
Inforrm has a case comment.
7. Hurbain v Belgium [2021] ECHR 544
A case in which an order to anonymise a newspaper’s electronic archive was found not to breach the applicant publisher’s right to freedom of expression. This case reflects an important application of the right to be forgotten under article 8 of the Convention. The applicant, Patrick Hurbain, is the president of the Rossel Group which owns one of Belgium’s leading French-language newspapers, Le Soir, of which he was previously Managing Editor. The article in question concerned a series of fatal car accidents and named one of the drivers, G, who had been convicted of a criminal offence for his involvement in the incidents. G made a successful application for rehabilitation in 2006.
However, Le Soir created a free, electronic, searchable version of its archives from 1989 onwards, including the article at issue. G relied on the fact that the article appeared in response to a search on his name on Le Soir’s internal search engine and on Google Search. He explained that its availability was damaging to his reputation, particularly in his work as a doctor. The newspaper refused the application but stated it had asked Google to delist/deindex the article.
In 2012 G sued Mr Hurbain as editor of Le Soir and was successful domestically. Mr Hurbain then lodged an application with the Strasbourg Court complaining that the anonymisation order was a breach of Article 10. In balancing the Article 8 and Article 10 rights in the case, the Strasbourg Court found in favour of G.
Inforrm had a case comment.
8. Peters v Attorney-General on behalf of Ministry of Social Development [2021] NZCA 355
The New Zealand Court of Appeal provided guidance in respect of the tort of invasion of privacy in this high-profile case. In 2017, the Ministry for Social Development (“MSD”) realised that Mr Peters, MP and leader of the New Zealand First Party, had been overpaid New Zealand Superannuation (“NZS”). Due to errors, NZS had been paid at the single rate when it should have been paid at the partner rate. Mr Peters immediately arranged for the overpaid amount to be repaid.
In August 2017 several reporters received anonymous calls in respect of the overpayment. To pre-empt any publicity, Mr Peters released a press statement addressing the incident. He also issued a claim for infringement of the tort of invasion of privacy against several MSD executives. The High Court found the MSD executives were proper recipients of information and thus the claim failed. The Court of Appeal dismissed Mr Peters’ appeal. For an invasion of privacy claim to succeed there is a two “limb” test:
- the existence of facts in respect of which there was a reasonable expectation of privacy; and
- that the publicity given to those private facts would be considered highly offensive to an objective reasonable person.
The Court agreed that limb one was met on the facts. However, the Court found that Mr Peters did not have a reasonable expectation of protection from disclosure of this information within MSD and from MSD to the relevant Ministers and select staff. As the claimant could not prove that any of defendants had released information to the media. The appeal was dismissed. The case affirmed the removal of the requirement for there to be widespread disclosure and the potential for the removal of the requirement that disclosure be highly offensive.
9. R (Open Rights Group and the 3 million) v Secretary of State for the Home Department and Others [2021] EWCA Civ 800
A case concerning the lawfulness of the immigration exemption found in paragraph 4 of Schedule 2 to the Data Protection Act 2018. This exemption allows those processing personal data for immigration control purposes to refuse to comply with the data subject rights guaranteed by the GDPR to the extent that complying with those provisions would prejudice those purposes. The Court of Appeal found that this exemption was not compliant with Article 23 of the GDPR.
There was coverage from Hunton Andrews Kurth and 11KBW.
10. Biancardi v Italy [2021] ECHR 972
The ECtHR found that an order holding the editor of an online newspaper liable for failing to de-index an article concerning criminal proceedings did not breach Article 10 of the Convention. The case concerned an application for the delisting of an article about a fight involving a stabbing in a restaurant, which mentioned the names of those involved, including the applicant V.X.
Suneet Sharma is a junior legal professional with a particular interest and experience in media, information and privacy law. He is the editor of The Privacy Perspective blog.
Citation: The Bureau of Investigative Journalism: Spam, Spies, SMS – how it works
One of the tactics the article explores is utilising bulk message senders such as Mitto.