Meta’s recent changes to its Hateful Conduct Community Standards place marginalised groups at serious risk and likely breach its duties under the UK’s Online Safety Act 2023

On 7 January 2025 Meta made sweeping changes to its Community Standards on Hateful Conduct (the “Standards”). This article examines how these changes put marginalised groups at serious risk and how, in the context of the Online Safety Act 2023 (the “Act”), Meta is in breach of its duties to prevent and protect those users from harm.

In particular, these changes allow LGBTQ+ persons to be called mentally ill, transgender people to be called “it” and women to be referred to as property in user-to-user communications on Meta’s platforms such as Facebook and Instagram.

  1. The changes themselves:

Among the most concerning removals from and additions to the Standards are the following (quoting from the Standards itself):

  • allowing women to be referred to as “household objects or property or objects in general”
  • allowing transgender or non-binary people to be referred to as “it”
  • We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs.
  • We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like “weird.”

[The link to the Meta Hateful Conduct Policy can be found here: https://transparency.meta.com/en-gb/policies/community-standards/hateful-conduct/

If you select the 7 January 2025 changelog option, you can more easily see the most recent changes made by Meta. These include those referenced in this article, which, if you scroll down after pressing “read more”, appear under the Tier 2 heading.]

  2. The relevant provisions of the Online Safety Act 2023:

So how do these changes sit given the framework of the Act?

The Act came into force on 26 October 2023, and many of its provisions are still being phased in. As user-to-user services, both Facebook and Instagram come under the purview of the Act.

Section 7 of the Act places a duty of care on user-to-user service providers such as Meta. More particularly, s.7(2) of the Act sets out that Meta must comply with duties regarding illegal content set out in s.10(2) to (8) of the Act and also duties about complaints procedures set out in s.21.

It is worth digging into the provisions of s.10(2), which state:

(2) A duty, in relation to a service, to take or use proportionate measures relating to the design or operation of the service to—

(a) prevent individuals from encountering priority illegal content by means of the service,

(b) effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence, as identified in the most recent illegal content risk assessment of the service, and

(c) effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service (see section 9(5)(g)).

Furthermore, section 10(3) states:

(3) A duty to operate a service using proportionate systems and processes designed to—

(a) minimise the length of time for which any priority illegal content is present;

(b) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

From these provisions two questions arise: what is “priority illegal content”, and what is a “priority offence”?

“Priority illegal content” is defined in s.59 of the Act:

(10) “Priority illegal content” means—

(a) terrorism content,

(b) CSEA content, and

(c) content that amounts to an offence specified in Schedule 7.

Schedule 7 lists, among other things, various harassment offences.

What is a priority offence?

Section 59(7) provides that “priority offence” means—

(a) an offence specified in Schedule 5 (terrorism offences),

(b) an offence specified in Schedule 6 (offences related to child sexual exploitation and abuse), or

(c) an offence specified in Schedule 7 (other priority offences).

If we go to Schedule 7, we see that various harassment offences are listed as priority offences.

3. The application of the provisions of the Online Safety Act 2023:

So we can now see that where harassment meeting the criminal threshold occurs, such as calling someone the hateful things the Standards now permit, the owner of Facebook and Instagram has a duty to prevent individuals from encountering such content and must mitigate and manage the risk of those platforms being used for the commission of such priority offences.

Indeed, the sentencing guidelines for such offences note that where these offences are committed by demonstrating hostility based on presumed characteristics of the victim, including sex, sexual orientation or transgender identity, this is a factor indicating high culpability, with a corresponding impact on sentence.

Yet here is Meta, facilitating the commission of such offences by making explicit provision that these statements are allowed on its platforms. Adding insult to what may result in actual injury, it attempts to justify this “given political and religious discourse” in an LGBTQ+ context.

    Being homosexual was declassified as a mental disorder by the World Health Organisation (“WHO”) in 1990 and in 2019 the WHO reclassified transgender people’s gender identity as gender incongruence, moving it from the mental health and behavioural disorders chapter to conditions related to sexual health.  

    Yet, Meta still thinks it’s acceptable to equate LGBTQ+ people to being mentally ill?

Section 10(2) is notably limited to taking or using “proportionate measures”. Instagram and Facebook are clearly among the most sophisticated and wide-ranging user-to-user services there are. It is therefore easily arguable that policies which entrench the protection of users at the outset, prevent such content appearing on the platforms, and allow complaints from users subjected to such comments to be upheld rather than dismissed must be in place, or the service provider must face the consequences of breaching the Act.

Indeed, my hope is that, as the policies apply worldwide, online safety laws will intervene against such pernicious changes, which further marginalise those at risk and expose them to abuse at the whim of political pandering.

Non-compliance with any regulatory action from Ofcom could rightly have serious implications for companies such as Meta: under the Act, companies can be fined up to £18 million or 10 per cent of their qualifying worldwide revenue, whichever is greater.
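To give a sense of scale, here is a minimal sketch (in Python, using an entirely hypothetical revenue figure rather than Meta’s actual qualifying revenue) of how the “greater of £18 million or 10 per cent of qualifying worldwide revenue” cap works:

```python
# Illustrative sketch only: the Act caps penalties at the greater of
# £18 million or 10% of qualifying worldwide revenue. The revenue
# figure passed in below is hypothetical.
def max_osa_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    fixed_cap = 18_000_000                       # £18 million
    revenue_cap = 0.10 * qualifying_worldwide_revenue_gbp
    return max(fixed_cap, revenue_cap)

# For a hypothetical £100bn of qualifying worldwide revenue,
# the cap would be £10bn rather than £18m.
print(f"£{max_osa_penalty(100_000_000_000):,.0f}")
```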

    In the UK Ofcom, which regulates this space, has said: “from 17 March 2025, providers will need to take the safety measures set out in the Codes of Practice or use other effective measures to protect users from illegal content and activity.”

Even though Meta is not based in the UK, the Government’s Online Safety Act explainer makes this clear, as do the provisions of the Act themselves:

    “The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.”

    The Draft Codes of Practice-

Also of relevance here is the illegal content Code of Practice for user-to-user services, which is the recommended guidance to be adopted by service providers. In particular, for large or multi-risk services such as Instagram and Facebook, it recommends that they have policies in place for the removal of illegal content.

In changing its Standards in this way, Meta has also rendered Instagram and Facebook in breach of the Code of Practice issued by Ofcom pursuant to the Act. It should be noted that, whilst platforms are recommended to follow the Codes, they can deviate from them but must justify where they do so.

    Other applicable UK legislation-

    It should also be noted that other UK legislation is applicable in these instances, including but not limited to:

• Communications Act 2003, s.127
• Malicious Communications Act 1988, s.1
• Equality Act 2010 – particularly in an employment context, the discrimination provisions may be applicable.

    The EU AI Act: Between Scylla and Charybdis? – Nick Haddad

This article explores the framework for artificial intelligence regulation put in place by the EU AI Act.

    Introduction

    The race to harness, and some may say exploit, Artificial Intelligence (“AI”) is gathering pace globally. This drive comes in tandem with the rise of public recognition and interest in AI. The European Union (“EU”) is no doubt using its significance in the international economy to mould the digital space. Its ‘Digital Strategy’ has witnessed the introduction of new laws, the latest of which is the EU AI Act.

    The following analysis will attempt to show that the EU, and other significant economies, are balancing a myriad of public-policy and economic considerations when regulating AI.

    This article will proceed in four parts. Part I provides the context in which the AI Act fits within the EU’s Digital Strategy, offering comparisons with regulatory efforts in the United States. Part II will outline the key provisions of the AI Act and the context in which the legislation fits within the EU’s Digital Strategy. Part III will explore the consequences and criticisms of the AI Act. Part IV will offer concluding remarks, noting the opportunities presented to the UK either to integrate the provisions of the AI Act or diverge from the EU regulatory framework. The analysis presented will demonstrate that the EU AI Act contains shortfalls for the consumer, and that the UK should diverge from its stultifying regulatory framework in favour of a sector-specific, pro-innovation model.

    Regulators are walking a legal and economic tightrope with regards to AI. The framework offered in the EU’s AI Act is one option, whereas the UK could diverge by implementing business- and consumer-friendly rules. 

    Part I: CONTEXT

The EU’s AI Act forms part of its ‘Digital Strategy’, providing a new regulatory framework for the digital and IT sectors of the economy. Notably, the GDPR, applicable from 2018, was the genesis of an attempt to provide comprehensive digital regulation. The GDPR provided the global benchmark for data protection – the UK retained this law after exiting the EU.

    The Commission President, Ursula von der Leyen, proposed an approach for the “human and ethical” consequences of AI development in 2019. Whilst the draft text has been subject to debate and consultations, the EU has passed other laws regarding the digital economy. The Digital Services Act (“DSA”) and the Digital Markets Act (“DMA”) of 2022 are widely viewed as innovative pieces of legislation to regulate digital marketspaces as well as online search engines. 

The USA, the country most influential in AI development, has a patchier legal framework. Its proposed Algorithmic Accountability Act represents, comparatively, a light-touch approach adapted to existing agency regulatory authority. It is notable that the EU takes a more ‘risk-based approach’ to the regulation of AI and technology. Further, the legislative efforts of the EU, particularly the DSA and DMA, reflect a broader European consensus regarding search engines and e-commerce, aspects that the US Congress has barely debated.

    Legislative responses from lawmakers across the globe are microcosms of the wider academic, economic and moral debates regarding AI. For some, AI bore the mark of Cain from the beginning: threatening mass surveillance, economic dislocation, and political destabilisation. For others, it presents a great opportunity, being the vehicle of a new industrial revolution. The more moderate view aims to harness the opportunities of AI whilst managing potential risks. Regarding these potential risks, Mark Coeckelbergh notes the material risks of AI development, whereas Jack Balkin discusses the immaterial risks, such as the impact of AI on free speech.

    Part II: CONTENT AND FURTHER CONTEXT

The EU’s AI Act, derived from Article 114 of the Treaty on the Functioning of the European Union (TFEU), creates a new framework for the use and development of AI in the EU. The Act seeks to create a regulatory framework that can couch AI within the pre-existing system of rights and protections for consumers and citizens, involving the protection of free speech, the right to privacy and various rule-of-law considerations.

    Defining AI

The AI Act adopts, more or less, the OECD definition of AI. Article 3(1) sets out the five requirements of an AI system:

     [1] a “machine-based system”;
     [2] designed to operate with varying levels of autonomy;
     [3] [that] may exhibit adaptiveness after deployment, and that,
     [4] for explicit or implicit objectives, infers, from the input it receives, how to generate outputs; [and]
     [5] that can influence physical or virtual environments.

    The terminology is deliberately broad in order to cast the widest net. These provisions seek to encompass ‘Predictive Artificial Intelligence’ (“PredAI”) and ‘Generative Artificial Intelligence’ (“GAI”). The former is commonly used in various industries to improve efficiencies, whereas the latter is used to create new media by synthesising pre-existing data (OpenAI and ChatGPT are good examples of GAI).

    The legislation provides diverging regulatory frameworks according to the categorisation of the AI system. There is a four-fold taxonomy:

    • Prohibited AI practices;
    • High-risk AI systems;
    • Certain AI systems with a transparency risk;
    • Minimal or no risk systems.

    1. Prohibited AI Practices

    Taking each in turn, prohibited AI practices as stipulated in Article 5(1) of the AI Act concern eight key scenarios. Where an AI system possesses or employs:

    • subliminal techniques;
    • a capability to exploit a natural person’s vulnerabilities;
    • characteristics of ‘social scoring’;
    • the classification of persons for crime prediction “based solely off profiling”;
    • capability for expanding facial recognition databases;
    • an ability to infer emotions;
    • an ability to generate biometric databases; or
    • real-time biometric identification in publicly accessible places for purposes of law enforcement;

    then it will be prohibited from the EU.

Realistic examples of prohibited practices would be if streaming services used advanced data analytics to generate highly personalised advertisements or emotionally charged content that could manipulate the user’s feelings. Similarly, an employer or organisation analysing job applicants or assessing behaviour on factors unrelated to job performance would fall foul.

    2. High-risk AI systems

Second, if a system is deemed high-risk, essential requirements must be met for its use to continue in the EU. Article 6(1) provides that these rules apply to AI in two situations: first, where the system is a product, or a safety component of a product, covered by harmonised rules on health and safety; second, where an AI system falls within fixed areas such as the operation of critical infrastructure, education and vocational training, or law enforcement. AI systems under the high-risk category are exempted from additional regulatory requirements where, per Article 6(3), the system does not pose “a significant risk of harm to the health, safety or fundamental rights of natural persons”.

    To comply with the requirements of the AI Act, high-risk AI systems must undertake the following steps:
    • Comply with standards regarding ‘accuracy, robustness and cybersecurity’ (Article 15) – the provision merely states that the Commission shall co-operate with benchmarking authorities;
    • make a ‘quality management system’ (Article 17);
    • maintain documentation to show compliance (Article 18);
    • undergo a conformity assessment to determine whether the additional requirements have been met and conform to criteria of various, unspecified ‘harmonised standards’ (Articles 32 and 40);
    • in some circumstances, particularly the use of real-time biometric identification in publicly accessible spaces, a fundamental rights impact assessment (FRIA) will be needed. 

    3. Transparency risk systems

Third, systems deemed to have a ‘transparency risk’ are required to comply with EU requirements on transparency. Article 50 of the AI Act provides that a user must be ‘informed’ that they are interacting with an AI system and whether the output has been artificially generated or manipulated. The key exception is for output used lawfully for the prevention and detection of criminal offences.

    4. Minimal or no risk systems

    Fourth, systems that do not fall under the previous three categories are deemed minimal or low risk, entailing no obligations from the AI Act. The guidance suggests merely that deployers could adopt additional, voluntary codes of conduct. The idea, and ideal, behind this provision is that most AI systems will fall under this category. Notable examples of AI systems with minimal or no risk include spam filters or AI-enabled recommender systems. The practical distinction between an AI system that is of minimal or no risk compared with the other categories ostensibly concerns the input values used within its function; the former does not use highly personalised information.

    Enforcement mechanisms

    Aside from the key provisions, the AI Act provides a new method of enforcement; a noticeable development from the GDPR regulations which were criticised for their ostensibly feeble enforcement. At Union level, the AI Office shall develop expertise and capability in the field whilst the EU AI Board will contain representatives of each Member State to enforce the AI Act. Further, each Member State has to establish a market surveillance authority and notifying authority per Article 70(1).

    Sanctions

    Under Article 99, the AI Act provides a tiered system of penalties for infringements of its provisions. Though obliged to take account of the AI Act’s guidelines, each Member State will establish a penalty system, the latest date of notification for which is 2 August 2026.

The potential maximum penalties contained in the AI Act are higher than those in other EU laws forming part of the Digital Strategy. Per Article 99(3), the most severe penalty is an administrative fine of up to €35 million or seven per cent of total worldwide annual turnover for the preceding financial year; by comparison, the GDPR carries a maximum fine of €20 million or four per cent of worldwide annual turnover.

Broadly, the AI Act provides tiers for the maximum administrative fine for particular infringements:
• For prohibited AI practices, the most severe fine may be administered;
• A maximum of €15 million or three per cent of annual turnover for breaches of the obligations of providers (Article 16); authorised representatives (Article 25); importers (Article 26); distributors (Article 27); deployers (Article 29(1)-(6)); and notified bodies (Articles 33 and 34);
• €7.5 million or one per cent of annual turnover for the supply of incorrect or misleading information to notified bodies and national competent authorities in response to a request;
• For EU institutions, up to €1.5 million for non-compliance with the prohibition on prohibited AI practices, or €750,000 for non-compliance with any other requirements or obligations under the Act.

The key criterion binding all Member States in the design of their domestic penalty systems is the need for them to be “effective, proportionate and dissuasive”, per Article 99(1).
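As a rough illustration of this tiered structure, here is a short sketch using the maxima quoted above. The tier names are illustrative labels of my own, and the sketch assumes the general rule that the higher of the fixed cap and the turnover-based cap applies (special rules, for example for SMEs, are not modelled):

```python
# Illustrative sketch of the AI Act's tiered maximum fines (Article 99),
# using the maxima quoted above. Assumes the general "whichever is higher"
# rule; different rules (e.g. for SMEs) are not modelled here.
TIERS = {
    # illustrative tier label: (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "operator_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_ai_act_fine(tier: str, worldwide_turnover_eur: float) -> float:
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Hypothetical provider with EUR 2bn worldwide annual turnover:
print(f"EUR {max_ai_act_fine('prohibited_practices', 2_000_000_000):,.0f}")  # EUR 140,000,000
```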

    Part III: CONSEQUENCES AND CRITICISM

    There are certain aspects of the AI Act that put the consumer and the everyman at the front and centre.

The category of prohibited AI practices sets a benchmark for governments and political organisations globally: the prohibited list imposes obligations on the state as to how it may deploy AI in relation to its citizens. The AI Act seeks to respond to macro-level considerations for society, particularly security and surveillance. The additional obligations on high-risk systems and the transparency obligations for systems posing a transparency risk seek to curtail potential abuses against the user. The regulations provide an important correction to the asymmetry between the developer and the user. Furthermore, the importance of complying with the obligations is reinforced by the heavy fines for violating the provisions of the AI Act.

    There are, however, pertinent criticisms that one can make of the AI Act.

    First, there is a lacuna in the legislation regarding biometric systems; whilst the AI Act bans the use of AI in biometric systems for law enforcement, it does not prevent EU Member States selling biometric data to oppressive regimes, nor does it ban the use of AI in post-factum biometric systems.

Second, the transparency requirements for AI systems are seen as relatively cautious, particularly from an online safety point of view. The legal academic Sandra Wachter suggests that the AI Act refrains from placing obligations on deployers of ‘high-risk’ systems; rather, most oversight responsibility is placed on market authorities and the large-scale providers. Further, she notes that the requirements for systems posing a ‘transparency risk’ are relatively light touch, particularly as GAI systems can propagate unsavoury content.

Third, the exact scope of harmonised, technical standards for AI product liability remains unclear. This is a pertinent criticism given that there are no guarantees as to what ‘standard’ will be set. The AI Act merely provides that the Commission will issue requests for standardisation in accordance with Regulation (EU) No 1025/2012 (Article 40, AI Act); this provides that European Standardisation Organisations, AI providers and businesses agree technical specifications. Currently, the EU is formulating the contents of the Product Liability Directive and the Artificial Intelligence Liability Directive. Hence, the scope and substance of the standards to be adopted may take years to clarify. Substantively, this provision appears to reflect an increasing trend in the EU to delegate regulatory decision-making. Unless the Commission were to provide common specifications, private bodies will, in all likelihood, set the standards.

Fourth, there is the broader criticism of the EU’s Digital Strategy: that it focuses heavily on potential risks and less on innovation. The EU has taken the decision to create a distinctive regulatory approach to AI: a tiered regulatory framework to combat perceived material and monetary damage from AI. As stipulated above, the EU has employed macro-level policy considerations as part of a broader Digital Strategy. Investment and deployment decisions will be taken as a result of the EU’s approach; as noted by Nitzberg and Zysman, early regulatory frameworks create enduring trajectories of technological and political economy development. There are fears that the AI Act renders the EU uncompetitive in the race to develop AI, compared with laissez-faire jurisdictions like the UAE.

Aside from the holistic approach provided within the AI Act at present, a sector-specific regulatory approach could be an alternative, balancing the opportunities for innovation with the need to protect consumers and ensure their right to privacy is respected. By developing a corpus of legislation that specifies regulations for different sectors, each sector may establish codes of conduct, ethical principles and technical standards. In the UK, for instance, the different regulatory objectives of healthcare regulators and the Financial Conduct Authority emerge in their approaches to AI: the former are keen to implement AI, ethically, in the diagnostic process, whereas the latter wishes to combat the potential for disadvantaging consumers, given the asymmetric data use between sellers and consumers. The disadvantage of a piecemeal regulatory approach is that it leaves omissions and discrepancies in the legal framework, potentially undermining the objectives of regulators. A hybrid model between the two regulatory approaches is perhaps preferable.

Notably, the AI Act does not appear, prima facie, to harmonise its obligations with those of the GDPR. To name a few instances:
• Compared with the GDPR, under which machine-learning systems processing personal data will typically be required to conduct a data protection impact assessment (DPIA), the AI Act requires that only high-risk operators conduct the more onerous FRIA;
• Unlike the GDPR, where consumers (‘data subjects’) have explicit enforcement mechanisms and regulations regarding the control of their data, AI users are not provided rights under the AI Act;
• Perhaps most strikingly, the AI Act does not contain explicit obligations for data minimisation or requirements for assessing high-risk AI systems to enhance the rights of users.

Factually, the obligations of data protection will likely overlap with the use of AI, whose systems have unprecedented potential to gather and acquire data. Given the GDPR’s presence, it might be assumed that many GAI platforms have developed in accordance with it. Brown, Truby and Ibrahim note, however, that the inherent properties of many AI systems contradict the principles of the GDPR, specifically the use of big data analytics and the GDPR requirement for consent. To articulate and enforce the obligations under the AI Act and the GDPR harmoniously, regulators will need to rethink the enforcement of the GDPR.

At its best, the law could be used to reap labour market benefits and assess risks. By drawing a line around the invasive aspects of GAI, the AI Act could reinforce particular liberties that are vulnerable, such as free speech, data protection and the right to private life.

    PART IV: THE UK AND CONCLUSION

The UK, now outside of the EU, has no obligation to implement the provisions of the AI Act into domestic law. So far, the UK government’s stated approach is explicitly pro-innovation, with light-touch and sector-specific regulations. At the time of writing, there is no intention to introduce a general AI Act into Parliament in the near future.

    A light-touch approach has its benefits: preventing the accretion of unnecessary rules in order to provide the climate for innovation. It would, however, see important macro policy considerations neglected, including the use of data from biometric systems, security, surveillance and the non-material harms of AI content. Further, it is unlikely that the UK would want to diverge too greatly from a common framework; the EU AI Act will provide a blueprint for more detailed regulations.

    If the UK government wishes to develop a distinct regulatory framework for AI, it has several challenges going forward:

    1) Determining the exact balance in its priorities; and
    2) Walking the tightrope between a pro-innovation approach and risk minimisation.

    The above analysis shows that the utility and efficacy of AI regulation is determined by a complex series of policy considerations. The UK government could improve on the omissions of the AI Act to maximise the benefits of AI for consumers; it has the freedom to create an AI strategy with regulations addressing sector-specific concerns. This appears to be the best way to maximise the benefits of AI, whilst placing consumers at the forefront. The EU AI Act, in its current format, has not struck the right balance for all stakeholders concerned.

    Top 10 Defamation Cases of 2023: a selection – Suneet Sharma

Inforrm reported on a large number of defamation cases from around the world in 2023. Following a now established tradition, with my widely read posts on 2017, 2018, 2019, 2020, 2021 and 2022 defamation cases, I present my personal selection of the most legally and factually interesting cases from England, Australia, Canada and New Zealand from the past year – with three “bonus” cases from the US. After a hiatus, TPP is delighted to re-post this annual article.

1. Hay v Cresswell [2023] EWHC 882 (KB). Tattoo artist William Hay took libel action against Nina Cresswell, a woman who published a blog and social media posts stating that he had violently sexually assaulted her 10 years earlier. Mr Hay alleged that the posts had caused him serious distress and damage to his reputation. The court held that the meaning of the posts was defamatory at common law. However, Ms Cresswell successfully defended the claim on the grounds of truth and public interest. The judge held that it was substantially true that Mr Hay had attacked Ms Cresswell. The court also considered that the public interest aspect of Ms Cresswell’s defence was made out since she had published the posts in light of the “Tattoo MeToo” campaign, which saw several cases reported of male tattoo artists sexually assaulting women, and she was driven to protect other women from Mr Hay’s behaviour. The case is the first time a victim of sexual assault has relied on the public interest defence to justify naming the person responsible. There was an Inforrm case comment.

2. Dyson v MGN Ltd [2023] EWHC 3092 (KB). Inventor and entrepreneur James Dyson sued the Mirror newspaper for an opinion piece declaring Dyson a “hypocrite” for campaigning for Brexit and then moving his own headquarters to Singapore, which made him a bad role model for children. Upholding the paper’s defence of honest opinion, the judge ruled that the basis of that opinion (that the Dyson headquarters had moved to Singapore) was true and did not accept that it was merely the relocation of two senior executives. The judge held that a publisher is permitted to be selective in the facts relied upon as the basis for an opinion. Press Gazette covered the judgment.

3. Banks v Cadwalladr [2023] EWCA Civ 219. Businessman and Brexit campaigner Arron Banks successfully appealed the dismissal of his libel claims against journalist Carole Cadwalladr, who had stated in a TED Talk and a tweet that Mr Banks had broken electoral law by taking money from the Russian government to fund his Brexit campaign. An official investigation reported a year after the TED Talk that there was no evidence of wrongdoing. The judge at first instance concluded that the initial publication of the talk was protected by the public interest defence, while the ongoing publication of the tweet and the talk following the investigation result was not, though these claims still failed as Mr Banks had not suffered serious harm under section 1 of the Defamation Act 2013. The Court of Appeal overturned the first-instance judge and held that Mr Banks had been caused serious harm by the 100,000 views of the TED Talk in the first year of publication, which was relevant where the public interest defence no longer applied. Ms Cadwalladr was ordered to pay £35,000 in damages and held to be liable for very substantial costs. There was a post about the case on Inforrm.

4. Packham v Wightman [2023] EWHC 1256 (KB). The TV presenter and naturalist Chris Packham sued the Editor of Country Squire Magazine over three allegations published on its website which alleged, among other things, that he had misled people in order to raise money for a tiger rescue charity. The High Court found that the accusations were not substantially true and amounted to a “hyperbolic and vitriolic smearing of Mr Packham” [163]. The Defendants were ordered to pay Packham £90,000 in damages. The BBC, the Guardian, The Telegraph and Zelo Street reported on the judgment. Doughty Street Chambers also covered the case in their blog.

5. Duke of Sussex v Associated Newspapers [2023] EWHC 3120 (KB). The claimant’s application to strike out and/or obtain summary judgment on the defence of honest opinion relied on by ANL was denied. The case will proceed to trial. The BBC, Independent, Spectator and iNews were some of the many outlets to cover the judgment.

6. Dyson v Channel 4 [2023] EWCA Civ 884. The Court of Appeal upheld an appeal by Dyson Technology Limited and Dyson Limited against the decision of Nicklin J on 31 October 2022 ([2022] EWHC 2718 (KB)) that, based solely on intrinsic evidence, they were not referred to in the Channel 4 broadcast that was the subject of their libel claim. It was held that the test for “ordinary” reference was whether a hypothetical reasonable viewer, acquainted with the claimants, would identify them as being referred to in the publication. There was an Inforrm case comment.

7. Roberts-Smith v Fairfax Media Publications Pty Limited (No 41) [2023] FCA 555. After a year-long trial, in a judgment of 607 pages and 2,618 paragraphs, Anthony Besanko J dismissed this libel action, the defendants’ truth defence succeeding. He held that, on the balance of probabilities, Roberts-Smith kicked a handcuffed prisoner off a cliff in Darwan in 2012 before ordering a subordinate Australian soldier to shoot the injured man dead, and that in 2009 Roberts-Smith ordered the killing of an elderly man found hiding in a tunnel in a bombed-out compound codenamed “Whiskey 108”, as well as murdering a disabled man with a prosthetic leg during the same mission, using a para machine gun.

8. Hansman v. Neufeld, 2023 SCC 14. The Supreme Court of Canada restored the decision of the first instance judge dismissing a defamation suit brought in 2018 by a then Chilliwack school board trustee against a former teachers’ union leader, who had described comments made by the trustee as bigoted, transphobic and hateful. There is a Case in Brief and a comment on CBC.

9. Clancy v. Farid, 2023 ONSC 2750. The Ontario Superior Court of Justice assessed defamation damages aggregating $4,773,000 in a case involving claims by 53 plaintiffs against one individual defendant over a targeted campaign involving tens of thousands of postings on the internet. Each of the 53 plaintiffs was awarded general damages, in amounts ranging from a high of $90,000 to a low of $55,000 depending on their individual circumstances. The aggregate sum awarded for general damages amounted to $4,245,000. Aggravated damages in the amount of $1,500 were awarded to each of 34 of the plaintiffs, aggregating $51,000. Punitive damages in the amount of $9,000 were awarded to each of the 53 plaintiffs, aggregating $477,000. The Court held that the defamatory publications at issue were salacious, outrageous and malevolent. In addition to the damages award, the Court enjoined the defendant from posting further defamatory statements or comments of the nature and kind which were the subject of this litigation.

10. Syed v Malik [2023] NZHC 1676. A libel claim arising out of social media posts which attacked virtually every aspect of the claimant’s life. There were 20 defamatory publications, including 5 videos, which caused very serious harm to the claimant’s business and reputation. The Judge awarded damages of NZ$225,000. There was a report of the case on Stuff.

    And three “bonus” cases from the US:

• US Dominion, Inc. v. Fox News Network, LLC, a notable defamation case concerning Fox News statements that voting systems sold by Dominion switched votes from former President Donald Trump to Democrat Joe Biden in the 2020 Presidential election. The case ultimately settled for $787.5 million, the claim itself having been valued at $1.6 billion.
• E Jean Carroll v Donald J Trump, twin cases against the former US president, one of which came to trial in 2023. It was found that Trump was liable for defaming and sexually abusing Carroll, who was awarded damages in the sum of $5 million. The second case is scheduled for trial on 15 January 2024.
• Freeman v Giuliani, a case in which two former Georgia election workers brought a defamation suit against Rudy Giuliani. The case concerned allegations of election fraud made by Giuliani against the two workers whilst he was Trump’s attorney. The pair were awarded a total of $148,169,000.

    Top 10 Privacy and Data Protection Cases 2023: a selection – Suneet Sharma

Inforrm covered a wide range of data protection and privacy cases in 2023. Following my posts in 2018, 2019, 2020, 2021 and 2022, here is my selection of notable privacy and data protection cases across 2023. TPP is delighted to repost its annual article on this topic after a hiatus.

    1. Stoute v News Group Newspapers Ltd [2023] EWHC 232 (KB)
Having secured the United Kingdom’s most lucrative government contract for PPE during the Covid-19 pandemic, worth £2 billion, a married couple sought an emergency injunction at first instance (and again on appeal) to prevent the publication of photographs of them walking along a public beach, fully dressed (her in a knee-length kaftan, him in board shorts and a polo shirt), on their way to a family lunch at a beach restaurant frequented by celebrities (and paparazzi). The court denied the couple’s application to prevent publication in The Sun of the photographs, over which the court said the couple had no reasonable expectation of privacy.

    The Court of Appeal upheld the fact that there was simply no reasonable expectation of privacy in photographs in the circumstances, with some considerable interest placed on the “performative” manner in which the couple arrived at the beach with their larger party by way of loud jet skis from their luxury yacht parked just off-shore.

    2. WFZ v BBC [2023] EWHC 1618 (KB)
The applicant, a high-profile man arrested for sexual offences against two women but not charged, sought an interim injunction pending trial to prevent the BBC from publishing his name as part of a broader story concerning failings in the industry concerned in dealing properly with such allegations. The applicant had not yet been named by the mainstream media. The basis for the injunction application was misuse of private information and contempt of court (a novel claim for a private individual to bring).

    The High Court held that the applicant had a reasonable expectation of privacy in his arrest, indicating that courts are likely to restrain information about arrests as well as investigations (following the Supreme Court’s decision in ZXC) until the suspect is charged. Additionally, though controversially, the court found that having been arrested, publication of the man’s name would likely give rise to a contempt of court such as to justify restraint.

    3. Prismall v Google
In the latest attempt to open the floodgates for group data privacy claims, a representative claimant brought a misuse of private information claim against DeepMind and Google on behalf of 1.6 million people arising from the transfer of their NHS medical records.

    The claim was struck out by the High Court for failing to show that, on the lowest common denominator basis, all claimants would be able to establish a reasonable expectation of privacy in the data shared and were entitled to more than nominal damages. The claim would have been a means of getting around the finding in Lloyd v Google that there were no recoverable damages in data claims for loss of control of data. The Court of Appeal has granted permission to appeal.

    See the comment from the Panopticon Blog.

    4. Baroness Lawrence & Ors v Associated Newspapers Ltd [2023] EWHC 2789 (KB)

A summary judgment application in a claim in which the claimants alleged that the Daily Mail, the Mail on Sunday and MailOnline acquired private or confidential information through unlawful methods, including voicemail interception, eavesdropping on calls, deception and the use of private investigators. This information was then allegedly published online by the outlets.

The Defendants made an application to challenge the claim on two grounds: limitation, on the basis that the claims were made over six years after the misconduct occurred; and the use of ledgers from the Leveson Inquiry, in respect of which three orders were in place.

It was held that each of the claimants had a real prospect of success in relying on section 32 of the Limitation Act 1980. In relation to the orders, it was found that the approach needed to be regularised, which could be achieved in three ways: (a) by the defendant voluntarily disclosing the Ledgers; (b) by the relevant government Minister varying the order; or (c) by amending the Particulars of Claim to remove the material from the Ledgers.

     There was a 5RB case comment on the case.

    5. Duke of Sussex v MGN Ltd [2023] EWHC 3217 (Ch). 

Fancourt J held that phone hacking had been habitual and widespread at The Daily Mirror, The Sunday Mirror and The People newspapers from 1998 until 2006, and had continued extensively but on a reducing basis from 2007 until 2011. The editors and in-house legal departments knew it was being used, and the group legal director and CEO had known about or turned a blind eye to it. Although claims by the Duke of Sussex and others for damages for loss caused by publication of their private information obtained by phone hacking and/or other unlawful means were statute-barred, some of their claims for damages for misuse of private information succeeded. When assessing damages, losses flowing from publication of their private information were recoverable as damages for the original unlawful information gathering. 5RB News has a comment.

    6. VB v. Natsionalna agentsia za prihodite (C‑340/21)

    A case which clarified the concept of non-material damage under Article 82 of the EU General Data Protection Regulation (“GDPR”) and the rules governing burden of proof under the GDPR.   

    Following a cyber attack against the Bulgarian National Revenue Agency (the “Agency”), one of the more than six million affected individuals brought an action before the Administrative Court of Sofia claiming compensation. In support of that claim, the affected individual argued that they had suffered non-material damage as a result of a personal data breach caused by the Agency’s failure to fulfill its obligations under, inter alia, Articles 5(1)(f), 24 and 32 of the GDPR. The non-material damage claimed consisted of the fear that their personal data, having been published without their consent, might be misused in the future, or that they might be blackmailed, assaulted or even kidnapped.

    In its judgment, the CJEU takes the view that the mere fact that a personal data breach occurred does not mean that the Agency did not implement appropriate technical and organizational measures to comply with Articles 24 and 32 of the GDPR. The EU legislator’s intent, as explained by the CJEU, was to “to ‘mitigate’ the risks of personal data breaches, without claiming that it would be possible to eliminate them.” National courts should assess the measures implemented “in a concrete manner, by taking into account the risks associated with the processing concerned and by assessing whether the nature, content and implementation of those measures are appropriate to those risks.” 

That said, the CJEU further notes that the fact that an infringement results from the behaviour of a third party (cyber criminals) does not exempt the controller from liability and that, in the context of an action for compensation under Article 82 of the GDPR, the burden of proving that the implemented technical and organizational measures are appropriate falls on the controller and not on the individual.

    Finally, building on its Österreichische Post judgment, the CJEU indicates that the fear experienced by individuals with regard to a possible misuse of their personal data by third parties as a result of an infringement of the GDPR may, in itself, constitute non-material damage. In this respect, the national court is required to verify that the fear can be regarded as well founded, in the specific circumstances at issue for the concerned individual.

    7. Delo v Information Commissioner [2023] EWCA Civ 1141 

A case which considered the approach to be taken by the Information Commissioner to complaints made by data subjects. Mr Delo made a data subject access request to Wise Payment Limited, to which Wise responded that it was exempt from providing much of the information requested. Upon Mr Delo complaining to the Information Commissioner, he was advised that Wise had declined to provide the information sought in keeping with its obligations.

    Mr Delo escalated his request by bringing a claim for judicial review and suing Wise.   

In finding that Wise had complied with its obligations, the Court of Appeal clarified two matters which were in the public interest:

(1) Is the Commissioner obliged to reach a definitive decision on the merits of each and every complaint, or does he have a discretion to decide that some other outcome is appropriate?

(2) If the Commissioner has a discretion, did he nonetheless act unlawfully in this case by declining to investigate or declining to determine the merits of the complaint made by the claimant?

The Court answered both questions in the negative.

    Panopticon Blog has an excellent summary of the case.

    8. Ali v Chief Constable of Bedfordshire [2023] EWHC 938 (KB)

A informed the police that her husband was a cocaine dealer and a danger to her family. She indicated that she was providing the information on the basis that she would not be identified as a source.

    The police informed the local council social services department. However, a malicious council employee informed A’s husband of what A had said.

Whilst the council was not held vicariously liable for the criminal acts of its employee, A’s action against Bedfordshire Police succeeded for breaches of the GDPR, misuse of private information and contravention of Article 8 of the ECHR.

    For a summary of the case see the Panopticon Blog.

    9. Hurbain v Belgium

In 1994 the Belgian newspaper Le Soir published an article reporting a fatal road traffic accident caused by Dr G (the “Article”). In 2008 the newspaper placed on its website an electronic version of its archives dating back to 1989, including the Article. In 2010 Dr G contacted Le Soir, requesting that the Article be removed from the newspaper’s electronic archives or at least anonymised. The request mentioned his profession and the fact that the Article appeared among the results when his name was entered in several search engines. The newspaper refused to remove the Article.

In 2012 Dr G sued Mr Hurbain (in his capacity as editor of Le Soir) to obtain the anonymisation of the Article. His action was founded on the right to private life, which (under Belgian law) encompasses a right to be forgotten. Ultimately, the Grand Chamber found that there had been no violation of Article 10; the interference with the right had been necessary and proportionate.

    10. FGX  v Gaunt [2023] EWHC 419 (KB)

    The covert recording of naked images of the claimant and their publication on a pornographic website gave rise to this claim for (i) intentionally exposing the claimant to a foreseeable risk of injury or severe distress which resulted in injury; (ii) infringement of the claimant’s privacy; and (iii) breach of the claimant’s confidence.

    Said to be the first case of its kind in England and Wales, the case resulted in an award of damages in total of £97,041.61.

    Inforrm had a case comment.

s.230 of the Communications Decency Act – Gonzalez v Google, No. 21-1333, an upcoming challenge to internet platform protections – The Associated Press

    The Associated Press has highlighted, in a long-read, a legal case which looks to challenge the protection of internet platforms under s.230 of the Communications Decency Act.

The Supreme Court case concerns liability for YouTube recommendations which, it is argued, helped the Islamic State recruit. The case is brought by the family of Nohemi Gonzalez, who tragically lost her life in a terrorist attack in Paris.

    The case is due to be heard on Tuesday 21 February.

    See here for the article and for more details see the SCOTUS blog.

    “Issue: Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.”

    The SCOTUS Blog

Liberty and Privacy International v Security Service [2023] UKIPTrib 1 – MI5 admitted to having used personal data unlawfully without applying the safeguards of retention, review and disposal

MI5 admitted that personal data had been unlawfully processed and retained between 2016 and 2019 due to failures in its retention, review and destruction practices.

See page 79 of the open judgment for a summary of the failings of MI5 in its handling of personal data.

    For further, more detailed, context regarding the case see the Privacy International press release.

Quotes from caselaw 7: Driver v CPS [2022] EWHC 2500 (KB) – a departure from the starting point of a reasonable expectation of privacy in criminal investigations pre-charge on “special facts”, and low value data breaches

This case is illustrative of a set of facts in which the legitimate starting point of a reasonable expectation of privacy in respect of a criminal investigation at the pre-charge stage under English law can be departed from:

    Whilst a reasonable expectation of privacy in relation to a police investigation is the starting point, on the particular and somewhat special facts of this case, I am unable to conclude that by June 2019 such an expectation subsisted in relation to the information that the CPS were considering a charging decision in relation to the Claimant.

    at p.147, Knowles J.

    Note reference by the judge to the “special facts” of the case. For the special facts this case turns on in relation to the article 8 grounds see p.148-151.

    The case concerned the passing of a file from the CPS and the disclosure of that fact to a third party. This was objected to by the claimant on data protection and privacy grounds.

    Whilst the disclosure did not include the name of the claimant, it was found that “personal data can relate to more than one person and does not have to relate exclusively to one data subject, particularly when the group referred to is small.”- p.101

    In this case, the operation in question, Operation Sheridan, concerned only eight suspects, of which the claimant was one.

Accordingly, in finding for the claimant on the data protection claim, it was considered that “this data breach was at the lowest end of the spectrum. Taking all matters together in the round, I award the Claimant damages of £250. I will also make a declaration that the Defendant breached the Claimant’s rights under Part 3 of the DPA 2018.” – at p.169

However, in relation to the claim for breach of article 8, as p.147 reflects, the claim was unsuccessful. This was because the judge considered that there were “special facts” on which this case turned in relation to the application of article 8, meriting departure from the starting point of there being a reasonable expectation of privacy in criminal investigations at the pre-charge stage (in particular, see p.148-151).

Such “special facts” included, in combination: an investigation ongoing for many years; the Claimant’s own waiver of their right to privacy by making details of the case public themselves at the pre-charge stage (including to media outlets); and further proceedings after that initial disclosure, including the Claimant’s arrest in 2017 and the further passing of police files to the CPS in 2018 in relation to that same Operation Sheridan.

This case is illustrative of how privacy cases in light of ZXC fall within a spectrum, allowing for circumstances in which the legitimate starting point it established can be departed from, albeit this case turned on “special facts” which are, in this instance, clearly narrow and highly fact-specific. It also clarifies what facts are considered to give rise to a data breach “at the lowest end of the spectrum” and that the value of such breaches is reflected in nominal damages awards – in this case £250 and a declaration.

    This case was number 2 on my Top 10 Data Protection and Privacy Law Cases 2022.

    Privacy Law in Practice – An Insight into Data Protection Law as an In-House IT Lawyer – Madeleine Weber

    Welcome to Privacy Law in Practice, our series at TPP demystifying what it is like to practice in privacy law.

    Have you ever wondered which data protection law issues come up in practice? It obviously depends on the industry and area you work in, but data protection law might be more prevalent than you think.

    Continue reading

    Top 10 Privacy and Data Protection Cases 2022

Inforrm covered a wide range of data protection and privacy cases in 2022. Following my posts in 2018, 2019, 2020 and 2021, here is my selection of notable privacy and data protection cases across 2022.

    1. ZXC v Bloomberg [2022] UKSC 5

This was the seminal privacy case of the year, decided by the UK Supreme Court. It considered whether, in general, a person under criminal investigation has, prior to being charged, a reasonable expectation of privacy in respect of information relating to that investigation.

    Continue reading

    Top 10 Defamation Cases 2022

Inforrm reported on a large number of defamation cases from around the world in 2022. Following my widely read posts on 2017, 2018, 2019, 2020 and 2021 defamation cases, this is my personal selection of the most legally and factually interesting cases from England, Australia and Canada from the past year.

    1. Vardy v. Rooney [2022] EWHC 2017 (QB)

An interim hearing in this case featured at number five in my 2021 list. We now have the final judgment in the “Wagatha Christie” case between Rebekah Vardy and Coleen Rooney as number one in my 2022 list. The case was one of the most high-profile libel cases in recent years, concerning the alleged leaking of posts from Ms Rooney’s private Instagram account to the Sun by Ms Vardy, via her agent Ms Caroline Watt. The resulting post on social media by Ms Rooney regarding Ms Vardy’s involvement in the leaks was the subject of the libel claim.

    Ultimately the claim of libel against the defendant, Coleen Rooney, was dismissed due to the defence of truth being established. Notably, “the information disclosed was not deeply confidential, and it can fairly be described as trivial, but it does not need to be confidential or important to meet the sting of the libel.” [287]

    Continue reading