Meta’s recent changes to its Hateful Conduct Community Standards place marginalised groups at serious risk and likely breach its duties under the UK Online Safety Act 2023

On 7 January 2025 Meta made sweeping changes to its Hateful Conduct Community Standards (the “Standards”). This article examines how these changes put marginalised groups at serious risk and how, in the context of the Online Safety Act 2023 (the “Act”), Meta is in breach of its duties to prevent harm to these users and to protect them from it.

In particular, these changes allow LGBTQ+ persons to be called mentally ill, transgender people to be called “it” and women to be referred to as property in user-to-user communications on Meta’s platforms such as Facebook and Instagram.

1. The changes themselves:

Amongst the most concerning removals and additions to the Standards are the following (direct quotations from the Standards are given in quotation marks):

  • allowing women to be referred to as “household objects or property or objects in general”;
  • allowing transgender or non-binary people to be referred to “as it”;
  • “We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs.”
  • “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

[The link to the Meta Hateful Conduct policy can be found here: https://transparency.meta.com/en-gb/policies/community-standards/hateful-conduct/

If you select the “7 January 2025” changelog option you can more easily see the most recent changes made by Meta. These include those referenced in this article, which, if you scroll down after pressing “read more”, appear under the Tier 2 heading.]

2. The relevant provisions of the Online Safety Act 2023:

So how do these changes sit within the framework of the Act?

The Act came into force on 26 October 2023, and many of its provisions are still being implemented in phases. As user-to-user services, both Facebook and Instagram come under the purview of the Act.

Section 7 of the Act places a duty of care on user-to-user service providers such as Meta. More particularly, s.7(2) of the Act sets out that Meta must comply with duties regarding illegal content set out in s.10(2) to (8) of the Act and also duties about complaints procedures set out in s.21.

It is worth digging into the provisions of s.10(2), which state:

(2) A duty, in relation to a service, to take or use proportionate measures relating to the design or operation of the service to—

(a) prevent individuals from encountering priority illegal content by means of the service,

(b) effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence, as identified in the most recent illegal content risk assessment of the service, and

(c) effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service (see section 9(5)(g)).

Furthermore, s.10(3) states:

(3) A duty to operate a service using proportionate systems and processes designed to—

(a) minimise the length of time for which any priority illegal content is present;

(b) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

From these provisions two questions arise: what is “priority illegal content” and what is a “priority offence”?

Priority illegal content is defined at s.59(10) of the Act:

(10) “Priority illegal content” means—

(a) terrorism content,

(b) CSEA content, and

(c) content that amounts to an offence specified in Schedule 7.

Schedule 7 lists various harassment offences.

What is a priority offence?

Section 59(7) of the Act states:

(7) “Priority offence” means—

(a) an offence specified in Schedule 5 (terrorism offences),

(b) an offence specified in Schedule 6 (offences related to child sexual exploitation and abuse), or

(c) an offence specified in Schedule 7 (other priority offences).

Turning to Schedule 7, we see that various harassment offences are listed as priority offences.

3. The application of the provisions of the Online Safety Act 2023:

We can now determine that where harassment meeting a criminal threshold occurs, such as calling someone the hateful things the Standards allow, Meta, as the provider of Facebook and Instagram, has a duty to prevent individuals from encountering such content and to mitigate and manage the risk of those platforms being used for the commission of such priority offences.

Indeed, the Sentencing Guidelines for such offences note that where an offence is committed by demonstrating hostility based on presumed characteristics of the victim, including sex, sexual orientation or transgender identity, this is a factor indicating high culpability, with a corresponding impact on sentencing.

Yet here is Meta, enabling the commission of such offences by making explicit provision that these statements are allowed on its platforms. Adding insult to what may result in actual injury, it attempts to justify this by reference to “political and religious discourse” in an LGBTQ+ context.

Being homosexual was declassified as a mental disorder by the World Health Organisation (“WHO”) in 1990, and in 2019 the WHO reclassified transgender people’s gender identity as “gender incongruence”, moving it from the mental health and behavioural disorders chapter to conditions related to sexual health.

Yet Meta still thinks it acceptable to equate being LGBTQ+ with mental illness?

Section 10(2) is notably limited to taking or using “proportionate measures”. Instagram and Facebook are amongst the most sophisticated and wide-ranging user-to-user services there are. As such, it is easily arguable that policies which entrench the protection of users at the outset, prevent such content appearing on the platforms, and allow complaints from users subjected to such comments to be upheld rather than dismissed, must be in place, or the service provider must face the consequences of breaching the Act.

Indeed, my hope is that, as the policies apply worldwide, online safety laws will intervene against such pernicious changes, which further marginalise those at risk and expose them to abuse at the whim of political pandering.

Non-compliance with regulatory action from Ofcom could, rightly, have serious implications for companies such as Meta: under the Act, companies can be fined up to £18 million or 10 per cent of their qualifying worldwide revenue, whichever is greater.
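As a purely illustrative aside, the scale of that cap can be sketched as a simple “whichever is greater” calculation. The function name and the revenue figure below are hypothetical placeholders for illustration only; they are not Meta’s actual qualifying worldwide revenue or any figure published by Ofcom:

```python
# Minimal sketch of the Online Safety Act fine cap: the greater of
# £18 million or 10% of qualifying worldwide revenue.

def osa_fine_cap(qualifying_worldwide_revenue_gbp: float) -> float:
    """Return the maximum fine: the greater of £18m or 10% of revenue."""
    return max(18_000_000, 0.10 * qualifying_worldwide_revenue_gbp)

# Hypothetical example: a provider with £100bn of qualifying worldwide
# revenue would face a cap of £10bn, far above the £18m floor.
print(f"£{osa_fine_cap(100_000_000_000):,.0f}")  # £10,000,000,000
```

The point of the “whichever is greater” formulation is that for a company of Meta’s size the 10 per cent limb, not the £18 million floor, sets the ceiling.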

In the UK, Ofcom, which regulates this space, has said: “from 17 March 2025, providers will need to take the safety measures set out in the Codes of Practice or use other effective measures to protect users from illegal content and activity.”

Even though Meta is not based in the UK, the Government’s Online Safety Act explainer, echoing the provisions of the Act, makes the position clear:

“The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.”

4. The Draft Codes of Practice:

Also of relevance here are the illegal content Codes of Practice for user-to-user services, which set out the recommended guidance to be adopted by service providers. In particular, for large or multi-risk services such as Instagram and Facebook, they recommend that policies be in place for the removal of illegal content.

In changing its Standards in this way, Meta has also rendered Instagram and Facebook in breach of the Codes of Practice issued by Ofcom pursuant to the Act. It should be noted that whilst platforms are recommended to follow the Codes, they may deviate from them, but must justify any deviation.

5. Other applicable UK legislation:

It should also be noted that other UK legislation is applicable in these instances, including but not limited to:

  • Communications Act 2003, s.127
  • Malicious Communications Act 1988, s.1
  • Equality Act 2010: particularly in an employment context, the discrimination provisions may be applicable.

Top 10 Defamation Cases of 2023: a selection – Suneet Sharma

Inforrm reported on a large number of defamation cases from around the world in 2023. Following a now established tradition, with my widely read posts on the defamation cases of 2017, 2018, 2019, 2020, 2021 and 2022, I present my personal selection of the most legally and factually interesting cases from England, Australia, Canada and New Zealand from the past year, with three “bonus” cases from the US. After a hiatus, TPP is delighted to re-post this annual article.

1. Hay v Cresswell [2023] EWHC 882 (KB). Tattoo artist William Hay took libel action against Nina Cresswell, a woman who published a blog and social media posts stating that he had violently sexually assaulted her 10 years earlier. Mr Hay alleged that the posts had caused him serious distress and damage to his reputation. The court held that the meaning of the posts was defamatory at common law. However, Ms Cresswell successfully defended the claim on the grounds of truth and public interest. The judge held that it was substantially true that Mr Hay had attacked Ms Cresswell. The court also considered that the public interest aspect of Ms Cresswell’s defence was made out, since she had published the posts in light of the “Tattoo MeToo” campaign, which saw several reported cases of male tattoo artists sexually assaulting women, and she was driven to protect other women from Mr Hay’s behaviour. The case is the first time a victim of sexual assault has relied on the public interest defence to justify naming the person responsible. There was an Inforrm case comment.

2. Dyson v MGN Ltd [2023] EWHC 3092 (KB). Inventor and entrepreneur James Dyson sued the Mirror newspaper over an opinion piece declaring Dyson a “hypocrite” for campaigning for Brexit and then moving his own headquarters to Singapore, which, it said, made him a bad role model for children. Upholding the paper’s defence of honest opinion, the judge ruled that the basis of that opinion (that the Dyson headquarters had moved to Singapore) was true, and did not accept that it was merely the relocation of two senior executives. The judge held that a publisher is permitted to be selective in the facts relied upon as the basis for an opinion. The Press Gazette reported on the judgment.

3. Banks v Cadwalladr [2023] EWCA Civ 219. Businessman and Brexit campaigner Arron Banks successfully appealed the dismissal of his libel claims against journalist Carole Cadwalladr, who had stated in a TED Talk and a tweet that Mr Banks had broken electoral law by taking money from the Russian government to fund his Brexit campaign. An official investigation reported, a year after the TED Talk, that there was no evidence of wrongdoing. The judge at first instance concluded that the initial publication of the talk was protected by the public interest defence, while the continued publication of the tweet and the talk following the investigation’s result was not, though these claims still failed as Mr Banks had not suffered serious harm under section 1 of the Defamation Act 2013. The Court of Appeal overturned the first-instance judge and held that Mr Banks had been caused serious harm by the 100,000 views of the TED Talk in the first year of publication, which was relevant where the public interest defence no longer applied. Ms Cadwalladr was ordered to pay £35,000 in damages and held liable for very substantial costs. There was a post about the case on Inforrm.

4. Packham v Wightman [2023] EWHC 1256 (KB). The TV presenter and naturalist Chris Packham sued the editor of Country Squire Magazine over three allegations published on its website, which alleged, among other things, that he had misled people in order to raise money for a tiger rescue charity. The High Court found that the accusations were not substantially true and amounted to a “hyperbolic and vitriolic smearing of Mr Packham” [163]. The defendants were ordered to pay Mr Packham £90,000 in damages. The BBC, the Guardian, The Telegraph and Zelo Street reported on the judgment. Doughty Street Chambers also covered the case in their blog.

5. Duke of Sussex v Associated Newspapers [2023] EWHC 3120 (KB). The claimant’s application to strike out and/or obtain summary judgment on the defence of honest opinion relied on by ANL was denied. The case will proceed to trial. The BBC, the Independent, the Spectator and iNews were among the many outlets to cover the judgment.

6. Dyson v Channel 4 [2023] EWCA Civ 884. The Court of Appeal allowed an appeal by Dyson Technology Limited and Dyson Limited against the decision of Nicklin J of 31 October 2022 ([2022] EWHC 2718 (KB)) that, based solely on intrinsic evidence, they were not referred to in the Channel 4 broadcast that was the subject of their libel claim. It was held that the test for “ordinary” reference was whether a hypothetical reasonable viewer, acquainted with the claimants, would identify them as being referred to in the publication. There was an Inforrm case comment.

7. Roberts-Smith v Fairfax Media Publications Pty Limited (No 41) [2023] FCA 555. After a year-long trial, in a judgment of 607 pages and 2,618 paragraphs, Anthony Besanko J dismissed this libel action, the defendants’ truth defence succeeding. He held that, on the balance of probabilities, Mr Roberts-Smith kicked a handcuffed prisoner off a cliff in Darwan in 2012 before ordering a subordinate Australian soldier to shoot the injured man dead, and that in 2009 Mr Roberts-Smith ordered the killing of an elderly man found hiding in a tunnel in a bombed-out compound codenamed “Whiskey 108”, as well as murdering a disabled man with a prosthetic leg during the same mission, using a para machine gun.

8. Hansman v Neufeld 2023 SCC 14. The Supreme Court of Canada restored the decision of the first-instance judge dismissing a defamation suit brought in 2018 by a then Chilliwack school board trustee against a former teachers’ union leader, who had described comments made by the trustee as bigoted, transphobic and hateful. There was a Case in Brief and a comment on CBC.

9. Clancy v Farid 2023 ONSC 2750. The Ontario Superior Court of Justice assessed defamation damages aggregating $4,773,000 in a case involving claims by 53 plaintiffs against one individual defendant over a targeted campaign involving tens of thousands of postings on the internet. Each of the 53 plaintiffs was awarded general damages, in amounts ranging from a high of $90,000 to a low of $55,000 depending on their individual circumstances; the aggregate sum awarded for general damages was $4,245,000. Aggravated damages of $1,500 were awarded to each of 34 of the plaintiffs, aggregating $51,000. Punitive damages of $9,000 were awarded to each of the 53 plaintiffs, aggregating $477,000. The Court held that the defamatory publications at issue were salacious, outrageous and malevolent. In addition to the damages award, the Court enjoined the defendant from posting further defamatory statements or comments of the nature and kind which were the subject of this litigation.

10. Syed v Malik [2023] NZHC 1676. A libel claim arising out of social media posts which attacked virtually every aspect of the claimant’s life. There were 20 defamatory publications, including 5 videos, which caused very serious harm to the claimant’s business and reputation. The judge awarded damages of NZ$225,000. There was a report of the case on Stuff.

And three “bonus” cases from the US:

  • US Dominion, Inc. v. Fox News Network, LLC, a democratically significant defamation case concerning Fox News statements that voting systems sold by Dominion switched votes from former President Donald Trump to Democrat Joe Biden in the 2020 presidential election. The case ultimately settled for $787.5 million, the claim itself having been valued at $1.6 billion.
  • E Jean Carroll v Donald J Trump, twin cases against the former US president, one of which came to trial in 2023. Trump was found liable for defaming and sexually abusing Ms Carroll, who was awarded damages of $5 million. The second case is scheduled for trial on 15 January 2024.
  • Freeman v Giuliani, a case in which two former Georgia election workers brought a defamation suit against Rudy Giuliani. The case concerned allegations of election fraud made by Giuliani against the two workers whilst he was Trump’s attorney. The pair were awarded a total of $148,169,000.