Meta’s recent changes to its Hateful Conduct Community Standards place marginalised groups at serious risk and likely breach its duties under the UK’s Online Safety Act 2023

On 7 January 2025 Meta made sweeping changes to its Hateful Conduct Community Standards (the “Standards”). This article examines how these changes put marginalised groups at serious risk and how, in the context of the Online Safety Act 2023 (the “Act”), they place Meta in breach of its duties to prevent harm to these users and to protect them from it.

In particular, these changes allow LGBTQ+ people to be called mentally ill, transgender people to be called “it”, and women to be referred to as property in user-to-user communications on Meta’s platforms such as Facebook and Instagram.

  1. The changes themselves:

Amongst the most concerning removals from, and additions to, the Standards are the following (quoting from the Standards themselves):

• allowing women to be referred to as “household objects or property or objects in general”;
• allowing transgender or non-binary people to be referred to as “it”;
• “We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs.”
• “We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

[The link to the Meta Hateful Conduct policy can be found here: https://transparency.meta.com/en-gb/policies/community-standards/hateful-conduct/

If you select the “7 January 2025” option in the changelog you can more easily see the most recent changes made by Meta. These include those referenced in this article which, if you scroll down after pressing “read more”, appear under the Tier 2 heading.]

  2. The relevant provisions of the Online Safety Act 2023:

So how do these changes sit within the framework of the Act?

The Act received Royal Assent on 26 October 2023, and many of its provisions are still being implemented in phases. As user-to-user services, both Facebook and Instagram come under the purview of the Act.

Section 7 of the Act places a duty of care on user-to-user service providers such as Meta. More particularly, s.7(2) of the Act sets out that Meta must comply with duties regarding illegal content set out in s.10(2) to (8) of the Act and also duties about complaints procedures set out in s.21.

It is worth digging into the provisions of s.10(2), which state:

(2) A duty, in relation to a service, to take or use proportionate measures relating to the design or operation of the service to—

(a) prevent individuals from encountering priority illegal content by means of the service,

(b) effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence, as identified in the most recent illegal content risk assessment of the service, and

(c) effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service (see section 9(5)(g)).

Furthermore, section 10(3) states:

(3) A duty to operate a service using proportionate systems and processes designed to—

(a) minimise the length of time for which any priority illegal content is present;

(b) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.

From these provisions two questions arise: what is “priority illegal content” and what is a “priority offence”?

“Priority illegal content” is defined in s.59 of the Act:

(10) “Priority illegal content” means—

(a) terrorism content,

(b) CSEA content, and

(c) content that amounts to an offence specified in Schedule 7.

Schedule 7 lists various harassment offences.

What is a priority offence?

It is defined in s.59(7) of the Act:

(7) “Priority offence” means—

(a) an offence specified in Schedule 5 (terrorism offences),

(b) an offence specified in Schedule 6 (offences related to child sexual exploitation and abuse), or

(c) an offence specified in Schedule 7 (other priority offences).

If we go to Schedule 7, we see that various harassment offences are listed as priority offences.

  3. The application of the provisions of the Online Safety Act 2023:

So we can now determine that where harassment which meets a criminal threshold occurs, such as calling someone the hateful things the Standards allow, the owner of Facebook and Instagram has a duty to prevent individuals from encountering such content and must mitigate and manage the risk of those platforms being used for the commission of such priority offences.

Indeed, the Sentencing Guidelines for such offences note that where they are committed by demonstrating hostility based on presumed characteristics of the victim, including sex, sexual orientation or transgender identity, this is a factor indicating high culpability in the commission of the offence, with a corresponding impact on sentencing.

Yet here is Meta, enabling the commission of such offences by making explicit provision that these statements are allowed on its platforms. And, adding insult to what may result in actual injury, it attempts to justify this “given political and religious discourse” in an LGBTQ+ context.

Homosexuality was declassified as a mental disorder by the World Health Organisation (“WHO”) in 1990, and in 2019 the WHO reclassified transgender people’s gender identity as “gender incongruence”, moving it from the mental and behavioural disorders chapter to conditions related to sexual health.

Yet Meta still thinks it is acceptable to equate being LGBTQ+ with mental illness?

Section 10(2) is notably limited to taking or using “proportionate measures”. Instagram and Facebook, however, are clearly amongst the most sophisticated and wide-ranging user-to-user services there are. As such, it is easily arguable that policies which entrench the protection of users at the outset, prevent such content on the platforms, and allow complaints from users subjected to such comments to be upheld rather than dismissed, must be in place, or the service provider must face the consequences of breaching the Act.

Indeed, my hope is that, as the policies apply worldwide, online safety laws will intervene against such pernicious changes, which further marginalise those at risk and expose them to abuse at the whim of political pandering.

Non-compliance with any regulatory action from Ofcom could rightly have serious implications for companies such as Meta: under the Act, companies can be fined up to £18 million or 10 per cent of their qualifying worldwide revenue, whichever is greater.
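For a back-of-the-envelope sense of how that cap scales, here is a minimal sketch in Python (illustrative only: the function name and revenue figure are hypothetical, and how “qualifying worldwide revenue” is calculated is a matter for the Act and Ofcom guidance, not this sketch):

```python
def osa_max_fine_gbp(qualifying_worldwide_revenue_gbp: float) -> float:
    """Illustrative sketch only: the Act caps fines at the greater of
    £18 million or 10% of a company's qualifying worldwide revenue."""
    return max(18_000_000.0, 0.10 * qualifying_worldwide_revenue_gbp)

# Hypothetical example: a provider with £100bn of qualifying worldwide
# revenue would face a cap of £10bn, far above the £18m floor.
print(f"£{osa_max_fine_gbp(100_000_000_000):,.0f}")  # £10,000,000,000
```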

In the UK, Ofcom, which regulates this space, has said: “from 17 March 2025, providers will need to take the safety measures set out in the Codes of Practice or use other effective measures to protect users from illegal content and activity.”

Even though Meta is not based in the UK, the Government’s Online Safety Act explainer makes it clear, as do the provisions of the Act:

    “The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.”

  4. The Draft Codes of Practice:

Also of relevance here are the illegal content Codes of Practice for user-to-user services, which set out recommended guidance to be adopted by service providers. In particular, for large or multi-risk services such as Instagram and Facebook, they recommend that providers have policies in place for the removal of illegal content.

In changing its Standards in this way, Meta has also rendered Instagram and Facebook in breach of the Codes of Practice issued by Ofcom pursuant to the Act. It should be noted that whilst platforms are recommended to follow the Codes, they can deviate from them, but must justify where they do so.

  5. Other applicable UK legislation:

It should also be noted that other UK legislation is applicable in these instances, including but not limited to:

• Communications Act 2003, s.127
• Malicious Communications Act 1988, s.1
• Equality Act 2010: particularly in an employment context, the discrimination provisions may be applicable.

s.230 of the Communications Decency Act – Gonzalez v Google, No. 21-1333, an upcoming challenge to internet platforms’ protections – The Associated Press

The Associated Press has highlighted, in a long read, a legal case which seeks to challenge the protection of internet platforms under s.230 of the Communications Decency Act.

The Supreme Court case concerns liability for YouTube recommendations which, it is argued, helped the Islamic State recruit. The case is brought by the family of Nohemi Gonzalez, who tragically lost her life in a terrorist attack in Paris.

    The case is due to be heard on Tuesday 21 February.

See here for the article and, for more details, see SCOTUSblog.

    “Issue: Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.”

SCOTUSblog

    The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Part VI, Updates to the Bill

    PART VI: UPDATES

    Any commentary upon legislation in progress risks rapidly becoming outdated: an occupational hazard to which this piece is by no means immune.

    Ahead of the OSB’s return to Parliament, the Government issued a press release on 28 November 2022 noting a number of important developments to the amended Bill.


    Bloomberg v ZXC: UK Supreme Court finds that suspects of crime have a reasonable expectation of privacy in investigation details pre-charge

Judgment has been handed down by the UK Supreme Court in the appeal in Bloomberg v ZXC. The court has found for the respondent, dismissing the appeal.

The case has significant implications for the law of privacy. It endorses the finding in the Cliff Richard case and provides crucial precedent on the reasonable expectation of privacy enjoyed by suspects of crime. TPP will have further coverage of the judgment shortly. See the judgment here.

“The courts below were correct to hold that, as a legitimate starting point, a person under criminal investigation has, prior to being charged, a reasonable expectation of privacy in respect of information relating to that investigation and that in all the circumstances this is a case in which that applies and there is such an expectation.”

at p.146

    TPP number 30 on Feedspot – Top 35 Privacy Websites and Blogs

We are delighted to be ranked 30th in Feedspot’s Top 35 Privacy Websites and Blogs. TPP was ranked alongside law firms and authoritative blogs on privacy law.

According to Feedspot, sites are ranked “by traffic rank, social media followers, domain authority & freshness.” The full list can be found here and is a must-read for anyone interested in privacy law matters.

TPP republished by The Student Lawyer: Use of facial recognition software in school lunch queues in North Ayrshire

TPP is pleased to announce that the article that appeared on this site analysing the use of facial recognition software in schools in North Ayrshire has been republished by The Student Lawyer.

    The Student Lawyer is a go-to legal news and blogging site for law students. You can find the article here.

5RB: European Court of Human Rights upholds Article 8 privacy breach in relation to the reputation of a deceased person

In a case which builds upon pre-existing case law on the rights of the deceased, the European Court of Human Rights has found an Article 8 breach in relation to news articles published about a deceased Roman Catholic priest.

ML v Slovakia (no. 34159/17) concerned a number of articles published by three Slovakian newspapers about the historic sex offence convictions of the applicant’s son.

The Court found that the articles were inaccurate and sensationalist, noting: “However, it follows from what has been said above that the domestic courts failed to carry out a balancing exercise between the applicant’s right to private life and the newspaper publishers’ freedom of expression in conformity with the criteria laid down in the Court’s case-law.”

Concluding, the Court stated, applying Article 8:

    “…dealing appropriately with the dead out of respect for the feelings of the deceased’s relatives falls within the scope of Article 8 of the Convention”.

Furthermore, the Court stated a clear and concise view on the journalistic integrity of the reporting: “Although the journalists must be afforded some degree of exaggeration or even provocation, the Court considers that the frivolous and unverified statements about the applicant’s son’s private life must be taken to have gone beyond the limits of responsible journalism” (at p.47).

    5RB has an excellent case comment.

    ICO launches consultation on the Draft Journalism Code of Practice

    The ICO’s consultation on its Draft Journalism Code of Practice has begun.

Be sure to have your say: the deadline to submit responses is 22 January 2022.

The Code covers privacy safeguards among many other topics. In particular, it covers the journalism exemption under the Data Protection Act 2018, a broad exemption which disapplies certain requirements relating to the holding and processing of data.

Journalism should be balanced with other rights that are also fundamentally important to democracy, such as data protection and the right to privacy.

at p.4

    The Code substantively addresses the safeguarding of journalism under the exemption, briefly touching on balancing a free press against privacy rights before going on to discuss how this balance is struck under data protection laws:

Why is it important to balance journalism and privacy?

It is widely accepted that a free press, especially a diverse press, is a fundamental component of a democracy. It is associated with strong and important public benefits worthy of special protection. This in itself is a public interest.

Most obviously, a free press plays a vital role in the free flow of communications in a democracy. It increases knowledge, informs debates and helps citizens to participate more fully in society. All forms of journalistic content can perform this crucial role, from day-to-day stories about local events to celebrity gossip to major public interest investigations.

A free press is also regarded as a public watch-dog. It acts as an important check on political and other forms of power, and in particular abuses of power. In this way, it helps citizens to hold the powerful to account.

However, the right to freedom of expression and information should be balanced with other rights that are necessary in a democratic society, such as the right to privacy. The public interest in individual freedom of expression is itself an aspect of a broader public interest in the autonomy, integrity and dignity of individuals.

The influence and power of the press in society, and the reach of the internet, means that it is particularly important to balance journalism and people’s right to privacy.

This code provides guidance about balancing these two important rights by helping you to understand what data protection law requires and how to comply with these requirements effectively.

at p.25

    ICO intervenes in nine schools in North Ayrshire which are using facial recognition software to scan faces of pupils in lunch queues

According to the Financial Times and the Guardian, the ICO is set to intervene in nine schools in North Ayrshire following the discovery that pupils’ faces were being scanned in lunch queues to take payments.

    The ICO commented: 

    “Data protection law provides additional protections for children, and organisations need to carefully consider the necessity and proportionality of collecting biometric data before they do so. Organisations should consider using a different approach if the same goal can be achieved in a less intrusive manner. We are aware of the introduction, and will be making inquiries with North Ayrshire council.”

Whilst the company that provides the software argues that this is a safe way to take payments in the age of Covid, the question clearly arises, as the ICO rightly posits, as to whether a less invasive method of safely taking payments could be used.

Simple measures, such as issuing pupils with lunch cards that they can scan to identify themselves, or even just a unique ID number that could easily be anonymised and aggregated, would serve this purpose just as well.

Under Article 35 of the GDPR, a Data Protection Impact Assessment must be carried out before such software is used. This would assess whether the use of facial recognition software is a proportionate means of achieving the legitimate aim of taking payments securely. Aspects such as the retention period of the data, storage methods, the basis for processing, safeguards and processes for gathering consent must all be considered.

Schools should have mechanisms and documentation in place to explain to children the circumstances of this data collection and storage, and their rights under the GDPR, including an option to opt out of the data collection.

Under the GDPR, the age at which children can consent to the sharing of their personal data in England and Wales is as low as is permissible: thirteen. In Scotland, where the schools are located, the age is lower still, at twelve years of age.

    Interestingly, North Ayrshire Council indicated that 97% of pupils or their parents had given consent to this process. The Council has temporarily paused the rollout of the software given the ICO’s intervention.

CRB Cunninghams, the company that provides the software, stated that its cameras check pupils’ faces against encrypted templates, and thus operate differently from the “live” facial recognition used by the police to scan for criminal activity, which was challenged in the Bridges case.

    A Principal of one of the schools, David Waugh, commented:

    “The combined fingerprint and facial recognition system was part of an upgrade to the catering cashless system, so that the time it takes to serve students is reduced, thus giving a better dining experience. However, we will not be using the facial recognition aspect.”

Mishcon de Reya has an excellent analysis of these issues, which concern Scotland and are thus outside of TPP’s remit. The BBC also reports on the story.

    Duchess of Sussex, Meghan Markle successful in privacy claim against the Mail on Sunday

Meghan Markle has been successful in her privacy claim against the Mail on Sunday regarding the publication of extracts from a private letter to her father.

The Duchess’s request for summary judgment on the parts of the claim concerning privacy was granted by Warby J.

In finding that the statement of case disclosed no reasonable grounds for defending the claim, Warby J considered whether the pleaded defence was capable of answering the claim for misuse of private information. He held that:

“(i) at the time of its publication, the claimant had a reasonable expectation of privacy in respect of the contents of the Letter, and (ii) this being the case, and applying the requisite balancing exercise, the defendant has failed to discharge the burden which rests upon it to advance a viable justification for interfering with that right.” at p.35

    Question (i) – A reasonable expectation of privacy

Warby J considered whether the Defence set out a case, with a reasonable prospect of success, that the claimant had no reasonable expectation of privacy in the information at issue, and whether there was a realistic prospect of the defendant successfully defending the point at trial. He considered the answer to be no on both counts.

    He strictly applied the criteria found in the Murray case:

“(1) The claimant was a prominent member of the Royal Family, and in that sense a public figure, who had a high public profile, and about whom much had been and continued to be written and published; this is an important feature of the background and the circumstances but

(2) the nature of the “activity” in which she had engaged was not an aspect of her public role or functions; she was communicating to her father about his behaviour, its impact on her, her feelings about it, and her wishes for the future; and

(3) she was doing this in a letter sent to him alone, privately, by means of a courier service.

(4) The “intrusion” involved the publication of much if not most of the information in the Letter by way of sensational revelations over four pages of a popular newspaper and online, to a very large readership; and that, in broad terms, was the purpose of the “intrusion”.

(5) There was no consent, and it is beyond dispute that this was known to or could have been inferred by Mr Markle and the defendant.

(6) The unwanted disclosure was likely to cause the claimant at least some distress, especially as it was done with the co-operation of her father, and in the context of a detailed and critical response by him to the content of the Letter.

(7) The information was given to the defendant by the claimant’s father.” at p.69

    Question (ii) – the balancing exercise

Warby J next turned to whether the publication could be proportionate in pursuit of the legitimate aim of protecting the rights of others: is the interference with freedom of expression that would be represented by a finding of liability necessary and proportionate in pursuit of the legitimate aim of protecting the rights of the claimant?

In concluding that it could not, significant weight was given to the Duchess’s status as a public figure. A theme of the defendant’s arguments was that the Duchess had sought to manipulate her image so as to be seen favourably. In this case, the argument that publication prevented the public from being misled, a potentially weighty argument, failed.

Warby J considered the defendant’s case on this point “legally untenable or flimsy at best”, concluding as to part (ii):

“The claimant had a reasonable expectation that the contents of the Letter would remain private. The Mail Articles interfered with that reasonable expectation. The only tenable justification for any such interference was to correct some inaccuracies about the Letter contained in the People Article. On an objective review of the Articles in the light of the surrounding circumstances, the inescapable conclusion is that, save to the very limited extent I have identified, the disclosures made were not a necessary or proportionate means of serving that purpose. For the most part they did not serve that purpose at all. Taken as a whole the disclosures were manifestly excessive and hence unlawful. There is no prospect that a different judgment would be reached after a trial. The interference with freedom of expression which those conclusions represent is a necessary and proportionate means of pursuing the legitimate aim of protecting the claimant’s privacy.” at p.128

The copyright infringement questions were partially disposed of. The remaining copyright issues were left to be considered following the directions given at the next hearing on 2 March 2021.