On 7 January 2025 Meta made sweeping changes to its Community Standards policy on Hateful Conduct (the "Standards"). This article examines how these changes put marginalised groups at serious risk and how, in the context of the Online Safety Act 2023 (the "Act"), they place Meta in breach of its duties to prevent and protect users from harm.
In particular, these changes allow LGBTQ+ persons to be called mentally ill, transgender people to be called "it", and women to be referred to as property in user-to-user communications on Meta's platforms, such as Facebook and Instagram.
The changes themselves:
Among the most concerning removals from and additions to the Standards are the following, quoted from the Standards itself:
- allowing women to be referred to as “household objects or property or objects in general”
- allowing transgender or non-binary people to be referred to as "it"
- We do allow content arguing for gender-based limitations of military, law enforcement, and teaching jobs. We also allow the same content based on sexual orientation, when the content is based on religious beliefs.
- We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like “weird.”
[The Meta Hateful Conduct policy can be found here: https://transparency.meta.com/en-gb/policies/community-standards/hateful-conduct/
If you select the 7 January 2025 option in the changelog, you can more easily see the most recent changes made by Meta. These include those referenced in this article, which, if you scroll down after pressing "read more", appear under the Tier 2 heading.]
The relevant provisions of the Online Safety Act 2023:
So how do these changes sit within the framework of the Act?
The Act received Royal Assent on 26 October 2023, and many of its provisions are still being implemented in phases. As user-to-user services, both Facebook and Instagram come under the purview of the Act.
Section 7 of the Act places a duty of care on providers of user-to-user services such as Meta. More particularly, s.7(2) of the Act requires Meta to comply with the duties regarding illegal content set out in s.10(2) to (8) of the Act, as well as the duties about complaints procedures set out in s.21.
It is worth digging into the provisions of s.10(2), which state:
(2) A duty, in relation to a service, to take or use proportionate measures relating to the design or operation of the service to—
(a) prevent individuals from encountering priority illegal content by means of the service,
(b) effectively mitigate and manage the risk of the service being used for the commission or facilitation of a priority offence, as identified in the most recent illegal content risk assessment of the service, and
(c) effectively mitigate and manage the risks of harm to individuals, as identified in the most recent illegal content risk assessment of the service (see section 9(5)(g)).
Furthermore, section 10(3) states:
(3) A duty to operate a service using proportionate systems and processes designed to—
(a) minimise the length of time for which any priority illegal content is present;
(b) where the provider is alerted by a person to the presence of any illegal content, or becomes aware of it in any other way, swiftly take down such content.
From these provisions, two questions arise: what is "priority illegal content", and what is a "priority offence"?
"Priority illegal content" is defined at s.59(10) of the Act:
(10) "Priority illegal content" means—
(a) terrorism content,
(b) CSEA content, and
(c) content that amounts to an offence specified in Schedule 7.
Schedule 7 lists various harassment offences.
What is a priority offence?
"Priority offence" is defined at s.59(7):
(7) "Priority offence" means—
(a) an offence specified in Schedule 5 (terrorism offences),
(b) an offence specified in Schedule 6 (offences related to child sexual exploitation and abuse), or
(c) an offence specified in Schedule 7 (other priority offences).
If we go to Schedule 7, we see that various harassment offences are listed as priority offences.
The application of the provisions of the Online Safety Act 2023:
We can now see that where harassment meeting the criminal threshold occurs, such as the hateful statements the Standards now allow, Meta, as the owner of Facebook and Instagram, has a duty to prevent individuals from encountering such content and must mitigate and manage the risk of those platforms being used for the commission of such priority offences.
Indeed, the Sentencing Guidelines for such offences note that where they are committed by demonstrating hostility based on presumed characteristics of the victim, including sex, sexual orientation or transgender identity, this can justify a finding of high culpability and so impact sentencing.
Yet here is Meta, facilitating the commission of such offences by making explicit provision that these statements are allowed on its platforms. Adding insult to what may result in actual injury, it attempts to justify this "given political and religious discourse" in an LGBTQ+ context.
Being homosexual was declassified as a mental disorder by the World Health Organisation ("WHO") in 1990, and in 2019 the WHO reclassified transgender people's gender identity as gender incongruence, moving it from the mental health and behavioural disorders chapter to conditions related to sexual health.
Yet Meta still thinks it is acceptable to equate being LGBTQ+ with mental illness?
Section 10(2) is notably limited to taking or using "proportionate measures". In the case of Instagram and Facebook, these are clearly among the most sophisticated and wide-ranging user-to-user services there are. As such, it is easily arguable that such services must have policies in place which entrench the protection of users at the outset, prevent such content on their platforms, and allow complaints from users who have been subjected to such comments to be upheld rather than dismissed, or the service provider must face the consequences of breaching the Act.
Indeed, my hope is that, as the policies apply worldwide, online safety laws will intervene against such pernicious changes, which further marginalise those at risk and expose them to abuse at the whim of political pandering.
Non-compliance with any regulatory action from Ofcom could, rightly, have serious implications for companies such as Meta: under the Act, companies can be fined up to £18 million or 10 per cent of their qualifying worldwide revenue, whichever is greater.
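To illustrate the scale of that sanction, here is a minimal sketch of the "whichever is greater" calculation. The revenue figure and the function are purely hypothetical illustrations of mine; Meta's actual qualifying worldwide revenue would be determined under the Act and its secondary legislation.

```python
# Illustrative sketch only: the Act caps fines at the greater of
# £18 million and 10% of qualifying worldwide revenue.

FIXED_CAP_GBP = 18_000_000   # £18 million statutory figure
REVENUE_SHARE = 0.10         # 10 per cent of qualifying worldwide revenue

def maximum_fine(qualifying_worldwide_revenue_gbp: float) -> float:
    """Return whichever of the two statutory figures is greater."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * qualifying_worldwide_revenue_gbp)

# With a hypothetical qualifying worldwide revenue of £100 billion,
# the cap would be £10 billion - far above the £18 million figure.
print(f"£{maximum_fine(100_000_000_000):,.0f}")  # £10,000,000,000
```

For a company of Meta's scale, in other words, the revenue-based limb would dwarf the fixed £18 million limb.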
In the UK, Ofcom, which regulates this space, has said: "from 17 March 2025, providers will need to take the safety measures set out in the Codes of Practice or use other effective measures to protect users from illegal content and activity."
Even though Meta is not based in the UK, the Government's Online Safety Act explainer makes the position clear, as do the provisions of the Act:
“The Act gives Ofcom the powers they need to take appropriate action against all companies in scope, no matter where they are based, where services have relevant links with the UK. This means services with a significant number of UK users or where UK users are a target market, as well as other services which have in-scope content that presents a risk of significant harm to people in the UK.”
The Draft Codes of Practice:
Also of relevance here are the illegal content Codes of Practice for user-to-user services, which set out the recommended measures to be adopted by service providers. In particular, for large or multi-risk services such as Instagram and Facebook, they recommend having policies in place for the removal of illegal content.
In changing its Standards in this way, Meta has also rendered Instagram and Facebook in breach of the Codes of Practice issued by Ofcom pursuant to the Act. It should be noted that whilst platforms are recommended to follow the Codes, they can deviate from them, but must justify where they do so.
Other applicable UK legislation:
It should also be noted that other UK legislation is applicable in these instances, including but not limited to:
- Communications Act 2003, s.127
- Malicious Communications Act 1988, s.1
- Equality Act 2010: particularly in an employment context, the discrimination provisions may be applicable.