The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Part VI, Updates to the Bill

PART VI: UPDATES

Any commentary upon legislation in progress risks rapidly becoming outdated: an occupational hazard to which this piece is by no means immune.

Ahead of the OSB’s return to Parliament, the Government issued a press release on 28 November 2022 noting a number of important developments to the amended Bill.


The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts III, IV and V

PART III: CRITICISM

In a rare show of national unity, disapproval of the OSB has spanned both ends of the political spectrum. Alongside criticism from the Labour culture minister, Conservative politicians have also weighed in on the ‘legal but harmful’ debate. Thinktanks and non-profit groups have likewise been apprehensive.

Perhaps most headline-grabbing was the censure of the former Supreme Court judge, Lord Sumption, who denounced the OSB in an article in The Spectator, and subsequently on the Law Pod UK podcast.


The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts I and II

A number of Bills proposed by the recent Conservative governments have sparked controversy among commentators: among them, the Northern Ireland Protocol Bill, the Retained EU Law Bill, and the ‘British’ Bill of Rights Bill. Taking its place in the rogues’ gallery is the Online Safety Bill (OSB).

Now returning to the House of Commons on 5 December 2022 to finish its Report Stage, the OSB has come some way since the ‘Online Harms’ White Paper published in April 2019. The Bill raises important questions about freedom of expression, online speech regulation and government (over)reach.

This article has four principal components.

Part I lays out the content and objectives of the Bill, highlighting its legislative development and the key issues arising from that. Part II situates the Bill within the wider context of online regulation, considering how recent developments may inflect the Bill’s impact.

This provides the framework for Part III, which addresses the various criticisms that the Bill has received from commentators from across the political spectrum. Part IV then examines the broader legal and theoretical consequences of the Bill, posing further questions to be answered. Some conclusions are drawn in Part V.

An appended Part VI briefly outlines the most recent updates to the Bill.

PART I: CONTENT

Much of the OSB’s content was clarified by the Commons Digital, Culture, Media and Sport (DCMS) Committee Report in January 2022, and the Government’s Response to this in March 2022.

As these reports confirmed, the main priority of the OSB is evident from its name change. Now couched in broader terms, the Bill is designed to protect internet users’ online safety by way of three central objectives (Response, at [2]):

  1. To tackle illegal content and activity.
  2. To deliver protection for children online.
  3. To afford adults greater control, while protecting freedom of expression.

To achieve these objectives, the Bill operates on a duty of care model. Under this model, online platforms are liable only for their own conduct: the Bill seeks to hold platforms responsible for systemic ‘lack of diligence in failing to adopt preventive or remedial measures’ (Report, at [7]). This is, in theory, a less stringent regulatory model than ‘intermediary liability’, under which online platforms would also be liable for others’ content and activity.

Moreover, service providers will not owe a limitless duty of care (Report, at [4]). Instead, the Bill divides providers into various categories, which in turn are subject to specific duties. For example, Category 1 (high-risk and high-reach, user-to-user) services are deemed to be the largest and most risky, so incur additional duties as compared to Categories 2A (all regulated search services) and 2B (the remaining regulated user-to-user services).

Enforcement of such duties lies not with the government, but with the regulatory authority Ofcom, to which the legislation grants powers of oversight and enforcement (Response, at [3]).

Central to the Bill’s duty of care model is its typology of online content. Initially, the OSB distinguished illegal from legal material, the latter of which it subdivided into two – producing three content typologies to align with the Bill’s stated objectives:

  1. Illegal content
  2. Legal but harmful content
    • Content that is harmful to children
    • Content that is harmful to adults (for Category 1 services)

The Bill originally defined each type of content as follows (Report, at [5]):

  • Illegal content: content whose use / dissemination constitutes a relevant offence
  • Content harmful to children and adults:
    • Designated – content of a type designated in regulations made by the Secretary of State
    • Non-designated – content which fulfils one of the general definitions
      • These apply where the provider has reasonable grounds to believe that there is a material risk of the content having (even indirectly) a significant adverse physical / psychological impact on a child or adult (as applicable) of ordinary sensibilities.

These definitions were essential to the Bill’s regulatory framework, since they directly underpinned the associated risk assessment and safety duties (Report, at [6]). Simply put, how content is defined determines what a provider is required (or not) to do about it. The lower the definitional bar, the more content is subject to regulation – and, potentially, removal.

While illegal content has certainly provoked discussion, controversy has principally surrounded the ‘legal but harmful’ debate. The regulation of such content raises the question: can moderation be justified where the content, by its nature, does not meet the criminal standard?

Of particular interest are the Government’s subsequent amendments to the draft Bill, following the DCMS Report. Despite accepting eight of the Committee’s recommendations, the Government’s Response stated in the legal but harmful context that ‘rather than using the Committee’s proposed reframing, we have made other changes that meet a similar objective’ (at [29]).  

As the Bill stood in March 2022, the Government had amended its position in the following key areas:

  1. Definition of ‘harmful’ – This was simplified under the revised Bill: content had to present a material risk of significant harm to an appreciable number of children/adults (Response, at [30]). The key threshold to engage safety duties was one of ‘priority’ harmful content.
  2. Designation of types of harmful content – As envisaged in the draft Bill, priority content harmful to children and adults was to be designated by the Secretary of State in secondary legislation, following consultation with Ofcom. This would now be subject to the affirmative resolution procedure, to maximise parliamentary scrutiny (Response, at [12], [55]-[57]). The Government also published an indicative list of what might be designated under the Bill as priority harmful content.
  3. Non-priority content harmful to adults – The revised Bill removed the obligation upon service providers to address non-priority content harmful to adults. Companies were required only to report its presence to Ofcom (Response, at [31], [40]).

According to a Ministerial Statement released in July 2022, service providers’ safety duties regarding ‘legal but harmful’ content could thus be broken down as follows:

  1. Children – Primary priority content harmful to children
    • Services must prevent children from encountering this type of content altogether
  2. Children – Priority content harmful to children
    • Services must ensure content is age-appropriate for their child users
  3. Adults – Priority content harmful to adults
    • Applies only to Category 1 services
    • These must address such content in their terms and conditions, but may set their own tolerance: this may range from removing such content, to allowing it freely.

PART II: CONTEXT

To understand the ‘legal but harmful’ debate more fully, we must situate the OSB in context.

Europe:

In the EU, the recently adopted Digital Services Act (DSA) shares some similarities with the OSB: both provide a legal framework for online platforms’ duties regarding content moderation.

However, Dr Monica Horten has identified the following distinctions:

  • The DSA focuses on regulating illegal rather than merely ‘harmful’ content. In doing so, according to the non-profit Electronic Frontier Foundation, the DSA ‘avoids transforming social networks and other services into censorship tools’ – a position from which the OSB’s broader scope deviates.
  • The DSA unequivocally recognises the right to freedom of expression as guaranteed by Article 11 of the Charter of Fundamental Rights, in accordance with which service providers must act when fulfilling their obligations. The adequacy of free speech protection under the OSB may be less assured, as considered below.
  • The measures also differ in their provision of redress. While the DSA includes both prospective and retrospective procedural safeguards for users who have acted lawfully, the OSB arguably falls short – despite the Government’s assurance that users’ access to courts would not be impeded by the Bill’s ‘super-complaints mechanism’ (Response, at [18]).

It is also worth noting the proposed European Media Freedom Act (EMFA), broadly hailed as a positive step for journalistic pluralism within the EU. Granted, the OSB purports to exclude the press (‘news publishers’) from its content moderation rules. However, uncertainty remains as to the possible regulation of comments sections on newspaper websites, not to mention newspapers’ own activity on social media.

USA:

Across the Atlantic, the US courts show some signs of a legal vacuum developing around over-moderation. Recent attempts by social media users to challenge online content moderation by asserting their First Amendment rights have failed, on the basis that sites such as Facebook and Twitter are not ‘state actors’, but rather private actors not subject to constitutional claims.

As a counterpoint, the recent takeover of Twitter by Elon Musk may illustrate the risks of under-moderation. Concerns are particularly acute in light of Musk’s reinstatement of banned high-profile accounts – despite having stated that he would wait until a new ‘content moderation council’ had convened – and his announcement of a general amnesty. This follows the removal of thousands of Twitter content moderators, and the swift resurgence of hate speech and misinformation.

UK:

Returning to the UK, the wider position of freedom of expression is somewhat ambiguous.

On the one hand, the aforementioned Bill of Rights Bill (BORB) claims to improve safeguards: clause 4 requires judges to give ‘great weight’ to protecting freedom of expression. However, the former Deputy President of the Supreme Court, Lord Mance, has queried how different this is to the ‘particular regard’ provision in s 12(4) of the HRA. Other commentators have questioned whether this presumptive priority of Article 10 may in fact skew the balance in privacy actions, which rely on the presumptive parity between Articles 8 and 10. On either analysis, the BORB’s parallel statutory attempt to enshrine freedom of expression – recalling the OSB’s third objective – is not encouraging.

On the other hand, calls for greater online regulation have gained traction following the inquest into the death of the British teenager Molly Russell. The senior coroner found in October that the 14-year-old had suffered from ‘the negative effects of on-line content’, calling inter alia for ‘the effective regulation of harmful on-line content’, and for legislation ‘to ensure the protection of children’ against its effects. This offers a compelling policy argument in favour of the OSB’s second objective.

This overview of the Bill’s content and context provides the factual basis for a normative analysis of its criticisms and consequences in Parts III and IV.

Naomi Kilcoyne is a Visiting Lecturer in Public Law at City University, having completed her GDL there in 2021-22. She has a particular interest in the interplay between public and private law.

The Personal Data life cycle: Where to start the analysis? – Vladyslav Tamashev, Privacy lawyer at Legal IT Group

Have you ever thought about the data on your computer? Whether you are a content creator, a programmer, or just a regular user, thousands of different files have been created, downloaded, and altered on your device. But what happens when some of that data becomes useless to you?

Usually, this data will be manually deleted to free up space on your storage device, or it will be wiped during an OS reinstallation. Everything that happens to that data, from its creation or collection until its destruction, is called the data life cycle.

The data life cycle is the sequence of stages that a particular unit of data passes through. The simplified life cycle model has five basic stages: Collection, Processing, Retention, Disclosure, and Destruction. In practice, when we talk about the personal data life cycle, this sequence can be dramatically different, depending on the type of information, its usage and origin, company policies, and personal data protection regulations and legislation.
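To make the five-stage model concrete, here is a minimal, purely illustrative sketch in Python: the stage names come from the model above, while the record type and helper names are hypothetical and not drawn from any particular system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Stage(Enum):
    """The five basic stages of the simplified data life cycle model."""
    COLLECTION = 1
    PROCESSING = 2
    RETENTION = 3
    DISCLOSURE = 4
    DESTRUCTION = 5


@dataclass
class DataRecord:
    """A hypothetical unit of data whose life cycle events are logged."""
    name: str
    history: list = field(default_factory=list)

    def record_stage(self, stage: Stage, note: str = "") -> None:
        # Append a timestamped life cycle event for this unit of data.
        self.history.append((datetime.now(timezone.utc), stage, note))


# Example: one record moving through part of its life cycle.
record = DataRecord("customer_profile")
record.record_stage(Stage.COLLECTION, "submitted via sign-up form")
record.record_stage(Stage.PROCESSING, "used to generate an order confirmation")
record.record_stage(Stage.DESTRUCTION, "erased following a deletion request")

for when, stage, note in record.history:
    print(when.isoformat(), stage.name, note)
```

In a real system each stage would carry its own legal significance (retention periods, disclosure logs, verifiable destruction), but the ordering of stages is what the model itself captures.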


Copyright

Copyright under English law is primarily governed by the Copyright, Designs and Patents Act 1988. Copyright can extend to protect videos and images taken by you on your devices.

In such circumstances, these videos and images are protected for 70 years from the end of the calendar year in which their creator dies. This can function to protect photographs and videos that you have taken from use by third parties. By enforcing your copyright ownership you can control who has the right to use and edit the images and/or footage in question. Enforcement usually takes the form of a cease and desist letter notifying the third party of your ownership of the material and asking that they stop using it as soon as possible.


Revisiting the right to be forgotten, the NT1 and NT2 case

The right to be forgotten, or right to erasure, under data protection legislation, as established in the Google Spain case, affords individuals significant protection over information about them. In this post, we consider the seminal case of NT1 and NT2, which is illustrative of this fact.

Breach of confidence

Breach of confidence occurs when information shared between parties in circumstances of confidence is disclosed to a third party in breach of the duty of confidence that arises. What imposes the duty to protect the information in a breach of confidence case is a pre-existing confidential relationship between the parties.

The case of Coco v A.N. Clark involved a claimant looking to bring a new form of moped to the market; information shared during negotiations was then allegedly used in breach of obligations of confidence. The case set out the three elements of the claim and highlights the most common scenario in which breach of confidence claims arise: those involving business secrets and negotiations.

In relation to privacy, breach of confidence tends to cover confidential conversations and communications where the nature of the information itself attracts a reasonable expectation of privacy. This may relate to communications with lawyers or medical professionals, for example.


Defamation

Defamation seeks to protect an individual’s reputation from false statements which harm, or may harm, it. Libel (the more permanent forms of communication, such as writing) and slander (the more transient forms, such as speech) both refer to a statement published to a third party which has caused, or is likely to cause, serious harm to the claimant’s reputation.

Defamation is a construct of the common law, built up over a series of legal cases. Defamation claims have been held to extend to social media, such as the tweets made by Katie Hopkins about the food writer Jack Monroe.

Thornton v Telegraph Media Group Ltd [2011] EWHC 159 (QB) highlighted that defamation claims often cross the threshold to engage Article 8 privacy rights. In particular, the European Court of Human Rights has ruled that:

“In order for Article 8 to come into play, however, an attack on a person’s reputation must attain a certain level of seriousness and in a manner causing prejudice to personal enjoyment of the right to respect for private life…”

Claimants have to show that the statement at issue has caused, or is likely to cause, serious harm to their reputation, per s.1 Defamation Act 2013. This is typically shown via evidence such as the circulation, subscriber numbers and views of the statement at issue.

The defences available to a defamation claim are:

  1. Truth: That the statement itself was substantially true.
  2. Honest opinion: That the statement was one of opinion and that an honest person could have reasonably held that opinion.
  3. Public interest: That the matter was one which was in the public interest and the publisher of the statement reasonably believed it to be so.
  4. Privilege: This can be absolute (such as a Parliamentary statement) or qualified (e.g. job references). Qualified privilege does not protect the publisher of a statement where it was published maliciously.

Passing off

Passing off is typically used to protect a person’s name or image, which has attracted goodwill as a business commodity. There are three well-established elements of passing off as stated in the case of Reckitt & Colman Products Ltd v Borden Inc & Ors [1990] RPC 341:


  1. Goodwill or reputation in the mind of the public attached to goods or services;
  2. The defendant misrepresented that their goods or services are those of the claimant; and
  3. The claimant suffered or is likely to suffer damage due to the erroneous belief in the mind of the public that the defendant’s goods are the claimant’s.

A case which illustrates this is Fenty v Arcadia [2015] EWCA Civ 3, in which Rihanna brought a passing off action against Topshop. The action arose from Topshop’s unauthorised use of an image of Rihanna on a line of t-shirts. It was first considered that:

  “registered trade marks aside, no-one can claim monopoly rights in a word or a name. Conversely, however, no-one may, by the use of any word or name, or in any other way, represent his goods or services as being the goods or services of another person and so cause that other person injury to his goodwill and so damage him in his business” – p.34

However, it was concluded that all elements of the tort were made out by the claimant. Rihanna had a marked presence in the fashion industry and had generated significant goodwill. By using her image on its t-shirts, Topshop created a likelihood of confusion among customers that the t-shirts were endorsed by Rihanna herself; they were not. It was therefore considered that Rihanna had suffered damage due to the unauthorised use of her image. This was despite the fact that there is no standalone right at law to protect one’s image.

The Fenty case is illustrative of how passing off can be used to protect elements of the person which are inherently private identifying factors, foremost among them a person’s likeness or name.

It should be noted that the rationale for protection in passing off cases is the protection of the goodwill which attaches to these elements of the person. The nature of a passing off action is therefore more akin to other economic torts, such as malicious falsehood. Notwithstanding this, the propensity for passing off actions to be used to protect elements of the persona that have an inherently private character is significant.

Malicious falsehood

These claims stem from the malicious publication of a false statement which identifies the claimant and has caused them financial loss; these four elements must be proven by the claimant. What malicious falsehood seeks to protect is the claimant’s economic rights, primarily the goodwill in their business. Therefore, in many cases claimants will seek to show special pecuniary loss in the form of damage to their business, evidenced by loss of profits.

In some cases, however, no loss needs to be proven by the claimant. These instances are outlined in s.3(1)(a) and (b) Defamation Act 1952 and typically involve instances where the statement complained of is in writing and was calculated to cause pecuniary damage to the plaintiff.

Malicious falsehood is concerned first and foremost with the falsity of a statement, rather than with the matters of comment or opinion around which defamation claims are typically debated.

“Some malicious falsehood claims also involve Art 8 (privacy) rights, although less frequently than in defamation claims” – Thornton v Telegraph Media Group Ltd [2011] EWHC 159 (QB) at p.33

The requirement for financial loss to be evidenced in malicious falsehood cases means that the tort less often engages Article 8 issues, which more typically arise from personal attacks meriting Article 8 protection. As an economic tort, malicious falsehood sits less easily with Article 8 issues than the personal tort of defamation (see Ajinomoto Sweeteners Europe SAS v Asda Stores Limited [2010] EWCA Civ 609).

In a malicious falsehood claim, damages are compensatory in nature: they seek to provide compensation for the pecuniary loss caused by the false statement.

As a practical matter, defamation and malicious falsehood claims are typically brought together. Where Article 8 rights are engaged, the statements concerned can include false allegations which impinge upon the private life of the claimant. These include mixed statements whose personal imputations damage the claimant’s business, such as statements about infidelity or convictions.
