s.230 of the Communications Decency Act – Gonzalez v Google No. 21-1333, an upcoming challenge to internet platforms’ protections – Citation – US – The Associated Press

The Associated Press has highlighted, in a long read, a legal case which seeks to challenge the protection of internet platforms under s.230 of the Communications Decency Act.

The Supreme Court case concerns liability for YouTube suggestions which, it is argued, helped the Islamic State recruit. The case is brought by the family of Nohemi Gonzalez, who tragically lost her life in a terrorist attack in Paris.

The case is due to be heard on Tuesday 21 February.

See here for the article; for more details, see SCOTUSblog.

“Issue: Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information.”

SCOTUSblog

Liberty and Privacy International v Security Service [2023] UKIPTrib 1 – MI5 admitted to having used personal data unlawfully without applying the safeguards of retention, review and disposal

MI5 admitted that personal data had been unlawfully processed and retained between 2016 and 2019 due to failures in its retention, review and destruction practices.

For a summary of MI5’s failings in its handling of personal data in particular, see pg.79 of the open judgment.

For further, more detailed, context regarding the case see the Privacy International press release.

Quotes from caselaw 7: Driver v CPS [2022] EWHC 2500 (KB) – a departure from the starting point of a reasonable expectation of privacy in criminal investigations pre-charge on “special facts”, and low-value data breaches

This case is illustrative of a set of facts where the legitimate starting point of a reasonable expectation of privacy in respect of a criminal investigation at the pre-charge stage under English law can be departed from:

Whilst a reasonable expectation of privacy in relation to a police investigation is the starting point, on the particular and somewhat special facts of this case, I am unable to conclude that by June 2019 such an expectation subsisted in relation to the information that the CPS were considering a charging decision in relation to the Claimant.

at p.147, Knowles J.

Note the judge’s reference to the “special facts” of the case. For the special facts on which this case turns in relation to the article 8 grounds, see p.148-151.

The case concerned the passing of a police file to the CPS and the disclosure of that fact to a third party. The claimant objected to this on data protection and privacy grounds.

Whilst the disclosure did not include the name of the claimant, it was found that “personal data can relate to more than one person and does not have to relate exclusively to one data subject, particularly when the group referred to is small.” – p.101

In this case, the operation in question, Operation Sheridan, concerned only eight suspects, of which the claimant was one.

Accordingly, in finding for the claimant, it was considered that “this data breach was at the lowest end of the spectrum. Taking all matters together in the round, I award the Claimant damages of £250. I will also make a declaration that the Defendant breached the Claimant’s rights under Part 3 of the DPA 2018.” – at p.169

However, as p.147 reflects, the claim for breach of article 8 was unsuccessful. This was because the judge considered that there were “special facts” on which this case turned in relation to the application of article 8, meriting departure from the starting point of there being a reasonable expectation of privacy in criminal investigations at the pre-charge stage (in particular, see p.148-151).

Such “special facts” included, in combination: an investigation ongoing for many years; the Claimant’s own waiver of his right to privacy by himself making details of the case public at the pre-charge stage (including to media outlets); and further proceedings after that initial disclosure, including the Claimant’s arrest in 2017 and the further passing of police files to the CPS in 2018 in relation to the same Operation Sheridan.

This case is illustrative of how privacy cases in light of ZXC fall within a spectrum, allowing for circumstances in which the legitimate starting point it established can be departed from, albeit that this case turned on “special facts” which are, in this instance, clearly narrow and unusual. It also clarifies what facts are considered to give rise to a data breach “at the lowest end of the spectrum”, and that the low value of such breaches is reflected in nominal damages awards – in this case, £250 and a declaration.

This case was number 2 on my Top 10 Data Protection and Privacy Law Cases 2022.

Privacy Law in Practice – An Insight into Data Protection Law as an In-House IT Lawyer – Madeleine Weber

Welcome to Privacy Law in Practice, our series at TPP demystifying what it is like to practise in privacy law.

Have you ever wondered which data protection law issues come up in practice? It obviously depends on the industry and area you work in, but data protection law might be more prevalent than you think.

Continue reading

Top 10 Privacy and Data Protection Cases 2022

Inforrm covered a wide range of data protection and privacy cases in 2022. Following my posts in 2018, 2019, 2020 and 2021, here is my selection of notable privacy and data protection cases across 2022.

  1. ZXC v Bloomberg [2022] UKSC 5

This was the seminal privacy case of the year, decided by the UK Supreme Court. It considered whether, in general, a person under criminal investigation has, prior to being charged, a reasonable expectation of privacy in respect of information relating to that investigation.

Continue reading

Top 10 Defamation Cases 2022

Inforrm reported on a large number of defamation cases from around the world in 2022. Following my widely read posts on 2017, 2018, 2019, 2020 and 2021 defamation cases, this is my personal selection of the most legally and factually interesting cases from England, Australia and Canada from the past year.

  1. Vardy v Rooney [2022] EWHC 2017 (QB)

An interim hearing in this case featured at number five in my 2021 list. We now have the final judgment in the “Wagatha Christie” case between Rebekah Vardy and Coleen Rooney as number one in my 2022 list. The case was one of the most high-profile libel cases in recent years, concerning the alleged leaking of posts from Ms Rooney’s private Instagram account to the Sun by Ms Vardy, via her agent Ms Caroline Watt. The resulting social media post by Ms Rooney regarding Ms Vardy’s involvement in the leaks was the subject of the libel claim.

Ultimately, the claim of libel against the defendant, Coleen Rooney, was dismissed because the defence of truth was established. Notably, “the information disclosed was not deeply confidential, and it can fairly be described as trivial, but it does not need to be confidential or important to meet the sting of the libel.” [287]

Continue reading

Festive wishes from TPP

We would like to thank all our readers and subscribers for visiting TPP over the past year. Many thanks also to our contributors across the past year for their insight and expertise.

We are currently working on bringing more informative pieces on privacy to you – including a series on what privacy law is like to practise as a professional (if you would like to contribute, be sure to let us know) and our traditional Top 10 cases of the year across defamation, privacy law and data protection, in association with the esteemed International Forum for Responsible Media Blog.

In the meantime, if any of our readers would like to guest write for us, we encourage you to get in touch – we always welcome the opportunity to work with you.

Our case quote of the year is from the seminal case that was heard before the UK Supreme Court, ZXC v Bloomberg [2022] UKSC 5, finding that, as a legitimate starting point, criminal suspects have a reasonable expectation of privacy in the fact of an investigation at pre-charge stage:

…whether there is a reasonable expectation of privacy in the relevant information is a fact-specific enquiry which requires the evaluation of all circumstances in the individual case… We consider that the courts below were correct in articulating such a legitimate starting point to the information in this case. This means that once the claimant has set out and established the circumstances, the court should commence its analysis by applying the starting point.

[And, as such:]

The courts below were correct to hold that, as a legitimate starting point, a person under criminal investigation has, prior to being charged, a reasonable expectation of privacy in respect of information relating to that investigation and that in all the circumstances this is a case in which that applies and there is such an expectation.

at p.144 and p.146, per Lord Hamblen and Lord Stephens

See our comment on the case for more information.

A very happy Christmas and New Year to you all.

The Privacy Perspective Founder and Editor, Suneet Sharma

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Part VI, Updates to the Bill

PART VI: UPDATES

Any commentary upon legislation in progress risks rapidly becoming outdated: an occupational hazard to which this piece is by no means immune.

Ahead of the OSB’s return to Parliament, the Government issued a press release on 28 November 2022 noting a number of important developments to the amended Bill.

Continue reading

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts III, IV and V

PART III: CRITICISM

In a rare show of national unity, disapproval of the OSB has spanned both ends of the political spectrum. Alongside criticism from Labour’s shadow culture minister, Conservative politicians have also weighed in on the ‘legal but harmful’ debate. Thinktanks and non-profit groups have likewise been apprehensive.

Perhaps most headline-grabbing was the censure of the former Supreme Court judge, Lord Sumption, who denounced the OSB in an article in The Spectator, and subsequently on the Law Pod UK podcast.

Continue reading

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts I and II

A number of Bills proposed by the recent Conservative governments have sparked controversy among commentators: among them, the Northern Ireland Protocol Bill, the Retained EU Law Bill, and the ‘British’ Bill of Rights Bill. Taking its place in the rogues’ gallery is the Online Safety Bill (OSB).

Now returning to the House of Commons on 5 December 2022 to finish its Report Stage, the OSB has come some way since the ‘Online Harms’ White Paper published in April 2019. The Bill raises important questions about freedom of expression, online speech regulation and government (over)reach.

This article has four principal components.

Part I lays out the content and objectives of the Bill, highlighting its legislative development and the key issues arising from that. Part II situates the Bill within the wider context of online regulation, considering how recent developments may inflect the Bill’s impact.

This provides the framework for Part III, which addresses the various criticisms that the Bill has received from commentators across the political spectrum. Part IV then examines the broader legal and theoretical consequences of the Bill, posing further questions to be answered. Some conclusions are drawn in Part V.

An appended Part VI briefly outlines the most recent updates to the Bill.

PART I: CONTENT

Much of the OSB’s content was clarified by the Commons Digital, Culture, Media and Sport (DCMS) Committee Report in January 2022, and the Government’s Response to this in March 2022.

As these reports confirmed, the main priority of the OSB is evident from its name change. Now couched in broader terms, the Bill is designed to protect internet users’ online safety by way of three central objectives (Response, at [2]):

  1. To tackle illegal content and activity.
  2. To deliver protection for children online.
  3. To afford adults greater control, while protecting freedom of expression.

To achieve these objectives, the Bill operates on a duty of care model. Under this model, online platforms are liable only for their own conduct: the Bill seeks to hold platforms responsible for systemic ‘lack of diligence in failing to adopt preventive or remedial measures’ (Report, at [7]). This is, in theory, a less stringent regulatory model than ‘intermediary liability’, under which online platforms would also be liable for others’ content and activity.

Moreover, service providers will not owe a limitless duty of care (Report, at [4]). Instead, the Bill divides providers into various categories, which in turn are subject to specific duties. For example, Category 1 (high-risk and high-reach, user-to-user) services are deemed to be the largest and most risky, so incur additional duties as compared to Categories 2A (all regulated search services) and 2B (the remaining regulated user-to-user services).

Enforcement of such duties lies not with the government, but with the regulatory authority Ofcom, to which the legislation grants oversight and enforcement powers (Response, at [3]).

Central to the Bill’s duty of care model is its typology of online content. Initially, the OSB distinguished illegal from legal material, the latter of which it subdivided into two – producing three content typologies to align with the Bill’s stated objectives:

  1. Illegal content
  2. Legal but harmful content
    1. Content that is harmful to children
    2. Content that is harmful to adults (for Category 1 services)

The Bill originally defined each type of content as follows (Report, at [5]):

  • Illegal content: content whose use / dissemination constitutes a relevant offence
  • Content harmful to children and adults:
    • Designated – content of a type designated in regulations made by the Secretary of State
    • Non-designated – content which fulfils one of the general definitions
      • These apply where the provider has reasonable grounds to believe that there is a material risk of the content having (even indirectly) a significant adverse physical / psychological impact on a child or adult (as applicable) of ordinary sensibilities.

These definitions were essential to the Bill’s regulatory framework, since they directly underpinned the associated risk assessment and safety duties (Report, at [6]). Simply put, how content is defined determines what a provider is required (or not) to do about it. The lower the definitional bar, the more content is subject to regulation – and, potentially, removal.

While illegal content has certainly provoked discussion, controversy has principally surrounded the ‘legal but harmful’ debate. The regulation of such content raises the question: can moderation be justified where the content, by its nature, does not meet the criminal standard?

Of particular interest are the Government’s subsequent amendments to the draft Bill, following the DCMS Report. Despite accepting eight of the Committee’s recommendations, the Government’s Response stated in the legal but harmful context that ‘rather than using the Committee’s proposed reframing, we have made other changes that meet a similar objective’ (at [29]).  

As the Bill stood in March 2022, the Government had amended its position in the following key areas:

  1. Definition of ‘harmful’ – This was simplified under the revised Bill: content had to present a material risk of significant harm to an appreciable number of children/adults (Response, at [30]). The key threshold to engage safety duties was one of ‘priority’ harmful content.
  2. Designation of types of harmful content – As envisaged in the draft Bill, priority content harmful to children and adults was to be designated by the Secretary of State in secondary legislation, following consultation with Ofcom. This would now be subject to the affirmative resolution procedure, to maximise parliamentary scrutiny (Response, at [12], [55]-[57]). The government also published an indicative list of what might be designated under the Bill as priority harmful content.
  3. Non-priority content harmful to adults – The revised Bill removed the obligation upon service providers to address non-priority content harmful to adults. Companies were required only to report its presence to Ofcom (Response, at [31], [40]).

According to a Ministerial Statement released in July 2022, service providers’ safety duties regarding ‘legal but harmful’ content could thus be broken down as follows:

  1. Children – Primary priority content harmful to children
    • Services must prevent children from encountering this type of content altogether
  2. Children – Priority content harmful to children
    • Services must ensure content is age-appropriate for their child users
  3. Adults – Priority content harmful to adults
    • Applies only to Category 1 services
    • These must address such content in their terms and conditions, but may set their own tolerance: this may range from removing such content to allowing it freely.

PART II: CONTEXT

To understand the ‘legal but harmful’ debate more fully, we must situate the OSB in context.

Europe:

In the EU, the recently adopted Digital Services Act (DSA) shares some similarities with the OSB: both provide a legal framework for online platforms’ duties regarding content moderation.

However, Dr Monica Horten has identified the following distinctions:

  • The DSA focuses on regulating illegal rather than merely ‘harmful’ content. In doing so, according to the non-profit Electronic Frontier Foundation, the DSA ‘avoids transforming social networks and other services into censorship tools’ – a position from which the OSB’s broader scope deviates.
  • The DSA unequivocally recognises the right to freedom of expression as guaranteed by Article 11 of the Charter of Fundamental Rights, in accordance with which service providers must act when fulfilling their obligations. The adequacy of free speech protection under the OSB may be less assured, as considered below.
  • The measures also differ in their provision of redress. While the DSA includes both prospective and retrospective procedural safeguards for users who have acted lawfully, the OSB arguably falls short – despite the Government’s assurance that users’ access to courts would not be impeded by the Bill’s ‘super-complaints mechanism’ (Response, at [18]).

It is also worth noting the proposed European Media Freedom Act (EMFA), broadly hailed as a positive step for journalistic pluralism within the EU. Granted, the OSB purports to exclude the press (‘news publishers’) from its content moderation rules. However, uncertainty remains as to the possible regulation of comments sections on newspaper websites, not to mention newspapers’ own activity on social media.

USA:

Across the Atlantic, the US courts show some signs of a legal vacuum developing around over-moderation. Recent attempts by social media users to challenge online content moderation by asserting their First Amendment rights have failed, on the basis that sites such as Facebook and Twitter are not ‘state actors’, but rather private actors not subject to constitutional claims.

As a counterpoint, the recent takeover of Twitter by Elon Musk may illustrate the risks of under-moderation. Concerns are particularly acute in light of Musk’s reinstatement of banned high-profile accounts – despite having stated that he would wait until a new ‘content moderation council’ had convened – and his announcement of a general amnesty. This follows the removal of thousands of Twitter content moderators, and the swift resurgence of hate speech and misinformation.

UK:

Returning to the UK, the wider position of freedom of expression is somewhat ambiguous.

On the one hand, the aforementioned Bill of Rights Bill (BORB) claims to improve safeguards: clause 4 requires judges to give ‘great weight’ to protecting freedom of expression. However, the former Deputy President of the Supreme Court, Lord Mance, has queried how different this is to the ‘particular regard’ provision in s 12(4) of the HRA. Other commentators have questioned whether this presumptive priority of Article 10 may in fact skew the balance in privacy actions, which rely on the presumptive parity between Articles 8 and 10. On either analysis, the BORB’s parallel statutory attempt to enshrine freedom of expression – recalling the OSB’s third objective – is not encouraging.

On the other hand, calls for greater online regulation have gained traction following the inquest into the death of the British teenager Molly Russell. The senior coroner found in October that the 14-year-old had suffered from ‘the negative effects of on-line content’, calling inter alia for ‘the effective regulation of harmful on-line content’, and for legislation ‘to ensure the protection of children’ against its effects. This offers a compelling policy argument in favour of the OSB’s second objective.

This overview of the Bill’s content and context provides the factual basis for a normative analysis of its criticisms and consequences in Parts III and IV.

Naomi Kilcoyne is a Visiting Lecturer in Public Law at City University, having completed her GDL there in 2021-22. She has a particular interest in the interplay between public and private law.