Quotes from caselaw 7: Driver v CPS [2022] EWHC 2500 (KB) – a departure from the starting point of a reasonable expectation of privacy in pre-charge criminal investigations on “special facts”, and low-value data breaches

This case illustrates a set of facts on which the legitimate starting point under English law – a reasonable expectation of privacy in respect of a criminal investigation at the pre-charge stage – can be departed from:

Whilst a reasonable expectation of privacy in relation to a police investigation is the starting point, on the particular and somewhat special facts of this case, I am unable to conclude that by June 2019 such an expectation subsisted in relation to the information that the CPS were considering a charging decision in relation to the Claimant.

– at p.147, per Knowles J.

Note the judge’s reference to the “special facts” of the case. For the special facts on which the case turned in relation to the Article 8 grounds, see p.148-151.

The case concerned the passing of a police file to the CPS and the disclosure of that fact to a third party. The claimant objected to this on data protection and privacy grounds.

Whilst the disclosure did not include the name of the claimant, it was found that “personal data can relate to more than one person and does not have to relate exclusively to one data subject, particularly when the group referred to is small.” – at p.101

In this case, the operation in question, Operation Sheridan, concerned only eight suspects, of whom the claimant was one.

Accordingly, in finding for the claimant on the data protection claim, the judge held that “this data breach was at the lowest end of the spectrum. Taking all matters together in the round, I award the Claimant damages of £250. I will also make a declaration that the Defendant breached the Claimant’s rights under Part 3 of the DPA 2018.” – at p.169

However, as p.147 reflects, the claim for breach of Article 8 was unsuccessful. This was because the judge considered that the case turned on “special facts” in relation to the application of Article 8, meriting departure from the starting point of there being a reasonable expectation of privacy in criminal investigations at the pre-charge stage (in particular, see p.148-151).

Such “special facts” included, in combination: an investigation that had been ongoing for many years; the Claimant’s own waiver of their right to privacy by making details of the case public themselves at the pre-charge stage (including to media outlets); and further proceedings after that initial disclosure, including the Claimant’s arrest in 2017 and the further passing of police files to the CPS in 2018 in relation to the same Operation Sheridan.

This case illustrates how privacy cases decided in light of ZXC fall along a spectrum, allowing for circumstances in which the legitimate starting point ZXC established can be departed from, albeit that the “special facts” on which this case turned are clearly narrow and highly case-specific. It also clarifies which facts are considered to give rise to a data breach “at the lowest end of the spectrum”, and that the low value of such breaches is reflected in nominal damages awards – in this case, £250 and a declaration.

This case was number 2 on my Top 10 Data Protection and Privacy Law Cases 2022.

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts I and II

A number of Bills proposed by recent Conservative governments have sparked controversy among commentators: the Northern Ireland Protocol Bill, the Retained EU Law Bill and the ‘British’ Bill of Rights Bill among them. Taking its place in the rogues’ gallery is the Online Safety Bill (OSB).

Now returning to the House of Commons on 5 December 2022 to finish its Report Stage, the OSB has come some way since the ‘Online Harms’ White Paper published in April 2019. The Bill raises important questions about freedom of expression, online speech regulation and government (over)reach.

This article has four principal components.

Part I lays out the content and objectives of the Bill, highlighting its legislative development and the key issues arising from that. Part II situates the Bill within the wider context of online regulation, considering how recent developments may inflect the Bill’s impact.

This provides the framework for Part III, which addresses the various criticisms that the Bill has received from commentators across the political spectrum. Part IV then examines the broader legal and theoretical consequences of the Bill, posing further questions to be answered. Some conclusions are drawn in Part V.

An appended Part VI briefly outlines the most recent updates to the Bill.

PART I: CONTENT

Much of the OSB’s content was clarified by the Commons Digital, Culture, Media and Sport (DCMS) Committee Report in January 2022, and the Government’s Response to this in March 2022.

As these reports confirmed, the main priority of the OSB is evident from its name change. Now couched in broader terms, the Bill is designed to protect internet users’ online safety by way of three central objectives (Response, at [2]):

  1. To tackle illegal content and activity.
  2. To deliver protection for children online.
  3. To afford adults greater control, while protecting freedom of expression.

To achieve these objectives, the Bill operates on a duty of care model. Under this model, online platforms are liable only for their own conduct: the Bill seeks to hold platforms responsible for systemic ‘lack of diligence in failing to adopt preventive or remedial measures’ (Report, at [7]). This is, in theory, a less stringent regulatory model than ‘intermediary liability’, under which online platforms would also be liable for others’ content and activity.

Moreover, service providers will not owe a limitless duty of care (Report, at [4]). Instead, the Bill divides providers into various categories, which in turn are subject to specific duties. For example, Category 1 (high-risk and high-reach, user-to-user) services are deemed to be the largest and most risky, so incur additional duties as compared to Categories 2A (all regulated search services) and 2B (the remaining regulated user-to-user services).

Enforcement of such duties lies not with the government, but with the regulatory authority Ofcom, to which the legislation grants powers of oversight and enforcement (Response, at [3]).

Central to the Bill’s duty of care model is its typology of online content. Initially, the OSB distinguished illegal from legal material, the latter of which it subdivided into two – producing a three-part typology of content aligned with the Bill’s stated objectives:

  1. Illegal content
  2. Legal but harmful content
    1. Content that is harmful to children
    2. Content that is harmful to adults (for Category 1 services)

The Bill originally defined each type of content as follows (Report, at [5]):

  • Illegal content: content whose use / dissemination constitutes a relevant offence
  • Content harmful to children and adults:
    • Designated – content of a type designated in regulations made by the Secretary of State
    • Non-designated – content which fulfils one of the general definitions
      • These apply where the provider has reasonable grounds to believe that there is a material risk of the content having (even indirectly) a significant adverse physical / psychological impact on a child or adult (as applicable) of ordinary sensibilities.

These definitions were essential to the Bill’s regulatory framework, since they directly underpinned the associated risk assessment and safety duties (Report, at [6]). Simply put, how content is defined determines what a provider is required (or not) to do about it. The lower the definitional bar, the more content is subject to regulation – and, potentially, removal.

While illegal content has certainly provoked discussion, controversy has principally surrounded the ‘legal but harmful’ debate. The regulation of such content raises the question: can moderation be justified where the content, by its nature, does not meet the criminal standard?

Of particular interest are the Government’s subsequent amendments to the draft Bill, following the DCMS Report. Although it accepted eight of the Committee’s recommendations, the Government’s Response stated in the ‘legal but harmful’ context that ‘rather than using the Committee’s proposed reframing, we have made other changes that meet a similar objective’ (at [29]).

As the Bill stood in March 2022, the Government had amended its position in the following key areas:

  1. Definition of ‘harmful’ – This was simplified under the revised Bill: content had to present a material risk of significant harm to an appreciable number of children/adults (Response, at [30]). The key threshold to engage safety duties was one of ‘priority’ harmful content.
  2. Designation of types of harmful content – As envisaged in the draft Bill, priority content harmful to children and adults was to be designated by the Secretary of State in secondary legislation, following consultation with Ofcom. This would now be subject to the affirmative resolution procedure, to maximise parliamentary scrutiny (Response, at [12], [55]-[57]). The government also published an indicative list of what might be designated under the Bill as priority harmful content.
  3. Non-priority content harmful to adults – The revised Bill removed the obligation upon service providers to address non-priority content harmful to adults. Companies were required only to report its presence to Ofcom (Response, at [31], [40]).

According to a Ministerial Statement released in July 2022, service providers’ safety duties regarding ‘legal but harmful’ content could thus be broken down as follows:

  1. Children – Primary priority content harmful to children
    • Services must prevent children from encountering this type of content altogether
  2. Children – Priority content harmful to children
    • Services must ensure content is age-appropriate for their child users
  3. Adults – Priority content harmful to adults
    • Applies only to Category 1 services
    • These must address such content in their terms and conditions, but may set their own tolerance: this may range from removing such content, to allowing it freely.

PART II: CONTEXT

To understand the ‘legal but harmful’ debate more fully, we must situate the OSB in context.

Europe:

In the EU, the recently adopted Digital Services Act (DSA) shares some similarities with the OSB: both provide a legal framework for online platforms’ duties regarding content moderation.

However, Dr Monica Horten has identified the following distinctions:

  • The DSA focuses on regulating illegal rather than merely ‘harmful’ content. In doing so, according to the non-profit Electronic Frontier Foundation, the DSA ‘avoids transforming social networks and other services into censorship tools’ – a position from which the OSB’s broader scope deviates.
  • The DSA unequivocally recognises the right to freedom of expression as guaranteed by Article 11 of the Charter of Fundamental Rights, in accordance with which service providers must act when fulfilling their obligations. The adequacy of free speech protection under the OSB may be less assured, as considered below.
  • The measures also differ in their provision of redress. While the DSA includes both prospective and retrospective procedural safeguards for users who have acted lawfully, the OSB arguably falls short – despite the Government’s assurance that users’ access to courts would not be impeded by the Bill’s ‘super-complaints mechanism’ (Response, at [18]).

It is also worth noting the proposed European Media Freedom Act (EMFA), broadly hailed as a positive step for journalistic pluralism within the EU. Granted, the OSB purports to exclude the press (‘news publishers’) from its content moderation rules. However, uncertainty remains as to the possible regulation of comments sections on newspaper websites, not to mention newspapers’ own activity on social media.

USA:

Across the Atlantic, the US courts show some signs of a legal vacuum developing around over-moderation. Recent attempts by social media users to challenge online content moderation by asserting their First Amendment rights have failed, on the basis that sites such as Facebook and Twitter are not ‘state actors’, but rather private actors not subject to constitutional claims.

As a counterpoint, the recent takeover of Twitter by Elon Musk may illustrate the risks of under-moderation. Concerns are particularly acute in light of Musk’s reinstatement of banned high-profile accounts – despite having stated that he would wait until a new ‘content moderation council’ had convened – and his announcement of a general amnesty. This follows the removal of thousands of Twitter content moderators, and the swift resurgence of hate speech and misinformation.

UK:

Returning to the UK, the wider position of freedom of expression is somewhat ambiguous.

On the one hand, the aforementioned Bill of Rights Bill (BORB) claims to improve safeguards: clause 4 requires judges to give ‘great weight’ to protecting freedom of expression. However, the former Deputy President of the Supreme Court, Lord Mance, has queried how different this is to the ‘particular regard’ provision in s 12(4) of the HRA. Other commentators have questioned whether this presumptive priority of Article 10 may in fact skew the balance in privacy actions, which rely on the presumptive parity between Articles 8 and 10. On either analysis, the BORB’s parallel statutory attempt to enshrine freedom of expression – recalling the OSB’s third objective – is not encouraging.

On the other hand, calls for greater online regulation have gained traction following the inquest into the death of the British teenager Molly Russell. The senior coroner found in October that the 14-year-old had suffered from ‘the negative effects of on-line content’, calling inter alia for ‘the effective regulation of harmful on-line content’, and for legislation ‘to ensure the protection of children’ against its effects. This offers a compelling policy argument in favour of the OSB’s second objective.

This overview of the Bill’s content and context provides the factual basis for a normative analysis of its criticisms and consequences in Parts III and IV.

Naomi Kilcoyne is a Visiting Lecturer in Public Law at City University, having completed her GDL there in 2021-22. She has a particular interest in the interplay between public and private law.

Privacy protection in practice: The coronavirus and healthcare data

TTP extends its best wishes to all those impacted by the coronavirus and hopes that all are safe and well. For those readers based in the UK the NHS coronavirus guidance can be found here and Government guidance here. Stay home, stay safe.

Data protection rights

Personal data, such as your name, likeness, birthday or any other information which can be used to identify you, is highly sensitive.

Protecting your personal data, and bringing actions when it is harvested, used or misused, is foundational to the right to privacy.

Copyright

Copyright under English law is primarily established under the Copyright, Designs and Patents Act 1988. Copyright can extend to protect videos and images taken by you on your devices.

In such circumstances, these videos and images are protected for 70 years from the end of the calendar year in which their creator dies. This can function to protect photographs and videos that you have taken from use by third parties. By enforcing your ownership of copyright you can control who has the right to use and edit the images and/or footage in question. Enforcement usually takes the form of a cease and desist letter notifying the third party of your ownership of the material and asking that they stop usage as soon as possible.


Breach of confidence

Breach of confidence occurs when information shared between parties in circumstances importing a duty of confidence is passed to a third party in breach of that duty. What imposes the duty to protect the information in a breach of confidence case is a pre-existing confidential relationship between the parties.

The case of Coco v A N Clark (Engineers) Ltd involved a claimant looking to bring a new form of moped to the market, parts of which were then sourced from a third party in breach of obligations of confidence. The case established the three elements of the action and highlights the most common scenario in which breach of confidence claims arise: those involving business secrets and negotiations.

In relation to privacy, breach of confidence tends to cover confidential conversations and communications where the nature of the information itself attracts a reasonable expectation of privacy. This may relate to communications with lawyers or medical professionals, for example.


Misuse of private information

The tort of misuse of private information is relatively new and is the primary action protecting privacy rights under English law. To give effect to Article 8 of the European Convention on Human Rights, which enshrines the right to respect for private life, the English common law (Campbell v MGN Ltd [2004] UKHL 22) extended the remit of the breach of confidence action to cover instances absent a pre-existing confidential relationship.

The result was the formation of the new tort of misuse of private information. The starting point in Campbell was that there was no free-standing cause of action for invasion of privacy; nonetheless, Mirror Group Newspapers, in disclosing information regarding Naomi Campbell’s drug addiction, was held to have committed wrongful use of private information.

The elements of the tort are that:

  1. the Claimant must prove that they had a reasonable expectation of privacy in respect of the information at issue; and
  2. competing interests, typically privacy rights and freedom of expression, must be balanced.

A reasonable expectation of privacy typically arises in the context of inherently private information, such as health matters. The assessment takes into account a broad range of factors, from the attributes of the individual themselves to the quality of the information at issue and any previous statements concerning it. As in Campbell, the information at issue may be broken down into categories, from most private to least, to enable the application of this test.

Balancing the competing interests typically involves a consideration of journalistic freedom of expression. This weighs the public interest for and against the disclosure of the information, and also considers the context in which the information is communicated. Any interference with either right must be proportionate and justifiable.

Big brother is watching you, in compliance with the European Convention on Human Rights

Revisiting the case of Big Brother Watch and Others v. the United Kingdom

The operation of the UK’s surveillance services – MI5, MI6, GCHQ and the Metropolitan Police Service – and their interaction with human rights (“Convention rights”) has historically been obscure, so as to safeguard the interests of national security. The specifics of policy and practice in conducting national surveillance, and their interaction with the private lives of citizens, have only come to light since the whistleblowing of Edward Snowden in 2013, catalysing closer scrutiny of their potential to impinge upon democratic freedoms.
