Privacy Law in Practice – An Insight into Data Protection Law as an In-House IT Lawyer – Madeleine Weber

Welcome to Privacy Law in Practice, our series at TPP demystifying what it is like to practise in privacy law.

Have you ever wondered which data protection law issues come up in practice? It obviously depends on the industry and area you work in, but data protection law might be more prevalent than you think.

Continue reading

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Part VI, Updates to the Bill

PART VI: UPDATES

Any commentary upon legislation in progress risks rapidly becoming outdated: an occupational hazard to which this piece is by no means immune.

Ahead of the OSB’s return to Parliament, the Government issued a press release on 28 November 2022 setting out a number of important changes to the amended Bill.

Continue reading

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts III, IV and V

PART III: CRITICISM

In a rare show of national unity, disapproval of the OSB has spanned both ends of the political spectrum. Alongside criticism from the Labour culture minister, Conservative politicians have also weighed in on the ‘legal but harmful’ debate. Thinktanks and non-profit groups have likewise been apprehensive.

Perhaps most headline-grabbing was the censure of the former Supreme Court judge, Lord Sumption, who denounced the OSB in an article in The Spectator, and subsequently on the Law Pod UK podcast.

Continue reading

The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts I and II

A number of Bills proposed by the recent Conservative governments have sparked controversy among commentators: among them, the Northern Ireland Protocol Bill, the Retained EU Law Bill, and the ‘British’ Bill of Rights Bill. Taking its place in the rogues’ gallery is the Online Safety Bill (OSB).

Now returning to the House of Commons on 5 December 2022 to finish its Report Stage, the OSB has come some way since the ‘Online Harms’ White Paper published in April 2019. The Bill raises important questions about freedom of expression, online speech regulation and government (over)reach.

This article has four principal components.

Part I lays out the content and objectives of the Bill, highlighting its legislative development and the key issues arising from that. Part II situates the Bill within the wider context of online regulation, considering how recent developments may inflect the Bill’s impact.

This provides the framework for Part III, which addresses the various criticisms that the Bill has received from commentators across the political spectrum. Part IV then examines the broader legal and theoretical consequences of the Bill, posing further questions to be answered. Some conclusions are drawn in Part V.

An appended Part VI briefly outlines the most recent updates to the Bill.

PART I: CONTENT

Much of the OSB’s content was clarified by the Commons Digital, Culture, Media and Sport (DCMS) Committee Report in January 2022, and the Government’s Response to this in March 2022.

As these reports confirmed, the main priority of the OSB is evident from its name change. Now couched in broader terms, the Bill is designed to protect internet users’ online safety by way of three central objectives (Response, at [2]):

  1. To tackle illegal content and activity.
  2. To deliver protection for children online.
  3. To afford adults greater control, while protecting freedom of expression.

To achieve these objectives, the Bill operates on a duty of care model. Under this model, online platforms are liable only for their own conduct: the Bill seeks to hold platforms responsible for systemic ‘lack of diligence in failing to adopt preventive or remedial measures’ (Report, at [7]). This is, in theory, a less stringent regulatory model than ‘intermediary liability’, under which online platforms would also be liable for others’ content and activity.

Moreover, service providers will not owe a limitless duty of care (Report, at [4]). Instead, the Bill divides providers into various categories, which in turn are subject to specific duties. For example, Category 1 (high-risk and high-reach, user-to-user) services are deemed to be the largest and most risky, so incur additional duties as compared to Categories 2A (all regulated search services) and 2B (the remaining regulated user-to-user services).

Enforcement of such duties lies not with the government, but with the regulatory authority Ofcom, to which the legislation grants powers of oversight and enforcement (Response, at [3]).

Central to the Bill’s duty of care model is its typology of online content. Initially, the OSB distinguished illegal from legal material, the latter of which it subdivided into two – producing three types of content that align with the Bill’s stated objectives:

  1. Illegal content
  2. Legal but harmful content
    1. Content that is harmful to children
    2. Content that is harmful to adults (for Category 1 services)

The Bill originally defined each type of content as follows (Report, at [5]):

  • Illegal content: content whose use / dissemination constitutes a relevant offence
  • Content harmful to children and adults:
    • Designated – content of a type designated in regulations made by the Secretary of State
    • Non-designated – content which fulfils one of the general definitions
      • These apply where the provider has reasonable grounds to believe that there is a material risk of the content having (even indirectly) a significant adverse physical / psychological impact on a child or adult (as applicable) of ordinary sensibilities.

These definitions were essential to the Bill’s regulatory framework, since they directly underpinned the associated risk assessment and safety duties (Report, at [6]). Simply put, how content is defined determines what a provider is required (or not) to do about it. The lower the definitional bar, the more content is subject to regulation – and, potentially, removal.

While illegal content has certainly provoked discussion, controversy has principally surrounded the ‘legal but harmful’ debate. The regulation of such content raises the question: can moderation be justified where the content, by its nature, does not meet the criminal standard?

Of particular interest are the Government’s subsequent amendments to the draft Bill, following the DCMS Report. Despite accepting eight of the Committee’s recommendations, the Government’s Response stated in the legal but harmful context that ‘rather than using the Committee’s proposed reframing, we have made other changes that meet a similar objective’ (at [29]).  

As the Bill stood in March 2022, the Government had amended its position in the following key areas:

  1. Definition of ‘harmful’ – This was simplified under the revised Bill: content had to present a material risk of significant harm to an appreciable number of children/adults (Response, at [30]). The key threshold to engage safety duties was one of ‘priority’ harmful content.
  2. Designation of types of harmful content – As envisaged in the draft Bill, priority content harmful to children and adults was to be designated by the Secretary of State in secondary legislation, following consultation with Ofcom. This would now be subject to the affirmative resolution procedure, to maximise parliamentary scrutiny (Response, at [12], [55]-[57]). The government also published an indicative list of what might be designated under the Bill as priority harmful content.
  3. Non-priority content harmful to adults – The revised Bill removed the obligation upon service providers to address non-priority content harmful to adults. Companies were required only to report its presence to Ofcom (Response, at [31], [40]).

According to a Ministerial Statement released in July 2022, service providers’ safety duties regarding ‘legal but harmful’ content could thus be broken down as follows:

  1. Children – Primary priority content harmful to children
    • Services must prevent children from encountering this type of content altogether
  2. Children – Priority content harmful to children
    • Services must ensure content is age-appropriate for their child users
  3. Adults – Priority content harmful to adults
    • Applies only to Category 1 services
    • These must address such content in their terms and conditions, but may set their own tolerance: this may range from removing such content, to allowing it freely.

PART II: CONTEXT

To understand the ‘legal but harmful’ debate more fully, we must situate the OSB in context.

Europe:

In the EU, the recently adopted Digital Services Act (DSA) shares some similarities with the OSB: both provide a legal framework for online platforms’ duties regarding content moderation.

However, Dr Monica Horten has identified the following distinctions:

  • The DSA focuses on regulating illegal rather than merely ‘harmful’ content. In doing so, according to the non-profit Electronic Frontier Foundation, the DSA ‘avoids transforming social networks and other services into censorship tools’ – a position from which the OSB’s broader scope deviates.
  • The DSA unequivocally recognises the right to freedom of expression as guaranteed by Article 11 of the Charter of Fundamental Rights, in accordance with which service providers must act when fulfilling their obligations. The adequacy of free speech protection under the OSB may be less assured, as considered below.
  • The measures also differ in their provision of redress. While the DSA includes both prospective and retrospective procedural safeguards for users who have acted lawfully, the OSB arguably falls short – despite the Government’s assurance that users’ access to courts would not be impeded by the Bill’s ‘super-complaints mechanism’ (Response, at [18]).

It is also worth noting the proposed European Media Freedom Act (EMFA), broadly hailed as a positive step for journalistic pluralism within the EU. Granted, the OSB purports to exclude the press (‘news publishers’) from its content moderation rules. However, uncertainty remains as to the possible regulation of comments sections on newspaper websites, not to mention newspapers’ own activity on social media.

USA:

Across the Atlantic, the US courts show some signs of a legal vacuum developing around over-moderation. Recent attempts by social media users to challenge online content moderation by asserting their First Amendment rights have failed, on the basis that sites such as Facebook and Twitter are not ‘state actors’, but rather private actors not subject to constitutional claims.

As a counterpoint, the recent takeover of Twitter by Elon Musk may illustrate the risks of under-moderation. Concerns are particularly acute in light of Musk’s reinstatement of banned high-profile accounts – despite having stated that he would wait until a new ‘content moderation council’ had convened – and his announcement of a general amnesty. This follows the removal of thousands of Twitter content moderators, and the swift resurgence of hate speech and misinformation.

UK:

Returning to the UK, the wider position of freedom of expression is somewhat ambiguous.

On the one hand, the aforementioned Bill of Rights Bill (BORB) claims to improve safeguards: clause 4 requires judges to give ‘great weight’ to protecting freedom of expression. However, the former Deputy President of the Supreme Court, Lord Mance, has queried how different this is to the ‘particular regard’ provision in s 12(4) of the HRA. Other commentators have questioned whether this presumptive priority of Article 10 may in fact skew the balance in privacy actions, which rely on the presumptive parity between Articles 8 and 10. On either analysis, the BORB’s parallel statutory attempt to enshrine freedom of expression – recalling the OSB’s third objective – is not encouraging.

On the other hand, calls for greater online regulation have gained traction following the inquest into the death of the British teenager Molly Russell. The senior coroner found in October that the 14-year-old had suffered from ‘the negative effects of on-line content’, calling inter alia for ‘the effective regulation of harmful on-line content’, and for legislation ‘to ensure the protection of children’ against its effects. This offers a compelling policy argument in favour of the OSB’s second objective.

This overview of the Bill’s content and context provides the factual basis for a normative analysis of its criticisms and consequences in Parts III and IV.

Naomi Kilcoyne is a Visiting Lecturer in Public Law at City University, having completed her GDL there in 2021-22. She has a particular interest in the interplay between public and private law.

The Personal Data life cycle: Where to start the analysis? – Vladyslav Tamashev, Privacy lawyer at Legal IT Group

Have you ever thought about the data on your computer? Whether you are a content creator, a programmer, or just a regular user, thousands of different files have been created, downloaded, and altered on your device. But what happens when some of that data becomes useless to you?

Usually, this data will be manually deleted to free up space on your storage device, or it will be wiped during an OS reinstallation. Everything that happens to that data, from its creation or collection until its destruction, is called the data life cycle.

The data life cycle is the sequence of stages that a particular unit of data passes through. The simplified life cycle model has five basic stages: Collection, Processing, Retention, Disclosure, Destruction. In practice, when we talk about the personal data life cycle, this sequence can be dramatically different, depending on the type of information, its usage and origin, company policies, and personal data protection regulations and legislation.
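By way of illustration, the simplified five-stage model might be sketched in code – a minimal sketch in which the LifeCycleStage enumeration mirrors the stages listed above, while the DataRecord class, its fields and its advance method are purely hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum, auto


class LifeCycleStage(Enum):
    """The five basic stages of the simplified data life cycle model."""
    COLLECTION = auto()
    PROCESSING = auto()
    RETENTION = auto()
    DISCLOSURE = auto()
    DESTRUCTION = auto()


@dataclass
class DataRecord:
    """A hypothetical unit of data tracked through its life cycle."""
    name: str
    stage: LifeCycleStage = LifeCycleStage.COLLECTION
    history: list = field(default_factory=list)

    def advance(self, next_stage: LifeCycleStage) -> None:
        """Record the current stage with a timestamp, then move to the next one."""
        self.history.append((datetime.now(), self.stage))
        self.stage = next_stage


# Hypothetical usage: a file is collected, processed, retained, then destroyed.
record = DataRecord(name="newsletter_signup.csv")
record.advance(LifeCycleStage.PROCESSING)
record.advance(LifeCycleStage.RETENTION)
record.advance(LifeCycleStage.DESTRUCTION)
print(record.stage)  # LifeCycleStage.DESTRUCTION
```

As noted above, a real personal data life cycle may skip, repeat or reorder these stages – in this example, no Disclosure takes place before Destruction.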

Continue reading