This article explores the framework for artificial intelligence regulation put in place by the EU AI Act.
Introduction
The race to harness, and some may say exploit, Artificial Intelligence (“AI”) is gathering pace globally. This drive comes in tandem with the rise of public recognition and interest in AI. The European Union (“EU”) is no doubt using its significance in the international economy to mould the digital space. Its ‘Digital Strategy’ has witnessed the introduction of new laws, the latest of which is the EU AI Act.
The following analysis will attempt to show that the EU, and other significant economies, are balancing a myriad of public-policy and economic considerations when regulating AI.
This article will proceed in four parts. Part I provides the context in which the AI Act sits within the EU’s Digital Strategy, offering comparisons with regulatory efforts in the United States. Part II outlines the key provisions of the AI Act. Part III explores the consequences and criticisms of the AI Act. Part IV offers concluding remarks, noting the opportunities presented to the UK either to integrate the provisions of the AI Act or to diverge from the EU regulatory framework. The analysis will demonstrate that the EU AI Act contains shortfalls for the consumer, and that the UK should diverge from its stultifying regulatory framework in favour of a sector-specific, pro-innovation model.
Regulators are walking a legal and economic tightrope with regard to AI. The framework offered in the EU’s AI Act is one option, whereas the UK could diverge by implementing business- and consumer-friendly rules.
Part I: CONTEXT
The EU’s AI Act forms part of its ‘Digital Strategy’, providing a new regulatory framework for the digital and IT sectors of the economy. Notably, the General Data Protection Regulation (“GDPR”), applicable from 2018, was the genesis of an attempt to provide comprehensive digital regulation. The GDPR provided the global benchmark for data protection, and the UK retained this law after exiting the EU.
The Commission President, Ursula von der Leyen, proposed an approach to the “human and ethical” implications of AI development in 2019. Whilst the draft text has been subject to debate and consultations, the EU has passed other laws regarding the digital economy. The Digital Services Act (“DSA”) and the Digital Markets Act (“DMA”) of 2022 are widely viewed as innovative pieces of legislation regulating digital marketplaces as well as online search engines.
The USA, the country most influential in AI development, has a patchier legal framework. Its proposed Algorithmic Accountability Act represents, comparatively, a light-touch approach adapted to existing agency regulatory authority. It is notable that the EU takes a more ‘risk-based approach’ to the regulation of AI and technology. Further, the legislative efforts of the EU, particularly the DSA and DMA, reflect a broader European consensus regarding search engines and e-commerce, aspects that the US Congress has barely debated.
Legislative responses from lawmakers across the globe are microcosms of the wider academic, economic and moral debates regarding AI. For some, AI bore the mark of Cain from the beginning: threatening mass surveillance, economic dislocation, and political destabilisation. For others, it presents a great opportunity, being the vehicle of a new industrial revolution. The more moderate view aims to harness the opportunities of AI whilst managing potential risks. Regarding these potential risks, Mark Coeckelbergh notes the material risks of AI development, whereas Jack Balkin discusses the immaterial risks, such as the impact of AI on free speech.
Part II: CONTENT AND FURTHER CONTEXT
The EU’s AI Act, based principally on Article 114 of the Treaty on the Functioning of the European Union (TFEU), creates a new framework for the use and development of AI in the EU. The Act seeks to create a regulatory framework that situates AI within the pre-existing system of rights and protections for consumers and citizens, involving the protection of free speech, the right to privacy and various rule-of-law considerations.
Defining AI
The AI Act adopts, more or less, the OECD definition of AI. Article 3(1) provides the five requirements of an AI system, namely:
[1] a “machine-based system”;
[2] designed to operate with varying levels of autonomy;
[3] [that] may exhibit adaptiveness after deployment, and that,
[4] for explicit or implicit objectives, infers, from the input it receives, how to generate outputs; [and]
[5] that can influence physical or virtual environments.
The terminology is deliberately broad in order to cast the widest net. These provisions seek to encompass ‘Predictive Artificial Intelligence’ (“PredAI”) and ‘Generative Artificial Intelligence’ (“GAI”). The former is commonly used in various industries to improve efficiencies, whereas the latter is used to create new media by synthesising pre-existing data (OpenAI’s ChatGPT is a good example of GAI).
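To illustrate how broadly these cumulative elements sweep, the short Python sketch below models the Article 3(1) checklist. It is purely illustrative: the class, its field names and the decision to treat adaptiveness (element [3]) as non-mandatory are assumptions made for the example, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical description of a candidate system, mirroring Article 3(1)."""
    machine_based: bool               # [1] a "machine-based system"
    varying_autonomy: bool            # [2] designed to operate with varying levels of autonomy
    adaptive_after_deployment: bool   # [3] may exhibit adaptiveness after deployment
    infers_outputs_from_input: bool   # [4] infers how to generate outputs for explicit/implicit objectives
    influences_environment: bool      # [5] outputs can influence physical or virtual environments

def within_article_3_definition(profile: SystemProfile) -> bool:
    # Element [3] is phrased as "may exhibit", so this sketch does not treat it as mandatory;
    # the remaining elements are read cumulatively.
    return all([
        profile.machine_based,
        profile.varying_autonomy,
        profile.infers_outputs_from_input,
        profile.influences_environment,
    ])

# A recommender system that learns from user behaviour would plainly satisfy the test.
recommender = SystemProfile(True, True, True, True, True)
print(within_article_3_definition(recommender))  # True
```

On this reading, very little modern software involving statistical inference escapes the definition, which is precisely the breadth the drafters intended.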
The legislation provides diverging regulatory frameworks according to the categorisation of the AI system. There is a four-fold taxonomy:
• Prohibited AI practices;
• High-risk AI systems;
• Certain AI systems with a transparency risk;
• Minimal or no risk systems.
1. Prohibited AI Practices
Taking each in turn, prohibited AI practices as stipulated in Article 5(1) of the AI Act concern eight key scenarios. Where an AI system possesses or employs:
• subliminal techniques;
• a capability to exploit a natural person’s vulnerabilities;
• characteristics of ‘social scoring’;
• the classification of persons for crime prediction “based solely on profiling”;
• capability for expanding facial recognition databases;
• an ability to infer emotions;
• an ability to generate biometric databases; or
• real-time biometric identification in publicly accessible places for purposes of law enforcement;
then it will be prohibited in the EU.
Realistic examples of prohibited practices would include a streaming service using advanced data analytics to generate highly personalised advertisements or emotionally charged content that could manipulate the user’s feelings. Similarly, an employer or organisation analysing job applicants’ or employees’ behaviour on the basis of factors unrelated to job performance would fall foul of the prohibition.
2. High-risk AI systems
Second, if a system is deemed high-risk, essential requirements must be met for its use to continue in the EU. Article 6 provides that these requirements apply to AI in two situations: first, where the system is a product, or a safety component of a product, covered by the EU’s harmonised rules on health and safety; second, where the AI system is used in fixed areas listed in the Act, such as the operation of critical infrastructure, education and vocational training, or law enforcement. AI systems under the high-risk category are exempted from additional regulatory requirements where, per Article 6(3), the system does not pose “a significant risk of harm to the health, safety or fundamental rights of natural persons”.
To comply with the requirements of the AI Act, providers of high-risk AI systems must take the following steps:
• comply with standards regarding ‘accuracy, robustness and cybersecurity’ (Article 15) – the provision merely states that the Commission shall co-operate with benchmarking authorities;
• put in place a ‘quality management system’ (Article 17);
• maintain documentation to show compliance (Article 18);
• undergo a conformity assessment to determine whether the additional requirements have been met and conform to the criteria of various, as yet unspecified, ‘harmonised standards’ (Articles 32 and 40);
• in some circumstances, particularly the use of real-time biometric identification in publicly accessible spaces, carry out a fundamental rights impact assessment (FRIA).
3. Transparency risk systems
Third, systems deemed to have a ‘transparency risk’ are required to comply with EU requirements on transparency. Article 50 of the AI Act provides that a user must be ‘informed’ that they are interacting with an AI system and whether the output has been artificially generated or manipulated. The key exception is for output used lawfully for the prevention and detection of criminal offences.
4. Minimal or no risk systems
Fourth, systems that do not fall under the previous three categories are deemed minimal or no risk, attracting no obligations under the AI Act. The guidance suggests merely that deployers could adopt additional, voluntary codes of conduct. The idea, and ideal, behind this provision is that most AI systems will fall under this category. Notable examples of AI systems with minimal or no risk include spam filters and AI-enabled recommender systems. The practical distinction between a minimal- or no-risk AI system and the other categories ostensibly concerns the input values used within its function: the former does not use highly personalised information.
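Before turning to enforcement, the four tiers and their headline consequences can be summarised schematically. The Python sketch below is a simplification for orientation only; the tier labels and one-line summaries compress the detail of the provisions discussed above and are not a substitute for the text of the Act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited AI practice"          # Article 5
    HIGH_RISK = "high-risk AI system"              # Article 6
    TRANSPARENCY = "transparency-risk system"      # Article 50
    MINIMAL = "minimal or no risk"

# Illustrative mapping of each tier to its headline consequence (a paraphrase, not the statutory text).
HEADLINE_OBLIGATIONS = {
    RiskTier.PROHIBITED: "banned from the EU market",
    RiskTier.HIGH_RISK: "conformity assessment, quality management, documentation, FRIA where required",
    RiskTier.TRANSPARENCY: "users must be informed they are dealing with AI or AI-generated content",
    RiskTier.MINIMAL: "no mandatory obligations; voluntary codes of conduct",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the one-line summary of what the tier entails."""
    return HEADLINE_OBLIGATIONS[tier]

print(headline_obligation(RiskTier.TRANSPARENCY))
```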
Enforcement mechanisms
Aside from the key provisions, the AI Act provides a new method of enforcement, a notable development from the GDPR, which was criticised for its ostensibly feeble enforcement. At Union level, the AI Office shall develop expertise and capability in the field, whilst the EU AI Board will comprise representatives of each Member State to enforce the AI Act. Further, each Member State must establish a market surveillance authority and a notifying authority per Article 70(1).
Sanctions
Under Article 99, the AI Act provides a tiered system of penalties for infringements of its provisions. Though obliged to take account of the AI Act’s guidelines, each Member State will establish a penalty system, the latest date of notification for which is 2 August 2026.
The potential maximum penalties contained in the AI Act are higher than in other EU laws forming part of the Digital Strategy. Per Article 99(3), the most severe penalty is an administrative fine of up to €35 million or seven per cent of total worldwide annual turnover for the preceding financial year, whichever is higher; by comparison, the GDPR’s most severe fines are capped at €20 million or four per cent of worldwide annual turnover.
Broadly, the AI Act provides tiers for the maximum administrative fine for particular infringements:
• For prohibited AI practices, the most severe fine may be administered;
• A maximum of €15 million or three per cent of annual turnover for breaches of obligations of providers (Article 16); authorised representatives (Article 25); importers (Article 26); distributors (Article 27); deployers (Article 29(1)-(6)); and notified bodies (Articles 33 and 34);
• €7.5 million or one per cent of annual turnover for the supply of incorrect or misleading information to notified bodies and national competent authorities in response to a request;
• For EU institutions, agencies and bodies, up to €1.5 million for non-compliance with the prohibited practices, or €750,000 for non-compliance with any other requirements or obligations under the Act.
The key criterion binding all Member States in the design of their domestic penalty systems is the need for them to be “effective, proportionate and dissuasive”, per Article 99(1).
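As a rough arithmetic illustration of how the fixed caps interact with the turnover-based ceilings, the sketch below computes the ‘whichever is higher’ maximum for a given tier. The function name and the example turnover figures are hypothetical; only the caps and percentages are taken from Article 99.

```python
def max_administrative_fine(fixed_cap_eur: float,
                            turnover_fraction: float,
                            worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for a penalty tier: the higher of the fixed cap and the stated
    fraction of worldwide annual turnover. Actual fines are set case by case by
    Member State authorities and may be far lower."""
    return max(fixed_cap_eur, turnover_fraction * worldwide_annual_turnover_eur)

# Top tier (prohibited practices): EUR 35m or 7% of turnover, whichever is higher.
print(max_administrative_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# Middle tier: EUR 15m or 3% of turnover - here the fixed cap is the higher figure.
print(max_administrative_fine(15_000_000, 0.03, 200_000_000))    # 15000000.0
```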
Part III: CONSEQUENCES AND CRITICISM
There are certain aspects of the AI Act that put the consumer and the everyman front and centre.
The category of prohibited AI practices sets a benchmark for governments and political organisations globally: the prohibited list imposes obligations on the state as to how it may deploy AI on its citizens. The AI Act seeks to respond to macro-level considerations for society, particularly security and surveillance. The additional obligations on high-risk systems and the transparency obligations for systems posing a transparency risk seek to curtail potential abuses against the user. The regulations provide an important correction to the asymmetry between the developer and the user. Furthermore, the importance of complying with these obligations is reinforced by the heavy fines for violating the provisions of the AI Act.
There are, however, pertinent criticisms that one can make of the AI Act.
First, there is a lacuna in the legislation regarding biometric systems: whilst the AI Act bans the use of real-time biometric identification for law enforcement, it does not prevent EU Member States from selling biometric data to oppressive regimes, nor does it ban the use of AI in post-factum (retrospective) biometric systems.
Second, the transparency requirements for AI systems are seen as relatively cautious, particularly from an online safety point of view. The Oxford legal scholar Sandra Wachter suggests that the AI Act refrains from placing obligations on deployers of ‘high-risk’ systems; rather, most oversight responsibility is placed on market authorities and the large-scale providers. Further, she notes that the requirements for systems posing a ‘transparency risk’ are relatively light-touch, particularly as GAI systems can propagate unsavoury content.
Third, the exact scope of harmonised technical standards for AI product liability remains unclear. This is a pertinent criticism given that there are no guarantees as to what ‘standard’ will be set. The AI Act merely provides that the Commission will issue requests for standardisation in accordance with Regulation (EU) No 1025/2012 (Article 40, AI Act); this provides that European Standardisation Organisations, AI providers and businesses agree technical specifications. Currently, the EU is formulating the contents of the Product Liability Directive and the Artificial Intelligence Liability Directive. Hence, the scope and substance of the standards to be adopted may take years to clarify. Substantively, this provision reflects an increasing trend in the EU to delegate regulatory decision-making. Unless the Commission were to provide common specifications, private bodies will, in all likelihood, set the standards.
Fourth, there is the broader criticism of the EU’s Digital Strategy: that it focuses heavily on potential risks and far less on innovation. The EU has taken the decision to create a distinctive regulatory approach to AI: a tiered regulatory framework to combat perceived material and monetary damage from AI. As stipulated above, the EU has employed macro-level policy considerations as part of a broader Digital Strategy. Investment and deployment decisions will be taken as a result of the EU’s approach; as noted by Nitzberg and Zysman, early regulatory frameworks create enduring trajectories of technological and political-economy development. There are fears that the AI Act renders the EU uncompetitive in the race to develop AI, compared with laissez-faire jurisdictions like the UAE.
Aside from the holistic approach of the AI Act, a sector-specific regulatory approach could be an alternative, balancing the opportunities for innovation with the need to protect consumers and their right to privacy. By developing a corpus of legislation that specifies regulations for different sectors, each sector may establish codes of conduct, ethical principles and technical standards. In the UK, for instance, the different regulatory objectives of healthcare regulators and the Financial Conduct Authority emerge in their approaches to AI: the former are keen to implement AI, ethically, in the diagnostic process, whereas the latter wishes to combat the potential for disadvantaging consumers, given the asymmetric use of data between sellers and consumers. The disadvantage of a piecemeal regulatory approach is that it leaves omissions and discrepancies in the legal framework, potentially undermining the objectives of regulators. A hybrid model between the two regulatory approaches is perhaps preferable.
Notably, the AI Act does not appear, prima facie, to be harmonised with the obligations of the GDPR. To name a few instances:
• Whereas under the GDPR machine-learning systems processing personal data are required to undergo a data protection impact assessment (DPIA), the AI Act requires that only operators of high-risk systems conduct the more onerous FRIA;
• Unlike the GDPR, where consumers (‘data subjects’) have explicit enforcement mechanisms and rights regarding the control of their data, AI users are not provided equivalent rights under the AI Act;
• Perhaps most strikingly, the AI Act does not contain explicit obligations for data minimisation or requirements for assessing high-risk AI systems with a view to enhancing the rights of users.
Factually, the obligations of data protection will likely overlap with the use of AI, whose systems have unprecedented potential to gather and acquire data. Given the existence of the GDPR, it might be assumed that many GAI platforms have developed in accordance with it. Brown, Truby and Ibrahim note, however, that the inherent properties of many AI systems contradict the principles of the GDPR, specifically the use of big data analytics and the GDPR’s requirement for consent. To articulate and enforce the obligations under the AI Act and the GDPR harmoniously, regulators will need to rethink the enforcement of the GDPR.
At its best, the law could be used to reap labour-market benefits and assess risks. By drawing a line around the invasive aspects of GAI, the AI Act could reinforce particular liberties that are vulnerable, such as free speech, data protection and the right to private life.
Part IV: THE UK AND CONCLUSION
The UK, now outside the EU, has no obligation to implement the provisions of the AI Act into domestic law. So far, the UK government’s stated approach is explicitly pro-innovation, with light-touch and sector-specific regulations. At the time of writing, there is no intention to introduce a general AI Act into Parliament in the near future.
A light-touch approach has its benefits: preventing the accretion of unnecessary rules in order to provide the climate for innovation. It would, however, see important macro policy considerations neglected, including the use of data from biometric systems, security, surveillance and the non-material harms of AI content. Further, it is unlikely that the UK would want to diverge too greatly from a common framework; the EU AI Act will provide a blueprint for more detailed regulations.
If the UK government wishes to develop a distinct regulatory framework for AI, it has several challenges going forward:
1) Determining the exact balance in its priorities; and
2) Walking the tightrope between a pro-innovation approach and risk minimisation.
The above analysis shows that the utility and efficacy of AI regulation are determined by a complex series of policy considerations. The UK government could improve on the omissions of the AI Act to maximise the benefits of AI for consumers; it has the freedom to create an AI strategy with regulations addressing sector-specific concerns. This appears to be the best way to maximise the benefits of AI, whilst placing consumers at the forefront. The EU AI Act, in its current format, has not struck the right balance for all stakeholders concerned.
Introductory Articles
The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Part VI, Updates to the Bill
PART VI: UPDATES
Any commentary upon legislation in progress risks rapidly becoming outdated: an occupational hazard to which this piece is by no means immune.
Ahead of the OSB’s return to Parliament, the Government issued a press release on 28 November 2022 noting a number of important developments to the amended Bill.
The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts III, IV and V
PART III: CRITICISM
In a rare show of national unity, disapproval of the OSB has spanned both ends of the political spectrum. Alongside criticism from the Labour culture minister, Conservative politicians have also weighed in on the ‘legal but harmful’ debate. Thinktanks and non-profit groups have likewise been apprehensive.
Perhaps most headline-grabbing was the censure of the former Supreme Court judge, Lord Sumption, who denounced the OSB in an article in The Spectator, and subsequently on the Law Pod UK podcast.
The Online Safety Bill: Everything in Moderation? – Naomi Kilcoyne – Parts I and II
A number of Bills proposed by the recent Conservative governments have sparked controversy among commentators: among them, the Northern Ireland Protocol Bill, the Retained EU Law Bill, and the ‘British’ Bill of Rights Bill. Taking its place in the rogues’ gallery is the Online Safety Bill (OSB).
Now returning to the House of Commons on 5 December 2022 to finish its Report Stage, the OSB has come some way since the ‘Online Harms’ White Paper published in April 2019. The Bill raises important questions about freedom of expression, online speech regulation and government (over)reach.
This article has four principal components.
Part I lays out the content and objectives of the Bill, highlighting its legislative development and the key issues arising from that. Part II situates the Bill within the wider context of online regulation, considering how recent developments may inflect the Bill’s impact.
This provides the framework for Part III, which addresses the various criticisms that the Bill has received from commentators across the political spectrum. Part IV then examines the broader legal and theoretical consequences of the Bill, posing further questions to be answered. Some conclusions are drawn in Part V.
An appended Part VI briefly outlines the most recent updates to the Bill.
PART I: CONTENT
Much of the OSB’s content was clarified by the Commons Digital, Culture, Media and Sport (DCMS) Committee Report in January 2022, and the Government’s Response to this in March 2022.
As these reports confirmed, the main priority of the OSB is evident from its name change. Now couched in broader terms, the Bill is designed to protect internet users’ online safety by way of three central objectives (Response, at [2]):
- To tackle illegal content and activity.
- To deliver protection for children online.
- To afford adults greater control, while protecting freedom of expression.
To achieve these objectives, the Bill operates on a duty of care model. Under this model, online platforms are liable only for their own conduct: the Bill seeks to hold platforms responsible for systemic ‘lack of diligence in failing to adopt preventive or remedial measures’ (Report, at [7]). This is, in theory, a less stringent regulatory model than ‘intermediary liability’, under which online platforms would also be liable for others’ content and activity.
Moreover, service providers will not owe a limitless duty of care (Report, at [4]). Instead, the Bill divides providers into various categories, which in turn are subject to specific duties. For example, Category 1 (high-risk and high-reach, user-to-user) services are deemed to be the largest and most risky, so incur additional duties as compared to Categories 2A (all regulated search services) and 2B (the remaining regulated user-to-user services).
Enforcement of such duties lies not with the government, but with the regulatory authority Ofcom, to which the legislation grants overseeing and enforcing powers (Response, at [3]).
Central to the Bill’s duty of care model is its typology of online content. Initially, the OSB distinguished illegal from legal material, the latter of which it subdivided into two – producing three content typologies to align with the Bill’s stated objectives:
- Illegal content
- Legal but harmful content
- Content that is harmful to children
- Content that is harmful to adults (for Category 1 services)
The Bill originally defined each type of content as follows (Report, at [5]):
- Illegal content: content whose use / dissemination constitutes a relevant offence
- Content harmful to children and adults:
- Designated – content of a type designated in regulations made by the Secretary of State
- Non-designated – content which fulfils one of the general definitions
- These apply where the provider has reasonable grounds to believe that there is a material risk of the content having (even indirectly) a significant adverse physical / psychological impact on a child or adult (as applicable) of ordinary sensibilities.
These definitions were essential to the Bill’s regulatory framework, since they directly underpinned the associated risk assessment and safety duties (Report, at [6]). Simply put, how content is defined determines what a provider is required (or not) to do about it. The lower the definitional bar, the more content is subject to regulation – and, potentially, removal.
While illegal content has certainly provoked discussion, controversy has principally surrounded the ‘legal but harmful’ debate. The regulation of such content raises the question: can moderation be justified where the content, by its nature, does not meet the criminal standard?
Of particular interest are the Government’s subsequent amendments to the draft Bill, following the DCMS Report. Despite accepting eight of the Committee’s recommendations, the Government’s Response stated in the legal but harmful context that ‘rather than using the Committee’s proposed reframing, we have made other changes that meet a similar objective’ (at [29]).
As the Bill stood in March 2022, the Government had amended its position in the following key areas:
- Definition of ‘harmful’ – This was simplified under the revised Bill: content had to present a material risk of significant harm to an appreciable number of children/adults (Response, at [30]). The key threshold to engage safety duties was one of ‘priority’ harmful content.
- Designation of types of harmful content – As envisaged in the draft Bill, priority content harmful to children and adults was to be designated by the Secretary of State in secondary legislation, following consultation with Ofcom. This would now be subject to the affirmative resolution procedure, to maximise parliamentary scrutiny (Response, at [12], [55]-[57]). The government also published an indicative list of what might be designated under the Bill as priority harmful content.
- Non-priority content harmful to adults – The revised Bill removed the obligation upon service providers to address non-priority content harmful to adults. Companies were required only to report its presence to Ofcom (Response, at [31], [40]).
According to a Ministerial Statement released in July 2022, service providers’ safety duties regarding ‘legal but harmful’ content could thus be broken down as follows (summarised schematically in the sketch after this list):
- Children – Primary priority content harmful to children
- Services must prevent children from encountering this type of content altogether
- Children – Priority content harmful to children
- Services must ensure content is age-appropriate for their child users
- Adults – Priority content harmful to adults
- Applies only to Category 1 services
- These must address such content in their terms and conditions, but may set their own tolerance: this may range from removing such content, to allowing it freely.
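The same position can be expressed, purely schematically, as a mapping from content category to the duty it attracts. The sketch below paraphrases the Ministerial Statement; the category labels and duty descriptions are shorthand for the purposes of illustration rather than statutory language.

```python
# Illustrative summary of the July 2022 position on 'legal but harmful' content.
SAFETY_DUTIES = {
    "primary priority content harmful to children":
        "prevent children from encountering the content altogether",
    "priority content harmful to children":
        "ensure the content is age-appropriate for child users",
    "priority content harmful to adults (Category 1 services only)":
        "address the content in terms and conditions; tolerance is set by the service",
}

for category, duty in SAFETY_DUTIES.items():
    print(f"{category}: {duty}")
```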
PART II: CONTEXT
To understand the ‘legal but harmful’ debate more fully, we must situate the OSB in context.
Europe:
In the EU, the recently adopted Digital Services Act (DSA) shares some similarities with the OSB: both provide a legal framework for online platforms’ duties regarding content moderation.
However, Dr Monica Horten has identified the following distinctions:
- The DSA focuses on regulating illegal rather than merely ‘harmful’ content. In doing so, according to the non-profit Electronic Frontier Foundation, the DSA ‘avoids transforming social networks and other services into censorship tools’ – a position from which the OSB’s broader scope deviates.
- The DSA unequivocally recognises the right to freedom of expression as guaranteed by Article 11 of the Charter of Fundamental Rights, in accordance with which service providers must act when fulfilling their obligations. The adequacy of free speech protection under the OSB may be less assured, as considered below.
- The measures also differ in their provision of redress. While the DSA includes both prospective and retrospective procedural safeguards for users who have acted lawfully, the OSB arguably falls short – despite the Government’s assurance that users’ access to courts would not be impeded by the Bill’s ‘super-complaints mechanism’ (Response, at [18]).
It is also worth noting the proposed European Media Freedom Act (EMFA), broadly hailed as a positive step for journalistic pluralism within the EU. Granted, the OSB purports to exclude the press (‘news publishers’) from its content moderation rules. However, uncertainty remains as to the possible regulation of comments sections on newspaper websites, not to mention newspapers’ own activity on social media.
USA:
Across the Atlantic, the US courts show some signs of a legal vacuum developing around over-moderation. Recent attempts by social media users to challenge online content moderation by asserting their First Amendment rights have failed, on the basis that sites such as Facebook and Twitter are not ‘state actors’, but rather private actors not subject to constitutional claims.
As a counterpoint, the recent takeover of Twitter by Elon Musk may illustrate the risks of under-moderation. Concerns are particularly acute in light of Musk’s reinstatement of banned high-profile accounts – having stated he would wait until a new ‘content moderation council’ had convened – and his announcement of a general amnesty. This follows the removal of thousands of Twitter content moderators, and swift resurgence of hate speech and misinformation.
UK:
Returning to the UK, the wider position of freedom of expression is somewhat ambiguous.
On the one hand, the aforementioned Bill of Rights Bill (BORB) claims to improve safeguards: clause 4 requires judges to give ‘great weight’ to protecting freedom of expression. However, the former Deputy President of the Supreme Court, Lord Mance, has queried how different this is to the ‘particular regard’ provision in s 12(4) of the HRA. Other commentators have questioned whether this presumptive priority of Article 10 may in fact skew the balance in privacy actions, which rely on the presumptive parity between Articles 8 and 10. On either analysis, the BORB’s parallel statutory attempt to enshrine freedom of expression – recalling the OSB’s third objective – is not encouraging.
On the other hand, calls for greater online regulation have gained traction following the inquest into the death of the British teenager Molly Russell. The senior coroner found in October that the 14-year-old had suffered from ‘the negative effects of on-line content’, calling inter alia for ‘the effective regulation of harmful on-line content’, and for legislation ‘to ensure the protection of children’ against its effects. This offers a compelling policy argument in favour of the OSB’s second objective.
This overview of the Bill’s content and context provides the factual basis for a normative analysis of its criticisms and consequences in Parts III and IV.
Naomi Kilcoyne is a Visiting Lecturer in Public Law at City University, having completed her GDL there in 2021-22. She has a particular interest in the interplay between public and private law.
The Personal Data life cycle: Where to start the analysis? – Vladyslav Tamashev, Privacy lawyer at Legal IT Group
Have you ever thought about the data on your computer? It doesn’t matter whether you are a content creator, a programmer, or just a regular user: thousands of different files have been created, downloaded, and altered on your device. But what happens when some of that data becomes useless to you?
Usually, this data will be manually deleted to free up space on your storage device, or it will be wiped during an OS reinstallation. Everything that happens to that data, from its creation or collection until its destruction, is called the data life cycle.
The data life cycle is the sequence of stages that a particular unit of data goes through. The simplified life cycle model has five basic stages: Collection, Processing, Retention, Disclosure, Destruction. In practice, when we talk about the personal data life cycle, this sequence can be dramatically different, depending on the type of information, its usage, origin, company policies, and personal data protection regulations and legislation.
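For readers who think in code, the five basic stages can be written down as a simple enumeration. This is only a sketch of the simplified model above; real personal data life cycles, as noted, vary with the type of information, its origin and the applicable rules.

```python
from enum import Enum, auto

class LifecycleStage(Enum):
    """The five basic stages of the simplified data life cycle."""
    COLLECTION = auto()
    PROCESSING = auto()
    RETENTION = auto()
    DISCLOSURE = auto()
    DESTRUCTION = auto()

# A hypothetical record of what has happened so far to one unit of personal data.
history = [LifecycleStage.COLLECTION, LifecycleStage.PROCESSING, LifecycleStage.RETENTION]
print(LifecycleStage.DESTRUCTION in history)  # False: the data has not yet been destroyed
```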
Copyright
Copyright under English law is primarily established under the Copyright, Designs and Patents Act 1988. Copyright can extend to protect videos and images taken by you on your devices.
In such circumstances, these videos and images are protected for 70 years from the end of the calendar year in which the creator dies. This can function to protect photographs and videos that you have taken from use by third parties. By enforcing your copyright ownership you can control who has the right to use and edit the images and/or footage in question. This is usually done in the form of a cease and desist letter notifying the third party of your ownership of the material whilst asking that they stop usage as soon as possible.
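As a rough illustration of the duration rule, the sketch below computes the year at the end of which protection would lapse under the general 70-year rule. It is a simplification that ignores the special rules applying to certain categories of work (films, works of unknown authorship and so on).

```python
def uk_copyright_expiry_year(year_of_creators_death: int) -> int:
    """Under the general CDPA 1988 rule, copyright runs for 70 years from the end
    of the calendar year of the creator's death; protection lapses at the end of
    the year returned here. Illustrative only."""
    return year_of_creators_death + 70

print(uk_copyright_expiry_year(1975))  # 2045: protected until the end of 2045
```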

Revisiting the right to be forgotten: the NT1 and NT2 case
The right to be forgotten, or right to erasure, under data protection legislation, as established in the Google Spain case, allows significant protection of information regarding the individual. In this post, we consider the seminal case of NT1 and NT2, which is illustrative of this fact.
Breach of confidence
Breach of confidence occurs when confidential information, shared between parties in circumstances importing a duty of confidence, is disclosed to a third party in breach of that duty. What imposes the duty to protect the information in a breach of confidence case is a pre-existing confidential relationship between the parties.
The case of Coco v A.N. Clark involved a claimant looking to bring a new form of moped to the market, parts of which were then sourced from a third party in breach of obligations of confidence. This case underpinned the three elements of the tort and highlights the most common scenario in which breach of confidence claims arise: those involving business secrets and negotiations.
In relation to privacy, breach of confidence tends to cover confidential conversations and communications where the nature of the information itself attracts a reasonable expectation of privacy. This may relate to communications with lawyers or medical professionals, for example.

Defamation
Defamation seeks to protect the individuals’ reputation from false statements which harm or may harm it. Slander and libel (more permanent forms of communication) refer to a statement publicized to a third party which has or is likely to cause serious harm to their reputation.
Defamation is a construct of the common law, built up over a series of legal cases. Defamation cases have been held to extend to social media, such as the tweets made by Katie Hopkins about the food writer Jack Monroe.
Thornton v Telegraph Media Group Ltd [2011] EWHC 159 (QB) highlighted that defamation claims often cross the threshold to engage Article 8 privacy rights. In particular, the European Court of Human Rights has ruled that:
“In order for Article 8 to come into play, however, an attack on a person’s reputation must attain a certain level of seriousness and in a manner causing prejudice to personal enjoyment of the right to respect for private life…”
Claimants have to show the statement at issue has caused or is likely to cause serious harm to their reputation per s.1 Defamation Act 2013. This is typically shown via evidence such as the circulation, subscribers and views of the statement at issue.
The defenses available to defamation are:
- Truth: That the statement itself was substantially true.
- Honest opinion: That the statement was one of opinion and that an honest person could have reasonably held that opinion.
- Public interest: That the matter was one which was in the public interest and the publisher of the statement reasonably believed it to be so.
- Privilege: This can be absolute (such as a Parliamentary statement) or qualified (e.g. job references). Qualified privilege does not protect the publisher of a statement where it was made maliciously.
Passing off
Passing off is typically used to protect a person’s name or image, which has attracted goodwill as a business commodity. There are three well-established elements of passing off as stated in the case of Reckitt & Colman Products Ltd v Borden Inc & Ors [1990] RPC 341:

- Goodwill or reputation in the mind of the public attached to goods or services;
- The defendant misrepresented that their goods or services are that of the claimant’s; and
- The claimant suffered or is likely to suffer damage due to the erroneous belief in the mind of the public that the defendant’s goods are the claimant’s.
A case which illustrates this is that of Fenty v Arcadia [2015] EWCA Civ 3, a case involving Rihanna bringing a passing off action against Topshop. The action arose from Topshop’s unauthorized use of an image of Rihanna on a line of t-shirts. It was first considered that:
“registered trade marks aside, no-one can claim monopoly rights in a word or a name. Conversely, however, no-one may, by the use of any word or name, or in any other way, represent his goods or services as being the goods or services of another person and so cause that other person injury to his goodwill and so damage him in his business” – p.34
However, it was concluded that all elements of the tort were made out by the claimant. Rihanna had a marked presence in the fashion industry and had generated significant goodwill. By using her image on its t-shirts, Topshop created a likelihood of confusion among customers that the t-shirts were endorsed by Rihanna herself. They were not. It was therefore considered that Rihanna had suffered damage due to the unauthorized use of her image. This was despite the fact that there is no standalone right to protect one’s image at law.
The Fenty case is illustrative of how passing off can be used to protect elements of the person which are inherently private identifying factors, foremost among them a person’s likeness and name.
It should be noted that the rationale for protection in passing off cases is the protection of the goodwill which attaches to these elements of the person. The nature of a passing-off action is, therefore, more akin to other economic torts such as malicious falsehood. Notwithstanding this, the propensity for passing off actions to be used to protect elements of the persona that attract an inherently private character is significant.