In The Secret Life of Copyright, copyright law meets Black Lives Matter and #MeToo in a provocative examination of how our legal regime governing creative production unexpectedly perpetuates inequalities along racial, gender, and socioeconomic lines while undermining progress in the arts. Drawing on numerous case studies – Harvard’s slave daguerreotypes, celebrity sex tapes, famous Wall Street statues, beloved musicals, and dictator copyrights – the book argues that, despite their purported neutrality, key rules governing copyrights – from the authorship, derivative rights, and fair use doctrines to copyright’s First Amendment immunity – systematically disadvantage individuals from traditionally marginalized communities. Since laws regulating the use of creative content increasingly mediate participation and privilege in the digital world, The Secret Life of Copyright provides a template for a more robust copyright system that better addresses egalitarian concerns and serves the interests of creativity.
Artificial Intelligence (AI) can collect Big Data on users without their awareness. It can identify a user’s cognitive profile and manipulate them into predetermined choices by exploiting their cognitive biases and decision-making processes. A Large Generative Artificial Intelligence Model (LGAIM) can enhance the possibility of computational manipulation: it can make a user see and hear what is most likely to affect their decision-making, creating the perfect text accompanied by perfect images and sounds on the perfect website. Multiple international, regional and national bodies have recognised the existence of computational manipulation and the threat its use poses to fundamental rights. The EU has even taken the first steps towards protecting individuals against computational manipulation. This paper argues that while manipulative AIs that rely on deception are addressed by existing EU legislation, some forms of computational manipulation, particularly where an LGAIM is used in the manipulative process, still fall outside the shield of EU law. Existing EU legislation therefore needs to be redrafted to cover every aspect of computational manipulation.
The rise in the use of AI in most key areas of business, from sales to compliance to financial analysis, means that even the highest levels of corporate governance will be affected, and that corporate leaders are duty-bound to manage both the responsible development and the legal and ethical use of AI. This transformation will directly impact the legal and ethical duties and best practices of those tasked with setting the ‘tone at the top’ and accountable for the firm’s success. Directors and officers will have to ask themselves to what extent AI tools should, or must, be used both in strategic business decision-making and in monitoring processes. Here we look at a number of issues that we believe will arise from the greater use of generative AI. We consider what top management should be doing to ensure that all such AI tools used by the firm are safe and fit for purpose, especially with a view to avoiding potential negative externalities. In the end, given the challenges of AI use, the human component of top corporate decision-making will be put to the test: to thread the needle of AI use prudently and to ensure that the technology serves corporations and their human stakeholders instead of the other way around.
Generative AI offers a new lever for re-enchanting public administration, with the potential to mark a turning point in the project to ‘reinvent government’ through technology. Its deployment and use in public administration raise the question of its regulation. Adopting an empirical perspective, this chapter analyses how the United States of America and the European Union have regulated the deployment and use of this technology within their administrations. This transatlantic perspective is justified by the fact that these two entities have been quick to regulate the deployment and use of this technology within their administrations. They are also considered emblematic actors in the regulation of AI. Finally, they share a common basis in public law, namely their adherence to the rule of law. In this context, the chapter highlights four approaches to regulating the development and use of generative AI in public administration: command and control, the risk-based approach, the experimental approach, and the management-based approach. It also highlights the main legal issues raised by the use of such technology in public administration and the key administrative principles and values that need to be safeguarded.
This chapter examines the transformative effects of generative AI (GenAI) on competition law, exploring how GenAI challenges traditional business models and antitrust regulations. The evolving digital economy, characterised by advances in deep learning and foundation models, presents unique regulatory challenges due to market power concentration and data control. This chapter analyses the approaches adopted by the European Union, United States, and United Kingdom to regulate the GenAI ecosystem, including recent legislation such as the EU Digital Markets Act, the AI Act, and the US Executive Order on AI. It also considers foundation models’ reliance on key resources, such as data, computing power, and human expertise, which shape competitive dynamics across the AI market. Challenges at different levels, including infrastructure, data, and applications, are investigated, with a focus on their implications for fair competition and market access. The chapter concludes by offering insights into the balance needed between fostering innovation and mitigating the risks of monopolisation, ensuring that GenAI contributes to a competitive and inclusive market environment.
Generative AI has catapulted into the legal debate through the popular applications ChatGPT, Bard, DALL·E, and others. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly within the ambit of the AI Act, it is imperative to acknowledge that generative AI also engenders substantial tension with data protection laws. The example of generative AI puts a finger on the sore spot of the contentious relationship between data protection law and machine learning: the unresolved conflict between the protection of individuals, rooted in fundamental data protection rights, and the massive amounts of data required for machine learning, which render data processing nearly universal. In the case of LLMs, whose training data is scraped from nearly the whole internet, training inevitably relies on, and possibly even creates, personal data under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection, and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this paper undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
It is well known that, to be properly valued, high-quality products must be distinguishable from poor-quality ones. When they are not, indistinguishability creates an information asymmetry that, in turn, leads to a lemons problem, defined as the market erosion of high-quality products. Although the valuation of generative artificial intelligence (GenAI) systems’ outputs is still largely unknown, preliminary studies show that, all other things being equal, human-made works are valued significantly higher than machine-enabled ones. Given that these works are often indistinguishable, all the conditions for a lemons problem are present. Against that background, this Chapter proposes a Darwinian reading to highlight how GenAI could lead to “unnatural selection” in the art market: a competition between human-made and machine-enabled artworks that is not decided on the merits but distorted by asymmetrical information. This Chapter proposes solutions ranging from top-down rules of origin to bottom-up signalling. It argues that both approaches can be employed in copyright law to identify where the human author has exercised the free and creative choices required to meet the criterion of originality, and thus copyrightability.
While generative AI enables the creation of diverse content, including images, videos, text, and music, it also raises significant ethical and societal concerns, such as bias, transparency, accountability, and privacy. Therefore, it is crucial to ensure that AI systems are both trustworthy and fair, optimising their benefits while minimising potential harm. To explore the importance of fostering trustworthiness in the development of generative AI, this chapter delves into the ethical implications of AI-generated content, the challenges posed by bias and discrimination, and the importance of transparency and accountability in AI development. It proposes six guiding principles for creating ethical, safe, and trustworthy AI systems. Furthermore, legal perspectives are examined to highlight how regulations can shape responsible generative AI development. Ultimately, the chapter underscores the need for responsible innovation that balances technological advancement with societal values, preparing us to navigate future challenges in the evolving AI landscape.
The rapid development of generative artificial intelligence (AI) systems, particularly those fuelled by increasingly advanced large language models (LLMs), has raised concerns among policymakers globally about their potential risks. In July 2023, Chinese regulators enacted the Interim Measures for the Management of Generative AI Services (“the Measures”). The Measures aim to mitigate various risks associated with public-facing generative AI services, particularly those concerning content safety and security. At the same time, Chinese regulators are seeking to promote the further development and application of such technology across diverse industries. Tensions between these policy objectives are reflected in the provisions of the Measures that impose different types of obligations on generative AI service providers. Such tensions present significant challenges for the implementation of the regulation. As Beijing moves towards establishing a comprehensive legal framework for AI governance, legislators will need to further clarify and balance the responsibilities of diverse stakeholders.
Generative artificial intelligence (GenAI) raises ethical and social challenges that can be examined through a normative and an epistemological approach. The normative approach, increasingly adopted by European institutions, identifies the pros and cons of technological advancement. The main pros concern technological innovation, economic development and the achievement of social goals and values. The disadvantages mainly concern cases of abuse, use or underuse of GenAI. The epistemological approach investigates the specific way in which GenAI produces information, knowledge, and a representation of reality that differs from that of human beings. To fully grasp the impact of GenAI, our paper contends that both approaches should be pursued: identifying the risks and opportunities of GenAI also depends on considering how this form of AI works from an epistemological viewpoint and on our ability to interact with it. Our analysis compares the epistemology of GenAI with that of law to highlight four problematic issues: (i) qualification; (ii) reliability; (iii) pluralism and novelty; and (iv) technological dependence. The epistemological analysis of these issues leads to a better framing of the social and ethical aspects resulting from the use, abuse or underuse of GenAI.
The AI Act contains some specific provisions dealing with the possible use of artificial intelligence for discriminatory purposes or in discriminatory ways in the context of the European Union. The AI Act also regulates generative AI models. However, these two sets of rules have little in common: the provisions concerning non-discrimination tend not to cover generative AI, and the generative AI rules tend not to cover discrimination. Based on this analysis, the Chapter considers the current EU legal framework on discriminatory output of generative AI models, and concludes that expressions already prohibited by anti-discrimination law certainly remain prohibited after the approval of the AI Act, while discriminatory content not covered by EU non-discrimination legislation will remain lawful. For the moment, the AI Act has not brought any particularly relevant innovation on this specific matter, but the picture might change in the future.
Generative AI promises to have a significant impact on intellectual property law and practice in the United States. Several disputes have already arisen that are likely to break new ground in determining what IP protects and what actions infringe. Generative AI is also likely to have a significant impact on the practice of searching for prior art, creating new materials, and policing rights. This chapter surveys the emerging law of generative AI and IP in the United States, sticking as closely as possible to near-term developments and controversies. All of the major IP areas are covered, at least briefly, including copyrights, patents, trademarks, trade secrets, and rights of publicity. For each of these areas, the chapter evaluates the protectability of AI-generated materials under current law, the potential liability of AI providers for their use of existing materials, and likely changes to the practice of creation and enforcement.
There is growing global interest in how AI can improve access to justice, including how it can increase court capacity. This chapter considers the potential future use of AI to resolve disputes in the place of the judiciary. We focus our analysis on the right to a fair trial as outlined in Article 6 of the European Convention on Human Rights, and ask: do we have a right to a human judge? We first identify several challenges to interpreting and applying Article 6 in this new context, before considering the principle of human dignity, which has received little attention to date. Arguing that human dignity is an interpretative principle which incorporates protection from dehumanisation, we propose that it provides a deeper, or “thicker”, reading of Article 6. Applied to this context, we identify risks of dehumanisation posed by judicial AI, including not being heard, or not being subject to human judgement or empathy. We conclude that a thicker reading of Article 6 informed by human dignity strongly suggests the need to preserve human judges at the core of the judicial process in the age of AI.
Drawing on the extensive history of studying the terms and conditions (T&Cs) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January–March 2023, in which T&Cs were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study covered providers of multiple modes of output (text, image, etc.), of small and large sizes, and from varying countries of origin. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, much as search and social media platforms do, but without the governance increasingly imposed on those actors, and in contradiction to their function as content generators rather than mere hosts for third-party content.
The recent paradigm shift from predictive to generative AI has ushered in a new era of innovation in artificial intelligence. Generative AI, exemplified by large language models (LLMs) like GPT (Generative Pre-trained Transformer), has revolutionized this landscape. This transition holds profound implications for the legal domain, where language is central to practice. The integration of LLMs into AI and law research and legal practice presents both opportunities and challenges. This chapter explores the potential enhancement of AI systems through LLMs, particularly the CLAUDETTE system, focusing on consumer empowerment and privacy protection. On this basis, we also investigate what new legal issues may emerge in the context of the AI Act and related regulations. Understanding the capabilities and limitations of LLMs vis-à-vis conventional approaches is crucial to harnessing their full potential for legal applications.
This chapter explores the privacy challenges posed by generative AI and argues for a fundamental rethinking of privacy governance frameworks in response. It examines the technical characteristics and capabilities of generative AIs that amplify existing privacy risks and introduce new challenges, including nonconsensual data extraction, data leakage and re-identification, inferential profiling, synthetic media generation, and algorithmic bias. It surveys the current landscape of U.S. privacy law and its shortcomings in addressing these emergent issues, highlighting the limitations of a patchwork approach to privacy regulation, the overreliance on notice and choice, the barriers to transparency and accountability, and the inadequacy of individual rights and recourse. The chapter outlines critical elements of a new paradigm for generative AI privacy governance that recognizes collective and systemic privacy harms, institutes proactive measures, and imposes precautionary safeguards, emphasizing the need to recognize privacy as a public good and collective responsibility. The analysis concludes by discussing the political, legal, and cultural obstacles to regulatory reform in the United States, most notably the polarization that prevents the enactment of comprehensive federal privacy legislation, the strong commitment to free speech under the First Amendment, and the “permissionless” innovation approach that has historically characterized U.S. technology policy.
This chapter deals with the use of Large Language Models (LLMs) in the legal sector from a comparative law perspective. It explores their advantages and risks; the pertinent question whether the deployment of LLMs by non-lawyers can be classified as an unauthorized practice of law in the US and Germany; what lawyers, law firms and legal departments need to consider when using LLMs under professional rules of conduct, especially the American Bar Association Model Rules of Professional Conduct and the Charter of Core Principles of the European Legal Profession of the Council of Bars and Law Societies of Europe; and, finally, how the recently published AI Act will affect the legal tech market, specifically the use of LLMs. A concluding section summarizes the main findings and points out open questions.
This chapter explores the intricate relationship between consumer protection and GenAI. Prominent tools like Bing Chat, ChatGPT-4.0, Google’s Gemini (formerly known as Bard), OpenAI’s DALL·E, and Snapchat’s AI chatbot are widely recognized and dominate the generative AI landscape. However, numerous smaller, unbranded GenAI tools are embedded within major platforms, often going unrecognized by consumers as AI-driven technology. The focus of this chapter is the phenomenon of algorithmic consumers, whose interactions with digital tools, including GenAI, have become increasingly dynamic, engaging, and personalized. Indeed, the rise of algorithmic consumers marks a pivotal shift in consumer behaviour, which is now characterized by heightened levels of interactivity and customization.
Several criminal offences can originate from or culminate in the creation of content. Sexual abuse can be perpetrated by producing intimate material without the subject’s consent, while incitement to criminal activity can begin with a simple conversation. When the task of generating content is entrusted to artificial agents, it becomes necessary to examine the associated risks posed by this technology. Generative AI changes criminal affordances because it simplifies access to harmful or dangerous content, amplifies the range of recipients, creates new kinds of harmful content, and can exploit cognitive vulnerabilities to manipulate user behaviour. Given this evolving landscape, the question arises whether criminal law should be involved in the policies aimed at fighting and preventing Generative AI-related harms. The bulk of criminal law scholarship to date would not criminalise AI harms, on the theory that AI lacks moral agency. However, when a serious harm occurs, responsibility needs to be distributed according to the guilt of the agents involved and, where guilt is lacking, withheld on account of their innocence. Legal systems need to start exploring whether and how guilt can be preserved when the actus reus is completely or partially delegated to Generative AI.
This chapter focuses on how Chinese and Japanese copyright law balances content owners’ desire for copyright protection with the national policy goal of enabling and promoting technological advancement, particularly in the area of AI-related progress. In discussing this emerging area of law, we focus mainly on the two most fundamental questions that the widespread adoption of generative AI poses to copyright regulators: (1) does the use and refinement of training data violate copyright law, and (2) who owns the copyright in content produced by or with the help of AI?