Data Rights in Transition maps the development of data rights that formed and reformed in response to the socio-technical transformations of the postwar twentieth century. The authors situate these rights, with their early pragmatic emphasis on fair information processing, as different from and less symbolically powerful than the utopian human rights of older centuries. They argue that, if an essential role of human rights is 'to capture the world's imagination', the next generation of data rights needs to come closer to realising that vision – even while maintaining their pragmatic focus on effectiveness. After a brief introduction, the sections that follow focus on socio-technical transformations, the emergence of the right to data protection, and new and emerging rights such as the right to be forgotten and the right not to be subject to automated decision-making, along with new mechanisms of governance and enforcement.
Generative AI has catapulted into the legal debate through the popular applications ChatGPT, Bard, Dall-E, and others. While the predominant focus has hitherto centred on issues of copyright infringement and regulatory strategies, particularly within the ambit of the AI Act, it is imperative to acknowledge that generative AI also engenders substantial tension with data protection laws. The example of generative AI puts a finger on the sore spot of the contentious relationship between data protection law and machine learning: the unresolved conflict between the protection of individuals, rooted in fundamental data protection rights, and the massive amounts of data required for machine learning, which renders data processing nearly universal. In the case of LLMs, which scrape nearly the whole internet, this training inevitably relies on, and possibly even creates, personal data under the GDPR. This tension manifests across multiple dimensions, encompassing data subjects’ rights, the foundational principles of data protection, and the fundamental categories of data protection. Drawing on ongoing investigations by data protection authorities in Europe, this paper undertakes a comprehensive analysis of the intricate interplay between generative AI and data protection within the European legal framework.
There is a conflict in law and in journalism ethics regarding the appropriateness of truthful but scandalous information: What should be published and what should be edited out? In the past, judges routinely gave the press the right to make such determinations and often sided with journalists even in surprising situations in which the privacy interest of the individual seemed clear. In the modern internet era, however, some courts are more willing to side with the privacy of individuals over First Amendment press freedoms – and the case brought by professional wrestler Hulk Hogan against the Gawker website for publishing his sex tape without permission is one example. This chapter uses that scenario to explore the clash between an individual’s privacy rights and the rights of the press to decide what is news.
This chapter deals with the relationship between digital monies and basic societal values such as privacy and individual freedom. Threats to privacy and related concerns have risen in the digital age. Information technologies allow companies and governments to collect, store, maintain, and disseminate information on all dimensions of individual and collective life. Privacy is a basic human need defended by legislation and constitutions worldwide, and it helps explain the attractiveness of cash. Some of today’s commercial applications of information technology imply intrusions into the personal sphere. Societal concerns about anonymity, because it facilitates unlawful and criminal activity, must also be taken into consideration, but there are reasons why some privacy of monetary transactions should be preserved, and cash is uniquely suited for that. Another question concerns the freedom to choose one's money, an idea proposed originally by the so-called Austrian school of thought. Followers of the school associated with Friedrich von Hayek argued that currencies should compete with one another. That school, however, underestimated important objections, first and foremost the collective-interest ingredient of a well-functioning money, which makes private competition ill-suited as a means of promoting good monies. The chapter concludes by explaining why some of these objections apply to crypto assets as well.
This chapter deals with cash (banknotes and coins), the oldest and most traditional form of money in existence. Cash involves a paradox: On the one hand, it is technologically less advanced than modern means of payment like cards and apps, so one could presume that it should decline in use and eventually disappear. On the other hand, evidence from almost the whole world shows that the demand for cash is increasing, although it is used less frequently for certain types of transactions, such as online commerce and purchases in retail stores and restaurants. Criminal activity may explain part of the puzzle, but not much. One advantage of cash is that it can be seen and touched, appealing to the senses and conveying a sense of security. Another is that it ensures absolute privacy of transactions. Other important characteristics explaining the popularity of cash are that it is simple (it requires no technology or complication whatsoever); definitive (it instantly settles any financial obligation); private and personal (it appeals to the desire for confidentiality); and self-sufficient (it does not depend on any other infrastructure functioning). We conclude, therefore, that physical cash is a useful complement to a robust and diversified monetary system in which digital means of payment gradually prevail.
Jewish experiences, from life in cramped Judenhäuser always subject to Gestapo violence, to the suffering of individuals and families in a variety of ghettos in eastern Europe, are discussed. This includes the geographies of the Holocaust, house committees and activities within and outside ghetto walls, and also communal organizations, economic activities, self-help, and familial strategies.
A potent investigative instrument in the fight against highly sophisticated criminal schemes camouflaged by layers of secrecy is the sting operation. However, its application provokes crucial questions of legality and admissibility. Additionally, the lack of legal provisions governing sting operations in India has resulted in conflicting judicial stances, calling for clarity on this issue. Hence, this paper examines the intricate legal and ethical challenges surrounding sting operations, which, on the one hand, aid in uncovering serious offences and serve the public interest but, on the other hand, threaten to infringe privacy rights and the fairness of trials. The paper analyses international practices in Canada and the United States of America, alongside judicial precedents and scholarly opinions in India, and recommends statutory recognition of sting operations in the Indian legal system. The paper proposes stringent judicial control, elaborate ethical guidelines to avoid staging crimes, and regulations on media reporting to maintain the delicate balance of public interest versus personal rights. The paper concludes with a model draft for legislative reform that seeks to strengthen the idea of justice without weakening fundamental rights.
This chapter scrutinizes the operation of public sector privacy and data protection laws in relation to AI data in the United States, the United Kingdom, and Australia, to assess the potential for utilizing these laws to challenge automated government decision-making. Government decision-making in individual cases will almost inevitably involve the collection, use, or storage of personal information, and may also involve drawing inferences from data already collected. At the same time, increased use of automated decision-making encourages the large-scale collection and mining of personal data. Privacy and data protection laws provide a useful chokepoint for limiting discrimination and other harms that arise from misuses of personal information.
The emergence of “FemTech”, a term used to describe technologically based or enabled applications serving women’s health needs, as a driver of capital investment in the past decade is a notable development in advancing women’s health. Critics have raised important concerns regarding the pitfalls of FemTech, with privacy concerns chief among them. This private market, however, should be integrated into the creation of systemwide corrections of the problems that plague women of color. To do so through a derivative FemTech framework (hereinafter the “Framework”), clear limitations must concurrently be overcome to realize its possibilities.
Over the years, businesses have been trying to identify ways to segment their customer base and engage in price discrimination. The objective is to provide different prices to different consumers based on a range of factors, such as age, location, income, and other demographic characteristics, which are considered capable of revealing the reserve price of buyers. With the increasing use of digital technology, this practice has become even more accurate and sophisticated, leading to the emergence of personalized pricing. This pricing approach utilizes advanced algorithms and data analytics to approximate the exact willingness to pay of each purchaser with greater precision.
In an unprecedented ruling in 2018, the Brazilian Consumer Protection Authority imposed a fine on a popular online travel company, Decolar.com, for allegedly favouring foreign consumers over Brazilian residents during the 2016 Olympics held in Rio de Janeiro. The accusation was that Decolar.com had offered hotel reservations at different prices according to the consumer’s location, as identified through their internet protocol (IP) address.
To our knowledge, this is the only case thus far in Brazil to review the practice of charging different consumers different prices based on their specific characteristics.
Personalized pricing is a form of pricing in which different customers are charged different prices for the same product depending on their ability to pay, based on the information that the trader holds about a potential customer. Pricing plays a relevant role in consumers’ decision-making, and a firm’s performance can be determined by its ability to execute a pricing strategy effectively. Pricing also shapes perceptions of quality and value and the willingness to buy, and a consumer’s willingness usually depends on transparency and fairness.
Technological developments have enabled online sellers to personalize the prices of goods and services.
As the personalization of e-commerce transactions continues to intensify, the law and policy implications of algorithmic personalized pricing (APP) should be top of mind for regulators. Price is often the single most important term of consumer transactions. APP is a form of online discriminatory pricing practice whereby suppliers set prices based on consumers’ personal information with the objective of getting as close as possible to their maximum willingness to pay. As such, APP raises issues of competition, privacy, personal data protection, contract, consumer protection, and anti-discrimination law.
This book chapter looks at the legality of APP from a Canadian perspective in competition, commercial consumer law, and personal data protection law.
Lay people are often misinformed about what makes a password secure, the various types of security threats to passwords or password-protected resources, and the risks of compromising practices such as reusing passwords and mandatory password expiration. Expert knowledge about password security has evolved considerably over time, but on many points research supports general agreement among experts about best practices. Remarkably, though perhaps not surprisingly, there is a sizable gap between what experts agree on and what lay people believe and do. The knowledge gap might exist and persist because of intermediaries, namely professionals and practitioners as well as technological interfaces such as password meters and composition rules. In this chapter, we identify knowledge commons governance dilemmas that arise within and between different communities (expert, professional, lay) and examine implications for other everyday misinformation problems.
Generative artificial intelligence (AI) systems, such as large language models, image synthesis tools, and audio generation engines, present remarkable possibilities for creative expression and scientific discovery but also pose pressing challenges for privacy governance. By identifying patterns in vast troves of digital data, these systems can generate hyper-realistic yet fabricated content, surface sensitive inferences about individuals and groups, and shape public discourse at an unprecedented scale. These innovations amplify privacy concerns about nonconsensual data extraction, re-identification, inferential profiling, synthetic media manipulation, algorithmic bias, and quantification. This article argues that the current U.S. legal framework, rooted in a narrowly targeted sectoral approach and overreliance on individual notice and consent, is fundamentally mismatched to address the emergent and systemic privacy harms of generative AI. It examines how the unprecedented scale, speed, and sophistication of these systems strain core assumptions of data protection law, highlighting the misalignment between AI’s societal impacts and individualistic, reactive approaches to privacy governance. The article explores distinctive privacy challenges posed by generative AI, surveys gaps in existing U.S. regulations, and outlines key elements of a new paradigm to protect individual and collective privacy rights that (1) shifts from individual to collective conceptions of privacy; (2) moves from reactive to proactive governance; and (3) reorients the goals and values of AI governance. Despite significant obstacles, it identifies potential policy levers, technical safeguards, and conceptual tools to inform a more proactive and equitable approach to governing generative AI.
It is widely presumed that privacy is ‘factive’, i.e. that it cannot be diminished by accessing or disseminating falsehoods. But if this is so, what wrongs are committed in cases where others access documents of ours (letters, medical records, etc.) which contain false information? In this article, I examine various ways of explaining the wrongfulness of accessing and disseminating falsehoods (defamation; that privacy can be violated without being diminished; ‘control’ accounts of privacy; downstream revelations of truths; that falsehoods diminish ‘propositional’ or ‘attentional’ privacy). I lay out what each of these accounts misses about accessing falsehoods, about privacy, and/or about the right to privacy. I then propose two alternative ways of accounting for the intuitive wrongfulness of accessing and disseminating falsehoods: viewing them as merely ‘attempted’ privacy violations and weakening the truth condition of privacy diminishments.
There is no doubt that AI systems, and the large-scale processing of personal data that often accompanies their development and use, have put a strain on individuals’ fundamental rights and freedoms. Against that background, this chapter aims to walk the reader through a selection of key concerns arising from the application of the GDPR to the training and use of such systems. First, it clarifies the position and role of the GDPR within the broader European data protection regulatory framework. Next, it delineates its scope of application by delving into the pivotal notions of “personal data,” “controller,” and “processor.” Lastly, it highlights some friction points between the characteristics inherent to most AI systems and the general principles outlined in Article 5 GDPR, including lawfulness, transparency, purpose limitation, data minimization, and accountability.
This essay examines how nineteenth-century American literature paved the way for the modern exposure of private life in such disparate venues as the gossip column, social media, and reality television. In particular, this essay examines the sketch form, a popular nineteenth-century prose genre that has often been characterized as a minor form in comparison to the novel. In examining the history of the sketch form, this essay shows how the sketch conveyed reservations about the interiority and exposures central to the novel form. As practiced by Washington Irving, the earliest popularizer of this genre, the sketch advocated respectful discretion, the avoidance of private matters, and social stasis, the latter of which positioned the sketch in opposition to the social mobility characteristic of the novel. Irving presented the sketch as the genre of literary discretion, but a later practitioner, Nathaniel Parker Willis, used the sketch to divulge confidences and violate social decorum. Willis adapted the sketch to become a precursor of the gossip column and to mirror the novel form in exposing private life.
Chapter 4 presents a review of the ISO 18000-63 protocol, including data encoding and modulation, and aspects of the transponder memory structure, security, and privacy, and presents real examples of reader–transponder transactions.
Competition law is experiencing a transformation from a niche economic tool to a Swiss Army knife of broader industrial and social policy. Relatedly, there is a narrative that sees an expansive role for competition law in broad areas such as sustainability, privacy, and workers and labour rights, and a counternarrative that wants to deny it that role. There is rich scholarship in this area, but little empirical backing. In this article, we present the results of comprehensive empirical research into whether new goals and objectives such as sustainability, privacy, and workers and labour rights are indeed endorsed in EU competition law and practice. We do so through an investigation into the totality of Court of Justice rulings, Commission decisions, Advocate General opinions, and public statements of the Commission. Our findings inject data into the debate and help dispel misconceptions that may arise from overly focusing on cherry-picked high-profile decisions while overlooking the rest of the EU’s institutional practice.
We find that sustainability is partially recognised as a goal whereas privacy and labour rights are not. We also show that all three goals are more recent than classic goals, that EU institutions have not engaged much with the areas of sustainability, privacy, and workers and labour rights, and that the Commission’s rhetoric is seemingly out of step with its decisional practice. We also identify trends that may portend change, and we contextualize our analysis through the lens of the history and nature of the EU’s integration and economic constitution.