The “Privacy Paradox” and Our Expectations of Online Privacy

The analysis presented here is based on my review of existing research on the privacy expectations of people who create online content. It concerns the full range of user interactions on what we used to call Web 2.0 platforms, focusing on social media systems like Facebook, Twitter, Reddit, Instagram, and Amazon. User interactions include posting original content (text, photos, videos, memes, etc.) and commenting on content posted by others. In this analysis, reviews on Amazon and comments on news websites count as online content, as do photos uploaded to photo-sharing sites and original videos posted to YouTube. More broadly, anything in any format that an individual creates from their own original thought and creative energy, and subsequently posts on a social media platform, counts as online content. In most instances the online content or interaction contains, or is traceable to, personally identifiable information, even when this is unintended by the content creator.

Book Review – The Panoptic Sort: A Political Economy of Personal Information, by Oscar H. Gandy, Jr.

The academic field of surveillance studies has (thankfully in my view) become more crowded during the past few years in response to the increasing use of data technologies for social control. In the early 1990s, when some of us (e.g. me) were naively celebrating the liberating potential of the internet, Oscar H. Gandy, Jr. was critically examining earlier incarnations of data systems and practices that contributed to the entrenchment of existing systems of domination and social injustice. First published in 1993, his book The Panoptic Sort was a groundbreaking account of the history and rationalization of surveillance in service of institutional control and corporate profit at the expense of individual privacy and autonomy. In a second edition, published by Oxford University Press in 2021, Gandy updates his original book for the context of today’s increasingly ubiquitous technologies that collect, process, and commodify personal information for instrumental use by corporate interests.

The GDPR and (not) Regulating the Internet of Things

The European Union’s General Data Protection Regulation (GDPR) has been described as a “gold standard” for protecting personal privacy in the Internet age. Among its core principles is a requirement for the consent of individuals to the collection and processing of their personal data. Consent must be freely given, specific, informed, and unambiguous. Based on the language of the GDPR and an extensive literature review, I argue here that the possibility of such consent is undermined by increasingly ubiquitous Internet of Things (IoT) devices which collect a vast array of personal data, and the use of automated data processing that can produce significant social and legal impacts on individuals and groups. I outline the requirements of consent under the GDPR, and describe the challenges to the GDPR’s privacy protection principles in a world of rapidly evolving IoT technologies.

The Right to Privacy In Context: A Century of Debate

In 1890 Samuel Warren and Louis Brandeis published a groundbreaking article in the Harvard Law Review arguing that privacy protections are part of a “right to be let alone.” The article strongly influenced theories of privacy over subsequent decades, and has been referenced in important U.S. Supreme Court rulings. But since the 19th century, society has changed in profound ways. We now interact daily with technologies that closely track our communications and behavior, collecting personal data for targeted advertising, trade among data brokerages, and mining by governments for criminal and political investigations. More than ever, the right to be let alone would appear to be under siege.

In this paper I present two prominent critiques of the Warren/Brandeis conception of the right to privacy, so as to begin addressing the inadequacies of privacy protections in today’s world of ubiquitous digital information. Richard A. Posner views privacy as a question of economics and market efficiency. He rejects the conception of privacy as “the right to be let alone,” and suggests that individual privacy has little economic value to society, in contrast to commercial privacy, which can have great value in a capitalistic market-based economy. Daniel Solove offers a theory of privacy based on Ludwig Wittgenstein’s notion of family resemblances, accounting for the contextual value of privacy based on prevailing social practices and norms.

I wrote this short article for an assignment in a doctoral class on the history and foundations of information science. Given the assignment parameters, the article represents only a few points on the spectrum of conceptions about privacy. I was unable to include the important theoretical work of many other scholars whose work is essential to understanding privacy in the digital age. In particular, Helen Nissenbaum's articulation of the "contextual integrity" of privacy is laying important groundwork for new conceptions of privacy protection. Julie E. Cohen calls for recognition of the social harms increasingly evident in the "biopolitical domain," a space where personal information is acquired and exploited as raw material for various types of marketplace activities. Oscar H. Gandy, Jr. identifies the inherent power imbalances of the "panoptic sort," and offers a theoretical framework for social and policy interventions. These and other important contributions are not covered here, but will be elsewhere as my research continues.

Fueling the AdTech Machine: Google Analytics and the Commodification of Personal Data

This paper concerns the role of online analytics in facilitating the rise of today's ubiquitous programmatic advertising, referred to herein as "AdTech." Most criticism of AdTech has focused on the online tracking that captures user data and the digital advertising that exploits it for commercial purposes. Almost entirely lost in the discussion is the role of analytics platforms, which process personal data and make it actionable for targeted advertising. I argue that the role of analytics has been key to the rise of AdTech, and has not been given the critical attention it deserves. I wrote this paper while pursuing my research as a PhD student at the University of Illinois School of Information Sciences. It has not been peer-reviewed or published elsewhere, and I’m posting it here to invite comments, criticism, and suggestions. Please feel free to send me email at jackb at illinois dot edu, or send me a message on Twitter @jackbrighton.