The analysis presented here is based on my review of existing research on privacy expectations of people who create online content. This analysis concerns the full range of user interactions on what we used to call Web 2.0 platforms, focusing on social media systems like Facebook, Twitter, Reddit, Instagram, and Amazon. User interactions include posting original content (text, photos, videos, memes, etc.) and commenting on content posted by others. For this analysis, online content includes Amazon reviews, comments on news websites, photos uploaded to photo-sharing sites, and original videos posted to YouTube: in short, anything in any format created by an individual from their own original thought and creative energy, and subsequently posted by that individual on a social media platform. In most instances the online content or interaction contains, or is traceable to, personally identifiable information, even when this is unintended by the content creator.
The academic field of surveillance studies has (thankfully in my view) become more crowded during the past few years in response to the increasing use of data technologies for social control. In the early 1990s, when some of us (e.g. me) were naively celebrating the liberating potential of the internet, Oscar H. Gandy, Jr. was critically examining earlier incarnations of data systems and practices that contributed to the entrenchment of existing systems of domination and social injustice. First published in 1993, his book The Panoptic Sort was a groundbreaking account of the history and rationalization of surveillance in service of institutional control and corporate profit at the expense of individual privacy and autonomy. In a second edition, published by Oxford University Press in 2021, Gandy updates his original book for the context of today’s increasingly ubiquitous technologies that collect, process, and commodify personal information for instrumental use by corporate interests.
The European Union’s General Data Protection Regulation (GDPR) has been described as a “gold standard” for protecting personal privacy in the Internet age. Among its core principles is a requirement for the consent of individuals to the collection and processing of their personal data. Consent must be freely given, specific, informed, and unambiguous. Based on the language of the GDPR and an extensive literature review, I argue here that the possibility of such consent is undermined by increasingly ubiquitous Internet of Things (IoT) devices, which collect a vast array of personal data, and by automated data processing that can produce significant social and legal impacts on individuals and groups. I outline the requirements of consent under the GDPR, and describe the challenges to the GDPR’s privacy protection principles in a world of rapidly evolving IoT technologies.
Today we have an abundance of information resources undreamed of in past centuries, but we are exposed via the Internet to more disinformation than any previous generation. Digital media technologies are being massively leveraged to spread propagandistic messages designed to undermine trust in all forms of information, to stimulate strongly affective responses, and to entrench political, cultural, and social divisions. The critical demands of the digital age have outpaced the development of a corresponding information literacy. Meanwhile, journalists are accused by authoritarian leaders of being “enemies of the people” while facing layoffs from newsrooms no longer supported by a sustainable business model. Short of reinvention, professional journalism will be increasingly endangered and the relevance of news organizations will continue to decline. In this paper I propose a new collaborative model for news production and curation combining the expertise of librarians, journalists, educators, and technologists, with the objectives of addressing today’s information literacy deficit, bolstering the credibility and verifiability of news, and restoring reasoned deliberation in the public sphere.
The digital artifact known as Early English Books Online (EEBO) is a resource for research on British history and literature between 1473 and 1700. EEBO is a collection of 146,000 mostly English works accessible via an online database, available by subscription from ProQuest. In this article I first review the history of EEBO, from the cataloging efforts that began more than a century ago through the processes that produced the online version used by so many scholars today. I then critically review its limitations, and discuss some of the challenges and drawbacks inherent in the transformation of analog source materials into digital form, including information distortion and loss, format obsolescence, and the difficulties of digital preservation.
In 1890 Samuel Warren and Louis Brandeis published a groundbreaking article in the Harvard Law Review arguing that privacy protections are part of a “right to be let alone.” The article strongly influenced theories of privacy over subsequent decades, and has been referenced in important U.S. Supreme Court rulings. But since the 19th century, society has changed in profound ways. We now interact daily with technologies that closely track our communications and behavior, collecting personal data for targeted advertising, trade among data brokerages, and mining by governments for criminal and political investigations. More than ever, the right to be let alone would appear to be under siege. In this paper I present two prominent critiques of the Warren/Brandeis conception of the right to privacy, so as to begin addressing the inadequacies of privacy protections in today’s world of ubiquitous digital information. Richard A. Posner views privacy as a question of economics and market efficiency. He rejects the conception of privacy as “the right to be let alone,” and suggests that individual privacy has little economic value to society, in contrast to commercial privacy which can have great value in a capitalistic market-based economy. Daniel Solove offers a theory of privacy based on Ludwig Wittgenstein’s notion of family resemblances, accounting for the contextual value of privacy based on prevailing social practices and norms. I wrote this short article for an assignment in a doctoral class on the history and foundations of information science. Given the assignment parameters, the article represents only a few points on the spectrum of conceptions about privacy. I was unable to include the important theoretical work of many other scholars whose work is essential to understanding privacy in the digital age. In particular, Helen Nissenbaum's articulation of the "contextual integrity" of privacy is laying important groundwork for new conceptions of privacy protection. 
Julie E. Cohen calls for recognition of the social harms increasingly evident in the "biopolitical domain," a space where personal information is acquired and exploited as raw material for various types of marketplace activities. Oscar H. Gandy, Jr. identifies the inherent power imbalances of the "panoptic sort," and offers a theoretical framework for social and policy interventions. These and other important contributions are not covered here, but will be addressed elsewhere as my research continues.
This paper concerns the role of online analytics in facilitating the rise of today's ubiquitous programmatic advertising, referred to herein as "AdTech." Most criticism of AdTech has focused on the online tracking that captures user data, and on the digital advertising that exploits it for commercial purposes. Almost entirely lost in the discussion is the role of analytics platforms, which process personal data and make it actionable for targeted advertising. I argue that the role of analytics has been key to the rise of AdTech, and has not been given the critical attention it deserves. I wrote this paper while pursuing my research as a PhD student at the University of Illinois School of Information Sciences. It has not been peer-reviewed or published elsewhere, and I’m posting it here to invite comments, criticism, and suggestions. Please feel free to email me at jackb at illinois dot edu, or message me on Twitter @jackbrighton.
In 2019 Ellen Lechman and I embarked on a research project to begin assessing the use of social media to spread propaganda in nations other than the United States. With this focus in mind, we conducted a literature review of recent research on propagandistic interventions occurring in Asian and European nations. We surveyed available academic and other institutional publications, and produced an annotated bibliography detailing a baker's dozen of sources we deemed most relevant.
In recent years, some have argued that if you can’t find information on Google, it might as well not exist. This assertion is problematic given that, according to various estimates, Google’s search index covers somewhere between 4 percent and .004 percent of the total Internet. Niels Kerssens examines these questions in the context of “positivist algorithmic ideology,” a normalizing force that frames certain practices as an established standard exempt from further interrogation.
I posted this short piece in 2015 on Medium, where you can still find it. I'm republishing it here because, somewhat ironically given its topic of preservation, I'm less than fully confident that Medium will still be around in a few years, at least in its current "open" form. My exposure to archival practice came from my struggles as a media producer in the emerging digital age. I began designing websites with streaming and downloadable multimedia in 1997, and quickly realized that without an archival plan the situation was becoming hopeless. I saw how quickly technology was changing, and suspected that the media we published on the web at that time would be unplayable within a few years. And the challenge of preserving the audiovisual record has only grown larger since I wrote this in 2015.