Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence – Annotation & Notes

This paper looks at advances in artificial intelligence through the lens of critical science, post-colonial, and decolonial theory. The authors acknowledge the positive potential of AI technologies, but use this paper to highlight their considerable risks, especially for vulnerable populations. They call on AI communities to adopt a proactive approach grounded in decolonial theories, which use historical hindsight to identify patterns of power that remain relevant, perhaps more than ever, amid the rapid advance of AI technologies.

Critical data modeling and the basic representation model – Annotation & Notes

[Image: chart showing racial disparities in the US criminal justice system, late 2010s]

Data models are foundational to information processing, and in the digital world they stand in for the real world. When machines make algorithmically informed decisions, their algorithms are shaped by the data models they use. And the data structured by those models is necessarily numerical, since machines perform logical operations, not creative interpretations. It follows that data used in machine operations are machine-language translations of real-world phenomena, expressed in a data model designed for efficient processing. It should not be surprising, then, that as information systems increasingly make decisions affecting people and communities, their operations are in a very direct sense an extension of the messy human world. The result has been information systems that reflect human racism, sexism, and many other “-isms,” with real-world harm to individuals and communities. But given the black-box nature of “machine learning” algorithms, how do we know what happens inside the black box? How can we document machine bias so as to design algorithms that don’t perpetuate social harms?
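To make that translation step concrete, here is a minimal sketch of my own (not from the paper): a toy data model that reduces a person involved in the criminal justice system to a fixed numeric vector. The field names and category codes are invented for illustration.

```python
# Hypothetical sketch: a minimal data model that translates a real-world
# description into the numeric codes a machine can process. Field names
# and code assignments are invented for illustration.

RACE_CODES = {"white": 0, "black": 1, "hispanic": 2, "other": 3}
OUTCOME_CODES = {"released": 0, "detained": 1}

def encode_record(race: str, prior_arrests: int, outcome: str) -> list[int]:
    """Reduce a person to the fixed numeric vector the data model expects.
    Anything the model has no code for is forced into a catch-all
    category -- one of the ways bias enters before any algorithm runs."""
    return [
        RACE_CODES.get(race.lower(), RACE_CODES["other"]),
        prior_arrests,
        OUTCOME_CODES[outcome.lower()],
    ]

# A person becomes [1, 2, 1]: three integers, stripped of all context.
print(encode_record("Black", 2, "detained"))
```

The point of the sketch is that every design choice in the model (which categories exist, what gets collapsed into "other," which attributes are recorded at all) is made before any algorithm runs, and those choices carry the biases of the humans and institutions that made them.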

The “Privacy Paradox” and Our Expectations of Online Privacy

[Image: rhombus-shaped loop diagram covered in words]

The analysis presented here is based on my review of existing research on the privacy expectations of people who create online content. It concerns the full range of user interactions on what we used to call Web 2.0 platforms, focusing on systems like Facebook, Twitter, Reddit, Instagram, and Amazon. User interactions include posting original content (text, photos, videos, memes, etc.) and commenting on content posted by others. For the purposes of this analysis, online content also includes reviews on Amazon, comments on news websites, photos uploaded to photo-sharing sites, and original videos posted to YouTube. In short, anything in any format created by an individual from their own original thought and creative energy, and subsequently posted by that individual on a social media platform, counts as online content. In most instances the online content or interaction contains, or is traceable to, personally identifiable information, even when the content creator does not intend this.

Information and Communication Technology and Society – Annotation & Notes

In this article Fuchs introduces “Critical Internet Theory” as a foundation for analyzing the Internet and society based on a Marxian critique. He illustrates Critical Internet Theory (hereinafter CIT for brevity) using the emergence of the so-called Web 2.0 as an Internet gift commodity strategy, wherein users produce content on free platforms, which commodify the content to increase their advertising revenues. Fuchs introduces the concept of the “Internet prosumer commodity” to describe this “free” exchange of labor and value. This strategy, he writes, “functions as a legitimizing ideology.”

The Panoptic Sort: A Political Economy of Personal Information, by Oscar H. Gandy, Jr. – Book Review

[Image: graphic of a human eye surrounded by digital data]

The academic field of surveillance studies has (thankfully, in my view) become more crowded during the past few years in response to the increasing use of data technologies for social control. In the early 1990s, when some of us (e.g. me) were naively celebrating the liberating potential of the internet, Oscar H. Gandy, Jr. was critically examining earlier incarnations of the data systems and practices that contributed to the entrenchment of existing systems of domination and social injustice. First published in 1993, his book The Panoptic Sort was a groundbreaking account of the history and rationalization of surveillance in service of institutional control and corporate profit at the expense of individual privacy and autonomy. In the second edition, published by Oxford University Press in 2021, Gandy updates the original book for the context of today’s increasingly ubiquitous technologies that collect, process, and commodify personal information for instrumental use by corporate interests.

The Application of Artificial Intelligence to Journalism: An Analysis of Academic Production – Annotation & Notes

This paper presents a summary of academic research on AI use in journalism, based on the authors’ review of 358 texts published between 2010 and January 2021. The materials they reviewed were found through academic databases, including Scopus and Web of Science, as well as Google Scholar. Most of the articles were published in English, and the majority were from the United States. Given the significant developments in AI, and in AI in journalism, since 2021, this paper is best read as a snapshot of research from the period covered. The authors note a rapid increase in research through 2019, with a drop-off in 2020, presumably due to disruptions from the COVID-19 pandemic.

Dealing with Digital Intermediaries: A Case Study of the Relations between Publishers and Platforms – Annotation & Notes

In this article, published in 2017 in the journal New Media & Society, Nielsen and Ganter report on a series of interviews with editors, senior managers, and product developers at a large, well-established European news media organization about their experiences with, and perspectives on, relationships with the digital platforms now central to news distribution, namely Facebook and Google. The paper documents the asymmetrical power relationship between a large, well-known, and successful news organization and the digital platforms on which it now depends for audience reach. And it points to a gap in similar research on smaller, more precarious news organizations.

Platformisation, by Thomas Poell, David Nieborg & José van Dijck – Annotation & Notes

In this paper, published in the journal Internet Policy Review, the authors define and contextualize the concept of platformisation from four distinct scholarly perspectives: business studies, software studies, critical political economy, and cultural studies. They suggest a research agenda that makes use of these four dimensions to provide insight into the “ever-evolving dynamics of platformisation” as sites of both benefit and harm to individuals and society. And they offer ways to operationalize the concept in critical research on the emergence and concentration of power among a small number of platform companies, and on how those companies are transforming social relationships and key societal sectors.

Autoethnography: An Overview – Annotation & Notes

It seems appropriate to use my first-person voice in this annotation. But let’s begin with the authors’ voice for context: “Autoethnography is an approach to research and writing that seeks to describe and systematically analyze (graphy) personal experience (auto) in order to understand cultural experience (ethno)” (p. 273).

When search engines stopped being human: menu interfaces and the rise of the ideological nature of algorithmic search – Annotation & Notes

[Image: Carol Kuhlthau’s Information Search Process model with Google Search superimposed]

In recent years, some have argued that if you can’t find information on Google, it might as well not exist. That assertion is complicated by the fact that estimates of the scope of Google’s search index range from 4 percent down to 0.004 percent of the total Internet. Niels Kerssens examines these issues in the context of a “positivist algorithmic ideology,” a normalizing force that frames certain practices as an established standard exempt from further interrogation.