The viability of news has been in rapid decline since the mid-2000s. This post presents a critical analysis of how news publishers themselves helped precipitate the crisis by enthusiastically adopting Big Tech platform technologies and audience-building strategies. I show how search and social media platforms disrupted publishers’ relationships with audiences and advertisers by appropriating control over news distribution and revenue. I use Anthony Giddens’ structuration theory and Bruno Latour’s actor-network theory to explore the restructuring of the news industry by the sociotechnical practices and surveillance economics of today’s dominant platforms. Leveraging Michel Serres’ discourse on social parasitism, I present a research framework for assessing symbiotic and parasitic relationships in sociotechnical systems using historical, quantitative, and qualitative methods, and for identifying where news publishers still have agency to begin resolving the crisis. Finally, I argue that this research framework is urgently needed as publishers rapidly adopt new AI technologies.
Category: Research Methods
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence – Annotation & Notes
This paper examines advances in artificial intelligence through the lens of critical science, post-colonial, and decolonial theory. The authors acknowledge the positive potential of AI technologies, but use this paper to highlight their considerable risks, especially for vulnerable populations. They call on AI communities to adopt a proactive approach grounded in decolonial theories, which use historical hindsight to identify patterns of power that remain relevant – perhaps more than ever – amid the rapid advance of AI technologies.
Critical data modeling and the basic representation model – Annotation & Notes
Data models are foundational to information processing, and in the digital world they stand in for the real world. When machines make algorithmically informed decisions, their algorithms are informed by the data models they use. The data structured by those models is necessarily numerical, since machines perform logical operations, not creative interpretations. It follows that data used in machine operations are machine-language translations of real-world phenomena, expressed in a data model designed for efficient processing. It should not be surprising, then, that as information systems increasingly make decisions affecting people and communities, their operations are in a very direct sense an extension of the messy human world. The result is information systems that reflect human racism, sexism, and many other -isms, with real-world harm to individuals and communities. But given the black-box nature of “machine learning” algorithms, how do we know what happens inside the black box? How can we document machine bias so as to design algorithms that don’t perpetuate social harms?
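To make this translation step concrete, here is a minimal sketch of my own (not code from the paper, and every name in it – ApplicantRecord, risk_score, the field choices – is a hypothetical illustration). It shows how a data model reduces a person to a handful of numeric fields, and how a purely “logical” operation on those fields can inherit the bias baked into the modeling decisions:

from dataclasses import dataclass

# A hypothetical data model: each field is a lossy, numeric translation
# of a messy real-world phenomenon into something a machine can process.
@dataclass
class ApplicantRecord:
    age: int           # a lived history flattened to a single integer
    gender_code: int   # a 0/1 coding that erases identities outside the scheme
    zip_code: int      # a location proxy that can smuggle in residential segregation
    credit_score: int  # one number standing in for an entire financial life

def risk_score(record: ApplicantRecord) -> float:
    """A toy scoring rule: nothing but logical operations on the model's
    fields. Any bias embedded in those fields (e.g., zip_code acting as a
    proxy for race) flows directly into the 'objective' output."""
    score = (850 - record.credit_score) / 850  # lower credit -> higher risk
    if record.zip_code in {60612, 48204}:      # hypothetical redlining-style proxy
        score += 0.1
    return score

if __name__ == "__main__":
    applicant = ApplicantRecord(age=34, gender_code=1, zip_code=60612, credit_score=700)
    print(f"risk = {risk_score(applicant):.3f}")

The point of the sketch is that the bias never appears as an explicit rule about race or gender; it enters upstream, in what the data model chooses to record and how it encodes it – exactly the layer that critical data modeling asks us to document.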
Period, Theme, Event: Locating Information History in History
When we focus primarily on innovations in information technology, we risk flirting with technological determinism and forgetting the social context. The question for information historians is not “how did this information technology come about?” but “how can we explore history by examining social practices around information and its infrastructures?” Yet this question is so general that it doesn’t offer much of an entry point for actual research. To address the problem of “where do we start?”, Alistair Black and Bonnie Mak (2020) propose three specific lenses as an organizing paradigm: Period, Theme, and Event.
Autoethnography: An Overview – Annotation & Notes
It seems appropriate to use my first-person voice in this annotation. But let’s begin with the authors’ voice for context: “Autoethnography is an approach to research and writing that seeks to describe and systematically analyze (graphy) personal experience (auto) in order to understand cultural experience (ethno)” (p. 273).