The analysis presented here is based on my review of existing research on the privacy expectations of people who create online content. It concerns the full range of user interactions on what we used to call Web 2.0 platforms, focusing on social media systems like Facebook, Twitter, Reddit, Instagram, and Amazon. User interactions include posting original content (text, photos, videos, memes, etc.) and commenting on content posted by others. For this analysis, online content includes reviews on Amazon, comments on news websites, photos uploaded to photo-sharing sites, and original videos posted to YouTube; more broadly, anything in any format that an individual creates from their own original thought and creative energy and then posts to a social media platform. In most instances the content or interaction contains, or is traceable to, personally identifiable information, even when the creator doesn't intend it.
In 2019 Ellen Lechman and I embarked on a research project to begin assessing the use of social media to spread propaganda in nations other than the United States. With this focus in mind, we conducted a literature review of recent research on propagandistic interventions occurring in Asian and European nations. We surveyed available academic and other institutional publications, and produced an annotated bibliography detailing a baker's dozen of sources we deemed most relevant.
There's nothing surprising in the results of a 2016 study conducted for BuzzFeed by Echelon Insights and Hart Research. It shows that the majority of Americans who were likely voters in 2016 were under the age of 50, and that about half of them shared news links on social media every week. And these results align with a growing body of research showing a massive shift in how people discover news, and in how trust works in a media system increasingly dominated by social media platforms.
I started this project hoping to understand how social media data is harvested, processed, analyzed, and used for marketing and political communications. While compiling this bibliography I read far more research than I could include, and almost none of it reflected any consideration of how the technology might affect human beings, communities, politics, or the natural world. We should expect other people to insert their own values into applications of this technology, for good or ill. And that's exactly what's happening.
We've reached the final annotation in our series on "Social Media Data Collection, Processing, and Use in Research, Marketing, and Political Communication." Toward the end of the project my research drifted from traditional academic sources to investigative journalism. Here we veer further off-track: blog posts, GitHub repos, some videos, a course syllabus on Data Science for Social Systems, and tools, documentation, and related sources that don't fit neatly into any particular box. This isn't so much an annotation as a grab bag of annotated links. I apologize in advance.
In the third and final part of their undercover investigation, Channel 4 News captures chief executives from Cambridge Analytica explaining how the firm used social media analytics to win the 2016 U.S. presidential election for Donald Trump. After the report aired in March, Cambridge Analytica executives denied doing any such thing. In earlier parts of the report, they claim to always tell the truth, while adding that people may not know or care what's true and what isn't. And anyway, if people were misled, that's not the firm's fault. Also, these are not the droids you're looking for.
The Cambridge Analytica story is what inspired me to research the details and methods of data processing that form the technical basis of using social media metadata for psychographic profiling: the role of programming in accessing, collecting, processing, and using social media data, and the specific tools and workflows that enable this work. But I believe we can't fully understand the technical story without the political and social context. Technology isn't neutral, and our values are embedded in every tool we build.
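To make the "accessing and collecting" step concrete, here's a minimal sketch of cursor-based pagination, the pattern most social media APIs use to hand out data one page at a time. Everything here is hypothetical: `fetch_page` stands in for a real API call (which would require authentication and an HTTP client), and the canned `PAGES` data exists only so the sketch is self-contained.

```python
# Hypothetical stand-in for a platform API. A real collector would make
# authenticated HTTP requests; here fetch_page() returns canned pages so
# the pagination logic can be shown on its own.
PAGES = {
    0: {"posts": [{"id": 1, "text": "hello"}, {"id": 2, "text": "world"}],
        "next_cursor": 1},
    1: {"posts": [{"id": 3, "text": "goodbye"}],
        "next_cursor": None},
}

def fetch_page(cursor):
    """Return one page of results for the given cursor (stubbed)."""
    return PAGES[cursor]

def collect_posts():
    """Follow next_cursor tokens until the API signals the end (None)."""
    posts, cursor = [], 0
    while cursor is not None:
        page = fetch_page(cursor)
        posts.extend(page["posts"])
        cursor = page["next_cursor"]
    return posts
```

The loop terminates when the API stops returning a cursor; that's the whole trick behind harvesting a timeline or a search result at scale.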
London's Channel 4 News discovered the values held by senior executives of Cambridge Analytica, as detailed in part two of their investigative report.
So far in this project I've been annotating traditional academic sources. These sources explore methods of machine learning, Natural Language Processing, sentiment analysis, and the tools used to mine social media for research purposes. But the literature hasn't kept pace with the news, and social media data is being used for things other than academic research. Like maybe stealing elections. Here begins a series of three annotations of investigative reports by London's Channel 4 News. These are video stories about Cambridge Analytica and its methods and role in political campaigns in the U.S., Africa, Europe, and beyond. These are, of course, non-traditional annotations. But I consider the source credible and, given the subject, the material important to include.
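For readers unfamiliar with the sentiment analysis mentioned above, here's a toy lexicon-based scorer that shows the basic idea: count positive and negative words in a post and normalize by length. This is a sketch only; real research uses trained models or rich, weighted lexicons (VADER, for example), and the word lists here are my own illustrative picks.

```python
# Illustrative word lists -- not a real sentiment lexicon.
POSITIVE = {"great", "love", "win", "good"}
NEGATIVE = {"bad", "hate", "lose", "awful"}

def sentiment(post: str) -> float:
    """Score a post from -1 (all negative words) to +1 (all positive).

    Counts lexicon hits and divides by word count, so longer neutral
    posts score closer to zero.
    """
    words = post.lower().split()
    hits = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return hits / max(len(words), 1)
```

Crude as it is, run over millions of posts this kind of scoring is how researchers (and marketers, and campaigns) turn raw social media text into aggregate measures of public mood.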
If you ran digital strategy for a presidential campaign, and Facebook came knocking on your door and said "We want to help you win this election," would you turn them down? There's nothing like a little help from the mothership.
Finally, a break from annotating technical stuff. Don't get me wrong, I like it, but here's where the rubber hits the road: changes in voting behavior. This is a randomized controlled experiment, published in Nature in 2012, measuring changes in users' voting behavior after they saw different versions of messages on Facebook. You want to read this one.