- What privacy expectations do creators of online content or data have?
- What variables affect their considerations about online privacy?
The analysis presented here is based on my review of existing research on privacy expectations of people who create online content.
What do I mean by “creating” and “online content”?
This analysis concerns the full range of user interactions on what we used to call Web 2.0 platforms, focusing on social media systems like Facebook, Twitter, Reddit, Instagram, and Amazon. User interactions include posting original content (text, photos, videos, memes, etc.), and commenting on content posted by others. Reviews on Amazon and comments on news websites count as online content in this analysis. Photos uploaded to photo-sharing sites and original videos posted to YouTube also count. Anything in any format created by an individual from their own original thought and creative energy, and subsequently posted by the individual on social media platforms, counts as online content. In most instances the online content or interaction contains or is traceable to personally identifiable information, even if this is unintended by the content creator.
Population relevant to this analysis
Any human being anywhere who creates and posts online content on social media platforms.
I conducted a literature review of recent research (published after 2017) concerning privacy expectations of people who use social media, drawing from studies in the U.S., Canada, and the UK. The papers reviewed used a variety of data collection methods, including interviews, surveys, and user diaries. The authors draw from several theories, including Privacy Calculus Theory,1 Privacy Regulation Theory,2 Uses and Gratification Theory,3 and Adaptive Structuration Theory.4 In addition, most papers recognized Helen Nissenbaum’s Contextual Integrity Theory5 as an essential framework for understanding social media users’ expectations of privacy.
There is no single or simple answer. Privacy expectations of online content creators are inherently contextual. I conclude that Nissenbaum’s Contextual Integrity Theory provides the most useful framework for understanding content creators’ expectations of privacy in context with the social and technical conditions of a given social media environment, their motivations for posting, and the content of a given message.
Contextual integrity theory holds that expectations of information privacy depend on the conditions in which information is shared, including who it is shared with, and the social norms in a specific situation. For example, we may comfortably share personal health information with our physician, but would consider it an egregious violation of privacy if our physician proceeded to post that information on Facebook. This is as true in our use of social media as it is in offline life: We might relate a deeply embarrassing experience with a life partner, but for reasons of privacy elect not to put it out there on Reddit.
Quinn and Papacharissi note that in analog life we manage our privacy through “multiple and overlapping activities”6 such as locking our house, restricting access to our phone number, and sharing our Social Security Number only with financial institutions we deem to have a legitimate need for it. Our need or desire to share or protect personal information is intrinsically contextual based on what the information is, who it is shared with, where and when it is shared, and many other situational factors.
These factors are even more complex on digital platforms, where one-to-many and many-to-many information flows may reach people outside our expected or desired audience. This can easily happen when we post content to social media: We desire to share it with friends, but it may also reach co-workers, family, and disgruntled ex-spouses. Marwick and boyd refer to this as context collapse, as social media platforms “collapse multiple audiences into single contexts, making it difficult to use the same techniques online that they do to handle multiplicity in face-to-face conversation.”7
In social media environments, content creators must navigate the privacy policies of a given platform, the technical affordances and limits in the flow of their content and personally identifiable information, and their own motivations for posting a particular piece of content. People use social media for a variety of social gratifications including self-expression, connection with others, and desire for social status. Or they may post to Facebook as part of their professional activities, or as part of a class project. Given the potential wide range of situational variables, privacy is liquid: “a reflexive form of privacy that emerges and is readjusted as we scrutinize, critique, and censure not just our own self-disclosures but the contexts within which these take place and the privacy risks and gratifications that these contexts contain.”8 Content creators must also contend with an online environment where surveillance and datafication of their personal lives has become nearly ubiquitous, such that “the privacy problem is both produced and reproduced and remediated by the politics of platforms of liquid surveillance.”9
As a result, creators of online content gauge their privacy expectations and behaviors based on calculated trade-offs. In one case they might choose to post content for self-expression or to gain social capital. In another case, they might decide that gratification is not worth the risk of a breach of privacy, or even threats to physical security. For example, does a parent feel safe posting a photo on Facebook of their child playing in a nearby public park? Does the same parent consider it acceptable to post a photo of their neighbor’s child playing in the same park? The parent may feel they can protect their own child from predators who might see their child’s face and location on a Facebook post, but consider it a breach of trust to post a photo of their neighbor’s child, especially without their permission.
The contextual nature of user decisions about privacy on social media is further elaborated by Gruzd and Hernández-García, who report on a cross-national survey of 545 social media users. Their study shows that posting content involving self-disclosure is not a unitary consideration, but extends to five dimensions: amount, depth, polarity (positive or negative), accuracy, and intent. Users who are more aware of surveillance by organizations and advertisers tend to post content that is more positive and accurate, but the amount and depth of self-disclosure is reduced. Users concerned about privacy threats from social actors tend to reduce the accuracy of self-disclosure while increasing the amount and depth. Gruzd and Hernández-García conclude that users are “rational actors who recognize different privacy-related threats and adjust what and how they share information on social media accordingly.”10
The literature reviewed here further supports Daniel Solove’s argument that the so-called “privacy paradox” is a myth. The privacy paradox holds that people say they place high value on privacy, but their behavior reveals that they really do not. Solove points out that attitudes and behavior are very different things: “Behavior involves risk decisions within specific contexts; it is always context dependent. Attitudes are more general views about value and can exist beyond specific contexts.”11 We might say we are very concerned about privacy in the abstract, but we still have to operate in an online world where valued resources and social activities require sharing personal data. The privacy paradox fails to reflect reality because it ignores how we navigate the many specific contexts we encounter.
Keith Spiller likens the navigation of contextual factors on social media to Elijah Anderson’s “Code of the Street.”12 Much as residents of marginalized communities develop rules for living in an environment where violence from other residents and the police is commonplace, users of social media are guided by certain codes: Codes of presentation relate to a desire to increase social capital, approval, and acceptance. Codes of protection guide behavior to avoid potential or real harm, such as a breach of security or privacy. And codes of surveillance relate to a person both “lurking” on other users’ content, and being aware that others may be surveilling them. These codes are heuristics that condition online privacy expectations and behaviors in any given online situation.13
Finally, it’s important to note that one of the most important dimensions of context for online content creators is the specific platform on which they post their content. Facebook is different from Twitter is different from Flickr. They may share certain architectural features, but they are different neighborhoods with different codes.
The claim that expectations of privacy on social media are intrinsically contextual does not justify the argument that people don’t care about their privacy. There is often a gap between people’s understanding of online risk and the actual risk, so it is important that we work to broaden digital literacy, media literacy, information literacy, or whatever we choose to call knowledge of and critical thinking about the online world. But people generally care about their privacy wherever it is challenged, and they are smart enough to understand that their expectations for privacy and their need to account for privacy threats are deeply contextual, especially online.
1 Culnan, M. J., & Armstrong, P. K. (1999). Information Privacy Concerns, Procedural Fairness, and Impersonal Trust: An Empirical Investigation. Organization Science, 10(1), 104–115. https://doi.org/10.1287/orsc.10.1.104
2 Altman, I. (1975). The Environment and Social Behavior: Privacy, Personal Space, Territory, Crowding. Brooks/Cole Publishing Company.
3 Katz, E. (1959). Mass communications research and the study of popular culture. Studies in Public Communication, 2, 1–6. See also Rubin, A. M. (2009). Uses and gratifications perspective on media effects. In J. Bryant & M. B. Oliver (Eds.), Media effects: Advances in theory and research (3rd ed., pp. 164–184). New York, NY: Routledge.
4 DeSanctis, G., & Poole, M. S. (1994). Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory. Organization Science, 5(2), 121–147. https://www.jstor.org/stable/2635011
5 Nissenbaum, H. (2004). Privacy as contextual integrity. Washington Law Review, 79(1), 119–157. https://nyuscholars.nyu.edu/en/publications/privacy-as-contextual-integrity
6 Quinn, K., & Papacharissi, Z. (2018). The Contextual Accomplishment of Privacy. International Journal of Communication, 12(0), 23. https://ijoc.org/index.php/ijoc/article/view/7016
7 Marwick, A. E., & boyd, danah. (2011). I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience. New Media & Society, 13(1), 114–133. https://doi.org/10.1177/1461444810365313
8 Quinn, K., & Papacharissi, Z., ibid., p. 60.
9 Ibid.
10 Gruzd, A., & Hernández-García, Á. (2018). Privacy Concerns and Self-Disclosure in Private and Public Uses of Social Media. Cyberpsychology, Behavior, and Social Networking, 21(7), 418–428. https://doi.org/10.1089/cyber.2017.0709
11 Solove, D. J. (2021). The Myth of the Privacy Paradox (SSRN Scholarly Paper ID 3536265). Social Science Research Network. https://doi.org/10.2139/ssrn.3536265