Some criticism of misinformation research fails to accurately represent the field it critiques.
This essay was written by University of Washington Center for an Informed Public researchers Zarine Kharazian, Madeline Jalbert, and Saloni Dash, with contributions from Shahan Ali Memon, Kate Starbird, Emma S. Spiro, and Jevin D. West.
In a recent essay, philosopher Dan Williams provocatively argues that there is no way to advance a “science of misleading content.” He also contends that concern over the impact of misinformation is a “moral panic,” asserting that clear-cut cases of false information are so rare in Western democracies that the exposure of a susceptible minority to them has little effect on societal outcomes. Moreover, he claims that misleading information is so difficult to define that systematic study of the problem by researchers is both impossible and undesirable. Williams’ line of critique is familiar, echoing recent pieces that frame the “field” that coalesced following Brexit and the election of Donald Trump in 2016 as a convenient liberal establishment response to the decline of trust in institutions in Western democracies.
As researchers studying rumors and the collective processes driving the spread of false, misleading, and harmful claims, we welcome discussion and critique of the terminologies and assumptions that undergird our area of study. We even share some of the concerns motivating Williams’ essay — for example, that some of the most visible (largely quantitative) research on “misinformation” often operationalizes the problem in reductive and even problematic ways. Additionally, as misinformation and related phenomena have attracted increasing scholarly and public attention, we believe it is valuable to reflect on whether this attention is justified, and where the problems of misleading and/or deceptive information intersect with other societal concerns. However, we find that Williams’ argument — and some others like it — fails to accurately represent the field it critiques.
The scope of misinformation research is much broader than quantitative work measuring exposure and susceptibility.
Williams repeatedly refers to attempts to advance an elusive “science of misleading content” by misinformation researchers. His definition of “science,” however, seems to exclude a broad range of epistemological perspectives that researchers have brought to the systematic study of misinformation (and related concepts of rumors and disinformation). Indeed, we suspect that most misinformation researchers would agree with many of the limitations Williams notes regarding the specific paradigm of research he describes, but they would also recognize that the studies Williams cites are far from the only — or even the dominant — approach in the field.
For example, Williams cites experimental and survey-based work measuring exposure or susceptibility to misinformation as emblematic of the field, but he ignores the growing literature that draws on interpretivist philosophies (e.g., Donovan et al., 2019; Lee et al., 2021; Moran et al., 2022; Pasquetto et al., 2022; Starbird et al., 2019; Tripodi et al., 2023). Scholars drawing from these perspectives take exactly what Williams says — that “determining whether communication leads indirectly to inappropriate conclusions or the wrong beliefs in this sense is a complex, highly context-sensitive, and often value-laden task” — as a fundamental premise. But rather than concluding that such research therefore should not be done, they focus on investigating 1) how people engage with information to make sense of the world around them; and 2) how existing sociotechnical infrastructures interact with these complex sensemaking processes.
Other scholars in the field have advanced a critical approach that centers history, power, and social differentiation in the study of misinformation and disinformation (Kuo & Marwick, 2021; Mejia et al., 2018; Nguyễn et al., 2022). These approaches are the source of some of the most cogent critiques of misinformation research as it currently stands. Scholars in this tradition advocate for and engage in the study of the antecedents of misinformation and disinformation, such as racial and imperial histories, and they make a convincing case for why much of what counts as modern misinformation research fails to adequately engage with these histories. On its face, this line of critique echoes Williams’ frustration with treating misinformation as a “cause” rather than a “symptom.” However, these scholars do not reductively relegate misinformation to a mere downstream consequence; instead, they recognize its interactive and reinforcing dynamics with institutional power and structures.
Research from the approaches highlighted above is far from Williams’ characterization of a “class of misinformation experts” determining what information is true or false from the position of a neutral arbiter. But much of this work employs mixed or qualitative methods, and is thus not well-represented in high-profile scientific journals that publish primarily quantitative and experimental work. What Williams is perceiving here may have more to do with the epistemic gatekeeping and amplification potential of journals like Nature and Science than with any problem inherent to research on misinformation.
The study of misinformation has roots in long-standing fields.
Williams characterizes a field of misinformation studies as emerging after 2016, in the wake of Brexit and the election of Donald Trump. This is a common assumption, given that scholarly and public interest in the study of misinformation increased after 2016. Modern-day misinformation research, however, has roots in several long-standing fields, ranging from social psychology to political communication to sociology to infrastructure studies (Lim & Donovan, 2020). While Williams claims that the “subtler forms of misinformation” that are most pervasive are too amorphous to study systematically, researchers within these fields have in fact been developing promising frameworks to study them for decades. Adapting and extending these frameworks is one of the core areas of work in misinformation studies today.
At the University of Washington, research on online misinformation began in the early 2010s, with work studying crisis events like the 2010 Deepwater Horizon oil spill in the Gulf of Mexico and the 2013 Boston Marathon bombings (Spiro et al., 2012; Starbird et al., 2014). This work drew from the longstanding body of literature on rumors and rumoring, which dates back to the 1940s (Allport & Postman, 1946; Kapferer, 2013; Shibutani, 1966). Rumoring as a conceptual framework proved particularly useful in studying misinformation spreading amid crisis events for a few reasons (Spiro & Starbird, 2023). First, unlike other concepts in the misinformation literature — for example, “information disorder” (Wardle & Derakhshan, 2017) — it did not pathologize rumoring, but rather recognized it as fulfilling an instinctive collective need to help communities “make sense” of the world around them in times of uncertainty. Second, rather than categorizing specific rumors within true-or-false binaries, it focused on assessing the potential of a rumor to take hold within a community along several dimensions, such as its novelty, emotional valence, and participatory potential.
More recently, researchers in our center have looked to literature within organizational sociology to understand how the collective sensemaking process can become corrupted along the way. Drawing from organizational theorist Karl Weick’s conceptualization of collective sensemaking in organizations, sociologist Erving Goffman’s perspective on frames and framing, and Klein et al.’s theory on the role of data as evidence, we have begun to analyze the manner in which frames produced by conservative political elites alleging a “rigged election” shaped interpretations of purported evidence circulating on social media (Goffman, 1974; Klein et al., 2007; Weick, 1995).
In the field of psychology — which encompasses many of the select studies Williams critiques — there is also a vast literature regarding the context-sensitive processes of belief and attitude formation (e.g., Bless et al., 1992; Johnson et al., 1993; Petty & Cacioppo, 1986). In fact, much of the current work in psychology focuses on understanding the conditions that lead individuals to believe, share, or act on information in the misinformation space — an approach that is incongruent with the idea of categorizing a communication as true, false, or misleading without taking context into account. As just a sample, you can find studies that consider the role of identity and societal context (Lewandowsky et al., 2017; Oyserman & Dawson, 2020), the importance of social motivations (Chen et al., 2015), how feelings influence judgment (Schwarz & Jalbert, 2020), and how individuals have a nuanced view of truth that takes gist into account (Langdon et al., 2024).
Misinformation’s participatory potential is worth further study.
Finally, Williams’ argument that concern over misinformation is a “moral panic” is, by his own admission, limited to Western liberal democracies. However, sidelining research focused on other countries misses the global picture.
First, we do not have sufficient evidence to evaluate whether Williams’ claim that exposure to misinformation is rare holds in non-Western contexts. While the Global South is home to the majority of the world’s population, the majority of empirical misinformation scholarship focuses on the Global North, and particularly on Western contexts. Researchers have noted that it is particularly difficult to quantify exposure to and prevalence of misinformation in the Global South due to unequal access to the internet and reliance on person-to-person messaging platforms (Badrinathan & Chauchard, 2024; Blair et al., 2023). This is thus another area that merits further study with novel research methods.
What we do have evidence for, however, is that political actors across the world have become adept at using social media to consolidate authoritarian gains (Gunitsky, 2015). Research focused on the Philippines, Brazil, India, Myanmar, and beyond has documented coordinated disinformation campaigns, often targeting specific groups, such as activists and ethnic minorities (Fink, 2018; Jakesch et al., 2021; Ong & Cabañes, 2018; Ozawa et al., 2023). When these campaigns are spearheaded or encouraged by political elites, the consequences can be especially dire, as the campaigns are used to justify targeted political violence.
Williams argues that the term “misinformation” has become so broad that it is no longer analytically useful. We agree that the term does not sufficiently capture the range or complexity of phenomena we study, which is why our research team has returned to the foundational frameworks of rumoring and collective sensemaking. We also believe, however, that the studies we have highlighted point to a trend that Williams’ “moral panic” argument misses: in much of the work by those studying misinformation and disinformation, the threat lies not in the simple exposure of a broad swath of the population to misleading frames, but in those frames’ potential to mobilize a particular segment of people to take concerted action. In the United States, this mobilizing dynamic fueled the participatory disinformation campaign (Prochaska et al., 2023; Starbird et al., 2023) that motivated the January 6, 2021, attack on the U.S. Capitol.
Conclusion
In a seminal 1991 paper on research philosophies in information systems research, Wanda Orlikowski and Jack Baroudi argue that “much can be gained if a plurality of research perspectives is effectively employed to investigate information systems phenomena” (Orlikowski & Baroudi, 1991). We agree. As Williams correctly observes, the systematic study of the collective sensemaking processes underlying the spread of misleading claims, and of the corresponding outcomes of these processes, is a complex task. Luckily, many of the approaches used to study these phenomena were not invented by a ‘class of misinformation experts’ in 2016. They are in fact built on decades-old bodies of research across a range of disciplines, only a few of which we have highlighted above.
As a multidisciplinary area of inquiry, misinformation studies perhaps does not lend itself to straightforward, systematic cataloging across what Orlikowski and Baroudi called “schools of thought.” The fact that there are multiple competing perspectives may be missed altogether, in large part because scholars working in disparate disciplines do not always explicitly frame their work in conversation with one another. But we maintain that any maturing area of inquiry can only be made richer by more, not less, systematic study from varying epistemological perspectives.
Zarine Kharazian is a doctoral student in the UW Department of Human Centered Design & Engineering; Madeline Jalbert is a Center for an Informed Public postdoctoral scholar; Saloni Dash is an Information School doctoral student; Shahan Ali Memon is an iSchool doctoral student; Kate Starbird is an HCDE associate professor and CIP co-founder; Emma S. Spiro is an iSchool associate professor and CIP co-founder; and Jevin D. West is an iSchool associate professor and CIP co-founder.
References
- Allport, G. W., & Postman, L. (1946). An Analysis of Rumor. The Public Opinion Quarterly, 10(4), 501–517. https://www.jstor.org/stable/2745703
- Bless, H., Mackie, D. M., & Schwarz, N. (1992). Mood effects on attitude judgments: Independent effects of mood before and after message elaboration. Journal of Personality and Social Psychology, 63(4), 585.
- Chen, X., Sin, S.-C. J., Theng, Y.-L., & Lee, C. S. (2015). Why students share misinformation on social media: Motivation, gender, and study-level differences. The Journal of Academic Librarianship, 41(5), 583–592.
- Donovan, J., Lewis, B., & Friedberg, B. (2019). Parallel ports: Sociotechnical change from the alt-right to alt-tech.
- Fink, C. (2018). Dangerous speech, anti-Muslim violence, and Facebook in Myanmar. Journal of International Affairs, 71(1.5), 43–52.
- Goffman, E. (1974). Frame analysis: An essay on the organization of experience. Harvard University Press.
- Gunitsky, S. (2015). Corrupting the cyber-commons: Social media as a tool of autocratic stability. Perspectives on Politics, 13(1), 42–54.
- Jakesch, M., Garimella, K., Eckles, D., & Naaman, M. (2021). Trend Alert: A Cross-Platform Organization Manipulated Twitter Trends in the Indian General Election. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 379:1-379:19. https://doi.org/10.1145/3479523
- Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114(1), 3.
- Kapferer, J.-N. (2013). Rumors: Uses, interpretations, and images. Transaction Publishers.
- Klein, G., Phillips, J. K., Rall, E. L., & Peluso, D. A. (2007). A data-frame theory of sensemaking. Expertise out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making, 113.
- Kuo, R., & Marwick, A. (2021). Critical disinformation studies: History, power, and politics. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-76
- Langdon, J. A., Helgason, B. A., Qiu, J., & Effron, D. A. (2024). “It’s Not Literally True, But You Get the Gist:” How Nuanced Understandings of Truth Encourage People to Condone and Spread Misinformation. Current Opinion in Psychology, 101788.
- Lee, C., Yang, T., Inchoco, G. D., Jones, G. M., & Satyanarayan, A. (2021). Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–18. https://doi.org/10.1145/3411764.3445211
- Lewandowsky, S., Ecker, U. K., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
- Lim, G., & Donovan, J. (2020). Detect, Document, and Debunk: Studying Media Manipulation and Disinformation.
- Mejia, R., Beckermann, K., & Sullivan, C. (2018). White lies: A racial history of the (post)truth. Communication and Critical/Cultural Studies, 15(2), 109–126. https://doi.org/10.1080/14791420.2018.1456668
- Moran, R. E., Grasso, I., & Koltai, K. (2022). Folk Theories of Avoiding Content Moderation: How Vaccine-Opposed Influencers Amplify Vaccine Opposition on Instagram. Social Media + Society, 8(4), 20563051221144252. https://doi.org/10.1177/20563051221144252
- Nguyễn, S., Kuo, R., Reddi, M., Li, L., & Moran, R. E. (2022). Studying mis- and disinformation in Asian diasporic communities: The need for critical transnational research beyond Anglocentrism. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-95
- Ong, J., & Cabañes, J. V. (2018). Architects of Networked Disinformation: Behind the Scenes of Troll Accounts and Fake News Production in the Philippines. https://doi.org/10.7275/2cq4-5396
- Orlikowski, W. J., & Baroudi, J. J. (1991). Studying information technology in organizations: Research approaches and assumptions. Information Systems Research, 2(1), 1–28.
- Oyserman, D., & Dawson, A. (2020). Your fake news, our facts: Identity-based motivation shapes what we believe, share, and accept. In The psychology of fake news (pp. 173–195). Routledge.
- Ozawa, J. V., Woolley, S. C., Straubhaar, J., Riedl, M. J., Joseff, K., & Gursky, J. (2023). How Disinformation on WhatsApp Went From Campaign Weapon to Governmental Propaganda in Brazil. Social Media + Society, 9(1), 20563051231160632.
- Pasquetto, I., Pasquetto, A., Tacchetti, L., Spada, A., & Riotta, G. (2022). Disinformation as Infrastructure: Making and maintaining the QAnon conspiracy on Italian digital media [Preprint]. SocArXiv. https://doi.org/10.31235/osf.io/btjuf
- Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Springer.
- Prochaska, S., et al. (2023). Mobilizing manufactured reality: How participatory disinformation shaped deep stories to catalyze action during the 2020 US Presidential Election. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 1–39.
- Schwarz, N., & Jalbert, M. (2020). When (fake) news feels true: Intuitions of truth and the acceptance and correction of misinformation. In The psychology of fake news (pp. 73–89). Routledge.
- Shibutani, T. (1966). Improvised news: A sociological study of rumor. Ardent Media.
- Spiro, E. S., Fitzhugh, S., Sutton, J., Pierski, N., Greczek, M., & Butts, C. T. (2012). Rumoring during extreme events: A case study of deepwater horizon 2010. Proceedings of the 4th Annual ACM Web Science Conference, 275–283. https://doi.org/10.1145/2380718.2380754
- Spiro, E., & Starbird, K. (2023). Rumors Have Rules. Issues in Science and Technology, 29(3), 47–49. https://doi.org/10.58875/CXGL5395
- Starbird, K., Arif, A., & Wilson, T. (2019). Disinformation as collaborative work: Surfacing the participatory nature of strategic information operations. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW), 1–26. https://doi.org/10.1145/3359229
- Starbird, K., DiResta, R., & DeButts, M. (2023). Influence and Improvisation: Participatory Disinformation during the 2020 US Election. Social Media + Society, 9(2), 20563051231177943. https://doi.org/10.1177/20563051231177943
- Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing. IConference 2014 Proceedings.
- Tripodi, F. B., Garcia, L. C., & Marwick, A. E. (2023). ‘Do your own research’: Affordance activation and disinformation spread. Information, Communication & Society. https://www.tandfonline.com/doi/abs/10.1080/1369118X.2023.2245869
- Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking (Vol. 27). Council of Europe Strasbourg.
- Weick, K. E. (1995). Sensemaking in organizations (Vol. 3). Sage.