Is ‘weird-checking’ the new fact-checking?

Aug 12, 2024

Examining social psychological principles that explain why Democrats’ strategy of calling ideas “weird” works.

This blog post was co-authored by Madeline Jalbert, a postdoctoral scholar at the University of Washington’s Center for an Informed Public, and Ira Hyman, a psychology professor at Western Washington University.

  • Democrats have recently started to call some Republican attitudes and behaviors “weird” — a strategy we refer to as weird-checking. The approach shares many similarities with social norm interventions that social psychologists have found to be effective.
  • Our own attitudes and behaviors are heavily guided by perceived social norms — what we think others believe and do. Unfortunately, people frequently have incorrect views of which ideas are widely shared. Extreme and minority views are often overrepresented in the media, making them appear to be more common and acceptable than they are.
  • Weird-checking communicates what others actually believe and can disrupt these inflated perceptions of consensus. It can also orient us to more carefully consider whether the attitude or behavior is consistent with societal values and expectations. This strategy can be used to address problematic attitudes and behaviors that cannot be addressed through traditional fact-checking methods.

That’s just weird. Over the last few weeks, you have probably seen Democrats referring to some Republican ideas and policy proposals as weird. Thanks to Tim Walz, the Minnesota governor and Democratic vice presidential nominee, weird has become a central part of the political discourse. The news media is currently flooded with discussion of this new strategy and its success, with recent headlines including “Democrats Embrace ‘Weird’ Messaging on Trump” (from The New York Times), “Why the ‘Weird’ Label is Working for Kamala Harris” (from the BBC), and “‘Weird’ is Democrats’ Most Effective Insult” (from The Washington Post). This approach represents a shift away from Democrats’ standard fact-checking attempts (see this recent TechDirt piece by Mike Masnick for a discussion). We’ve started to refer to this strategy as “weird-checking” — like fact-checking, but checking whether something is weird instead of checking whether it’s true.

Why does weird-checking work? One key reason for its success is its appeal to social norms, which play a powerful role in whether we accept or reject an idea or action. As individuals, we look to what others believe and endorse to inform our own attitudes and behaviors (e.g., Schwarz & Jalbert, 2021). The phenomenon of checking our ideas and actions against what others believe and do is referred to as “social proof” (Cialdini, 2009) — if something has broad acceptance, there must be something to it (Festinger, 1954). When we see that others endorse a message, we’re also more likely to endorse it (Cialdini, 2009).

Unfortunately, we may not know what other people think. This is because our perceptions of what others believe and do are often constructed through our own experiences rather than from information about actual rates (Tankard & Paluck, 2016). For example, we typically assess the popularity of an opinion by relying on cues like how familiar it feels or how many times we recall seeing it in the news or on social media. We’re less likely to use information obtained through an opinion poll. Indeed, media exposure is our primary source of information on many issues (Shehata & Strömbäck, 2021; Su et al., 2015). Media is not, however, constructed to be representative of the actual distribution of beliefs and opinions that exist in the world. Instead, the news disproportionately shares extreme and uncommon views (e.g., Koehler, 2016), and our social media algorithms often prioritize sensational content that grabs and maintains our engagement (e.g., Bucher & Helmond, 2018; Dujeancourt & Garz, 2023).

Media exposure can shift our perception of norms (Gunther et al., 2006; Paluck, 2009), and disproportionate exposure to reports of minority attitudes and behaviors may make those attitudes and behaviors seem more common and acceptable than they actually are. Some of our work has found that the mere repetition of information increases perceptions that the information has consensus — an “illusory consensus” effect (Jalbert & Pillai, 2024). Other researchers have found that repeated exposure to reports of immoral behaviors makes them seem more common and, in turn, more acceptable (Pillai et al., 2023). These processes may help explain why people tend to overestimate the extremity of views held by those who do not share their political orientation and to underestimate how many others actually share their own policy-related opinions (e.g., Levendusky & Malhotra, 2016; Yang et al., 2016). For example, most people believe that climate change is a serious problem that their government should address, but they substantially underestimate the percentage of people who agree with them (Andre et al., 2024; Sparkman et al., 2022).

In addition to the effects of being exposed to information, other aspects of the messages themselves may also lead people to (often incorrectly) believe those ideas have widespread consensus. Politicians frequently bake claims of broad consensus into their messages. A recurring feature of Trump’s rhetoric is invoking the “many people” who say or believe the message he wants to promote. For example, in a press conference last Thursday, Trump (incorrectly) claimed that “They wanted to get rid of Roe v. Wade and that’s Democrats, Republicans, and Independents, and everybody. Liberals, conservatives, everybody wanted it back in the states” (Montanaro, 2024). As another example, U.S. House Speaker Mike Johnson (R-Louisiana) argued on May 8, “We all know, intuitively, that a lot of illegals are voting in federal elections.” In this case, Johnson was not only repeating false information (Swenson, 2024) but was also claiming that this was something widely known and accepted. Combined with the influences of disproportionate and repeated news coverage, these political messages can easily mislead people about which positions are widely held.

Weird-checking as a social norms intervention

A recent survey by Data for Progress asked US voters to judge how weird they found recent claims made and actions taken by members of the Republican party. Most voters found several of them — including claiming that Kamala Harris only recently became a Black person and supporting the monitoring of pregnant women to prevent them from traveling for reproductive healthcare — to be “very weird” (Springs, 2024). When left unchecked, the disproportionate and repeated coverage of these behaviors may make them especially susceptible to falsely inflated perceptions of consensus. Without seeing poll results like these, people may not know that most others also find these behaviors abnormal. By weird-checking unpopular beliefs like these, Democrats are helping communicate more accurate perceptions of the true state of consensus.

Communicating information around consensus is a powerful intervention, well-established by social psychologists to be effective in promoting belief correction and behavior change across a variety of domains. For example, communicating doctors’ consensus around COVID-19 vaccines can increase vaccination rates (Bartoš et al., 2022), and sharing social norms around engaging in energy and water conservation habits can increase those behaviors (Goldstein et al., 2008; Nolan et al., 2008; Schultz et al., 2007). Communicating consensus information can also be used to reduce undesirable behaviors like littering (Kallgren et al., 2000) and drinking and driving (Perkins et al., 2010). And, more recently, sharing consensus information has been found to help reduce belief in misinformation (Ecker et al., 2023).

Communication around consensus also does not have to be explicit to change our minds. In some of our work, we’ve investigated how false information shared online is evaluated when it appears with social truth queries: questions posed by another user drawing attention to whether information is true (e.g., “How do you know this is true?”, “Is there evidence for that?”, “Do other people believe that?”). We have consistently found that the presence of these truth queries reduces belief in and intent to share false information. These truth queries are thought to be effective in part because the mere act of asking a question disrupts assumptions that the information has consensus and changes how we process it (Jalbert et al., 2023). Similarly, calling something weird may lead people to use a different frame than they normally would to guide how that information is interpreted and understood (see Starbird, 2023, for a relevant discussion).

An additional note is that these efforts may be effective even when they don’t convince everyone that a particular attitude or behavior is weird. Because people have a strong motivation to affiliate and receive the approval of others (Cialdini & Goldstein, 2004), just knowing that others consider a sentiment weird may make someone less likely to publicly endorse or share it. 

We also want to note an important limitation of our discussion of weird-checking so far. We have focused on the effects of calling attitudes and behaviors weird. However, politicians have also been referring to the people who promote these attitudes and engage in these behaviors as weird. Doing so may lead people to reconsider those politicians in the same fashion — e.g., are these people reasonable? How similar are they to what people expect of someone who holds their position? How many others generally agree with their beliefs and values?

Why weird-checking may sometimes be better than fact-checking

Why might weird-checking be helping Democrats change the narrative in places where typical fact-checking efforts have been unsuccessful? In many situations, fact-checks can be effective in getting people to update their beliefs (Walter & Murphy, 2018). However, fact-checking has its shortcomings. One particularly important shortcoming is that attitudes and the acceptability of behaviors can’t be fact-checked. You can’t fact-check, for example, whether someone should support the monitoring of pregnant women to restrict their travel. But you can weird-check this view.

In addition, the truth of a message is often nuanced and complicated, making it difficult to communicate and digest. Fact-checks ask us to focus on the specific details of an attitude held or action taken by one person. Weird-checking may allow us to bypass engaging with these details (an often frustrating and not-so-fruitful task that distracts from the overarching takeaway) and instead do a more general gut check of whether the attitude or behavior is consistent with our own values and norms and those our society endorses.

Why weird-checking works

Why does weird-checking work? Calling an attitude or behavior weird communicates information about social norms and consensus, factors that play a critical role in guiding our own beliefs, attitudes, and behaviors. Extreme and minority views are often overrepresented in the media, and repeated exposure to them may make them appear to be more common and acceptable than they actually are. Referring to an attitude or behavior as weird disrupts inflated perceptions of consensus, provides information about the views of others, and orients us to more carefully consider whether the attitude or behavior is consistent with societal values and expectations. You don’t have to use the word weird to get this effect. You could use a more traditional approach like sharing opinion poll information. Or you could try out another phrasing like unusual, strange, bizarre, or out-of-touch. But weird works.

Acknowledgments

We would like to thank Drew Gorenz, Michael Grass, Angela Harwood, and Rachel Moran-Prestridge for their thoughtful input and suggestions on this piece.

Illustration at top based on icons via The Noun Project.

