The Center for an Informed Public has awarded Innovation Fund grants to four project proposals, funding that will help support collaborative, multi-disciplinary and timely work intended to advance the CIP’s mission to resist strategic misinformation, promote an informed society and strengthen democratic discourse. The Innovation Fund is intended to seed promising new ideas and generate proofs-of-concept for further development. Award amounts average around $10,000 and project periods are 6-12 months.
The CIP’s Innovation Fund grants for 2024 wouldn’t be possible without the generous financial support of the John S. and James L. Knight Foundation and the University of Washington’s Technology & Social Change Group (TASCHA).
The four projects awarded funding are detailed below:
Conspiracy theories and democratic backsliding in the 21st century: Dynamics of repression and toleration of political rivals in democratic societies
- Lead Applicant: Mert Can Bayar, CIP Postdoctoral Scholar, Department of Human Centered Design and Engineering
Summary: In recent years, conspiracy theories (CTs) have gained traction in democratic politics. Illiberal parties and politicians wield these theories to delegitimize and repress political adversaries. But how do CTs impact voters’ attitudes? This project aims to investigate when and how these theories harm democratic processes. Using surveys and social media data from the United States and Turkey, the project team aims to shed light on this critical issue and provide valuable insights for journalists, civil society organizations, and practitioners of democratic politics.
***
Enhancing crowd-sourced fact-checking using large language models
- Lead Applicant: Martin Saveski, Assistant Professor, Information School
Summary: Almost anyone can post false information on social media without accountability, and the sheer volume of such posts often outpaces what professional fact-checkers can keep up with. Could the spread of false information be stemmed if everyone could also produce high-quality fact-checks? This project will explore the development of a fact-checking writing assistant, powered by large language models (LLMs), to help users write high-quality fact-checking notes, building on the existing Community Notes feature of the X (formerly Twitter) platform. The assistant aims to improve the quality of user-written fact-checking notes through two key steps: revision and rating.
***
Effect of transportation and identification on learning outcomes in a misinformation education game
- Lead Applicant: Jin Ha Lee, Professor, Information School
- Co-applicant: Chris Coward, Senior Principal Research Scientist, Information School
Summary: Accepting misinformation isn’t just a matter of facts; it is also a socio-emotional process in which personal feelings and circumstances play a central role. Narratives matter here: research shows that emotionally engaging stories can help correct misinformed beliefs, and games are especially effective at immersing players in narratives, drawing them in and making them care. The Loki’s Loop group, which has created several misinformation education games, is now preparing a randomized controlled trial to examine how players’ transportation into a game’s narrative and identification with its characters affect media literacy learning outcomes, and whether stronger identification with the story improves players’ ability to discern truth from falsehood.
***
Identifying and modeling information integrity threats to online knowledge commons
- Lead Applicant: Benjamin Mako Hill, Associate Professor, Department of Communication
- Co-applicant: Zarine Kharazian, PhD student, Department of Human Centered Design and Engineering
Summary: Peer-to-peer knowledge-sharing platforms such as Wikipedia are an important way for many people to access information. That popularity, however, has made these platforms targets for actors seeking to push alternative interpretations of particular historical and social events. How does the integrity of Wikipedia’s information vary from one language to another today? This project will use publicly available data from Wikipedia’s 326 language editions to investigate how different language editions resist information integrity attacks, applying econometric techniques and computational simulations to test hypotheses developed through qualitative work. The goal is to deepen our understanding of information integrity challenges and contribute to safeguarding online knowledge resources.