News and Updates

Get the latest news and updates from Crisis Lab as we continue to design professional development programs for senior professionals, host in-person labs focused on community resilience, and run special programs that address global issues and offer international perspectives.

Why We’re Wired to Believe (and Spread) Misinformation

Nov 08, 2024

When a crisis hits, the flow of information can be as unpredictable as the event itself. Today, a new complication arises: AI-generated misinformation. With the ability to create lifelike fake images, videos, and “official” statements, misinformation can shape public perception before the real facts come to light.

This situation exposes a major challenge in crisis response: fact-checking cannot keep pace. During emergencies, misinformation often spreads faster than it can be verified, leaving a lasting mark on public opinion and even on policy decisions. The delay in verification creates a gap that misinformation eagerly fills. Over time, these gaps fuel a larger problem—“truth decay,” the gradual erosion of public trust in reliable information. Addressing it takes more than traditional fact-checking; it requires rethinking how we communicate the truth in real time.

The Crisis of Speed: When Truth Can’t Keep Up

In an ideal world, fact-checkers would verify every piece of information before it reached the public. But in the reality of a crisis, that’s nearly impossible. Fact-checking traditionally relies on stable information sources, access to affected sites, and reliable networks—all of which are often disrupted during emergencies. Misinformation has no such constraints: AI-generated content circulates instantly, and social media amplifies its spread across the globe.

Why Speed Matters: AI-generated misinformation is persuasive and designed to travel fast. By the time fact-checkers verify a claim, the misinformation may have already shaped public perception. A deepfake video or an AI-altered image can trigger real-world actions—panic buying, unnecessary evacuations, or public fear—before the facts are confirmed. Once that happens, even debunked information sticks in people’s minds.

This challenge raises a critical question: Is it time to consider real-time fact-checking, or does the entire concept of “fact-checking” need to evolve in the age of AI-driven misinformation?

How Crisis Misinformation Wins Hearts Before Facts

In a crisis, emotions run high and people’s reactions are often driven by fear, empathy, or anger. These emotions make us more susceptible to believing and spreading information that aligns with our feelings, even if it isn’t verified. This is especially true with AI-generated content, which often targets emotions directly, like an image of destruction or a “breaking news” alert.

The Appeal of Unverified Information: When official sources are slow to respond, unofficial narratives fill the void. AI allows people to create and share “news” that seems real, even if it isn’t verified. This phenomenon isn’t just about the failure of fact-checking—it reflects our natural tendency to trust information that confirms our fears, beliefs, or needs.

Interestingly, misinformation in crises can also provide a sense of control. People might hold on to unverified information because it aligns with their need to make sense of chaos. This often-overlooked aspect shows that misinformation doesn’t just exploit our emotions—it becomes a kind of “comfort” in chaotic situations.

This dynamic raises the question: How can fact-checking adapt to address not only the accuracy but the emotional appeal of misinformation in crisis situations?

Truth Decay: The Long Shadow of Crisis Misinformation

Once misinformation spreads, it’s nearly impossible to retract completely. Studies show that people tend to remember and rely on initial stories even after they’re debunked, a phenomenon researchers call the continued influence effect. This is where “truth decay” enters the picture.

What is Truth Decay?

Truth decay refers to the gradual decline in public trust in factual information, particularly during crises, when reliable information matters most. Each time misinformation spreads unchecked, it damages trust not only in the current crisis but in future ones as well.

An aspect that’s often overlooked is that truth decay doesn’t just lead people to stop believing in facts; it leads them to question the sources of those facts. When AI can create realistic but false images or videos, people start to question whether anything—even from official sources—is trustworthy. This isn’t just a public issue; it affects every institution that relies on public trust for effective crisis response.

A Different Perspective: There’s a tendency to assume that truth decay results solely from misinformation. But it’s equally fueled by a lack of timely and transparent communication from official sources. When the truth can’t keep up with falsehoods, people look for alternatives—even if those alternatives are unverified. This insight challenges the popular idea that fact-checking alone will restore trust; instead, speed and transparency must work alongside accuracy.

Rethinking Fact-Checking: Is It Time for a New Model?

The traditional approach to fact-checking—a slow, methodical process—may not work well in today’s crisis landscape. To counter the speed of AI-driven misinformation, fact-checking itself may need an upgrade. Imagine a model where AI isn’t just a tool for creating misinformation but also for detecting it. Real-time AI-powered tools could flag misinformation as it emerges, giving people quick, reliable updates that counter false narratives.
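To make the idea concrete, here is a minimal sketch of what such a real-time flagging step might look like. It uses an off-the-shelf zero-shot classifier from the Hugging Face transformers library as a stand-in for a purpose-built detector; the candidate labels, the 0.75 review threshold, and the triage function are illustrative assumptions, not a production design.

```python
# Minimal sketch of a real-time misinformation-flagging step.
# Assumptions: incoming posts arrive as plain text, and an off-the-shelf
# zero-shot classifier stands in for a purpose-built detector.
# The labels and the 0.75 cutoff are illustrative, not tuned values.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["unverified rumor", "verified report", "opinion"]
FLAG_THRESHOLD = 0.75  # illustrative cutoff for routing to human review

def triage(post_text: str) -> dict:
    """Score a post and mark it for human review if the top label
    is 'unverified rumor' with high confidence."""
    result = classifier(post_text, candidate_labels=LABELS)
    # The pipeline returns labels and scores sorted best-first.
    top_label, top_score = result["labels"][0], result["scores"][0]
    return {
        "text": post_text,
        "label": top_label,
        "score": round(top_score, 3),
        "needs_review": (top_label == "unverified rumor"
                         and top_score >= FLAG_THRESHOLD),
    }

if __name__ == "__main__":
    print(triage("BREAKING: dam has collapsed upstream, evacuate now!!"))
```

Note that the sketch routes flagged posts to human review rather than suppressing them automatically, which sidesteps, though it does not resolve, the authority question raised below.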

Can AI Counter AI?

This approach isn’t without risks. Using AI for fact-checking raises ethical questions: if AI can flag or suppress content, does it become the authority on what’s true? And if so, who decides what’s “true” in a complex, fast-changing situation? The role of AI in fact-checking carries both real promise and considerable risk, and it deserves careful exploration.

The Role of Platforms: Social media platforms play a central role in this issue. They often flag misinformation but rarely offer immediate, credible alternatives, leaving gaps that invite further speculation. Should platforms be required to prioritize verified information in crises, even if that means flagging or suppressing other voices?

The Future of Trust: Balancing Speed, Truth, and Transparency

The implications of crisis misinformation go beyond any single event. Each instance of unchecked, AI-generated misinformation erodes trust in the long term. Reversing truth decay will require rethinking how we communicate truth. This doesn’t mean sacrificing accuracy for speed; rather, it means finding new ways to communicate quickly and transparently.

Truth decay isn’t just a technology problem; it’s a societal challenge. People want information that’s quick, clear, and consistent. If we want to prevent truth decay from eroding trust in reliable sources, we must confront a challenging reality: the way we establish and deliver truth must adapt to the demands of an instant, AI-driven world.

Moving Forward in the Age of Crisis Misinformation

The stakes are high. AI-driven misinformation in crises isn’t just a passing trend; it’s a fundamental issue that calls for a rethinking of how we handle truth in emergencies. Real-time fact-checking, emotionally resonant truth-telling, and accountability for platforms are only the beginning. Rebuilding trust requires a shift in how we approach truth—crisis by crisis, fact by fact. If we don’t rise to this challenge, we risk a future where people stop trusting information altogether.
