Amid professional uncertainty, Meta's fact-checkers battle a wildfire of conspiracy theories
As flames engulfed California, firefighters scrambled to contain the wildfires, but a different battle was raging online. Meta's fact-checking partners, still operating within the company, worked to debunk the viral misinformation spreading about the blazes.
Rumors about the disaster caught like embers online, eventually flaring into a storm of sprawling conspiracy theories.
"Disbanding fact-checkers from social platforms is akin to dismantling your fire department," Alan Duke, a former CNN journalist and co-founder of Lead Stories, a fact-checking outlet, lamented. Duke and his team at Lead Stories were among numerous fact-checking organizations worldwide supported by Meta.
Meta has not announced when it will formally end its fact-checking program, but a source close to the program said it could conclude as early as March. Once Meta's financial support dries up, some of its fact-checking partners will have to lay off staff or shut down entirely.
Duke, a Los Angeles resident, observed the orange glow of the fires from his home while working to disprove conspiracy theories about the blazes that resulted in at least two dozen fatalities.
"Fires and looting. A regular Democratic-run city," read the caption to an Instagram video displaying men removing a television from a home amid the flames.
After Lead Stories verified the men were not looters but the resident's family saving their belongings, Meta applied a fact-check label to the video. When a post is labeled false or misleading, Meta sharply reduces its distribution so fewer people see it, and warns users who attempt to share it.
PolitiFact, a Pulitzer Prize-winning fact-check organization and part of Meta's program, debunked a viral post on Threads claiming Los Angeles police were searching for three individuals connected to a MAGA website at the fire's source.
PolitiFact also debunked an Instagram post that appeared to show the Hollywood sign on fire, finding the misleading image was likely manipulated or generated with artificial intelligence.
Much of the online misinformation carried a distinctively partisan charge and spread beyond Meta's platforms, propagated by some of the most-followed and influential figures on the internet.
False claims promoted by President-elect Donald Trump on his Truth Social platform accused the Democratic Party of orchestrating the wildfires. Meanwhile, on his X platform, Elon Musk downplayed climate change's role while repeatedly blaming diversity, equity and inclusion (DEI) policies for the fires. "DEI means people DIE," Musk commented.
Posting on X, conspiracy theorist Alex Jones claimed the fires were part of a "globalist plot" to wage economic warfare and deindustrialize the United States before triggering total collapse. Musk replied in agreement: "True."
Similar false conspiracy theories surfaced after the Maui wildfires in 2023 and Hurricanes Helene and Milton in 2024, claiming the disasters were deliberately caused by the government, or that the government was controlling the weather and manipulating winds to spread the fires.
"False claims about the wildfires create distrust in emergency agencies, making it more challenging for them during the crisis," Duke explained to CNN. "The same thing happened after the Maui fires in 2023. Space lasers were blamed. It was claimed to be a conspiracy to steal the land. Unless false claims are debunked with facts by experts, the myths and distrust will persist."
However, Zuckerberg plans to replace the professional fact-checkers with something similar to X's Community Notes feature: a crowd-sourced fact-checking system in which platform users append notes that debunk or add context to posts. It is set to supplant the fact-checking program Meta established after Trump's 2016 election.
Community Notes appear on posts when users with diverse perspectives agree a post merits them. However, unlike fact-checking journalists, community users are not bound by ethical guidelines, and their assessments can be unfair or inaccurate.
A somewhat cryptic explanation on X's website outlines how Community Notes work. "Community Notes doesn't rely on majority rule," the company remarks. "To identify notes that are helpful to a wide range of people, notes require agreement between contributors who sometimes disagree in their past ratings."
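In practice, X's system scores notes with a matrix-factorization model over contributors' full rating histories. As a rough, hypothetical illustration only (not X's actual algorithm), the core idea of "agreement between contributors who sometimes disagree" can be sketched as a rule that surfaces a note only when raters from normally opposing groups both find it helpful:

```python
# Toy sketch of "bridging-based" note selection. The viewpoint groups and
# threshold are invented for illustration; X's real system infers viewpoints
# from rating history rather than using explicit labels.

def note_is_helpful(ratings, min_groups=2):
    """ratings: list of (viewpoint_group, rated_helpful) pairs.

    Returns True only if raters from at least `min_groups` distinct
    groups rated the note helpful -- cross-group agreement, not a
    simple majority within one faction.
    """
    helpful_groups = {group for group, helpful in ratings if helpful}
    return len(helpful_groups) >= min_groups

# A note endorsed only by one side stays hidden:
print(note_is_helpful([("A", True), ("A", True), ("B", False)]))  # False
# Agreement across opposing groups makes it visible:
print(note_is_helpful([("A", True), ("B", True), ("B", False)]))  # True
```

The design choice this mimics is deliberate: requiring cross-perspective consensus is meant to filter out purely partisan "fact checks," but as the weaknesses below suggest, it also means contested claims may never get a visible note at all.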
However, as case studies have shown, Community Notes has notable weaknesses:
- Slowness: Community Notes take time to be written and approved, possibly allowing misinformation to spread unchecked during this delay.
- Invisibility: Many notes never become visible to users, and a note that is never attached to a misleading post cannot slow its spread.
- Bias and polarization: The system can be gamed or skewed, and in a hyperpolarized environment the cross-perspective agreement it requires is often hard to reach.
Despite some successes with Community Notes in, for instance, reducing retweets of false information, the approach is less reliable than professional fact-checking programs in combating misinformation and conspiracy theories. The system is still an experiment, and its implementation on vast platforms such as Facebook and Instagram raises significant concerns about its ability to handle the scale and complexity of misinformation effectively.
Transitioning to a community-driven system could also reshape how tech companies handle moderation: if Meta leans on Community Notes to combat misinformation, the work once done by paid professional partners shifts onto unpaid volunteer users.
Tech giants like Meta have long relied on fact-checking partners such as Lead Stories to combat the misinformation and conspiracy theories that surge during major events like wildfires. With Meta's withdrawal from the program, the industry's role in maintaining truthfulness online could be significantly diminished.