The billionaire leaders of social media giants have long been under pressure to quell the spread of mis- and disinformation. No system to date, from human fact-checkers to automation, has satisfied critics on the left or the right.
One novel approach winning plaudits recently has been Community Notes. The crowdsourced method, first introduced by Twitter before Elon Musk acquired it and rebranded it as X, allows regular users to submit additional context to posts, offering up supporting evidence to set the record straight. For Musk, the system is the centerpiece of his “free speech” claims, a form of direct democracy that circumvents traditional gatekeepers of information. “You are the media,” he tells his 220 million followers.
Starting Tuesday, Mark Zuckerberg’s Meta Platforms Inc. will broadly expand the method when it begins testing its own Community Notes system for Facebook, Instagram and Threads, citing X as its inspiration. In a controversial about-face after years of paying professional fact-checkers, Zuckerberg said the company’s existing fact-checking initiatives had become “too politically biased.” An army of volunteer users would do a “better job,” he said. YouTube began testing a version of Community Notes on its site in June.
The system has advantages over the alternatives, but its limits as an antidote to misinformation are clear. So are its benefits for executives who have been dogged by intense scrutiny over misinformation and censorship for the better part of a decade. It allows them to outsource responsibility for what happens on their platforms to their users. And also the blame.
A Bloomberg media analysis of 1.1 million Community Notes — written in English, from the start of 2023 to February 2025 — shows that the system has fallen well short of counteracting the incentives, both political and financial, for lying, and allowing people to lie, on X.
Furthermore, many of the most cited sources of information that make Community Notes function are under relentless and prolonged attack — by Musk, the Trump administration, and a political environment that has undermined the credibility of truly trustworthy sources of information.
Eliminating the rewards for promoting misinformation would go much further than crowdsourcing to clean up social media. But in a social media world where the incentives to make money in the viral casino keep growing, Community Notes is ultimately fighting a losing battle. This column examines how the people behind it are fighting that battle, and what strengths and weaknesses Meta and YouTube stand to inherit by adopting its practices.
The proponents of Community Notes can point to some successes. The system has proved to be faster and is regarded as more trustworthy and transparent than professional fact-checkers. On X, offending posts receive fewer retweets and are more likely to be deleted. Internally the company felt Community Notes did a better job than traditional media of minimizing the spread of doctored or misattributed images of violence in the Israel-Gaza conflict (though a Bloomberg News analysis suggested it failed to stop a flood of deceit). It limited the virality of some hoaxes during the Los Angeles wildfires, with notes users pouncing on false images of the famous Hollywood sign aflame.
And indeed, as Musk has repeatedly stated, Community Notes often corrects him — 167 of his posts have received a note since Community Notes began.
Just Scratching the Surface
On X, users who volunteer for Community Notes can submit one to any post, adding context and links to trustworthy or original sources of information. The note’s helpfulness is then voted upon by other volunteers. If enough people agree it is worth publishing, it will be made visible to all X users under the original post. However, this happens only if a consensus is reached among users who have disagreed on other topics in the past, as judged by a bridging algorithm. The developers behind the system say this indicates the discovery of a common ground less likely to be biased in any direction.
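The bridging idea described above can be sketched in code. X’s open-source scoring system is far more elaborate, but at its core is a matrix-factorization model: each rating is explained by a general “helpfulness” intercept for the note plus a viewpoint factor, and only notes whose helpfulness survives once the viewpoint dimension is factored out get published. The toy sketch below is a minimal illustration of that approach; the data, names and publication threshold are illustrative assumptions, not X’s actual parameters.

```python
import random

def fit_bridging(ratings, epochs=2000, lr=0.05, reg=0.02, seed=0):
    """Fit rating ~ mu + b_user + b_note + f_user * f_note by SGD.

    ratings maps (user, note) -> 1.0 (helpful) or 0.0 (not helpful).
    b_note is the viewpoint-independent helpfulness intercept, while
    f_user * f_note soaks up agreement driven by shared ideology.
    Returns the dict of note intercepts.
    """
    rng = random.Random(seed)
    users = {u for u, _ in ratings}
    notes = {n for _, n in ratings}
    mu = sum(ratings.values()) / len(ratings)          # global mean rating
    b_u = {u: 0.0 for u in users}
    b_n = {n: 0.0 for n in notes}
    f_u = {u: rng.uniform(-0.1, 0.1) for u in users}   # user viewpoint factor
    f_n = {n: rng.uniform(-0.1, 0.1) for n in notes}   # note viewpoint factor
    keys = list(ratings)
    for _ in range(epochs):
        rng.shuffle(keys)
        for u, n in keys:
            err = ratings[(u, n)] - (mu + b_u[u] + b_n[n] + f_u[u] * f_n[n])
            b_u[u] += lr * (err - reg * b_u[u])
            b_n[n] += lr * (err - reg * b_n[n])
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - reg * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - reg * f_n[n]))
    return b_n

# Illustrative data: two camps that agree on one note but split on the others.
ratings = {}
for u in ("L1", "L2", "R1", "R2"):
    ratings[(u, "bridge")] = 1.0                 # everyone finds it helpful
for u in ("L1", "L2"):
    ratings[(u, "left_note")], ratings[(u, "right_note")] = 1.0, 0.0
for u in ("R1", "R2"):
    ratings[(u, "left_note")], ratings[(u, "right_note")] = 0.0, 1.0

intercepts = fit_bridging(ratings)
published = {n for n, b in intercepts.items() if b >= 0.2}  # illustrative bar
```

In this toy data, the “bridge” note is rated helpful by both camps, so its support cannot be explained by the viewpoint factor and it earns a high intercept. The partisan notes are each loved by one camp and rejected by the other; that split is absorbed by the factor term, leaving their intercepts below the publication bar.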
Among Community Notes’ main achievements is its speed in addressing misinformation relative to fact-checking operations staffed with researchers or reporters. In January 2023, the median time it took to attach a note to a misleading X post was 30 hours. By February 2025, it was less than 14 hours. In contrast, on Meta, fact-checkers could sometimes take more than a week, according to one analysis.
But even with these improvements, notes typically appear after a post’s most viral stage of diffusion — in other words, after the damage is already done.
It’s unclear how much misinformation is on X — if it could be counted, it could be deleted. But from X’s data, it’s obvious that most misleading posts go unaddressed. A high algorithmic bar for flagging misinformation means less than 10% of notes are rated “helpful” by the required quorum of users with diverse viewpoints, a percentage that has been trending downward as the system has scaled up.
*Excludes notes pertaining to scams and terms of service violations. Community Notes that are currently rated “helpful” are considered published.
One reason for this downward trend is that a significant share of published notes are later unpublished. Notes on divisive topics are routinely trapped in purgatory because users cannot agree — or rather, treat Community Notes as yet another online battlefield. Analysis for this column shows that even notes initially rated “helpful” — and published — are removed 26% of the time after disagreement sets in.
The removal rate is even higher for certain contentious topics and figures. From a sample of 2,674 notes about Russia and Ukraine in 2024, the data suggests more than 40% were unpublished after initial publication. Removals were driven by the disappearance of 229 out of 392 notes on posts by Russian government officials or state-run media accounts, based on analysis of posts that were still up on X at the time of writing. It is not uncommon to see instances of pro-Russia voices corralling their followers to collectively vote against a proposed or published note.
Note: Analysis of 4,684 Community Notes that mention Ukraine, Russia, Kyiv, Moscow, Zelenskiy or Putin.
Community Notes on Musk’s posts are also more likely than the average to be removed once published. According to data collected by research group Bright Data, of Musk’s 167 noted posts, just 88 still had a note publicly visible at the time of writing. So, while Musk maintains he couldn’t “change a Community Note if someone put a gun to my head,” as he told podcaster Lex Fridman, he often doesn’t need to: His supporters frequently see to it for him.
Reliable Sources Still in Demand
Despite Musk’s support of Community Notes, he has recently signaled annoyance at some of its conclusions.
When Musk shared content alleging President Volodymyr Zelenskiy of Ukraine was polling unfavorably among his citizens, Community Notes users set the record straight (his approval rating is typically above 50% and has risen more recently). Musk lashed out, saying he would “fix” Community Notes because it was “increasingly being gamed by governments & legacy media.”
In truth, our analysis showed that these sources provided the backbone for Community Notes to function. Musk’s frequent attacks on journalism, such as calling for CBS journalists to be jailed, willfully ignore this.
Bloomberg Opinion’s analysis suggests the mainstream media was the leading source of information in published Community Notes between January 2023 and February of this year: Sites categorized by online security group Cloudflare as “news & media” and “magazines” accounted for 31% of links cited within notes. Social networks were the next leading category with 20%, followed by educational sites with 11%.
A closer examination of the top 40 most-referenced domains within Community Notes, which accounted for more than 50% of all notes, showed the sources Musk most maligns are doing essential legwork in providing trustworthy reporting referenced in “helpful” notes. They included the Reuters news agency (“the most deceptive news organization on earth,” Musk said), the BBC (“British Pravda”) and NPR (“run by woke Stasi”).
Cited more often than any other single news source, however, is Wikipedia. The online encyclopedia, touted as the definitive model of how crowdsourced information gathering can provide a reliable resource, has had its funding challenged by Musk and his acolytes who say the platform is “controlled by far-left activists.” Musk draws little distinction between Wikipedia and the “legacy media,” given Wikipedia’s strict policies on acceptable sources.
One rebuttal to the importance of “legacy media” within Community Notes is that many notes link directly to source material, such as court documents or, particularly in the case of influencer or celebrity gossip, other social media posts. Indeed, the two most cited domains within Community Notes were X.com — meaning other posts on X — and clips on YouTube.com.
Still, an examination of this material shows mainstream media plays an important role. A random sampling of 400 notes citing X posts as a source showed 12% were posts by professional journalists, or directly referenced the work of media organizations. In a sample of 400 notes referencing YouTube clips, mainstream media footage was present in 29%.
Research suggests Community Notes benefits from a curious quirk of human nature: Users seem to more readily believe a stranger on the internet who links to a single New York Times article, for example, than they do the New York Times itself when it offers a fact check directly.
It is the online equivalent of podcaster Joe Rogan searching Google during a show, or a friend pulling out Wikipedia to settle a debate in a bar. But, as our analysis makes clear, for this approach to work, high-quality information must be available. Musk’s attacks, and Meta’s yanking of funding from fact-checking organizations, are damaging this ecosystem. So, too, are the large-scale job cuts being made by many prominent news organizations.
As well as losing money from Meta, international news organizations and fact-checking outfits are sounding the alarm over critical funding shortfalls as a result of Musk’s sweeping DOGE agenda.
The Trump administration has also taken a hacksaw to several government websites that are reservoirs of reliable sources, such as the website for the Centers for Disease Control and Prevention.
Meta Broadens the Experiment
The Community Notes concept will face a bigger test when it is introduced to Meta’s apps — Facebook, Instagram and Threads — which are used by some 3.3 billion people.
From what is known so far, much of it will be operationally identical to how it works on X — though Meta has not committed to publishing data on its performance. In recent years, Meta has taken steps to limit researcher access to audit what takes place on its apps.
Another question is whether Zuckerberg can foster the same kind of enthusiasm among his users as Musk has been able to do on X, where Community Notes has benefited from harnessing users’ desire to support Musk’s “free speech” agenda. This enthusiasm extends to doing the kind of work that might be expected of paid moderation staff. In February, almost a third of submitted Community Notes addressed basic terms of service violations (such as posting gambling advertisements) or warned of scams; in other words, free labor for the network owned by the world’s richest man.
Zuckerberg is a less popular figure than Musk, and much of what is said on Meta’s apps is in more private spaces like groups or instant messaging. Still, more than 200,000 volunteer users had signed up for the new initiative, the company said. Never underestimate the innate human urge to correct someone who is wrong on the internet.
Competing Interests
It may well be that no system could ever work sufficiently well at scale to counteract tech leaders’ opposing incentives to encourage as much highly engaging content as possible.
At the same time that it is ditching its fact-checkers, Meta is boosting its programs for dishing out money to popular creators. YouTube has similar revenue-sharing arrangements with its users. Some of X’s most notorious users often share the thousands of dollars the platform has handed them as a reward for their popular posts. Stories too good to be true, or too shocking to be ignored, are an easy shortcut to attention and success.
All this is happening as X, Meta and Google all rush to promote the use of generative AI tools that make manipulating video and images significantly easier and cheaper. In a relatively short amount of time, AI “slop” has made our information ecosystem murkier.
In tackling the clear-cut cases, Community Notes has been partially effective. When issues are politically contentious, the system becomes paralyzed and weak. And it is in these areas that our information crisis festers: details are messy, ground truths are harder to establish, facts evolve, and experts can and do change their minds.
These gray areas are Community Notes’ most glaring weakness. The system also allows tech leaders and their companies to wash their hands of the responsibility to adequately police their own platforms, outsourcing as much of the job as they can to users.
To truly stem the spread of misinformation and disinformation on their platforms, social media executives need to remove the incentives that encourage it instead of hiding behind the crowd and hollow proclamations about free speech.