European report on the spread of mis/disinformation on social media platforms

MonkeeSage

The Structural Indicators to Monitor Online Disinformation Scientifically (SIMODS) project recently released its second report on the state of online disinformation on social media platforms.

SIMODS is a consortium of European organizations whose goal is to characterize and track the spread of mis/disinformation online.
External Quote:
SIMODS brings together eight leading fact-checking and research organizations from across Europe to create methodologically rigorous tools to track the prevalence of misinformation on major platforms.

While platforms have committed to reducing the spread of misinformation under the Code of Practice on Disinformation, SIMODS aims to develop robust Structural Indicators (SIs) to evaluate the prevalence of misinformation on digital platforms and to analyze whether accounts spreading misinformation benefit from greater visibility and reach.

These indicators track multiple aspects of misinformation, including its prevalence, sources, monetization, and cross-platform reach. The study covers six major social media platforms: the Very Large Online Platforms (VLOPs) that are signatories of the Code of Practice on Disinformation (Facebook, Instagram, YouTube, TikTok, and LinkedIn), along with Twitter/X, in four European Union languages: French, Polish, Slovak, and Spanish.
Source: https://checkfirst.network/project/simods/

The report is "Second Measurement of the State of Online Disinformation in Europe on Very Large Online Platforms", SIMODS Project, March 2026.
https://science.feedback.org/wp-content/uploads/2026/03/SIMODS-Report-2.pdf

They report some interesting findings on the prevalence, sources, and impact of mis/disinformation across the six platforms in four EU member-state languages.

External Quote:

Executive Summary


The consortium led by Science Feedback and including Newtral, Demagog SK, Pravda, Check First, and the Universitat Oberta de Catalunya (UOC) presents the second large-scale, cross-platform, scientifically sound measurement of Structural Indicators of Disinformation. These indicators assess how permeable Very Large Online Platforms (VLOPs) are to mis/disinformation in Europe, how influential repeat misinformers are relative to credible sources, and the extent to which such content is monetised.

Against a backdrop of platforms walking back earlier commitments to counter disinformation, this second report brings something no single measurement can offer: a basis for comparison. The consistency of results across two independent measurement periods strengthens the credibility of the findings and confirms that what we are measuring is not noise, but structural features of the platforms themselves.

WHAT WE MEASURED

Across six VLOPs (Facebook, Instagram, LinkedIn, TikTok, X/Twitter, YouTube) and four EU Member States (France, Poland, Slovakia, Spain), we report five Structural Indicators: Prevalence of mis/disinformation; Sources (relative influence of repeat misinformers vs. credible actors); Monetisation; AI-generated mis/disinformation (new this wave); and Audience growth (new this wave).

The second data collection period ran throughout October 2025, covering five topics (the Russia–Ukraine war, climate change, health, migration, and national politics) and yielding approximately 3.3 million posts. A view-weighted random sample (500 posts per platform and per country) approximates widely seen content; professional fact-checkers annotated posts to assess misinformation.
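
(A quick aside from me, not the report: here is a minimal sketch of what view-weighted sampling looks like in practice, assuming each collected post carries a view count. The field names, sample size, and the use of sampling with replacement are my assumptions; the report excerpt does not publish the SIMODS sampling code.)

Code:
import random

def view_weighted_sample(posts, k=500, seed=42):
    """Draw k posts with probability proportional to view count."""
    rng = random.Random(seed)
    # Weight each post by its views so widely seen posts are more
    # likely to be drawn; this approximates what users actually saw.
    weights = [max(p["views"], 0) for p in posts]
    # random.choices samples WITH replacement; a hugely viewed post
    # can therefore appear more than once in the sample.
    return rng.choices(posts, weights=weights, k=k)

# Hypothetical corpus: one viral post and two small ones.
posts = [
    {"id": 1, "views": 1_200_000},
    {"id": 2, "views": 3_400},
    {"id": 3, "views": 87_000},
]
sample = view_weighted_sample(posts, k=2)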

Data access note. Despite DSA Article 40.12 requests, only LinkedIn supplied the requested random sample of posts. TikTok and YouTube provided API access, which required additional effort to produce comparable results. This concerns publicly available data: the barrier platforms are erecting against independent researchers has no technical justification. For non-public data, including monetisation records, there was no cooperation from any platform. This opacity makes it practically impossible to study the systemic risks these platforms impose on society, as the DSA requires.

KEY FINDINGS

1) Prevalence. TikTok shows the highest prevalence of mis/disinformation (~25% of exposure-weighted posts), up from ~20% in the first measurement period. YouTube also saw a notable increase, from ~8.5% to ~12%. Facebook (~15%), X/Twitter (~11%), and Instagram (~8%) remained broadly stable. LinkedIn continues to show the lowest prevalence at ~1%.

When including abusive (e.g., hate speech) and borderline content (content that reinforces a disinformation narrative without making an outright false claim), levels are substantially higher: TikTok reaches ~43% problematic content, Facebook ~34%, X/Twitter ~32%, YouTube ~27%, Instagram ~16%, and LinkedIn ~4%. Notably, three platforms (TikTok, X/Twitter, and YouTube) now show more problematic content than credible content in our samples, compared to only one (X/Twitter) in the first measurement period.

Health misinformation remains the dominant category across all platforms (~43% of all mis/disinformation posts).
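
(Aside from me: a prevalence figure like TikTok's ~25% is the share of annotated posts in the view-weighted sample, so its uncertainty can be sketched with a simple binomial confidence interval. The counts below are invented for illustration, and the report excerpt does not state which interval method SIMODS actually uses.)

Code:
import math

def prevalence_ci(flagged, n, z=1.96):
    """Sample proportion with a ~95% normal-approximation interval."""
    p = flagged / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, (p - half_width, p + half_width)

# Invented counts: 125 of 500 sampled posts annotated as mis/disinformation.
p, (lo, hi) = prevalence_ci(flagged=125, n=500)
print(f"prevalence ~{p:.0%}, 95% CI ({lo:.1%}, {hi:.1%})")
# -> prevalence ~25%, 95% CI (21.2%, 28.8%)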

2) Sources. Across almost all platforms, low-credibility accounts receive disproportionately high engagement relative to their audience size, a pattern we term the "misinformation premium". On most platforms, this premium persisted or worsened compared to the first measurement period: on X/Twitter it rose from ~4 to ~10, and on YouTube from ~8.5 to ~11. This means that on X/Twitter, an account posting false or misleading information repeatedly now receives around 10 times as much engagement per post as a credible source with a comparable following.
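
(Aside from me: one plausible way to read the "misinformation premium" as a formula is engagement per post, normalised by follower count, for low-credibility accounts divided by the same rate for high-credibility accounts. The figures below are invented, and the report's exact estimator is not given in this excerpt.)

Code:
def engagement_rate(total_engagement, n_posts, followers):
    """Average engagement per post, normalised by follower count."""
    return total_engagement / n_posts / followers

# Invented figures: two accounts with the same following and posting volume.
low_cred = engagement_rate(total_engagement=50_000, n_posts=100, followers=10_000)
high_cred = engagement_rate(total_engagement=5_000, n_posts=100, followers=10_000)

premium = low_cred / high_cred
print(f"misinformation premium ~{premium:.1f}x")  # -> ~10.0x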

3) Monetisation. Monetisation data remain entirely inaccessible on four of the six platforms. On YouTube, 81% of eligible low-credibility channels appear to benefit from monetisation, compared to 90% of eligible high-credibility channels. On Facebook, the gap is wider (22% vs. 51%). In both cases, the fact that a high proportion of eligible low-credibility accounts appear to be monetised indicates that demonetisation policies are not functioning as intended. These results are consistent with those of the first measurement period: platforms are, to a meaningful extent, benefiting from and financially sustaining the very accounts that repeatedly spread misleading content.

4) Consistency across measurement periods. The overall coherence of results between the two waves is a key finding in itself. Prevalence estimates, the misinformation premium, and monetisation patterns are all consistent with those observed in the first wave. This reproducibility confirms that our methodology is sound and that the phenomena we measure are structural, not incidental.

5) AI-generated disinformation. This wave introduces a new indicator tracking the share of mis/disinformation that is AI-generated. On video platforms, AI-generated content accounts for approximately one quarter of all identified mis/disinformation on TikTok (24%) and approximately one fifth on YouTube (19%). For a phenomenon that barely existed a few years ago, these figures indicate rapid growth and a significant and escalating risk to the quality of public information. Health misinformation accounts for the largest share of AI-generated mis/disinformation on both platforms.

Critically, the overwhelming majority of this content carries no label: across all platforms, only 16.5% of AI-generated mis/disinformation was visibly marked as synthetic. This is a failure by platforms to inform their users of what they are watching, and to protect them from manipulation and deception. The prevalence of unlabelled AI-generated health misinformation, including fabricated videos featuring AI avatars posing as medical professionals, illustrates concretely the real-world harms this failure enables.

6) Audience growth. This wave also introduces a new indicator tracking the relative growth rate of audiences for high- and low-credibility accounts. On most platforms, no statistically significant difference in follower growth was observed between the two groups. One exception is X/Twitter, where low-credibility accounts are growing their audiences at ~3.5 times the rate of high-credibility accounts. X/Twitter thus appears to favour the expansion of accounts that repeatedly share misleading content.
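
(Aside from me: the growth comparison can be sketched as per-account relative follower growth over the measurement window, compared between groups. A Welch t-test is one common choice for the significance check, though the report does not say which test SIMODS used; all data below are invented.)

Code:
from scipy.stats import ttest_ind

def growth_rate(start, end):
    """Relative follower growth over the measurement window."""
    return (end - start) / start

# Invented (start, end) follower counts for a handful of accounts per group.
low_cred = [growth_rate(s, e) for s, e in [(1_000, 1_400), (5_000, 6_600), (800, 1_100)]]
high_cred = [growth_rate(s, e) for s, e in [(1_200, 1_290), (4_000, 4_400), (900, 1_000)]]

ratio = (sum(low_cred) / len(low_cred)) / (sum(high_cred) / len(high_cred))
# Welch's t-test (unequal variances) as one possible significance check.
t_stat, p_value = ttest_ind(low_cred, high_cred, equal_var=False)
print(f"low-credibility audiences growing ~{ratio:.1f}x faster (p = {p_value:.3f})")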

WHY THIS MATTERS

Two waves of measurement, using a consistent methodology, now point to the same conclusion: the structural permissiveness of major online platforms to misleading content appears to be a persistent feature of how these platforms are designed and operated.

The integration of the Code of Conduct on Disinformation into the DSA framework in 2025 creates, for the first time, a legal basis for enforcement. The indicators developed by the SIMODS project are designed to serve that purpose: they are comparable across platforms, reproducible over time, and grounded in independent, transparent methodology. What is now required is the political and regulatory will to use them.
 
