Meta, the parent company of Facebook, Instagram, and Threads, announced on Thursday that it will begin testing its new Community Notes feature in the United States starting March 18, 2025. The move marks a significant shift in the company’s content moderation strategy, replacing its long-standing third-party fact-checking program with a crowd-sourced model powered by an open-source algorithm originally developed by Elon Musk’s X.
The decision, which comes two months after Meta scrapped its fact-checking initiative amid pressure from conservatives, is being positioned as a less biased and more community-driven approach to tackling misinformation on its platforms.
The rollout of Community Notes follows Meta’s January decision to end its reliance on third-party fact-checking organisations, a program launched in 2016 to curb the spread of false information after criticism over misinformation during that year’s U.S. presidential election. CEO Mark Zuckerberg had cited a “cultural tipping point” toward prioritising free speech, sparked by President Donald Trump’s 2024 election victory, as a key driver behind the change.
Trump, a vocal critic of social media companies for allegedly silencing conservative voices, praised Meta’s initial move in January, suggesting it may have been influenced by his past threats against Zuckerberg and the company.
Community Notes will allow users to write and rate brief notes, capped at 500 characters, to flag false or misleading content across Meta’s platforms. Unlike the previous system, which relied on nearly 100 certified fact-checking organisations worldwide operating in over 60 languages, the new model shifts responsibility to the user base.
Meta reported that over 200,000 U.S. users have already signed up as potential contributors ahead of the public beta launch next week. To participate, contributors must be over 18 and include a supporting link with their notes. The system will initially support six languages: English, Spanish, Chinese, Vietnamese, French, and Portuguese.
Drawing inspiration from X’s Community Notes, which was rebranded from its earlier “Birdwatch” pilot in 2022, Meta will use X’s open-source algorithm as the backbone of its rating system.
On X, the feature has been hailed by some as a democratic alternative to traditional moderation, allowing users to add context or debunk claims collaboratively. Meta emphasised that notes will not carry author names and will only be published if contributors with diverse viewpoints agree they provide “helpful context.” During the testing phase, notes will not appear publicly as the company refines the system to ensure accuracy and effectiveness.
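The “diverse viewpoints must agree” requirement is the core of the open-source ranker X published: ratings are modeled so that a note is only scored as helpful when agreement persists after accounting for raters’ viewpoint alignment, an approach often called bridging-based ranking. The following is a toy sketch of that idea only, not Meta’s or X’s actual code; the function name, hyperparameters, and data are illustrative. Each rating is modeled as a global mean plus user and note intercepts plus a one-dimensional user-factor/note-factor product; the factor axis absorbs viewpoint-driven agreement, so a note’s intercept stays high only when raters on both sides of that axis found it helpful.

```python
import numpy as np

def score_notes(ratings, n_users, n_notes, dim=1, epochs=2000, lr=0.05, reg=0.1):
    """Toy bridging-style matrix factorization (illustrative only).

    ratings: list of (user_idx, note_idx, value) with value 1.0 = "helpful",
             0.0 = "not helpful". Returns one intercept score per note.
    """
    rng = np.random.default_rng(0)
    mu = 0.0                               # global mean rating
    ub = np.zeros(n_users)                 # user intercepts (rater leniency)
    nb = np.zeros(n_notes)                 # note intercepts (the score we want)
    uf = rng.normal(0, 0.1, (n_users, dim))  # user viewpoint factors
    nf = rng.normal(0, 0.1, (n_notes, dim))  # note viewpoint factors
    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + ub[u] + nb[n] + uf[u] @ nf[n]
            err = r - pred
            # Plain SGD with L2 regularization, so partisan agreement is
            # pushed into the factor term rather than the note intercept.
            mu += lr * err
            ub[u] += lr * (err - reg * ub[u])
            nb[n] += lr * (err - reg * nb[n])
            uf[u], nf[n] = (uf[u] + lr * (err * nf[n] - reg * uf[u]),
                            nf[n] + lr * (err * uf[u] - reg * nf[n]))
    return nb  # higher intercept = rated helpful across the viewpoint axis

# Two "camps" of raters (0,1 vs 2,3): note 0 is rated helpful only by
# one camp, while note 1 is rated helpful by both camps.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 0.0), (3, 0, 0.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0), (3, 1, 1.0)]
scores = score_notes(ratings, n_users=4, n_notes=2)
print(scores[1] > scores[0])  # the cross-camp note scores higher
```

In the real system a note additionally needs its score to clear a fixed threshold before it earns “Helpful” status; the sketch above only shows why one-sided agreement is discounted.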
“This approach empowers our community to decide what needs context, reducing the perception of bias that came with third-party fact-checking,” said Joel Kaplan, Meta’s head of global policy, in a blog post.
Kaplan, a long-time Republican operative who assumed the role of chief global affairs officer in January, credited X’s system as a proven model, noting its success in fostering a range of perspectives. The company expects Community Notes to be “less biased” than its predecessor, which faced accusations from conservatives of disproportionately targeting right-wing content.
The shift represents Meta’s most substantial overhaul of content moderation in recent years and aligns with Zuckerberg’s efforts to mend ties with the incumbent Trump administration.

Since Trump’s November 2024 victory, Meta has taken several steps to signal goodwill, including a $1 million donation to his inauguration fund and the appointment of Trump allies such as Ultimate Fighting Championship CEO Dana White to its board. Zuckerberg’s personal meeting with Trump at Mar-a-Lago in late 2024 further underscored this repositioning, a stark contrast to the rocky relationship during Trump’s first term, when Meta banned him from its platforms following the January 6, 2021, Capitol riot.
Once Community Notes go live, third-party fact-check labels will disappear from Meta’s U.S. platforms, though the company has not indicated plans to end the program globally. With over 3 billion users worldwide, Meta’s pivot to a crowd-sourced model has sparked both optimism and concern.
Supporters, including some free-speech advocates, argue it decentralises control over information and reduces reliance on potentially biased experts. Critics, however, warn that it could exacerbate the spread of misinformation, particularly without the penalties, such as reduced visibility, that accompany fact-check labels.

Zuckerberg acknowledged this trade-off in January, stating, “It means we are going to catch less bad stuff, but we’ll also reduce the number of innocent people’s posts that we accidentally take down.”
The announcement has reignited debates over the efficacy of community-driven moderation. Studies on X’s Community Notes have yielded mixed results, with some praising its ability to counter falsehoods, like misinformation about COVID-19 vaccines, while others highlight its limitations in addressing rampant conspiracy theories due to its reliance on unpaid volunteers. Experts fear that Meta’s vastly larger user base and algorithmic amplification could further complicate the system’s scalability.
As Meta prepares to launch Community Notes, the tech giant is betting on a lighter-touch approach to content moderation at a time of heightened political scrutiny. Whether this shift will satisfy conservatives, appease advertisers, and maintain platform integrity remains to be seen as the public beta rolls out.