By Jeff Kao, ProPublica, and Priyanjana Bengani, Tow Center for Digital Journalism
“My sisters have died,” the young boy sobbed, chest heaving, as he wailed into the sky. “Oh, my sisters.” As Israel began airstrikes on Gaza following the Oct. 7 Hamas terrorist attack, posts by verified accounts on X, the social media platform formerly called Twitter, were being transmitted around the world. The heart-wrenching video of the grieving boy, viewed more than 600,000 times, was posted by an account named “#FreePalestine 🇵🇸.” The account had received X’s “verified” badge just hours before posting the tweet that went viral.
Days later, a video posted by an account calling itself “ISRAEL MOSSAD,” another “verified” account, this time bearing the logo of Israel’s national intelligence agency, claimed to show Israel’s advanced air defense technology. The post, viewed nearly 6 million times, showed a volley of rockets exploding in the night sky with the caption: “The New Iron beam in full display.”
And following an explosion outside the Al-Ahli Hospital in Gaza on Oct. 17 that killed civilians, the verified account of the Hamas-affiliated news organization Quds News Network posted a screenshot from Facebook claiming to show the Israel Defense Forces declaring their intent to strike the hospital before the explosion. It was seen more than half a million times.
None of these posts depicted real events from the conflict. The video of the grieving boy was from at least nine years ago and was taken in Syria, not Gaza. The clip of rockets exploding was from a military simulation video game. And the Facebook screenshot was from a now-deleted Facebook page not affiliated with Israel or the IDF.
Just days before its viral tweet, the #FreePalestine 🇵🇸 account had a blue verification check under a different name: “Taliban Public Relations Department, Commentary.” It changed its name back after the tweet and was reverified within a week. Despite their blue check badges, neither Taliban Public Relations Department, Commentary nor ISRAEL MOSSAD (now “Mossad Commentary”) has any real-life connection to either organization. Their posts were eventually annotated by Community Notes, X’s crowdsourced fact-checking system, but those clarifications garnered about 900,000 views, less than 15% of what the two viral posts totaled. ISRAEL MOSSAD deleted its post in late November. The Facebook screenshot, posted by the account of the Quds News Network, still doesn’t have a clarifying note. Mossad Commentary and the Quds News Network did not respond to direct messages seeking comment; Taliban Public Relations Department, Commentary did not respond to public mentions asking for comment.
An investigation by ProPublica and Columbia University’s Tow Center for Digital Journalism shows how false claims based on out-of-context, outdated or manipulated media have proliferated on X during the first month of the Israel-Hamas conflict. The organizations looked at over 200 distinct claims that independent fact-checks determined to be misleading, and searched for posts by verified accounts that perpetuated them, identifying 2,000 total tweets. The tweets, collectively viewed half a billion times, were analyzed alongside account and Community Notes data.
The ongoing conflict in Gaza is the biggest test for changes implemented by X owner Elon Musk since his acquisition of Twitter last year. After raising concerns about the power of platforms to determine what speech is appropriate, Musk instituted policies to promote “healthy” debate under the maxim “freedom of speech, not reach,” where certain types of posts that previously would have been removed for violating platform policy now have their visibility restricted.
Within 10 days of taking ownership, Musk cut 15% of Twitter’s trust and safety team. He made further cuts in the following months, including firing the election integrity team, terminating many contracted content moderators and revoking existing misinformation policies on specific topics like COVID-19. In place of these safeguards, Musk expanded Community Notes. The feature, first launched in 2021 as Birdwatch, adds crowdsourced annotations to a tweet when users with diverse perspectives rate them “helpful.”
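X has published the general approach behind Community Notes scoring, which relies on a matrix-factorization “bridging” model rather than simple vote counts. As a rough, hypothetical illustration of that idea (not X’s actual code; the cluster labels, thresholds and function name below are invented), the Python sketch only surfaces a note when raters from at least two distinct viewpoint clusters independently find it helpful.

```python
# Toy illustration of "bridging"-style note scoring (not X's actual algorithm).
# Assumption: each rater has already been assigned a viewpoint cluster; the
# labels here stand in for the latent factors X derives via matrix factorization.

from collections import defaultdict

def note_is_helpful(ratings, min_per_cluster=2, min_helpful_share=0.8):
    """ratings: list of (rater_cluster, rated_helpful) tuples for one note."""
    helpful_by_cluster = defaultdict(int)
    total_by_cluster = defaultdict(int)
    for cluster, helpful in ratings:
        total_by_cluster[cluster] += 1
        if helpful:
            helpful_by_cluster[cluster] += 1
    # Require broad agreement from at least two distinct clusters, so a
    # one-sided pile-on cannot push a note live on its own.
    supportive_clusters = [
        c for c in total_by_cluster
        if total_by_cluster[c] >= min_per_cluster
        and helpful_by_cluster[c] / total_by_cluster[c] >= min_helpful_share
    ]
    return len(supportive_clusters) >= 2

# Example: helpful ratings from only one cluster are not enough.
print(note_is_helpful([("a", True), ("a", True), ("a", True)]))               # False
print(note_is_helpful([("a", True), ("a", True), ("b", True), ("b", True)]))  # True
```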
“The Israel-Hamas war is a classic case of an information crisis on X, in terms of the speed and volume of the misinformation and the harmful consequences of that rhetoric,” said Michael Zimmer, the director of the Center for Data, Ethics, and Society at Marquette University in Wisconsin, who has studied how social media platforms combat misinformation.
While no social media platform is free of misinformation, critics contend that Musk’s policies, along with his personal statements, have led to a proliferation of misinformation and hate speech on X. Advertisers have fled the platform — U.S. ad revenue is down roughly 60% compared to last year. Last week, Musk reinstated the account of Alex Jones, who was ordered to pay $1.1 billion in defamation damages for repeatedly lying about the 2012 Sandy Hook school shooting. Jones appealed the verdict. This week, the European Union opened a formal investigation against X for breaching multiple provisions of the Digital Services Act, including risk management and content moderation, as well as deceptive design in relation to its “so-called Blue checks.”
ProPublica and the Tow Center found that verified blue check accounts that posted misleading media saw their audience grow on X in the first month of the conflict. This included dozens of accounts that posted debunked tweets three or more times and that now have over 100,000 followers each. The false posts appear to violate X’s synthetic and manipulated media policy, which bars all users from sharing media that may deceive or confuse people. Many accounts also appear to breach the eligibility criteria for verification, which state that verified accounts must not be “misleading or deceptive” or engage in “platform manipulation and spam.” Several of the fastest-growing accounts that have posted multiple false claims about the conflict now have more followers than some regional news organizations covering it.
We also found that the Community Notes system, which has been touted by Musk as a way to improve information accuracy on the platform, hasn’t scaled sufficiently. About 80% of the 2,000 debunked posts we reviewed had no Community Note. Of the 200 debunked claims, more than 80 were never clarified with a note.
When clarifying Community Notes did appear, they typically reached a fraction of the views that the original tweet did, though views on Community Notes are significantly undercounted. We also found that in some cases, debunked images or videos were flagged by a Community Note in one tweet but not in others, despite X announcing, partway through the period covered by our dataset, that it had improved its media-matching algorithms to address this. When a tweet did receive a Community Note, the note typically didn’t become visible until hours after the post was published.
This last finding expands on a recent report by Bloomberg, which analyzed 400 false posts tagged by Community Notes in the first two weeks after the Oct. 7 attack and found it typically took seven hours for a Community Note to appear.
For the tweets analyzed by ProPublica and the Tow Center, the median time that elapsed before a Community Note became visible decreased to just over five hours in the first week of November after X improved its system. Outliers did exist: Sometimes it still took more than two days for a note to appear, while in other cases, a note appeared almost instantaneously because the tweet used media that the system had already encountered.
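A median delay figure like this can be computed by joining tweet timestamps to the time a note first became visible. The Python snippet below is a hypothetical sketch of that calculation; the file names and columns (tweet_id, tweet_created_at, note_first_visible_at) are illustrative only and do not reflect the actual structure of the ProPublica-Tow Center dataset or X’s Community Notes data exports.

```python
# Sketch of computing a median time-to-note figure from two hypothetical CSVs.

import pandas as pd

tweets = pd.read_csv("debunked_tweets.csv", parse_dates=["tweet_created_at"])
notes = pd.read_csv("visible_notes.csv", parse_dates=["note_first_visible_at"])

# Left join keeps tweets that never received a visible note.
merged = tweets.merge(notes, on="tweet_id", how="left")
merged["hours_to_note"] = (
    merged["note_first_visible_at"] - merged["tweet_created_at"]
).dt.total_seconds() / 3600

# Share of debunked tweets with no note, and the median delay for those with one.
print("share with no note:", merged["hours_to_note"].isna().mean())
print("median hours to note:", merged["hours_to_note"].median())
```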
Multiple emails sent to X’s press inbox seeking comment on our findings triggered automated replies to “check back later” with no further response. Keith Coleman, who leads the Community Notes team at X, was separately provided with summary findings relevant to Community Notes as well as the dataset containing the compiled claims and tweets.
Via email, Coleman said that the tweets identified in this investigation were a small fraction of those covered by the 1,500 visible Community Notes on X about the conflict from this time period. He also said that many posts with high-visibility notes were deleted after receiving a Community Note, including ones that we did not identify. When asked about the number of claims that did not receive a single note, Coleman said that users might not have thought one was necessary, pointing to examples where images generated by artificial intelligence tools could be interpreted as artistic depictions. AI-generated images accounted for around 7% of the tweets that did not receive a note; none acknowledged that the media was AI-generated. Coleman said that the current system is an upgrade over X’s historic approaches to dealing with misinformation and that it continues to improve; “most importantly,” he said, the Community Notes program “is found helpful by people globally, across the political spectrum.”
Community Notes were initially meant to complement X’s various trust and safety initiatives, not replace them. “It still makes sense for platforms to keep their trust and safety teams in a breaking-news, viral environment. It’s not going to work to simply fling open the gates,” said Mike Ananny, an associate professor of communication and journalism at the University of Southern California, who is skeptical about leaving moderation to the community, particularly after the changes Musk has made.
“I’m not sure any community norm is going to work given all of the signals that have been given about who’s welcome here, what types of opinions are respected and what types of content is allowed,” he said.
ProPublica and the Tow Center compiled a large sample of data from multiple sources to study the effectiveness of Community Notes in labeling debunked claims. We found over 1,300 verified accounts that posted misleading or out-of-context media at least once in the first month of the conflict; 130 accounts did so three or more times. (For more details on how the posts were gathered, see the methodology section at the end of this story.)
Musk overhauled Twitter’s account verification program soon after acquiring the company. Previously, Twitter gave verified badges to politicians, celebrities, news organizations, government agencies and other vetted notable individuals or organizations. Though the legacy process was criticized as opaque and arbitrary, it provided a signal of authenticity for users. Today, accounts receive the once-coveted blue check in exchange for $8 a month and a cursory identity check. Despite well-documented impersonation and credibility issues, these “verified” accounts are prioritized in search, in replies and across X’s algorithmic feeds.
If an account continuously shares harmful or misleading narratives, X’s synthetic and manipulated media policy states that its visibility may be reduced or the account may be locked or suspended. But the investigation found that prominent verified accounts appeared to face few consequences for broadcasting misleading media to their large follower networks. Of the 40 accounts with more than 100,000 followers that posted debunked tweets three times or more in the first month of the conflict, only seven appeared to have had any action taken against them, according to account history data shared with ProPublica and the Tow Center by Travis Brown. Brown is a software developer who researches extremism and misinformation on X.
Those 40 accounts, a number of which have been identified as the most influential accounts engaging in Hamas-Israel discourse, grew their collective audience by nearly 5 million followers, to around 17 million, in the first month of the conflict alone.
A few of the smaller verified accounts in the dataset received punitive action: About 50 accounts that posted at least one false tweet were suspended. On average, these accounts had 7,000 followers. It is unclear whether the accounts were suspended for manipulated media policy violations or for other reasons, such as bot-like behavior. Around 80 accounts no longer have a blue check badge. It is unclear whether these accounts lost their blue checks because they stopped paying, because they had recently changed their display name (which triggers a temporary removal of the verified status), or because X revoked the status. X has said it removed 3,000 accounts belonging to “violent entities,” including Hamas, in the region.
On Oct. 29, X announced a new policy where verified accounts would no longer be eligible to share in revenue earned from ads that appeared alongside any of their posts that had been corrected by Community Notes. In a tweet, Musk said, “the idea is to maximize the incentive for accuracy over sensationalism.” Coleman said that this policy has been implemented, but did not provide further details.
False claims that go viral are frequently repeated by multiple accounts and often take the form of decontextualized old footage. One of the most widespread false claims, that Qatar was threatening to stop supplying natural gas to the world unless Israel halted its airstrikes, was repeated by nearly 70 verified accounts. The claim, which used a false description of an unrelated 2017 speech by the Qatari emir to bolster its credibility, received over 15 million views collectively, with a single post by Dominick McGee (@dom_lucre) amassing more than 9 million views. McGee, an election denier with nearly 800,000 followers who is popular in the QAnon community, was suspended from X in July 2023 for sharing child exploitation imagery. Shortly after, X reversed the suspension. McGee denied that he had shared the image when reached by direct message on X, claiming instead that it was “an article touching it.”
Another account, using the pseudonym Sprinter, shared the same false claim about Qatar in a post that was viewed over 80,000 times. These were not the only false posts made by either account. McGee shared six debunked claims about the conflict in our dataset; Sprinter shared 20.
Sprinter has tweeted AI-generated images, digitally altered videos and the unsubstantiated claim that Ukraine is providing weapons to Hamas. Each of these posts has received hundreds of thousands of views. The account’s follower count has increased by 60% to about 500,000, rivaling the followings of Haaretz and the Times of Israel on X. Sprinter’s profile, which has also used the pseudonyms SprinterTeam, SprinterX and WizardSX, according to historical account data provided by Brown, was “temporarily restricted” by X in mid-November, but it retained its “verified” status. Sprinter’s original profile linked to a backup account. That account, whose name and verification status continue to change, still posts dozens of times a day and has grown to over 25,000 followers. Sprinter did not respond to a request for comment and blocked the reporter after being contacted. The original account appears to no longer exist.
Verification badges were once a critical signal for distinguishing official accounts from inauthentic ones. But with X’s overhaul of the blue check program, that signal now essentially tells you only whether the account pays $8 a month. ISRAEL MOSSAD, the account that posted video game footage falsely claiming it was an Israeli air defense system, has grown from fewer than 1,000 followers when it first acquired a blue check in September 2023 to more than 230,000 today. In another debunked post, published the same day as the video game footage, the account claimed to show more of the Iron Beam system. That tweet still doesn’t have a Community Note, despite having nearly 400,000 views. The account briefly lost its blue check within a day of the two tweets being posted, but regained it days after changing its display name to Mossad Commentary. Even though it isn’t affiliated with Israel’s national intelligence agency, it continues to use Mossad’s logo in its profile picture.
“The blue check is flipped now. Instead of a sign of authenticity, it’s a sign of suspicion, at least for those of us who study this enough,” said Zimmer, the Marquette University professor.
Of the verified accounts we reviewed, the one that grew the fastest during the first month of the Israel-Hamas conflict was also one of the most prolific posters of misleading claims. Jackson Hinkle, a 24-year-old political commentator and self-described “MAGA communist,” has built a large following posting highly partisan tweets. He has been suspended from various platforms in the past, has pushed pro-Russian narratives and has claimed that YouTube permanently suspended his account for “Ukraine misinformation.” Three days after that claim, he tweeted that YouTube had banned him because it didn’t want him telling the truth about the Israel-Hamas conflict. Currently, he has more than 2 million followers on X; over 1.5 million of those arrived after Oct. 7. ProPublica and the Tow Center found over 20 tweets by Hinkle using misleading or manipulated media in the first month of the conflict; more than half had been tagged with a Community Note. The tweets amassed 40 million views, while the Community Notes were collectively viewed just under 10 million times. Hinkle did not respond to a request for comment.
All told, debunked tweets with a Community Note in the ProPublica-Tow Center dataset amassed 300 million views in aggregate, about five times the total number of views on the notes, even though Community Notes can appear on multiple tweets and collect views from all of them, including from tweets that were not reviewed by the news organizations.
X continues to improve the Community Notes system. It announced updates to the feature on Oct. 24, saying notes are appearing more often on viral and high-visibility content, and are appearing faster in general. But ProPublica and the Tow Center’s review found that less than a third of debunked tweets created since the update received a Community Note, though the median time for a note to become visible dropped noticeably, from seven hours to just over five hours in the first week of November. The Community Notes team said over email that its data showed a note typically took around five hours to become visible in the first few days of the conflict.
Aviv Ovadya, an affiliate at Harvard’s Berkman Klein Center for Internet & Society who has worked on social media governance and on algorithms similar to the one Community Notes uses, says that any fact-checking process, whether it relies on crowdsourced notes or a third-party fact-checker, is likely to always be playing catch-up to viral claims. “You need to know if the claim is worth even fact-checking,” Ovadya said. “Is it worth my time?” Once a false post is identified, a third-party fact-check may take longer than a Community Note.
Coleman, who leads the Community Notes team, said over email that his team found Community Notes often appeared faster than posts by traditional fact-checkers, and that they are committed to making the notes visible faster.
Our review found that many viral tweets with claims that had been debunked by third-party fact-checkers did not receive a Community Note in the long run. Of the hundreds of tweets in the dataset that gained over 100,000 impressions, only about half had a note. Coleman noted that of those widely viewed tweets, the ones with visible Community Notes attached had nearly twice as many views.
To counter the instances where false claims spread quickly because many accounts post the same misleading media in a short time frame, the company announced in October that it would attach the same Community Note to all posts that share a debunked piece of media. ProPublica and the Tow Center found the system wasn’t always successful.
For example, on and after Oct. 25, multiple accounts tweeted an AI-generated image of a man with five children amid piles of rubble. Community Notes for this image appeared thousands of times on X. However, of the 22 instances we identified in which a verified account tweeted the image, only seven of those were tagged with a Community Note. (One of those tweets was later deleted after garnering more than 200,000 views.)
We found X’s media-matching system to be inconsistent for numerous other claims as well. Coleman pointed to the many automatic matches as a sign that it is working and said that its algorithm prioritizes “high precision” to avoid mistakenly finding matches between pieces of media that are meaningfully different. He also said the Community Notes team plans to further improve its media-matching system.
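X has not detailed how its media-matching system works. A common technique for finding near-duplicate images is perceptual hashing, and the hypothetical Python sketch below illustrates that general approach using the third-party Pillow and ImageHash libraries; the threshold, variable names and function name are illustrative, not a description of X’s implementation.

```python
# Sketch of near-duplicate image matching via perceptual hashing.
# This only illustrates the general technique, not X's actual system.

from PIL import Image
import imagehash

MAX_DISTANCE = 5  # Hamming-distance threshold; tighter values mean fewer false matches.

def find_matching_note(new_image_path, noted_image_hashes):
    """noted_image_hashes: dict mapping note_id -> precomputed perceptual hash."""
    new_hash = imagehash.phash(Image.open(new_image_path))
    for note_id, known_hash in noted_image_hashes.items():
        # A small hash distance means the images are visually near-identical;
        # crops and heavy edits can still slip through.
        if new_hash - known_hash <= MAX_DISTANCE:
            return note_id
    return None
```

A tight distance threshold in a scheme like this favors precision over recall, which is consistent with Coleman’s description of an algorithm that prioritizes “high precision” at the cost of missing some reposts.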
The false claims ProPublica and the Tow Center identified in this analysis were also posted on other platforms, including Instagram and TikTok. On X, having a Community Note added to a post does not affect how widely the platform’s algorithms distribute it. Other platforms deprioritize fact-checked posts in their algorithmic feeds to limit their reach. While Ovadya believes that continued investment in Community Notes is important, he says changing X’s core algorithm could be even more impactful.
“If X’s recommendation algorithms were built on the same principles as Community Notes and was actively rewarding content that bridges divides,” he said, “you would have less misinformation and sensationalist content going viral in the first place.”
___
Republished with permission under Creative Commons License CC BY-NC-ND 3.0 from ProPublica.