Blog Entry #4: Assessing Current Platforms' Attempts to Curb Misinformation
I've chosen to analyze X and TikTok, as there's plenty of discourse surrounding the accuracy of their content and their ability to curb misinformation. I'll start with individual analyses and end with a summary comparing the two platforms, along with my own suggestions.
X
In 2022, X introduced its "crisis misinformation policy." According to the company's blog, teams worked on the policy for about a year, defining a crisis as "situations in which there is a widespread threat to life, physical safety, health, or basic subsistence." Because this definition aligns with the United Nations' definition of a crisis, the blog states that X (known as Twitter at the time) will be informed of global crises by the United Nations Inter-Agency Standing Committee.
The goal is to help decipher whether information is true or false, which can be difficult during a crisis. The blog states that X would draw on a range of sources, including conflicting ones, to help verify information. Once a post has been found to be potentially misleading, it will no longer be amplified or recommended anywhere on the platform, and adding a warning to the post will be prioritized. Exempt from these measures are fact-checking and debunking attempts, strong commentary, and first-person accounts.
X implemented the first version of this policy following the start of the war in Ukraine, about three months before the policy was formalized. According to the blog, X found that requiring users to click through a warning and declining to recommend a post is more effective than removing it outright. On occasion, X states, disabling interactions with a post is more effective still. I've attached an example from the blog of what a post with the warning looks like.
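To make that decision flow concrete, here is a minimal sketch in Python of how such a triage might work. To be clear, this is my own illustration of the policy as the blog describes it, not X's actual code; every field and function name here is hypothetical.

```python
from dataclasses import dataclass

# Hypothetical post representation; the field names are mine, not X's.
@dataclass
class Post:
    text: str
    potentially_misleading: bool   # verdict from cross-checking a range of sources
    is_fact_check: bool            # fact-checking or debunking attempt
    is_commentary: bool            # strong commentary
    is_first_person: bool          # first-person account

def crisis_triage(post: Post) -> dict:
    """Sketch of the crisis-misinformation flow described in X's blog."""
    # The policy exempts fact-checks, commentary, and first-person accounts.
    exempt = post.is_fact_check or post.is_commentary or post.is_first_person
    if post.potentially_misleading and not exempt:
        return {
            "recommend": False,            # stop amplifying/recommending the post
            "warning_label": True,         # prioritize a click-through warning
            "interactions_enabled": True,  # occasionally disabled in severe cases
        }
    return {"recommend": True, "warning_label": False, "interactions_enabled": True}
```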
Despite these efforts, X appears to have failed to effectively curb misinformation. In 2023, the EU warned that X was the biggest source of misinformation among the major platforms. According to the Forbes article reporting that warning, misinformation and disinformation have been on the rise on X ever since Elon Musk acquired it.
Musk also ended free access to X's APIs. APIs (application programming interfaces) are essentially tools that let pieces of software share and use one another's data. The repercussion of losing free access is that users now have limited access to the APIs unless they are willing to spend hundreds to thousands of dollars per month. This means that users can no longer monitor and analyze public conversations or conduct academic or journalistic research for free. Musk claimed that this was done in part to protect against the misuse of X's data. While this may not seem directly related to curbing misinformation, it is.
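To illustrate what researchers lost, here is a minimal sketch of the kind of script that could once be run for free against X's API. The endpoint is X's real v2 "recent search" endpoint, but the query and token handling are illustrative, and today this call requires a paid developer tier.

```python
import os
import requests

# X (Twitter) API v2 "recent search" endpoint.
URL = "https://api.twitter.com/2/tweets/search/recent"

# A bearer token from a developer account (now a paid tier), read from the environment.
headers = {"Authorization": f"Bearer {os.environ['X_BEARER_TOKEN']}"}

# Example research query: recent English-language posts on a topic, excluding retweets.
params = {"query": "misinformation -is:retweet lang:en", "max_results": 50}

response = requests.get(URL, headers=headers, params=params, timeout=30)
response.raise_for_status()

# Print a short preview of each matching post.
for tweet in response.json().get("data", []):
    print(tweet["id"], tweet["text"][:80])
```

Scripts like this underpinned much of the academic and journalistic monitoring described above; put behind a paywall, that kind of public-conversation analysis becomes far less accessible.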
Per the United Nations' page on countering disinformation, maximizing access to information not only helps curb misinformation and disinformation but also increases transparency, which in turn builds trust. In other words, Musk's choice to end free API access, and thereby limit access to information, harms not only users but also X itself.
Additionally, in 2024 the BBC reported finding users who make "thousands of dollars" by spreading misinformation on Musk's version of the platform. One of the ways they do this is by interacting with one another's posts (especially by sharing them in groups and forums). By that very act, they circumvent X's policy of not recommending flagged posts, as more interaction boosts a post's reach.
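A toy model shows why this gaming works, assuming (as the reporting suggests) that recommendation scores weight engagement heavily. The formula, weights, and numbers below are entirely my own invention for illustration.

```python
def toy_rank_score(likes: int, replies: int, reposts: int, flagged: bool) -> float:
    """Invented engagement-weighted score: flagging dampens, but doesn't zero, reach."""
    engagement = likes + 2 * replies + 3 * reposts
    penalty = 0.1 if flagged else 1.0  # assumed down-ranking multiplier for flagged posts
    return engagement * penalty

# An ordinary post with modest organic engagement:
print(toy_rank_score(likes=40, replies=5, reposts=10, flagged=False))     # 80.0

# A flagged post pumped by a coordinated group trading interactions:
print(toy_rank_score(likes=600, replies=150, reposts=200, flagged=True))  # 150.0
```

In this toy model, enough coordinated engagement simply overwhelms the down-ranking penalty, which is exactly the loophole the BBC's reporting describes.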
TikTok
As of 2020, TikTok stated: "Our Community Guidelines prohibit misinformation that could cause harm to our community or the larger public, including content that misleads people about elections or other civic processes, content distributed by disinformation campaigns, and health misinformation." To meet these guidelines, TikTok said it has partnered with fact-checking experts, including PolitiFact and Lead Stories, as well as the U.S. Department of Homeland Security. Another step it has taken is releasing a "Be Informed" series that aims to educate users on media literacy. TikTok has also been releasing transparency reports in an attempt to keep users and lawmakers informed of its actions.
On the users' end, TikTok offers opportunities for users to debunk videos. The feature is similar to X's crowd-sourced Community Notes system and will not replace TikTok's current partnerships with fact-checkers. As of April 16th, 2025, anyone can become a "footnotes" contributor if they are over the age of 18, have not recently violated community guidelines, and have been on the platform for at least six months. As this is a recent development, I don't have an example to share. However, Harvard did conduct a study on the effects of users correcting one another in stitched videos. Per the study's description:
"In the present study (N=1,169), participants either watched and rated the credibility of (1) a misinformation video, (2) a correction video, or (3) a misinformation video followed by a correction video (“debunking”). Afterwards, participants rated both a factual and a misinformation video about the same topic and judged the accuracy of the claim furthered by the misinformation video."
The results showed that the debunking videos were effective by a significant margin. In theory, this means that both X's and TikTok's community-based fact-checking should be effective.
Unfortunately, similar to X, both the BBC and CNN report that TikTok carries a great deal of misinformation. As "footnotes" is only three days old as of this post, I can't share information on its effectiveness. I am curious to see whether rates of misinformation go down once it takes hold.
Summary and recommendations
Personally, I think the information I've found aligns pretty well with my experiences on both platforms. That being said, I think TikTok contains less misinformation than X. To be fair, TikTok's algorithm is one of the best, and I'm sure that shapes how much misinformation each user sees. However, I often find myself exiting X very quickly, as it appears to be filled with far more hatred and extremism than the political videos that appear on my TikTok feed. People on TikTok (or at least those who appear on my feed) tend to be much more eloquent, and the comment sections appear more respectful than those on X.
In summary, I would argue that both platforms need improvement in the same areas. Both have turned to crowd-sourced information and fact-checking, yet both still face frequent accusations (backed by documented statistics) that large amounts of misinformation circulate on their platforms. They've placed much of the responsibility on the user (which I agree is important) but have neglected to take responsibility themselves. Just as a user condones misinformation by neglecting to fact-check before sharing, or by choosing not to speak out against it, platforms make misinformation acceptable by choosing not to actively fight it; by that I mean taking on some of the responsibility to combat misinformation as their own.
Interestingly, one study of X found that requiring users to go through an extra click, or to assess posts before sharing, neither decreased nor increased the chances of sharing a true post. On the other hand, offering a fact-check decreased the odds of a true post being shared by 7.8%. Despite this, when users were "primed" to think about misinformation through a warning message, "Please think carefully before you retweet. Remember that there is a significant amount of false news circulating on social media," shares of true posts increased by 8.1%. I think this is because users want their biases confirmed, and the easiest way to do that is not to bother fact-checking at all. The warning message instead triggers a user's conscience, which is effective, as most people are uncomfortable with a troubled conscience. While there is much discourse surrounding the intersection of conscience and religion, for the sake of this post the two are treated as distinct and separate. Researcher and theologian Dr. David G. Kirchhoffer explains why the warning message is so effective through the lens of conscience: "Human persons are bound to follow their conscience because this is their subjective relationship to objective truth."
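For concreteness, here is what those percentages mean if, as is common in studies like this, the figures are odds ratios rather than raw percentage-point changes (an assumption on my part; the example numbers are mine):

```latex
% Assuming the study reports odds ratios; the example numbers are illustrative.
\[
  \mathrm{odds}(p) = \frac{p}{1-p}, \qquad
  \mathrm{odds}_{\text{new}} = 0.922 \times \mathrm{odds}_{\text{old}}
\]
% Example: if half of true posts were shared before (p = 0.5, odds = 1),
% the fact-check condition gives odds of 0.922, i.e. p = 0.922/1.922, about 0.48.
% The warning condition gives odds of 1.081, i.e. p = 1.081/2.081, about 0.52.
```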
Ultimately, it would be wrong for a platform to take down every single post that contains misinformation (satire would be swept up with it, and erasing evidence or history has never done anyone any good), so I think an approach that appeals to the conscience would be the most effective. While this technique would not eliminate misinformation, it would prompt people to use their media literacy skills to figure out what is true and what is not. If everyone actively fights misinformation and avoids echo chambers, it becomes much harder to spread misinformation, and much harder for people to stay comfortable confirming their biases.