In recent weeks, TikTok has been inundated with a wave of bigoted and disrespectful content. This deeply alarming trend has prompted substantial outrage and skepticism about the platform’s content moderation efforts. A recent surge in AI-generated videos has worsened the problem, with millions of views already logged on these harmful clips. Most of these videos target Black people, frequently portraying us in racist, violent, or otherwise grotesque and dehumanizing ways, such as equating us to monkeys.
The rapid spread of this material follows a staggering increase in the use of deepfake technology, which drew heavy media criticism last year. Deepfakes have a deep and troubling connection to misinformation of all kinds and have even fueled the widespread generation of synthetic pornographic media, further complicating the online media ecosystem. Now users are abusing these tools to create malicious content, while platforms like TikTok struggle to figure out how to stop them.
The Impact of AI Deepfakes
AI text-to-image generators produced nearly all of the deepfake content in this recent explosion. Over the past year, these tools have given everyday internet users an unprecedented ability to create realistic but entirely fabricated videos that frequently violate ethical standards. The technology has since spread across social media, and like many technologies, it can be weaponized by people intent on pushing hate speech and racism.
One widely publicized case involved a Maryland principal who became the target of a false AI-created deepfake. The video inaccurately depicted him as racist, thrusting him into the national spotlight and a firestorm of criticism. This case highlights the sobering real-world impact of deepfake technology: reputations can be irreparably damaged by fabricated media.
Racism and Content Moderation Challenges
The racist AI videos emerging on TikTok primarily target Black individuals, reflecting a disturbing trend within the platform’s content ecosystem. Videos depicting Black people in humiliating ways have drawn millions of views, demonstrating the dangerous pull of hateful content. This pattern raises serious questions about TikTok’s content moderation policies and their efficacy in dealing with hateful conduct.
Even with new restrictions and stricter guidelines, TikTok continues to grapple with the near-impossible task of moderating the millions of posts its users create each day. The platform is expanding quickly and continues to draw a diverse user base, and that growth complicates its ability to track and respond promptly to instances of racism. As racist videos proliferate, users and advocacy groups are increasingly questioning whether the platform is meeting its responsibility to maintain a safe environment.
Growing Fears Surrounding AI Misuse
Recent examples of AI technology being used for nefarious purposes have deepened concerns over its potential to sow hate and spread misinformation. As new applications are developed, bad actors discover ever more powerful ways to exploit these tools, compounding the harms caused by deepfake content. This concern has fueled heated debates over the technology’s ethical ramifications, and many Americans now want the tech sector held accountable for reducing its harmful effects.
