Elmo’s account on X, the social media platform formerly known as Twitter, was hacked for several hours. An unidentified user published hateful and violent posts while the account remained active. The hack occurred on Tuesday, March 29th, and has raised significant alarm among the character’s followers and advocacy groups. With more than 650,000 followers, Elmo’s account is a significant presence in the social media ecosystem.
The hacker immediately unleashed a torrent of vitriolic messages, ranging from explicit antisemitic and racist language to vile comments about former President Donald Trump and the late sex offender Jeffrey Epstein. The posts, which included the n-word and other racial slurs, almost instantly drew outrage and condemnation from several national Jewish advocacy organizations.
In response to the incident, a spokesperson for Sesame Workshop stated, “Elmo’s X account was briefly compromised yesterday by an unknown hacker who posted disgusting messages, including antisemitic and racist posts. The account has since been secured.” The reassurance reads as damage control: by the time the statement was released, the offending posts had already been deleted from the account.
As of Monday, Elmo’s last post on X dated back to July 12: a photo of Elmo alongside Tango, another character from the beloved children’s show “Sesame Street.” The jarring contrast between that sweet, kid-friendly content and the hacker’s perverse messages underscores how egregious the breach was.
The breach has, unsurprisingly, sparked renewed debate about the effectiveness of X’s moderation and security features. Even the platform’s owner, Elon Musk, acknowledged the incident, saying the harmful messages were being dealt with. This is not the first time X has come under fire over hate speech: shortly after Musk took over, the platform faced an advertiser boycott by companies that did not want their ads appearing alongside hateful and harmful content.
The hacked messages have also brought broader concerns to light about how AI-powered tools moderate online content. As these tools become more integrated into social media platforms, questions persist about their effectiveness at preventing hate speech and upholding community standards.