Social media platform X incentivized the spread of misinformation following the Bondi Beach shootings, according to a digital media expert. In the hours after the attack, which left 15 people dead and dozens injured, false and misleading claims flooded social media platforms.
Misinformation and Death Threats
Sydney man Naveed Akram received death threats after being wrongly identified on X as one of the gunmen, with some posts including his personal information, such as his university. Mr. Akram, who shares a name with one of the shooters, pleaded with people to stop spreading the misinformation in a video posted on the Facebook page of the Pakistan consulate in Sydney.
“It was a real nightmare for me, seeing photos of my face shared on social media, wrongfully being called the shooter,” the Pakistani national told the ABC. He reported the misinformation to police, but was told they could not take action and was advised to deactivate his accounts.
While some posts misidentifying him were taken down, others remain online on X and other platforms. “I am still shaking. This has put me at risk, and also my family back home in Pakistan [at risk],” he said. “My mum broke down and feels in danger.”
Social media users also posted a video of fireworks, falsely characterizing it as a celebration in the western Sydney suburb of Bankstown by “Arabs” or “Islamists.” A local community organization clarified that the display was for Christmas celebrations. Community notes were later added to some posts, and some were deleted, but the video continued to be reposted and mischaracterized.
Other misinformation shared on X included posts wrongly identifying the shooters as former Israel Defense Forces members or as Pakistani nationals, claims of shootings in other eastern suburbs, and assertions that the tragedy was a “false flag” operation. In fact, one shooter was originally from India, while the other was born in Australia.
‘Economy Around Disinformation’
Disinformation expert Timothy Graham said X continues to be an influential platform where “key false narratives” originate and spread. These narratives can be unintentionally misleading or deliberately deceptive, according to Dr. Graham, an associate professor of digital media at the Queensland University of Technology.
“The biggest takeaway for me really is that the platforms, X in particular, really incentivise this through their design features … unfortunately, this both propels and rewards [misleading content],” he said. Dr. Graham attributes this to X’s monetization program, where users are paid for engagement on their posts. X states that earnings are calculated based on verified engagements, such as likes and replies.
Dr. Graham explained that following events like the Bondi shooting, people are desperate for information, and misleading content often exploits this desperation, driven primarily by financial motives. “People are incentivised to share content that they know is going to get a lot of clicks irrespective of its quality, irrespective of whether it’s true or factual, simply because they can make money out of it, and this is obviously a really big issue,” he said. “There’s basically an economy around disinformation now.”
X’s Creator Revenue Sharing program restricts monetization of content relating to tragedy, conflict, mass violence, or the exploitation of controversial political or social issues, but how consistently these restrictions are enforced is unclear. To join the program, a user must be a paid X subscriber and their account must show significant engagement, including 5 million organic impressions in the past three months and at least 500 verified followers.
Moderation by ‘Community Notes’
Dr. Graham also said X’s “community notes” moderation system is unsuitable for fast-breaking, divisive news events like the Bondi shooting. Under the system, users can collaboratively add helpful notes to potentially misleading posts. X states that notes only appear when rated “helpful” by people with diverse perspectives.
Dr. Graham said community notes work for some content but break down during polarizing events, because the system requires agreement between people with opposing views before a note is shown. He noted that notes often take too long to appear, or never appear at all, while the misinformation continues to spread. “Meanwhile, they’re racking up the views. They are being reported on. They are being picked up on by [other channels]. It’s spreading like wildfire, and you know 10, 12, 24 hours later we still don’t see any context added.”
Social Media Misinformation an ‘Infrastructure Problem’
Dr. Graham said solutions to misinformation are complex and require balancing free speech with public safety. He suggested addressing the incentives offered by platforms and increasing access to social media data. “We’re living in a dark age of access to social media data,” he said.
He said stakeholders previously had insight into the levels of hate speech and foreign interference on platforms, and into the types of content being shared. He advocated for regulations requiring platforms to disclose the specifications of their algorithms and content-boosting practices. Earlier this month, the European Union fined X 120 million euros (A$210 million) for breaches of its Digital Services Act, including creating barriers for researchers seeking access to public data.
“The European Union’s Digital Services Act faces this issue head-on, and I think it’s doing a really excellent job of trying to get inroads into the platforms,” Dr. Graham said. “They need to share data with people. We need to know what’s going on.” Dr. Graham concluded that misinformation on social media is an “infrastructure problem.”
“We need to recognise that platforms like X are now modern infrastructure, like bridges are infrastructure, like telephone wires are infrastructure,” he said. “If there’s something problematic about those, then we need to change them; otherwise, they’re going to keep doing the same things.”