UK Riots: It Can Happen Here
The recent far-right riots in the UK should serve as a warning about what can happen here in the US.
The riots began after a teenage boy attacked a children's dance class in Southport, England, on July 29, killing three young girls and injuring eight others. False information then spread online claiming the perpetrator was a Muslim migrant. In fact, he was born and raised in Britain and is not Muslim; his parents immigrated from Rwanda before he was born.
Extremist groups spread the misinformation and urged their followers to riot. Migrants, asylum seekers, and Muslims were targeted. The original source of the misinformation was a site called Channel3Now, which was initially reported to have ties to the Russian government, but that also appears to be false. The BBC found that the site was being run by a guy named Kevin in Houston, Texas, who wouldn't provide his full name. At the very least, pro-Kremlin Telegram channels were amplifying the misinformation.
As of Wednesday, more than 1,000 people had been arrested in connection with the riots and 575 had been charged.
All the elements that led to the UK riots are present here in the US:
We have active right-wing extremist groups that traffic in racist and xenophobic sentiment.
Social media companies have cut back on their efforts to prevent the spread of misinformation. And the owner of X, formerly Twitter, is himself a purveyor of online misinformation. In reaction to the UK riots, Elon Musk contributed to the incitement by posting that "Civil war is inevitable" in reply to a post blaming the riots on "mass migration and open borders."
We have political leaders spreading fear with race-baiting and anti-immigrant messages. In the UK, Nigel Farage, the leader of a right-wing party, helped stoke and spread misinformation. And here in the US, Republican presidential candidate Donald Trump often promotes fear of migrants with misinformation and messaging that has gone from subtly racist to blatantly racist.
What can we do?
First, do no harm. Make sure you aren't helping to spread misinformation online: check your sources and be extra careful about sharing breaking news.
Second, if you see friends or family members sharing misinformation, particularly dangerous misinformation, say something. Most people aren't spreading it intentionally and are open to the truth. For the rest, you can at least place the correct information in their comments.
Consider these steps a civic duty. We can all play a role in preventing something like the UK riots from happening here.
Additional reading:
Arc Digital: “Homegrown Violence, Globalized Extremism”
Reuters: “Explainer: Why are there riots in the UK and who is behind them?”
The Dispatch: “The U.K. Riots, Explained”
Watch:
NPR: “Some Christians have been primed for a kind of religious revival centered on Trump”
AVC board member Caleb Campbell was on NPR Thursday on the topic of TPUSA's outreach to pastors. You can listen on the NPR website.
What Else We’re Reading
RS: “How Elon Musk and X Became the Biggest Purveyors of Online Misinformation”
The removal of Twitter’s (imperfect) guardrails meant that suddenly, for the first time, a major online resource many relied on for news and information was overrun by the manipulative trolls formerly relegated to the fringes of the social web. Misinformation about wars, health, climate change, elections and more flourished alongside violent rhetoric and hate speech, in a digital forum that has actual influence on the course of human events.
At the center of it all is Musk, whose turn to hard-right ideology has led him to spout and amplify untruths with abandon, algorithmically forcing them onto an audience of millions. But he wasn’t always so deep into the reservoir of easily debunked rumors and bogus claims. In this timeline, we trace how he turned X into a misinformation machine.
WaPo: “See why AI detection tools can fail to catch election deepfakes”
Deepfake detectors have been marketed as a silver bullet for identifying AI fakes, or “deepfakes.” Social media giants use them to label fake content on their platforms. Government officials are pressuring the private sector to pour millions into building the software, fearing deepfakes could disrupt elections or allow foreign adversaries to incite domestic turmoil.
But the science of detecting manipulated content is in its early stages. An April study by the Reuters Institute for the Study of Journalism found that many deepfake detector tools can be easily duped with simple software tricks or editing techniques.
Meanwhile, deepfakes and manipulated video are proliferating.