A series of Twitter hashtags falsely accusing Muslims around the world of deliberately spreading the novel coronavirus has pushed Islamophobic disinformation and hate speech to 170 million users since the outbreak of the pandemic, according to new research.
The report comes from Equality Labs, a New York-based South Asian community advocacy group. It shows that the hashtag #Coronajihad has run rampant on Twitter since late March. Posts featuring the hashtag and a range of anti-Muslim rhetoric have also been shared widely on platforms including Facebook, WhatsApp and Instagram.
“What happens on social media matters,” said Equality Labs’ executive director Thenmozhi Soundararajan. “When platforms like Twitter fail to address hate speech and disinformation in a timely manner, there are consequences. This was a preventable tragedy.”
The organization calculates that more than 293,000 conversations pushing Islamophobic Covid-19 content have taken place on Twitter, where they have generated more than 700,000 points of engagement, including likes, clicks, shares and comments. It has also found that the majority of users creating and sharing such content are young men between the ages of 18 and 34, based in India or the United States.
The report, which is due to be published tomorrow, notes that Islamophobic coronavirus-related hate speech and disinformation first appeared on Twitter as early as March 1, weeks before countries around the world began to enforce lockdowns.
In many cases, Islamophobic content blaming Muslims for the spread of the virus was first posted to Twitter by Indian Hindu nationalists, but was later amplified by global Islamophobic individuals and groups. Hate speech and disinformation tied to Covid-19 also emanated from Islamophobic social media accounts, pages and groups based in the West.
According to the report, #Coronajihad first gained popularity in India as part of an ongoing campaign by Hindu nationalists targeting Indian Muslims. This campaign includes a widely criticized tweet made last month by India’s ruling Bharatiya Janata Party, declaring that it would “remove every single infiltrator from the country, except Buddha, Hindus and Sikhs,” via the introduction of a national registry.
The term “infiltrators” is commonly seen in India as an Islamophobic dog whistle, pointing to minority groups, including millions of Indian Muslims, Bangladeshi immigrants and Rohingya refugees.
Anti-Muslim sentiments have been on the rise in India for years. In late February, Muslim neighborhoods and shops in Delhi were targeted, after posts inciting violence went viral on both Facebook and Twitter. The rampages that followed resulted in 53 deaths, over 200 injuries and 2,000 arrests.
“As a result of these hashtags, we saw Muslims in India being denied healthcare, women who were pregnant and in labor were turned away from hospitals, there was widespread discrimination against Muslim businesses, which were boycotted,” said Soundararajan. “This discrimination goes beyond Muslims. Public health requires a sense of collective trust. All of that stops when you have the arms of a government targeting one community and social media platforms look unwilling to do anything about it.”
The Equality Labs report reveals that Islamophobic social media content related to Covid-19 often reflects common themes. These include Muslims being depicted as the virus and linked to bioterrorism, with Covid-19 as the weapon of choice. Other posts have falsely claimed that Muslims are testing positive for the coronavirus at a higher rate than others and that they are intentionally spreading the disease to non-Muslims as a form of “jihad.”
#Coronajihad was just one of a number of social media hashtags that attempted to blame Muslims in India for spreading Covid-19. Others included #BiologicalJihad, #MuslimVirus, #MuslimDistancing, #Jihadivirus and #BanTheBook, referring to the Qur’an, the holy book of Islam.
While #Coronajihad has recently been blocked in Twitter search results, Soundararajan said the platform’s slow response amounted to serious negligence.
“The reaction of platforms like Twitter to Islamophobic disinformation and hate speech is a case of way too little and way late,” she said. “They have the ability to move and take down content as soon as it is launched. Yet, as our report shows, Islamophobic content has been targeting Muslims in a dangerous way for months. Twitter just chose not to do anything about it. The failure of moderation is a complete dereliction of duty for the market,” she added.
The report’s authors make a number of recommendations for social media platforms. These include prioritizing the removal and prevention of Islamophobic coronavirus-related disinformation and hate speech. Other recommendations include more attentive moderation and increased engagement with internet freedom experts, Muslim civil society advocates and public health officials.
Twitter declined to answer specific questions, sent via email for this article, about Islamophobic hashtags and content. A spokesperson for the company did, however, email a statement detailing the platform’s zero-tolerance approach to online threats, including instances of harassment and hateful conduct.
“We’re prioritizing the removal of content when it has a call to action that could potentially cause harm,” it read. “Since introducing these new policies, and as we’ve continued to double down on tech, our automated systems have challenged more than 4.3 million accounts which were targeting discussions around Covid-19 with spammy or manipulative behaviors.”