Platforms like Twitch have tried to combat harassment against women. It’s not working
Last June the gaming industry had a moment of reckoning. In a trickle that became a flood, female and non-binary online gamers and streamers began posting stories of online and offline harassment by men, a problem they said was rife across the industry.
Accusations of misogynistic comments, threats of rape, revenge porn and doxxing spread to other social media platforms, not just the ones centered on gaming. From YouTube to Twitter and Reddit, big-name content creators — almost exclusively male — were getting away with creating online worlds that amplified the harassment women have fought for decades to eradicate from their workplace.
That online platforms function as workplaces for content creators and that harassment and abuse threaten their livelihoods is rarely addressed in discussions on platform regulation and the industry’s self-policing policies. Many of the users involved are financially dependent on ad revenue from their online streams or posts, making these platforms essentially their place of business.
“In a normal workplace if your employees are getting harassed this way, you would do something,” said Karen Skardzius, who researches Twitch streamers and platform regulation at York University in Toronto. “If this was a store and you had someone come in and scream all these expletives and hateful language at one of your employees, you would chuck them out of the store. But instead, here, that’s engagement with the platform.”
Over several days in June 2020, posts appeared by the dozens, and later by the hundreds, ranging from accusations of inappropriate comments during gaming streams to rape at industry conferences. One New York-based streamer, Jessica Richey, started collecting the posts in a Google spreadsheet. By July 3, it listed over 400 accusations of harassment, manipulation and sexual assault.
Platforms and gaming executives responded with promises to rid their spaces of bad behavior. “I’ve heard your voices,” tweeted Twitch CEO Emmett Shear on June 23. He promised to extinguish “systemic sexism, racism and abuse” in the gaming industry. Nearly six months later, Twitch, which is owned by Amazon, announced a new hateful conduct and harassment policy, which went into effect on January 22.
The updated policy eschewed any major upheaval on the company’s gaming network. It now promises a “much lower tolerance for objectifying or harassing behavior” and a ban on sending unsolicited nude images and videos, an issue many users flagged in June 2020. Users can now report other gamers for making repeated comments on their perceived attractiveness.
“Our recent policy updates take a clearer and more consistent stance against hate and harassment, and enable us to enforce our guidelines more consistently,” said a Twitch spokesperson in an email. They added that the policy was just one of “a number of projects underway to address hate and harassment.”
The upgraded policy puts Twitch a step ahead of other platforms, such as YouTube, Reddit, and Twitter, where female and non-binary content creators say harassment is also pervasive.
For YouTuber Pieke Roelofs, the threats haven’t stopped coming. After accusing professional YouTuber Alexander McKechnie of raping her in 2016, Roelofs has faced a stream of death threats and hateful messages from his fans, who number in the millions. McKechnie’s videos about science and the future have been collectively viewed over 145 million times on his channel. Roelofs says this dedicated fan base has been waging a relentless campaign against her.
YouTube, Twitter, Reddit — name the platform and Roelofs has been harassed, doxxed and threatened by other users. After a court case in the Netherlands, where Roelofs lives, opened a criminal investigation in 2018, the abuse only intensified. Today she is left fending off online hate from McKechnie’s fans with the limited toolkit provided by the platforms where she makes her living.
Roelofs’ income took a big hit. Searching her name online pulls up pages of tweets, subreddits and comment threads calling her a liar and worse. With limited options for reporting and removing the content, she says she’s left with a choice: putting up with the stream of online hate or leaving her professional field entirely. Since the abuse began she’s turned off comments on her videos, resulting in YouTube’s algorithm downgrading her content.
“People are taken hostage by these companies and their rules, people feel they can’t leave,” Roelofs said. “I want them to start taking responsibility for these huge worlds they have created online.”
McKechnie, known by his username Exurb1a, did not respond to multiple requests for comment.
Google-owned YouTube has faced several high-profile calls for greater accountability for harboring harassment and hate speech since Roelofs first reported McKechnie’s channel in 2017.
In June 2019, YouTube came under scrutiny for allowing Steven Crowder, a right-wing video blogger who at the time had 3.8 million subscribers, to stay on the platform after he sent homophobic and racist messages to journalist Carlos Maza. Following a public outcry, YouTube updated its anti-harassment policy and demonetized Crowder’s account, temporarily stripping him of the ability to generate ad revenue from his videos while keeping him on the platform. The actions made Crowder a hero on the right, and he has gained 1.4 million subscribers since his temporary ban.
Accusations that Twitter, Facebook, YouTube and other social media giants over-regulate content to discriminate against conservative voices stand in stark contrast to complaints that the platforms are toothless when it comes to content moderation.
Moments of reckoning come and go, say many women and non-binary content creators, marked by initial optimism, strongly worded statements from industry CEOs and an ensuing lack of meaningful change.
In some cases, as with Crowder, a platform’s actions — such as demonetizing an account or a temporary ban from the website — backfire, ultimately fueling increased popularity for the person targeted by a platform for punitive measures.
New moderation policies unveiled by platforms with much fanfare are often framed as “updates” or “clarifications” to pre-existing language.
Skardzius says she often sees women resorting to posting their own detailed harassment policies on their channels, outlining what will get another user banned or reported. As for male streamers, “their profiles don’t have that,” she said. “When they do have rules it’s something like ‘no politics and no religion.’ They’re just not targeted by this kind of stuff.”
Other streamers rely on customized bots to filter out hateful content on their accounts. Natalie Casanova — known as ZombiUnicorn on Twitch, where they were named streamer of the year in 2020 — protects their profile with bots that screen out messages from other users for words like “whore” and “rape.”
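The word-screening bots Casanova describes work by checking each incoming chat message against a list of blocked terms before it reaches the streamer. A minimal sketch of that idea is below; the blocklist, function names and matching logic are illustrative assumptions, not the actual tools used on Twitch.

```python
# Illustrative sketch of a chat word-filter bot, assuming a simple
# word-level blocklist. Real moderation bots are more sophisticated
# (pattern variants, spam detection, user trust levels).
import re

# Terms named in the article; streamers configure their own lists.
BLOCKLIST = {"whore", "rape"}

def is_allowed(message: str) -> bool:
    """Return False if the message contains any blocklisted word."""
    words = re.findall(r"[a-z]+", message.lower())
    return not any(word in BLOCKLIST for word in words)

def filter_chat(messages: list[str]) -> list[str]:
    """Keep only the messages that pass the blocklist check."""
    return [m for m in messages if is_allowed(m)]
```

A filter like this runs on every message before it is displayed, which is why streamers can keep their chats open while still screening out the worst abuse automatically.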
Casanova has especially relied on these filters since last June, when they accused British YouTuber Tom Cassell, known online as Syndicate, of rape. Cassell still streams on Twitch to three million followers nearly every day. He has almost 10 million subscribers on YouTube.
Cassell did not respond to a request to comment for this piece.
Hate-filled messages continue to find Casanova, despite the bots and the complaints they have submitted to YouTube and Twitch. YouTube never responded to Casanova’s allegations against Cassell. Twitch sent Casanova a confirmation email on July 8, 2020 saying it had received the message and asking for more information. Casanova attached the police report they filed concerning the alleged rape, contact information for the police detective examining the case and dozens of witness statements. Casanova has not heard from the platform since.
“Nothing has literally ever happened,” Casanova said. “I think at least getting an email from this alleged investigation team saying, ‘Hey, sorry it’s been like eight months, but we came to a decision and we couldn’t do anything’ … that would at least be a first step.”
The women and non-binary gamers and streamers interviewed for this piece did offer their own ideas for reporting tools they wish were on the negotiating table. Bans based on IP addresses, rather than usernames, could slow down how quickly harassers are able to rejoin platforms after being banned, said Casanova.
Skardzius says many of the women she spoke to for her dissertation wanted more reliable channels for raising complaints with actual people working at these companies, rather than anonymous chat bots.
In the meantime, the abuse across the platforms carries on for creators like Roelofs and Casanova. Roelofs’ most recent death threat came via a direct message on Twitter on January 31, accompanied by photographs of dead bodies. Casanova is messaged by the same troll every day from an anonymous account on Twitter. Every time they block the account, the user messages again from a new one.