Twitch updates policy to stop misinformation going viral on platform
In a bid to stamp out viral misinformation on its platform, Twitch has updated its Community Guidelines to prohibit “Harmful Misinformation Actors” from using its services.
Online platforms have been introducing more measures to stop rampant misinformation from going viral.
Services like Spotify and TikTok have ramped up efforts to stamp out misinformation; the former added new disclaimer messages to controversial podcast episodes, while the latter did the same in the form of video banners.
On March 4, Twitch joined the fray, announcing it had updated its Community Guidelines to prohibit what it described as “Harmful Misinformation Actors” from using its services.
“In order to reduce harm to our community and the public without undermining our streamers’ open dialogue with their communities, we prohibit harmful misinformation superspreaders who persistently share misinformation,” the company stated.
“We seek to remove users whose online presence is dedicated to persistently sharing widely disproven and broadly shared harmful misinformation topics.”
The key word is “persistently”: the policy won’t apply to individual statements or discussions, and the company stressed that regular content creators won’t be affected.
Twitch said the decision followed a partnership with “over a dozen researchers and experts” to understand how harmful misinformation spreads online and how to nip it in the bud on its platform.
However, the company assured users it will only take action against accounts that persistently share widely disproven and broadly shared harmful misinformation, with each case reviewed by a dedicated team on a case-by-case basis.