- Twitch's inadequate moderation tools and widespread child predatory behavior on the platform led the UK internet regulator to contact Twitch to discuss the site's poor child safety.
- Twitch made improvements to increase the safety of its young users after it emerged that nearly 2,000 accounts had following lists made up of at least 70% children.
- Twitch announced that it is updating the default privacy settings for its DM function, Whispers, and broadening the signals for identifying and terminating accounts linked to individuals under 13.
After receiving criticism for allegedly enabling child predatory behavior, Twitch (1), the popular live-streaming platform for video games with a large teen and youth audience, made adjustments intended to increase safety for its young users.
Twitch, owned by Amazon.com Inc., stated on Tuesday that grooming is especially pernicious because it can be concealed in plain sight, and there are fewer established industry practices for detecting it.
Twitch added that such predators are not welcome and will not be tolerated on the platform, and it provided an update on its ongoing work to combat them.
A Bloomberg report released in September detailed the widespread child predatory behavior on Twitch and the platform's inadequate moderation capabilities, noting that 1,976 Twitch accounts had following lists consisting of at least 70% children or young adults (2).
According to data from a researcher investigating live-streaming networks, predators targeted more than 279,016 kids. In the aftermath of the report, the UK internet regulator Ofcom contacted Twitch to discuss the site's poor child safety, and Bloomberg also found new predatory accounts and more youngsters being targeted (3).
A spokesperson for Ofcom stated in an email that they are actively reviewing whether Twitch's security measures are adequate to stop the most harmful content from being uploaded.
According to critics, the root of the child predatory behavior is the ease with which young people can register for a Twitch account, lie about their age, and immediately livestream themselves to anonymous, untraceable audiences.
TikTok also announced plans to raise its live-streaming age requirement from 16 to 18; the change took effect yesterday, November 23. YouTube, a service owned by Alphabet Inc., by default does not list mobile livestreams from users under 17 or make them searchable.
It is encouraging to see some online services, prompted by the current wave of scrutiny of the tech industry and new laws around the globe, taking the first steps toward adopting age assurance techniques that protect user privacy. Even so, none of the largest global platforms, nor those most popular with children, have yet adopted sufficiently thorough, audited age checks to keep kids safe online.
Many websites underestimate the dangers that younger users of their service may encounter, especially when they facilitate contact with potentially harmful adults about whom parents sometimes have little or no knowledge.
In 2020, Twitch also eliminated its "just started" feature, which made it simpler for predators to find underage accounts, but only in two content categories; the feature remains available in practically all others.
Twitch confirmed on Tuesday that it is expanding the signals it uses to identify and remove accounts belonging to users under the age of 13. It has also updated the default privacy settings for its direct messaging feature, Whispers, and disabled the use of specific search terms to find content on the platform.
Additionally, it has intensified cooperation with a third-party group that monitors grooming trends in the market and reports inappropriate behavior on the site.
Twitch also said it has completed its acquisition of Spirit AI, which uses language processing to comb through online chat features, to aid in developing tools that detect harmful text on Twitch.