Social Media Algorithms Exposing Teenage Boys to Violent Content, Former Employees Warn

Former employees of major social media companies have warned that algorithms on platforms like TikTok and Instagram are unintentionally directing teenage boys towards violent and misogynistic content. This issue has raised serious concerns about the safety of young users online.

The story centers on Cai, who was 16 when the harmless content on his social media feeds suddenly gave way to disturbing videos of violence and misogyny. “It was like everything took a dark turn,” Cai told BBC Panorama. “One minute it was a cute dog, the next, I was seeing people getting hit by cars and disturbing influencer rants.”

Andrew Kaung, a former user safety analyst at TikTok, found similar patterns during his tenure. Working from December 2020 to June 2022, Kaung and a colleague analyzed content recommended to young users and were alarmed by the prevalence of harmful material targeted at teenage boys. Despite AI tools designed to filter out inappropriate content, Kaung discovered that many harmful videos bypassed early moderation stages and reached young audiences.

TikTok and other social media giants rely on AI to screen out most harmful content, but Kaung’s findings revealed that these tools often failed to catch everything. At TikTok, for instance, videos were only reviewed manually once they crossed a threshold of 10,000 views, which meant harmful content the AI missed could circulate widely before being addressed.
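To illustrate the kind of gap Kaung described, the sketch below shows, in simplified Python, how a view-count threshold can delay human review. The names, fields, and logic here are illustrative assumptions for explanation only, not TikTok’s actual pipeline.

```python
# Illustrative sketch only: a simplified view-threshold moderation flow.
# All names, fields, and logic are assumptions; this is not TikTok's real system.

from dataclasses import dataclass

MANUAL_REVIEW_THRESHOLD = 10_000  # views before a video is queued for human review


@dataclass
class Video:
    video_id: str
    views: int = 0
    ai_flagged: bool = False  # caught by automated screening at upload


def route_for_review(video: Video, manual_queue: list) -> None:
    """Queue a video for human review only once it passes the view threshold."""
    if video.ai_flagged:
        return  # already removed or restricted by the automated screening stage
    if video.views >= MANUAL_REVIEW_THRESHOLD:
        manual_queue.append(video)
    # Below the threshold, a harmful video the AI missed simply keeps circulating.
```

The gap in such a flow is exactly the one described above: anything the automated screening misses stays invisible to human moderators until it has already been seen thousands of times.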

Kaung raised concerns about these practices but faced resistance due to fears over the cost and labor required to implement changes. His recommendations, including more specialized moderators and clearer content labeling, were not adopted at the time. TikTok has since claimed to have improved its moderation system and says that 99% of removed content is flagged by AI or human moderators before reaching 10,000 views.

Meta, which owns Instagram and Facebook, has similarly been criticized for its approach to content moderation. Former employees have echoed Kaung’s concerns, noting that while algorithms effectively identify popular content, they often fail to differentiate between harmful and harmless material.

For Cai, efforts to filter out violent content have been unsuccessful. Despite using the platforms’ tools to indicate disinterest in such material, he continues to receive disturbing recommendations. “You get these images stuck in your head,” he said. “It’s like they stain your brain.”

Harmful content is not an issue confined to boys. Ofcom, the UK’s media regulator, notes that while content harming young women, such as videos promoting eating disorders, has drawn significant attention, the algorithmic pathways pushing hate and violence towards young men have received far less scrutiny.

New UK regulations set to be enforced by 2025 aim to address these issues by requiring social media companies to verify users’ ages and restrict the recommendation of harmful content. Ofcom has indicated that it will impose fines and could pursue criminal prosecutions if companies fail to comply.

TikTok has stated that it employs over 40,000 safety personnel and invests heavily in content moderation. Similarly, Meta claims to offer numerous tools for a positive experience and actively seeks feedback for policy improvements.

As Cai continues to navigate the challenges of social media, he advocates for more effective tools to manage content preferences. “It feels like social media companies don’t respect user opinions as long as they’re making money,” he said.

For now, the debate continues over how to balance user engagement with the need for safer online spaces for young people.

Background

Social media platforms like TikTok and Instagram have become integral parts of teenage life, offering a mix of entertainment and social interaction. However, recent investigations have unveiled that the algorithms driving these platforms are inadvertently exposing young users, particularly boys, to violent and misogynistic content.

Cai’s Experience

  • Initial Exposure: Cai, now 18, first encountered disturbing content on his feeds at the age of 16. His experience began with innocuous videos but quickly shifted to more violent and misogynistic material.
  • Content Analysis: Cai observed a pattern in which his interest in UFC (Ultimate Fighting Championship) led to increased exposure to violent content. Despite his attempts to filter out such material, the algorithms continued to push similar content to his feed.
  • Psychological Impact: Cai described the impact of these videos as deeply unsettling, noting that the violent images “stain your brain” and affect his thoughts throughout the day.

Andrew Kaung’s Findings

  • Role at TikTok: Andrew Kaung worked as an analyst for TikTok, where he investigated the content recommendations being served to users, including teenagers.
  • Algorithmic Issues: Kaung discovered that TikTok’s algorithms often recommended harmful content to teenage boys. The AI tools, designed to filter content, were found lacking, particularly when videos did not reach the threshold for manual review.
  • Comparative Analysis: Kaung’s observations were not limited to TikTok. He noted similar issues during his previous tenure at Meta, which owns Instagram, reporting that both platforms’ reliance on engagement metrics allowed harmful content to proliferate, as sketched below.
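The engagement-driven ranking that former employees describe can be sketched in a few lines. The weights and field names below are purely illustrative assumptions; the point is simply that nothing in the scoring distinguishes harmful from harmless material.

```python
# Illustrative sketch only: ranking candidates purely by engagement signals.
# Weights and field names are assumptions; real recommender systems are far
# more complex, but the underlying incentive is the same.

def engagement_score(video: dict) -> float:
    """Score a video from watch time, likes, and shares; content is never examined."""
    return (
        0.6 * video["watch_time_seconds"]
        + 0.3 * video["likes"]
        + 0.1 * video["shares"]
    )


def recommend(candidates: list, top_k: int = 10) -> list:
    # Whatever keeps users watching rises to the top of the feed,
    # whether it is a cute dog or a violent clip.
    return sorted(candidates, key=engagement_score, reverse=True)[:top_k]
```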

Industry Response

  • TikTok’s Measures: TikTok says that 99% of the content it removes for violating its rules is taken down by AI or human moderators before it reaches 10,000 views, and that it has improved its moderation practices since Kaung’s departure.
  • Meta’s Approach: Meta emphasizes its use of over 50 tools to ensure positive experiences for teens. However, concerns remain about the effectiveness of these tools in filtering out harmful content.

Regulatory and Safety Measures

  • Ofcom’s Findings: The UK media regulator Ofcom has highlighted the need for more stringent controls on content recommended to young users. While harmful content affecting young women has been in the spotlight, the pathways driving hate and violence towards teenage boys are now receiving increased attention.
  • Upcoming Regulations: The UK is set to introduce new regulations by 2025 that will enforce age verification and restrict the recommendation of harmful content. Ofcom will oversee the implementation and compliance of these regulations.

Challenges and Criticisms

  • Moderation Challenges: The effectiveness of content moderation on social media platforms is an ongoing challenge. Both TikTok and Meta face criticism for the lag in addressing harmful content and the difficulty in balancing user engagement with safety.
  • Industry Pushback: Efforts to improve moderation practices have faced resistance due to concerns over costs and the complexity of implementing changes. Kaung’s recommendations for clearer labeling and more specialized moderators were initially rejected.

Public and Expert Opinions

  • Cai’s Advocacy: Cai continues to advocate for better content filtering tools and greater respect for user preferences. He believes that social media companies need to take user feedback more seriously to create a safer online environment.
  • Expert Views: Experts and former employees stress the need for a fundamental shift in how social media platforms handle content recommendations, emphasizing the importance of prioritizing user safety over engagement metrics.

The issue of social media algorithms promoting harmful content to teenagers underscores a broader debate about online safety and the responsibility of tech companies. As regulatory measures are introduced and platforms refine their practices, the goal remains to create a safer and more respectful digital space for young users.

 
