
Social Media Algorithms Exposing Teenage Boys to Violent Content, Former Employees Warn

Former social media analysts warn that recommendation algorithms on platforms like TikTok and Instagram are unintentionally steering teenage boys toward violent and misogynistic content, raising serious concerns about the safety of young users online.

The story centers on Cai, a 16-year-old who initially enjoyed harmless content on his social media feeds, only to be suddenly bombarded with disturbing videos of violence and misogyny. “It was like everything took a dark turn,” Cai told BBC Panorama. “One minute it was a cute dog, the next, I was seeing people getting hit by cars and disturbing influencer rants.”

Andrew Kaung, who worked as a user safety analyst at TikTok from December 2020 to June 2022, observed similar patterns. He and a colleague analyzed the content being recommended to young users and were alarmed by how much harmful material was reaching teenage boys. Despite AI tools designed to filter out inappropriate content, Kaung found that many harmful videos slipped past the early stages of moderation and were served to young audiences.

TikTok and other social media giants rely on AI to screen out most harmful content, but Kaung’s findings revealed that these tools often failed to catch everything. At TikTok, for instance, videos that escaped automated removal were only reviewed by human moderators once they crossed a threshold of 10,000 views, a lag that allowed harmful content to circulate widely before being addressed.

Kaung raised concerns about these practices but faced resistance due to fears over the cost and labor the changes would require. His recommendations, including more specialized moderators and clearer content labeling, were not adopted at the time. TikTok has since said it has improved its moderation system and that 99% of the content it removes is taken down by AI or human moderators before reaching 10,000 views.

Meta, which owns Instagram and Facebook, has similarly been criticized for its approach to content moderation. Former employees have echoed Kaung’s concerns, noting that while algorithms effectively identify popular content, they often fail to differentiate between harmful and harmless material.

For Cai, efforts to filter out violent content have been unsuccessful. Despite using the platforms’ tools to indicate disinterest in such material, he continues to receive disturbing recommendations. “You get these images stuck in your head,” he said. “It’s like they stain your brain.”

Harmful content is not an issue affecting boys alone. Ofcom, the UK’s media regulator, has noted that while content harming young women, such as videos promoting eating disorders, has been widely highlighted, the algorithms driving hate and violence toward young men have received far less attention.

New UK regulations, due to come into force in 2025 under the Online Safety Act, aim to address these issues by requiring social media companies to verify users’ ages and to stop recommending harmful content to minors. Ofcom, which will enforce the rules, has indicated that it will impose fines on companies that fail to comply and could pursue criminal prosecutions.

TikTok has stated that it employs over 40,000 safety personnel and invests heavily in content moderation. Similarly, Meta claims to offer numerous tools for a positive experience and actively seeks feedback for policy improvements.

As Cai continues to navigate the challenges of social media, he advocates for more effective tools to manage content preferences. “It feels like social media companies don’t respect user opinions as long as they’re making money,” he said.

For now, the debate continues over how to balance user engagement with the need for safer online spaces for young people.


