TikTok and other social media companies use AI tools to remove the vast majority of harmful content and to flag other content for review by human moderators, regardless of how many views it has had. But the AI tools cannot identify everything.

Andrew Kaung says that during the time he worked at TikTok, any video that was not removed or flagged to human moderators by AI, or reported to moderators by other users, would only be reviewed manually again if it reached a certain view-count threshold.

He says at one point this was set to 10,000 views or more. He feared this meant some younger users were being exposed to harmful videos. Most major social media companies allow people aged 13 or above to sign up.

TikTok says 99% of content it removes for violating its rules is taken down by AI or human moderators before it reaches 10,000 views. It also says it undertakes proactive investigations on videos with fewer than this number of views.
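For readers who find the workflow easier to follow as code, here is a minimal Python sketch of the review pipeline as Kaung describes it. The function names, data structures, and the use of 10,000 as a configurable threshold are illustrative assumptions for this sketch, not TikTok's actual implementation.

```python
# Illustrative sketch of the moderation flow described above.
# All names and structures are hypothetical; only the general logic
# (AI screening first, then manual re-review gated on a view threshold)
# comes from Kaung's account.

MANUAL_REVIEW_THRESHOLD = 10_000  # views, per Kaung's recollection

def handle_new_video(video, ai_classifier, human_queue):
    """First pass: AI either removes the video or flags it for humans."""
    verdict = ai_classifier(video)
    if verdict == "remove":
        video["status"] = "removed"
    elif verdict == "flag":
        human_queue.append(video)   # reviewed regardless of view count
    else:
        video["status"] = "live"    # stays up, not yet seen by humans

def handle_user_report(video, human_queue):
    """A user report also sends a live video to human moderators."""
    if video["status"] == "live":
        human_queue.append(video)

def recheck_popular_videos(live_videos, human_queue):
    """Second pass: live, unflagged videos are only re-reviewed manually
    once they cross the view threshold."""
    for video in live_videos:
        if video["views"] >= MANUAL_REVIEW_THRESHOLD and not video.get("rechecked"):
            video["rechecked"] = True
            human_queue.append(video)
```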

When he worked at Meta between 2019 and December 2020, Andrew Kaung says there was a different problem. […] While the majority of videos were removed or flagged to moderators by AI tools, the site relied on users to report other videos once they had already seen them.

He says he raised concerns while at both companies but was met mainly with inaction because, he says, of fears about the amount of work involved or the cost. He says some improvements were subsequently made at TikTok and Meta, but younger users such as Cai were left at risk in the meantime.

  • Storksforlegs@beehaw.org · 2 months ago

    “he was met mainly with inaction because, he says, of fears about the amount of work involved or the cost”

    No kidding, engagement drives their whole business model. And nothing engages and addicts people more than violent, hateful shit.

    And it's not just young men (though they are more heavily targeted): the algorithm can hijack almost any viewing pattern and steer it in a violent, xenophobic direction in a remarkably short time.

    They aren't going to change the algorithm unless serious action is taken through government regulation or something similar (which is not a promising thought).