Welcome to TikTok’s endless cycle of censorship and error



It’s no surprise that these videos get attention. People make them because they work. Going viral is one of the more effective strategies for getting a large platform to fix something. Over the years, TikTok, Twitter, and Facebook have made it easier for users to report abuse and policy violations by other users. But when these companies appear to be violating their own policies, people often find that the best route forward is simply to post about it on the platform itself, in the hope of going viral and attracting the kind of attention that leads to some sort of resolution. Tyler’s two Marketplace videos, for example, have more than 1 million views.

“Content gets flagged because it’s someone from a marginalized group talking about their experiences with racism. Hate speech and talking about hate speech can look very similar to an algorithm.”

Casey Fiesler, University of Colorado, Boulder

“I probably get tagged in something like this about once a week,” says Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who studies technology ethics and online communities. She is active on TikTok, with more than 50,000 followers, and while not every concern she sees feels justified, she says the app’s regular parade of mistakes is real. TikTok has had several such missteps over the past few months, all of which have disproportionately affected marginalized groups on the platform.

MIT Technology Review asked TikTok about each of these recent examples, and the responses are similar: after investigating, TikTok finds that the problem was created in error, stresses that the blocked content in question does not violate its policies, and points to the company’s support for such groups.

The question is whether this cycle (some technical or policy error, a viral response, and an apology) can be broken.

Catching problems before they happen

“There are two kinds of harm from this kind of algorithmic content moderation that people are observing,” Fiesler says. “One is false negatives. People ask, ‘Why is there so much hate speech on this platform, and why isn’t it being taken down?’ ”

The other is false positives. “Their content gets flagged because they’re someone from a marginalized group talking about their experiences with racism,” she says. “Hate speech and talking about hate speech can look very similar to an algorithm.”

Both of these categories, she notes, harm the same people: those who are disproportionately targeted for abuse end up being algorithmically censored for speaking out about it.

TikTok’s mysterious recommendation algorithm is part of its success, but its vague and ever-changing boundaries are already affecting some users. Fiesler notes that many TikTok creators self-censor words on the platform to avoid triggering a review. And while she’s not sure exactly how effective this tactic is, Fiesler has started doing it herself, just in case. Account bans, algorithmic mysteries, and bizarre moderation decisions are a constant part of the conversation on the app.

