How Instagram and TikTok are making young people more violent, intolerant and sexist.

The rise in sexist, xenophobic and homophobic behavior among young people is being attributed to the influence of the violent and misogynistic content that circulates on platforms such as TikTok. Although these platforms claim to moderate such content, the reality is that their algorithms promote whatever generates interaction, and therefore profit, prioritizing impact over morality.

I'm sure you've talked about it over breakfast, or during an afternoon out with friends: lately we have been seeing an increase in sexist, xenophobic and homophobic behavior among young people. After decades in which this trend had been declining, thanks to social policies and to media that opened up to show other realities, over the past five years we have watched in astonishment as these attitudes seem to be on the rise again.

How can it be that, while young people are apparently immersed in a social culture that encourages inclusive and tolerant behavior and discourages sexist attitudes, there are more and more misogynistic young people?

The answer lies in the content to which young people are exposed.

While traditional media and social policies morally condemn sexist, xenophobic and homophobic behavior, the videos that kids find on their phones when they scroll are exactly that: sexist, xenophobic and homophobic.

And it is not a perception; it is a reality exposed, among others, by Andrew Kaung, who worked as a user safety analyst at TikTok from 2020 to 2022. He and a colleague decided to examine what the app's algorithms were recommending to users aged 16 and over. What they discovered was that these teenagers were being bombarded, without ever asking for it, with videos containing violence, pornography and misogynistic ideas.

In contrast, girls were recommended content based on their interests.

TikTok, and the rest of the content platforms, defend themselves by saying that videos pass through a first AI-based filter that detects harmful content and sends it for human review. The catch, as Andrew Kaung explains, is that for this to happen the videos must first reach a certain number of views. In other words, a video must already have reached, and potentially harmed, enough kids before any alert is triggered. It is as if we saw a house on fire but did not call the fire department until the family living inside had already been burned.

That chilling threshold, says Kaung, is 10,000 views or more. And it is worth remembering that most social media companies allow children to sign up from the age of 13.
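To make that pipeline concrete, here is a minimal sketch, in Python, of the kind of threshold-gated review queue Kaung describes. Everything in it is an assumption for illustration (the record type, the field names, the AI flag); only the 10,000-view figure comes from the article above.

```python
# Illustrative sketch only: a hypothetical, simplified moderation gate where a
# video flagged by the automated filter is not escalated to human reviewers
# until it has already passed a view threshold.

from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 10_000  # views reached before a flagged video is escalated

@dataclass
class Video:
    video_id: str
    views: int
    ai_flagged_harmful: bool  # result of the first, automated filter

def needs_human_review(video: Video) -> bool:
    """A flagged video is only queued for human review once it has already
    been seen by at least 10,000 users, i.e. after the potential harm."""
    return video.ai_flagged_harmful and video.views >= HUMAN_REVIEW_THRESHOLD

# Example: the video is flagged by the AI filter, but with 9,500 views nobody
# reviews it yet; it keeps circulating among teenagers.
print(needs_human_review(Video("clip-01", views=9_500, ai_flagged_harmful=True)))   # False
print(needs_human_review(Video("clip-01", views=12_000, ai_flagged_harmful=True)))  # True
```

The point of the sketch is the ordering: the human review comes after the audience, not before it.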

However, ByteDance, the Chinese company that owns TikTok, says that 99% of the content that violates its rules is removed, either by its AI or by its human moderators, before it reaches 10,000 views.

A study published last February, carried out jointly by UCL, the University of Kent and the Association of School and College Leaders (ASCL), used algorithmic modelling of TikTok accounts and found a four-fold increase in the level of misogynistic content on the "For You" page after just five days on the platform.

For its part, the BBC decided to investigate the subject and found several former employees of social media companies whose experiences coincided with Andrew Kaung's. It also contacted users, such as Cai, a British teenager who shared his experience, which the BBC was able to verify.

Cai explained that, from the age of 16, he had been recommended videos he found unpleasant: someone being hit by a car, monologues by influencers with misogynistic opinions, clips of violent fights. He did not understand why this content kept appearing. Cai tried using one of Instagram's tools, and a similar one on TikTok, to indicate that he was not interested in that content, and despite this the recommendations continued.

Today Cai is 18 and says he continues to be recommended violent and misogynistic content on both Instagram and TikTok. The BBC checked his Instagram feed and saw an image of two people side by side, one of them bruised, with the caption "My love language". Another image showed a person being hit by a truck.

Cai also describes how these videos have conditioned some of his friends. According to him, one friend was drawn to the content of a controversial influencer and began to adopt misogynistic views.

And what do these platforms gain by showing, and encouraging, the consumption of violent content? Interaction.

If the tobacco companies saw gold in the nicotine contained in tobacco, the companies that run social media platforms know that their business is based on interaction, because interaction is what drives their profits.

It doesn't matter whether you like or dislike a piece of content; the algorithm doesn't care if seeing it makes you happy or angry. What matters to it is that you interact with it. Interaction is what makes the platform money.

How does the algorithm choose the content a user will see? It takes into account the interests the user declared when registering (if they did), but also where they are and what users of the same age and gender are watching. In that way, the content you receive depends on what others seem to like: if a video of a homeless man being beaten up is being watched by a lot of people of your age in your area, it will show up for you, whether you want it or not.

The BBC also contacted a former TikTok employee who explained how some 16-year-old boys can end up seeing violent content within seconds of signing up to the platform, simply because other, similar teenage users have been watching that type of content. These teens don't even need to hit "like": just watching the video (because it has an impact on them) is enough for the platform to mark that content as recommendable. What matters is that the user spends time on the platform, because that is what allows ads to be shown, and ads are what feed the business.
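As a rough illustration of the mechanism described in the last two paragraphs, here is a toy scoring function, again in Python, combining the two signals mentioned: how much time people of your age and area spend on a video, and how long you yourself lingered on similar videos. All the names, weights and data structures are invented; no platform publishes its real algorithm, so this is only a sketch of the logic, not of any actual system.

```python
# Illustrative sketch only: a toy recommendation score built from cohort
# popularity (what users of the same age, gender and area watch) plus the
# viewer's own watch time. No "like" is ever required.

from collections import defaultdict

# Watch time per (cohort, video); a cohort might be ("16-17", "male", "Bristol").
cohort_watch_seconds: dict[tuple, dict[str, float]] = defaultdict(lambda: defaultdict(float))

# Watch time per (user, video): the implicit signal fed by simply watching.
user_watch_seconds: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))

def record_view(user: str, cohort: tuple, video: str, seconds: float) -> None:
    """Every second spent watching feeds both the personal and the cohort signal."""
    user_watch_seconds[user][video] += seconds
    cohort_watch_seconds[cohort][video] += seconds

def score(user: str, cohort: tuple, video: str) -> float:
    """Rank a candidate video: popularity among similar users plus the user's
    own dwell time on it. Disliking the content does not lower the score."""
    return 0.7 * cohort_watch_seconds[cohort][video] + 0.3 * user_watch_seconds[user][video]

# A brand-new 16-year-old has no history of his own, but his cohort does:
record_view("other_teen", ("16-17", "male", "Bristol"), "violent_fight_clip", 45)
record_view("other_teen", ("16-17", "male", "Bristol"), "cooking_clip", 5)

new_user = "cai"
candidates = ["violent_fight_clip", "cooking_clip"]
best = max(candidates, key=lambda v: score(new_user, ("16-17", "male", "Bristol"), v))
print(best)  # "violent_fight_clip": recommended within seconds of signing up
```

Note that nothing in this toy score distinguishes fascination from disgust: time spent watching is time spent watching, and a brand-new account inherits its cohort's habits from the first second.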

Shocking content is, by definition, the content that attracts the most attention, whether it is hate speech, acts of violence or natural disasters. It is therefore the content the platforms are most interested in, because it is what keeps users in front of the screen.

All the talk about the money TikTok, Meta, Google and the rest spend on damage control (monitoring harmful content) is like a tobacco company saying it is going to take the addictive nicotine out of its cigarettes. Nobody with a minimum of common sense can believe that a business model based on interaction is going to get rid of precisely the content that makes that interaction possible.

This article is based on a BBC article available at https://www.bbc.com/mundo/articles/cd9d5nnz3lwo
Cover image by engin akyurt

