
The Google study promotes YouTube’s efforts to eradicate right-wing extremist content, but ignores the role of algorithms

“It feels very derivative and simple,” Elon University researcher Megan Squire told USA Today. “I didn’t learn anything from it, and that’s disappointing.”

“They underplay the role their own technology and platforms play in driving people to extremism,” commented writer and podcaster Bridget Todd. “The individual certainly has a responsibility not to allow himself to be immersed in extremist content. However, when you are a platform like Google you cannot just emphasize individual responsibility and completely obscure the fact that your massive platform has allowed extremist online content to fester and become so popular.”

The report was created by Jigsaw, a Google technology incubator, and compiled from interviews with former extremists detailing how the internet has become a greenhouse for hate groups. Its discussion of YouTube merely describes various steps the company has taken, all of which emphasize setting guidelines to limit hate speech:

In June 2019, the company went a step further and updated its hate speech policy with new language banning content that claims a group is superior in order to justify discrimination based on traits such as race, religion, or sexual orientation. Thousands of videos that had previously been disabled have now been removed entirely. Over the following three months, YouTube removed five times as many videos and shut down more channels for hate speech.

… There is clearly more to be done, and there is no single solution or approach to dealing with hateful content. YouTube continues to develop new methods of detecting and removing violent content and to work with experts to better understand the evolving threat posed by violent white supremacist organizations.

The main driver of radicalization, however, is not simply the presence of such content, but rather the algorithms the company uses to recommend content to its users, which ultimately create the rabbit holes of conspiracy and disinformation that serve as breeding grounds for far-right ideologies. Google’s approach to the problem continues to focus on weeding out hateful content rather than changing the algorithms themselves, an approach that may dilute the problem but will never solve it.

A study published earlier this year analyzed more than 72 million YouTube comments, tracking users and watching them migrate to increasingly hateful content. It found that “users consistently migrate from milder to more extreme content; and that a large percentage of users who consume alt-right content have consumed Alt-Lite and IDW content in the past.”

The researchers concluded that a “radicalization pipeline” clearly exists on YouTube and that its algorithm speeds up the radicalization process. They reported that “it is possible to find alt-right content from recommended channels but not from recommended videos,” noting that personalization (which was not investigated) could affect the process, “but we still found a path where users could find extreme content from major media channels.”

However, these same algorithms are key to the profitability of all of these companies, since they are responsible for driving the “user engagement” that keeps people scrolling through their sites and consuming content. As long as the rabbit holes the algorithms create keep people diving in, they will remain.

This became evident in the March 2019 massacre at two mosques in Christchurch, New Zealand, carried out by an Australian white supremacist who was radicalized largely online. The recent government commission report on Brenton Tarrant’s killing spree made clear that YouTube was the main source of his radicalization.

The report states that Tarrant “claimed he was not a frequent commentator on far-right websites and that YouTube was a far more significant source of information and inspiration for him.” It added that “the evidence we saw points to more substantial use of YouTube and therefore is in line with what he told us.”

“When you talk to people who were in the (white supremacist) movement, or if you read the chat rooms these people are talking in, almost everything is about YouTube,” Squire, a computer science professor at Elon University who studies online extremism, told USA Today. “Their ‘red pill’ moment is almost always on YouTube.”

Google, Facebook, and the other major platforms cite freedom of speech concerns when discussing reform of their sites. It is becoming clear, however, that the main source of their concern is how detrimental effective reform would be to their business results. The question then arises: must we sacrifice a healthy society on the altar of their profits?
