“Something is wrong on the internet,” an essay in the tech press recently declared. But the problem is not Russian ads or Twitter harassers. It is children’s videos.
The essay, by the writer James Bridle, was published on the heels of a New York Times report describing the disturbing videos that slip through the filters of the popular YouTube Kids app. Parents who hand their children an iPad to watch Peppa Pig or Frozen’s Elsa may instead get unsettling knockoff versions served up by the supposedly family-friendly platform. In clips disguised as genuine episodes, Peppa drinks bleach instead of naming vegetables, and Elsa appears as a blood-covered zombie or in sexually compromising scenes with Spider-Man.
The phenomenon is alarming, to say the least, and YouTube says it is rolling out new filtering methods. But the source of the problem remains: it is, in fact, the site’s most essential tool, and one we rely on more and more.
YouTube uses proprietary algorithms to suggest search results and “up next” videos: computer programs that decide what to recommend to, or hide from, a given user based on a set of criteria and training on vast amounts of user data. They work well most of the time; the company claims that only 0.005 percent of videos watched on YouTube Kids in the past 30 days were flagged as inappropriate. But as these latest reports show, no code is perfect.
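To make the tradeoff concrete, here is a minimal sketch, in Python, of how an engagement-driven recommender makes these calls. It is not YouTube’s actual system; every name, weight, and data point below is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    watch_time_mins: float  # average minutes viewers stay
    click_rate: float       # fraction of impressions clicked
    flag_rate: float        # fraction of viewers who reported it

def score(video: Video) -> float:
    """Predicted engagement, lightly penalized by user flags.

    Real systems learn weights like these from user data; the point
    is that "engaging" and "appropriate" are different objectives.
    """
    return 0.7 * video.watch_time_mins + 0.3 * video.click_rate - 5.0 * video.flag_rate

def recommend(candidates: list[Video], k: int = 2) -> list[Video]:
    # Hide the handful of videos users actually flagged, then surface
    # whatever the engagement score likes best.
    visible = [v for v in candidates if v.flag_rate < 0.005]
    return sorted(visible, key=score, reverse=True)[:k]

catalog = [
    Video("Peppa Pig: Vegetables Episode", 4.2, 0.10, 0.0001),
    Video("PEPPA Learning Colors FUNNY", 9.8, 0.22, 0.0020),  # engaging knockoff, rarely flagged
    Video("Classic Nursery Rhymes", 6.0, 0.12, 0.0000),
]

for video in recommend(catalog):
    print(video.title)
# -> "PEPPA Learning Colors FUNNY" outranks the genuine episode
```

In this toy version, the knockoff ranks first because it keeps viewers watching and almost nobody flags it, which is exactly the failure mode the reports describe: a system optimized for engagement has no native concept of “appropriate.”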
Nearly all of the most successful tech companies use similar algorithms as the engines behind everything from Facebook’s News Feed to Google’s search results. (Google, incidentally, is YouTube’s parent company.) Naturally, these opaque tools have become a convenient scapegoat for many of the content problems we now face, from bizarre videos targeting vulnerable children to the misinformation that spread during the 2016 election.
Clearly, Silicon Valley has work to do. But beyond demanding that companies take more responsibility when their tools go wrong, we should also ask more of ourselves. We need to consider whether we want to reduce our dependence on these companies’ algorithms, and if so, how.
As the internet has become an ever larger part of our lives, we have come to rely on these proprietary codes as shortcuts for organizing the world. When we lack the capacity (or perhaps just the energy) to process information ourselves, algorithms sort it and make decisions for us. Need to distract a child? Send them into YouTube’s frenetic world of children’s edutainment; the app will, presumably, pick out safe videos. The mechanism may be warped by monetization incentives, biased by its training data, or simply too opaque to understand, but why would we give it up?
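As a companion sketch (again an assumption, not any platform’s real filter), here is why that presumption of safety is so easy to game: a screen that checks only surface metadata, like titles, never sees the disturbing content inside the video itself.

```python
# Hypothetical keyword screen; the lists and titles are invented.
SAFE_KEYWORDS = {"peppa", "elsa", "nursery", "learning", "colors"}
BLOCKED_KEYWORDS = {"bleach", "blood", "zombie"}

def looks_kid_safe(title: str) -> bool:
    words = set(title.lower().split())
    return bool(words & SAFE_KEYWORDS) and not (words & BLOCKED_KEYWORDS)

print(looks_kid_safe("Peppa Pig Names Vegetables"))      # True: genuine episode
print(looks_kid_safe("PEPPA Learning Colors FUNNY #2"))  # True: disguised knockoff passes too
```

Both titles pass, because nothing in the metadata distinguishes the knockoff; catching it would require looking at the video itself.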
Why aren’t we more shocked? Perhaps because we have always used decision-making shortcuts, and they have always been flawed. How did we choose children’s videos before YouTube? We might have acted on the recommendation of a librarian, a peer group, or even the National Legion of Decency’s lists. Those resources, too, were siloed, limited in scope, and shaped by personal bias.
Still, there are meaningful differences between those old-fashioned shortcuts and today’s machine-learning algorithms. The former were subject to at least some oversight and regulation; a public library was unlikely to lend out snuff versions of nursery rhymes. Shared community values made clear which options were favored and why. And human judgment, which seems almost quaint today, occasionally allowed for pleasant surprises: a person might stumble onto a resource that served them better than anything carefully calibrated to their stated preferences.
Is there a way to steer our current algorithmic systems in a more humane direction? It is not yet clear how. Some lawmakers have suggested that companies open their algorithms to public review; others have proposed standards for how such algorithms are built. For now, the lesson for everyday users may simply be heightened awareness, a reminder that we should not place all our trust in decision-making tools we do not fully understand. If nothing else, the scary children’s videos are a wake-up call: when something is wrong on the internet, we should do more than just watch.