why recommendation algorithms are a necessary evil

Experts are worried we’re ceding too many decisions to recommendation algorithms and are on a slippery slope to exploitation and learned helplessness. In reality, we’d be lost and very frustrated without them.

It really shouldn’t be a surprise to anyone who owns a smartphone, uses a streaming video service, or scrolls through social media that algorithms have a major effect on our lives today. Given how much we rely on recommendations from our favorite sites and apps, they can shape how we view the world, what shows we binge, what products we buy, and even what music plays in our headphones. In light of this development, Kartik Hosanagar, a professor of information systems management at Wharton, wrote a book about the full extent of the tech world’s reach into our lives and decision-making processes, tackling everything from what Netflix thinks you might want to watch next to AI chatbots that may one day provide customer service and improve our existing digital assistants.

An excerpt published as a preview for his thesis on a tech publication run by Medium (paywall) focuses on a very plausible slice-of-life story that highlights every time an algorithm nudges us into a decision, and asks just how much power we’re ceding to machines by passively going along with their suggestions. Hosanagar’s point is excellent food for thought, and for me, it’s closely related to a problem we don’t often discuss, but which figures prominently in studies of modern day anxieties and mental health issues. You see, we’re not just letting computers take over our lives. We may have little choice but to use algorithms to help us navigate the world, because without a computer’s help, we’d be drowning in the myriad choices we’re given for pretty much anything and everything.

the problem with billions of choices

Netflix has over 13,500 titles. Hulu carries more than 75,000 episodes of nearly a thousand TV shows. A typical supermarket has almost 47,000 individual products while Amazon has more than three billion. Some five billion videos are watched on YouTube daily. (In case you’re feeling curious, almost 92 billion are served up on PornHub in the same time frame.) And if you’re looking for a song on a service like Spotify, you have 30 million possible choices. In short, we have so many options, it would take us years, if not entire lifetimes, to figure out what to watch or listen to next without some sort of help, which is why we tend to agree with recommendation algorithms more often than not.

They’re not just there to help increase revenue or time spent on a site, although they absolutely do both. They’re also important and necessary shortcuts to make our way through the tsunami of choices with which we’re presented. Of course, the problem Hosanagar points to is that our reliance on recommendation algorithms also shapes our consumption and future choices. Just by following along, we give up a small degree of agency to a computer. If knowing that every last choice you made was entirely your own, philosophically speaking, is important to you, this is probably an upsetting thought. But at the same time, can you really spare the time to objectively evaluate thousands of movies, tens of thousands of TV shows, tens of millions of songs, and billions of videos and products? It’s a mathematically and biologically impossible feat.

how the first world is fighting its fomo epidemic

And not only do algorithms provide necessary and useful shortcuts for making choices, they’re also a great way to avoid the fear of missing out. Far from being a temporary buzzword in social media memes, FOMO is a very real condition in which a glut of choices causes stress, anxiety, and depression. Simply put, our brains are overwhelmed with options and try to imagine what we might be missing out on after we finally make a choice, leaving us less satisfied with our decisions in a sort of perpetual buyer’s remorse. We have to make choices all the time, but our brains just haven’t caught up with having to parse dozens of possibilities, much less millions, every time, especially when we’re acutely aware that we have a very finite amount of time to choose.

By presenting us with a manageable set of options and grading them by how likely we are to be satisfied with them, based on our explicit feedback (like how we rated the product we bought) and implicit cues (like whether we watched the entire show, and whether we binged it or slowly caught up on it), these algorithms can keep us from feeling overwhelmed and make us happier with our decisions. Plus, consider that while the algorithms are steering us towards certain things, they’re doing so based on our behavior, and we’re not forced to do what they tell us. The agency we give up can easily be reclaimed, and it’s very similar to asking friends what they like on the logic that because you tend to like the same things, you’d also enjoy what they enjoy.
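To make the idea concrete, here’s a minimal sketch of how a recommender might blend explicit feedback with implicit cues into a single relevance score. Everything here is illustrative: the field names, the weights, and the scoring formula are assumptions, not how Netflix or any real service actually computes its rankings.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    rating: Optional[float]  # explicit: 1-5 stars, or None if never rated
    completion: float        # implicit: fraction of the title watched (0.0-1.0)
    binged: bool             # implicit: watched episodes in quick succession

def relevance_score(fb: Feedback) -> float:
    """Blend explicit and implicit signals into a 0-1 score.

    The weights are made up for illustration: explicit ratings are
    trusted more when they exist; otherwise implicit cues dominate.
    """
    # Map a 1-5 star rating onto 0-1; treat "never rated" as neutral.
    explicit = (fb.rating - 1) / 4 if fb.rating is not None else 0.5
    implicit = 0.7 * fb.completion + 0.3 * (1.0 if fb.binged else 0.0)
    weight = 0.6 if fb.rating is not None else 0.2
    return weight * explicit + (1 - weight) * implicit

# Rank a handful of hypothetical titles by predicted satisfaction.
candidates = {
    "show_a": Feedback(rating=5.0, completion=1.0, binged=True),
    "show_b": Feedback(rating=None, completion=0.3, binged=False),
    "show_c": Feedback(rating=2.0, completion=0.9, binged=False),
}
ranked = sorted(candidates, key=lambda k: relevance_score(candidates[k]),
                reverse=True)
print(ranked)  # a loved-and-binged show outranks a poorly rated one
```

The point of the sketch is simply that a recommender doesn’t need to read your mind: a few behavioral signals, combined sensibly, are enough to sort thousands of options into a short list you can actually evaluate.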

where do we draw the line on recommendations?

At the same time, it’s also perfectly valid to be concerned about companies hijacking all those useful algorithms to steer and nudge you in certain directions, banking on your passivity in the face of countless choices. We know they do exactly that, making quite a bit of money in the process by balancing your predicted enjoyment against their profit margins, so the algorithms may not have your best interests in their programming. That said, these blocks of code carry the seeds of a solution to this problem. To work, they have to recommend effectively without annoying or blatantly exploiting you. If they fail at that, you’ll either quickly catch on and stop using them, or your decisions to ignore their suggestions will force them to change their outputs.

Instead of being alarmed about how much we use recommendation algorithms, or worrying that they’ll be abused to brainwash us, we need to remember that there’s a good reason why we use them. Treated with healthy skepticism, and governed by a regulatory framework that prevents overly intrusive collection of personal data and gives users legal recourse against rigged recommendations, they can be fantastic tools to help us navigate a world filled with digital abundance while maintaining our sanity. Letting our fears of a Black Mirror-esque dystopia force us to manually sift through all the options detailed above means more FOMO, and more power for corporate-backed influencers who’ll be happy to recommend even more things to us than they already do once we’re paralyzed by thousands of options crowding our screens.