The web has grown to become the main source of information for many people, and they depend on it to build their knowledge and perception of the world.

In order to manage the constant stream of information arriving on our screens, people interact with recommender systems on a daily basis. Some well-known examples include Facebook’s News Feed, Spotify’s Discover Weekly playlists, movie and book recommendations on Netflix and Amazon, and of course Google Search, which these days is predominantly a recommendation engine. Just a few days ago, Instagram also announced that it will move from a chronological to an algorithmic timeline.

Recommender systems are supposed to help people make good choices and decisions. Numerous algorithms (sets of rules to be followed by computers in calculations or other problem-solving operations) work hard to produce a list of recommendations just for you. They’re increasingly pervasive, as Eli Pariser, author of The Filter Bubble, concludes in his 2015 Medium post:

“[The algorithms] mediate more and more of what we do. They guide an increasing proportion of our choices — where to eat, where to sleep, who to sleep with and what to read. From Google to Yelp to Facebook, they help shape what we know.” ◦◦◦ Eli Pariser

Algorithms shape what we know, both as communities and as individuals. But are they programmed to do what’s best for us, the users?

The conversation about the effects and ethics of algorithms has been taking off, and recent studies show that social media feeds expose you to an increasingly narrow range of themes, despite the breadth of content you have chosen to follow.

One approach widely used by recommender systems is ‘collaborative filtering’. It is based on the assumption that you will like things that people similar to you have also enjoyed. As a result, collaborative filtering tends to boost the content you and your peers already like even further.
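In code, a minimal user-based collaborative filtering sketch could look like this. The users, items, and ratings are entirely made up for illustration; real systems use far larger matrices and more sophisticated similarity measures:

```python
from collections import defaultdict
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two sparse rating dicts."""
    common = set(a) & set(b)
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def recommend(target, ratings, k=2):
    """Score items the target hasn't seen, weighted by how similar
    each peer's taste is to the target's."""
    scores = defaultdict(float)
    for user, items in ratings.items():
        if user == target:
            continue
        sim = cosine_sim(ratings[target], items)
        for item, rating in items.items():
            if item not in ratings[target]:
                scores[item] += sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy data: user -> {item: rating on a 1-5 scale}
ratings = {
    "ana":  {"A": 5, "B": 4, "C": 1},
    "ben":  {"A": 4, "B": 5, "D": 4},
    "cole": {"C": 5, "D": 2, "E": 4},
}
print(recommend("ana", ratings))  # → ['D', 'E']
```

Note how the outcome illustrates the point above: item D wins because a highly similar peer liked it, so content that is already popular within your taste cluster keeps getting amplified.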

Choosing this approach makes perfect sense from the point of view of a company that’s mainly concerned with making you stick around longer (to watch more ads). Showing more of the popular content is often very effective in extending the time you spend on these platforms. But their success should also make us pause — what are the consequences?

Is popularity always the best route?

In her TEDx talk, discussing the divided and dysfunctional American Congress, psychologist Diane Halpern describes the ubiquitous ‘my side bias’. This is the conviction that your belief system and that of other in-group members is always right and righteous, while other people’s belief systems are always wrong and wrong-headed. Halpern refers to this as the ‘calcification’ of our brains and explains that to counteract it we constantly need to be exposed to diverse perspectives.

Only being introduced to content algorithmically deemed ‘relevant’ to you leads to an increasingly limited view of the world and heightens the risk of harmful bias. In a complex and ever-challenging world where knowledge acquisition is essential for long term prosperity, recommender systems need to do so much more than serve a squirrel dying in front of your house.

“The larger the island of knowledge, the longer the shoreline of wonder” ◦◦◦ Ralph W. Sockman

How could a different kind of recommender system work — one that acknowledges the importance of broadening our horizons and helps us think more critically?

Importantly, it would need to look beyond who you have been and consider who you could become.

In his talk on machine learning at Airbnb OpenAir 2015, Xavier Amatriain, VP of Engineering at Quora, discussed the three core elements of their recommender approach.

Xavier believes that a recommender system should be a complex ecosystem where engagement and popularity are valued, but relevance and content quality are of paramount importance.

By considering the three pillars of quality, relevance and demand, we can understand more about how users interact with content, and how pieces of content relate to one another. This understanding is crucial to offering relevant suggestions that expand the user’s horizons and add perspective.
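Quora’s actual ranking formula isn’t public, but the idea of blending the three pillars can be sketched as a simple weighted score. The weights and item scores below are invented purely to illustrate the trade-off:

```python
def rank(items, w_quality=0.4, w_relevance=0.4, w_demand=0.2):
    """Rank items by a weighted blend of the three pillars.
    The weights are illustrative, not Quora's real values."""
    score = lambda it: (w_quality * it["quality"]
                        + w_relevance * it["relevance"]
                        + w_demand * it["demand"])
    return sorted(items, key=score, reverse=True)

# Two hypothetical pieces of content, scored 0-1 on each pillar.
items = [
    {"id": "viral-meme",   "quality": 0.3, "relevance": 0.4, "demand": 0.9},
    {"id": "deep-article", "quality": 0.9, "relevance": 0.8, "demand": 0.2},
]
print([it["id"] for it in rank(items)])  # → ['deep-article', 'viral-meme']
```

With demand (popularity) deliberately weighted below quality and relevance, the thoughtful article outranks the viral one, even though far more people clicked on the meme.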

We know that users tend to be more satisfied with diverse recommendations, i.e. being exposed to a wider variety of content, which can prompt the experience of serendipity — discovering something when we were least expecting it. Let’s embrace this idea and create recommender systems that can go further and also list conceptually diverse results. For example, by showing you a video about a different but directly related subject area, or by recommending an article with an opposing view to one you looked at before.
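One established technique for injecting this kind of diversity is maximal marginal relevance (MMR): greedily pick the item that is most relevant but least similar to what has already been selected. A minimal sketch, with a toy one-topic-per-item similarity function invented for illustration:

```python
def mmr(candidates, relevance, similarity, lam=0.7, k=3):
    """Greedy maximal marginal relevance re-ranking.
    lam=1.0 means pure relevance; lower values reward diversity."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def marginal(c):
            # Penalise closeness to anything already chosen.
            max_sim = max((similarity(c, s) for s in selected), default=0.0)
            return lam * relevance[c] - (1 - lam) * max_sim
        best = max(pool, key=marginal)
        selected.append(best)
        pool.remove(best)
    return selected

# Hypothetical videos, each tagged with a single topic.
topics = {"cat-video-1": "cats", "cat-video-2": "cats", "dog-video-1": "dogs"}
rel = {"cat-video-1": 0.9, "cat-video-2": 0.85, "dog-video-1": 0.6}
sim = lambda a, b: 1.0 if topics[a] == topics[b] else 0.0
print(mmr(list(topics), rel, sim, lam=0.7, k=2))  # → ['cat-video-1', 'dog-video-1']
```

A pure popularity ranking would return the two cat videos; MMR surfaces the dog video instead, which is exactly the kind of conceptually adjacent recommendation discussed above.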

“I find that a great part of the information I have was acquired by looking up something and finding something else on the way” ◦◦◦ Franklin Pierce Adams

By showing recommendations that are interesting for other reasons than just popularity alone, we can achieve higher goals such as mental stimulation and exploring new frontiers.

Diversity in recommendations can push engagement to new highs. But we might ask ourselves: how diverse do the results need to be? To answer that question, users themselves must be able to engage with the results; the system alone is not enough. Even if human signals such as sentiment are allowed to influence which content gets shown, users still cannot curate their own recommendations. And a single user isn’t always the same person: we have different personas, with different desires and interests at different points in time. Besides, do we want our discovery journey to be governed solely by a mathematical formula of popularity?

“When a technology is used to shrink people’s possibilities, more than to expand them, it cannot create value for them” ◦◦◦ Umair Haque

Technology should expand people’s possibilities, not shrink them.

User involvement has a lot going for it. People have a need for autonomy: the perception of being in control of your own behaviour, based on your individual interests, values and even mood. High-quality forms of motivation and engagement are fostered by conditions that support the user’s individual experience of autonomy. Surprisingly little attention has been paid to the decision-making processes of users that can be triggered or enabled by the recommender system. This is a missed opportunity. In order to create the experience of autonomy, we first need to explain why a recommendation is given, and then give the user tools to influence the recommendation result in real time.
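What might that look like in practice? One possibility is to expose the ranking weights directly as user-adjustable dials, and attach a short explanation to each result. A toy sketch, where the signal names and weights are entirely invented for illustration:

```python
def recommend_with_controls(items, user_weights):
    """Score items with user-controlled weights and attach a 'why'.
    user_weights maps signal name -> importance, set by the user."""
    results = []
    for it in items:
        score = sum(user_weights.get(sig, 0.0) * val
                    for sig, val in it["signals"].items())
        # The signal that contributed most becomes the explanation.
        top = max(it["signals"],
                  key=lambda s: user_weights.get(s, 0.0) * it["signals"][s])
        results.append({"id": it["id"], "score": round(score, 2),
                        "why": f"recommended mainly because of '{top}'"})
    return sorted(results, key=lambda r: r["score"], reverse=True)

items = [
    {"id": "familiar-post", "signals": {"similar_to_history": 0.9, "new_topic": 0.1}},
    {"id": "fresh-post",    "signals": {"similar_to_history": 0.2, "new_topic": 0.8}},
]
# The user is in an exploratory mood and dials "new_topic" up.
print(recommend_with_controls(items, {"similar_to_history": 0.2, "new_topic": 0.8}))
```

The same catalogue yields a different ordering the moment the user moves a slider, and every result carries a human-readable reason, which is precisely the combination of explanation and real-time control argued for above.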

In Summary

We believe that recommendation systems shouldn’t only care about people’s past behaviour and popular demand; content quality and diversity are essential. Users must also be helped to better understand why a resource is being recommended. Long-term relevance and the user’s feeling of autonomy should be amplified by offering the ability to actively influence the result of a recommendation list or search. By doing so, recommendation systems can take users to new levels of productive engagement.