The decision to scrap recommended content from my “social media” experience came with the realization that 1) there are entire industries dedicated to capturing and selling my attention online, and 2) I can find what I am looking for without having to browse through Silicon Valley’s recommendations.

After reaching those conclusions, it’s hard to believe that a simple “retooling” of YouTube’s algorithms will fix its issues. For me, the off-chance of being pleasantly surprised or intellectually challenged by a recommended video is not worth having to weave through conspiracies and propaganda every time I log in.

To be clear, YouTube’s algorithm problems have been out in the open for years, but the issue resurfaces in the mainstream only when someone uncovers a truly vile corner of the platform. Much like Facebook’s reactive way of dealing with criticism, YouTube’s structural problems are then swept under the rug with promises of more oversight, until the next time a critical mass of people notices something alarming and the PR cycle starts again.

I realize a lot of people don’t mind overlooking YouTube’s algorithm problems as long as they have access to the content they want. Similarly, many are probably OK with being shown Goldman Sachs and Raytheon ads between and around the content they are viewing on social media — be it on their phone, their watch, their messenger, or their VR device.

Nevertheless, I have a sneaking suspicion that as soon as someone offers a viable platform that doesn’t profit from our attention and personal data, Facebook, YouTube and others will quickly go the way of MySpace which, for all it’s worth, exited the stage much more gracefully.

In the meantime, Facebook is fighting tooth and nail to stay relevant through Instagram and, more recently, “Messenger for Kids.”

Tweet, Re-tweet, and Follow (The People We Suggest)

Using recommended content to increase user engagement is a strategy that is also utilized by Twitter — the place for “breaking news” and discombobulated thoughts.

Twitter gets you to see “additional” content through the accounts you follow. When you follow someone, you are also shown posts and accounts from the users they follow, which eventually turns your “feed” into an echo chamber of blue-checked “digital influencers.”

Users are constantly followed by Twitter’s criminally titled “Who to follow” section, which features exciting, left-field personalities such as Hillary Clinton, Donald J. Trump, and Ben & Candy Carson.

Similarly to what I did with my YouTube homepage, I decided to remove all extra content (trends, “Who to Follow,” etc.) from my Twitter page through a browser extension. I then “unfollowed” everyone and started to browse Twitter through lists. This cleared my feed of sponsored ads and “third party” content.

However, I suspect this also made my account look suspicious in the eyes of Twitter, as I saw a steep decline in my reach and followers. As soon as I started following accounts again, I started gaining followers again.

The game of “either follow or remain in an echo chamber” forces new users, who don’t have many followers, to find creative ways to stand out in the never-ending stream of blurbs. This is often achieved by inserting yourself into “trending” conversations, or re-sharing others’ posts with a unique take and hoping that someone will “re-tweet” you.

Many Twitter users exploit this way of attracting attention by re-sharing tweets with the intent to smear, intimidate, or threaten people on the platform. This is Twitter’s bread and butter, as nothing gets people more excited than projecting their frustrations on strangers online.

Right-wing operative directs his followers to a video he doesn’t like, and then to the presenter’s Twitter account.

Twitter’s solution to this toxic way of gaining attention was to introduce “quality filters.” However, filters hardly address Twitter’s fundamental flaws — they merely ask users to stick their head in the sand.

In addition, the platform has notoriously allowed “tough guy” politicians like Marco Rubio and Donald Trump to post tweets threatening to murder foreign leaders and initiate a nuclear war. There’s an obvious double standard for “power users” like Marco and Donald, whose insights are deemed more important and tolerable than those of the unchecked masses.

Even Twitter’s CEO has admitted that his creation is not a place for “nuanced discussion,” which makes mass media’s efforts to entice anything with a pulse to tweet, or have a hashtag, that much more revealing.

YouTube’s “solutions” have been equally ineffective. It was recently discovered that the platform’s recommendation algorithm makes it easy for pedophiles to find and comment on videos of young children. This prompted a number of companies to pull advertising dollars from YouTube since their ads were being shown on said videos.

YouTube’s solution was to disable comments on “tens of millions of videos that feature minors, in addition to removing inappropriate comments and the accounts that make them,” as reported by WIRED.

However, according to Guillaume Chaslot, an AI researcher who worked on YouTube’s recommendation engine, “It’s an AI problem, not a comment section problem.” In an interview with WIRED, Chaslot said that as long as YouTube’s recommendation algorithm optimizes for watch time, regardless of content, the problem won’t go away.
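Chaslot’s point can be illustrated with a toy sketch (the names and numbers here are purely illustrative, not YouTube’s actual system): if predicted watch time is the only ranking signal, whatever holds attention longest rises to the top, and what the video actually contains never enters the equation.

```python
# Toy illustration of watch-time-only ranking. All names and values are
# hypothetical; this is not YouTube's real recommendation code.

def rank_by_watch_time(candidates):
    """Sort candidate videos by predicted watch time, descending.

    Note what is missing: nothing here inspects the video's content,
    accuracy, or suitability -- only how long it is expected to hold
    a viewer's attention.
    """
    return sorted(candidates,
                  key=lambda v: v["predicted_watch_minutes"],
                  reverse=True)

videos = [
    {"title": "Nuanced documentary", "predicted_watch_minutes": 4.0},
    {"title": "Outrage compilation", "predicted_watch_minutes": 11.5},
    {"title": "Conspiracy deep-dive", "predicted_watch_minutes": 9.2},
]

for video in rank_by_watch_time(videos):
    print(video["title"])
```

Under this objective, the sensational and the conspiratorial outrank the nuanced simply because they retain attention longer — which is exactly the failure mode Chaslot describes.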

Similarly to Twitter, where journalists and establishment politicians feel compelled to “one up” each other through cleverly written haikus, YouTube’s algorithm encourages users to produce content regularly in order to stay relevant. Predictably, the pressure to upload long-form videos daily to appease the Algorithm has caused many popular YouTube content creators to burn out.

On both Twitter and YouTube, quality of content is sacrificed in the rush to produce the fastest tweet or video with the best take — the one that just might be recommended to you, the ultimate product.

Social Media is Social Control

It’s not hard to see how social media companies benefit the U.S. oligarchy. Expressing ourselves through bits of text, dividing people into “blue-checked” and regulars, and creating controlled environments where outrage can be easily manufactured and amplified in the mainstream media is the perfect way to control a population.

The way social media companies treat their employees is illustrative of their broader inaction on harmful content and addiction-encouraging platform design.

In “The Trauma Floor: The secret lives of Facebook moderators in America,” published in The Verge, Casey Newton describes the conditions faced by Facebook content moderators, who develop severe anxiety while still in training and continue to struggle with trauma symptoms long after they leave:

Collectively, the employees described a workplace that is perpetually teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week — and where those who remain live in fear of the former colleagues who return seeking vengeance. It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.

In a Medium article, ex-Google employee Liz Fong-Jones writes about what she describes as an escalation of harassment, doxxing, and hate speech “targeted at marginalized employees within Google’s internal communications”: