When it comes to explaining why Facebook, YouTube and Twitter have become hotbeds for extremism, propaganda and bigotry, there’s a tendency to overcomplicate things.

That’s understandable. The algorithms that govern the platforms are unknowable trade secrets. There are, in some cases, billions of users to account for. There are meaty issues of free speech and copyright law playing out in real time across borders. Technology is confusing!

And, yes, it’s true that the tech companies are dealing with thorny problems that most likely have no universally satisfying outcome. Big Tech’s problems are dizzying and manifold, but the last few years have taught us that there’s an Occam’s razor quality to any explanation of the toxicity of our online platforms. The original sin, it seems, isn’t all that complicated: it’s the prioritization of growth — above all else and at the expense of those of us who use the services.

The most recent example came on Tuesday morning, when Bloomberg News published a story chronicling YouTube’s struggles to quash misinformation, conspiracies and incendiary content. According to the report, current and former YouTube employees said that the company had ignored warnings to change its recommendation engine and that, in some cases, they were discouraged from proactively seeking out videos that might violate YouTube’s rules, in order to preserve a sense of plausible deniability.