There’s a better solution available: enticing Web companies entrusted with personal data and preferences to act as “information fiduciaries.” Champions of the concept include Jack Balkin of Yale Law School, who sees a precedent in the way that lawyers and doctors obtain sensitive information about their clients and patients—and are then not allowed to use that knowledge for outside purposes. Balkin asks, “Should we treat certain online businesses, because of their importance to people’s lives, and the degree of trust and confidence that people inevitably must place in these businesses, in the same way?”

As things stand, Web companies are simply bound to follow their own privacy policies, however flimsy. Information fiduciaries would have to do more. For example, they might be required to keep automatic audit trails reflecting when the personal data of their users is shared with another company or used in a new way. (Interestingly, the kind of ledger that crypto-currencies like Bitcoin use to track the movement of money could be adapted to this function.) They would provide a way for users to toggle search results or newsfeeds to see how that content would appear without the influence of reams of personal data—that is, non-personalized. And, most important, information fiduciaries would forswear any formulas of personalization derived from their own ideological goals. Such a system could be voluntary, in the way that businesspeople who make suggestions on buying and selling stocks and bonds can choose between careers as investment advisers and brokers: the “advisers” owe duties not to put their own interests above those of their clients, while the “brokers” have no such duty, even as they—confusingly—can go by such titles as financial adviser, financial consultant, wealth manager, and registered representative. (If someone’s telling you how to handle your nest egg, you might ask flat out whether he or she is your fiduciary and walk swiftly to the exit if the answer is no.)
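The ledger idea above can be sketched in code. What follows is a minimal illustration, not a description of any real system; the class and field names are invented for the example. The core trick borrowed from cryptocurrency ledgers is hash-chaining: each audit entry commits to the one before it, so a fiduciary could not quietly alter or delete a record of having shared a user's data without breaking the chain.

```python
import hashlib
import json
from datetime import datetime, timezone


def entry_hash(entry):
    """Deterministically hash an audit entry (sorted keys for stability)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class AuditTrail:
    """Append-only, hash-chained log of data-sharing events (illustrative only)."""

    def __init__(self):
        self.entries = []

    def record(self, user_id, recipient, purpose):
        # Each new entry stores the hash of its predecessor, forming a chain.
        prev = entry_hash(self.entries[-1]) if self.entries else "0" * 64
        entry = {
            "user": user_id,
            "shared_with": recipient,
            "purpose": purpose,
            "time": datetime.now(timezone.utc).isoformat(),
            "prev_hash": prev,
        }
        self.entries.append(entry)
        return entry

    def verify(self):
        """Return True only if no past entry has been altered or removed."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            prev = entry_hash(e)
        return True
```

Changing any recorded entry after the fact invalidates every later link in the chain, which is what makes such a log auditable by regulators or by the users themselves.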

Constructed correctly, the duties of the information fiduciary would be limited enough for the Facebooks and Googles of the world to accept, yet meaningful enough to the people who rely on their services, that the intermediaries could be induced to opt in. To provide further incentive, the government could offer tax breaks or certain legal immunities to those willing to step up toward an enhanced duty to their users. My search results and newsfeed might still end up different from yours based on our political leanings, but only because the algorithm is trying to give me what I want—the way that an investment adviser may recommend stocks to the reckless and bonds to the sedate—and never because the search engine or social network is trying to covertly pick election winners.

Four decades ago, another emerging technology had Americans worried about how it might be manipulating them. In 1974, amid a panic over the possibility of subliminal messages in TV advertisements, the Federal Communications Commission strictly forbade that kind of communication. There was a foundation for the move; historically, broadcasters have accepted a burden of evenhandedness in exchange for licenses to use the public airwaves. The same duty of audience protection ought to be brought to today’s dominant medium. As more and more of what shapes our views and behaviors comes from inscrutable, artificial-intelligence-driven processes, the worst-case scenarios should be placed off limits in ways that don’t trip over into restrictions on free speech. Our information intermediaries can keep their sauces secret, inevitably advantaging some sources of content and disadvantaging others, while still agreeing that some ingredients are poison—and must be off the table.