Congress has held multiple hearings over the past year examining the failures of digital platforms such as Facebook, Google, and Twitter, including their failure to limit the actions of trolls spreading misinformation during the 2016 election. But the government has offered very few concrete proposals for dealing with that problem, with the virtual monopoly the platforms have on certain types of information, or with how they should handle user privacy.

Democratic Senator Mark Warner hopes to fill that gap with a policy discussion paper he has been circulating in governmental and tech circles, according to a report from Axios (which obtained a copy of the paper from an unknown source). The proposals in the paper are wide-ranging and in some cases politically impossible, and they raise almost as many questions as they try to answer.


The paper argues that the revelations of the past year, including evidence that Russian trolls manipulated Facebook, have “revealed the dark underbelly of an entire ecosystem.” It goes on to say:

The speed with which these products have grown and come to dominate nearly every aspect of our social, political and economic lives has in many ways obscured the shortcomings of their creators in anticipating the harmful effects of their use. Government has failed to adapt and has been incapable or unwilling to adequately address the impacts of these trends on privacy, competition, and public discourse.

When it comes to misinformation, the Warner paper floats one possible remedy: requiring platforms to label automated bot accounts, and to do more to identify who is behind anonymous or pseudonymous accounts. If platforms fail to do so, the paper says, the Federal Trade Commission could step in with sanctions.


But would labeling bots actually help solve the issues Congress is concerned about? Experts say they are just one part of the problem, and that the behavior of what are sometimes called “cyborgs”—partially automated accounts run by human beings—is also important. And while anonymity can be a shield for some trolls, others are more than happy to engage in all kinds of bad behavior under their real names.

The paper also acknowledges that identifying users could backfire by invading the privacy of journalists, dissidents, and whistleblowers who have legitimate reasons for wanting to remain anonymous.

One other significant change the Warner paper discusses is an amendment to Section 230 of the Communications Decency Act, which gives the platforms immunity from liability for content uploaded by their users. Since some users complain that harassing material is often re-uploaded after being removed, the paper recommends Section 230 be amended so the platforms could face sanctions if they don’t prevent this. Such tinkering, however, could weaken the free-speech protections Section 230 is designed to uphold.

In addition, the paper argues the US should pass privacy-protection legislation similar to the General Data Protection Regulation now in force in Europe, including the right to data portability and what is often called “the right to be forgotten.” It notes, however, that in order to have a GDPR-like regime, the US would need a central body to administer the law, something it doesn’t currently have—and creating one could produce even more problems than the proposal hopes to solve.



Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.