"There was no regulatory body and there still isn't one that addresses unethical or harmful uses of platform data even though the implications are arguably as serious as those involving national security."

The terror attacks of 2001 ushered in stringent Know Your Customer (KYC) regulations requiring financial institutions to know their customers in great detail. The 2008-2009 financial crisis resulted in further regulation aimed at mitigating economic instability and curbing the use of funds to finance terrorism and other illicit activity.

In contrast, the news industry is expected largely to regulate itself. We pride ourselves on freedom of the press and freedom of expression. Indeed, one of the wonders of the Internet era is the empowerment of individuals as publishers on digital platforms.

However, unlike news organizations, whose reputations ultimately depend on careful verification, we have allowed digital platforms like Facebook and Twitter to operate with little regard for the risks of propaganda and bias they create. They now promise to do better by employing more people for fact-checking. Would we have trusted the bankers with such a promise after the financial crisis and left them alone to regulate themselves?

An important lesson learned from the rise of digital platforms is that they disrupt industries by blurring the boundaries between them. Which industry does Amazon belong to? What about Google? Facebook? Apple? The answers are increasingly vague. And yet regulation continues to be siloed largely by industry, such as finance, retail, and telecommunications, despite the increasing irrelevance of these categories in classifying some of our largest and most powerful businesses.

KYC is critical in finance because every industry uses financial services. Since digital platforms are becoming similarly ubiquitous across virtually every industry, KYC requirements similar to those for financial institutions should be considered for them as well.

Advertisements in mass media are plain for everyone to see, and political ads require disclosure of who paid for them. Yet in the narrowcasting world of digital targeting, it is possible for parties to target individuals with customized hate speech ads completely under the radar, without the public or regulators ever noticing, and sometimes without even the platform designers being aware that their algorithms are being used in ways they never envisioned or intended.

Imposing KYC requirements on digital platforms is not a new idea. Airbnb already performs automated KYC in real time by verifying government-issued IDs. Social media platforms could be required to do something similar, with the added restriction that foreign publishers obtain legitimate U.S.-issued IDs to advertise in the U.S. This would pre-empt the use of legitimate identities issued by governments that may sponsor illicit activity or harmful propaganda.

Second, we need better guidelines around the ethical use of data, especially around profiling and social manipulation. Businesses should follow Institutional Review Board (IRB) policies on the use of human subjects in social experiments, just as university researchers do, regardless of their stated data-usage policies, which most individuals don't read, let alone understand.

For too long, regulators have turned a blind eye to the use of data, emboldening digital platforms to do as they please without oversight. We need a more proactive approach, in which social platforms broadly disclose their data mining goals and policies and demonstrate that they are not violating the implicit intentions of the users who entrusted them with their data. These points are not intended as criticism of Facebook and Twitter specifically, but as a warning that social media platforms in general pose risks that regulators in free societies need to take seriously.

There are no easy answers, but continuing to turn a blind eye to this new Internet phenomenon will expose us to considerable peril in the future. The U.S. government and its regulators need to understand how digital platforms can be weaponized and misused against American citizens and, equally importantly, against democracy itself.

Commentary by Vasant Dhar, a professor at the Stern School of Business and the Center for Data Science at New York University. A longer version of this article is featured in the December 2017 special issue of the Big Data journal on "Computational Propaganda." Follow him on Twitter @VasantDhar.

For more insight from CNBC contributors, follow @CNBCopinion on Twitter.