Second, rigid rules about the “sale of data” and limits on the use of artificial intelligence are not a productive way to prevent abuse, and they would impede activities essential to our safety and security.

Some behavior raises alarms and should be stopped, such as secretly sharing minutely detailed personal profiles with political operatives to sway elections. However, essential activities — including advances in health care, cybersecurity, financial services and fundamental scientific research — depend upon large data sets and broad data sharing. Massachusetts has funded a public-private partnership called Mass Digital Health, and the American Medical Association has created the Integrated Health Model Initiative to promote data sharing across the medical and scientific communities to improve health care outcomes. Technology companies are deploying artificial intelligence across massive data sets to advance understanding of Parkinson’s, Alzheimer’s and other diseases.

Detecting fraud and cybercrime relies upon compiling and analyzing large sets of metadata, as bad actors intentionally strike broadly and over time. And A.I. is being used to benefit underserved communities. There are innovative programs using a range of personal data to offer loans to disadvantaged consumers, and there is research on internet search data to predict and prevent infectious diseases. Requiring companies to remove individual pieces of data from large data sets on demand, and prohibiting analytics because some might misuse it, would render many of these activities impractical.

Finally, the law must not be so burdensome that it cuts off innovation and economic opportunity.

The digital sector represented roughly 7 percent of the American economy as of 2017, and it is growing at nearly triple the rate of the economy as a whole, according to the Bureau of Economic Analysis. Contrast that with Continental Europe, where a burdensome regulatory environment contributes to anemic tech sector growth.

At the same time, American companies are competing to develop more privacy-protective technologies and to wed their brands to how well they guard our data. Any new law must foster this commercial interest in the value of privacy without making it onerous for new businesses to emerge and compete. Notably, creating high barriers to new services only further entrenches the few social media and data companies with established services, large data sets and financial cushions.

Some ideas already employed widely outside the United States deserve consideration. Privacy laws in Canada, Japan and elsewhere rightly require companies to weigh the benefits and risks their data practices pose to consumers and society. Setting standards for data use and requiring more granular disclosures would take the pressure off consumers to consent to all possible uses at once.

Any new law must also recognize that data is important to all we do and that we cannot simply make it go away. Instead, a variety of tools — including reasonable data minimization, development of industry standards and Federal Trade Commission rule-making — can protect against misuse while allowing development of science and industry.