Last year, when the Food and Drug Administration approved an Apple Watch feature that notified users if they had an irregular heart rhythm, the information tech industry hailed it as a watershed moment in consumer-focused health care. Cardiologists, on the other hand, warned that the app could lead to privacy violations, unwarranted worrying and wasteful or even dangerous medical care.

It might have been good to have an authoritative assessment of the new technology’s pros and cons. But in the United States, at least, that no longer happens.

In Britain, France and the European Union, government agencies examine the ethical, social and economic impact of artificial intelligence and other big new technologies used in health care and elsewhere. But while a number of U.S. academic centers study these issues, federal policymaking is practically nil.

This is an unprecedented and relatively recent lapse, when you consider that the government previously reviewed potentially risky technologies such as DNA modification, nuclear physics and human genome science. It’s particularly baffling given the real-world abuses of the new technologies, not least in China where the state uses AI and facial recognition to track, control and sometimes imprison millions of its Muslim citizens.

One reason for the oversight gap is that the United States no longer has a place to do that kind of technology review. The Office of Technology Assessment conducted 750 studies on topics ranging from biotechnology to robotics and fuel economy from 1972 until then-House Speaker Newt Gingrich and his allies shut it down in 1995. Two other congressional research groups have suffered severe cuts—the Government Accountability Office’s funding has fallen by a third since 1990, the Congressional Research Service’s by 40 percent. The White House’s Office of Science and Technology Policy created an AI task force in 2018, but its concern was promoting U.S. competitiveness, not oversight.

Though many university professors and tech-funded think tanks are examining the ethical, social and legal implications of technologies like Big Data and machine learning, “it’s definitely happening outside the policy infrastructure,” said John Wilbanks, a philosopher and technologist at the Sage Bionetworks research group.

Yet these technologies could have profound effects on our future, and they pose enormous questions for society. Overestimating their impact could be as dangerous as underestimating it. “The promise of AI is undeniable,” novelist and physician Abraham Verghese wrote recently in the Journal of the American Medical Association. “The hype and fear surrounding the subject may be greater than that which accompanied the discovery of the structure of DNA or the whole genome.”

The privacy issues raised by AI are starting to stir interest in Congress, where a series of hearings have examined data theft and suspect data sharing practices by big companies like Facebook and Google. But there is no concerted effort to weigh the pros and cons of unfettered data mongering, particularly in health care, said Duke University cybersecurity expert Eric Perakslis.

“Consumers, clinicians and institutions need to understand that personalized health is a type of surveillance,” he says. “There is no way around it, so it needs to be recognized and understood.”

In addition to the sale, breach or misuse of electronic data, AI-related risks include the loss of jobs to AI or robots, and the unintended consequences of relying on flawed or discriminatory algorithms.

Exhibit #1, critics suspect, is the Apple Watch’s electrocardiogram, which measures the wearer’s heart rhythm and can detect atrial fibrillation. That’s a potentially serious medical condition—but often a benign one, especially in the younger people who generally use the Apple Watch. A false positive could lead to extensive workups and even treatment with risky blood thinners.

“For everyone who is helped by this app, how many will be harmed by unnecessary anxiety, treatment and cost? We don’t know that ratio—and no one but Apple will have the data,” notes Eric Topol, a cardiologist, author and founder of the Scripps Research Translational Institute, where scientists study how to bring new technologies into medicine.

Topol’s statement points to the central problem U.S. policymakers have with regulating artificial intelligence and its offshoots: The development is going on behind closed doors in corporate laboratories. If policymakers can’t keep an eye on the technology, AI may become a case of what we don’t know hurting us.

“It is no secret that members lack access to the expertise needed to fully understand the problems and opportunities current, and future, technologies will bring about,” Rep. Mark Takano (D-Calif.), who has pushed for the restoration of the Office of Technology Assessment, told POLITICO. He said there’s bipartisan support for an office that could provide “access to unbiased knowledge and expertise that can inform effective lawmaking.”

As it happens, there is a good precedent for the federal government stepping up to examine the ethical and legal issues around an important new technology. Starting in 1990, the National Institutes of Health set aside 5 percent of the funding for its Human Genome Project for a program known as ELSI—which stood for the ethical, legal and social implications of genetics research.

The ELSI program “was a symbol that NIH thought the ethical issues were so important in genomics that they’d spend a lot of money on them,” says Isaac Kohane, chief of the Harvard Medical School’s Department of Biomedical Informatics. “It gave other genetics researchers a heads-up—police your ethics, we care about them.”

ELSI’s premise was to have smart social scientists weigh the pros and cons of genetic technologies before problems emerged, instead of, “Oops, we let the genie out of the bottle,” said Larry Brody, director of the Genomics and Society program at the National Human Genome Research Institute.

And it produced some concrete results.

The work of ELSI-funded scientists like geneticist Wylie Burke of the University of Washington helped drive home the realization that research subjects want to be active participants who learn from the studies they join. That idea has won widespread acceptance, including in the NIH’s All of Us initiative, which aims to collect and study data from more than 1 million Americans to assess genetic and environmental impacts on health.

Perhaps most significant, ELSI research into the risk that genetic readouts could lead people to lose their jobs or insurance led to the Genetic Information Nondiscrimination Act of 2008. As a result of the act, reports of such discrimination are rare, experts say.

By contrast, there is no law—and only recently any public discussion—about how data companies use AI to generate financial and health risk scores on millions of Americans. These algorithms are being used to shape insurance policies and may affect doctors’ decisions, such as whether to prescribe a pain medication, all without consumers’ knowledge.

“I think there is now growing awareness that things can go awry with AI if we don’t pay attention to it,” Kohane said.

Since AI, unlike the genome project, largely emerged in the private sector, “there hasn’t been any centralized body dedicated to funding research and questions” associated with it, says Arti Rai, an attorney and ethicist at Duke University.

When Chinese scientist He Jiankui announced last year that he had created a gene-edited baby using a technology known as CRISPR, his achievement was met with near-universal condemnation, in part because understanding of the pitfalls of genetic enhancement had been developed over two decades by ELSI-funded lawyers and ethicists.

New developments in AI, by contrast, are often announced at flashy business presentations. And they are much harder to understand. Sometimes the engineers and scientists working on AI don’t even understand how the technology they’ve created is making decisions.

“The technology is opaque,” notes Mildred Cho, associate director of the Stanford University Biomedical Ethics program. The FDA has more or less given up on directly regulating AI-related software such as clinical decision support, which presents recommendations to nurses and doctors based on computer-processed data. Instead of approving individual products, the agency is moving toward a system in which it gives a seal of approval to medical software companies that demonstrate good practices.

Many technologists and social scientists see a need for stronger guardrails, both to protect the public and to shield the technology itself from backlash.

“I’m not an alarmist when it comes to machine learning, but I fear that something terrible happens and there won’t be a group of people who can explain what it means in an appropriate way,” Rai said.

Kohane is somewhat more skeptical of a big regulatory role for government. AI is not a coherent enough science to be regulated by a catch-all agency, and its risks to health and society are likely to be specific to particular contexts, he said.

But he and other AI experts agree on a role for an agency like the Office of Technology Assessment to help Congress understand the issue and help clarify its actions—say by setting out a framework for how the National Highway Traffic Safety Administration should regulate driverless cars.

To cite another example, if Congress wanted to punish Facebook over Cambridge Analytica’s use of the platform to target voters in 2016, it would be a political decision, Kohane said. But an expert group could help fine-tune the intervention to avoid unintended consequences, he said.

University of North Carolina bioethicist Eric Juengst, ELSI’s first leader, recalled a genomics conference years ago where a scientist complained that ELSI researchers were “constricting the pipeline” of genomic research. Maynard Olson, a leading genomicist, responded, “Think about why we have brakes on an automobile. It’s to enable us to go fast. Without brakes you’d have to go very slowly indeed to be safe.”

Just as ELSI studies helped ease the introduction of genetic technology into medicine, they could improve understanding of the risks and benefits of genetically modified foods, digital health records—and AI, notes Brody.

The interchange between technologists and end users such as clinicians is key to lowering the hype and fear around AI, said Finale Doshi-Velez, a Harvard computer scientist who works with clinicians to develop useful machine-learning algorithms. It’s humbling, she said.

“You come in with machine-learning hubris: We’re going to solve everything!” she said. “But you have to understand what the situation looks like on the ground. Providing the right information to a human partner is much trickier than how you make a prediction.”

Initially, machine learning in cancer focused on mortality prediction—who would survive? But after shadowing clinicians, Doshi-Velez realized, “their question is, ‘What’s going to improve the odds the best? What are the patient’s wishes in terms of comfort, and how do we manage that?’ They aren’t thinking, ‘What are the odds of mortality?’”

Whether the government might return to its previous role in reviewing new technologies remains uncertain.

The Government Accountability Office created a Science, Technology Assessment and Analytics team in January, but GAO’s interactions with Congress are formal and its reports can take months. Back in the day, members of Congress could call the Office of Technology Assessment on the phone for clarification of difficult technological issues, said Rep. Bill Foster (D-Ill.), a physicist who chairs the Financial Services Committee’s Task Force on Artificial Intelligence.

The House Appropriations Committee in May provided $6 million to restore the Office of Technology Assessment in the fiscal 2020 spending bill, and the full House is likely to approve it. But its future in the Senate is not yet clear, Foster said.

The House has also passed a resolution calling for ethical uses of artificial intelligence, and Sen. Brian Schatz (D-Hawaii) reintroduced a bill that would create a center of excellence at the General Services Administration to advise the government on AI.

For now, however, research by AI ethics groups in academia “hasn’t been translated into action,” said Topol, the founder of the Scripps Research Translational Institute. Some researchers have called for a moratorium on the use of facial recognition technology, for example, but “Is there anything being done?” Topol asked. “No. The tech keeps getting better, but nothing’s being done to put guardrails on it.”

Europe, which created a more stringent privacy standard for the internet, “is showing more teeth,” he said, in challenging tech titans like Google and Facebook.

“I wish we had more government leadership on AI like in the UK or France, but in this country, we’re distracted,” he said. “I wonder why?”
