A new academic report examining a deal between Google’s AI subsidiary DeepMind and the UK’s National Health Service (NHS) has said that the US tech giant made “inexcusable” errors in terms of transparency and oversight when handling sensitive medical information.

The data-sharing agreement — which was signed in 2015 and has since been superseded by a new contract — allowed DeepMind access to medical records from 1.6 million patients attending London hospitals run by the NHS Royal Free Trust. Although at the time Google presented the deal as primarily about finding patients at risk from a condition known as acute kidney injury, or AKI, the actual terms of the agreement, revealed in April 2016 by a New Scientist investigation, were broader.

The report notes that DeepMind was given access not only to relevant blood tests and diagnostics, but also to historical medical records dating back five years, including information on HIV diagnoses, drug overdoses, and abortions. The report also says the wording of the 2015 deal did not constrain the company from using AI analytical techniques on the data (something DeepMind disputes).

“there are real questions of autonomy, public value, and competition that people ought to be looking at.”

“I think it was a very flawed basis on which they originally operated,” Julia Powles, a postdoctoral researcher at the University of Cambridge and a co-author of the report, told The Verge. “I’m pro data-driven health innovation, but I think that there are real questions of autonomy, public value, and competition that people ought to be looking at.”

In response to the report, published today in the journal Health and Technology, DeepMind and the Royal Free issued a statement saying that the study “completely misrepresents the reality of how the NHS uses technology to process data.” They say: “It makes a series of significant factual and analytical errors, assuming that this kind of data agreement is unprecedented.” The report’s authors say these accusations are unfounded, and have asked DeepMind and the Royal Free to respond to them on the record and in a public forum.

The cause of friction between these two readings of the agreement is complex, but often comes down to questions of intent and good faith. For example, the report notes that the 2015 deal does not address contractually how medical records would be “cabined from other identifiable data stored by Google, given that [...] the company’s business model depends on monetizing personal data.” DeepMind’s response is that it made several public assurances that the Royal Free data would never be joined with Google user data, and that any such combination would be covered by the UK’s Data Protection Act, making it unnecessary to detail the issue within the 2015 agreement.

Powles and co-author Hal Hodson, who wrote the New Scientist story in April 2016, maintain that DeepMind’s ambitions in the health care industry are “vast” and “considerably out of step” with the company’s PR statements. They liken the relationship between the UK public and DeepMind to a one-way mirror. “Once our data makes its way onto Google-controlled servers, our ability to track it — to understand how and why decisions are made about us — is at an end,” they write.

“it doesn’t offer you a robust way of withdrawing from the system.”

DeepMind and Google have taken steps to respond to these criticisms already. In addition to the new data-sharing agreement signed with the NHS last November (still under review by the UK’s data watchdog), DeepMind has held a forum to answer questions from patients and members of the public about how its technology is being used. It also recently announced plans to develop blockchain technology that would let patients track every time their data is accessed. Powles praised the latter initiative, but suggested it didn’t go far enough in giving individuals power over their medical records. “What it doesn’t offer you is a robust way of withdrawing from the system,” she said.

More broadly, Powles and Hodson paint a picture of Google and DeepMind as companies that are poised to profit from access to UK health data that was created at the expense of the taxpayer. As well as its agreement with the Royal Free, DeepMind has partnered with other UK hospitals to access medical data including a million retina scans and hundreds of cancer scans. The exact terms of these deals — including how much DeepMind is getting paid for developing an app named Streams for the NHS — have been kept under wraps, despite requests for more information.

Nicola Perrin, part of the data and policy team at the Wellcome Trust, the world’s largest medical research charity, said that today’s report makes a number of important points but presents an “overly-critical” reading of the 2015 agreement. “There is no doubt that both DeepMind and the Royal Free made mistakes,” Perrin told The Verge. “But all the parties involved have also learned a significant amount.”

Perrin said: “It is important to remember [that] new data-driven technologies do offer real promise in improving healthcare. But the NHS will only be able to make the most of the opportunity that new technologies such as this can offer if the public has confidence that data is appropriately protected.”