In the 1880s, when the world-renowned Mayo Clinic was still a young fraternal surgical practice in the newish state of Minnesota, its doctors scribbled notes about their patients into heavy, leather-bound ledgers. But in 1907, a physician there named Henry Plummer came up with something better. He thought the episodes of a patient’s medical history should all be in one place, not scattered between many doctors’ journals. So he introduced a new system, creating for every Mayo patient a centrally housed file folder and a unique identifying number that was to be inscribed on every piece of paper that went inside it—doctor’s notes, lab results, patient correspondence, birth and death records. And recognizing the scientific value in these dossiers, he also convinced Mayo’s leadership to make them available for teaching and research to any physician at the practice.

This development marked the beginning of modern medical record-keeping in the US. And from the beginning, the endeavor has been animated by an inextricable tension between sharing and secrecy—between the potential to mine patient data for new medical insights and the rights of patients to keep that information private.

Last week, that tension came to the fore again when the Mayo Clinic announced that Google would begin securely storing the hospital’s patient data in a private corner of the company’s cloud. It’s a switch from Microsoft Azure, where Mayo has stored patient data since May of last year, when it completed a years-long project to get all of its care sites onto a single electronic health record system. (Project Plummer, it was called.)

The change signals the storied hospital’s ambitions for its vast troves of patient data. Google is leading the much-hyped effort to use artificial intelligence to improve health care, with experiments reading medical images, analyzing genomes, predicting kidney disease, and screening for eye problems caused by diabetes. As part of the 10-year partnership, Google plans to unleash its deep AI expertise on Mayo’s colossal collection of clinical records. The tech giant also plans to establish an office in Rochester, Minnesota, to support the partnership, but declined to say how many employees will staff it or when it will open.

Hospital officials say that strict controls will limit Google’s access to Mayo’s data. Yet despite the best intentions and loftiest goals, data has a way of escaping its silos. And some health data experts worry that these kinds of partnerships pull at the fraying threads of the US’s aging privacy laws and the patchwork of regulations covering medical data.

“The problem is Google’s business model is to use or sell data,” says Lawrence Gostin of Georgetown Law School, who has written extensively about reforming health data privacy laws. “I’m far from convinced that Google might not use identifiable information for its business purposes.”

That would be information the company is not supposed to have unless patients explicitly consent, according to the US Health Insurance Portability and Accountability Act, or HIPAA, the nation's preeminent health information privacy law. HIPAA bars health care providers from disclosing personally identifiable health information to third parties without express patient authorization.

But skeptics like Gostin have reason to doubt that Google's data-ravenous operating style is compatible with the sensitive business of health care. Some of the tech company's other health experiments have run into regulatory and legal trouble, including Streams, an app that its DeepMind subsidiary is developing into an AI-powered assistant for doctors and nurses. A partnership between DeepMind and the UK's National Health Service to trial the app broke the law by giving the company overly broad access to records on 1.6 million patients, according to a 2017 investigation by the country's data protection regulator.