The company trained its AI using de-identified data from patients in both the US and the UK, and showed that it could reduce false positives by 5.7 percent and false negatives by 9.4 percent in the US. Interestingly, the reductions in the UK were smaller: 1.2 percent for false positives and 2.7 percent for false negatives, suggesting that the current US detection system is less accurate than the current UK system.

Unlike the human experts, who used patient histories and prior mammograms to make their assessments, the AI only had access to the most recent mammogram of each patient. Despite this, it was able to make screening decisions with greater accuracy than the experts, and the model generalized across different populations -- such as women in the US compared to women in the UK.

The developers of the AI emphasize that this is early stage research and that more studies and cooperation with healthcare providers will be required before the system is ready for widespread use.

DeepMind's technology has been applied to medical problems in the past, from spotting eye diseases to predicting kidney illness, but it has also been the subject of considerable controversy. In 2017, it was revealed that the UK's National Health Service had shared data with DeepMind on an "inappropriate legal basis," with the company receiving 1.6 million patient records without the direct consent of the patients. The UK data watchdog ruled that this broke privacy laws, and the NHS chose to continue working with DeepMind but to anonymize data in future.

In 2018, DeepMind was brought under the Google Health initiative, and concerns about privacy were not assuaged when Google dissolved the review board that was supposed to oversee the company's relationship with the NHS. For all the potential good that a medical AI like this could do, there remains a troubling lack of oversight of patient data privacy and a lack of accountability for past data privacy failures.