To many doctors, Eric Topol is a machine. He is a prolific researcher who publishes in high-impact journals like Nature and Science, leads the Scripps Research Institute, and is a well-respected cardiologist based in La Jolla, California. So it is particularly impressive (and quite ironic) that, through it all, Topol managed to write the definitive tome, Deep Medicine (Basic Books, 2019), on how robots, by way of artificial intelligence, are being incorporated into healthcare, and what this means for the future of medicine.

Topol begins his book by making the case for AI through his own experience with a knee injury many years ago that left him with post-surgical complications. Had AI been available, he argues, the risk of those complications might have been identified earlier.

The following chapters take us on a journey through various scenarios, from a patient's stroke that was misattributed to a heart defect to the ways physicians' cognitive biases sometimes result in misdiagnoses. They also summarize the history of AI, which may have begun in 1936 with Alan Turing's work. Topol describes, in great detail, the journeys of various tech companies to develop AI-enabled wearables (the parallel efforts of AliveCor's Kardia wearable ECG and the Apple Watch ECG, for instance), as well as machine-learning algorithms intended to be incorporated into electronic health records. For a novice in either medicine or computer science, Topol intricately details, through tables and charts, both the terminology used to describe AI in healthcare and how it works.

It's clear that Topol has done painstakingly extensive research, collating what is likely every meaningful study on AI and healthcare, as well as each major global initiative, while incorporating insights from the most prominent influencers in tech and medicine (among whom Topol might count himself). Every time my inclination was to raise a counterpoint, Topol had beaten me to it on the next page, drawing on research published as recently as the end of 2018.

Arguably the most touching narrative is Topol's description of his father-in-law, John, who succumbed to an unknown illness after a lengthy battle in the hospital. Topol wonders whether AI might have helped predict both the illness and the likelihood of recovery, making John's last days a bit easier. In this regard, Topol lays out one of the controversial ethical dilemmas around AI in healthcare: its involvement in both predicting death and determining it. For example, as both Topol and a new paper ask, who might be blamed, given the medical industry's predilection toward litigation, if the "robot" is incorrect in such a scenario?

While these questions are left for the reader to reflect upon, Topol makes a clear case for the role of AI in fields such as radiology and dermatology, where doctors typically rely on pattern recognition and a methodical approach. Even fields such as ophthalmology are now benefiting from AI to detect eye diseases such as diabetic retinopathy, which could have widespread implications for screening, particularly in under-resourced areas in the U.S. and globally. As well, in specialties such as cardiology or general internal medicine, AI may function as a decision aid, helping physicians both better personalize medicine and avoid cognitive errors in diagnostic decision-making.

Above all, Deep Medicine is timely and necessary. Over the last two weeks, Google launched, and subsequently canceled, plans to create a committee to oversee the ethics of its AI work. And last Tuesday, the FDA posted a detailed plan about its intention to create regulations around the use of AI in health and medicine. In Canada, where I live, one of the biggest single investments in healthcare was recently made toward expanding research and development of AI.