Electronic health records store valuable information about hospital patients, but they're often sparse and unstructured, making them difficult for potentially labor- and time-saving AI systems to parse. Fortunately, researchers at New York University and Princeton have developed a framework that evaluates clinical notes (i.e., descriptions of symptoms, reasons for diagnoses, and radiology results) and autonomously assigns a risk score indicating whether a patient will be readmitted within 30 days. The code and model parameters are publicly available on GitHub, and the researchers claim the system handily outperforms baselines.

“Accurately predicting readmission has clinical significance both in terms of efficiency and reducing the burden on intensive care unit doctors,” the paper’s authors wrote. “One estimate puts the financial burden of readmission at $17.9 billion and the fraction of avoidable admissions at 76 percent.”

As the researchers point out in a preprint paper on arXiv.org (“ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission”), clinical notes use abbreviations and jargon, and they’re often lengthy, which poses an AI system design challenge. To overcome it, they used a natural language processing method — Google’s bidirectional encoder representations from transformers, or BERT — that captures interactions between distant words in a text by incorporating global, long-range information.
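The mechanism behind that long-range capability is self-attention, in which every token's representation is recomputed as a weighted mix of all other tokens' representations. A minimal NumPy sketch of (single-head, unprojected) scaled dot-product self-attention illustrates the idea — this is a simplification of the transformer layers BERT actually uses, not the authors' code:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of token vectors.

    Each output row is a weighted combination of *all* input rows, so
    information from distant tokens can reach any position -- the property
    that helps with long, jargon-heavy clinical notes.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x

tokens = np.random.default_rng(0).normal(size=(6, 8))  # 6 tokens, dim 8
out = self_attention(tokens)
print(out.shape)  # (6, 8): same shape, but every token has seen every other
```

Real BERT layers add learned query/key/value projections, multiple heads, and feed-forward sublayers on top of this core operation.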

Each clinical note is represented as a collection of tokens — subword units extracted from the text in a preprocessing step. For each token, ClinicalBERT combines a token embedding with a segment embedding that identifies which sequence the token belongs to and a position embedding that encodes where in the sequence it appears. A special token used in classification tasks is also inserted in front of every sequence.
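In BERT-style models, this packing step conventionally uses the `[CLS]` classification token and a `[SEP]` separator between sequences. A small sketch of the idea (the function name and toy tokens are illustrative, not the authors' preprocessing code):

```python
def pack_sequences(seq_a, seq_b):
    """Prepend [CLS], separate two token sequences with [SEP], and emit
    the token, segment, and position ids a BERT-style model consumes."""
    tokens = ["[CLS]"] + seq_a + ["[SEP]"] + seq_b + ["[SEP]"]
    # Segment ids mark which sequence each token belongs to.
    segments = [0] * (len(seq_a) + 2) + [1] * (len(seq_b) + 1)
    # Position ids tell the model where in the input each token sits.
    positions = list(range(len(tokens)))
    return tokens, segments, positions

tokens, segments, positions = pack_sequences(
    ["acute", "renal", "failure"], ["di", "##alysis", "started"])
print(tokens[0])  # [CLS] -- its final hidden state feeds the classifier
```

The `##` prefix marks a subword continuation in BERT's WordPiece vocabulary, which is how rare clinical terms get split into units the model knows.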

To train ClinicalBERT, the team sourced a corpus of clinical notes and masked 15 percent of the input tokens, forcing the model to predict the concealed tokens and whether any two given sentences were in consecutive order. Then, drawing on the Medical Information Mart for Intensive Care (MIMIC-III), an electronic health records data set comprising over two million notes from 58,976 hospital admissions of 38,597 patients, the researchers fine-tuned the system for clinical forecasting tasks.
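The masking step can be sketched in a few lines. This is a simplified illustration of the masked-language-model objective — real BERT pretraining also sometimes swaps in random tokens or leaves the chosen token unchanged, and operates on subword ids rather than words:

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Replace ~15% of tokens with [MASK] and record the hidden originals,
    which become the targets the model must reconstruct."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok          # ground truth for the training loss
            masked.append("[MASK]")
        else:
            masked.append(tok)
    return masked, targets

note = "patient admitted with chest pain and shortness of breath".split()
masked, targets = mask_tokens(note)
print(masked)
```

Because the model must infer each hidden token from both its left and right context, this objective is what makes BERT bidirectional.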

Tested on a benchmark of 30 medical term pairs designed to assess semantic similarity, the authors report, ClinicalBERT achieved a high correlation score, indicating that its learned representations captured similarity between medical concepts. Heart-related concepts like myocardial infarction, atrial fibrillation, and myocardium were close together, they say, and renal failure and kidney failure were also close.
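"Close together" is typically measured with cosine similarity between embedding vectors. A toy sketch with invented 3-dimensional vectors — the real ClinicalBERT embeddings are high-dimensional and learned from data:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; values near 1.0 mean the
    model places the corresponding terms close together."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy stand-ins for learned embeddings of three medical terms.
renal_failure  = np.array([0.9, 0.1, 0.2])
kidney_failure = np.array([0.8, 0.2, 0.1])
myocardium     = np.array([0.1, 0.9, 0.3])

print(cosine_similarity(renal_failure, kidney_failure))  # near 1: synonyms
print(cosine_similarity(renal_failure, myocardium))      # lower: unrelated organ
```

A high correlation between these model-computed similarities and clinician judgments on the 30 term pairs is what the authors report.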

In a separate experiment involving 48 or 72 hours of concatenated notes from 34,560 patients in the MIMIC-III corpus, the team claims that ClinicalBERT showed improved 30-day readmission prediction over models that rely solely on discharge summaries, yielding a 15% relative increase in recall. Moreover, they say that as the length of admissions and the amount of accessible clinical notes increased, the system began to outperform the original BERT model in language modeling tasks.
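Because an admission's concatenated notes can exceed BERT's input length, long records must be split into chunks that are scored separately and then aggregated into one patient-level risk. One plausible aggregation — blending the maximum chunk score (a single alarming note matters) with the mean (consistent signal matters) — is sketched below; the function name and the constant `c` are illustrative assumptions, not necessarily the paper's exact formula:

```python
import numpy as np

def readmission_risk(chunk_probs, c=2.0):
    """Combine per-chunk readmission probabilities into one score.

    The max term catches a single high-risk note; the mean term, weighted
    by the number of chunks n relative to the constant c, rewards a
    consistently elevated signal across a long admission.
    """
    p = np.asarray(chunk_probs, dtype=float)
    n = len(p)
    return float((p.max() + p.mean() * n / c) / (1 + n / c))

# Three note chunks from one admission, scored independently upstream.
print(readmission_risk([0.2, 0.3, 0.9]))  # about 0.64
```

The result always stays in [0, 1], since both the max and the mean of probabilities are bounded by 1.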

“[ClinicalBERT] can help care providers make informed decisions and intervene in advance if needed,” the researchers wrote. “[Its] output … can be traced back to understand which elements of clinical notes were relevant to the current prediction, [and it’s] also readily adapted to other tasks such as diagnosis predictions, mortality risk estimation, or length of stay assessments.”