Imagine uploading a bitmap image of a chest x-ray to a web-based tool and getting relevant diagnoses back in seconds. Chester the AI radiology assistant aims to do something close to that: you upload a bitmap of a frontal chest radiograph, and the system quickly assesses the likelihood of 14 distinct categories of pathology, including masses, pneumonia, and cardiac enlargement. A video overview of the system has been provided here, and the algorithm itself is described in an arXiv paper. Related news coverage claims that “this free AI reads X-rays as well as doctors,” though to the best of my knowledge this claim has not been formally tested by anyone. Chester was trained on the ChestX-ray8 dataset released by the NIH in 2017, which has been extensively criticized, among other things for inaccurate image labels. As we will see, this likely had profound consequences for the way Chester interprets images.
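The workflow described above (one frontal radiograph in, fourteen independent likelihoods out) is what machine-learning practitioners call multi-label classification. Below is a minimal sketch of what the output step of such a system might look like; the per-class sigmoid head and the exact label set are my assumptions based on common practice with the NIH ChestX-ray14 labels, not details confirmed by the post or checked against the arXiv paper.

```python
import math

# The 14 pathology labels used in the NIH ChestX-ray14 release.
# (Assumed here; Chester's actual label order is not given in the post.)
LABELS = [
    "Atelectasis", "Cardiomegaly", "Effusion", "Infiltration", "Mass",
    "Nodule", "Pneumonia", "Pneumothorax", "Consolidation", "Edema",
    "Emphysema", "Fibrosis", "Pleural Thickening", "Hernia",
]

def sigmoid(x: float) -> float:
    """Logistic function: maps a raw network logit to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_report(logits: list[float]) -> dict[str, float]:
    """Turn one logit per pathology into an independent likelihood per label.

    Multi-label models use a sigmoid per class rather than a softmax,
    because several findings can coexist in the same radiograph, so the
    fourteen probabilities need not sum to one.
    """
    assert len(logits) == len(LABELS)
    return {label: round(sigmoid(z), 3) for label, z in zip(LABELS, logits)}

# Example: mostly negative logits, with an elevated "Mass" score.
example = risk_report([-2.0, -1.5, -3.0, -0.5, 1.2, -2.2, -1.0,
                       -2.5, 0.3, -3.1, -2.8, -2.0, -1.7, -4.0])
```

The key design point is the independence of the outputs: a high likelihood for one finding says nothing, by construction, about the others, which is why a system like this can rank a mimic above the true diagnosis without any internal contradiction.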

In this post I test the system against typical Radiopaedia examples of each of the fourteen abnormalities Chester is claimed to detect. The advantage of Radiopaedia is that, unlike most radiology training datasets used in deep learning research, its database comes from a myriad of different institutions around the globe, and the diagnoses have been curated and verified by professional editors. Let's see how Chester fares against these cases:

Atelectasis

Atelectasis is a rather broad category, but it can hardly be more obvious than when a whole lobe (in this case the right upper one) is collapsed. As we can see, the algorithm assigns minimal likelihood to the actual condition, while potential mimics (mass, consolidation) receive significantly greater weight. (To be fair, this is one condition where lateral views can be of great use, but Chester cannot yet handle those.)