DISCLAIMER: This is a RESEARCH USE ONLY tool. Its purpose is to help clinicians and researchers explore the value of AI in medical imaging. These AI models are not regulated or validated and are intended for suggestion purposes only.
Please note that the Arterys Marketplace is in beta. The imaging data you upload will not be retained past the beta program.

Opacity detection (pneumonia) on chest x-ray

Winning model of the 2018 RSNA Pneumonia detection challenge

Your own data

Cost: Free during Beta

This model detects and localizes lung opacities suspicious for pneumonia on frontal (PA or AP) chest radiographs when a suggestive clinical context is present. It is an ensemble of a classification model and five instance object detection models, and it can detect single or multiple distinct opacities on a single image. Bounding boxes define the area boundaries of the detected lung opacities.
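The classifier-plus-detectors ensemble can be sketched in a few lines. This is a minimal illustration of the idea only, assuming an image-level classifier score that gates pooled detector boxes; the function names, the 0.5 gate, and the confidence-sorted pooling are illustrative assumptions, not the winning solution's actual fusion logic.

```python
# Hypothetical sketch of combining a classification model with several
# detection models; all names and thresholds here are assumptions.

def ensemble_predict(classifier_score, detector_boxes, gate=0.5):
    """Return pooled boxes only when the image-level classifier agrees.

    classifier_score: probability of pneumonia from the classification model.
    detector_boxes: list of box lists, one per detection model, where each
                    box is an (x, y, w, h, confidence) tuple.
    """
    if classifier_score < gate:
        # The classifier vetoes all detections on likely-negative images.
        return []
    # Pool boxes from every detector into one list.
    pooled = [box for boxes in detector_boxes for box in boxes]
    # Order by confidence as a stand-in for a real box-fusion step.
    return sorted(pooled, key=lambda box: box[-1], reverse=True)

boxes = ensemble_predict(0.9, [[(10, 20, 50, 60, 0.80)],
                               [(12, 22, 48, 58, 0.85)]])
print(len(boxes))  # 2
```

A real ensemble would also merge overlapping boxes from different detectors (for example with non-maximum suppression); the sketch keeps them separate for clarity.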

Intended Use

For PA (Posterior-Anterior) or AP (Anterior-Posterior) View Chest X-rays

Application hypothesis/alternative usage during the Covid-19 pandemic: In the current pandemic context, multifocal lung opacities with an associated clinical presentation (cough, fever, respiratory distress) are suggestive of, but not specific to, a Covid-19 pneumonia diagnosis. Detection of 2 or more bounding boxes is compatible with multifocal opacities. A positive finding could clinically warrant further testing or isolation procedures. Applying a positivity threshold of a single bounding box could enhance sensitivity for Covid-19 pneumonia, but with lower specificity.
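The positivity rule described above can be sketched as a simple count over detected boxes. This is an illustrative sketch only: the model's real output format is not specified here, so the (x, y, w, h, score) box tuples, the 0.5 score threshold, and the function names are assumptions.

```python
# Hypothetical sketch of the bounding-box positivity rules described above.

def count_opacities(boxes, score_threshold=0.5):
    """Count bounding boxes whose confidence meets the threshold.

    boxes is assumed to be a list of (x, y, w, h, score) tuples,
    one per detected lung opacity.
    """
    return sum(1 for (*_, score) in boxes if score >= score_threshold)

def multifocal_flag(boxes, min_boxes=2):
    """Flag a study as compatible with multifocal opacities.

    min_boxes=2 follows the multifocal criterion above; lowering it to 1
    would raise sensitivity at the cost of specificity.
    """
    return count_opacities(boxes) >= min_boxes

detections = [(120, 80, 60, 70, 0.91), (300, 95, 55, 65, 0.78)]
print(multifocal_flag(detections))  # True: two confident boxes
```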


The test results are based on a single data source, a subset of the original NIH CXR14 dataset. The ability to transfer the reported performance to different data sources has not been confirmed. Different acquisition machines, acquisition protocols, patient demographics, label interpretations, or disease prevalence could yield different performance results.

Information on training data

The training dataset was composed of 25,684 frontal chest radiographs that were strongly labeled by 6 board-certified US radiologists. Details of the annotation method are described here: https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/discussion/64723#379805. The test dataset was composed of 3,000 radiographs divided into 2 test phases during the RSNA challenge. Test samples were independently strongly labeled by 3 different board-certified radiologists, including 2 expert radiologists who are members of the Society of Thoracic Radiology (STR).

Model performance metrics

Mean average precision = 0.232
Estimated classification AUROC = 0.894

For comparison, classification models in the literature for similar labels (infiltration, pneumonia, consolidation) report AUROC classification performance between 0.609 and 0.790.

Performance Curves

RSNA 2018 Pneumonia detection challenge winner

Ian Pan is at Brown University; Alexandre Cadrin-Chenevert is at CISSS Lanaudiere/Laval University.