Improving the prognostication of melanoma is crucial for selecting patients for effective adjuvant therapies. Because tumor tissue contains a large amount of clinically relevant information that routine assessment does not fully exploit, we applied a weakly-supervised deep learning approach to H&E-stained whole slide images (WSIs) to directly predict BRAF mutational status.
We designed an artificial intelligence algorithm that extracts features from patches of the WSIs (tiled without padding) using a pre-trained deep neural network. These features are then fed into a classifier that assigns a BRAF mutational-status probability to each WSI. The model was trained and validated with a cross-validation scheme on 220 WSIs from the IHP Group cohort (IHP-MEL-BRAF), covering patients included from April 2014 to January 2023, and was tested on the publicly available TCGA cohort. We used the area under the curve (AUC) as the metric to assess the performance of the model.
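The abstract does not specify the classifier architecture, but a slide-level probability built from weighted tile features (consistent with the tile weights mentioned in the results) can be sketched as attention-based multiple-instance learning. The sketch below is a minimal NumPy illustration under that assumption; the feature dimension, tile count, and weight vectors are hypothetical placeholders, not the authors' model.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_score(tile_features, w_att, w_cls):
    """Score one WSI from its tile features (illustrative sketch).

    tile_features: (n_tiles, d) features from a pre-trained encoder.
    w_att: (d,) attention projection; w_cls: (d,) classifier weights.
    Returns (slide-level probability, per-tile attention weights).
    """
    # One attention logit per tile, normalised across the slide.
    att = softmax(tile_features @ w_att)           # (n_tiles,)
    # Attention-weighted average pools the tiles into one slide vector.
    slide_vec = att @ tile_features                # (d,)
    # Logistic head maps the slide vector to a mutation probability.
    prob = 1.0 / (1.0 + np.exp(-slide_vec @ w_cls))
    return prob, att

# Toy usage: 50 tiles with 16-dim features drawn at random.
rng = np.random.default_rng(0)
feats = rng.normal(size=(50, 16))
prob, att = attention_mil_score(feats, rng.normal(size=16), rng.normal(size=16))
```

Because the attention weights sum to one over the slide, the highest-weighted tiles directly indicate which regions drove the prediction, which is what enables the phenotype exploration described in the results.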
The model yielded an AUC of 78.3% on the cross-validation folds and 75.7% on the testing folds. We show that the performance of the model depends on the amount of tumoral tissue present in the WSI, and that thin melanomas are particularly prone to false predictions. On the external TCGA cohort, the model achieved an AUC of 68.3%. Because the model learns to assign a weight to the tiles contributing to the mutational-status prediction, we explore the main phenotypes most likely to explain BRAF mutational status.
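The reported AUCs have a simple probabilistic reading: the chance that a randomly chosen BRAF-mutated slide receives a higher score than a randomly chosen wild-type slide. A minimal reference implementation (equivalent to the normalised Mann-Whitney U statistic, with ties counted as half) makes this concrete; the labels and scores below are toy values, not study data.

```python
def auc_from_scores(labels, scores):
    """Empirical AUC: P(score of a random positive > score of a
    random negative), counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Pairwise comparison of every positive against every negative.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy usage: a perfect ranking of mutated (1) vs wild-type (0) slides.
auc = auc_from_scores([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])  # → 1.0
```

An AUC of 68.3% on TCGA therefore means that roughly two out of three randomly drawn mutated/wild-type slide pairs are ranked correctly by the model on external data.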
This pilot study in melanoma demonstrates that this novel deep-learning approach to H&E image analysis is capable of discovering new digital predictive and prognostic biomarkers. It has the advantage of leaving the exploration of the WSI and the selection of regions of interest entirely to the AI, thus reducing the bias introduced by manual annotations. These findings could accelerate and improve clinical decision-making in melanoma in the near future.
C Bossard, Y Salhi, J Chetritt, S Salhi