From Pixels to Predictions: Development of a multimodal deep learning algorithm for accurate and efficient erythema score assessment in radiation-induced dermatitis


Introduction & Objectives: Despite significant progress in AI-based computer-aided diagnostics, few viable methods exist for the analysis and categorisation of radiation-induced skin reactions (RISRs), making this study a step towards filling that gap. The study aims to develop a deep learning algorithm for the automatic classification of RISRs in accordance with the Common Terminology Criteria for Adverse Events (CTCAE) grading system. To this end, Scarletred® Vision, a state-of-the-art digital skin imaging method that allows standardised monitoring and objective assessment of acute RISRs, was used. SEV* (Standardized Erythema Value) measurements were performed on 2D digital skin images after transforming them into the CIELAB colour space.

Materials & Methods: In terms of data, we propose a robust pipeline to collect a dedicated, diverse, and curated image dataset using Scarletred® Vision, a CE-certified medical device software platform. On these data we trained a deep learning-based multimodal system that incorporates the Scarletred® Vision augmentation pipeline. The dataset comprised 2192 images with an 80:20 train–test split and image dimensions of 512 x 512; we used patch dimensions of 32 x 32, yielding 256 patches per image. The approach combined input image patching, position embedding, and feature augmentation, with image augmentation guided by feedback from ensembled predictions (R. Ranjan et al., 2021).

Results: The proposed model achieved high precision (92.51%), recall (91.21%), and F-score (91.83%), with a test accuracy of 92.02%. The model's performance was also evaluated class-wise: sensitivity for classes 0, 1, and 2 was 96%, 94%, and 86%, respectively, and specificity was 94%, 97%, and 97% for the respective classes.
The class-wise accuracy of the model was 96%, 94%, and 86% for classes 0, 1, and 2, respectively. These results indicate a high level of precision and discriminatory power across all three classes.

Conclusion: This study provides the first benchmark results using a multimodal deep learning algorithm for erythema severity score classification. The data highlight that severity scoring of RISRs can be automated using Scarletred® Vision as a decision support system. Our algorithm is currently being refined to also estimate the localisation, size, and tissue composition of the lesion.
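The study's pipeline is not published as code, but the stated patch configuration (512 x 512 images split into 32 x 32 patches, giving 256 patches per image) can be sketched with NumPy. This is a minimal illustration, not the authors' implementation; the random projection and position embedding stand in for parameters that would be learned during training.

```python
import numpy as np

def extract_patches(img, patch=32):
    """Split an H x W x C image into non-overlapping patch x patch tiles.

    A 512 x 512 input with 32 x 32 patches yields 16 x 16 = 256 patches,
    matching the configuration described in the abstract.
    """
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    tiles = img.reshape(h // patch, patch, w // patch, patch, c)
    return tiles.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, c)

def add_position_embedding(patches, dim=64, seed=0):
    """Flatten patches to tokens, project them, and add a position embedding.

    The projection and embedding are random here purely for illustration;
    in a real model both would be trainable parameters.
    """
    rng = np.random.default_rng(seed)
    n = patches.shape[0]
    flat = patches.reshape(n, -1).astype(np.float32)
    proj = rng.normal(size=(flat.shape[1], dim)).astype(np.float32)
    pos = rng.normal(size=(n, dim)).astype(np.float32)
    return flat @ proj + pos

# Stand-in for one CIELAB-transformed skin image.
img = np.zeros((512, 512, 3), dtype=np.float32)
patches = extract_patches(img)
print(patches.shape)  # (256, 32, 32, 3)
tokens = add_position_embedding(patches)
print(tokens.shape)   # (256, 64)
```

Each 32 x 32 patch becomes one token, so the model sees a sequence of 256 position-tagged tokens per image, the standard input format for patch-based (ViT-style) classifiers.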
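The class-wise sensitivity and specificity reported above can be derived from a confusion matrix. The sketch below, assuming NumPy, shows the standard computation on an illustrative 3-class matrix; the counts are made up for the example and are not the study's data.

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity (recall) and specificity from a confusion
    matrix whose rows are true labels and columns are predicted labels."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    sens, spec = [], []
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k].sum() - tp          # class-k samples predicted as other classes
        fp = cm[:, k].sum() - tp       # other-class samples predicted as class k
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return sens, spec

# Illustrative 3-class confusion matrix (100 samples per class).
cm = [[96, 3, 1],
      [4, 94, 2],
      [6, 8, 86]]
sens, spec = per_class_metrics(cm)
print(sens)  # [0.96, 0.94, 0.86]
```

Sensitivity for class k is the fraction of true class-k samples recovered, while specificity is the fraction of non-class-k samples correctly rejected, which is why a class can have low sensitivity yet high specificity, as seen for class 2 in the reported results.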



