Abstract
Background: Automated brain tumour identification using artificial intelligence has the potential to facilitate the diagnosis and treatment planning of brain tumours. The performance of automated techniques, both traditional machine learning (TML) and deep learning (DL), in brain tumour identification using magnetic resonance imaging (MRI) was evaluated.
Methods: First, a systematic review and meta-analysis were performed of studies utilising automated methods. Then a retrospective cohort study was undertaken to evaluate three automated segmentation methods (DeepMedic, Random Forest and 3D-Unet), using routinely acquired pre-operative glioblastoma MRI data. Dice similarity coefficients (DSC) were computed for the evaluation.
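For reference, the DSC used throughout is the standard overlap metric between a predicted and a reference (manual) segmentation mask. Below is a minimal illustrative sketch of how it is typically computed on binary masks; this is not code from the thesis, and the function name and the "both masks empty" convention are assumptions.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |pred AND truth| / (|pred| + |truth|), ranging from 0 (no overlap)
    to 1 (perfect overlap). A DSC >= 0.70 is treated as "good" in this work.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    if denominator == 0:
        return 1.0  # assumed convention: two empty masks count as perfect agreement
    return 2.0 * intersection / denominator
```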
Results: Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. DL achieved a higher DSC than TML, particularly for the tumour core (TC): 0.80 [95% CI: 0.77-0.83] versus 0.63 [0.56-0.71] (p < 0.001). Both manual and automated whole tumour (WT) segmentation had "good" (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated: 0.78 [0.69-0.86] versus 0.64 [0.53-0.74] (p = 0.014). Only 30% of studies reported external validation. In our cohort study, both the DeepMedic (0.88 [0.82-0.93]) and Random Forest (0.84 [0.79-0.89]) models achieved "good" segmentation results (DSC ≥ 0.70); however, the 3D-Unet model (0.72 [0.67-0.77]) did not significantly exceed the "good" threshold. All three models had lower DSCs during external validation than during internal validation; however, only the DeepMedic (0.79 [0.77-0.81]) and Random Forest (0.75 [0.72-0.78]) models maintained a "good" DSC. Overall, for sub-compartmental segmentation (TC and enhancing tumour [ET]), none of the models achieved a "good" result (DSC < 0.70).
Conclusion: The comparable performance of automated and manual WT segmentation supports its integration into clinical practice. However, the superiority of manual segmentation for the sub-compartments highlights the need for further development in this area. Improvements in the quality and design of studies, including external validation, are required to ensure the interpretability and generalizability of automated models.
Date of Award | 2023 |
---|---|
Original language | English |
Awarding Institution | |
Sponsors | SINAPSE (Scottish Imaging Network: A Platform for Scientific Excellence) |
Supervisor | Douglas Steele (Supervisor) & Kismet Hossain-Ibrahim (Supervisor) |