Santiago Cepeda, Roberto Romero, Lidia Luque, Daniel García-Pérez, Guillermo Blasco, Luigi Tommaso Luppino, Samuel Kuttner, Olga Esteban-Sinovas, Ignacio Arrese, Ole Solheim, Live Eikenes, Anna Karlberg, Ángel Pérez-Núñez, Olivier Zanier, Carlo Serra, Victor E Staartjes, Andrea Bianconi, Luca Francesco Rossi, Diego Garbossa, Trinidad Escudero, Roberto Hornero, Rosario Sarabia
{"title":"基于深度学习的胶质母细胞瘤术后分割和切除程度评估:发展、外部验证和模型比较。","authors":"Santiago Cepeda, Roberto Romero, Lidia Luque, Daniel García-Pérez, Guillermo Blasco, Luigi Tommaso Luppino, Samuel Kuttner, Olga Esteban-Sinovas, Ignacio Arrese, Ole Solheim, Live Eikenes, Anna Karlberg, Ángel Pérez-Núñez, Olivier Zanier, Carlo Serra, Victor E Staartjes, Andrea Bianconi, Luca Francesco Rossi, Diego Garbossa, Trinidad Escudero, Roberto Hornero, Rosario Sarabia","doi":"10.1093/noajnl/vdae199","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The pursuit of automated methods to assess the extent of resection (EOR) in glioblastomas is challenging, requiring precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI). We also compared our model's performance with other available algorithms.</p><p><strong>Methods: </strong>To develop the segmentation model, a training cohort from 3 research institutions and 3 public databases was used. Multiparametric MRI scans with ground truth labels for contrast-enhancing tumor (ET), edema, and surgical cavity, served as training data. The models were trained using MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used.</p><p><strong>Results: </strong>The study included 586 scans: 395 for model training, 52 for model comparison, and 139 scans for independent validation. The nnU-Net framework produced the best model with median Dice scores of 0.81 for contrast ET, 0.77 for edema, and 0.81 for surgical cavities. 
Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort.</p><p><strong>Conclusions: </strong>Our nnU-Net-based model outperformed other algorithms in both segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.</p>","PeriodicalId":94157,"journal":{"name":"Neuro-oncology advances","volume":"6 1","pages":"vdae199"},"PeriodicalIF":3.7000,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11631186/pdf/","citationCount":"0","resultStr":"{\"title\":\"Deep learning-based postoperative glioblastoma segmentation and extent of resection evaluation: Development, external validation, and model comparison.\",\"authors\":\"Santiago Cepeda, Roberto Romero, Lidia Luque, Daniel García-Pérez, Guillermo Blasco, Luigi Tommaso Luppino, Samuel Kuttner, Olga Esteban-Sinovas, Ignacio Arrese, Ole Solheim, Live Eikenes, Anna Karlberg, Ángel Pérez-Núñez, Olivier Zanier, Carlo Serra, Victor E Staartjes, Andrea Bianconi, Luca Francesco Rossi, Diego Garbossa, Trinidad Escudero, Roberto Hornero, Rosario Sarabia\",\"doi\":\"10.1093/noajnl/vdae199\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>The pursuit of automated methods to assess the extent of resection (EOR) in glioblastomas is challenging, requiring precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI). We also compared our model's performance with other available algorithms.</p><p><strong>Methods: </strong>To develop the segmentation model, a training cohort from 3 research institutions and 3 public databases was used. Multiparametric MRI scans with ground truth labels for contrast-enhancing tumor (ET), edema, and surgical cavity, served as training data. The models were trained using MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used.</p><p><strong>Results: </strong>The study included 586 scans: 395 for model training, 52 for model comparison, and 139 scans for independent validation. The nnU-Net framework produced the best model with median Dice scores of 0.81 for contrast ET, 0.77 for edema, and 0.81 for surgical cavities. 
Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort.</p><p><strong>Conclusions: </strong>Our nnU-Net-based model outperformed other algorithms in both segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.</p>\",\"PeriodicalId\":94157,\"journal\":{\"name\":\"Neuro-oncology advances\",\"volume\":\"6 1\",\"pages\":\"vdae199\"},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-11-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11631186/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuro-oncology advances\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/noajnl/vdae199\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q1\",\"JCRName\":\"CLINICAL NEUROLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuro-oncology advances","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/noajnl/vdae199","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"CLINICAL NEUROLOGY","Score":null,"Total":0}
Deep learning-based postoperative glioblastoma segmentation and extent of resection evaluation: Development, external validation, and model comparison.
Background: Automated assessment of the extent of resection (EOR) in glioblastoma is challenging because it requires precise measurement of residual tumor volume. Many algorithms focus on preoperative scans, making them unsuitable for postoperative studies. Our objective was to develop a deep learning-based model for postoperative segmentation using magnetic resonance imaging (MRI) and to compare its performance with other available algorithms.
Methods: To develop the segmentation model, a training cohort drawn from 3 research institutions and 3 public databases was used. Multiparametric MRI scans with ground-truth labels for contrast-enhancing tumor (ET), edema, and surgical cavity served as training data. The models were trained using the MONAI and nnU-Net frameworks. Comparisons were made with currently available segmentation models using an external cohort from a research institution and a public database. Additionally, the model's ability to classify EOR was evaluated using the RANO-Resect classification system. To further validate our best-trained model, an additional independent cohort was used.
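The abstract does not describe the evaluation code itself, but segmentation quality of this kind is typically summarized with a per-class Dice overlap between predicted and ground-truth label maps. The following is a minimal sketch of such a computation for the three tissue classes named above; it is not the authors' implementation, and the integer label codes and function names are assumptions chosen for illustration only.

```python
# Minimal sketch (not the authors' code): per-class Dice overlap between a
# predicted and a ground-truth label map. Label codes below are hypothetical.
import numpy as np

LABELS = {"enhancing_tumor": 1, "edema": 2, "surgical_cavity": 3}  # assumed codes

def dice_per_class(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Return the Dice coefficient for each labeled tissue class.

    Dice = 2*|P ∩ T| / (|P| + |T|). Returns NaN when a class is absent from
    both masks, so empty classes do not distort summary statistics.
    """
    scores = {}
    for name, code in LABELS.items():
        p = pred == code
        t = truth == code
        denom = p.sum() + t.sum()
        scores[name] = float("nan") if denom == 0 else float(2.0 * np.logical_and(p, t).sum() / denom)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.integers(0, 4, size=(64, 64, 64))   # toy ground-truth label map
    pred = truth.copy()
    pred[rng.random(pred.shape) < 0.1] = 0          # corrupt 10% of voxels
    print(dice_per_class(pred, truth))
```

In practice the label maps would be loaded from the postoperative MRI segmentations rather than generated synthetically; the toy arrays here only demonstrate the metric.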
Results: The study included 586 scans: 395 for model training, 52 for model comparison, and 139 for independent validation. The nnU-Net framework produced the best model, with median Dice scores of 0.81 for ET, 0.77 for edema, and 0.81 for the surgical cavity. Our best-trained model classified patients into maximal and submaximal resection categories with 96% accuracy in the model comparison dataset and 84% in the independent validation cohort.
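The maximal-versus-submaximal call reported here follows the RANO-Resect scheme, which dichotomizes contrast-enhancing (CE) residual tumor volume. The sketch below shows how segmented volumes could be turned into such a call; the exact thresholds used in the paper are not given in the abstract, so the 1 cm³ residual cutoff and the helper names are illustrative assumptions rather than the authors' rule.

```python
# Minimal sketch (assumptions noted in comments, not the authors' implementation):
# classify extent of resection from contrast-enhancing (CE) tumor volumes.
import numpy as np

def volume_cm3(mask: np.ndarray, voxel_volume_mm3: float) -> float:
    """Volume of a binary mask in cm^3 (1 cm^3 = 1000 mm^3)."""
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0

def classify_resection(preop_ce_cm3: float, residual_ce_cm3: float,
                       max_residual_cm3: float = 1.0) -> str:
    """Label a case as maximal or submaximal CE resection.

    EOR = (preop - residual) / preop. The 1 cm^3 residual cutoff mirrors the
    boundary commonly cited between maximal and submaximal CE resection in the
    RANO-Resect scheme; the exact rule in the paper (which may also involve a
    relative EOR requirement) may differ.
    """
    eor = (preop_ce_cm3 - residual_ce_cm3) / preop_ce_cm3 if preop_ce_cm3 > 0 else float("nan")
    category = "maximal" if residual_ce_cm3 <= max_residual_cm3 else "submaximal"
    return f"{category} (EOR = {eor:.1%}, residual CE = {residual_ce_cm3:.2f} cm^3)"

if __name__ == "__main__":
    # Toy example: 30 cm^3 preoperative CE tumor, 0.4 cm^3 residual CE tumor.
    print(classify_resection(preop_ce_cm3=30.0, residual_ce_cm3=0.4))
```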
Conclusions: Our nnU-Net-based model outperformed other algorithms in both segmentation and EOR classification tasks, providing a freely accessible tool with promising clinical applicability.