{"title":"Classifying driver mutations of papillary thyroid carcinoma on whole slide image: an automated workflow applying deep convolutional neural network.","authors":"Peiling Tsou, Chang-Jiun Wu","doi":"10.3389/fendo.2024.1395979","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Informative biomarkers play a vital role in guiding clinical decisions regarding management of cancers. We have previously demonstrated the potential of a deep convolutional neural network (CNN) for predicting cancer driver gene mutations from expert-curated histopathologic images in papillary thyroid carcinomas (PTCs). Recognizing the importance of whole slide image (WSI) analysis for clinical application, we aimed to develop an automated image preprocessing workflow that uses WSI inputs to categorize PTCs based on driver mutations.</p><p><strong>Methods: </strong>Histopathology slides from The Cancer Genome Atlas (TCGA) repository were utilized for diagnostic purposes. These slides underwent an automated tile extraction and preprocessing pipeline to ensure analysis-ready quality. Next, the extracted image tiles were utilized to train a deep learning CNN model, specifically Google's Inception v3, for the classification of PTCs. The model was trained to distinguish between different groups based on <i>BRAF<sup>V600E</sup></i> or <i>RAS</i> mutations.</p><p><strong>Results: </strong>The newly developed pipeline performed equally well as the expert-curated image classifier. The best model achieved Area Under the Curve (AUC) values of 0.86 (ranging from 0.847 to 0.872) for validation and 0.865 (ranging from 0.854 to 0.876) for the final testing subsets. Notably, it accurately predicted 90% of tumors in the validation set and 84.2% in the final testing set. Furthermore, the performance of our new classifier showed a strong correlation with the expert-curated classifier (Spearman rho = 0.726, p = 5.28 e-08), and correlated with the molecular expression-based classifier, BRS (BRAF-RAS scores) (Spearman rho = 0.418, p = 1.92e-13).</p><p><strong>Conclusions: </strong>Utilizing WSIs, we implemented an automated workflow with deep CNN model that accurately classifies driver mutations in PTCs.</p>","PeriodicalId":12447,"journal":{"name":"Frontiers in Endocrinology","volume":"15 ","pages":"1395979"},"PeriodicalIF":3.9000,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11573888/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Endocrinology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3389/fendo.2024.1395979","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ENDOCRINOLOGY & METABOLISM","Score":null,"Total":0}
Citations: 0
Abstract
Background: Informative biomarkers play a vital role in guiding clinical decisions regarding the management of cancers. We have previously demonstrated the potential of a deep convolutional neural network (CNN) for predicting cancer driver gene mutations from expert-curated histopathologic images in papillary thyroid carcinomas (PTCs). Recognizing the importance of whole slide image (WSI) analysis for clinical application, we aimed to develop an automated image preprocessing workflow that uses WSI inputs to categorize PTCs based on driver mutations.
Methods: Diagnostic histopathology slides from The Cancer Genome Atlas (TCGA) repository were used. The slides underwent an automated tile-extraction and preprocessing pipeline to ensure analysis-ready quality. The extracted image tiles were then used to train a deep-learning CNN, Google's Inception v3, to classify PTCs into groups defined by BRAF V600E or RAS mutations.
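The paper does not include source code; purely as an illustration, the minimal sketch below shows what a tile-extraction and Inception v3 setup step of this kind could look like. The library choices (openslide, tensorflow.keras), tile size, background filter, and hyperparameters are assumptions for the sketch, not the authors' actual pipeline settings.

```python
# Hypothetical sketch of WSI tile extraction plus an Inception v3 classifier head.
# Tile size, background cutoff, and training settings are illustrative assumptions.
import numpy as np
import openslide
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

TILE_SIZE = 512          # pixels per tile side (assumed)
BACKGROUND_CUTOFF = 220  # mean intensity above which a tile is treated as blank (assumed)

def extract_tiles(wsi_path, level=0):
    """Yield analysis-ready RGB tiles from a whole slide image, skipping background."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.level_dimensions[level]
    for x in range(0, width - TILE_SIZE, TILE_SIZE):
        for y in range(0, height - TILE_SIZE, TILE_SIZE):
            tile = slide.read_region((x, y), level, (TILE_SIZE, TILE_SIZE)).convert("RGB")
            arr = np.asarray(tile)
            if arr.mean() < BACKGROUND_CUTOFF:   # keep tissue-rich tiles only
                yield arr

def build_classifier(num_classes=2):
    """Inception v3 backbone with a small head for a BRAF-like vs RAS-like label."""
    base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(TILE_SIZE, TILE_SIZE, 3))
    outputs = layers.Dense(num_classes, activation="softmax")(base.output)
    model = models.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```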
Results: The newly developed pipeline performed as well as the expert-curated image classifier. The best model achieved area under the curve (AUC) values of 0.86 (range 0.847 to 0.872) on the validation subset and 0.865 (range 0.854 to 0.876) on the final testing subset. It correctly classified 90% of tumors in the validation set and 84.2% in the final testing set. Furthermore, the predictions of the new classifier correlated strongly with those of the expert-curated classifier (Spearman rho = 0.726, p = 5.28e-08) and also correlated with the molecular expression-based BRAF-RAS score (BRS) classifier (Spearman rho = 0.418, p = 1.92e-13).
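For context, the AUC and Spearman statistics reported above correspond to standard scikit-learn/SciPy computations. A minimal evaluation sketch, using placeholder arrays rather than the study's actual per-slide predictions, might look like this:

```python
# Minimal evaluation sketch; y_true, y_score, and the two classifier score vectors
# are placeholders, not data from the study.
import numpy as np
from sklearn.metrics import roc_auc_score
from scipy.stats import spearmanr

y_true = np.array([0, 1, 1, 0, 1])             # ground-truth mutation group per slide
y_score = np.array([0.2, 0.9, 0.7, 0.4, 0.8])  # predicted probability of the positive class
auc = roc_auc_score(y_true, y_score)           # analogous to the reported 0.86 / 0.865 AUCs

wsi_scores = np.array([0.1, 0.8, 0.6, 0.3, 0.9])        # new WSI-based classifier scores
curated_scores = np.array([0.2, 0.7, 0.5, 0.4, 0.95])   # expert-curated classifier scores
rho, p_value = spearmanr(wsi_scores, curated_scores)     # analogous to rho = 0.726 in the paper

print(f"AUC = {auc:.3f}, Spearman rho = {rho:.3f} (p = {p_value:.2g})")
```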
Conclusions: Using WSIs, we implemented an automated workflow with a deep CNN model that accurately classifies driver mutations in PTCs.
About the journal:
Frontiers in Endocrinology is a field journal of the "Frontiers in" journal series.
In today's world, endocrinology is becoming increasingly important as it underlies many of the challenges societies face, from obesity and diabetes to reproduction, population control, and aging. Endocrinology covers a broad field, from basic molecular and cellular communication through to clinical care and some of the most crucial public health issues. The journal thus welcomes outstanding contributions in any domain of endocrinology.
Frontiers in Endocrinology publishes articles on the most outstanding discoveries across a wide research spectrum of Endocrinology. The mission of Frontiers in Endocrinology is to bring all relevant Endocrinology areas together on a single platform.