The Development and Evaluation of a Convolutional Neural Network for Cutaneous Melanoma Detection in Whole Slide Images

Emily L Clarke, Derek Magee, Julia Newton-Bishop, William Merchant, Robert Insall, Nigel G Maher, Richard A Scolyer, Grace Farnworth, Anisah Ali, Sally O'Shea, Darren Treanor

Archives of Pathology & Laboratory Medicine, published 2024-12-12. DOI: 10.5858/arpa.2024-0094-OA
Abstract
Context.—: The current melanoma staging system does not account for 26% of the variance seen in melanoma-specific survival; therefore, our ability to predict patient outcomes remains incomplete. Morphology may carry greater prognostic significance in melanoma than in other solid tumors, with Breslow thickness remaining the strongest prognostic indicator despite being subject to high levels of interobserver variation. Applying convolutional neural networks to whole slide images affords objective morphologic metrics, which may reveal new insights into patient prognosis.
Objective.—: To develop and evaluate a convolutional neural network for detecting invasive cutaneous melanoma in whole slide images, enabling the generation of objective prognostic biomarkers based on tumor morphology.
Design.—: One thousand sixty-eight whole slide images containing cutaneous melanoma from 5 data sets were used in the initial development and evaluation of the convolutional neural network. A 2-class tumor segmentation network with a fully convolutional architecture was trained using sparse annotations. The network was evaluated against manual annotation at per-pixel and per-tumor levels, and its consistency across 3 scanning platforms was assessed.
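The abstract does not specify the exact architecture or training code, so the following is only a minimal sketch of the general approach it describes: a small fully convolutional 2-class segmentation network trained on whole-slide-image tiles with sparse annotations, where unlabeled pixels are excluded from the loss (here via `ignore_index` in PyTorch's cross-entropy). All layer sizes and names are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch (not the paper's architecture): a tiny fully convolutional
# network producing per-pixel tumor/non-tumor logits, trained with sparse labels.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional network: RGB tile in, per-pixel class logits out."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),  # logits for non-tumor / tumor
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyFCN()
# Sparse annotations: only some pixels are labeled (0 = non-tumor, 1 = tumor);
# everything else is set to -1 and ignored by the loss.
criterion = nn.CrossEntropyLoss(ignore_index=-1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

tiles = torch.rand(4, 3, 256, 256)                       # stand-in for WSI tiles
labels = torch.full((4, 256, 256), -1, dtype=torch.long)  # unlabeled by default
labels[:, 100:150, 100:150] = 1                           # sparse tumor annotation
labels[:, 0:30, 0:30] = 0                                 # sparse background annotation

logits = model(tiles)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```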
Results.—: The convolutional neural network located conventional invasive cutaneous melanoma tissue with an average per-pixel sensitivity of 97.59% and specificity of 99.86% across the 5 test sets. There were high levels of concordance between the tumor dimensions generated by the model and those obtained by manual annotation, and between the tumor dimensions generated by the model across the 3 scanning platforms.
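For clarity, per-pixel sensitivity and specificity here are the standard confusion-matrix rates computed over all pixels of a slide. The helper below is an assumed illustration (not code from the paper) showing how such metrics can be computed from a predicted tumor mask and a manually annotated ground-truth mask.

```python
# Hypothetical metric helper: per-pixel sensitivity and specificity of a
# predicted tumor mask against a manually annotated reference mask.
import numpy as np

def per_pixel_sens_spec(pred: np.ndarray, truth: np.ndarray):
    """pred, truth: boolean arrays where True marks invasive melanoma pixels."""
    tp = np.logical_and(pred, truth).sum()      # tumor correctly detected
    fn = np.logical_and(~pred, truth).sum()     # tumor missed
    tn = np.logical_and(~pred, ~truth).sum()    # background correctly rejected
    fp = np.logical_and(pred, ~truth).sum()     # background flagged as tumor
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Toy example on a 4x4 mask pair
pred = np.array([[1, 1, 0, 0]] * 4, dtype=bool)
truth = np.array([[1, 0, 0, 0]] * 4, dtype=bool)
print(per_pixel_sens_spec(pred, truth))  # (1.0, 0.666...)
```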
Conclusions.—: We have developed a convolutional neural network that accurately detects conventional invasive cutaneous melanoma in whole slide images from multiple data sources. Future work should assess the use of this network to generate metrics for survival prediction.