The effect of spatial resolution on deep learning classification of lung cancer histopathology
Mitchell Wiebe, Christina Haston, Michael Lamey, Apurva Narayan, Rasika Rajapakshe
BJR Open, published 2023-08-15. DOI: 10.1259/bjro.20230008
Abstract
Objective: The microscopic analysis of biopsied lung nodules represents the gold standard for definitive diagnosis of lung cancer. Deep learning has achieved pathologist-level classification of non-small cell lung cancer histopathology images at high resolutions (0.5-2 µm/px), and recent studies have revealed tomography-histology relationships at lower spatial resolutions. Thus, we tested whether patterns for histological classification of lung cancer could be detected at spatial resolutions such as those offered by ultra-high-resolution CT.
Methods: We investigated the performance of a deep convolutional neural network (Inception-v3) in classifying lung histopathology images at spatial resolutions lower than those typically used in pathology. Models were trained on 2167 histopathology slides from The Cancer Genome Atlas to differentiate between lung cancer tissues, adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC), and normal dense tissue. Slides were accessed at 2.5× magnification (4 µm/px), and reduced resolutions of 8, 16, 32, 64, and 128 µm/px were simulated by applying digital low-pass filters.
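To illustrate this kind of resolution-reduction and classification setup, the following is a minimal sketch, assuming Pillow and torchvision are available; the study's exact low-pass filter kernel, training pipeline, and file names are not specified here, so the helper names and the tile path are hypothetical.

import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

BASE_UM_PER_PX = 4.0  # tiles accessed at 2.5x magnification (4 µm/px), per the abstract

def simulate_resolution(img: Image.Image, target_um_per_px: float) -> Image.Image:
    """Low-pass an image to mimic a coarser spatial resolution.

    Downsampling with an anti-aliasing (Lanczos) resampler acts as a digital
    low-pass filter; upsampling back to the original grid keeps the network
    input size constant while discarding high-frequency detail.
    """
    factor = target_um_per_px / BASE_UM_PER_PX  # e.g. 128 / 4 = 32x reduction
    small = img.resize((max(1, round(img.width / factor)),
                        max(1, round(img.height / factor))),
                       resample=Image.LANCZOS)
    return small.resize(img.size, resample=Image.BILINEAR)

# Inception-v3 adapted to three output classes (LUAD, LUSC, normal).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 3)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 3)

preprocess = transforms.Compose([
    transforms.Resize((299, 299)),  # Inception-v3 expects 299x299 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Example: classify one tile at a simulated 128 µm/px resolution.
tile = Image.open("tile.png").convert("RGB")  # hypothetical tile file
x = preprocess(simulate_resolution(tile, 128.0)).unsqueeze(0)
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)  # [P(LUAD), P(LUSC), P(normal)]

The key design point in this sketch is that resolution is degraded before resizing to the network's fixed input size, so the only information removed is the high-frequency content above the target resolution.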
Results: The classifier achieved area under the curve ≥0.95 for all classes at spatial resolutions of 4-16 µm/px, and area under the curve ≥0.95 for differentiating normal tissue from the two cancer types at 128 µm/px.
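Per-class area under the ROC curve in a three-class problem is conventionally computed one-vs-rest; the snippet below is a brief illustration of that metric, assuming scikit-learn, and uses made-up toy labels and probabilities rather than the study's evaluation data.

import numpy as np
from sklearn.metrics import roc_auc_score

classes = ["LUAD", "LUSC", "normal"]
# y_true: integer labels in {0, 1, 2}; y_prob: (n_samples, 3) softmax outputs (toy values).
y_true = np.array([0, 1, 2, 0, 2, 1])
y_prob = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.1, 0.1, 0.8],
                   [0.6, 0.3, 0.1],
                   [0.2, 0.2, 0.6],
                   [0.3, 0.6, 0.1]])

for i, name in enumerate(classes):
    # One-vs-rest AUC: class i treated as positive, the other two as negative.
    auc = roc_auc_score((y_true == i).astype(int), y_prob[:, i])
    print(f"AUC ({name} vs rest): {auc:.2f}")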
Conclusions: Features for tissue classification by deep learning exist at spatial resolutions below what is typically viewed by pathologists.
Advances in knowledge: We demonstrated that a deep convolutional network could differentiate normal from cancerous lung tissue at spatial resolutions as low as 128 µm/px, and could distinguish LUAD, LUSC, and normal tissue at resolutions as low as 16 µm/px. Our data, together with the results of tomography-histology studies, indicate that these patterns should also be detectable within tomographic data at these resolutions.