Artificial intelligence recognition of pathological T stage and tumor invasion in rectal cancer based on large panoramic pathological sections

Intelligent Medicine, 2022, 2(3): 141-151 · DOI: 10.1016/j.imed.2022.03.004 · IF 4.4, Q1 (Computer Science, Interdisciplinary Applications)
Yiheng Ju, Longbo Zheng, Peng Zhao, Fangjie Xin, Fengjiao Wang, Yuan Gao, Xianxiang Zhang, Dongsheng Wang, Yun Lu

Abstract

Background

The incidence of colorectal cancer is increasing worldwide, and it currently ranks third among all cancers; meanwhile, pathological diagnosis is becoming increasingly demanding. Artificial intelligence has demonstrated the ability to mine image features thoroughly and assist doctors in making decisions, and large panoramic pathological sections contain a considerable amount of pathological information. In this study, we used large panoramic pathological sections to establish a deep learning model that assists pathologists in identifying cancerous areas on whole-slide images of rectal cancer and supports T staging and prognostic analysis.
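In practice, whole-slide images are far too large to process in one pass, so pipelines of this kind typically tile each slide into patches before segmentation. The sketch below shows one plausible way to do this with the OpenSlide library; the file name, tile size, pyramid level, and background threshold are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of tiling a whole-slide image into patches for segmentation.
# Assumptions (not from the paper): file name, 512x512 tiles, pyramid level 1,
# and a simple near-white threshold for discarding background.
import openslide
import numpy as np

TILE_SIZE = 512   # patch edge length in pixels (assumed)
LEVEL = 1         # slide pyramid level to read from (assumed)

def tile_slide(path):
    """Yield (x, y, RGB array) tiles covering the slide at the chosen level."""
    slide = openslide.OpenSlide(path)
    width, height = slide.level_dimensions[LEVEL]
    downsample = slide.level_downsamples[LEVEL]
    for y in range(0, height, TILE_SIZE):
        for x in range(0, width, TILE_SIZE):
            # read_region expects level-0 coordinates for its location argument
            region = slide.read_region(
                (int(x * downsample), int(y * downsample)),
                LEVEL,
                (TILE_SIZE, TILE_SIZE),
            )
            tile = np.asarray(region.convert("RGB"))
            # keep only tiles that are not mostly white background
            if tile.mean() < 235:
                yield x, y, tile
    slide.close()

if __name__ == "__main__":
    for x, y, tile in tile_slide("slide_001.svs"):  # hypothetical file name
        print(x, y, tile.shape)
```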

Methods

We collected 126 cases of primary rectal cancer from the Affiliated Hospital of Qingdao University, West Coast Hospital District (internal dataset) and 42 cases from the Shinan and Laoshan Hospital Districts (external dataset), all of whom underwent surgical resection between January and September 2019. After sectioning, staining, and scanning, a total of 2350 hematoxylin-eosin-stained whole-slide images were obtained. Patients in the internal dataset were randomly divided into a training cohort (n = 88) and a test cohort (n = 38) at a ratio of 7:3. We chose DeepLabV3+ and ResNet50 as the target models for our experiment. We used the Dice similarity coefficient, accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC) to evaluate the performance of the artificial intelligence platform on the test set and validation set. Finally, we followed up the patients and examined their prognosis and short-term survival to corroborate the value of T-staging investigations.
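As a rough illustration of the model and metric choices named above, the sketch below instantiates a DeepLabV3 segmentation network with a ResNet-50 backbone from torchvision and computes a Dice similarity coefficient for a predicted mask. The two-class setup, patch size, and placeholder tensors are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch, assuming a binary tumor-vs-background segmentation task
# and the torchvision >= 0.13 weights API.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# DeepLabV3 with a ResNet-50 backbone; two output classes (background, tumor) is an assumption.
model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)
model.eval()

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + true.sum() + eps))

with torch.no_grad():
    patch = torch.rand(1, 3, 512, 512)        # one RGB patch (assumed size)
    logits = model(patch)["out"]              # shape: (1, 2, 512, 512)
    pred_mask = logits.argmax(dim=1)          # predicted class per pixel
    true_mask = torch.zeros_like(pred_mask)   # placeholder ground-truth mask
    print("Dice:", dice_coefficient(pred_mask == 1, true_mask == 1))
```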

Results

In the test set, the accuracy of image segmentation was 95.8%, the Dice coefficient was 0.92, the accuracy of automatic T-staging recognition was 86%, and the ROC AUC was 0.93. In the validation set, the accuracy of image segmentation was 95.3%, the Dice coefficient was 0.90, the accuracy of automatic classification was 85%, the ROC AUC was 0.92, and the image analysis time was 0.2 s. At follow-up, survival differed when local recurrence or distant metastasis was taken as the outcome. Univariate analysis showed that T stage, N stage, preoperative carcinoembryonic antigen (CEA) level, and tumor location were risk factors for postoperative recurrence or metastasis in patients with rectal cancer. When these factors were entered into a multivariate analysis, only preoperative CEA level and N stage remained significant.
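The abstract does not state which regression underlies the univariate and multivariate risk-factor analysis; a common choice for recurrence- or metastasis-free survival is a Cox proportional hazards model, sketched below with the lifelines library. The data frame, column names, and penalizer are hypothetical and only illustrate the workflow.

```python
# Minimal sketch, assuming a Cox proportional hazards model for
# recurrence/metastasis-free survival; the paper does not specify the exact method.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up table: one row per patient (toy values, not study data).
df = pd.DataFrame({
    "months_to_event": [24, 18, 30, 12, 27, 9],     # follow-up time in months
    "recurrence_or_metastasis": [0, 1, 0, 1, 0, 1],  # 1 = event observed
    "T_stage": [2, 3, 2, 4, 3, 4],
    "N_stage": [0, 1, 0, 2, 1, 2],
    "preop_CEA": [3.1, 8.4, 2.2, 15.0, 6.7, 20.3],   # ng/mL
    "low_tumor_location": [0, 1, 0, 1, 1, 1],        # e.g. low rectal tumor yes/no
})

# Univariate screening: fit one candidate risk factor at a time.
for covariate in ["T_stage", "N_stage", "preop_CEA", "low_tumor_location"]:
    cph = CoxPHFitter(penalizer=0.1)  # small ridge penalty stabilizes the toy fit
    cph.fit(df[["months_to_event", "recurrence_or_metastasis", covariate]],
            duration_col="months_to_event", event_col="recurrence_or_metastasis")
    print(covariate, "p =", float(cph.summary.loc[covariate, "p"]))

# Multivariate model with all candidate risk factors entered together.
cph_multi = CoxPHFitter(penalizer=0.1)
cph_multi.fit(df, duration_col="months_to_event", event_col="recurrence_or_metastasis")
cph_multi.print_summary()
```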

Conclusion

The deep convolutional neural networks we established can assist clinicians in judging T stage and improve diagnostic efficiency. Using large panoramic pathological sections enables better assessment of tumor status and more accurate pathological diagnoses, and therefore has clinical application value.
