A2HTL: An Automated Hybrid Transformer-Based Learning for Predicting Survival of Esophageal Cancer Using CT Images

IF 3.7 | Biology, Zone 4 | Q1 BIOCHEMICAL RESEARCH METHODS
Hailin Yue;Jin Liu;Lina Zhao;Hulin Kuang;Jianhong Cheng;Junjian Li;Mengshen He;Jie Gong;Jianxin Wang
{"title":"A2HTL:利用CT图像预测食管癌存活率的基于混合变压器的自动学习方法","authors":"Hailin Yue;Jin Liu;Lina Zhao;Hulin Kuang;Jianhong Cheng;Junjian Li;Mengshen He;Jie Gong;Jianxin Wang","doi":"10.1109/TNB.2024.3441533","DOIUrl":null,"url":null,"abstract":"Esophageal cancer is a common malignant tumor, precisely predicting survival of esophageal cancer is crucial for personalized treatment. However, current region of interest (ROI) based methodologies not only necessitate prior medical knowledge for tumor delineation, but may also cause the model to be overly sensitive to ROI. To address these challenges, we develop an automated Hybrid Transformer based learning that integrates a Hybrid Transformer size-aware U-Net with a ranked survival prediction network to enable automatic survival prediction for esophageal cancer. Specifically, we first incorporate the Transformer with shifted windowing multi-head self-attention mechanism (SW-MSA) into the base of the U-Net encoder to capture the long-range dependency in CT images. Furthermore, to alleviate the imbalance between the ROI and the background in CT images, we devise a size-aware coefficient for the segmentation loss. Finally, we also design a ranked pair sorting loss to more comprehensively capture the ranked information inherent in CT images. We evaluate our proposed method on a dataset comprising 759 samples with esophageal cancer. Experimental results demonstrate the superior performance of our proposed method in survival prediction, even without ROI ground truth.","PeriodicalId":13264,"journal":{"name":"IEEE Transactions on NanoBioscience","volume":null,"pages":null},"PeriodicalIF":3.7000,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A2HTL: An Automated Hybrid Transformer-Based Learning for Predicting Survival of Esophageal Cancer Using CT Images\",\"authors\":\"Hailin Yue;Jin Liu;Lina Zhao;Hulin Kuang;Jianhong Cheng;Junjian Li;Mengshen He;Jie Gong;Jianxin Wang\",\"doi\":\"10.1109/TNB.2024.3441533\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Esophageal cancer is a common malignant tumor, precisely predicting survival of esophageal cancer is crucial for personalized treatment. However, current region of interest (ROI) based methodologies not only necessitate prior medical knowledge for tumor delineation, but may also cause the model to be overly sensitive to ROI. To address these challenges, we develop an automated Hybrid Transformer based learning that integrates a Hybrid Transformer size-aware U-Net with a ranked survival prediction network to enable automatic survival prediction for esophageal cancer. Specifically, we first incorporate the Transformer with shifted windowing multi-head self-attention mechanism (SW-MSA) into the base of the U-Net encoder to capture the long-range dependency in CT images. Furthermore, to alleviate the imbalance between the ROI and the background in CT images, we devise a size-aware coefficient for the segmentation loss. Finally, we also design a ranked pair sorting loss to more comprehensively capture the ranked information inherent in CT images. We evaluate our proposed method on a dataset comprising 759 samples with esophageal cancer. 
Experimental results demonstrate the superior performance of our proposed method in survival prediction, even without ROI ground truth.\",\"PeriodicalId\":13264,\"journal\":{\"name\":\"IEEE Transactions on NanoBioscience\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on NanoBioscience\",\"FirstCategoryId\":\"99\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10633746/\",\"RegionNum\":4,\"RegionCategory\":\"生物学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BIOCHEMICAL RESEARCH METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on NanoBioscience","FirstCategoryId":"99","ListUrlMain":"https://ieeexplore.ieee.org/document/10633746/","RegionNum":4,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0

Abstract

Esophageal cancer is a common malignant tumor, and precisely predicting the survival of esophageal cancer patients is crucial for personalized treatment. However, current region of interest (ROI) based methodologies not only necessitate prior medical knowledge for tumor delineation, but may also cause the model to be overly sensitive to the ROI. To address these challenges, we develop an automated Hybrid Transformer-based learning framework that integrates a Hybrid Transformer size-aware U-Net with a ranked survival prediction network to enable automatic survival prediction for esophageal cancer. Specifically, we first incorporate a Transformer with the shifted window multi-head self-attention mechanism (SW-MSA) into the base of the U-Net encoder to capture long-range dependencies in CT images. Furthermore, to alleviate the imbalance between the ROI and the background in CT images, we devise a size-aware coefficient for the segmentation loss. Finally, we also design a ranked pair sorting loss to more comprehensively capture the ranking information inherent in CT images. We evaluate our proposed method on a dataset comprising 759 esophageal cancer samples. Experimental results demonstrate the superior performance of our proposed method in survival prediction, even without ROI ground truth.
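The abstract names two loss components, a size-aware coefficient on the segmentation loss and a ranked pair sorting loss for survival prediction, without giving their formulas. The sketch below is a minimal PyTorch illustration of how such components are commonly realized; the function names `size_aware_seg_loss` and `ranked_pair_loss`, the inverse-ROI-fraction weighting, and the hinge margin are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch of the two loss ideas described in the abstract.
# The exact A2HTL formulations are not given here; the weighting scheme,
# margin, and function names below are assumptions for illustration only.
import torch
import torch.nn.functional as F


def size_aware_seg_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Dice-style segmentation loss scaled by a size-aware coefficient.

    logits: (B, 1, D, H, W) raw predictions; target: same shape, binary ROI mask.
    Small ROIs (tumors occupying a tiny fraction of the CT volume) receive a
    larger weight so the loss is not dominated by the background class.
    """
    target = target.float()
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, target.dim()))

    # Fraction of voxels belonging to the ROI, per sample.
    roi_fraction = target.mean(dim=dims)                      # (B,)
    # Assumed size-aware coefficient: inverse ROI fraction, clamped for stability.
    size_coeff = 1.0 / roi_fraction.clamp(min=1e-3)
    size_coeff = size_coeff / size_coeff.mean()               # normalize around 1

    intersection = (probs * target).sum(dim=dims)
    dice = (2.0 * intersection + 1.0) / (probs.sum(dim=dims) + target.sum(dim=dims) + 1.0)
    return (size_coeff * (1.0 - dice)).mean()


def ranked_pair_loss(risk: torch.Tensor, time: torch.Tensor,
                     event: torch.Tensor, margin: float = 0.1) -> torch.Tensor:
    """Pairwise ranking loss over comparable survival pairs.

    risk:  (B,) predicted risk scores (higher = shorter expected survival).
    time:  (B,) observed survival or censoring times.
    event: (B,) 1 if the event (death) was observed, 0 if censored.
    For every comparable pair (i experienced the event before j's observation),
    the model is pushed to rank risk_i above risk_j by at least `margin`.
    """
    t_i, t_j = time.unsqueeze(1), time.unsqueeze(0)
    comparable = (t_i < t_j) & (event.unsqueeze(1) > 0)       # (B, B) valid pairs
    diff = risk.unsqueeze(0) - risk.unsqueeze(1)              # risk_j - risk_i
    losses = F.relu(margin + diff)                            # hinge on wrong ordering
    if comparable.any():
        return losses[comparable].mean()
    return risk.sum() * 0.0                                   # no comparable pairs in batch
```

In a full pipeline these terms would be weighted against each other and attached to the segmentation and survival heads of the hybrid Transformer U-Net; the SW-MSA encoder itself is not sketched here.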
Source journal
IEEE Transactions on NanoBioscience (Engineering & Technology - Nanoscience & Nanotechnology)
CiteScore: 7.00
Self-citation rate: 5.10%
Articles per year: 197
Review time: >12 weeks
Journal description: The IEEE Transactions on NanoBioscience reports on original, innovative and interdisciplinary work on all aspects of molecular systems, cellular systems, and tissues (including molecular electronics). Topics covered in the journal focus on a broad spectrum of aspects, both on foundations and on applications. Specifically, methods and techniques, experimental aspects, design and implementation, instrumentation and laboratory equipment, clinical aspects, hardware and software data acquisition and analysis and computer-based modelling are covered (based on traditional or high performance computing - parallel computers or computer networks).