{"title":"TCF-Net: A Hierarchical Transformer Convolution Fusion Network for Prostate Cancer Segmentation in Transrectal Ultrasound Images.","authors":"Xu Lu, Qihao Zhou, Zhiwei Xiao, Yanqi Guo, Qianhong Peng, Shen Zhao, Shaopeng Liu, Jun Huang, Chuan Yang, Yuan Yuan","doi":"10.1007/s10278-025-01690-5","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate prostate segmentation from transrectal ultrasound (TRUS) images is the key to the computer-aided diagnosis of prostate cancer. However, this task faces serious challenges, including various interferences, variational prostate shapes, and insufficient datasets. To address these challenges, a region-adaptive transformer convolution fusion net (TCF-Net) for accurate and robust segmentation of TRUS images is proposed. As a high-performance segmentation network, the TCF-Net contains a hierarchical encoder-decoder structure with two main modules: (1) a region-adaptive transformer-based encoder to identify and localize prostate regions, which learns the relationship between objects and pixels. It assists the model in overcoming various interferences and prostate shape variations. (2) A convolution-based decoder to improve the applicability to small datasets. Besides, a patch-based fusion module is also proposed to introduce an inductive bias for fine prostate segmentation. TCF-Net is trained and evaluated on a challenging clinical TRUS image dataset collected from the First Affiliated Hospital of Jinan University in China. The dataset contains 1000 TRUS images of 135 patients. Experimental results show that the mIoU of TCF-Net is 94.4%, which exceeds other state-of-the-art (SOTA) models by more than 1%.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of imaging informatics in medicine","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/s10278-025-01690-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Accurate prostate segmentation from transrectal ultrasound (TRUS) images is key to the computer-aided diagnosis of prostate cancer. However, the task faces serious challenges, including diverse imaging interference, variable prostate shapes, and limited datasets. To address these challenges, a region-adaptive transformer convolution fusion network (TCF-Net) for accurate and robust segmentation of TRUS images is proposed. TCF-Net is a high-performance segmentation network built on a hierarchical encoder-decoder structure with two main modules: (1) a region-adaptive transformer-based encoder that identifies and localizes prostate regions by learning the relationships between objects and pixels, helping the model overcome imaging interference and prostate shape variation; and (2) a convolution-based decoder that improves applicability to small datasets. In addition, a patch-based fusion module is proposed to introduce an inductive bias for fine prostate segmentation. TCF-Net is trained and evaluated on a challenging clinical TRUS dataset collected from the First Affiliated Hospital of Jinan University in China, containing 1000 TRUS images from 135 patients. Experimental results show that TCF-Net achieves an mIoU of 94.4%, exceeding other state-of-the-art (SOTA) models by more than 1%.
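The abstract does not give implementation details, but the general pattern it describes, a hierarchical transformer encoder feeding a convolution-based decoder through skip connections, can be illustrated with a minimal PyTorch sketch. All module names, stage depths, and channel widths below are assumptions for illustration only; this is not the authors' TCF-Net, and the region-adaptive attention and patch-based fusion module are not reproduced here.

```python
# Minimal sketch of a hierarchical transformer-encoder / convolutional-decoder
# segmentation network. All names, depths, and channel widths are illustrative
# assumptions, not the published TCF-Net implementation.
import torch
import torch.nn as nn


class TransformerStage(nn.Module):
    """One encoder stage: strided-conv downsampling + transformer layers."""

    def __init__(self, in_ch, out_ch, num_layers=2, num_heads=4):
        super().__init__()
        self.downsample = nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2)
        layer = nn.TransformerEncoderLayer(
            d_model=out_ch, nhead=num_heads, dim_feedforward=out_ch * 4,
            batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):
        x = self.downsample(x)                 # B, C, H, W
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # B, H*W, C
        tokens = self.blocks(tokens)           # global attention over the stage
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class ConvDecoderStage(nn.Module):
    """One decoder stage: upsample, fuse the skip connection, refine with convs."""

    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)
        return self.fuse(torch.cat([x, skip], dim=1))


class HierarchicalTransConvSeg(nn.Module):
    """Transformer encoder + convolutional decoder with skip connections."""

    def __init__(self, in_ch=1, num_classes=2, widths=(32, 64, 128, 256)):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, widths[0], 3, padding=1)
        self.enc1 = TransformerStage(widths[0], widths[1])
        self.enc2 = TransformerStage(widths[1], widths[2])
        self.enc3 = TransformerStage(widths[2], widths[3])
        self.dec2 = ConvDecoderStage(widths[3], widths[2], widths[2])
        self.dec1 = ConvDecoderStage(widths[2], widths[1], widths[1])
        self.dec0 = ConvDecoderStage(widths[1], widths[0], widths[0])
        self.head = nn.Conv2d(widths[0], num_classes, kernel_size=1)

    def forward(self, x):
        s0 = self.stem(x)   # full resolution
        s1 = self.enc1(s0)  # 1/2 resolution
        s2 = self.enc2(s1)  # 1/4 resolution
        s3 = self.enc3(s2)  # 1/8 resolution
        d2 = self.dec2(s3, s2)
        d1 = self.dec1(d2, s1)
        d0 = self.dec0(d1, s0)
        return self.head(d0)  # per-pixel class logits


if __name__ == "__main__":
    model = HierarchicalTransConvSeg()
    logits = model(torch.randn(1, 1, 128, 128))  # single-channel TRUS-like input
    print(logits.shape)  # torch.Size([1, 2, 128, 128])
```

The transformer stages model long-range context (useful for localizing the prostate region despite interference), while the convolutional decoder and skip connections supply the local inductive bias that helps when training data are limited, which is the general motivation the abstract gives for combining the two.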
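The reported metric, mIoU (mean intersection over union), averages the per-class IoU over the segmentation classes. A minimal NumPy sketch of the standard definition follows; the two-class (background/prostate) layout and the synthetic masks are assumptions for illustration, not the paper's evaluation protocol.

```python
# Standard mean IoU over class labels; not necessarily the exact evaluation
# protocol used in the paper.
import numpy as np


def mean_iou(pred, target, num_classes=2, eps=1e-7):
    """pred, target: integer label maps of identical shape."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append((intersection + eps) / (union + eps))
    return float(np.mean(ious))


if __name__ == "__main__":
    pred = np.zeros((128, 128), dtype=np.int64)
    target = np.zeros((128, 128), dtype=np.int64)
    pred[32:96, 32:96] = 1      # hypothetical predicted prostate region
    target[40:100, 40:100] = 1  # hypothetical ground-truth prostate region
    print(f"mIoU: {mean_iou(pred, target):.3f}")
```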