Yin Gu, Huimin Guo, Jiahao Zhang, Yuhua Gao, Yuexian Li, Ming Cui, Wei Qian, He Ma
DOI: 10.1016/j.compmedimag.2025.102571
Journal: Computerized Medical Imaging and Graphics, Volume 124, Article 102571 (JCR Q1, Engineering, Biomedical)
Published: 2025-05-22
MFFUNet: A hybrid model with cross-attention-guided multi-feature fusion for automated segmentation of organs at risk in cervical cancer brachytherapy
Brachytherapy is a common treatment option for cervical cancer. An important step in brachytherapy is the delineation of organs at risk (OARs) based on computed tomography (CT) images. Automating OAR segmentation in brachytherapy both reduces the time required for and improves the quality of radiation therapy planning. This paper introduces a novel segmentation model, MFFUNet, for the automatic contour delineation of OARs in cervical cancer brachytherapy. The proposed model employs a staged encoder–decoder structure that integrates the self-attention mechanism of the Transformer with a CNN framework. A novel multi-features fusion (MFF) block with a cross-attention-guided feature fusion mechanism is also proposed, which efficiently extracts and cross-fuses features from multiple receptive fields, enriching the semantic information of the features and thus improving performance on complex segmentation tasks. A private CT image dataset of 95 patients with cervical cancer undergoing brachytherapy is used to evaluate the segmentation performance of the proposed method. The OARs in the data consist of the bladder, rectum, and colon surrounding the cervix. The proposed model surpasses current mainstream OAR segmentation models in segmentation accuracy: the mean Dice similarity coefficient (DSC) across all three OARs reached 73.69%, with per-organ DSC scores of 92.65% for the bladder, 66.55% for the rectum, and 61.86% for the colon. We also conducted experiments on two common public thoracoabdominal multi-organ CT datasets, where the strong segmentation performance further demonstrates the generalization ability of our model. In conclusion, MFFUNet has demonstrated outstanding effectiveness in segmenting OARs for cervical cancer brachytherapy. By accurately delineating OARs, it enhances radiotherapy planning precision and helps reduce radiation toxicity, improving patient outcomes.
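The Dice similarity coefficient reported above is a standard overlap metric for segmentation; a minimal NumPy sketch of how such per-organ scores are typically computed is shown below. This is an illustrative implementation of the standard DSC formula, not the authors' evaluation code, and the smoothing term `eps` is an assumed convention to avoid division by zero on empty masks.

```python
import numpy as np

def dice_score(pred, gt, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|); 1.0 means perfect overlap,
    0.0 means no overlap. `eps` guards against division by zero.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection) / (pred.sum() + gt.sum() + eps)

# Example: a 4-voxel prediction vs. ground truth with 1 overlapping voxel
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # ~0.667 = 2*1/(2+1)
```

A per-dataset mean DSC, like the 73.69% reported above, would then be the average of this score over the bladder, rectum, and colon masks for each patient.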
Journal overview:
The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.