{"title":"PET-CT弥漫性大b细胞淋巴瘤的混合注意融合分割网络","authors":"Shun Chen, Ang Li, Jianxin Chen, Xuguang Zhang, Chong Jiang, Jingyan Xu","doi":"10.1109/WCSP55476.2022.10039141","DOIUrl":null,"url":null,"abstract":"Diffuse large B-cell lymphoma (DLBCL) is a type of lymphoma with a high incidence in Asia. Positron emission tomography-computed tomography (PET-CT) is usually used as the evaluation means for DLBCL. Theoretically, the effective combination of the PET and CT can display the shape, size and location of the tumor. In practice, manual lesion segmentation of PET-CT is time-consuming. Hence, in this work, we design a hybrid attention fusion segmentation network (HAFS-Net) for automatic segmentation task. Most works only pay attention to extract information from multi-modal images, but ignore the potential correlations between them, which is ineffective for segmentation. In contrast, our network combines hybrid attention mechanism and PET-CT feature fusion module, which can fully mine correlations between multi-modal information. Specifically, the hybrid attention exploits the tumor region enhancement properties of PET to guide segmentation on CT. And the irrelevant noise regions on CT which interfere with the segmentation will be suppressed. In PET-CT feature fusion module, the supervision information (attention fusion map on PET-CT) is efficiently applied to assist segmentation. Extensive experiments demonstrate that the proposed framework can effectively complete the task of multi-modal medical image segmentation.","PeriodicalId":199421,"journal":{"name":"2022 14th International Conference on Wireless Communications and Signal Processing (WCSP)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Hybrid Attention Fusion Segmentation Network for Diffuse Large B-cell Lymphoma in PET-CT\",\"authors\":\"Shun Chen, Ang Li, Jianxin Chen, Xuguang Zhang, Chong Jiang, Jingyan Xu\",\"doi\":\"10.1109/WCSP55476.2022.10039141\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Diffuse large B-cell lymphoma (DLBCL) is a type of lymphoma with a high incidence in Asia. Positron emission tomography-computed tomography (PET-CT) is usually used as the evaluation means for DLBCL. Theoretically, the effective combination of the PET and CT can display the shape, size and location of the tumor. In practice, manual lesion segmentation of PET-CT is time-consuming. Hence, in this work, we design a hybrid attention fusion segmentation network (HAFS-Net) for automatic segmentation task. Most works only pay attention to extract information from multi-modal images, but ignore the potential correlations between them, which is ineffective for segmentation. In contrast, our network combines hybrid attention mechanism and PET-CT feature fusion module, which can fully mine correlations between multi-modal information. Specifically, the hybrid attention exploits the tumor region enhancement properties of PET to guide segmentation on CT. And the irrelevant noise regions on CT which interfere with the segmentation will be suppressed. In PET-CT feature fusion module, the supervision information (attention fusion map on PET-CT) is efficiently applied to assist segmentation. 
Extensive experiments demonstrate that the proposed framework can effectively complete the task of multi-modal medical image segmentation.\",\"PeriodicalId\":199421,\"journal\":{\"name\":\"2022 14th International Conference on Wireless Communications and Signal Processing (WCSP)\",\"volume\":\"45 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 14th International Conference on Wireless Communications and Signal Processing (WCSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/WCSP55476.2022.10039141\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 14th International Conference on Wireless Communications and Signal Processing (WCSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WCSP55476.2022.10039141","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Hybrid Attention Fusion Segmentation Network for Diffuse Large B-cell Lymphoma in PET-CT
Diffuse large B-cell lymphoma (DLBCL) is a type of lymphoma with a high incidence in Asia. Positron emission tomography-computed tomography (PET-CT) is commonly used to evaluate DLBCL. In principle, effectively combining PET and CT can reveal the shape, size, and location of a tumor; in practice, however, manual lesion segmentation on PET-CT is time-consuming. Hence, in this work we design a hybrid attention fusion segmentation network (HAFS-Net) for automatic segmentation. Most existing works focus only on extracting information from the individual multi-modal images and ignore the potential correlations between modalities, which limits segmentation performance. In contrast, our network combines a hybrid attention mechanism with a PET-CT feature fusion module, allowing it to fully exploit the correlations among multi-modal information. Specifically, the hybrid attention exploits the tumor-region enhancement property of PET to guide segmentation on CT, suppressing the irrelevant noisy regions on CT that would otherwise interfere with segmentation. In the PET-CT feature fusion module, the supervision information (the attention fusion map on PET-CT) is efficiently applied to assist segmentation. Extensive experiments demonstrate that the proposed framework can effectively perform multi-modal medical image segmentation.
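The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of one plausible reading of the PET-guided hybrid attention and feature-fusion idea: a PET-derived spatial attention map gates the CT features, and the gated CT features are then fused with the PET features. The module name, channel sizes, and the sigmoid gating are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumption, not the authors' implementation) of PET-guided
# attention over CT features followed by a simple two-modality fusion.
import torch
import torch.nn as nn


class PETGuidedAttentionFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv + sigmoid turns PET features into a spatial attention map;
        # PET tends to highlight metabolically active tumor regions.
        self.pet_attention = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Fuse the attention-gated CT features with the PET features.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, ct_feat: torch.Tensor, pet_feat: torch.Tensor) -> torch.Tensor:
        attn = self.pet_attention(pet_feat)      # (N, 1, H, W), values in [0, 1]
        ct_gated = ct_feat * attn                # suppress CT regions PET marks as irrelevant
        return self.fuse(torch.cat([ct_gated, pet_feat], dim=1))


if __name__ == "__main__":
    block = PETGuidedAttentionFusion(channels=64)
    ct = torch.randn(1, 64, 128, 128)
    pet = torch.randn(1, 64, 128, 128)
    print(block(ct, pet).shape)  # torch.Size([1, 64, 128, 128])
```

A block like this would sit inside an encoder-decoder segmentation backbone; how HAFS-Net actually derives its attention fusion map and applies it as supervision is specified only in the full paper.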