Tao Zhou, Yanan Zhao, Huiling Lu, Yaxing Wang, Lijia Zhi
{"title":"[基于密集交互式特征融合 Mask RCNN 的肺 PET /CT 图像实例分割]。","authors":"Tao Zhou, Yanan Zhao, Huiling Lu, Yaxing Wang, Lijia Zhi","doi":"10.7507/1001-5515.202309026","DOIUrl":null,"url":null,"abstract":"<p><p>There are some problems in positron emission tomography/ computed tomography (PET/CT) lung images, such as little information of feature pixels in lesion regions, complex and diverse shapes, and blurred boundaries between lesions and surrounding tissues, which lead to inadequate extraction of tumor lesion features by the model. To solve the above problems, this paper proposes a dense interactive feature fusion Mask RCNN (DIF-Mask RCNN) model. Firstly, a feature extraction network with cross-scale backbone and auxiliary structures was designed to extract the features of lesions at different scales. Then, a dense interactive feature enhancement network was designed to enhance the lesion detail information in the deep feature map by interactively fusing the shallowest lesion features with neighboring features and current features in the form of dense connections. Finally, a dense interactive feature fusion feature pyramid network (FPN) network was constructed, and the shallow information was added to the deep features one by one in the bottom-up path with dense connections to further enhance the model's perception of weak features in the lesion region. The ablation and comparison experiments were conducted on the clinical PET/CT lung image dataset. The results showed that the APdet, APseg, APdet_s and APseg_s indexes of the proposed model were 67.16%, 68.12%, 34.97% and 37.68%, respectively. Compared with Mask RCNN (ResNet50), APdet and APseg indexes increased by 7.11% and 5.14%, respectively. DIF-Mask RCNN model can effectively detect and segment tumor lesions. It provides important reference value and evaluation basis for computer-aided diagnosis of lung cancer.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"41 3","pages":"527-534"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11208663/pdf/","citationCount":"0","resultStr":"{\"title\":\"[Pulmonary PET /CT image instance segmentation based on dense interactive feature fusion Mask RCNN].\",\"authors\":\"Tao Zhou, Yanan Zhao, Huiling Lu, Yaxing Wang, Lijia Zhi\",\"doi\":\"10.7507/1001-5515.202309026\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>There are some problems in positron emission tomography/ computed tomography (PET/CT) lung images, such as little information of feature pixels in lesion regions, complex and diverse shapes, and blurred boundaries between lesions and surrounding tissues, which lead to inadequate extraction of tumor lesion features by the model. To solve the above problems, this paper proposes a dense interactive feature fusion Mask RCNN (DIF-Mask RCNN) model. Firstly, a feature extraction network with cross-scale backbone and auxiliary structures was designed to extract the features of lesions at different scales. Then, a dense interactive feature enhancement network was designed to enhance the lesion detail information in the deep feature map by interactively fusing the shallowest lesion features with neighboring features and current features in the form of dense connections. 
Finally, a dense interactive feature fusion feature pyramid network (FPN) network was constructed, and the shallow information was added to the deep features one by one in the bottom-up path with dense connections to further enhance the model's perception of weak features in the lesion region. The ablation and comparison experiments were conducted on the clinical PET/CT lung image dataset. The results showed that the APdet, APseg, APdet_s and APseg_s indexes of the proposed model were 67.16%, 68.12%, 34.97% and 37.68%, respectively. Compared with Mask RCNN (ResNet50), APdet and APseg indexes increased by 7.11% and 5.14%, respectively. DIF-Mask RCNN model can effectively detect and segment tumor lesions. It provides important reference value and evaluation basis for computer-aided diagnosis of lung cancer.</p>\",\"PeriodicalId\":39324,\"journal\":{\"name\":\"生物医学工程学杂志\",\"volume\":\"41 3\",\"pages\":\"527-534\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11208663/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"生物医学工程学杂志\",\"FirstCategoryId\":\"1087\",\"ListUrlMain\":\"https://doi.org/10.7507/1001-5515.202309026\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Medicine\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"生物医学工程学杂志","FirstCategoryId":"1087","ListUrlMain":"https://doi.org/10.7507/1001-5515.202309026","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Medicine","Score":null,"Total":0}
[Pulmonary PET/CT image instance segmentation based on dense interactive feature fusion Mask RCNN].
Positron emission tomography/computed tomography (PET/CT) lung images pose several challenges: lesion regions contain limited feature-pixel information, lesion shapes are complex and diverse, and the boundaries between lesions and surrounding tissues are blurred, so models extract tumor lesion features inadequately. To address these problems, this paper proposes a dense interactive feature fusion Mask RCNN (DIF-Mask RCNN) model. First, a feature extraction network with a cross-scale backbone and auxiliary structures was designed to extract lesion features at different scales. Then, a dense interactive feature enhancement network was designed to enhance lesion detail in the deep feature maps by interactively fusing the shallowest lesion features with neighboring and current features through dense connections. Finally, a dense interactive feature fusion feature pyramid network (FPN) was constructed, in which shallow information is added level by level to the deep features along a densely connected bottom-up path, further strengthening the model's perception of weak features in the lesion region. Ablation and comparison experiments were conducted on a clinical PET/CT lung image dataset. The proposed model achieved APdet, APseg, APdet_s and APseg_s of 67.16%, 68.12%, 34.97% and 37.68%, respectively; compared with Mask RCNN (ResNet50), APdet and APseg increased by 7.11% and 5.14%, respectively. The DIF-Mask RCNN model can effectively detect and segment tumor lesions, providing a useful reference and evaluation basis for computer-aided diagnosis of lung cancer.
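The densely connected bottom-up fusion described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the number of pyramid levels, the channel width, the pooling used to match spatial resolutions, and the 1x1 fusion convolutions are all assumptions made for illustration only.

```python
# Minimal sketch (assumed, not the paper's code) of dense bottom-up fusion:
# every shallower pyramid level is fed into each deeper level through dense
# connections, so fine lesion detail from shallow maps reaches deep features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBottomUpFusion(nn.Module):
    """Adds every shallower pyramid level to each deeper level (dense links)."""

    def __init__(self, num_levels: int = 4, channels: int = 256):
        super().__init__()
        # One 1x1 conv per deeper level to merge the densely summed features
        # (fusion operator is an assumption for this sketch).
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=1)
            for _ in range(num_levels - 1)
        )

    def forward(self, pyramid: list[torch.Tensor]) -> list[torch.Tensor]:
        # pyramid[0] is the shallowest (highest-resolution) level.
        outputs = [pyramid[0]]
        for i in range(1, len(pyramid)):
            fused = pyramid[i]
            # Dense connections: every already-fused shallower output is
            # downsampled to this level's resolution and added in.
            for shallow in outputs:
                fused = fused + F.adaptive_avg_pool2d(shallow, fused.shape[-2:])
            outputs.append(self.fuse[i - 1](fused))
        return outputs


if __name__ == "__main__":
    # Toy pyramid with strides 4/8/16/32 on a 256x256 input, 256 channels each.
    feats = [torch.randn(1, 256, 256 // s, 256 // s) for s in (4, 8, 16, 32)]
    for f in DenseBottomUpFusion()(feats):
        print(f.shape)
```

In the proposed model, a fusion step of this kind would sit on top of the cross-scale backbone and FPN outputs before the Mask RCNN detection and mask heads; the sketch only shows how dense connections propagate shallow detail into deeper levels.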