Accurate tumor segmentation in FDG-PET images with guidance of complementary CT images
Authors: C. Lian, S. Ruan, T. Denoeux, Yu Guo, P. Vera
Published in: 2017 IEEE International Conference on Image Processing (ICIP), 2017-09-17
DOI: 10.1109/ICIP.2017.8297123
Citations: 4
Abstract
While hybrid PET/CT scanners are becoming a standard imaging technique in clinical oncology, many existing methods still segment tumors in a single modality, without considering the complementary information available from the other modality. In this paper, we propose an unsupervised 3-D method for automatically segmenting tumors in PET images, in which anatomical knowledge from CT images is included as critical guidance to improve PET segmentation accuracy. To this end, a specific context term is proposed that iteratively quantifies the conflict between the PET and CT segmentations. In addition, to characterize image voxels comprehensively for reliable segmentation, informative image features are selected via an unsupervised metric-learning strategy. The proposed method is based on the theory of belief functions, a powerful tool for information fusion and reasoning under uncertainty. Its performance has been evaluated on real-patient PET/CT images.
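The abstract names the theory of belief functions as the fusion framework. As a rough, hedged illustration of how evidence from two modalities can be fused and their conflict quantified in that framework (not the paper's actual context term or segmentation model), the sketch below applies Dempster's rule of combination to two hypothetical mass functions over the frame {tumor, background} — one standing in for PET evidence at a voxel, one for CT evidence. The conflict mass `k` computed during combination is the kind of PET/CT disagreement measure the paper's context term builds on; all variable names and mass values here are invented for the example.

```python
from itertools import product


def dempster_combine(m1, m2):
    """Fuse two mass functions over the same frame with Dempster's rule.

    Masses are dicts mapping frozenset focal elements to mass values.
    Returns the normalized combined mass function and the conflict k,
    i.e. the total mass assigned to contradictory (empty) intersections.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:  # compatible evidence: accumulate on the intersection
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:      # contradictory evidence: counts toward the conflict k
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Renormalize by 1 - k so the fused masses sum to one.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict


T, B = "tumor", "background"
# Hypothetical per-voxel evidence; frozenset({T, B}) carries the
# ignorance mass (the source cannot tell the two classes apart).
m_pet = {frozenset({T}): 0.6, frozenset({B}): 0.1, frozenset({T, B}): 0.3}
m_ct = {frozenset({T}): 0.5, frozenset({B}): 0.2, frozenset({T, B}): 0.3}

fused, k = dempster_combine(m_pet, m_ct)
```

When PET and CT agree, the fused mass on the shared hypothesis grows and `k` stays small; a large `k` flags voxels where the two modalities disagree, which is where an iterative context term like the paper's would intervene.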