{"title":"头颈部口咽癌的CT- PET分割及危险器官的检测","authors":"Maria Khan, Syed Fahad Tahir","doi":"10.1109/ICAI58407.2023.10136671","DOIUrl":null,"url":null,"abstract":"The detection of oropharynx cancer is very important. There are various applications of image segmentation in the medical field, such as locating the tumour, study of different anatomical structures, segmenting the object of interest etc. The segmentation of cancer is time consuming, and it requires a lot of human effort. The automated segmentation of cancer solves this problem. The goal of the research is to provide a deep learning method of segmenting an oropharynx cancer in 3D CT-PET images and find the organs at risk. The main challenge in the segmentation is that the organs are very dense or may overlap each other because, most of the organs share same intensity levels with the other surrounding tissues. We use the combination of CT -PET images to solve this problem because, the images provide the information both anatomical and metabolic about tumors. We used U-Net as our base model for the segmentation of tumour. The 3D Inception module is used at the encoder side and the 3D Res-Net module is used at the decoder side. The 3D squeeze and excitation (SE) module is also inserted in each encoder block of the model. We used a depth wise convolutional layer in both 3D Res-Net module and 3D Inception module. We used the precision, recall and Dice Similarity Coefficient (DSC) as our performance metrics and achieved precision 0.84849, recall 0.6225 and Dice Similarity Coefficient (DSC) 0.7183 and compared the results with the state of art. Our main contribution is finding the distance from the centre of the organs (nasal cavity, oral cavity, nasopharynx, brain stem, spinal cord, hypopharynx, larynx) to the oropharynx tumour. On the base of the minimum distance among all organs, we assume that organ will be at risk.","PeriodicalId":161809,"journal":{"name":"2023 3rd International Conference on Artificial Intelligence (ICAI)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Segmentation of oropharynx cancer in head and neck and detection of the organ at risk by using CT- PET images\",\"authors\":\"Maria Khan, Syed Fahad Tahir\",\"doi\":\"10.1109/ICAI58407.2023.10136671\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The detection of oropharynx cancer is very important. There are various applications of image segmentation in the medical field, such as locating the tumour, study of different anatomical structures, segmenting the object of interest etc. The segmentation of cancer is time consuming, and it requires a lot of human effort. The automated segmentation of cancer solves this problem. The goal of the research is to provide a deep learning method of segmenting an oropharynx cancer in 3D CT-PET images and find the organs at risk. The main challenge in the segmentation is that the organs are very dense or may overlap each other because, most of the organs share same intensity levels with the other surrounding tissues. We use the combination of CT -PET images to solve this problem because, the images provide the information both anatomical and metabolic about tumors. We used U-Net as our base model for the segmentation of tumour. The 3D Inception module is used at the encoder side and the 3D Res-Net module is used at the decoder side. 
The 3D squeeze and excitation (SE) module is also inserted in each encoder block of the model. We used a depth wise convolutional layer in both 3D Res-Net module and 3D Inception module. We used the precision, recall and Dice Similarity Coefficient (DSC) as our performance metrics and achieved precision 0.84849, recall 0.6225 and Dice Similarity Coefficient (DSC) 0.7183 and compared the results with the state of art. Our main contribution is finding the distance from the centre of the organs (nasal cavity, oral cavity, nasopharynx, brain stem, spinal cord, hypopharynx, larynx) to the oropharynx tumour. On the base of the minimum distance among all organs, we assume that organ will be at risk.\",\"PeriodicalId\":161809,\"journal\":{\"name\":\"2023 3rd International Conference on Artificial Intelligence (ICAI)\",\"volume\":\"114 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-02-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 3rd International Conference on Artificial Intelligence (ICAI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICAI58407.2023.10136671\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 3rd International Conference on Artificial Intelligence (ICAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAI58407.2023.10136671","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Segmentation of oropharynx cancer in head and neck and detection of the organ at risk by using CT- PET images
The detection of oropharynx cancer is clinically important. Image segmentation has many applications in the medical field, such as locating tumours, studying anatomical structures, and isolating objects of interest. Manual segmentation of cancer is time-consuming and labour-intensive; automated segmentation addresses this problem. The goal of this research is to provide a deep-learning method for segmenting oropharynx cancer in 3D CT-PET images and identifying the organs at risk. The main challenge in segmentation is that the organs are densely packed and may overlap, because most organs share similar intensity levels with the surrounding tissues. We use combined CT-PET images to address this, since together they provide both anatomical and metabolic information about the tumour. We use U-Net as the base model for tumour segmentation: a 3D Inception module on the encoder side, a 3D Res-Net module on the decoder side, and a 3D squeeze-and-excitation (SE) module inserted in each encoder block. Depthwise convolutional layers are used in both the 3D Res-Net and 3D Inception modules. Using precision, recall, and the Dice Similarity Coefficient (DSC) as performance metrics, we achieve a precision of 0.84849, a recall of 0.6225, and a DSC of 0.7183, and compare the results with the state of the art. Our main contribution is computing the distance from the centre of each organ (nasal cavity, oral cavity, nasopharynx, brain stem, spinal cord, hypopharynx, larynx) to the oropharynx tumour; the organ with the minimum distance among all organs is assumed to be at risk.
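A generic 3D squeeze-and-excitation (SE) block, of the kind the abstract says is inserted in each encoder block, can be sketched in PyTorch as follows. The reduction ratio and layer choices are common defaults and are assumptions, not parameters reported in the paper.

```python
import torch
import torch.nn as nn

class SqueezeExcite3D(nn.Module):
    """Generic 3D squeeze-and-excitation block: global-average-pool the volume,
    pass the channel vector through a small bottleneck MLP, and rescale channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, D, H, W)
        n, c = x.shape[:2]
        w = x.mean(dim=(2, 3, 4))           # squeeze: per-channel statistics, shape (N, C)
        w = self.fc(w).view(n, c, 1, 1, 1)  # excitation: channel weights in (0, 1)
        return x * w                        # recalibrate feature channels
```

The abstract reports precision, recall, and the Dice Similarity Coefficient as performance metrics. A minimal sketch of how these are commonly computed on binary 3D segmentation masks is shown below; it illustrates the standard voxel-wise definitions, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Precision, recall and Dice Similarity Coefficient (DSC) for binary 3D masks.

    pred, gt: boolean or {0, 1} arrays of the same shape (D, H, W).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true-positive voxels
    fp = np.logical_and(pred, ~gt).sum()     # false-positive voxels
    fn = np.logical_and(~pred, gt).sum()     # false-negative voxels
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dsc = 2 * tp / (2 * tp + fp + fn + eps)  # 2|A∩B| / (|A| + |B|)
    return precision, recall, dsc
```

The stated main contribution is to measure the distance from the centre of each organ to the oropharynx tumour and to flag the organ with the minimum distance as the organ at risk. A hedged sketch of that rule follows, assuming binary masks for the tumour and each organ and a simple centroid (centre-of-mass) distance; the function names, the voxel-spacing parameter, and the use of centroids are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def centroid(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Centre of mass of a binary 3D mask, scaled by the voxel spacing (mm)."""
    coords = np.argwhere(mask)                       # (N, 3) voxel indices
    return coords.mean(axis=0) * np.asarray(spacing)

def organ_at_risk(tumour_mask: np.ndarray,
                  organ_masks: dict,
                  spacing=(1.0, 1.0, 1.0)):
    """Return the organ whose centre lies closest to the tumour centre,
    together with all centre-to-centre distances."""
    tumour_centre = centroid(tumour_mask, spacing)
    distances = {
        name: float(np.linalg.norm(centroid(mask, spacing) - tumour_centre))
        for name, mask in organ_masks.items()
    }
    return min(distances, key=distances.get), distances

# Example: organ_masks maps names such as "brain stem" or "spinal cord"
# to binary masks of the same shape as tumour_mask.
```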