Pin-Hsiu Chen, Cheng-Hsien Huang, S. Hung, Liang-Cheng Chen, Hui-Ling Hsieh, W. Chiou, Moon-Sing Lee, Hon-Yi Lin, Wei-Min Liu
2020 International Symposium on Computer, Consumer and Control (IS3C), November 2020. DOI: 10.1109/IS3C50286.2020.00085
Attention-LSTM Fused U-Net Architecture for Organ Segmentation in CT Images
During the treatment planning stage of radiotherapy, medical physicists or physicians must delineate the contours of the tumor and the organs at risk so that sufficient radiation energy is delivered to the tumor while exposure of the surrounding normal tissue is minimized. Organ contouring is a time-consuming and laborious task, and an automatic contouring tool is needed to keep pace with the growing cancer population. In this work, we proposed a fusion model that combines the network characteristics of a sequential model (Sensor3D) and Attention U-Net: a convolutional LSTM layer learns the spatial correlation between adjacent slices in a CT series, while the attention mechanism suppresses irrelevant features in the complex image content and focuses on the informative regions of the target organs. The clinical data, acquired from a local hospital with IRB approval, contained CT image series of 108 patients. The proposed model segmented five types of organs: lung, liver, stomach, esophagus, and heart, with Dice Similarity Coefficients (DSC) of 99.27%, 95.48%, 88.19%, 80.81%, and 93.80%, respectively. We further developed a user interface that converts the AI-generated results into DICOM-RT format, so radiologists can fine-tune the results within the software they already use for routine manual delineation.
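The attention mechanism described above follows the additive attention gate of Attention U-Net, in which decoder features gate the encoder skip connection so that irrelevant regions are suppressed before fusion. A minimal NumPy sketch is shown below; the 1x1 convolutions are reduced to per-pixel linear maps, and all weights and shapes are illustrative assumptions, not the authors' trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(skip, gate, W_x, W_g, psi):
    """Additive attention gate (Attention U-Net style).

    skip: (H, W, C) encoder features from the skip connection
    gate: (H, W, C) gating signal from the decoder path
    W_x, W_g: (C, Ci) per-pixel linear maps standing in for 1x1 convs
    psi: (Ci, 1) projection to a scalar attention coefficient per pixel
    """
    q = np.maximum(skip @ W_x + gate @ W_g, 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(q @ psi)                      # (H, W, 1) attention map in (0, 1)
    return skip * alpha                           # attenuate irrelevant skip features

# Illustrative shapes: 8x8 feature maps with 4 channels, 2 intermediate channels
rng = np.random.default_rng(0)
H, W, C, Ci = 8, 8, 4, 2
out = attention_gate(rng.normal(size=(H, W, C)), rng.normal(size=(H, W, C)),
                     rng.normal(size=(C, Ci)), rng.normal(size=(C, Ci)),
                     rng.normal(size=(Ci, 1)))
print(out.shape)  # (8, 8, 4)
```

Because alpha lies in (0, 1), the gated output never amplifies the skip features; pixels the decoder deems irrelevant are pushed toward zero before concatenation with the upsampled path.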
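The reported accuracy figures are Dice Similarity Coefficients, the standard overlap metric for segmentation masks. A self-contained sketch of the computation on binary masks (illustrative data, not the paper's):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Example: a 2x2 predicted mask vs. a 2x3 ground-truth mask on a 4x4 grid
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True     # 4 pixels
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:4] = True   # 6 pixels
print(round(dice_coefficient(pred, truth), 3))  # 2*4 / (4+6) = 0.8
```

A DSC of 99.27% for lung versus 80.81% for esophagus reflects the usual pattern: large, high-contrast organs overlap almost perfectly, while thin, low-contrast structures are harder to delineate.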