Multi-Modal Multi-Stream UNET Model for Liver Segmentation
Hagar Louye Elghazy, M. Fakhr
2021 IEEE World AI IoT Congress (AIIoT), published 2021-05-10
DOI: 10.1109/AIIoT52608.2021.9454216
Citations: 2
Abstract
Computer segmentation of abdominal organs using CT and MRI images can benefit diagnosis, treatment, and workload management. In recent years, UNETs have been widely used in medical image segmentation because of their high accuracy. Most current UNET solutions rely on a single data modality. Recently, it has been shown that learning from more than one modality at a time can significantly enhance segmentation accuracy; however, most available multi-modal datasets are not large enough to train complex architectures. In this paper, we work with a small dataset and propose a multi-modal dual-stream UNET architecture that learns from unpaired MRI and CT image modalities to improve segmentation accuracy on each individual modality. We evaluated the proposed architecture on Task 1 of the CHAOS segmentation challenge. Results showed that multi-modal/multi-stream learning improved accuracy over single-modality learning, and that using a UNET in the dual stream was superior to using a standard FCN. A Dice score of 96.78 was achieved on CT images. To the best of our knowledge, this is one of the highest scores reported to date.
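The Dice score reported above is the standard overlap metric for segmentation masks: twice the intersection of the predicted and ground-truth masks divided by the sum of their sizes. As a minimal illustration (not the authors' evaluation code), it can be computed for binary masks with NumPy:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks.

    dice = 2 * |pred AND target| / (|pred| + |target|)
    Returns a value in [0, 1]; 1 means perfect overlap.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        # Both masks empty: treat as perfect agreement by convention.
        return 1.0
    return 2.0 * intersection / total

# Toy example: 1 overlapping voxel out of masks of size 2 and 1.
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.666...
```

Challenge leaderboards often report this value scaled to a percentage, which is how a score such as 96.78 on CT images should be read.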