{"title":"基于深度学习神经网络的MRI和PET图像融合","authors":"M. Muthiah, E. Logashamugam, B. V. Reddy","doi":"10.1109/ICPEDC47771.2019.9036665","DOIUrl":null,"url":null,"abstract":"Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) are the two oldest modalities for the detection of brain tumor. These two images provide complementary information. Physicians have to analyze both the images in order to make a decision. Rather than analyzing two different images, it would be better if these images are combined together as a single image. Image fusion refers to the process of combining two different images into a single image. In this research work, a novel feature based image fusion is performed on both MRI and PET images using Convolutional Neural Network (CNN) by extracting features. Features representing texture, shape, edges and other discontiuites are extracted and are then combined to form the output image. Signal to Noise Ratio (SNR) which provides the information present in the input image (<40 represents usefule information in the image) and entropy (entropy approaching one indicates more information) are used as objective measures. Entropy and SNR are higher for CNN based image fusion than that of Discrete Wavelet Transform (DWT). It implies that information from both the input images is available in the output image.","PeriodicalId":426923,"journal":{"name":"2019 2nd International Conference on Power and Embedded Drive Control (ICPEDC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Fusion of MRI and PET Images Using Deep Learning Neural Networks\",\"authors\":\"M. Muthiah, E. Logashamugam, B. V. 
Reddy\",\"doi\":\"10.1109/ICPEDC47771.2019.9036665\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) are the two oldest modalities for the detection of brain tumor. These two images provide complementary information. Physicians have to analyze both the images in order to make a decision. Rather than analyzing two different images, it would be better if these images are combined together as a single image. Image fusion refers to the process of combining two different images into a single image. In this research work, a novel feature based image fusion is performed on both MRI and PET images using Convolutional Neural Network (CNN) by extracting features. Features representing texture, shape, edges and other discontiuites are extracted and are then combined to form the output image. Signal to Noise Ratio (SNR) which provides the information present in the input image (<40 represents usefule information in the image) and entropy (entropy approaching one indicates more information) are used as objective measures. Entropy and SNR are higher for CNN based image fusion than that of Discrete Wavelet Transform (DWT). 
It implies that information from both the input images is available in the output image.\",\"PeriodicalId\":426923,\"journal\":{\"name\":\"2019 2nd International Conference on Power and Embedded Drive Control (ICPEDC)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 2nd International Conference on Power and Embedded Drive Control (ICPEDC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPEDC47771.2019.9036665\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 2nd International Conference on Power and Embedded Drive Control (ICPEDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPEDC47771.2019.9036665","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fusion of MRI and PET Images Using Deep Learning Neural Networks
Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) are two of the oldest modalities for the detection of brain tumors. The two images provide complementary information, so physicians must analyze both in order to make a decision. Rather than analyzing two different images, it is preferable to combine them into a single image. Image fusion refers to the process of combining two different images into one. In this research work, a novel feature-based fusion of MRI and PET images is performed using a Convolutional Neural Network (CNN) to extract features. Features representing texture, shape, edges, and other discontinuities are extracted and then combined to form the output image. Signal-to-Noise Ratio (SNR), which measures the information present in the input image (SNR < 40 indicates useful information in the image), and entropy (entropy approaching one indicates more information) are used as objective measures. Both entropy and SNR are higher for CNN-based image fusion than for Discrete Wavelet Transform (DWT) fusion, implying that information from both input images is preserved in the output image.
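The evaluation metrics described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the CNN feature-extraction stage is replaced here by a naive pixel-average fusion (`average_fuse` is a hypothetical stand-in), and the entropy is normalized by `log2(bins)` so that it lies in [0, 1], matching the abstract's "entropy approaching one" convention, which is an assumption about how the authors scale it.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of an 8-bit grayscale image, normalized to [0, 1].

    Normalization by log2(bins) is an assumption so that a uniform
    histogram (maximum information) gives an entropy of 1.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(bins))

def snr_db(fused, reference):
    """SNR in dB of a fused image, treating its deviation from a
    reference input as noise (one common convention; the paper's exact
    definition is not given in the abstract)."""
    ref = reference.astype(np.float64)
    noise = fused.astype(np.float64) - ref
    return float(10 * np.log10((ref ** 2).sum() / (noise ** 2).sum()))

def average_fuse(mri, pet):
    """Naive pixel-average fusion; a placeholder for the CNN
    feature-based fusion described in the paper."""
    return ((mri.astype(np.float64) + pet.astype(np.float64)) / 2).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for co-registered MRI and PET slices
    mri = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    pet = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    fused = average_fuse(mri, pet)
    print("entropy of fused image:", entropy(fused))
    print("SNR vs MRI input (dB):", snr_db(fused, mri))
```

In this sketch a higher entropy of the fused image suggests it retains more information from the two inputs, which is the comparison the paper uses to rank CNN-based fusion above DWT-based fusion.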