Segmentation of Infrared Breast Images Using MultiResUnet Neural Networks

Ange Lou, Shuyue Guan, Nada Kamona, M. Loew
{"title":"Segmentation of Infrared Breast Images Using MultiResUnet Neural Networks","authors":"Ange Lou, Shuyue Guan, Nada Kamona, M. Loew","doi":"10.1109/AIPR47015.2019.9316541","DOIUrl":null,"url":null,"abstract":"Breast cancer is the second leading cause of death for women in the U.S. Early detection of breast cancer is key to higher survival rates to breast cancer patients. We are investigating infrared (IR) thermography as a noninvasive adjunct to mammography for breast cancer screening. IR imaging is radiation-free, pain-free, and non-contact. Automatic segmentation of the breast area from the acquired full-size breast IR images will help limit the area for tumor search, as well as reduce the time and effort costs of manual hand segmentation. Autoencoder-like convolutional and deconvolutional neural networks (C-DCNN) had been applied to automatically segment the breast area in IR images in previous studies. In this study, we applied a state-of-the-art deep-learning segmentation model, MultiResUnet, which consists of an encoder part to capture features and a decoder part for precise localization. It was used to segment the breast area by using a set of breast IR images, collected in our clinical trials by imaging breast cancer patients and normal volunteers with our infrared camera (N2 Imager). The database we used has 450 images, acquired from 14 patients and 16 volunteers. We used a thresholding method to remove interference in the raw images and remapped them from the original 16-bit to 8-bit, and then cropped and segmented the 8-bit images manually. Experiments using leave-one-out cross-validation (LOOCV) and comparison with the ground-truth images by using Tanimoto similarity show that the average accuracy of MultiResUnet is 91.47%, which is about 2% higher than that of the autoencoder. 
MultiResUnet offers a better approach to segment breast IR images than our previous model.","PeriodicalId":167075,"journal":{"name":"2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":"1206 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR47015.2019.9316541","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Breast cancer is the second leading cause of death for women in the U.S. Early detection of breast cancer is key to higher survival rates for breast cancer patients. We are investigating infrared (IR) thermography as a noninvasive adjunct to mammography for breast cancer screening. IR imaging is radiation-free, pain-free, and non-contact. Automatic segmentation of the breast area from the acquired full-size breast IR images will help limit the area for tumor search and reduce the time and effort required for manual segmentation. Autoencoder-like convolutional and deconvolutional neural networks (C-DCNN) have been applied in previous studies to automatically segment the breast area in IR images. In this study, we applied a state-of-the-art deep-learning segmentation model, MultiResUnet, which consists of an encoder part to capture features and a decoder part for precise localization. We used it to segment the breast area in a set of breast IR images collected in our clinical trials by imaging breast cancer patients and normal volunteers with our infrared camera (N2 Imager). The database contains 450 images acquired from 14 patients and 16 volunteers. We used a thresholding method to remove interference from the raw images, remapped them from the original 16-bit to 8-bit, and then cropped and segmented the 8-bit images manually. Experiments using leave-one-out cross-validation (LOOCV) and comparison with the ground-truth images using the Tanimoto similarity show that the average accuracy of MultiResUnet is 91.47%, about 2% higher than that of the autoencoder. MultiResUnet thus offers a better approach to segmenting breast IR images than our previous model.
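The abstract mentions two concrete computational steps: thresholding and remapping the raw 16-bit IR images to 8-bit, and scoring segmentation masks against ground truth with the Tanimoto similarity (for binary masks, equivalent to the Jaccard index, |A∩B| / |A∪B|). A minimal NumPy sketch of both steps is below; the function names, the min–max rescaling scheme, and the `low_thresh` parameter are illustrative assumptions, since the paper does not specify the exact thresholding or remapping procedure.

```python
import numpy as np

def remap_16bit_to_8bit(img16, low_thresh=0):
    """Illustrative preprocessing: zero out values below a threshold
    (assumed interference-removal step), then min-max rescale the
    16-bit image to the 8-bit range."""
    img = img16.astype(np.float64)
    img[img < low_thresh] = 0.0
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: avoid division by zero
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)

def tanimoto(mask_a, mask_b):
    """Tanimoto (Jaccard) similarity between two binary masks:
    intersection size divided by union size."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return np.logical_and(a, b).sum() / union
```

A predicted mask identical to the ground truth scores 1.0; a mask covering half of a ground-truth region with no false positives outside it scores 0.5.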