Graphical User Interface for Medical Deep Learning - Application to Magnetic Resonance Imaging

Sebastian Milde, Annika Liebgott, Ziwei Wu, Wenyi Feng, Jiahuan Yang, Lukas Mauch, P. Martirosian, F. Bamberg, K. Nikolaou, S. Gatidis, F. Schick, Bin Yang, Thomas Kustner
{"title":"Graphical User Interface for Medical Deep Learning - Application to Magnetic Resonance Imaging","authors":"Sebastian Milde, Annika Liebgott, Ziwei Wu, Wenyi Feng, Jiahuan Yang, Lukas Mauch, P. Martirosian, F. Bamberg, K. Nikolaou, S. Gatidis, F. Schick, Bin Yang, Thomas Kustner","doi":"10.23919/APSIPA.2018.8659515","DOIUrl":null,"url":null,"abstract":"In clinical diagnostic, magnetic resonance imaging (MRI) is a valuable and versatile tool. The acquisition process is, however, susceptible to image distortions (artifacts) which may lead to degradation of image quality. Automated and reference-free localization and quantification of artifacts by employing convolutional neural networks (CNNs) is a promising way for early detection of artifacts. Training relies on high amount of expert labeled data which is a time-demanding process. Previous studies were based on global labels, i.e. a whole volume was automatically labeled as artifact-free or artifact-affected. However, artifact appearance is rather localized. We propose a local labeling which is conducted via a graphical user interface (GUI). Moreover, the GUI provides easy handling of data viewing, preprocessing (labeling, patching, data augmentation), network parametrization and training, data and network evaluation as well as deep visualization of the learned network content. The GUI is not limited to these features and will be extended in the future. The developed GUI is made publicly available and features a modular outline to target different applications of machine learning and deep learning, such as artifact detection, classification and segmentation.","PeriodicalId":287799,"journal":{"name":"2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/APSIPA.2018.8659515","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

In clinical diagnostics, magnetic resonance imaging (MRI) is a valuable and versatile tool. The acquisition process is, however, susceptible to image distortions (artifacts) which may degrade image quality. Automated, reference-free localization and quantification of artifacts using convolutional neural networks (CNNs) is a promising approach for early artifact detection. Training, however, relies on a large amount of expert-labeled data, whose acquisition is a time-consuming process. Previous studies were based on global labels, i.e. a whole volume was labeled as artifact-free or artifact-affected. However, artifact appearance is rather localized. We propose a local labeling which is conducted via a graphical user interface (GUI). Moreover, the GUI provides easy handling of data viewing, preprocessing (labeling, patching, data augmentation), network parametrization and training, data and network evaluation, as well as deep visualization of the learned network content. The GUI is not limited to these features and will be extended in the future. The developed GUI is made publicly available and features a modular design to target different applications of machine learning and deep learning, such as artifact detection, classification and segmentation.
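To make the local-labeling and patching idea from the abstract concrete, the following is a minimal sketch (not the authors' released GUI code) of how a voxel-wise artifact mask can be turned into locally labeled patches with a simple augmentation step. The function names, patch size, stride, and the 0.5 labeling threshold are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of local labeling: instead of one global label per MRI volume,
# each extracted patch inherits a label from a voxel-wise artifact mask.
# Assumption: volume and label_mask are NumPy arrays of identical shape (D, H, W).
import numpy as np

def extract_labeled_patches(volume, label_mask, patch_size=32, stride=16):
    """Slide a 2D window over each slice and label every patch locally.

    A patch is marked artifact-affected (label 1) if more than half of its
    voxels are flagged in the artifact mask, otherwise artifact-free (0).
    """
    patches, labels = [], []
    depth, height, width = volume.shape
    for z in range(depth):
        for y in range(0, height - patch_size + 1, stride):
            for x in range(0, width - patch_size + 1, stride):
                patch = volume[z, y:y + patch_size, x:x + patch_size]
                mask = label_mask[z, y:y + patch_size, x:x + patch_size]
                patches.append(patch)
                labels.append(int(mask.mean() > 0.5))
    return np.stack(patches), np.asarray(labels)

def augment(patches):
    """Simple data augmentation: append horizontally flipped copies."""
    return np.concatenate([patches, patches[:, :, ::-1]], axis=0)

if __name__ == "__main__":
    # Toy example: random "volume" with an artifact mask covering one corner.
    vol = np.random.rand(4, 128, 128).astype(np.float32)
    mask = np.zeros_like(vol)
    mask[:, :48, :48] = 1.0
    X, y = extract_labeled_patches(vol, mask)
    X_aug = augment(X)
    print(X.shape, y.shape, X_aug.shape)
```

The resulting patch/label pairs would then feed a standard CNN classifier; the paper's GUI wraps this kind of preprocessing together with network parametrization, training, and evaluation.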