Towards Architecture-Insensitive Untrained Network Priors for Accelerated MRI Reconstruction

Yilin Liu, Yunkui Pang, Jiang Li, Yong Chen, Pew-Thian Yap
arXiv:2312.09988 · arXiv - EE - Image and Video Processing · Published 2023-12-15 · Citations: 0

Abstract

Untrained neural networks pioneered by Deep Image Prior (DIP) have recently enabled MRI reconstruction without requiring fully-sampled measurements for training. Their success is widely attributed to the implicit regularization induced by suitable network architectures. However, the lack of understanding of such architectural priors results in superfluous design choices and sub-optimal outcomes. This work aims to simplify the architectural design decisions for DIP-MRI to facilitate its practical deployment. We observe that certain architectural components are more prone to causing overfitting regardless of the number of parameters, incurring severe reconstruction artifacts by hindering accurate extrapolation on the un-acquired measurements. We interpret this phenomenon from a frequency perspective and find that the architectural characteristics favoring low frequencies, i.e., deep and narrow with unlearnt upsampling, can lead to enhanced generalization and hence better reconstruction. Building on this insight, we propose two architecture-agnostic remedies: one to constrain the frequency range of the white-noise input and the other to penalize the Lipschitz constants of the network. We demonstrate that even with just one extra line of code on the input, the performance gap between the ill-designed models and the high-performing ones can be closed. These results signify that for the first time, architectural biases on untrained MRI reconstruction can be mitigated without architectural modifications.
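The abstract does not spell out how the "one extra line of code on the input" is implemented. As a hypothetical sketch only — assuming the frequency constraint amounts to low-pass filtering the fixed white-noise input before it enters the DIP network (the filter choice, `sigma`, and tensor shapes below are all assumptions, not the paper's stated method) — the idea could look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# In a standard DIP setup, a fixed random input z is mapped by the network
# to the reconstructed image. The remedy sketched here (an assumption, not
# the paper's exact recipe) constrains z's frequency range by low-pass
# filtering it, suppressing high-frequency content that can drive
# overfitting to the acquired k-space samples.

rng = np.random.default_rng(0)
z = rng.standard_normal((1, 32, 128, 128))  # white-noise input (B, C, H, W)

# Hypothetical "one extra line": Gaussian low-pass over the spatial dims only.
z_lp = gaussian_filter(z, sigma=(0, 0, 2.0, 2.0))

def hf_energy(x):
    """Spectral energy outside a central low-frequency box."""
    F = np.fft.fftshift(np.fft.fft2(x), axes=(-2, -1))
    h, w = x.shape[-2:]
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w), dtype=bool)
    mask[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8] = False
    return float(np.sum(np.abs(F[..., mask]) ** 2))

# The filtered input carries far less high-frequency energy than raw noise.
assert hf_energy(z_lp) < hf_energy(z)
```

The second remedy, penalizing the network's Lipschitz constants, would by contrast be added to the training loss rather than the input; the abstract gives no further detail, so no implementation is assumed here.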