Towards Architecture-Insensitive Untrained Network Priors for Accelerated MRI Reconstruction
Yilin Liu, Yunkui Pang, Jiang Li, Yong Chen, Pew-Thian Yap
arXiv:2312.09988 · arXiv - EE - Image and Video Processing · 2023-12-15
Untrained neural networks pioneered by Deep Image Prior (DIP) have recently
enabled MRI reconstruction without requiring fully-sampled measurements for
training. Their success is widely attributed to the implicit regularization
induced by suitable network architectures. However, the lack of understanding
of such architectural priors results in superfluous design choices and
sub-optimal outcomes. This work aims to simplify the architectural design
decisions for DIP-MRI to facilitate its practical deployment. We observe that
certain architectural components are more prone to overfitting
regardless of the number of parameters, incurring severe reconstruction
artifacts by hindering accurate extrapolation to the unacquired measurements.
We interpret this phenomenon from a frequency perspective and find that the
architectural characteristics favoring low frequencies, i.e., deep and narrow
with unlearnt upsampling, can lead to enhanced generalization and hence better
reconstruction. Building on this insight, we propose two architecture-agnostic
remedies: one to constrain the frequency range of the white-noise input and the
other to penalize the Lipschitz constants of the network. We demonstrate that
even with just one extra line of code on the input, the performance gap between
the ill-designed models and the high-performing ones can be closed. These
results signify that for the first time, architectural biases on untrained MRI
reconstruction can be mitigated without architectural modifications.
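The first remedy described above — constraining the frequency range of the white-noise input — can be sketched as a simple low-pass filter applied before the noise is fed to the untrained network. The following is a minimal illustration, not the paper's actual implementation: the function name `lowpass`, the radial FFT mask, and the `cutoff` value are all assumptions chosen for clarity; the paper only specifies that one extra operation on the input suffices.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((256, 256))  # white-noise input, as in Deep Image Prior

def lowpass(x, cutoff=0.1):
    """Zero out spatial frequencies above `cutoff` (fraction of the band).

    Hypothetical helper: a radial mask in the 2-D Fourier domain keeps only
    the low-frequency content of the noise, biasing the prior toward the
    low frequencies that the paper finds favorable for generalization.
    """
    F = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)        # distance from DC component
    mask = r <= cutoff * min(h, w) / 2          # circular low-pass mask
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

z_lp = lowpass(z)  # the "one extra line" applied to the network input
```

Filtering white noise this way removes its high-frequency energy while keeping the spatial dimensions unchanged, so the filtered tensor can be dropped into an existing DIP-MRI pipeline without any architectural modification.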