{"title":"MOST:通过持续学习优化多个下行流任务的磁共振重构","authors":"Hwihun Jeong, Se Young Chun, Jongho Lee","doi":"arxiv-2409.10394","DOIUrl":null,"url":null,"abstract":"Deep learning-based Magnetic Resonance (MR) reconstruction methods have\nfocused on generating high-quality images but they often overlook the impact on\ndownstream tasks (e.g., segmentation) that utilize the reconstructed images.\nCascading separately trained reconstruction network and downstream task network\nhas been shown to introduce performance degradation due to error propagation\nand domain gaps between training datasets. To mitigate this issue, downstream\ntask-oriented reconstruction optimization has been proposed for a single\ndownstream task. Expanding this optimization to multi-task scenarios is not\nstraightforward. In this work, we extended this optimization to sequentially\nintroduced multiple downstream tasks and demonstrated that a single MR\nreconstruction network can be optimized for multiple downstream tasks by\ndeploying continual learning (MOST). MOST integrated techniques from\nreplay-based continual learning and image-guided loss to overcome catastrophic\nforgetting. Comparative experiments demonstrated that MOST outperformed a\nreconstruction network without finetuning, a reconstruction network with\nna\\\"ive finetuning, and conventional continual learning methods. This\nadvancement empowers the application of a single MR reconstruction network for\nmultiple downstream tasks. The source code is available at:\nhttps://github.com/SNU-LIST/MOST","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning\",\"authors\":\"Hwihun Jeong, Se Young Chun, Jongho Lee\",\"doi\":\"arxiv-2409.10394\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning-based Magnetic Resonance (MR) reconstruction methods have\\nfocused on generating high-quality images but they often overlook the impact on\\ndownstream tasks (e.g., segmentation) that utilize the reconstructed images.\\nCascading separately trained reconstruction network and downstream task network\\nhas been shown to introduce performance degradation due to error propagation\\nand domain gaps between training datasets. To mitigate this issue, downstream\\ntask-oriented reconstruction optimization has been proposed for a single\\ndownstream task. Expanding this optimization to multi-task scenarios is not\\nstraightforward. In this work, we extended this optimization to sequentially\\nintroduced multiple downstream tasks and demonstrated that a single MR\\nreconstruction network can be optimized for multiple downstream tasks by\\ndeploying continual learning (MOST). MOST integrated techniques from\\nreplay-based continual learning and image-guided loss to overcome catastrophic\\nforgetting. Comparative experiments demonstrated that MOST outperformed a\\nreconstruction network without finetuning, a reconstruction network with\\nna\\\\\\\"ive finetuning, and conventional continual learning methods. This\\nadvancement empowers the application of a single MR reconstruction network for\\nmultiple downstream tasks. 
The source code is available at:\\nhttps://github.com/SNU-LIST/MOST\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10394\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning
Deep learning-based Magnetic Resonance (MR) reconstruction methods have focused on generating high-quality images, but they often overlook the impact on downstream tasks (e.g., segmentation) that utilize the reconstructed images. Cascading a separately trained reconstruction network and a downstream task network has been shown to introduce performance degradation due to error propagation and domain gaps between training datasets. To mitigate this issue, downstream task-oriented reconstruction optimization has been proposed for a single downstream task. Expanding this optimization to multi-task scenarios is not straightforward. In this work, we extended this optimization to sequentially introduced multiple downstream tasks and demonstrated that a single MR reconstruction network can be optimized for multiple downstream tasks by deploying continual learning (MOST). MOST integrated techniques from replay-based continual learning and an image-guided loss to overcome catastrophic forgetting. Comparative experiments demonstrated that MOST outperformed a reconstruction network without finetuning, a reconstruction network with naïve finetuning, and conventional continual learning methods. This advancement enables the application of a single MR reconstruction network to multiple downstream tasks. The source code is available at: https://github.com/SNU-LIST/MOST
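
To make the abstract's recipe concrete, the sketch below shows, in PyTorch, how a reconstruction network might be fine-tuned for one downstream task at a time while combining a task-oriented loss (backpropagated through a frozen downstream network), an image-guided fidelity loss, and a small replay buffer of earlier-task samples to counter catastrophic forgetting. This is a minimal illustration under assumed interfaces, not the authors' implementation (see the repository above); the names `recon_net`, `task_net`, `replay_buffer`, and `image_guided_weight` are hypothetical.

```python
# Hypothetical sketch of downstream-task-oriented fine-tuning with replay and
# an image-guided loss. Not the MOST codebase; interfaces are assumed.
import random
import torch
import torch.nn.functional as F


def finetune_for_task(recon_net, task_net, task_loader, replay_buffer,
                      image_guided_weight=1.0, lr=1e-4, steps=1000,
                      buffer_size=200):
    """Fine-tune recon_net for one downstream task while replaying a few
    stored samples from earlier tasks to reduce catastrophic forgetting."""
    recon_net.train()
    task_net.eval()  # the downstream network is kept frozen in this sketch
    optimizer = torch.optim.Adam(recon_net.parameters(), lr=lr)
    data_iter = iter(task_loader)

    for _ in range(steps):
        try:
            undersampled, fully_sampled, target = next(data_iter)
        except StopIteration:
            data_iter = iter(task_loader)
            undersampled, fully_sampled, target = next(data_iter)

        recon = recon_net(undersampled)

        # Task-oriented loss: errors are propagated through the frozen
        # downstream network (e.g., segmentation logits vs. label map).
        task_loss = F.cross_entropy(task_net(recon), target)

        # Image-guided loss: keep the reconstruction close to the reference
        # image so task-driven updates do not destroy image fidelity.
        image_loss = F.l1_loss(recon, fully_sampled)

        loss = task_loss + image_guided_weight * image_loss

        # Replay: revisit a stored sample from a previous task, if any.
        if replay_buffer:
            old_under, old_full = random.choice(replay_buffer)
            old_recon = recon_net(old_under)
            loss = loss + image_guided_weight * F.l1_loss(old_recon, old_full)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Keep a small number of current-task samples for future replay.
        if len(replay_buffer) < buffer_size:
            replay_buffer.append((undersampled.detach(),
                                  fully_sampled.detach()))

    return recon_net
```

In this sketch, calling `finetune_for_task` once per sequentially introduced downstream task, with the same `replay_buffer` carried across calls, mirrors the continual-learning setting the abstract describes: a single reconstruction network adapted task by task rather than retrained per task.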