Incremental Learning for a Flexible CAD System Design

Prathyusha Akundi, J. Sivaswamy
DOI: 10.1109/ISBI52829.2022.9761688
Published in: 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), pp. 1-4
Publication date: 2022-03-28
Citations: 1

Abstract

Deep neural networks suffer from Catastrophic Forgetting (CF) on old tasks when they are trained to learn new tasks sequentially, since the model's parameters shift to optimize for the new class. Alleviating CF is of interest to the Computer-Aided Diagnosis (CAD) systems community because it enables class-incremental learning (IL): learning new classes as and when new data/annotations become available and old data is no longer accessible. However, IL has not been explored much in CAD development. We propose a novel approach that ensures a model remembers the causal factors behind its decisions on old classes while incrementally learning new classes. We introduce a common auxiliary task during incremental training, whose hidden representations are shared across all the classification heads. Since the hidden representation is no longer task-specific, CF is significantly reduced. We demonstrate our approach by incrementally learning 5 different tasks on chest X-rays and compare the results with state-of-the-art regularization methods. Unlike standard regularization-based approaches, our approach performs consistently well in reducing CF across all tasks, with almost zero CF in most cases.
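The architectural idea in the abstract — one shared encoder, one classification head per incrementally learned task, and a common auxiliary head trained throughout so that the hidden representation never becomes task-specific — can be sketched in code. This is a minimal hypothetical structure; the class and weight names (`IncrementalCAD`, `W_enc`, `W_aux`) are ours for illustration and are not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class IncrementalCAD:
    """Sketch: shared encoder + per-task heads + a common auxiliary head.

    The auxiliary head reads the same hidden representation as every
    classification head and is trained on every incremental step, which
    is the mechanism the paper credits with keeping the representation
    shared (and hence reducing catastrophic forgetting).
    """

    def __init__(self, in_dim, hid_dim, aux_dim):
        self.W_enc = rng.normal(0, 0.1, (in_dim, hid_dim))   # shared encoder
        self.W_aux = rng.normal(0, 0.1, (hid_dim, aux_dim))  # common auxiliary head
        self.heads = []                                      # one head per task

    def add_task(self, n_classes):
        """Attach a new classification head when a new task arrives."""
        hid = self.W_enc.shape[1]
        self.heads.append(rng.normal(0, 0.1, (hid, n_classes)))

    def forward(self, x, task_id):
        h = np.maximum(0.0, x @ self.W_enc)    # shared hidden representation (ReLU)
        logits = h @ self.heads[task_id]       # task-specific prediction
        aux_out = h @ self.W_aux               # auxiliary prediction, shared across tasks
        return logits, aux_out
```

In an incremental-training loop, each step would optimize the current task's head together with the auxiliary objective, so the encoder is pulled toward a representation useful to all heads rather than only the newest one.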