Analysis of ensemble-combination strategies for improving inter-database generalization of deep-learning-based automatic sleep staging

Adriana Anido-Alonso, D. Álvarez-Estévez
{"title":"Analysis of ensemble-combination strategies for improving inter-database generalization of deep-learning-based automatic sleep staging","authors":"Adriana Anido-Alonso, D. Álvarez-Estévez","doi":"10.1109/BHI56158.2022.9926860","DOIUrl":null,"url":null,"abstract":"Deep learning has demonstrated its usefulness in reaching top-level performance on a number of application domains. However, the achievement of robust prediction capabilities on multi-database scenarios referring to a common task is still a broad of concern. The problem arises associated with different sources of variability modulating the respective database generative processes. Hence, even though great performance can be obtained during validation on a local (source) dataset, maintenance of prediction capabilities on external databases, or target domains, is usually problematic. Such scenario has been studied in the past by the authors in the context of inter-database generalization in the domain of sleep medicine. In this work we build up over past work and explore the use of different local deep-learning model's combination strategies to analyze their effects on the resulting inter-database generalization performance. More specifically, we investigate the use of three different ensemble combination strategies, namely max-voting, output averaging, and weighted Nelder-Mead output combination, and compare them to the more classical database-aggregation approach. We compare the performance resulting from each of these strategies using six independent, heterogeneous and open sleep staging databases. Based on the results of our experimentation we analyze and discuss the advantages and disadvantages of each of the examined approaches.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BHI56158.2022.9926860","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Deep learning has demonstrated its usefulness in reaching top-level performance in a number of application domains. However, achieving robust prediction capabilities in multi-database scenarios that refer to a common task is still a matter of broad concern. The problem is associated with the different sources of variability that modulate the respective database-generative processes. Hence, even though great performance can be obtained during validation on a local (source) dataset, maintaining prediction capabilities on external databases, or target domains, is usually problematic. Such a scenario has been studied in the past by the authors in the context of inter-database generalization in the domain of sleep medicine. In this work we build upon past work and explore different strategies for combining local deep-learning models, analyzing their effects on the resulting inter-database generalization performance. More specifically, we investigate three different ensemble-combination strategies, namely max-voting, output averaging, and weighted Nelder-Mead output combination, and compare them to the more classical database-aggregation approach. We compare the performance resulting from each of these strategies using six independent, heterogeneous, and open sleep-staging databases. Based on the results of our experiments, we analyze and discuss the advantages and disadvantages of each of the examined approaches.
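
The following Python sketch illustrates the three ensemble-combination strategies named in the abstract. It is not the authors' implementation: the function names, data shapes, and the use of plain validation accuracy as the Nelder-Mead objective are assumptions introduced here for illustration only; each local model is assumed to output per-epoch class probabilities over the sleep stages.

```python
# Minimal sketch (not the paper's code) of max-voting, output averaging, and a
# weighted Nelder-Mead output combination of several local sleep-staging models.
import numpy as np
from scipy.optimize import minimize

def max_voting(probs_per_model):
    """Hard vote: each local model votes its argmax class per sleep epoch."""
    # probs_per_model: array of shape (n_models, n_epochs, n_classes)
    votes = probs_per_model.argmax(axis=-1)                       # (n_models, n_epochs)
    n_classes = probs_per_model.shape[-1]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)  # (n_classes, n_epochs)
    return counts.argmax(axis=0)                                  # (n_epochs,)

def output_averaging(probs_per_model):
    """Soft vote: average the models' class probabilities, then take the argmax."""
    return probs_per_model.mean(axis=0).argmax(axis=-1)

def weighted_nelder_mead(probs_per_model, y_val):
    """Learn one weight per local model on held-out labels with Nelder-Mead,
    then combine outputs as a weighted average of class probabilities.
    The accuracy objective here is an assumption for illustration."""
    n_models = probs_per_model.shape[0]

    def neg_accuracy(w):
        w = np.abs(w)
        w = w / (w.sum() + 1e-12)                                 # convex combination
        combined = np.tensordot(w, probs_per_model, axes=1)       # (n_epochs, n_classes)
        return -(combined.argmax(axis=-1) == y_val).mean()

    res = minimize(neg_accuracy, x0=np.full(n_models, 1.0 / n_models),
                   method="Nelder-Mead")
    w = np.abs(res.x)
    return w / w.sum()

# Usage on synthetic data: 3 local models, 100 epochs, 5 sleep stages (W, N1, N2, N3, REM).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=(3, 100))   # (n_models, n_epochs, n_classes)
y_val = rng.integers(0, 5, size=100)
print(max_voting(probs)[:10], output_averaging(probs)[:10])
print("learned weights:", weighted_nelder_mead(probs, y_val))
```

Because Nelder-Mead is derivative-free, the held-out objective in this sketch could be swapped for any non-differentiable performance measure (for example Cohen's kappa) without changing the combination scheme itself.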