{"title":"基于深度学习的自动睡眠分期集成组合策略分析","authors":"Adriana Anido-Alonso, D. Álvarez-Estévez","doi":"10.1109/BHI56158.2022.9926860","DOIUrl":null,"url":null,"abstract":"Deep learning has demonstrated its usefulness in reaching top-level performance on a number of application domains. However, the achievement of robust prediction capabilities on multi-database scenarios referring to a common task is still a broad of concern. The problem arises associated with different sources of variability modulating the respective database generative processes. Hence, even though great performance can be obtained during validation on a local (source) dataset, maintenance of prediction capabilities on external databases, or target domains, is usually problematic. Such scenario has been studied in the past by the authors in the context of inter-database generalization in the domain of sleep medicine. In this work we build up over past work and explore the use of different local deep-learning model's combination strategies to analyze their effects on the resulting inter-database generalization performance. More specifically, we investigate the use of three different ensemble combination strategies, namely max-voting, output averaging, and weighted Nelder-Mead output combination, and compare them to the more classical database-aggregation approach. We compare the performance resulting from each of these strategies using six independent, heterogeneous and open sleep staging databases. Based on the results of our experimentation we analyze and discuss the advantages and disadvantages of each of the examined approaches.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Analysis of ensemble-combination strategies for improving inter-database generalization of deep-learning-based automatic sleep staging\",\"authors\":\"Adriana Anido-Alonso, D. Álvarez-Estévez\",\"doi\":\"10.1109/BHI56158.2022.9926860\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning has demonstrated its usefulness in reaching top-level performance on a number of application domains. However, the achievement of robust prediction capabilities on multi-database scenarios referring to a common task is still a broad of concern. The problem arises associated with different sources of variability modulating the respective database generative processes. Hence, even though great performance can be obtained during validation on a local (source) dataset, maintenance of prediction capabilities on external databases, or target domains, is usually problematic. Such scenario has been studied in the past by the authors in the context of inter-database generalization in the domain of sleep medicine. In this work we build up over past work and explore the use of different local deep-learning model's combination strategies to analyze their effects on the resulting inter-database generalization performance. More specifically, we investigate the use of three different ensemble combination strategies, namely max-voting, output averaging, and weighted Nelder-Mead output combination, and compare them to the more classical database-aggregation approach. 
We compare the performance resulting from each of these strategies using six independent, heterogeneous and open sleep staging databases. Based on the results of our experimentation we analyze and discuss the advantages and disadvantages of each of the examined approaches.\",\"PeriodicalId\":347210,\"journal\":{\"name\":\"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BHI56158.2022.9926860\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BHI56158.2022.9926860","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Analysis of ensemble-combination strategies for improving inter-database generalization of deep-learning-based automatic sleep staging
Deep learning has demonstrated its usefulness in reaching top-level performance in a number of application domains. However, achieving robust prediction capabilities in multi-database scenarios that refer to a common task is still a broad concern. The problem is associated with the different sources of variability that modulate the respective database generative processes. Hence, even though great performance can be obtained during validation on a local (source) dataset, maintaining prediction capabilities on external databases, or target domains, is usually problematic. Such a scenario has been studied in the past by the authors in the context of inter-database generalization in the domain of sleep medicine. In this work we build on that past work and explore different strategies for combining local deep-learning models, analyzing their effects on the resulting inter-database generalization performance. More specifically, we investigate three ensemble-combination strategies, namely max-voting, output averaging, and weighted Nelder-Mead output combination, and compare them to the more classical database-aggregation approach. We compare the performance resulting from each of these strategies using six independent, heterogeneous, and open sleep-staging databases. Based on the results of our experimentation, we analyze and discuss the advantages and disadvantages of each of the examined approaches.
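To make the three combination strategies concrete, the following is a minimal, hypothetical Python sketch, not taken from the paper: it assumes each locally trained model outputs per-epoch class probabilities over the five sleep stages, and it uses validation accuracy as the Nelder-Mead objective, whereas the authors may optimize a different criterion (e.g., Cohen's kappa). Array names, shapes, and helper functions are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): three ways to combine the
# class-probability outputs of several locally trained sleep-staging models.
# `probs` has shape (n_models, n_epochs, n_classes); `y_val` holds held-out
# stage labels. Both are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

def max_voting(probs):
    """Majority vote over each model's hard predictions (ties -> lowest class)."""
    votes = probs.argmax(axis=-1)                      # (n_models, n_epochs)
    n_classes = probs.shape[-1]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)
    return counts.argmax(axis=0)                       # (n_epochs,)

def output_averaging(probs):
    """Unweighted average of the model output distributions."""
    return probs.mean(axis=0).argmax(axis=-1)

def nelder_mead_weights(probs_val, y_val):
    """Fit one non-negative weight per model on held-out data by maximizing
    the accuracy of the weighted-average output with the Nelder-Mead simplex."""
    n_models = probs_val.shape[0]

    def neg_accuracy(w):
        w = np.clip(w, 0.0, None)
        w = w / (w.sum() + 1e-12)
        combined = np.tensordot(w, probs_val, axes=1)  # (n_epochs, n_classes)
        return -(combined.argmax(axis=-1) == y_val).mean()

    res = minimize(neg_accuracy, x0=np.ones(n_models) / n_models,
                   method='Nelder-Mead')
    w = np.clip(res.x, 0.0, None)
    return w / w.sum()

def weighted_combination(probs, weights):
    """Weighted-average output combination using the fitted weights."""
    return np.tensordot(weights, probs, axes=1).argmax(axis=-1)
```

In this sketch, each `probs[i]` would come from a model trained on one source database and applied to the target recordings; the Nelder-Mead weights would be fit on held-out source-domain data before being applied to the target domain.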