Frontiers in Computer Science: Latest Publications

Exploring Deep Transfer Learning Techniques for Alzheimer's Dementia Detection.
IF 2.6
Frontiers in Computer Science Pub Date: 2021-05-01 Epub Date: 2021-05-12 DOI: 10.3389/fcomp.2021.624683
Youxiang Zhu, Xiaohui Liang, John A Batsis, Robert M Roth
{"title":"Exploring Deep Transfer Learning Techniques for Alzheimer's Dementia Detection.","authors":"Youxiang Zhu, Xiaohui Liang, John A Batsis, Robert M Roth","doi":"10.3389/fcomp.2021.624683","DOIUrl":"10.3389/fcomp.2021.624683","url":null,"abstract":"<p><p>Examination of speech datasets for detecting dementia, collected via various speech tasks, has revealed links between speech and cognitive abilities. However, the speech dataset available for this research is extremely limited because the collection process of speech and baseline data from patients with dementia in clinical settings is expensive. In this paper, we study the spontaneous speech dataset from a recent ADReSS challenge, a Cookie Theft Picture (CTP) dataset with balanced groups of participants in age, gender, and cognitive status. We explore state-of-the-art deep transfer learning techniques from image, audio, speech, and language domains. We envision that one advantage of transfer learning is to eliminate the design of handcrafted features based on the tasks and datasets. Transfer learning further mitigates the limited dementia-relevant speech data problem by inheriting knowledge from similar but much larger datasets. Specifically, we built a variety of transfer learning models using commonly employed MobileNet (image), YAMNet (audio), Mockingjay (speech), and BERT (text) models. Results indicated that the transfer learning models of text data showed significantly better performance than those of audio data. Performance gains of the text models may be due to the high similarity between the pre-training text dataset and the CTP text dataset. Our multi-modal transfer learning introduced a slight improvement in accuracy, demonstrating that audio and text data provide limited complementary information. Multi-task transfer learning resulted in limited improvements in classification and a negative impact in regression. By analyzing the meaning behind the AD/non-AD labels and Mini-Mental State Examination (MMSE) scores, we observed that the inconsistency between labels and scores could limit the performance of the multi-task learning, especially when the outputs of the single-task models are highly consistent with the corresponding labels/scores. In sum, we conducted a large comparative analysis of varying transfer learning models focusing less on model customization but more on pre-trained models and pre-training datasets. We revealed insightful relations among models, data types, and data labels in this research area.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":"3 ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8153512/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39027802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
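
The abstract names BERT as the text branch's pre-trained backbone. As a rough illustration of that kind of text-based transfer learning, the sketch below fine-tunes a BERT sequence classifier on a single transcript using the Hugging Face transformers library; the checkpoint name, example transcript, and label are illustrative assumptions, not the authors' exact setup.

```python
# A minimal sketch of fine-tuning BERT as a binary AD/non-AD text
# classifier, in the spirit of the paper's text-branch transfer learning.
# Checkpoint, example transcript, and label are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # 0 = non-AD, 1 = AD
)

# One hypothetical Cookie Theft Picture transcript and its label.
transcript = "the boy is on the stool reaching for the cookie jar ..."
inputs = tokenizer(transcript, truncation=True, return_tensors="pt")
labels = torch.tensor([1])

# A single fine-tuning step: cross-entropy loss over the [CLS] logits.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```

In practice the CTP transcripts would be batched and trained for several epochs, with evaluation on the held-out ADReSS test split.
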
Integration of the ImageJ Ecosystem in the KNIME Analytics Platform.
IF 2.6
Frontiers in Computer Science Pub Date: 2020-03-01 Epub Date: 2020-03-17 DOI: 10.3389/fcomp.2020.00008
Christian Dietz, Curtis T Rueden, Stefan Helfrich, Ellen T A Dobson, Martin Horn, Jan Eglinger, Edward L Evans, Dalton T McLean, Tatiana Novitskaya, William A Ricke, Nathan M Sherer, Andries Zijlstra, Michael R Berthold, Kevin W Eliceiri
{"title":"Integration of the ImageJ Ecosystem in the KNIME Analytics Platform.","authors":"Christian Dietz,&nbsp;Curtis T Rueden,&nbsp;Stefan Helfrich,&nbsp;Ellen T A Dobson,&nbsp;Martin Horn,&nbsp;Jan Eglinger,&nbsp;Edward L Evans,&nbsp;Dalton T McLean,&nbsp;Tatiana Novitskaya,&nbsp;William A Ricke,&nbsp;Nathan M Sherer,&nbsp;Andries Zijlstra,&nbsp;Michael R Berthold,&nbsp;Kevin W Eliceiri","doi":"10.3389/fcomp.2020.00008","DOIUrl":"https://doi.org/10.3389/fcomp.2020.00008","url":null,"abstract":"<p><p>Open-source software tools are often used for analysis of scientific image data due to their flexibility and transparency in dealing with rapidly evolving imaging technologies. The complex nature of image analysis problems frequently requires many tools to be used in conjunction, including image processing and analysis, data processing, machine learning and deep learning, statistical analysis of the results, visualization, correlation to heterogeneous but related data, and more. However, the development, and therefore application, of these computational tools is impeded by a lack of integration across platforms. Integration of tools goes beyond convenience, as it is impractical for one tool to anticipate and accommodate the current and future needs of every user. This problem is emphasized in the field of bioimage analysis, where various rapidly emerging methods are quickly being adopted by researchers. ImageJ is a popular open-source image analysis platform, with contributions from a global community resulting in hundreds of specialized routines for a wide array of scientific tasks. ImageJ's strength lies in its accessibility and extensibility, allowing researchers to easily improve the software to solve their image analysis tasks. However, ImageJ is not designed for development of complex end-to-end image analysis workflows. Scientists are often forced to create highly specialized and hard-to-reproduce scripts to orchestrate individual software fragments and cover the entire life-cycle of an analysis of an image dataset. KNIME Analytics Platform, a user-friendly data integration, analysis, and exploration workflow system, was designed to handle huge amounts of heterogeneous data in a platform-agnostic, computing environment and has been successful in meeting complex end-to-end demands in several communities, such as cheminformatics and mass spectrometry. Similar needs within the bioimage analysis community led to the creation of the KNIME Image Processing extension which integrates ImageJ into KNIME Analytics Platform, enabling researchers to develop reproducible and scalable workflows, integrating a diverse range of analysis tools. Here we present how users and developers alike can leverage the ImageJ ecosystem via the KNIME Image Processing extension to provide robust and extensible image analysis within KNIME workflows. 
We illustrate the benefits of this integration with examples, as well as representative scientific use cases.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":"2 ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3389/fcomp.2020.00008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38359736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
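
KNIME workflows are assembled graphically rather than in code, but the ImageJ2 ops layer that the KNIME Image Processing nodes build on can also be scripted directly. As a loose illustration, the sketch below drives those ops from Python via the pyimagej bridge; the file names and the choice of a Gaussian-blur op are assumptions for illustration, not taken from the paper.

```python
# A rough sketch of driving ImageJ2 ops from Python via pyimagej --
# the same ops layer the KNIME Image Processing extension builds on.
# File names and the chosen op are illustrative assumptions.
import imagej

ij = imagej.init()                           # boot a headless ImageJ2 gateway
img = ij.io().open("cells.tif")              # hypothetical input image
blurred = ij.op().filter().gauss(img, 2.0)   # Gaussian blur, sigma = 2.0
ij.io().save(blurred, "cells_blurred.tif")   # write the result back out
```

Within KNIME, the equivalent step would be an Image Reader node feeding an ImageJ2-backed filter node, giving the same op a reproducible place in a larger workflow.
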
D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection.
IF 2.6
Frontiers in Computer Science Pub Date: 2019-11-01 Epub Date: 2019-11-29 DOI: 10.3389/fcomp.2019.00011
Itir Onal Ertugrul, Le Yang, László A Jeni, Jeffrey F Cohn
{"title":"D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection.","authors":"Itir Onal Ertugrul, Le Yang, László A Jeni, Jeffrey F Cohn","doi":"10.3389/fcomp.2019.00011","DOIUrl":"10.3389/fcomp.2019.00011","url":null,"abstract":"<p><p>Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning the facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation; yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. And third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. D-PAttNet approach significantly improves upon existing state of the art.</p>","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":"1 ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6953909/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37536194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
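
The recoverable core idea is per-patch encoders whose features are fused by learned attention weights before multi-label AU prediction. The PyTorch fragment below is a simplified, static (non-temporal) sketch of that design under assumed patch counts and feature sizes; it is not the authors' D-PAttNet architecture, which additionally controls for 3D head pose and models spatiotemporal dynamics.

```python
# A simplified, static sketch of patch-attentive AU detection:
# each facial patch gets its own small CNN encoder, an attention head
# weights the patch features, and a sigmoid layer emits per-AU scores.
# Patch count, feature sizes, and AU count are illustrative assumptions.
import torch
import torch.nn as nn

class PatchAttentiveAUNet(nn.Module):
    def __init__(self, num_patches=9, num_aus=12, feat_dim=64):
        super().__init__()
        # One lightweight encoder per patch (patches assumed 32x32 RGB).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, feat_dim),
            )
            for _ in range(num_patches)
        ])
        self.attn = nn.Linear(feat_dim, 1)      # scalar score per patch
        self.head = nn.Linear(feat_dim, num_aus)

    def forward(self, patches):  # patches: (B, P, 3, 32, 32)
        feats = torch.stack(
            [enc(patches[:, i]) for i, enc in enumerate(self.encoders)],
            dim=1,
        )                                              # (B, P, feat_dim)
        weights = torch.softmax(self.attn(feats), dim=1)  # (B, P, 1)
        fused = (weights * feats).sum(dim=1)           # weighted patch fusion
        return torch.sigmoid(self.head(fused))        # per-AU probabilities

# Usage: a batch of 2 faces, each cropped into 9 patches.
model = PatchAttentiveAUNet()
scores = model(torch.randn(2, 9, 3, 32, 32))           # (2, 12) AU scores
```

Training such a model would pair the sigmoid outputs with a binary cross-entropy loss, one term per AU, since multiple AUs can be active simultaneously.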