2019 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA): Latest Publications

Effect of walking variations on complementary filter based inertial data fusion for ankle angle measurement
Lin Meng, Baihan Li, C. Childs, A. Buis, Feng He, Dong Ming
DOI: 10.1109/CIVEMSA45640.2019.9071595
Abstract: A key problem in measuring lower-limb joint angles with inertial sensors is drift, which results from error accumulation during time integration. Several types of methods have been proposed to eliminate this drift. Among them, complementary filter-based sensor fusion algorithms are widely used in real-time applications due to their efficiency. Existing studies have shown that the performance of these methods depends on walking speed, but other factors of walking variation have not been explored. This study systematically investigated walking variation factors and their effects on the accuracy of a proposed sensor fusion method during treadmill walking. Ten able-bodied participants walked on a treadmill at three speeds (0.5, 1.0 and 1.5 m/s). A 12-camera Vicon motion capture system was used as the reference. Accuracy was evaluated in terms of root-mean-square error (RMSE), offset and Pearson's correlation coefficient (PCC) across phases of a normalized gait cycle. A general linear model analysis of variance (ANOVA) was used to analyze the factors of treadmill speed and gait phase. Both factors had a significant influence on the RMSE, while only treadmill speed had a significant influence on the offset. These results provide insight for improving the complementary filter-based method in future work.
Citations: 2
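The abstract above does not reproduce the paper's filter equations; a minimal first-order complementary filter of the kind it describes, assuming a single joint angle estimated from one gyroscope rate stream and one accelerometer-derived angle (names and the 0.98 weight are illustrative, not the paper's), could be sketched as:

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyroscope angular rate (deg/s) with an accelerometer-derived
    angle (deg) to suppress integration drift. alpha weights the
    high-passed gyro integral against the low-passed accelerometer angle."""
    angle = accel_angle[0]  # initialize from the accelerometer estimate
    out = []
    for w, a in zip(gyro_rate, accel_angle):
        # integrate the gyro over one step, then pull toward the accel angle
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)
```

With a biased gyro (constant 0.5 deg/s offset) and a true angle of 10 deg, pure integration drifts without bound, while the filtered estimate stays bounded near the accelerometer reference.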
Positive Emotion Impairs Verbal Working Memory Updating: A Brain Network Study
Siyu Zhai, Shuang Liu, Jingjing Tong, Xiaoya Liu, Dong Ming
DOI: 10.1109/CIVEMSA45640.2019.9071608
Abstract: Emotion directly or indirectly affects human cognition and behavior, penetrating every aspect of daily life. With the rapid development of brain imaging techniques in recent years, increasing attention has been paid to the relationship between cognition and emotion using neurophysiological approaches. Here, we focused on the effect of different emotions on working memory (WM), an integral part of cognitive activity. Sixteen subjects were recruited to experience positive, neutral or negative emotions evoked by pictures from the International Affective Picture System (IAPS), and were then required to perform a four-digit verbal working memory task; a virtual measurement of emotional impact on working memory updating was developed by analyzing brain network characteristic parameters. Partial directed coherence (PDC) was employed to compute these parameters, including node degree, shortest path length, clustering coefficient and global efficiency, during the working memory updating period under the three emotion states. The results showed that node degree under the positive state was significantly smaller than under the neutral and negative states. The global efficiency of the positive state was significantly lower than that of the neutral state (p=0.047) and the negative state (p=0.027). The complexity of brain network connectivity under the positive emotion state declined significantly, indicating that positive emotion impairs verbal working memory during the updating period.
Citations: 0
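The graph metrics named in the abstract (node degree, shortest path length, global efficiency) are standard network measures. As an illustration, here is how degree, shortest paths and global efficiency can be computed on a binary directed network, such as one obtained by thresholding a PDC matrix; the thresholding step itself is an assumption, since the abstract does not describe the exact construction:

```python
import numpy as np

def network_metrics(adj):
    """Node degree, shortest-path matrix and global efficiency of a
    binary directed graph given as an adjacency matrix (1 = edge)."""
    n = adj.shape[0]
    degree = adj.sum(axis=0) + adj.sum(axis=1)  # in-degree + out-degree
    # Floyd-Warshall shortest paths with unit edge weights
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])
    # global efficiency: mean inverse shortest path over ordered pairs
    with np.errstate(divide='ignore'):
        inv = 1.0 / dist
    inv[~np.isfinite(inv)] = 0.0  # drop the diagonal and unreachable pairs
    g_eff = inv.sum() / (n * (n - 1))
    return degree, dist, g_eff
```

On a 3-node directed ring, every node has degree 2 and the global efficiency is 0.75 (half of all ordered pairs are at distance 1, half at distance 2).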
Influence of Stimuli Color Combination on Online SSVEP-based BCI Performance
Xiaodong Li, Xiaojun Wang, C. Wong, Rongwei Wen, F. Wan, Yong Hu
DOI: 10.1109/CIVEMSA45640.2019.9071623
Abstract: Stimulus color greatly affects the performance of a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI). However, few studies have evaluated the effect of stimulus color combinations on multiple-target BCIs in which the stimuli adopt different colors. In this study, 16 color combinations were tested using an online dual-task BCI, and detection accuracy, detection time and information transfer rate (ITR) were measured. The results showed that the yellow-red combination achieved the highest accuracy, shortest time and highest ITR, making it the optimal choice for improving BCI performance.
Citations: 1
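ITR for SSVEP BCIs is usually reported with the standard Wolpaw formula; assuming that convention (the abstract does not state which definition was used), it can be computed as:

```python
import math

def itr_bits_per_min(n_targets, accuracy, t_select):
    """Wolpaw information transfer rate in bits/min:
    n_targets selectable classes, classification accuracy in [0, 1],
    t_select seconds per selection (including any gaze-shift time)."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p == 0.0:
        bits = 0.0  # simple convention for the degenerate case
    return bits * 60.0 / t_select
```

For example, a 2-target BCI at 100% accuracy and one selection per minute conveys exactly 1 bit/min; a 4-target BCI at 90% accuracy and 4 s per selection gives roughly 20.6 bits/min.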
Pilot Study on Objective Evaluation of Human Auditory Ability using Hybrid EEG and fNIRS Acquisition
Zihao Xu, G. Ni, Siyang Han, Q. Zheng, Dong Ming
DOI: 10.1109/CIVEMSA45640.2019.9071629
Abstract: Multimodal brain-computer interfaces are very useful for identifying human brain states: combining modalities can improve objective evaluation capability through increased spatial and temporal resolution. In this paper, we use electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) signals to describe the responses of brain regions to sound. Subjects first underwent a simple adaptation phase and were then asked to make judgments on five types of sound. During the test, the subjects' EEG and hemodynamic information were collected synchronously in real time. The collected data were then preprocessed (including feature extraction) and analyzed. The results of this paradigm show that subjects' brain responses differ across sound types, which means that EEG information and hemodynamic parameters could serve as objective evaluation indexes of a subject's auditory ability. We plan to conduct further experiments to verify these conjectures and to propose an objective evaluation system for auditory assessment, especially for people with impaired hearing.
Citations: 1
Modeling Strategies and Spatial Filters for Improving the Performance of P300-speller within and across Individuals
Tao Wang, Pengxiao Liu, X. An, Yufeng Ke, Jinzhao Xu, Mingpeng Xu, Linghan Kong, Wentao Liu, Dong Ming
DOI: 10.1109/CIVEMSA45640.2019.9071607
Abstract: In recent years, improving the performance of cross-individual brain-computer interfaces (BCIs) has become a research hotspot. This paper proposes one within-individual model and two cross-individual models for P300 speller character recognition, and uses canonical correlation analysis (CCA) and task-related component analysis (TRCA) spatial filters to optimize the models. The methods are compared on their ability to classify P300 speller characters accurately. The best classification accuracy of the within-individual model is 98.83%, and the best accuracy of the cross-individual models is 85.09%.
Citations: 1
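The abstract names CCA as one of the spatial-filter techniques but gives no details of the pipeline. As a generic sketch (not the paper's exact method), the first canonical pair between two data matrices can be found by whitening both covariance matrices and taking an SVD of the whitened cross-covariance; the small ridge term is an assumption added for numerical stability:

```python
import numpy as np

def cca_first_pair(X, Y, reg=1e-6):
    """First canonical correlation between X (samples x p) and
    Y (samples x q). Returns (correlation, weight_x, weight_y)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # whiten via Cholesky factors, then SVD of the whitened cross-covariance
    Lx = np.linalg.cholesky(Cxx)
    Ly = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    wx = np.linalg.solve(Lx.T, U[:, 0])   # un-whiten the leading directions
    wy = np.linalg.solve(Ly.T, Vt[0])
    return s[0], wx, wy
```

When Y is an exact linear transform of X, the first canonical correlation is (up to the ridge term) 1, which is a quick sanity check on the implementation.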
A feasibility study of a video-based heart rate estimation method with convolutional neural networks
Senle Zhang, Rencheng Song, Juan Cheng, Yunfei Zhang, Xun Chen
DOI: 10.1109/CIVEMSA45640.2019.9071634
Abstract: Remote photoplethysmography (rPPG) is a video-based heart rate (HR) estimation technique with wide potential applications in health monitoring and human-computer interaction. However, the accuracy of conventional rPPG methods is easily disturbed by motion and illumination artifacts. Recently, deep learning based rPPG methods have attracted much attention due to their good performance and robustness to noise. This paper proposes a new rPPG scheme that uses a convolutional neural network (CNN) to map a pulse-accumulated image to the corresponding true heart rate value, where the spatial-temporal input images are constructed from raw pulses produced by conventional rPPG methods. To check the feasibility and ideal performance of the method, synthetic rPPG pulses are built from real electrocardiograph (ECG) or blood volume pulse (BVP) signals via a modified Akima cubic Hermite interpolation. We test the proposed method in three cases: subject dependent, subject independent, and cross-dataset. The experimental results show that the method estimates heart rate well from synthetic rPPG pulses even in the cross-dataset case (mean absolute error HRmae = 4.36 BPM, root mean square error HRrmse = 6.26 BPM, mean error rate percentage HRmer = 5.46%). This pilot study verifies the feasibility of the proposed method and provides a solid foundation for follow-up research with real rPPG pulses.
Citations: 2
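The three error measures quoted (HRmae, HRrmse, HRmer) follow standard definitions; assuming those definitions, they are computed from per-window estimated and reference heart rates as:

```python
import numpy as np

def hr_errors(hr_est, hr_ref):
    """HR_mae (BPM), HR_rmse (BPM) and HR_mer (%) between estimated
    and reference heart rates, as commonly reported for rPPG methods."""
    est = np.asarray(hr_est, dtype=float)
    ref = np.asarray(hr_ref, dtype=float)
    err = est - ref
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    mer = (np.abs(err) / ref).mean() * 100.0  # mean error rate percentage
    return mae, rmse, mer
```

For example, estimates of 72 and 80 BPM against references of 70 and 84 BPM give an MAE of 3 BPM, an RMSE of sqrt(10) BPM and an MER of about 3.81%.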
The Different Patterns of Reward Magnitude: A Scalp EEG Research
Qin Tao, Yajing Si, Fali Li, Keyi Duan, Yuanling Jiang, Yuanyuan Liao, D. Yao, Peng Xu
DOI: 10.1109/CIVEMSA45640.2019.9071601
Abstract: Efficiently distinguishing the current feedback condition is crucial for an individual's subsequent decision-making. An event-related potential named medial frontal negativity (MFN) has been shown to be sensitive to errors and gambling losses. In this study, we conducted a simple gambling game in which subjects chose one of two cards under different bets (big or small), allowing two critical reward features to be investigated: magnitude (small or large) and valence (loss or gain). We analyzed the MFN under different feedback conditions to gain insight into the differences in individual behavior between magnitude and valence. The results demonstrated that the MFN is sensitive to reward valence but insensitive to reward magnitude. Notably, from the perspective of dynamic functional brain networks, differences between small and large magnitudes were uncovered: the network pattern related to small rewards involved the left central lobe (near electrode C3), whereas large rewards involved the left prefrontal lobe (near electrode Fp1) and the right medial temporal lobe (near electrode T8).
Citations: 2
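An MFN waveform is conventionally extracted by averaging feedback-locked epochs with pre-stimulus baseline correction. The paper's preprocessing details are not in the abstract, so the following is only a sketch of that standard step, with window lengths chosen for illustration:

```python
import numpy as np

def erp_average(eeg, events, fs, tmin=-0.2, tmax=0.6):
    """Average event-locked epochs from a continuous 1-D EEG channel.
    events: sample indices of feedback onset; tmin/tmax in seconds
    relative to onset; the pre-stimulus window is used as baseline."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for ev in events:
        if ev - pre < 0 or ev + post > len(eeg):
            continue  # skip epochs that run off the recording
        ep = eeg[ev - pre: ev + post].astype(float)
        ep -= ep[:pre].mean()  # baseline-correct on the pre-stimulus window
        epochs.append(ep)
    return np.mean(epochs, axis=0)
```

On a synthetic recording with a constant offset and a deflection 100 ms after each event, the average is zero in the baseline and recovers the deflection at the right latency.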
Using Long Short-Term Memory Network for Recognizing Motor Imagery Tasks
Xiaoyan Xu, Fangzhou Xu, M. Shu, Yingchun Zhang, Qi Yuan, Yuanjie Zheng
DOI: 10.1109/CIVEMSA45640.2019.9071630
Abstract: Classifying electrocorticogram (ECoG) signals based on motor imagery (MI) is an important issue for BCI systems. Deep learning approaches are widely applied to learn representations and classify different types of data, yet studies that model cognitive events from ECoG signals are very limited. In this paper, we propose a deep learning method that uses long short-term memory (LSTM) recurrent neural networks to learn representations from ECoG and gradient boosting (GB) to classify MI ECoG, and demonstrate its advantages. First, multichannel ECoG time series are fed into the LSTM-GB model, preserving sequential information. An LSTM network is then trained to learn robust spatial-temporal representations; its information processing mechanism can extract the subtle temporal dependencies of ECoG data streams. The LSTM features coupled with the GB classifier yield an accuracy of 100% on a publicly available ECoG dataset. Experiments demonstrate that the proposed method can effectively recognize different MI tasks, and empirical evaluation shows significant improvements in classification accuracy over current state-of-the-art approaches in the MI-based BCI field.
Citations: 0
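The pipeline runs ECoG sequences through an LSTM and hands the learned representation to a gradient boosting classifier. As a minimal illustration of the LSTM side only (weight shapes and gate ordering here are conventional choices, not taken from the paper), a single-layer forward pass over one trial looks like this:

```python
import numpy as np

def lstm_forward(x_seq, Wx, Wh, b):
    """Single-layer LSTM forward pass over a (T, d_in) sequence.
    Wx: (d_in, 4*h), Wh: (h, 4*h), b: (4*h,); stacked gate order is
    input, forget, cell candidate, output. Returns the final hidden
    state, which a downstream classifier would use as the feature vector."""
    h_dim = Wh.shape[0]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in x_seq:
        z = Wx.T @ x + Wh.T @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sig(i), sig(f), sig(o)
        c = f * c + i * np.tanh(g)   # gated cell-state update
        h = o * np.tanh(c)           # gated hidden-state output
    return h
```

Because the output is o * tanh(c), every component of the returned hidden state lies strictly inside (-1, 1), which is easy to verify on random inputs.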
Automatic Indonesian Image Caption Generation using CNN-LSTM Model and FEEH-ID Dataset
E. Mulyanto, Esther Irawati Setiawan, E. M. Yuniarno, M. Purnomo
DOI: 10.1109/CIVEMSA45640.2019.9071632
Abstract: Image captioning is a challenging problem in computer vision research. This paper extends research on automatic image caption generation to Indonesian: descriptions in Indonesian sentences are generated for unlabeled images. The dataset used is FEEH-ID, the first Indonesian image captioning dataset; this research is crucial because no corpus for image captioning in Indonesian was previously available. The paper compares experimental results on the FEEH-ID dataset with English, Chinese and Japanese datasets using CNN and LSTM models. On the test set, the proposed model achieves promising results of 50.0 for BLEU-1 and 23.9 for BLEU-3, above the average BLEU scores reported on the other language datasets. The merged CNN-LSTM model performs well on the FEEH-ID dataset, and the results are expected to improve with a larger dataset.
Citations: 5
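The BLEU-1 score quoted is, by its standard definition, clipped unigram precision multiplied by a brevity penalty. A plain implementation of that definition (whitespace tokenization assumed, no smoothing):

```python
from collections import Counter
import math

def bleu1(candidate, references):
    """Sentence-level BLEU-1: clipped unigram precision times the
    brevity penalty, with the closest-length reference chosen."""
    cand = candidate.split()
    if not cand:
        return 0.0
    refs = [r.split() for r in references]
    # clip each candidate unigram count by its max count in any reference
    max_ref = Counter()
    for r in refs:
        for w, c in Counter(r).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    best_ref = min(refs, key=lambda r: abs(len(r) - len(cand)))
    bp = 1.0 if len(cand) > len(best_ref) else math.exp(1 - len(best_ref) / len(cand))
    return bp * precision
```

The clipping matters: the degenerate candidate "the the the" against reference "the cat" scores only 1/3, not 1, because "the" is counted at most once.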
Design of Visual Gaze Target Locating Device Based on Depth Camera
Yuxin Liu, Lintao Hu, Bowen Xiao, Xiao Y. Wu, Yanhan. Chen, Donghai. Ye, W. Hou, Xiaolin Zheng
DOI: 10.1109/CIVEMSA45640.2019.9071627
Abstract: Building on recent research into machine vision positioning, this paper discusses common machine vision positioning systems and their principles, and designs a visual gaze target locating device to detect a person's target intention. The device provides two main functions: dynamic following, which determines the tester's fixation direction, and target positioning, which obtains the spatial position of the fixation target. By observing the subject's visual fixation direction, the device can recognize their intention and obtain the spatial position information of the target, offering a human-computer interaction pathway with widely applicable scenarios.
Citations: 1
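The abstract does not give the positioning math. One common way to turn a fixation direction into a spatial target position, assuming the device recovers a gaze ray (eye position plus direction) from the depth camera and the target lies on a known planar surface, is a ray-plane intersection; the planar-surface assumption is ours, not the paper's:

```python
import numpy as np

def gaze_target_on_plane(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray (3-D eye position and gaze direction) with a
    plane (a point on it and its normal) to get the fixation target.
    Returns None if the ray is parallel to the plane or points away."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = gaze_dir @ plane_normal
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the target plane
    t = ((plane_point - eye_pos) @ plane_normal) / denom
    if t < 0:
        return None  # plane is behind the viewer
    return eye_pos + t * gaze_dir
```

For a viewer at the origin looking straight down the z-axis at a wall 2 m away, the recovered target is (0, 0, 2); a gaze parallel to the wall yields no intersection.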