IEEE Transactions on Cognitive and Developmental Systems: Latest Publications

IEEE Computational Intelligence Society
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352773
Volume 16, Issue 1, Pages C3-C3 | Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419134
Citations: 0

IEEE Transactions on Cognitive and Developmental Systems Publication Information
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352771
Volume 16, Issue 1, Pages C2-C2 | Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419103
Citations: 0

IEEE Transactions on Cognitive and Developmental Systems Information for Authors
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-02-02 | DOI: 10.1109/TCDS.2024.3352775
Volume 16, Issue 1, Pages C4-C4 | Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10419135
Citations: 0

An Electroencephalography-Based Brain–Computer Interface for Emotion Regulation With Virtual Reality Neurofeedback
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-29 | DOI: 10.1109/TCDS.2024.3357547
Authors: Kendi Li; Weichen Huang; Wei Gao; Zijing Guan; Qiyun Huang; Jin-Gang Yu; Zhu Liang Yu; Yuanqing Li
Abstract: An increasing number of people fail to properly regulate their emotions for various reasons. Although brain–computer interfaces (BCIs) have shown potential in neural regulation, few effective BCI systems have been developed to assist users in emotion regulation. In this article, we propose an electroencephalography (EEG)-based BCI for emotion regulation with virtual reality (VR) neurofeedback. Music clips with positive, neutral, and negative emotions were first presented, and the participants were asked to regulate their emotions accordingly. The BCI system simultaneously collected the participants' EEG signals and assessed their emotions. Based on the emotion recognition results, neurofeedback was provided to participants in the form of the facial expression of a virtual pop star on a three-dimensional (3-D) virtual stage. Eighteen healthy participants achieved satisfactory performance, with an average accuracy of 81.1% with neurofeedback. Additionally, the average accuracy increased significantly from 65.4% at the start to 87.6% at the end of a regulation trial (a trial corresponded to a music clip). In comparison, these participants could not significantly improve their accuracy within a regulation trial without neurofeedback. The results demonstrated the effectiveness of our system and showed that VR neurofeedback played a key role during emotion regulation.
Volume 16, Issue 4, Pages 1405-1417
Citations: 0

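The abstract reports per-trial emotion recognition accuracies but does not spell out the decoding pipeline. As a rough illustration of how an EEG-based emotion classifier of this kind is commonly built, here is a minimal band-power plus linear-classifier sketch; the sampling rate, frequency bands, classifier choice, and synthetic data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) EEG array.
    Returns log band-power features of shape (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=FS, axis=-1)
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in BANDS.values()]
    return np.concatenate(feats, axis=-1)

# Synthetic stand-in for real epochs: 120 two-second trials, 30 channels,
# with labels 0/1/2 for negative/neutral/positive music clips.
rng = np.random.default_rng(0)
X = band_power_features(rng.standard_normal((120, 30, 2 * FS)))
y = rng.integers(0, 3, size=120)

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```
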
Kernel-Ridge-Regression-Based Randomized Network for Brain Age Classification and Estimation
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-18 | DOI: 10.1109/TCDS.2024.3349593
Authors: Raveendra Pilli; Tripti Goel; R. Murugan; M. Tanveer; P. N. Suganthan
Abstract: Accelerated brain aging and abnormalities are associated with variations in brain patterns. Effective and reliable assessment methods are required to accurately classify and estimate brain age. In this study, a brain age classification and estimation framework is proposed using structural magnetic resonance imaging (sMRI) scans, a 3-D convolutional neural network (3-D-CNN), and a kernel-ridge-regression-based random vector functional link (KRR-RVFL) network. We used 480 brain MRI images from the publicly available IXI database and segmented them into gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) images to show age-related associations by region. Features are extracted from the MRI images using the 3-D-CNN and fed into the wavelet KRR-RVFL network for brain age classification and prediction. The proposed algorithm achieved high classification accuracy: 97.22%, 99.31%, and 95.83% for the GM, WM, and CSF regions, respectively. Moreover, it demonstrated excellent prediction accuracy, with a mean absolute error (MAE) of 3.89 years, 3.64 years, and 4.49 years for the GM, WM, and CSF regions, confirming that changes in WM volume are significantly associated with normal brain aging. Additionally, voxel-based morphometry (VBM) is used to examine age-related anatomical alterations in GM, WM, and CSF tissue volumes across brain regions.
Volume 16, Issue 4, Pages 1342-1351 | Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10405861
Citations: 0

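The KRR-RVFL output stage named in the abstract combines a fixed random hidden layer (with direct input links) and kernel ridge regression. The following is a minimal sketch of that idea under stated assumptions: the Mexican-hat activation stands in for the paper's wavelet activation, and the hyperparameters, feature dimensions, and use of scikit-learn's KernelRidge are illustrative choices, not the authors' implementation (which takes 3-D-CNN features as input).

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

class KRRRVFLRegressor:
    """Sketch of a kernel-ridge-regression RVFL: a fixed random hidden layer
    plus direct input links, with kernel ridge regression as the output layer."""

    def __init__(self, n_hidden=200, alpha=1.0, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)
        self.krr = KernelRidge(alpha=alpha, kernel="rbf")

    def _features(self, X):
        Z = X @ self.W + self.b
        H = (1.0 - Z**2) * np.exp(-0.5 * Z**2)   # Mexican-hat (wavelet-like) activation
        return np.hstack([X, H])                 # direct links + random features

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.uniform(-1.0, 1.0, self.n_hidden)
        self.krr.fit(self._features(X), y)
        return self

    def predict(self, X):
        return self.krr.predict(self._features(X))

# Toy usage: predict "age" from synthetic deep features.
rng = np.random.default_rng(1)
X, age = rng.standard_normal((480, 64)), rng.uniform(20, 80, 480)
model = KRRRVFLRegressor().fit(X[:400], age[:400])
mae = np.abs(model.predict(X[400:]) - age[400:]).mean()
print(f"toy MAE: {mae:.2f} years")
```
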
The Effect of Expressive Robot Behavior on Users' Mental Effort: A Pupillometry Study
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-15 | DOI: 10.1109/TCDS.2024.3352893
Authors: Marieke van Otterdijk; Bruno Laeng; Diana Saplacan Lindblom; Jim Torresen
Abstract: Robots are becoming part of our social landscape, and social interaction with them must be efficient and intuitive to understand; nonverbal cues help make interactions between humans and robots more efficient. This study measures mental effort to investigate which factors influence the intuitive understanding of expressive nonverbal robot motions. Fifty participants watched eighteen short video clips of three different robot types performing expressive behaviors while their pupil response and gaze were measured with an eye tracker. Our findings indicate that the appearance of the robot, the viewing angle, and the expression shown by the robot all influence cognitive load and may therefore influence the intuitive understanding of expressive robot behavior. Furthermore, we found differences in fixation time for different features of the different robots. With these insights, we identify possible directions for making interactions between humans and robots more efficient and intuitive.
Volume 16, Issue 2, Pages 474-484
Citations: 0

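Pupillometry indexes mental effort through task-evoked pupil dilation. As a generic sketch of how such a measure is typically computed (per-trial subtractive baseline correction), the sampling rate, baseline window, and toy data below are assumptions for illustration, not the study's actual analysis pipeline.

```python
import numpy as np

def mean_evoked_dilation(pupil, fs=60.0, baseline_s=0.5):
    """pupil: (n_trials, n_samples) pupil-diameter traces in mm, one per video clip.
    Subtracts each trial's pre-stimulus baseline and returns the mean dilation,
    a common proxy for the mental effort invested in that trial."""
    n_base = int(baseline_s * fs)
    baseline = pupil[:, :n_base].mean(axis=1, keepdims=True)
    return (pupil[:, n_base:] - baseline).mean(axis=1)

# Toy traces: 25 trials of 10 s at 60 Hz, with an effort-related dilation
# appearing after the 0.5 s baseline window.
rng = np.random.default_rng(0)
traces = 3.0 + 0.1 * rng.standard_normal((25, 600))
traces[:, 30:] += 0.2
print("mean evoked dilation (mm):", mean_evoked_dilation(traces).mean().round(3))
```
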
TR-TransGAN: Temporal Recurrent Transformer Generative Adversarial Network for Longitudinal MRI Dataset Expansion
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-08 | DOI: 10.1109/TCDS.2023.3345922
Authors: Chen-Chen Fan; Hongjun Yang; Liang Peng; Xiao-Hu Zhou; Shiqi Liu; Sheng Chen; Zeng-Guang Hou
Abstract: Longitudinal magnetic resonance imaging (MRI) datasets are important for the study of degenerative diseases because they contain data from multiple points in time with which to track disease progression. However, longitudinal datasets are often incomplete because patients drop out unexpectedly. In previous work, we proposed an augmentation method, the temporal recurrent generative adversarial network (TR-GAN), that can complete missing session data in MRI datasets. TR-GAN uses a simple U-Net as its generator, which limits its performance. Transformers have had great success in computer vision, and this article introduces them into the longitudinal dataset completion task. The multihead attention mechanism in the transformer has huge memory requirements, which makes it difficult to train on 3-D MRI data with graphics processing units (GPUs) that have limited memory. To build a memory-friendly transformer-based generator, we introduce a Hilbert transform module (HTM) that converts 3-D data to 2-D data while preserving locality fairly well. To compensate for the difficulty convolutional neural network (CNN)-based models have in establishing long-range dependencies, we propose a Swin-transformer-based up/down-sampling module (STU/STD) that combines a Swin transformer module and a CNN module to capture global and local information simultaneously. Extensive experiments show that our model reduces mean squared error (MSE) by at least 7.16% compared to the previous state-of-the-art method.
Volume 16, Issue 4, Pages 1223-1232
Citations: 0

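The memory argument in the abstract is easy to make concrete: full self-attention stores an N x N attention matrix, so treating every voxel of a 3-D MRI volume as a token is infeasible, whereas a locality-preserving flattening combined with windowed (Swin-style) attention keeps each matrix tiny. The voxel counts and window size below are illustrative assumptions, not values from the paper.

```python
def attention_matrix_gib(n_tokens, bytes_per_element=4):
    """Memory for one dense float32 attention matrix, in GiB."""
    return n_tokens**2 * bytes_per_element / 2**30

volume_tokens = 128 * 128 * 128   # one token per voxel of a 128^3 volume
window_tokens = 7 * 7             # one Swin-style local attention window

print(f"full 3-D volume : {attention_matrix_gib(volume_tokens):,.0f} GiB per matrix")
print(f"7x7 local window: {attention_matrix_gib(window_tokens) * 2**30:,.0f} bytes per matrix")
```
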
Multiple Instance Learning for Cheating Detection and Localization in Online Examinations
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-05 | DOI: 10.1109/TCDS.2024.3349705
Authors: Yemeng Liu; Jing Ren; Jianshuo Xu; Xiaomei Bai; Roopdeep Kaur; Feng Xia
Abstract: The coronavirus disease 2019 (COVID-19) pandemic has caused many courses and exams to be conducted online. Cheating-behavior detection models in examination invigilation systems play a pivotal role in guaranteeing the fairness of remote examinations. However, cheating behavior is rare, and most researchers do not comprehensively consider features such as head posture, gaze angle, body posture, and background information in the cheating-detection task. In this article, we develop and present CHEESE, a CHEating detection framework via multiple instance learning. The framework consists of a label generator that implements weak supervision and a feature encoder that learns discriminative features. In addition, the framework combines body-posture and background features extracted by 3-D convolution with eye gaze, head posture, and facial features captured by OpenFace 2.0. These features are stitched together and fed into a spatiotemporal graph module that analyzes spatiotemporal changes in video clips to detect cheating behaviors. Experiments on three datasets, University of Central Florida (UCF)-Crime, ShanghaiTech, and online exam proctoring (OEP), demonstrate the effectiveness of our method compared with state-of-the-art approaches, achieving a frame-level area under the curve (AUC) score of 87.58% on the OEP dataset.
Volume 16, Issue 4, Pages 1315-1326
Citations: 0

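The core multiple-instance idea is that each exam video is a "bag" of short clips ("instances"): only bag-level (weak) labels are available, and the bag score is pooled from per-clip scores, which also localizes the suspicious segment. The top-k pooling and toy scores below are a generic MIL sketch, not the CHEESE architecture itself.

```python
import numpy as np

def bag_score(instance_scores, k=3):
    """Score a whole exam video (bag) from per-clip anomaly scores (instances)
    by averaging the top-k clips; high instance scores also localize cheating."""
    return np.sort(instance_scores)[-k:].mean()

# Toy session: 32 clips, three of which contain anomalous behavior.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 0.3, size=32)
scores[[5, 6, 7]] = [0.85, 0.9, 0.8]

print("bag score:", round(bag_score(scores), 3))
print("suspected clips:", np.sort(np.argsort(scores)[-3:]))
```
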
MIMo: A Multimodal Infant Model for Studying Cognitive Development
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-05 | DOI: 10.1109/TCDS.2024.3350448
Authors: Dominik Mattern; Pierre Schumacher; Francisco M. López; Marcel C. Raabe; Markus R. Ernst; Arthur Aubret; Jochen Triesch
Abstract: Human intelligence and human consciousness emerge gradually during the process of cognitive development. Understanding this development is an essential aspect of understanding the human mind and may facilitate the construction of artificial minds with similar properties. Importantly, human cognitive development relies on embodied interactions with the physical and social environment, which is perceived via complementary sensory modalities. These interactions allow the developing mind to probe the causal structure of the world. This is in stark contrast to common machine learning approaches, e.g., for large language models, which merely passively "digest" large amounts of training data but are not in control of their sensory inputs. However, computational modeling of the kind of self-determined embodied interactions that lead to human intelligence and consciousness is a formidable challenge. Here, we present the Multimodal Infant Model (MIMo), an open-source model for studying early cognitive development through computer simulations. MIMo's body is modeled after an 18-month-old child and has detailed five-fingered hands. MIMo perceives its surroundings via binocular vision, a vestibular system, proprioception, and touch perception through a full-body virtual skin, while two different actuation models allow control of its body. We describe the design and interfaces of MIMo and provide examples illustrating its use.
Volume 16, Issue 4, Pages 1291-1301
Citations: 0

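MIMo is described as an open-source simulation platform with multimodal observations and two actuation models. One typical way to drive such a platform is a Gymnasium-style interaction loop like the sketch below; the environment id "MIMoReach-v0" and the contents of the observations are assumptions for illustration, so the actual registered environments and interfaces should be checked in the MIMo repository.

```python
import gymnasium as gym

# Hypothetical environment id; see the MIMo repository for the real ones.
env = gym.make("MIMoReach-v0")
obs, info = env.reset(seed=0)

for _ in range(200):
    action = env.action_space.sample()   # random "motor babbling"
    obs, reward, terminated, truncated, info = env.step(action)
    # obs would bundle the multimodal senses (vision, touch, proprioception, ...)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```
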
Reconfiguration of Cognitive Control Networks During a Long-Duration Flanker Task
IF 5.0 | CAS Region 3 | Computer Science
IEEE Transactions on Cognitive and Developmental Systems | Pub Date: 2024-01-05 | DOI: 10.1109/TCDS.2024.3350974
Authors: Jia Liu; Yongjie Zhu; Zheng Chang; Tiina Parviainen; Christian Antfolk; Timo Hämäläinen; Fengyu Cong
Abstract: Continuous task engagement generally leads to vigilance decrement and deteriorates task performance. However, there is no consistent evidence on how the conflict effect is modulated by vigilance decrement, and little is known about the underlying neural mechanisms. Here, we used an electroencephalogram (EEG) dataset collected during a prolonged flanker task to examine the interactions between vigilance and congruency on behavioral performance and neural measures. Specifically, we extracted a sequence of event-related potentials (ERPs) using temporal principal component analysis (PCA) and performed functional network analysis with graph measures. Behavioral results showed that performance deteriorated with vigilance decrement, but the capability for conflict processing was maintained over time. Regarding the neural results, under vigilance decrement the conflict effect reflected in P3a changed, whereas that reflected in P3b was maintained. A theta-band frontoparietal network was observed in the face of conflicting interference, and the conflict effect on graph measures disappeared over time. These results demonstrate deteriorated task performance, impaired cognitive functions, and the reconfiguration of cognitive control networks during a prolonged flanker task. Our findings also support the view that temporal PCA and event-related network analysis can be effective for investigating the neural dynamics of complex cognitive processes.
Volume 16, Issue 4, Pages 1364-1373
Citations: 0

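Temporal PCA, as named in the abstract, treats each averaged ERP waveform as an observation and the time points as variables, so the principal components are temporal waveforms that help separate overlapping deflections such as P3a and P3b. The sketch below is a generic, unrotated version on toy data (ERP-style temporal PCA usually adds a rotation step such as Promax); all dimensions and signals are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def temporal_pca(erp_matrix, n_components=5):
    """erp_matrix: (n_observations, n_timepoints), one row per averaged ERP
    (e.g., subject x channel x condition). Returns component scores, the
    temporal waveforms, and the explained variance ratios."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(erp_matrix)
    return scores, pca.components_, pca.explained_variance_ratio_

# Toy ERPs: 200 waveforms, 300 time points (-0.2 s to 1.0 s), with a P3-like
# deflection of varying amplitude buried in noise.
rng = np.random.default_rng(0)
t = np.linspace(-0.2, 1.0, 300)
p3 = np.exp(-((t - 0.35) ** 2) / 0.01)
erp = rng.standard_normal((200, 1)) * p3 + 0.3 * rng.standard_normal((200, 300))

scores, waveforms, evr = temporal_pca(erp)
print("variance explained per component:", evr.round(2))
```
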