"Age Effect in Human Brain Responses to Emotion Arousing Images: The EEG 3D-Vector Field Tomography Modeling Approach"
Chrysa D. Papadaniil, V. Kosmidou, A. Tsolaki, L. Hadjileontiadis, M. Tsolaki, Y. Kompatsiaris
IEEE Transactions on Autonomous Mental Development, vol. 7, pp. 223-235, published online 2015-03-30. DOI: 10.1109/TAMD.2015.2416977
Abstract: Understanding the brain's responses to emotional stimulation remains a great challenge, and studies on the effect of aging on neural activation report conflicting results. In this paper, pictures of two classes of facial affect, anger and fear, were presented to young and elderly participants. High-density 256-channel EEG data were recorded, and an innovative methodology was used to map the activated brain state at the N170 event-related potential component. The methodology, 3D Vector Field Tomography, reconstructs the electrostatic field within the head volume and requires no prior modeling of the individual's brain. Results showed that the elderly exhibited greater N170 amplitudes, and age-based differences were also observed in the topographic distribution of the EEG recordings at the N170 component. The brain activation analysis was performed over a set of regions of interest. The maximum activation area appeared to be emotion-specific: the anger conditions induced maximum activation in the inferior frontal gyrus, while fear activated the superior temporal gyrus more strongly. The approach demonstrates the potential of the proposed computational model to reveal the age effect on brain activation elicited by emotion-arousing images, which could in turn inform the design of assistive clinical applications.
{"title":"Decoding Semantics Categorization during Natural Viewing of Video Streams","authors":"Xintao Hu, Lei Guo, Junwei Han, Tianming Liu","doi":"10.1109/TAMD.2015.2415413","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2415413","url":null,"abstract":"Exploring the functional mechanism of the human brain during semantics categorization and subsequently leverage current semantics-oriented multimedia analysis by functional brain imaging have been receiving great attention in recent years. In the field, most of existing studies utilized strictly controlled laboratory paradigms as experimental settings in brain imaging data acquisition. They also face the critical problem of modeling functional brain response from acquired brain imaging data. In this paper, we present a brain decoding study based on sparse multinomial logistic regression (SMLR) algorithm to explore the brain regions and functional interactions during semantics categorization. The setups of our study are two folds. First, we use naturalistic video streams as stimuli in functional magnetic resonance imaging (fMRI) to simulate the complex environment for semantics perception that the human brain has to process in real life. Second, we model brain responses to semantics categorization as functional interactions among large-scale brain networks. Our experimental results show that semantics categorization can be accurately predicted by both intrasubject and intersubject brain decoding models. The brain responses identified by the decoding model reveal that a wide range of brain regions and functional interactions are recruited during semantics categorization. Especially, the working memory system exhibits significant contributions. Other substantially involved brain systems include emotion, attention, vision and language systems.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"201-210"},"PeriodicalIF":0.0,"publicationDate":"2015-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2415413","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motor-Primed Visual Attention for Humanoid Robots","authors":"L. Lukic, A. Billard, J. Santos-Victor","doi":"10.1109/TAMD.2015.2417353","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2417353","url":null,"abstract":"We present a novel, biologically inspired, approach to an efficient allocation of visual resources for humanoid robots in a form of a motor-primed visual attentional landscape. The attentional landscape is a more general, dynamic and a more complex concept of an arrangement of spatial attention than the popular “attentional spotlight” or “zoom-lens” models of attention. Motor-priming of attention is a mechanism for prioritizing visual processing to motor-relevant parts of the visual field, in contrast to other, motor-irrelevant, parts. In particular, we present two techniques for constructing a visual “attentional landscape”. The first, more general, technique, is to devote visual attention to the reachable space of a robot (peripersonal space-primed attention). The second, more specialized, technique is to allocate visual attention with respect to motor plans of the robot (motor plans-primed attention). Hence, in our model, visual attention is not exclusively defined in terms of visual saliency in color, texture or intensity cues, it is rather modulated by motor information. This computational model is inspired by recent findings in visual neuroscience and psychology. In addition to two approaches to constructing the attentional landscape, we present two methods for using the attentional landscape for driving visual processing. We show that motor-priming of visual attention can be used to very efficiently distribute limited computational resources devoted to the visual processing. The proposed model is validated in a series of experiments conducted with the iCub robot, both using the simulator and the real robot.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"76-91"},"PeriodicalIF":0.0,"publicationDate":"2015-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2417353","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Local Multimodal Serial Analysis for Fusing EEG-fMRI: A New Method to Study Familial Cortical Myoclonic Tremor and Epilepsy"
Li Dong, Pu Wang, Yi Bin, Jiayan Deng, Y. Li, Leiting Chen, C. Luo, D. Yao
IEEE Transactions on Autonomous Mental Development, vol. 7, pp. 311-319, published online 2015-03-10. DOI: 10.1109/TAMD.2015.2411740
Abstract: Integrating information from multiple neuroimaging modalities, such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), has become popular for investigating various types of epilepsy. However, the analysis of simultaneous EEG-fMRI data in epilepsy faces two problems: the variation of hemodynamic response functions (HRFs), and the low signal-to-noise ratio (SNR) of the data. Here, we propose a new unsupervised multimodal method, termed local multimodal serial analysis (LMSA), which may compensate for these deficiencies in multimodal integration. A simulation study comparing LMSA with traditional EEG-informed fMRI analysis, which directly applies the general linear model (GLM), confirmed its superior performance. Applied to simultaneous EEG-fMRI data from familial cortical myoclonic tremor and epilepsy (FCMTE), LMSA revealed meaningful BOLD changes related to the EEG discharges, notably in the cerebellum and the frontal lobe (especially the inferior frontal gyrus). These results demonstrate that LMSA is a promising technique for integrating multimodal data to further our understanding of brain dysfunction.
"Sparsity-Constrained fMRI Decoding of Visual Saliency in Naturalistic Video Streams"
Xintao Hu, Cheng Lv, Gong Cheng, Jinglei Lv, Lei Guo, Junwei Han, Tianming Liu
IEEE Transactions on Autonomous Mental Development, vol. 7, pp. 65-75, published online 2015-03-09. DOI: 10.1109/TAMD.2015.2409835
Abstract: Naturalistic stimuli such as video watching have been increasingly used in functional magnetic resonance imaging (fMRI)-based brain encoding and decoding studies, since they provide the kind of rich, dynamic information that the human brain has to process in everyday life. In this paper, we propose a sparsity-constrained decoding model to explore whether bottom-up visual saliency in continuous video streams can be effectively decoded from brain activity recorded by fMRI, and to examine whether sparsity constraints improve visual saliency decoding. Specifically, we use a biologically plausible computational model to quantify the visual saliency in video streams, and adopt a sparse representation algorithm to learn atomic fMRI signal dictionaries that are representative of the patterns of whole-brain fMRI signals. Sparse representation also links the learned atomic dictionary with the quantified video saliency. Experimental results show that the temporal visual saliency in video streams can be decoded well and that the sparsity constraints improve the performance of the fMRI decoding models.
{"title":"Ecological Active Vision: Four Bioinspired Principles to Integrate Bottom–Up and Adaptive Top–Down Attention Tested With a Simple Camera-Arm Robot","authors":"D. Ognibene, G. Baldassarre","doi":"10.1109/TAMD.2014.2341351","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2341351","url":null,"abstract":"Vision gives primates a wealth of information useful to manipulate the environment, but at the same time it can easily overwhelm their computational resources. Active vision is a key solution found by nature to solve this problem: a limited fovea actively displaced in space to collect only relevant information. Here we highlight that in ecological conditions this solution encounters four problems: 1) the agent needs to learn where to look based on its goals; 2) manipulation causes learning feedback in areas of space possibly outside the attention focus; 3) good visual actions are needed to guide manipulation actions, but only these can generate learning feedback; and 4) a limited fovea causes aliasing problems. We then propose a computational architecture (“BITPIC”) to overcome the four problems, integrating four bioinspired key ingredients: 1) reinforcement-learning fovea-based top-down attention; 2) a strong vision-manipulation coupling; 3) bottom-up periphery-based attention; and 4) a novel action-oriented memory. The system is tested with a simple simulated camera-arm robot solving a class of search-and-reach tasks involving color-blob “objects.” The results show that the architecture solves the problems, and hence the tasks, very efficiently, and highlight how the architecture principles can contribute to a full exploitation of the advantages of active vision in ecological conditions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"3-25"},"PeriodicalIF":0.0,"publicationDate":"2015-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2341351","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mental States, EEG Manifestations, and Mentally Emulated Digital Circuits for Brain-Robot Interaction","authors":"S. Bozinovski, Adrijan Božinovski","doi":"10.1109/TAMD.2014.2387271","DOIUrl":"https://doi.org/10.1109/TAMD.2014.2387271","url":null,"abstract":"This paper focuses on electroencephalogram (EEG) manifestations of mental states and actions, emulation of control and communication structures using EEG manifestations, and their application in brain-robot interactions. The paper introduces a mentally emulated demultiplexer, a device which uses mental actions to demultiplex a single EEG channel into multiple digital commands. The presented device is applicable in controlling several objects through a single EEG channel. The experimental proof of the concept is given by an obstacle-containing trajectory which should be negotiated by a robotic arm with two degrees of freedom, controlled by mental states of a human brain using a single EEG channel. The work is presented in the framework of Human-Robot interaction (HRI), specifically in the framework of brain-robot interaction (BRI). This work is a continuation of a previous work on developing mentally emulated digital devices, such as a mental action switch, and a mental states flip-flop.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"39-51"},"PeriodicalIF":0.0,"publicationDate":"2015-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2014.2387271","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Can Real-Time, Adaptive Human–Robot Motor Coordination Improve Humans' Overall Perception of a Robot?"
Qiming Shen, K. Dautenhahn, J. Saunders, H. Kose-Bagci
IEEE Transactions on Autonomous Mental Development, vol. 7, pp. 52-64, published online 2015-01-30. DOI: 10.1109/TAMD.2015.2398451
Abstract: Previous research on social interaction among humans suggests that interpersonal motor coordination helps to establish social rapport. Our research addresses the question of whether, in a human-humanoid interaction experiment, a person's overall perception of a robot can be improved by motor coordination behavior that allows the robot to adapt in real time to the person's behavior. A synchrony detection method based on information distance was adopted to realize real-time human-robot motor coordination: by measuring the behavioral synchrony between robot and human, it guided the humanoid robot to coordinate its movements to the person. Participant feedback indicated that most participants preferred interacting with the humanoid robot when it had the adaptive motor coordination capability. The results of this proof-of-concept study suggest that the motor coordination mechanism improved humans' overall perception of the humanoid robot. Together with our previous finding that humans actively coordinate their behaviors to a humanoid robot's behaviors, this study further supports the hypothesis that bidirectional motor coordination is a valid approach to facilitating adaptive human-humanoid interaction.
"Guest Editorial: Multimodal Modeling and Analysis Informed by Brain Imaging - Part 1"
Junwei Han, Tianming Liu, C. Guo, Deniz Erdoğmuş, J. Weng
IEEE Transactions on Autonomous Mental Development, vol. 7, pp. 158-161, 2015. DOI: 10.1109/TAMD.2015.2495698
(Editorial; no abstract.)