{"title":"Predicting Purchase Decisions Based on Spatio-Temporal Functional MRI Features Using Machine Learning","authors":"Yunzhi Wang, V. Chattaraman, Hyejeong Kim, G. Deshpande","doi":"10.1109/TAMD.2015.2434733","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2434733","url":null,"abstract":"Machine learning algorithms allow us to directly predict brain states based on functional magnetic resonance imaging (fMRI) data. In this study, we demonstrate the application of this framework to neuromarketing by predicting purchase decisions from spatio-temporal fMRI data. Twenty-four subjects were shown product images and asked to decide whether or not to buy each product while undergoing fMRI scanning. Eight brain regions that were significantly activated during decision-making were identified using a general linear model. Time series were extracted from these regions and input into a recursive cluster elimination based support vector machine (RCE-SVM) for predicting purchase decisions. This method iteratively eliminates unimportant features until only the most discriminative features, which give maximum accuracy, remain. We were able to predict purchase decisions with 71% accuracy, which is higher than previously reported. In addition, we found that the most discriminative features were in signals from medial and superior frontal cortices. 
Therefore, this approach provides a reliable framework for using fMRI data to predict purchase-related decision-making as well as infer its neural correlates.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"248-255"},"PeriodicalIF":0.0,"publicationDate":"2015-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2434733","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks","authors":"Wei-Long Zheng, Bao-Liang Lu","doi":"10.1109/TAMD.2015.2431497","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2431497","url":null,"abstract":"To investigate critical frequency bands and channels, this paper introduces deep belief networks (DBNs) to construct EEG-based emotion recognition models for three emotions: positive, neutral, and negative. We develop an EEG dataset acquired from 15 subjects. Each subject performs the experiments twice, a few days apart. DBNs are trained with differential entropy features extracted from multichannel EEG data. We examine the weights of the trained DBNs and investigate the critical frequency bands and channels. Four different profiles of 4, 6, 9, and 12 channels are selected. The recognition accuracies of these four profiles are relatively stable, with a best accuracy of 86.65%, which is even better than that of the original 62 channels. The critical frequency bands and channels determined using the weights of the trained DBNs are consistent with existing observations. In addition, our experimental results show that neural signatures associated with different emotions do exist and that they share commonality across sessions and individuals. We compare the performance of deep models with shallow models. 
The average accuracies of DBN, SVM, LR, and KNN are 86.08%, 83.99%, 82.70%, and 72.60%, respectively.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"73 1","pages":"162-175"},"PeriodicalIF":0.0,"publicationDate":"2015-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2431497","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Strikes the Strings of Your Heart?-Multi-Label Dimensionality Reduction for Music Emotion Analysis via Brain Imaging","authors":"Yang Liu, Yan Liu, Chaoguang Wang, Xiaohong Wang, Pei-Yuan Zhou, Gino Yu, Keith C. C. Chan","doi":"10.1109/TAMD.2015.2429580","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2429580","url":null,"abstract":"After 20 years of extensive study in psychology, some musical factors have been identified that can evoke certain kinds of emotions. However, the underlying mechanism of the relationship between music and emotion remains unclear. This paper seeks the genuine correlates of music emotion through a systematic and quantitative framework. The task is formulated as a dimensionality reduction problem, which seeks the complete and compact feature set with intrinsic correlates for the given objectives. Since a song generally elicits more than one emotion, we explore dimensionality reduction techniques for multi-label classification. One challenging problem is that hard labels cannot represent the intensity of an emotion, and it is also difficult to ask subjects to quantify their feelings. This work utilizes the electroencephalography (EEG) signal to address this challenge. A learning scheme called EEG-based emotion smoothing (E²S) and a bilinear multi-emotion similarity preserving embedding (BME-SPE) algorithm are proposed. We validate the effectiveness of the proposed framework on the standard dataset CAL-500. Several influential correlates have been identified, and classification via those correlates achieves good performance. 
We build a Chinese music dataset according to the identified correlates and find that the music from different cultures may share similar emotions.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"31 1","pages":"176-188"},"PeriodicalIF":0.0,"publicationDate":"2015-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82958144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EEG-Based Perceived Tactile Location Prediction","authors":"Deng Wang, Yadong Liu, D. Hu, Gunnar Blohm","doi":"10.1109/TAMD.2015.2427581","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2427581","url":null,"abstract":"Previous studies have investigated the peripheral neural mechanisms implicated in tactile perception, but the neurophysiology of tactile spatial location perception in humans, which helps the brain orient the body and interact with its surroundings, is not well understood. In this paper, we use single-trial electroencephalogram (EEG) measurements to explore the perception of tactile stimuli on participants' right forearm, approximately equally spaced and centered on the body midline, two leftward and two rightward of the midline. An EEG-based signal analysis approach to predict the location of the tactile stimuli is proposed. Offline classification suggests that tactile location can be detected from single-trial EEG signals (a four-class location classifier achieves up to 96.76% accuracy) with a short response time (600 milliseconds after stimulus presentation). From a human-machine-interaction (HMI) point of view, this could be used to design a real-time reactive control machine for patients suffering from, e.g., hypoesthesia.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"342-348"},"PeriodicalIF":0.0,"publicationDate":"2015-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2427581","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Randomized Structural Sparsity-Based Support Identification with Applications to Locating Activated or Discriminative Brain Areas: A Multicenter Reproducibility Study","authors":"Yilun Wang, Sheng Zhang, Junjie Zheng, Heng Chen, Huafu Chen","doi":"10.1109/TAMD.2015.2427341","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2427341","url":null,"abstract":"In this paper, we focus on locating the relevant or discriminative brain regions related to external stimuli or certain mental diseases, a task also called support identification, based on neuroimaging data. The main difficulty lies in the extremely high-dimensional voxel space and the relatively few training samples, which easily result in unstable brain region discovery (called feature selection in the context of pattern recognition). When the training samples come from different centers and have between-center variations, it is even harder to obtain a reliable and consistent result. Correspondingly, we revisit our recently proposed algorithm based on stability selection and structural sparsity. It is applied to multicenter MRI data analysis for the first time. A consistent and stable result is achieved across different centers despite the between-center data variation, while many other state-of-the-art methods, such as the two-sample t-test, fail. Moreover, we have empirically shown that the performance of this algorithm is robust and insensitive to several of its key parameters. 
In addition, the support identification results on both functional MRI and structural MRI are interpretable and can serve as potential biomarkers.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"287-300"},"PeriodicalIF":0.0,"publicationDate":"2015-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2427341","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structural Bootstrapping—A Novel, Generative Mechanism for Faster and More Efficient Acquisition of Action-Knowledge","authors":"F. Wörgötter, C. Geib, M. Tamosiunaite, E. Aksoy, J. Piater, Hanchen Xiong, A. Ude, B. Nemec, D. Kraft, N. Krüger, Mirko Wächter, T. Asfour","doi":"10.1109/TAMD.2015.2427233","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2427233","url":null,"abstract":"Humans, but also robots, learn to improve their behavior. Without existing knowledge, learning either needs to be explorative and thus slow, or, to be more efficient, it needs to rely on supervision, which may not always be available. However, once some knowledge base exists, an agent can make use of it to improve learning efficiency and speed. This happens in children at around the age of three, when they very quickly begin to assimilate new information by making guided guesses about how it fits their prior knowledge. This is a very efficient generative learning mechanism in the sense that existing knowledge is generalized into as-yet unexplored, novel domains. So far, generative learning has not been employed for robots, and robot learning remains a slow and tedious process. The goal of the current study is to devise, for the first time, a general framework for a generative process that improves learning and can be applied at all levels of the robot's cognitive architecture. To this end, we introduce the concept of structural bootstrapping, borrowed and modified from child language acquisition, to define a probabilistic process that uses existing knowledge together with new observations to supplement our robot's database with missing information about planning-, object-, and action-relevant entities. 
In a kitchen scenario, we use the example of making batter by pouring and mixing two components and show that the agent can efficiently acquire new knowledge about planning operators, objects, and the motor patterns required for stirring through structural bootstrapping. Benchmarks are also presented that demonstrate how structural bootstrapping improves performance.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"140-154"},"PeriodicalIF":0.0,"publicationDate":"2015-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2427233","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Adaptive Motion-Onset VEP-Based Brain-Computer Interface","authors":"Rui Zhang, Peng Xu, R. Chen, Teng Ma, Xulin Lv, Fali Li, Peiyang Li, Tiejun Liu, D. Yao","doi":"10.1109/TAMD.2015.2426176","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2426176","url":null,"abstract":"The motion-onset visual evoked potential (mVEP) has recently been proposed for EEG-based brain-computer interface (BCI) systems. It is a scalp potential evoked by visual motion, typically composed of three components: P1, N2, and P2. Several repetitions are usually needed to increase the signal-to-noise ratio (SNR) of the mVEP, but more repetitions cost more time and thus lower efficiency. Considering the fluctuation of the subject's state over time, adapting the number of repetitions to the subject's real-time signal quality is important for increasing the communication efficiency of mVEP-based BCIs. In this paper, the amplitudes of the three mVEP components are used to build a dynamic stopping criterion based on the practical information transfer rate (PITR) computed from the training data. During online testing, stimulus repetition stopped once the real-time signals exceeded the predefined threshold, after which a new cycle of stimulation began. Evaluation tests showed that the proposed dynamic stopping strategy significantly improves the communication efficiency of mVEP-based BCIs, raising the average PITR from 14.5 bit/min with the traditional fixed-repetition method to 20.8 bit/min. 
This improvement is valuable in real-life BCI applications, where communication efficiency is critical.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"349-356"},"PeriodicalIF":0.0,"publicationDate":"2015-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2426176","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Staged Development of Robot Skills: Behavior Formation, Affordance Learning and Imitation with Motionese","authors":"Emre Ugur, Y. Nagai, E. Sahin, Erhan Öztop","doi":"10.1109/TAMD.2015.2426192","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2426192","url":null,"abstract":"Inspired by infant development, we propose a three-stage developmental framework for an anthropomorphic robot manipulator. In the first stage, the robot is initialized with a basic reach-and-enclose-on-contact movement capability, and discovers a set of behavior primitives by exploring its movement parameter space. In the next stage, the robot exercises the discovered behaviors on different objects and learns the caused effects, effectively building a library of affordances and associated predictors. Finally, in the third stage, the learned structures and predictors are used to bootstrap complex imitation and action learning with the help of a cooperative tutor. The main contribution of this paper is the realization of an integrated developmental system where the structures emerging from the sensorimotor experience of an interacting real robot are used as the sole building blocks of the subsequent stages that generate increasingly more complex cognitive capabilities. The proposed framework shares a number of features with infant sensorimotor development. 
Furthermore, the findings obtained from the self-exploration and motionese guided human-robot interaction experiments allow us to reason about the underlying mechanisms of simple-to-complex sensorimotor skill progression in human infants.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"119-139"},"PeriodicalIF":0.0,"publicationDate":"2015-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2426192","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Action Priors for Learning Domain Invariances","authors":"Benjamin Rosman, S. Ramamoorthy","doi":"10.1109/TAMD.2015.2419715","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2419715","url":null,"abstract":"An agent tasked with solving a number of different decision making problems in similar environments has an opportunity to learn over a longer timescale than each individual task. Through examining solutions to different tasks, it can uncover behavioral invariances in the domain by identifying actions to be prioritized in local contexts, invariant to task details. This information greatly increases the speed of solving new problems. We formalise this notion as action priors, defined as distributions over the action space, conditioned on environment state, and show how these can be learnt from a set of value functions. We apply action priors in the setting of reinforcement learning to bias action selection during exploration. Aggressive use of action priors performs context-based pruning of the available actions, thus reducing the complexity of lookahead during search. We additionally define action priors over observation features, rather than states, which provides further flexibility and generalizability, with the additional benefit of enabling feature selection. Action priors are demonstrated in experiments in a simulated factory environment and a large random graph domain, and show significant speed-ups in learning new tasks. 
Furthermore, we argue that this mechanism is cognitively plausible, and is compatible with findings from cognitive psychology.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"107-118"},"PeriodicalIF":0.0,"publicationDate":"2015-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2419715","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Probabilistic Concept Web on a Humanoid Robot","authors":"H. Çelikkanat, Guner Orhan, Sinan Kalkan","doi":"10.1109/TAMD.2015.2418678","DOIUrl":"https://doi.org/10.1109/TAMD.2015.2418678","url":null,"abstract":"It is now widely accepted that concepts and conceptualization are key elements towards achieving cognition on a humanoid robot. An important problem on this path is the grounded representation of individual concepts and the relationships between them. In this article, we propose a probabilistic method based on Markov Random Fields to model a concept web on a humanoid robot in which individual concepts and the relations between them are captured. In this web, each individual concept is represented using a prototype-based conceptualization method that we proposed in our earlier work. Relations between concepts are linked to the co-occurrences of concepts in interactions. By conveying input from perception, action, and language, the concept web forms rich, structured, grounded information about objects, their affordances, words, etc. We demonstrate that, given an interaction, a word, or the perceptual information from an object, the corresponding concepts in the web are activated, much the same way as they are in humans. 
Moreover, we show that the robot can use these activations in its concept web for several tasks to disambiguate its understanding of the scene.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"7 1","pages":"92-106"},"PeriodicalIF":0.0,"publicationDate":"2015-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2015.2418678","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62763438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}