{"title":"An Autonomous Social Robot in Fear","authors":"Álvaro Castro González, M. Malfaz, M. Salichs","doi":"10.1109/TAMD.2012.2234120","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2234120","url":null,"abstract":"Currently artificial emotions are being extensively used in robots. Most of these implementations are employed to display affective states. Nevertheless, their use to drive the robot's behavior is not so common. This is the approach followed by the authors in this work. In this research, emotions are not treated in general but individually. Several emotions have been implemented in a real robot, but in this paper, authors focus on the use of the emotion of fear as an adaptive mechanism to avoid dangerous situations. In fact, fear is used as a motivation which guides the behavior during specific circumstances. Appraisal of fear is one of the cornerstones of this work. A novel mechanism learns to identify the harmful circumstances which cause damage to the robot. Hence, these circumstances elicit the fear emotion and are known as fear releasers. In order to prove the advantages of considering fear in our decision making system, the robot's performance with and without fear are compared and the behaviors are analyzed. The robot's behaviors exhibited in relation to fear are natural, i.e., the same kind of behaviors can be observed on animals. Moreover, they have not been preprogrammed, but learned by real inter actions in the real world. All these ideas have been implemented in a real robot living in a laboratory and interacting with several items and people.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"135-151"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2234120","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Redundant Neural Vision Systems—Competing for Collision Recognition Roles","authors":"Shigang Yue, F. Rind","doi":"10.1109/TAMD.2013.2255050","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2255050","url":null,"abstract":"Ability to detect collisions is vital for future robots that interact with humans in complex visual environments. Lobula giant movement detectors (LGMD) and directional selective neurons (DSNs) are two types of identified neurons found in the visual pathways of insects such as locusts. Recent modeling studies showed that the LGMD or grouped DSNs could each be tuned for collision recognition. In both biological and artificial vision systems, however, which one should play the collision recognition role and the way the two types of specialized visual neurons could be functioning together are not clear. In this modeling study, we compared the competence of the LGMD and the DSNs, and also investigate the cooperation of the two neural vision systems for collision recognition via artificial evolution. We implemented three types of collision recognition neural subsystems - the LGMD, the DSNs and a hybrid system which combines the LGMD and the DSNs subsystems together, in each individual agent. A switch gene determines which of the three redundant neural subsystems plays the collision recognition role. We found that, in both robotics and driving environments, the LGMD was able to build up its ability for collision recognition quickly and robustly therefore reducing the chance of other types of neural networks to play the same role. The results suggest that the LGMD neural network could be the ideal model to be realized in hardware for collision recognition.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"51 1","pages":"173-186"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2255050","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Brain-Like Emergent Temporal Processing: Emergent Open States","authors":"J. Weng, M. Luciw, Qi Zhang","doi":"10.1109/TAMD.2013.2258398","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2258398","url":null,"abstract":"Informed by brain anatomical studies, we present the developmental network (DN) theory on brain-like temporal information processing. The states of the brain are at its effector end, emergent and open. A finite automaton (FA) is considered an external symbolic model of brain's temporal behaviors, but the FA uses handcrafted states and is without “internal” representations. The term “internal” means inside the network “skull.” Using action-based state equivalence and the emergent state representations, the time driven processing of DN performs state-based abstraction and state-based skill transfer. Each state of DN, as a set of actions, is openly observable by the external environment (including teachers). Thus, the external environment can teach the state at every frame time. Through incremental learning and autonomous practice, the DN lumps (abstracts) infinitely many temporal context sequences into a single equivalent state. Using this state equivalence, a skill learned under one sequence is automatically transferred to other infinitely many state-equivalent sequences in the future without the need for explicit learning. Two experiments are shown as examples: The experiments for video processing showed almost perfect recognition rates in disjoint tests. The experiment for text language, using corpora from the Wall Street Journal, treated semantics and syntax in a unified interactive way.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"89-116"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2258398","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reaching for the Unreachable: Reorganization of Reaching with Walking.","authors":"Beata J Grzyb, Linda B Smith, Angel P Del Pobil","doi":"10.1109/TAMD.2013.2255872","DOIUrl":"10.1109/TAMD.2013.2255872","url":null,"abstract":"<p><p>Previous research suggests that reaching and walking behaviors may be linked developmentally as reaching changes at the onset of walking. Here we report new evidence on an apparent loss of the distinction between the reachable and nonreachable distances as children start walking. The experiment compared nonwalkers, walkers with help, and independent walkers in a reaching task to targets at varying distances. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects the first time it was presented. Nonwalkers, however, reached less on the subsequent trials showing clear adjustment of their reaching decisions with the failures. On the contrary, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control and near/far visual space. We propose a reward-mediated model implemented on a NAO humanoid robot that replicates the main results from our study showing an increase in reaching attempts to nonreachable distances after the onset of walking.</p>","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 2","pages":"162-172"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4476390/pdf/nihms692559.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"33419586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Coordinating Role of Language in Real-Time Multimodal Learning of Cooperative Tasks","authors":"Maxime Petit, S. Lallée, Jean-David Boucher, G. Pointeau, Pierrick Cheminade, D. Ognibene, E. Chinellato, U. Pattacini, I. Gori, Uriel Martinez-Hernandez, Hector Barron-Gonzalez, Martin Inderbitzin, Andre L. Luvizotto, V. Vouloutsi, Y. Demiris, G. Metta, Peter Ford Dominey","doi":"10.1109/TAMD.2012.2209880","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2209880","url":null,"abstract":"One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a “shared plan”—which defines the interlaced actions of the two cooperating agents—in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan, based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real-time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"5 1","pages":"3-17"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2209880","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum to \"Human-Recognizable Robotic Gestures\" [Dec 12 305-314]","authors":"J. Cabibihan, W. So, S. Pramanik","doi":"10.1109/TAMD.2013.2251711","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2251711","url":null,"abstract":"In the above-named article [ibid., vol. 4, no. 4, pp. 305-314, Dec. 2012], the current affiliation within the biography of J.-J. Cabibihan was mistakenly written as Gemalto Singapore, Singapore. That is the current affiliation of S. Pramanik. Dr. Cabibihan's current affiliation is the National University of Singapore.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"879 7","pages":"85"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2013.2251711","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72433429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Information Acquisition for Multitasking Scenarios in Dynamic Environments","authors":"Cem Karaoguz, Tobias Rodemann, B. Wrede, C. Goerick","doi":"10.1109/TAMD.2012.2226241","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2226241","url":null,"abstract":"Real world environments are so dynamic and unpredictable that a goal-oriented autonomous system performing a set of tasks repeatedly never experiences the same situation even though the task routines are the same. Hence, manually designed solutions to execute such tasks are likely to fail due to such variations. Developmental approaches seek to solve this problem by implementing local learning mechanisms to the systems that can unfold capabilities to achieve a set of tasks through interactions with the environment. However, gathering all the information available in the environment for local learning mechanisms to process is hardly possible due to limited resources of the system. Thus, an information acquisition mechanism is necessary to find task-relevant information sources and applying a strategy to update the knowledge of the system about these sources efficiently in time. A modular systems approach may provide a useful structured and formalized basis for that. In such systems different modules may request access to the constrained system resources to acquire information they are tuned for. We propose a reward-based learning framework that achieves an efficient strategy for distributing the constrained system resources among modules to keep relevant environmental information up to date for higher level task learning and executing mechanisms in the system. We apply the proposed framework to a visual attention problem in a system using the iCub humanoid in simulation.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"8 1","pages":"46-61"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2226241","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62760991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey of the Ontogeny of Tool Use: From Sensorimotor Experience to Planning","authors":"Frank Guerin, N. Krüger, D. Kraft","doi":"10.1109/TAMD.2012.2209879","DOIUrl":"https://doi.org/10.1109/TAMD.2012.2209879","url":null,"abstract":"In this paper, we review current knowledge on tool use development in infants in order to provide relevant information to cognitive developmental roboticists seeking to design artificial systems that develop tool use abilities. This information covers: 1) sketching developmental pathways leading to tool use competences; 2) the characterization of learning and test situations; 3) the crystallization of seven mechanisms underlying the developmental process; and 4) the formulation of a number of challenges and recommendations for designing artificial systems that exhibit tool use abilities in complex contexts.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"23 1","pages":"18-45"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TAMD.2012.2209879","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62761084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial - TAMD Outstanding Paper Award and Open Access Publication Established","authors":"Zhengyou Zhang","doi":"10.1109/TAMD.2013.2251691","DOIUrl":"https://doi.org/10.1109/TAMD.2013.2251691","url":null,"abstract":"In an editorial of the IEEE Transactions on Autonomous Mental Development (TAMD) (Vol. 4, No. 3), the author noted the progress made in establishing the IEEE TAMD Outstanding Paper Award to recognize annually outstanding papers published in the TRANSACTIONS. He is pleased to report that the IEEE Technical Activities Board (TAB) has approved the motion and that the IEEE TAMD Outstanding Paper Award will be formally established in 2013. This is the first year the Award will be bestowed. For the current round of competition, any paper published in 2011 (Volume 3) is eligible for consideration. The prize includes a US$1000 honorarium, to be split equally among coauthors, and certificates to the author and coauthors of the selected paper. Please note, no self-nomination is allowed. On another topic, IEEE TAMD is a hybrid transactions allowing either traditional publications or author-pay Open Access (OA) publications. The OA option, if selected, enables unrestricted public access to the article via IEEE Xplore. The OA option will be offered to the author at the time the manuscript is submitted. If selected, the OA fee must be paid before the article is published in the TRANSACTIONS. IEEE currently offers the discounted rate of US$1750 per article. The traditional option, if selected, enables access to all qualified subscribers and purchasers via IEEE Xplore. For the traditional option, no OA payment is required. The IEEE peer review standard of excellence is applied consistently to all submissions. All accepted articles will be included in the print issuemailed to subscribers.","PeriodicalId":49193,"journal":{"name":"IEEE Transactions on Autonomous Mental Development","volume":"99 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2013-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76522220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}