2009 IEEE 8th International Conference on Development and Learning: Latest Publications

Biomimetic Eye-Neck Coordination
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175535
M. Lopes, A. Bernardino, J. Santos-Victor, K. Rosander, C. Von Hofsten
Abstract: We describe a method for coordinating eye and neck motions in the control of a humanoid robotic head. Based on the characteristics of human oculomotor behavior, we formulate the target tracking problem in a state-space control framework and show that suitable controller gains can be either tuned manually with optimal control techniques or learned from bio-mechanical data recorded from newborn subjects. The basic controller relies on eye-neck proprioceptive feedback. In biological systems, vestibular signals and target prediction compensate for external motions and allow target tracking with low lag. We provide ways to integrate inertial and prediction signals into the basic control architecture whenever these are available. We demonstrate the ability of the method to replicate the behavior of subjects of different ages and show results obtained through a real-time implementation on a humanoid platform.
Citations: 21
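The eye-neck coordination scheme above can be illustrated with a toy state-space sketch: a fast eye gain and a slow neck gain both act on the gaze error, and the eye drifts back to center as the neck takes over. All gains and time constants below are invented for illustration; they are not the gains the paper tunes with optimal control or learns from infant data.

```python
# Toy sketch of eye-neck gaze tracking with proprioceptive feedback.
# Gains are hypothetical, chosen only to show the qualitative behavior.

def track(target, steps=500, dt=0.01, k_eye=8.0, k_neck=1.5, recenter=0.8):
    eye, neck = 0.0, 0.0
    for _ in range(steps):
        gaze_error = target - (eye + neck)   # gaze = eye angle + neck angle
        eye += dt * k_eye * gaze_error       # fast eye correction
        neck += dt * k_neck * gaze_error     # slow neck correction
        eye -= dt * recenter * eye           # eye re-centers as the neck takes over
    return eye, neck

eye, neck = track(0.5)  # gaze settles near the target; neck gradually unloads the eye
```

With these illustrative gains the combined gaze converges to the target within a few percent while the neck slowly absorbs the offset, mirroring the fast-eye/slow-neck division of labor the abstract describes.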
Learning and performing place-based mobile manipulation
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175510
F. Stulp, Andreas Fedrizzi, M. Beetz
Abstract: What it means for an object to be 'within reach' depends very much on the morphology and skills of a robot. In this paper, we enable a mobile manipulation robot to learn a concept of PLACE from which successful manipulation is possible through trial-and-error interaction with the environment. Due to this developmental approach, PLACE is very much grounded in observed experience, and takes the hardware and skills of the robot into account. During task-execution, this model is used to determine optimal grasp places in a least-commitment approach. This PLACE takes into account uncertainties in both robot and target object positions, and leads to more robust behavior.
Citations: 21
Fuzzy-GIST for emotion recognition in natural scene images
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175518
Qing Zhang, M. Le
Abstract: Modeling the emotion evoked by natural scenes is a challenging issue. In this paper, we propose a novel scheme for analyzing the emotion reflected by a natural scene, considering the human emotional status. Based on the concept of the original GIST, we developed fuzzy-GIST to build the emotional feature space. Given the relationship between emotional factors and image characteristics, L*C*H* color and orientation information are chosen to study the relationship between low-level human emotions and image characteristics. Since the visual features need to be analyzed at a semantic level, we incorporate fuzzy concepts to extract features with semantic meanings. Moreover, we process emotional electroencephalography (EEG) signals using fuzzy logic based on possibility theory, rather than the widely used conventional probability theory, to generate semantic features of human emotions. Fuzzy-GIST combines semantic visual information with linguistic EEG features and represents the emotional gist of a natural scene at a semantic level. The emotion evoked by an image is predicted from fuzzy-GIST using a support vector machine, and the mean opinion score (MOS) is used for performance evaluation of the proposed scheme. Experimental results show that positive and negative emotions can be recognized with high accuracy on a given dataset.
Citations: 8
Robot navigation and manipulation based on a predictive associative memory
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175519
S. Jockel, Mateus Mendes, Jianwei Zhang, A. Coimbra, M. Crisostomo
Abstract: Proposed in the 1980s, the Sparse Distributed Memory (SDM) is a model of an associative memory based on the properties of a high dimensional binary space. This model has received some attention from researchers of different areas and has been improved over time. However, a few problems have to be solved when using it in practice, due to the non-randomness characteristics of the actual data. We tested an SDM using different forms of encoding the information, and in two different domains: robot navigation and manipulation. Our results show that the performance of the SDM in the two domains is affected by the way the information is actually encoded, and may be improved by some small changes in the model.
Citations: 14
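As background for the entry above, the SDM model the paper builds on can be sketched in a few lines: random binary hard locations, with every location inside a Hamming radius of the query address activated for writes and reads. The location count, dimensionality, and radius below are illustrative defaults, not the authors' settings or encodings.

```python
import numpy as np

class SDM:
    """Minimal Sparse Distributed Memory sketch: random binary hard
    locations; all locations within a Hamming radius of the address
    are activated for both writing and reading."""

    def __init__(self, n_locations=2000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, (n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, addr):
        # Boolean mask of hard locations within the Hamming radius
        return np.sum(self.addresses != addr, axis=1) <= self.radius

    def write(self, addr, data):
        # Increment counters where data bit is 1, decrement where 0
        self.counters[self._active(addr)] += np.where(data == 1, 1, -1)

    def read(self, addr):
        # Majority vote over the counters of the active locations
        return (self.counters[self._active(addr)].sum(axis=0) >= 0).astype(int)

rng = np.random.default_rng(1)
mem = SDM()
pattern = rng.integers(0, 2, 256)
mem.write(pattern, pattern)                       # autoassociative storage
noisy = pattern.copy()
noisy[rng.choice(256, 20, replace=False)] ^= 1    # corrupt 20 of 256 bits
recalled = mem.read(noisy)                        # cleans up the noisy cue
```

The usage at the bottom shows the autoassociative case: reading with a cue 20 bits away from the stored pattern still recovers it, because the activated locations overlap heavily with those written.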
A theoretical framework for transfer of knowledge across modalities in artificial and biological systems
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175515
Francesco Orabona, B. Caputo, A. Fillbrandt, F. Ohl
Abstract: Learning from sensory patterns associated with different kinds of sensors is paramount for biological systems, as it permits them to cope with complex environments where events rarely appear twice in the same way. In this paper we want to investigate how perceptual categories formed in one modality can be transferred to another modality in biological and artificial systems. We first present a study on Mongolian gerbils that shows clear evidence of transfer of knowledge for a perceptual category from the auditory modality to the visual modality. We then introduce an algorithm that mimics the behavior of the rodents within the online learning framework. Experiments on simulated data produced promising results, showing the pertinence of our approach.
Citations: 6
Learning grasping affordances from local visual descriptors
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175529
L. Montesano, M. Lopes
Abstract: In this paper we study the learning of affordances through self-experimentation. We study the learning of local visual descriptors that anticipate the success of a given action executed upon an object. Consider, for instance, the case of grasping. Although graspable is a property of the whole object, the grasp action will only succeed if applied in the right part of the object. We propose an algorithm to learn local visual descriptors of good grasping points based on a set of trials performed by the robot. The method estimates the probability of a successful action (grasp) based on simple local features. Experimental results on a humanoid robot illustrate how our method is able to learn descriptors of good grasping points and to generalize to novel objects based on prior experience.
Citations: 103
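The core idea above, estimating the probability of grasp success from simple local features collected over robot trials, can be illustrated with a generic logistic-regression sketch on synthetic data. The features, labels, weights, and learning rate here are all invented for illustration; this is not the paper's actual model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for trial data: each row is a local descriptor at a
# candidate grasp point; each label records whether the grasp succeeded.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                     # hypothetical local features
true_w = np.array([2.0, -1.0, 0.5, 0.0])          # hidden "ground truth"
y = (sigmoid(X @ true_w) > rng.random(500)).astype(float)  # noisy outcomes

# Fit P(success | features) by batch gradient ascent on the log-likelihood.
w = np.zeros(4)
for _ in range(2000):
    p = sigmoid(X @ w)
    w += 0.1 * X.T @ (y - p) / len(y)

p_new = sigmoid(X @ w)   # predicted success probability per grasp point
```

Ranking candidate grasp points by `p_new` then gives a success-probability map of the kind the abstract describes, grounded entirely in the robot's own trial outcomes.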
Cultural differences in relational knowledge
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175506
M. Kuwabara, Linda B. Smith
Abstract: Much research in developmental psychology and cognitive development presumes a universal developmental trend that is independent of culture. One such trend, from object to relational knowledge, is seen over and over. However, most of this research is based on the study of children and individuals from Western cultures. This paper considers the possibility that this developmental trend might differ in different cultures.
Citations: 2
Evolving predictive visual motion detectors
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175524
Jonas Ruesch, A. Bernardino
Abstract: The geometrical organization of a visual sensor is of major importance for the later processing of sensed stimuli. We present an approach to evolve artificial visual detectors which adapt their size and orientation according to the experienced sensory stimulation. The criterion for the introduced optimization method is given by a Reichardt correlation measure on the input signal. Under the described conditions, the visual receptors organize their spatial arrangement following the average luminance flow recorded by the sensor over time.
Citations: 4
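The Reichardt correlation measure used as the optimization criterion above can be sketched as the classic two-arm correlator: each arm multiplies a delayed copy of one receptor's signal with the neighboring receptor's undelayed signal, and the difference of the two arms is direction-selective. The signal shapes and delay below are illustrative, not taken from the paper.

```python
import numpy as np

def reichardt(signal_a, signal_b, delay=5):
    """Two-arm Reichardt correlator: positive output for motion from
    receptor A towards receptor B, negative for the opposite direction."""
    a_delayed = np.roll(signal_a, delay)   # delay line on receptor A
    b_delayed = np.roll(signal_b, delay)   # delay line on receptor B
    return np.mean(a_delayed * signal_b - b_delayed * signal_a)

# A moving sinusoidal grating reaches receptor B `delay` samples after A.
t = np.arange(200)
at_a = np.sin(0.2 * t)
at_b = np.sin(0.2 * (t - 5))

out_right = reichardt(at_a, at_b)   # preferred direction: positive output
out_left = reichardt(at_b, at_a)    # reversed direction flips the sign
```

Because the stimulus delay matches the correlator delay, the first arm correlates strongly and the output is positive; swapping the inputs reverses the apparent motion direction and exactly negates the output.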
Towards Computational Developmental Model based on Synthetic Approaches
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175544
M. Asada, K. Hosoda, H. Ishiguro, Y. Kuniyoshi, T. Inui
Abstract: Cognitive developmental robotics (CDR) [1] has been tackling the issue of how human cognitive functions develop by means of a synthetic approach that developmentally constructs cognitive functions. "Physical embodiment" is the core idea of CDR; it is revisited here to make its role clearer, namely, to enable information structuring through interactions with the environment, including other agents. This paper attempts to reveal the developmental process of human cognitive functions from the viewpoint of the synthetic approach, working towards a computational developmental model of this process, with brief introductions to existing CDR approaches.
Citations: 3
Towards a computational model of Acoustic Packaging
Pub Date: 2009-06-05 | DOI: 10.1109/DEVLRN.2009.5175523
Lars Schillingmann, B. Wrede, K. Rohlfing
Abstract: In order to learn from and interact with humans, robots need to understand actions and make use of language in social interactions. The use of language for the learning of actions was emphasized by Hirsh-Pasek & Golinkoff, who introduced the idea of Acoustic Packaging [1]. Accordingly, it has been suggested that acoustic information, typically in the form of narration, overlaps with action sequences and provides infants with a bottom-up guide to attend to relevant events and to find structure within them. Following the promising results of Brand & Tapscott, who found that infants packaged sequences together when acoustic narration was provided, in this paper we take a first step towards a computational model of the multimodal interplay of action and language in tutoring situations. For our purposes, we understand events as temporal intervals, which have to be segmented in both the visual and the acoustic signal in order to perform Acoustic Packaging. For the visual modality, we measure the amount of motion over time with a motion history image based approach and segment the visual signal by detecting local minima in the amount of motion. For the acoustic modality, we use a phoneme recognizer, which currently segments the acoustic signal into speech and non-speech intervals. Our Acoustic Packaging algorithm merges the segments from both modalities based on temporal synchrony. First evaluation results show that Acoustic Packaging can provide a meaningful segmentation of tutoring behavior.
Citations: 14
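The merging step described above, combining motion segments and speech intervals by temporal synchrony, can be sketched with plain interval overlap. The interval data and the exact merge rule are invented for illustration; the paper's segmentation comes from motion minima and a phoneme recognizer, not from hand-written intervals.

```python
# Hypothetical sketch of Acoustic Packaging's synchrony-based merging:
# motion segments and speech intervals are (start, end) pairs in seconds;
# a motion segment with temporally overlapping speech becomes a package
# spanning both modalities.

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def acoustic_packages(motion_segments, speech_intervals):
    packages = []
    for m in motion_segments:
        synced = [s for s in speech_intervals if overlaps(m, s)]
        if synced:  # package spans the action and its synchronous narration
            start = min([m[0]] + [s[0] for s in synced])
            end = max([m[1]] + [s[1] for s in synced])
            packages.append((start, end))
    return packages

motion = [(0.0, 1.2), (1.5, 2.4), (3.0, 4.0)]
speech = [(0.2, 1.0), (2.2, 2.6)]          # no narration over the last action
pkgs = acoustic_packages(motion, speech)
```

With these toy intervals, the first two actions are packaged with their overlapping speech while the unnarrated third action yields no package, matching the bottom-up guidance role the abstract attributes to narration.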