Latest publications: 2008 7th IEEE International Conference on Development and Learning

What prosody tells infants to believe
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640842
E.S. Kim, K. Gold, B. Scassellati
Abstract: We examined whether evidence for prosodic signals about shared belief can be quantitatively found within the acoustic signal of infant-directed speech. Two transcripts of infant-directed speech for infants aged 1;4 and 1;6 were labeled with distinct speaker intents to modify shared beliefs, based on Pierrehumbert and Hirschberg's theory of the meaning of prosody [1]. Acoustic predictions were made from intent labels first within a simple single-tone model that reflected only whether the speaker intended to add a word's information to the discourse (high tone, H*) or not (low tone, L*). We also predicted pitch within a more complicated five-category model that added intents to suggest a word as one of several possible alternatives (L*+H), a contrasting alternative (L+H*), or something about which the listener should make an inference (H*+L). The acoustic signal was then manually segmented and automatically classified based solely on whether the pitches at the beginning, end, and peak intensity points of stressed syllables in salient words were closer to the utterance's pitch minimum or maximum on a log scale. Evidence supporting our intent-based pitch predictions was found for L*, H*, and L*+H accents, but not for L+H* or H*+L. No evidence was found to support the hypothesis that infant-directed speech simplifies two-tone into single-tone pitch accents.
Citations: 7
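The classification rule described in this abstract (label a pitch sample high or low by whether it lies closer to the utterance's pitch maximum or minimum on a log scale) is concrete enough to sketch. Below is a minimal Python rendering; the function names, the Hz values, and the (onset, peak, offset) argument convention are illustrative assumptions, not details taken from the paper.

```python
import math

def classify_tone(f0_hz, utt_min_hz, utt_max_hz):
    """Label a pitch sample H or L by whether it lies closer to the
    utterance's pitch maximum or minimum on a log (semitone-like) scale."""
    log_f0 = math.log(f0_hz)
    d_min = log_f0 - math.log(utt_min_hz)
    d_max = math.log(utt_max_hz) - log_f0
    return "H" if d_max < d_min else "L"

def classify_accent(onset_hz, peak_hz, offset_hz, utt_min_hz, utt_max_hz):
    """Map the (onset, intensity-peak, offset) pitch samples of a stressed
    syllable to a tone triple, e.g. ('L', 'H', 'H')."""
    return tuple(classify_tone(f, utt_min_hz, utt_max_hz)
                 for f in (onset_hz, peak_hz, offset_hz))

# Example: a rising accent in an utterance spanning 120-320 Hz.
print(classify_accent(140.0, 290.0, 300.0, 120.0, 320.0))  # ('L', 'H', 'H')
```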
Multimodal joint attention through cross facilitative learning based on μX principle
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640834
Y. Yoshikawa, T. Nakano, M. Asada, H. Ishiguro
Abstract: Simultaneous learning of multiple functions is a fundamental issue not only for designing intelligent robots but also for understanding the human cognitive developmental process, since we humans do so in our daily lives yet do not know how. Drawing an analogy to a well-known bias in child language development, we propose the mutual exclusivity selection principle (μX principle) for learning multi-modal mappings: selecting the more mutually exclusive output yields experiences that disambiguate underdeveloped complementary mappings. The μX principle is applied to multi-modal joint attention with utterances for lexicon acquisition, and is synthetically modeled at both intra- and inter-module levels of output. Through a series of computer simulations, we analyze the effects of the μX principle on mutual facilitation in learning multiple functions and on robustness against errors in segmenting observations. Finally, the correspondence of the synthesized development to an infant's is discussed, based on a simulation with careful caregiver behavior.
Citations: 16
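The abstract states the μX principle only informally (select the more mutually exclusive output so that experience disambiguates complementary mappings). One way to make that concrete, as a hedged sketch rather than the paper's actual model: keep an object-word association matrix, score each candidate word by its association with the current object minus the competing claims other objects have on it, and strengthen whichever mapping wins. The matrix sizes, the additive exclusivity score, and the Hebbian update below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_words = 5, 5
A = np.full((n_objects, n_words), 0.2)  # object-word association strengths

def mux_select(obj):
    """Pick the word whose selection is most 'mutually exclusive':
    strongly tied to this object, weakly claimed by all other objects."""
    others = A.sum(axis=0) - A[obj]      # competing claims on each word
    score = A[obj] - others              # exclusivity score (illustrative)
    return int(np.argmax(score))

def observe(obj, lr=0.1):
    """Strengthen the chosen mapping (simple Hebbian-style update)."""
    w = mux_select(obj)
    A[obj, w] += lr
    return w

for _ in range(50):                      # random exposure to objects
    observe(int(rng.integers(n_objects)))
# The exclusivity term breaks the initial symmetry, so A drifts toward a
# near one-to-one object-word lexicon, the qualitative effect the
# principle is meant to capture.
print(np.round(A, 2))
```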
Adaptive temporal difference learning of spatial memory in the water maze task
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640810
Erik E. Stone, M. Skubic, James M. Keller
Abstract: The Morris water maze task is a spatial memory task in which an association between cues from the environment and position must be learned in order to locate a hidden platform. This paper details the results of using a temporal difference (TD) learning approach to learn associations between perceptual states, which are discretized using a self-organizing map (SOM), and actions necessary for a robot to successfully locate the hidden platform in a "dry" version of the water maze task. Additionally, the adaptability of the temporal difference learning approach in non-stationary environments is explored.
Citations: 6
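The combination described here (a SOM that discretizes continuous percepts into states, with tabular TD learning over those states) is standard enough to sketch. The snippet below uses Q-learning as the TD method and epsilon-greedy exploration; the abstract does not specify these details, so the SOM size, percept dimensionality, and update rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_UNITS, N_ACTIONS = 25, 4        # SOM size and action set (assumed)
# SOM prototypes over a 3-D percept; illustrative random values here,
# a real SOM would be trained on the robot's percepts first.
som_w = rng.random((N_UNITS, 3))
Q = np.zeros((N_UNITS, N_ACTIONS))

def bmu(percept):
    """Discretize a continuous percept to the SOM's best-matching unit."""
    return int(np.argmin(((som_w - percept) ** 2).sum(axis=1)))

def td_update(percept, a, reward, next_percept, alpha=0.1, gamma=0.95):
    """One tabular Q-learning (TD) backup over SOM-indexed states."""
    s, s2 = bmu(percept), bmu(next_percept)
    target = reward + gamma * Q[s2].max()
    Q[s, a] += alpha * (target - Q[s, a])

def act(percept, eps=0.1):
    """Epsilon-greedy action selection; a higher eps re-adapts faster when
    the platform moves (the non-stationary case the paper explores)."""
    s = bmu(percept)
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[s]))
```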
Development of joint attention related actions based on reproducing interaction contingency
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640839
H. Sumioka, Y. Yoshikawa, M. Asada
Abstract: Understanding the developmental process of joint-attention-related actions, such as gaze following and alternation, is one of the essential issues for the emergence of communication. Previous synthetic studies have proposed learning methods for gaze following without any explicit instruction as a first step toward understanding the development of these actions. However, in those studies the robot was given a priori knowledge about which pairs of sensory information and actions should be associated. This paper addresses the development of social actions without such knowledge, using a learning mechanism that iteratively acquires social actions by finding and reproducing the contingency inherent in the interaction with a caregiver. A contingency measure based on transfer entropy is used to find, among possible candidates, appropriate pairs of variables for acquiring social actions. Reproducing the found contingency promotes a change in the contingent structure of the subsequent actions of caregiver and robot. In computer simulations of human-robot interaction, we examine what kinds of actions related to joint attention can be acquired, and in which order, by controlling the behavior of caregiver agents. The results show that the robot acquires joint-attention-related actions in an order that resembles an infant's development of joint attention.
Citations: 12
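The contingency measure named in the abstract, transfer entropy, has a standard definition for discrete time series: TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) · log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. A plug-in estimator in Python follows (history length 1; the paper's estimator and choice of variables may differ):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy TE(X -> Y) in bits for two discrete time series:
    how much knowing x_t reduces uncertainty about y_{t+1} beyond y_t."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y2, y1, x1), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y1, x1)]              # p(y2 | y1, x1)
        p_cond_self = pairs_yy[(y2, y1)] / singles_y[y1]  # p(y2 | y1)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# A caregiver signal y that copies the robot's signal x with one step of
# lag yields high TE(x -> y) and near-zero TE(y -> x).
rng = np.random.default_rng(2)
x = rng.integers(0, 2, 500)
y = np.roll(x, 1)
print(transfer_entropy(x, y), transfer_entropy(y, x))  # ~1.0 vs ~0.0 bits
```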
Body-part categories of early-learned verbs: Different granularities at different points in development
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640841
J. Maouene, S. Hidaka, L.B. Smith
Abstract: This paper builds on our previous finding that early verbs are strongly related to body parts. One piece of evidence for this relation is the strong word association among adults between common verbs and body parts. Although many common verbs are related to body parts, the prior evidence suggests that some verbs are strongly related to highly specific body regions (e.g., fingers) and others to larger or more diffuse regions (e.g., hand and arm). Here we ask whether this granularity or specificity in associations is related to age of acquisition. We examine the structure of adult associations of common verbs to body parts as a function of age of acquisition for 101 verbs normatively acquired between 16 and 30 months, and we propose a new analysis to look at the development of granularity over a short time period (16 months) and for a small number of verbs (101). We generated verb clusters based on body-part features and analysed how these body-part-based clusters account for variance in the age of acquisition (AoA) of verbs. By applying this analysis from the 50 earliest-learned verbs to the 50 latest-learned ones, we found several clusters relevant to AoA at different granularities of body parts. The results fit with growing behavioural and neuro-imaging results on the role of the body - and sensory-motor interactions in the world - in verb processing.
Citations: 2
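The analysis pipeline (cluster verbs by their body-part association features, then ask how much age-of-acquisition variance the clusters explain) can be sketched in a few lines. The data below are random stand-ins, the clustering is a deliberately coarse dominant-feature assignment rather than the paper's method, and the R² is the usual between-cluster share of total variance; with real association norms, the body-part list and AoA values would come from the study's datasets.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative stand-in data: 101 verbs x 6 body-part association strengths
# (hand, arm, leg, mouth, eyes, ear), plus a normative AoA in months.
n_verbs = 101
X = rng.random((n_verbs, 6))
aoa = rng.uniform(16, 30, n_verbs)

def cluster_by_dominant_part(X):
    """Assign each verb to the body part it is most strongly associated
    with (a coarse stand-in for the paper's feature-based clustering)."""
    return X.argmax(axis=1)

def aoa_variance_explained(clusters, aoa):
    """R^2: share of AoA variance accounted for by cluster membership."""
    grand = aoa.mean()
    between = sum((aoa[clusters == k].mean() - grand) ** 2
                  * (clusters == k).sum()
                  for k in np.unique(clusters))
    total = ((aoa - grand) ** 2).sum()
    return between / total

clusters = cluster_by_dominant_part(X)
print(round(aoa_variance_explained(clusters, aoa), 3))  # ~0 for random data
```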
Embodied solution: The world from a toddler's point of view
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640812
Chen Yu, L.B. Smith, A. Pereira
Abstract: An important goal in studying both human intelligence and artificial intelligence is an understanding of how a natural or artificial learning system deals with the uncertainty and ambiguity of the real world. We suggest that the relevant aspects of a learning environment are only those that make contact with the learner's sensory system. Moreover, in a real-world interaction, what the learner perceives through his sensory system critically depends on his own and his social partner's actions, and on his interactions with the world. In this way, the perception-action loops both within a learner and between the learner and his social partners may provide an embodied solution that significantly simplifies the social and physical learning environment and filters out information irrelevant to the current learning task, ultimately leading to successful learning. In light of this, we report new findings using a novel method that seeks to describe the visual learning environment from a young child's point of view. The method consists of a multi-camera sensing environment with two head-mounted mini cameras placed on the child's and the parent's foreheads, respectively. The main results are that (1) the adult's and child's views are fundamentally different when they interact in the same environment; (2) what the child perceives most often depends on his own actions and his social partner's actions; (3) the actions generated by both social partners provide more constrained and cleaner input that facilitates learning. These findings have broad implications for how one studies and thinks about human and artificial learning systems.
Citations: 5
A self-referential childlike model to acquire phones, syllables and words from acoustic speech
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640801
H. Brandl, B. Wrede, F. Joublin, C. Goerick
Abstract: Speech understanding requires the ability to parse spoken utterances into words. But this ability is not innate and needs to be developed by infants within the first years of their life. So far almost all computational speech processing systems have neglected this bootstrapping process. Here we propose a model for early infant word learning embedded in a layered architecture comprising phone, phonotactics and syllable learning. Our model uses raw acoustic speech as input and aims to learn the structure of speech unsupervised at different levels of granularity. We present first experiments which evaluate our model on speech corpora that have some of the properties of infant-directed speech. To further motivate our approach we outline how the proposed model integrates into an embodied multimodal learning and interaction framework running on Honda's ASIMO robot.
Citations: 23
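The abstract describes unsupervised bootstrapping from syllables to words without giving the mechanism. As a loosely related illustration (explicitly not the paper's model), a classic proxy for syllable-to-word bootstrapping is transitional-probability segmentation: place word boundaries where P(next syllable | current syllable) dips. The threshold and the toy syllable stream below are arbitrary choices.

```python
from collections import Counter

def segment(syllables, threshold=0.75):
    """Insert word boundaries where the forward transitional probability
    P(next | current) between adjacent syllables dips below a threshold."""
    bigrams = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables[:-1])
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = bigrams[(a, b)] / unigrams[a]
        if tp < threshold:            # weak transition -> word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# 'baby' and 'doggy' recur as units, so their internal transitions are
# strong while cross-word transitions are weak.
stream = "ba by dog gy ba by ba by dog gy dog gy ba by".split()
print(segment(stream))
# ['baby', 'doggy', 'baby', 'baby', 'doggy', 'doggy', 'baby']
```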
A sub-symbolic process underlying the usage-based acquisition of a compositional representation: Results of robotic learning experiments of goal-directed actions
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640817
Y. Sugita, J. Tani
Abstract: We propose a sub-symbolic connectionist model in which a compositional system self-organizes by learning a provided set of goal-directed actions. This approach is compatible with an idea taken from usage-based accounts of the developmental learning of language. The model explains a possible continuous process underlying the transitions from rote knowledge to systematized knowledge by drawing an analogy to the formation process of a regular geometric arrangement of points. An experiment was performed using a simulated mobile robot reaching or turning toward a colored target. By using an identical learning model, three different types of combinatorial generalization are observed depending on the provided examples. Based on the experimental results, a dynamical systems interpretation of conventional usage-based models is discussed.
Citations: 11
I-POMDP: An infomax model of eye movement
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640819
N. Butko, J. Movellan
Abstract: Modeling eye movements during search is important for building intelligent robotic vision systems, and for understanding how humans select relevant information and structure behavior in real time. Previous models of visual search (VS) rely on the idea of "saliency maps", which indicate likely locations for targets of interest. In these models the eyes move to locations with maximum saliency. This approach has several drawbacks: (1) It assumes that oculomotor control is a greedy process, i.e., every eye movement is planned as if no further eye movements would be possible after it. (2) It does not account for temporal dynamics and how information is integrated over time. (3) It does not provide a formal basis to understand how optimal search should vary as a function of the operating characteristics of the visual system. To address these limitations, we reformulate the problem of VS as an Information-gathering Partially Observable Markov Decision Process (I-POMDP). We find that the optimal control law depends heavily on the Foveal-Peripheral Operating Characteristic (FPOC) of the visual system.
Citations: 65
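The abstract's ingredients (a belief over target locations, an eccentricity-dependent observation model, and information gathering as the objective) can be sketched as follows. One caveat: the sketch picks fixations by one-step-lookahead expected-entropy reduction, i.e. exactly the kind of greedy rule the paper argues the full I-POMDP solution improves upon; the FPOC shape, grid size, and noise model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 15                                    # candidate target locations

def acc(fix, loc):
    """FPOC stand-in: report accuracy decays with eccentricity."""
    return 0.95 - 0.4 * min(abs(fix - loc), 8) / 8

def sample_obs(fix, target):
    """One binary report per location, each correct with probability
    acc(fix, loc)."""
    truth = np.arange(N) == target
    a = np.array([acc(fix, l) for l in range(N)])
    correct = rng.random(N) < a
    return np.where(correct, truth, ~truth)

def posterior(belief, fix, obs):
    """Bayes update assuming independent per-location report noise."""
    a = np.array([acc(fix, l) for l in range(N)])
    like = np.ones(N)
    for loc in range(N):
        truth_if = np.arange(N) == loc    # report truth if target at loc
        like[loc] = np.prod(np.where(obs == truth_if, a, 1 - a))
    post = belief * like
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def infomax_fixation(belief, n_samples=50):
    """Myopic infomax: fixate where expected posterior entropy is lowest."""
    scores = []
    for fix in range(N):
        h = np.mean([entropy(posterior(belief, fix, sample_obs(fix, t)))
                     for t in rng.choice(N, n_samples, p=belief)])
        scores.append(h)
    return int(np.argmin(scores))

belief = np.full(N, 1 / N)
target = 11
for step in range(5):
    fix = infomax_fixation(belief)
    belief = posterior(belief, fix, sample_obs(fix, target))
# Most runs concentrate the belief on the true target location.
print(belief.argmax(), round(float(belief.max()), 2))
```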
A robot rehearses internally and learns an affordance relation
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640846
E. Erdemir, C. B. Frankel, S. Thornton, B. Ulutas, K. Kawamura
Abstract: This paper introduces a novel approach to a crucial problem in robotics: constructing robots that can learn general affordance relations from their experiences. Our approach has two components. (a) The robot models affordances as statistical relations between actual actions, object properties, and the experienced effects of actions on objects. (b) To exploit the general-knowledge potential of its actual experiences, the robot, much like people, engages in internal rehearsal, playing out "imagined" scenarios grounded in, but different from, actual experience. To the extent that the robot veridically appreciates affordance relations, it can autonomously predict the outcomes of its behaviors before executing them. Accurate outcome prediction in turn facilitates planning a sequence of behaviors toward executing the robot's given task successfully. In this paper, we report the very first steps in this approach to affordance learning, viz., the results of simulations and humanoid-robot-embodied experiments targeted toward having the robot learn one of the simplest affordance relations: that a space affords traversability vs. impediment to reaching a goal-object in the space.
Citations: 7
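The two components named in the abstract (a learned statistical relation from actions and object properties to effects, and internal rehearsal that plays candidate actions through that relation before acting) can be sketched with a toy traversability example. The world model, feature set, logistic learner, and action set below are all illustrative stand-ins, not the paper's humanoid setup.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_world(feats, action):
    """Toy ground truth: a space is traversable if the gap is wide enough
    for the chosen action and the obstacle is low."""
    width, height = feats
    return float(width > 0.4 + 0.1 * action and height < 0.5)

# The robot's actual experiences: (gap width, obstacle height, action id)
# labeled by whether traversal actually succeeded.
X, y = [], []
for _ in range(400):
    feats = rng.random(2)
    action = int(rng.integers(3))      # e.g. three gait/speed options
    X.append([*feats, action])
    y.append(simulate_world(feats, action))
X, y = np.array(X), np.array(y)

# Learn the affordance relation as a logistic model of success probability.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.05 * (X.T @ g) / len(y)
    b -= 0.05 * g.mean()

def rehearse(feats):
    """Internal rehearsal: play each action through the learned model and
    return the action with the highest predicted chance of traversal."""
    preds = [1 / (1 + np.exp(-(np.dot([*feats, a], w) + b)))
             for a in range(3)]
    return int(np.argmax(preds)), max(preds)

# Wide gap, low obstacle: rehearsal should score traversal as likely.
print(rehearse([0.8, 0.2]))
```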