Latest publications from the 2008 7th IEEE International Conference on Development and Learning

Implicit learning of arithmetic principles
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640833
R. Prather, M. Alibali
Abstract: Past research has investigated children's knowledge of arithmetic principles over development. However, little is known about the mechanisms involved in acquiring principle knowledge. We hypothesize that experience with equations that violate a to-be-learned principle will lead to changes in equation encoding, which in turn will promote acquisition of principle knowledge. Adults' knowledge of an arithmetic principle was evaluated before and after a training session in which some participants were exposed to equations that violated the principle. Participants who were exposed to temporally proximal principle violations increased their knowledge more than participants who were exposed to widely spaced violations. Learners with low principle knowledge post-training were also poor at encoding key features of the equations. Thus, variations in experience lead to variations in principle learning, and encoding is an important component of principle knowledge.
Citations: 2
Realizing being imitated: Vowel mapping with clearer articulation
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640840
K. Miura, Y. Yoshikawa, M. Asada
Abstract: The previous approach to vowel imitation learning between a caregiver and an infant (robot) has assumed that the robot can segment the caregiver's utterance into its phoneme categories, and that the caregiver always imitates the robot's utterance. However, in real situations, the caregiver does not always imitate the robot's utterance, nor does the robot have phoneme categories (no segmentation capability). This paper presents a method to address these issues: weakly-supervised learning along with auto-regulation, that is, active selection of actions and data with an underdeveloped classifier. To cope with the problem that imitation is not always provided, a weakly-supervised learning method is applied that can handle incompletely segmented samples (imperfectly imitated voices). Further, the regulation classifier of imitated voices is recursively applied in order to select good vocal primitives and to segment the caregiver's imitated voices, which improves the performance of the classifier itself. Simulation results are shown and future issues are discussed.
Citations: 9
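The auto-regulation idea in the entry above, as far as the abstract describes it, is that a still-underdeveloped classifier is used to select which caregiver utterances to trust before they are used for further training. The sketch below is only a loose illustration of that selection step; the formant-like features, the confidence threshold, and the use of logistic regression are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def auto_regulated_selection(clf, utterance_feats, confidence=0.7):
    """Keep only caregiver utterances that the current (still underdeveloped)
    vowel classifier already rates as confident matches to one of the robot's
    vowel categories, and return pseudo-labels for them."""
    proba = clf.predict_proba(utterance_feats)
    keep = proba.max(axis=1) >= confidence
    pseudo_labels = proba.argmax(axis=1)
    return keep, pseudo_labels

# Hypothetical usage: 2-D formant-like features, 5 vowel categories.
rng = np.random.default_rng(0)
seed_feats = rng.normal(size=(50, 2)) + np.repeat(np.arange(5), 10)[:, None] * 3.0
seed_labels = np.repeat(np.arange(5), 10)
clf = LogisticRegression(max_iter=1000).fit(seed_feats, seed_labels)

# New caregiver utterances: some are imitations of the robot, some are not.
new_feats = rng.normal(size=(20, 2)) + rng.integers(0, 5, size=(20, 1)) * 3.0
keep, labels = auto_regulated_selection(clf, new_feats)

# Retrain on the original data plus the utterances the classifier accepted.
clf.fit(np.vstack([seed_feats, new_feats[keep]]),
        np.concatenate([seed_labels, labels[keep]]))
```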
From pixels to policies: A bootstrapping agent
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640813
J. Stober, B. Kuipers
Abstract: An embodied agent senses the world at the pixel level through a large number of sense elements. In order to function intelligently, an agent needs high-level concepts, grounded in the pixel level. For human designers to program these concepts and their grounding explicitly is almost certainly intractable, so the agent must learn these foundational concepts autonomously. We describe an approach by which an autonomous learning agent can bootstrap its way from pixel-level interaction with the world, to individuating and tracking objects in the environment, to learning an effective policy for its behavior. We use methods drawn from computational scientific discovery to identify derived variables that support simplified models of the dynamics of the environment. These derived variables are abstracted to discrete qualitative variables, which serve as features for temporal difference learning. Our method bridges the gap between the continuous tracking of objects and the discrete state representation necessary for efficient and effective learning. We demonstrate and evaluate this approach with an agent experiencing a simple simulated world, through a sensory interface consisting of 60,000 time-varying binary variables in a 200 x 300 array, plus a three-valued motor signal and a real-valued reward signal.
Citations: 26
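The entry above describes abstracting learned derived variables into discrete qualitative variables that then serve as features for temporal difference learning. The fragment below is a minimal sketch of that last step only, not the authors' system: a tabular TD(0) value update over states built from sign-discretized variables. The discretization threshold and the reward value are hypothetical.

```python
from collections import defaultdict

def discretize(x, eps=0.05):
    """Abstract a continuous derived variable into a qualitative category:
    +1 (clearly positive), -1 (clearly negative), or 0 (near zero)."""
    if x > eps:
        return 1
    if x < -eps:
        return -1
    return 0

def td0_update(values, state, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular TD(0) backup: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))."""
    target = reward + gamma * values[next_state]
    values[state] += alpha * (target - values[state])

# Hypothetical usage: a state is a tuple of qualitative variables.
values = defaultdict(float)
state = (discretize(0.30), discretize(-0.01))
next_state = (discretize(0.12), discretize(0.00))
td0_update(values, state, reward=1.0, next_state=next_state)
print(values[state])
```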
Autonomous segmentation of human action for behaviour analysis
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640838
J. E. Hunter, D. Wilkes, D. Levin, C. Heaton, M. Saylor
Abstract: To correctly understand human actions, it is necessary to segment a continuous series of movements into units that can be associated with meaningful goals and subgoals. Recent research in cognitive science and machine vision has explored the perceptual and conceptual factors that (a) determine the segment boundaries that human observers place in a range of actions, and (b) allow successful discrimination among different action-types. In this project we investigated the degree to which specific movements effectively predict key sub-events in a broad range of actions in which a human model interacts with objects. In addition, we aimed to create an accessible tool to track human actions for use in a wide range of machine vision and cognitive science applications. Results from our analysis suggest that a set of basic movement cues can successfully predict key sub-events such as hand-to-object contact, across a wide range of specific tasks, and we specify parameters under which this prediction might be maximized.
Citations: 4
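The project above looks for basic movement cues that predict sub-events such as hand-to-object contact. As a rough, hypothetical illustration of what a cue-based detector of this kind could look like (the specific cues and thresholds here are not taken from the paper), a contact candidate might be flagged whenever the tracked hand is both close to the object and slowing down:

```python
import numpy as np

def contact_candidates(hand_pos, obj_pos, dist_thresh=0.05, speed_thresh=0.02):
    """Flag frames where a hand-to-object contact sub-event is plausible.

    hand_pos, obj_pos: (T, 2) arrays of tracked image-plane positions.
    A frame is flagged when the hand is within dist_thresh of the object and
    its frame-to-frame speed has dropped below speed_thresh.
    """
    dist = np.linalg.norm(hand_pos - obj_pos, axis=1)
    speed = np.linalg.norm(np.diff(hand_pos, axis=0), axis=1)
    speed = np.append(speed, speed[-1])  # pad so both cues have length T
    return (dist < dist_thresh) & (speed < speed_thresh)

# Hypothetical usage: a hand decelerating as it approaches a stationary object.
hand = np.array([[0.50, 0.50], [0.40, 0.42], [0.30, 0.34], [0.22, 0.26],
                 [0.16, 0.19], [0.13, 0.14], [0.115, 0.115], [0.11, 0.105],
                 [0.108, 0.102], [0.107, 0.101]])
obj = np.tile([0.10, 0.10], (10, 1))
print(contact_candidates(hand, obj))
```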
Sound versus meaning: What matters most in early word learning?
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640843
S. Sahni, T. Rogers
Abstract: Previous work suggests that phonological neighborhood density is a key factor in shaping early lexical acquisition. Such studies, however, have not considered how semantic neighborhoods may influence word learning. We studied how phonological and semantic densities affect both comprehension and production of nouns from the MacArthur-Bates Communicative Development Inventory (MCDI). New measures of semantic and phonological density, along with child-directed word frequency counts, were used to predict the percentage of children who know each word at different ages (8-30 months) as indicated in the MCDI lexical norms. Production was predicted by frequency and phonological density at all time points, replicating previous research. Semantic density predicted production only at 30 months. Comprehension norms were predicted by frequency and semantic density, and never by phonological density. Two- and three-way interactions reveal that semantic density may moderate effects in production, while sound density may moderate effects in comprehension.
Citations: 4
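The entry above predicts MCDI norms from child-directed frequency, phonological density, and semantic density. A minimal sketch of that style of regression with scikit-learn follows; the numbers are placeholder values for illustration, not the actual MCDI norms or the paper's density measures.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical predictors: one row per word.
# Columns: log child-directed frequency, phonological density, semantic density.
X = np.array([
    [3.2, 12.0, 5.0],
    [1.5,  4.0, 9.0],
    [2.8,  7.0, 2.0],
    [0.9,  2.0, 1.0],
])
# Outcome: proportion of children reported to produce each word at a given age.
y = np.array([0.80, 0.35, 0.55, 0.10])

model = LinearRegression().fit(X, y)
print("coefficients (freq, phon. density, sem. density):", model.coef_)
print("intercept:", model.intercept_)
```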
Where-what network 1: “Where” and “what” assist each other through top-down connections
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640806
Zhengping Ji, J. Weng, D. Prokhorov
Abstract: This paper describes the design of a single learning network that integrates both object location ("where") and object type ("what") from images of learned objects in natural complex backgrounds. The in-place learning algorithm is used to develop the internal representation (including the bottom-up and top-down synaptic weights of every neuron) in the network, such that every neuron is responsible for learning its own signal processing characteristics within its connected network environment, through interactions with other neurons in the same layer. In contrast with the previous fully connected MILN [13], the cells in each layer are locally connected in the network. Local analysis is achieved through multi-scale receptive fields, with increasing sizes of perception from earlier to later layers. The results of the experiments show how one type of information ("where" or "what") assists the network in suppressing irrelevant information from the background (from "where") or irrelevant object information (from "what"), so as to give the required missing information ("where" or "what") in the motor output.
Citations: 63
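The abstract above describes top-down connections through which a "where" signal suppresses irrelevant background information for the "what" pathway (and vice versa). The sketch below is not the in-place learning algorithm; it only illustrates, under an assumed Gaussian gating scheme, how a top-down location signal could attenuate bottom-up responses far from the attended position.

```python
import numpy as np

def topdown_where_gate(response_map, location, sigma=2.0):
    """Attenuate bottom-up responses far from a top-down attended location.

    response_map: (H, W) array of bottom-up feature responses.
    location: (row, col) position supplied by the 'where' pathway.
    Returns the map multiplied by a Gaussian spatial gate centered on location.
    """
    h, w = response_map.shape
    rows, cols = np.mgrid[0:h, 0:w]
    gate = np.exp(-((rows - location[0]) ** 2 + (cols - location[1]) ** 2)
                  / (2.0 * sigma ** 2))
    return response_map * gate

# Hypothetical usage: a 10x10 response map with uniform background clutter.
rng = np.random.default_rng(1)
responses = rng.random((10, 10))
gated = topdown_where_gate(responses, location=(4, 6))
print(gated.max(), gated.min())
```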
VIP neuron model: Head-centered cross-modal representation of the peri-personal space around the face
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640820
S. Fuke, M. Ogino, M. Asada
Abstract: Since body representation is one of the most fundamental issues for physical agents (humans, primates, and also robots) in adaptively performing various kinds of tasks, a number of learning methods have attempted to make robots acquire their body representation. However, these previous methods have supposed that the reference frame is given and fixed a priori; therefore, such acquisition has not been dealt with. This paper presents a model that enables a robot to acquire a cross-modal representation of its face based on VIP neurons, whose function (found in neuroscience) is not only to code the location of visual stimuli in the head-centered reference frame but also to connect visual and tactile sensations. Preliminary simulation results are shown and future issues are discussed.
Citations: 9
Automatic cry detection in early childhood education settings
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640830
P. Ruvolo, J. Movellan
Abstract: We present results on applying a novel machine learning approach for learning auditory moods in natural environments [1] to the problem of detecting crying episodes in preschool classrooms. The resulting system achieved levels of performance approaching that of human coders and also significantly outperformed previous approaches to this problem [2].
Citations: 28
Internal state predictability as an evolutionary precursor of self-awareness and agency
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640814
Jaerock Kwon, Y. Choe
Abstract: What is the evolutionary value of self-awareness and agency in intelligent agents? One way to make this problem tractable is to think about the necessary conditions that lay the foundation for the emergence of agency, and to assess their evolutionary origin. We postulate that one such requirement is the predictability of the internal state trajectory. A distinct property of one's own actions compared to someone else's is that one's own are highly predictable, and this gives the sense of "authorship". In order to investigate whether internal state predictability has any evolutionary value, we evolved sensorimotor control agents driven by a recurrent neural network in a 2D pole-balancing task. The hidden layer activity of the network was viewed as the internal state of an agent, and the predictability of its trajectory was measured. We took agents exhibiting equal levels of performance during evolutionary trials and grouped them into those with high or low internal state predictability (ISP). The high-ISP group showed better performance than the low-ISP group in novel tasks with substantially harder initial conditions. These results indicate that regularity or predictability of neural activity in the internal dynamics of agents can have a positive impact on fitness and, in turn, can help us better understand the evolutionary role of self-awareness and agency.
Citations: 16
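The study above measures how predictable the recurrent network's hidden-state trajectory is, but the abstract does not spell out the measure. The following is therefore only one plausible stand-in, not the authors' definition: fit a linear one-step predictor of the hidden state and convert its error into a score, so that smoother, more regular trajectories score higher.

```python
import numpy as np

def internal_state_predictability(hidden_states):
    """Score the predictability of a hidden-state trajectory in (0, 1].

    hidden_states: (T, D) array of hidden-unit activations over time.
    Fits a least-squares linear predictor of h[t+1] from h[t]; lower one-step
    prediction error maps to a higher score.
    """
    past, future = hidden_states[:-1], hidden_states[1:]
    W, *_ = np.linalg.lstsq(past, future, rcond=None)
    mse = np.mean((past @ W - future) ** 2)
    return 1.0 / (1.0 + mse)

# Hypothetical usage: a smooth trajectory should outscore a noisy one.
rng = np.random.default_rng(2)
smooth = np.cumsum(rng.normal(scale=0.01, size=(200, 8)), axis=0)
noisy = rng.normal(scale=1.0, size=(200, 8))
print(internal_state_predictability(smooth), internal_state_predictability(noisy))
```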
What roles can attention play in recognition?
2008 7th IEEE International Conference on Development and Learning. Pub Date: 2008-10-10. DOI: 10.1109/DEVLRN.2008.4640805
J. Tsotsos
Abstract: Does attention have relevance for visual recognition and if so, under what circumstances? Is there a particular role (or roles) for attentive processes? These are not so simple to answer. Attention, if used at all in computer vision, has traditionally played one or both of the following roles: where to look next (or selection of region of interest), or top-down task influence on visual computation. In this paper, I argue that these are only two of the possible roles. Attention is also closely linked to binding and it is the triad of attention, binding and recognition that go hand in hand for non-trivial visual recognition tasks. This paper describes a set of four novel binding processes that employ a variety of attentive mechanisms to achieve recognition beyond the first feed-forward pass. The description is at a conceptual level with many pointers to papers where details may be found.
Citations: 12