{"title":"Statistical Characteristics of Velocity of Movements of Limbs in Young Infants during the Conjugate Reinforcement Mobile Task","authors":"R. Saji, H. Watanabe, G. Taga","doi":"10.1109/DEVLRN.2005.1490984","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490984","url":null,"abstract":"In this paper, we demonstrate statistical identifications for the time series of velocity of the movements of limbs in young infants during the conjugate reinforcement mobile task. The mean square velocity and the probability density function (PDF) of the time rate change of velocity are estimated. We found that the PDF is universally symmetric with a sharpened peak at the origin and exponential-tails. The result suggests that the PDF is a useful measure that reflects the motor pattern generation and memory formation during the mobile task","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134520066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inhibition in Cognitive Development: Contextual Impairments in Autism","authors":"P. Bjorne, B. Johansson, C. Balkenius","doi":"10.1109/DEVLRN.2005.1490978","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490978","url":null,"abstract":"Persons with autism, probably due to early sensory impairments, attend to and select for stimuli in an uncommon way. Inhibition of some features of a stimulus, such as location and shape, might be intact, while other features are not as readily inhibited, for example color. Stimuli irrelevant to the task might be attended to. This results in a learning process where irrelevant stimuli are erroneously activated and maintained. Therefore, we propose that the developmental pathway and behavior of persons with autism needs to be understood in a framework including discussions of inhibitory processes and context learning and maintenance. We believe that this provides a fruitful framework for understanding the causes of the seemingly diverse and complex cognitive difficulties seen in autism","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128023064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Other Motion Equivalence Learning for Head Movement Imitation","authors":"Y. Nagai","doi":"10.1109/DEVLRN.2005.1490958","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490958","url":null,"abstract":"Summary form only given. This paper presents a learning model for head movement imitation using motion equivalence between the actions of the self and the actions of another person. Human infants can imitate head and facial movements presented by adults. An open question regarding the imitation ability of infants is what equivalence between themselves and other infants utilize to imitate actions presented by adults (Meltzolf and Moore, 1997). A self-produced head movement or facial movement cannot be perceived in the same modality that the action of another is perceived. Some researchers have developed robotic models to imitate human head movement. However, their models used human posture data that cannot be detected by robots, and/or the relationships between the actions of humans and robots were fully defined by the designers. The model presented here enables a robot to learn self-other equivalence to imitate human head movement by using only self-detected sensor information. On the basis of the evidence that infants more imitate actions when they observed the actions with movement rather than without movement my model utilizes motion information about actions. The motion of a self-produced action, which is detected by the robot's somatic sensors, is represented as angular displacement vectors of the robot's head. The motion of a human action is detected as optical flow in the robot's visual perception when the robot gazes at a human face. By using these representations, a robot learns self-other motion equivalence for head movement imitation through the experiences of visually tracking a human face. In face-to-face interactions as shown, the robot first looks at the person's face as an interesting target and detects optical flow in its camera image when the person turns her head to one side. The article also shows the optical flow detected when the person turned her head from the center to the robot's left. Then, the ability visually to track a human face enables the robot to turn its head into the same direction as the person because the position of the person's lace moves in the camera image. This also shows the robot's movement vectors detected when it turned its head to the left side by tracking the person's face, in which the lines in the circles denote the angular displacement vectors in the eight motion directions. As a result, the robot finds that the self-movement vectors are activated in the same motion directions as the optical flow of the human head movement. This self-other motion equivalence is acquired through Hebbian learning. Experiments using the robot shown verified that the model enabled the robot to acquire the motion equivalence between itself and a human within a few minutes of online learning. The robot was able to imitate human head movement by using the acquired sensorimotor mapping. This imitation ability could lead to the development of joint visual attention by using an object as a target to be attended (Nagai, 200","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122630714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning the Correspondence between Continuous Speeches and Motions","authors":"O. Natsuki, N. Arata, I. Yoshiaki","doi":"10.1109/DEVLRN.2005.1490983","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490983","url":null,"abstract":"Summary form only given. Roy (1999) developed a computational model of early lexical learning to address three questions: First, how do infants discover linguistic units? Second, how do they learn perceptually-grounded semantic categories? And third, how do they learn to associate linguistic units with appropriate semantic categories? His model coupled speech recordings with static images of objects, and acquired a lexicon of shape names. Kaplan et al. (2001) presented a model for teaching names of actions to an enhanced version of AIBO. The AIBO had built-in speech recognition facilities and behaviors. In this paper, we try to build a system that learns the correspondence between continuous speeches and continuous motions without a built-in speech recognizer nor built-in behaviors. We teach RobotPHONE to respond to voices properly by taking its hands. For example, one says 'bye-bye' to the RobotPHONE holding its hand and waving. From continuous input, the system must segment speech and discover acoustic units which correspond to words. The segmentation is done based on recurrent patterns which was found by incremental reference interval-free continuous DP (IRIFCDP) by Kiyama et al. (1996) and Utsunomiya et al. (2004), and we accelerate the IRIFCDP using ShiftCDP (Itoh and Tanaka, 2004). The system also segments motion by the accelerated IRIFCDP, and it memorizes co-occurring speech and motion patterns. Then, it can respond to taught words properly by detecting taught words in speech input by ShiftCDP. We gave a demonstration with a RobotPHONE at the conference. We expect that it can learn words in any languages because it has no built-in facilities specific to any language","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124116493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Robot Soccer Team Behaviours Through Approximate Simulation","authors":"S. R. Young, S. Chalup","doi":"10.1109/DEVLRN.2005.1490968","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490968","url":null,"abstract":"Robot soccer is now recognized as one of the most popular and efficient testbeds for intelligent robotics. It involves many challenges for computation, mechanics, control, software engineering, machine learning, and other fields. The international RoboCup initiative supports research into robot soccer and provides an excellent environment to investigate machine learning for robotics in simulation and the real world","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125552929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color Tone Perception and Naming: Development in Acquisition of Color Modifiers","authors":"D. R. Wanasinghe, Charith N. W. Giragama, N. Bianchi-Beithouze","doi":"10.1109/DEVLRN.2005.1490954","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490954","url":null,"abstract":"Color is one of the most obvious attributes with which children usually start to classify objects they see. The purpose of this study was to investigate the development of children's ability to discriminate and name colors that varied in saturation and intensity (value) for a given hue (i.e., color tones). Perceptual and naming behaviors were assessed in 221 children, aged between 8 and 24, grouped in three categories, elementary, junior high school and university students. Color tone perception was observed through odd-one-out task and naming responses were obtained in terms of modifiers: vivid, strong, dark, bright, dull, and pale. Results revealed that the discrimination of subtle variations of color tones in two younger age groups was similar to that of the university students. In addition, it was found that elementary school children reliably start interpreting their experience of such variations with just three modifier terms: bright, strong, and dark. The knowledge of color modifier terms varied with age. When the naming task was constrained, a developmental order in the acquisition of such terms was observed. Salient dimensions underlying the judgments of color modifier terms were identified. The importance of each dimension varied with age. At the level of elementary, the semantic classification of color tones was strongly based only on intensity. At the junior high school level, it was found that saturation emerged as an important dimension in assigning modifiers","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129267321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transient Synchrony and Dynamical Representation of Behavioral Goals of the Prefrontal Cortex","authors":"K. Sakamoto, H. Mushiake, N. Saito, J. Tanji","doi":"10.1109/DEVLRN.2005.1490985","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490985","url":null,"abstract":"Summary form only given. Behavioral planning requires organizing actions by integrating perceived or memorized information to achieve goals. Studies have suggested that the underlying neural mechanisms involve updating representation of goals for action in associative cortices such as the prefrontal cortex (Saito et al., 2005). Although the underlying neural mechanisms are still unknown, we assume that functional linking of neurons would contribute to this transformation of behavioral goals. Thus, we investigated the relation of synchronous firing of neurons to the transformation of goal representation by recording neurons from the dorsolateral prefrontal cortex (DLPFC), while the monkeys performed a path-planning task (Mushiake et al., 2001) that requires them to plan immediate goals of actions to achieve final goals. Two monkeys were trained to perform a path-planning task that required them to move a cursor to a goal in a lattice-like display. After the cursor emerged in the center of the lattice (start display), a goal was presented in a corner (final goal display). The delay 1 period was followed by the delay 2 period, in which a part of the path in the lattice was blocked that disabled the cursor to move through the path. Then, a go signal was provided to allow the monkey to move the cursor for one check of the lattice. To dissociate arm movements and cursor movements, the monkeys to perform with three different arm-cursor assignments, which were changed every 48 trials. Neuronal pairs that were recorded simultaneously during more than two arm-cursor assignment blocks (> 96 trials) were included in the dataset. The analysis for task-related modulation of synchronous firing was based on the time-resolved cross-correlation method (Baker et al., 2001). This method can estimate neuronal synchrony well, because it can exclude the influence of firing rate change in and among trials by using instantaneous firing rate (IFK) for the predictor. In an example, weak and strong increase in co-firing rate of the neuronal pair is seen at final goal display and delay 2 period respectively, while synchronized firing can be recognized at delay 1 period without accompanying co-firing rate increase. We selected DLPFC neurons showing significant synchrony and goal-related activity with gradual shift of representation from final to immediate goals before initiation of the action. Many of the DLPFC neurons were found to show transient enhancement of synchrony without firing-rate increases. Furthermore, such enhancement was nearly coincident with the timing of shift in their goal representations. These results suggest that transient synchrony plays an important role in the transforming process of goal representations during behavioral planning","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133377556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotional elicitation by dynamic facial expressions","authors":"W. Sato, S. Yoshikawa","doi":"10.1109/DEVLRN.2005.1490973","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490973","url":null,"abstract":"In the present study, we investigated the emotional effect of the dynamic presentation of facial expressions. Dynamic presentation of facial expressions was implemented using a computer-morphing technique. We presented dynamic and static expressions of fear and happiness, as well as other dynamic and static mosaic images, to 17 subjects. Subjects rated the valence and arousal of their emotional response to the images. Results indicated higher reported arousal in response to dynamic presentations than to static facial expressions (for both emotions) and to mosaic images. These results suggest that the specific effect of the dynamic presentation of emotional facial expressions is that it enhances the overall emotional experience without a corresponding qualitative change in that experience, and that this effect is not restricted to facial images","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131285255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prototype-specific learning for children's vocabulary","authors":"S. Hidaka, J. Saiki","doi":"10.1109/DEVLRN.2005.1490982","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490982","url":null,"abstract":"Several studies suggested that knowledge about the relationship between vocabulary and perceptual objects work as a constraint to enable children to generalize novel words quickly. Children's bias in novel word generalization is considered to reflect their prior knowledge and is investigated in various contexts. In particular, children have a bias to attend to shape similarity of solid objects and material similarity of nonsolid substance in novel word acquisition (Imai and Gentner, 1997). A few studies reported that a model based on Boltzmann machine could explain categorization bias among shape, material and solidity by learning an artificial vocabulary environment (Colunga and Smith, 2000 and Samuelson, 2002). The model has few constraints within its internal structure, but bias emerges through learning artificial vocabulary using simple statistical property about entities' shape, solidity and count/mass syntactical class (Samuelson and Smith, 1999). We proposed a model (prototype-specific attention learning; PSAL) that could learn optimal feature attention for specific prototype of vocabulary. The Boltzmann machine model learns vocabulary in uniform feature space. On the other hand, PSAL learns it in feature space with different metric specific to proximal prototypes. Real children show categorization bias robustly in various learning environment, thus a model should have robustness to various environments. Therefore, we investigated how the two models behave in a few typical vocabulary environments and discuss how prototype-specific learning influence categorization bias","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121286781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Computational Model which Learns to Selectively Attend in Category Learning","authors":"Lingyun Zhang, G. Cottrell","doi":"10.1109/DEVLRN.2005.1490981","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490981","url":null,"abstract":"Shepard et al. (1961) made empirical and theoretical investigation of the difficulties of different kinds of classifications using both learning and memory tasks. As the difficulty rank mirrors the number of feature dimensions relevant to the category, later researchers took it as evidence that category learning includes learning how to selectively attend to only useful features, i.e. learning to optimally allocate the attention to those dimensions relative to the category (Rosch and Mervis, 1975). We built a recurrent neural network model that sequentially attended to individual features. Only one feature is explicitly available at one time (as in Rehder and Hoffman's eye tracking settings (Render and Hoffman, 2003)) and previous information is represented implicitly in the network. The probabilities of eye movement from one feature to the next is kept as a fixation transition table. The fixations started randomly without much bias on any particular feature or any movement. The network learned the relevant feature(s) and did the classification by sequentially attending to these features. The rank of the learning time qualitatively matched the difficulty of the categories","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121108190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}