{"title":"'Infants' preference for infants and adults','','','','','','','','93','95',","authors":"W. Sanefuji, H. Ohgami, K. Hashiya","doi":"10.1109/DEVLRN.2005.1490950","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490950","url":null,"abstract":"In natural settings, human infants tend to prefer infants to older children. Some laboratory-based studies reported that infants also show preference for adults, as much as for the age-mates. We showed that infants looked at infants longer than at children and that they showed banging behaviors more frequently while looking at infants and at adults than while looking at children. Our study suggested different cognitive basis for the infants' preference for infants and for adults: infants' preference for infants might be explained as a combination of the preference for babyish characteristics (same as adults) and the perceptual preference for similar others. On the other hand, the preference for adults might reflect the infants' daily learning through experience. Infants might prefer adults as familiar others","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114331289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motor interference between Humans and Humanoid Robots: Effect of Biological and Artificial Motion","authors":"T. Chaminade, D. W. Franklin, E. Oztop, G. Cheng","doi":"10.1109/DEVLRN.2005.1490951","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490951","url":null,"abstract":"If humanoid robots are to become commonplace in our society, it is important to understand how they are perceived by humans. An influent model in social cognitive neuroscience posits that in human face-to-face interaction, the observation of another individual performing an action facilitates the execution of a similar action, and interferes with the execution of different action. In one interference experiment, null interference was reported when subjects observed an industrial robotic arm moving at a constant velocity perform an incongruent task, suggesting that this effect may be specific to interacting with other humans. This experimental paradigm was adapted to investigate how humanoid robots interfere with humans. Subjects performed rhythmic arm movements while observing either a human agent or humanoid robot performing either congruent or incongruent movements with comparable kinematics. The variance of the executed movements was used as a measure of the amount of interference in the movements. In a previous report, we reported that in contrast to the robotic arm, the humanoid robot caused a significant increase of the variance of the movement during the incongruent condition. In the present report we investigate the effect of the movement kinematics on the interference. The humanoid robot moved either with a biological motion, based on a realistic model of human motion, or with an artificial motion. We investigated the variance of the subjects' movement during the incongruent condition, with the hypothesis that it should be reduced for the artificial movement in comparison to the biological movement. We found a significant effect of the factors defining the experimental conditions, congruency and type of movements' kinematics, on the subjects' variation. Congruency was found to have the expected effect on the area, but the increase in incongruent conditions was only significant when the robot movements followed biological motion. This result implies that motion is a significant factor for the interference effect","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124300006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visio-tactile binding through double-touching by a robot with an anthropomorphic tactile sensor","authors":"Y. Yoshikawa, Mamoru Yoshimura, K. Hosoda, M. Asada","doi":"10.1109/DEVLRN.2005.1490957","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490957","url":null,"abstract":"Binding is one of the most fundamental cognitive functions, how to find the correspondence of sensations between different modalities. It is still unclear how to bind different sensor modalities such as vision and touch. Without a priori knowledge on its sensing structure it is a formidable issue for a robot even to match the foci of attention in different modalities since the sensory data from different sensors are not always caused from the same physical phenomenon. In this study, previous method to make a robot capable of quantizing touch sensors by itself was extended","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116799956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping the space of skills: An approach for comparing embodied sensorimotor organizations","authors":"F. Kaplan, V. Hafner","doi":"10.1109/DEVLRN.2005.1490960","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490960","url":null,"abstract":"This article presents a mathematical framework based on information theory to compare temporally-extended embodied sensorimotor organizations. Central to this approach is the notion of configuration: a set of distances between information sources, statistically evaluated for a given time span. Because information distances capture simultaneously effects of physical closeness, intermodality, functional relationship and external couplings, a configuration characterizes an embodied interaction with a particular environment. In this approach, collections of skills can be mapped in a unified space as configurations of configurations. This article describes these different abstractions in a formal manner and presents results of preliminary experiments showing how this framework can be used to capture the behavioral organization of an autonomous robot","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"482 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122587194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Timing-Based Model of Body Schema Adaptation and its Role in Perception and Tool Use: A Robot Case Study","authors":"C. Nabeshima, M. Lungarella, Y. Kuniyoshi","doi":"10.1109/DEVLRN.2005.1490935","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490935","url":null,"abstract":"The multisensory representation of our body (body schema), and its conscious and manipulable counterpart (body image) play a pivotal role in the development and expression of many higher level cognitive functions, such as tool use, imitation, spatial perception, and self-awareness. This paper addresses the issue of how the body schema changes as a result of tool use-dependent experience. Although it is plausible to assume that such an alteration is inevitable, the mechanisms underlying such plasticity have yet to be clarified. To tackle the problem, we propose a novel model of body schema adaptation which we instantiate in a tool using robot. Our experimental results confirm the validity of our model. They also show that timing is a particularly important feature of our model because it supports the integration of visual, tactile, and proprioceptive sensory information. We hope that the approach exposed in this study allows gaining further insights into the development of tool use skills and its relationship to body schema plasticity","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124736749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Six-month-old infants' expectations for interactive-humanoid robots","authors":"A. Arita, K. Hiraki, T. Kanda, H. Ishiguro","doi":"10.1109/DEVLRN.2005.1490962","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490962","url":null,"abstract":"Summary form only given. As technology advances, many human-like robots are being developed. These humanoid robots should be classified as inanimate objects; however, they share many properties with human beings. This raises the question of how infants classify them. Developmental psychology has addressed the issue of how infants come to characterize humans as agents having mental states that is indispensable foundation for sociality. Some studies suggest that infants attribute mental states only to humans. For instance, Legerstee et al. (2000) found that 6-month-old infants do expect people to communicate with people, not with objects. These results indicate that human cognition specializes in human in early infancy. Other studies have suggested, however, that infants attribute mental states to non-human objects that appear to be interactive with a person. For instance, Johnson et al. (1999) indicated that 12-month-old infants did gaze following to a non-human but interactive stuff. These results imply that interactivity between humans and objects is the key factor in mental attribution, however, interesting questions remain to be answered: do infants also have expectation for robots to communicate with person? In this study, we investigated whether 6-month-old infants expected an experimenter to talk to a humanoid robot \"Robovie\" [Ishiguro, et al., (2001) using infants' looking time as a measurement of violation-of-expectation. Violation-of-expectation method uses infants' property that they look longer at the event that they do not expect than at the event that they expect. During test trials, we show infants the stimulus in which an actor talks to the robot and another person. If infants regard robots as communicative existence like human, they will not be surprised and look at the robot as long as at the person. But if infants do not attribute communicational property to robots, they will look longer at the robot than at the person. To show infants how the robot behaved and interacted with people, we added a familiarization period prior to the test trials, which phase provided infants with prior knowledge about the robots. The stimuli in the familiarization of these conditions are as follows: 1) interactive robot condition: the robot behaved like a human, and the person and the robot interacted with each other; 2) non-active robot condition: the robot was stationary and the person was both active and talked to the robot; 3) active robot condition: the robot behaved like a human, and the person was stationary and silent. If the robots' appearance is dominant for expectation, the results of all condition are same. If robot' action is dominant, the results of the interactive robot condition and the active robot condition are same. And if human-robot interaction is dominant, the result of the interactive robot condition is only different. In the results, infants who had watched the interactive robot looked at the robot as long as at the person. ","PeriodicalId":297121,"journal":{"name":"Proceedings. 
The 4nd International Conference on Development and Learning, 2005.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116153476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory for Faces in Infants: A Comparison to the Memory for Objects","authors":"R. Morimoto, K. Hashiya","doi":"10.1109/DEVLRN.2005.1490977","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490977","url":null,"abstract":"Memory for faces and objects was investigated in 8- to 10-month infants. As the experience for memorizing the target face or object, face-to-face interactions between infant and experimenter in almost natural settings were conducted. To assess memory retention, two-alternative preferential looking tests were done after both a 3-minute delay and a 1-week delay from the familiarization phase. In the 3-minute delay condition, the infants looked more at the novel (not-the-experimenter) face that had not been experienced before, than the familiar (the experimenter) one. This shows that the infants memorize faces from limited experience at least for 3 minutes. On the other hand, the infants showed no such results in the object condition. These results might suggest specific processing for faces that cannot be applied for object stimuli. More detailed examinations should be done to examine this possibility","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128509098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The RUBI/QRIO Project: Origins, Principles, and First Steps","authors":"J. Movellan, F. Tanaka, B. Fortenberry, K. Aisaka","doi":"10.1109/DEVLRN.2005.1490948","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490948","url":null,"abstract":"Computers are already powerful enough to sustain useful robots that interact and assist humans in every-day life. However progress requires a scientific shakedown in goals and methods not unlike the cognitive revolution that occurred 40 years ago. The document presents the origin and early steps of the RUBI/QRIO project, in which two humanoid robots, RUBI and QRIO, are being brought to an early childhood education center on a daily bases for a period of time of at least one year. The goal of the RUBI/QRIO project is to accelerate progress on everyday life interactive robots by addressing the problem at multiple levels, including the development of new scientific methods, formal approaches, and scientific agenda. The current focus of the project is on educational environments, exploring the ways in which this technology could be used to assist teachers and enrich the educational experiences of children. We describe the origins, philosophy and first steps of the project, which included immersion of the researchers in the Early Childhood Education Center at UCSD, development of a social robot prototype named RUBI, and daily field studies with RUBI and QRIO, a prototype humanoid developed by Sony","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127587257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement Learning of Informative Attention Patterns for Object Recognition","authors":"L. Paletta, G. Fritz, Christin Seifert","doi":"10.1109/DEVLRN.2005.1490979","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490979","url":null,"abstract":"Attention is a highly important phenomenon emerging in infant development (Ruff and Rothbart, 1996). In human perception, sequential visual sampling about the environment is mandatory for object recognition purposes. Sequential attention is viewed in the framework of a saccadic decision process that aims at minimizing the uncertainty about the semantic interpretation for object or scene recognition. Methodologically, this work provides a framework for learning sequential attention in real-world visual object recognition, using an architecture of three processing stages. The first stage rejects irrelevant local descriptors providing candidates for foci of interest (FOI). The second stage investigates the information in the FOI using a codebook matcher. The third stage integrates local information via shifts of attention to characterize object discrimination. A Q-learner adapts then from explorative search on the FOI sequences. The methodology is successfully evaluated on representative indoors and outdoors imagery, demonstrating the significant impact of the learning procedures on recognition accuracy and processing time","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128246111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Young Infants' Sensitivity to Social Contingency from Mother and Stranger: Developmental Changes","authors":"M. Okanda, S. Itakura","doi":"10.1109/DEVLRN.2005.1490971","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490971","url":null,"abstract":"We investigated whether 1- and 4-month-old infants are sensitive to social contingency from mother and stranger via DV live-replay paradigm. The result indicated that 1-month-old infants could detect mother's non-contingency. Four-month-olds infants might be able to use smile as a social tool to make a stranger's response contingent again. We defined that there are two subdivision components in sensitivity to social contingency such as detection and expectancy. Detection is a basic ability, and expectancy is an ability what infants form to partner's contingency. Development of detection may be earlier than that of expectancy. Those two components are necessary for development of sensitivity to social contingency. Using smile as a social tool is one of applied abilities, and it develops later. We also found that infants' interest in mother and stranger differed in two age groups. One-month-old can only detect mother's unusual responses but not stranger's. By age of 4 months, infants became more sensitive to contingency from strangers because they are interested in strangers more","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123121498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}