{"title":"From Unknown Sensors and Actuators to Visually Guided Movement","authors":"L. Olsson, C. Nehaniv, D. Polani","doi":"10.1109/DEVLRN.2005.1490934","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490934","url":null,"abstract":"This paper describes a developmental system implemented on a real robot that learns a model of its own sensory and actuator apparatuses. There is no innate knowledge regarding the modality or representation of the sensoric input and the actuators, and the system relies on generic properties of the robot's world such as piecewise smooth effects of movement on sensory changes. The robot develops the model of its sensorimotor system by first performing random movements to create an informational map of the sensors. Using this map the robot then learns what effects the different possible actions have on the sensors. After this developmental process the robot can perform simple motion tracking","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124948040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Information Self-Structuring: Key Principle for Learning and Development","authors":"M. Lungarella, O. Sporns","doi":"10.1109/DEVLRN.2005.1490938","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490938","url":null,"abstract":"Intelligence and intelligence-like processes are characterized by a complex yet balanced interplay across multiple time scales between an agent's brain, body, and environment. Through sensor and motor activity natural organisms and robots are continuously and dynamically coupled to their environments. We argue that such coupling represents a major functional rationale for the ability of embodied agents to actively structure their sensory input and to generate statistical regularities. Such regularities in the multimodal sensory data relayed to the brain are critical for enabling appropriate developmental processes, perceptual categorization, adaptation, and learning. We show how information theoretical measures can be used to quantify statistical structure in sensory and motor channels of a robot capable of saliency-driven, attention-guided behavior. We also discuss the potential importance of such measures for understanding sensorimotor coordination in organisms (in particular, visual attention) and for robot design","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125442291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does Gaze Reveal the Human Likeness of an Android?","authors":"T. Minato, Michihiro Shimada, S. Itakura, Kang Lee, H. Ishiguro","doi":"10.1109/DEVLRN.2005.1490953","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490953","url":null,"abstract":"The development of androids that closely resemble human beings enables as to investigate many phenomena related to human interaction that could not otherwise be investigated with mechanical-looking robots. This is because more humanlike devices are in a better position to elicit the kinds of responses that people direct toward each other. In particular, we cannot ignore the role of appearance in giving us a subjective impression of human presence or intelligence. However, this impression is influenced by behavior and the complex relationship between appearance and behavior. We propose a hypothesis about how appearance and behavior are related and map out a plan for android research to investigate the hypothesis. We then examine a study that evaluates the behavior of androids according to the patterns of gaze fixations they elicit. Studies such as these, which integrate the development of androids with the investigation of human behavior, constitute a new research area that fuses engineering and science","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126715562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Motion-triggered human-robot synchronization for autonomous acquisition of joint attention","authors":"H. Sumioka, K. Hosoda, Y. Yoshikawa, M. Asada","doi":"10.1109/DEVLRN.2005.1490980","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490980","url":null,"abstract":"Joint attention, a behavior to attend to an object to which another person attends, is an important element not only for human-human communication but also human-robot communication. Building a robot that autonomously acquires the behavior is supposed to be a formidable issue both to establish the design principle of a robot communicating with humans and to understand the developmental process of human communication. To accelerate learning of the behavior, the motion synchronization among the object, the caregiver, and the robot is important since it ensures the information consistency between them. In this paper, we propose a control architecture to utilize the motion information for synchronization necessary to find the consistency. The task given for the caregiver is to pick up an object on the table and to investigate it with his/her hands, which is a quite natural task for humans. If only the caregiver can move the objects in the environment, the observed motion is that of the caregiver's face and/or that of the object moved by him/her. When the caregiver is looking around to find an interesting object, the image flow of the face is observed. After he/she fixates the object and picks it up, the flow of the face stops and that of the object is observed","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122800930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imitation faculty based on a simple visuo-motor mapping towards interaction rule learning with a human partner","authors":"M. Ogino, H. Toichi, M. Asada, Y. Yoshikawa","doi":"10.1109/DEVLRN.2005.1490964","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490964","url":null,"abstract":"Imitation has been regarded as one of the key technologies indispensable for communication since the mirror neuron made a sensation not only in physiology but also in other disciplines such as cognitive science, and even robotics. This paper is aimed at building a human-robot communication system and proposes an observation-to-motion mapping system as the first step towards the final goal of learning natural communication. This system enables a humanoid platform to imitate the observed human motion, that is, a mapping from observed human motion data to its own motor commands. To validate the effectiveness of the proposed system, we examine whether the robot can acquire the interaction rule in an environment in which a human motion occurs under an artificial interaction rule","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114368726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Evolution of Language Games between two Autonomous Robots","authors":"Jean-Christophe Baillie, M. Nottale","doi":"10.1109/DEVLRN.2005.1490946","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490946","url":null,"abstract":"The \"talking robots\" experiment, inspired by the \"talking heads\" experiment from Sony, explores possibilities on how to ground symbols into perception. We present here the first results of this experiment and outline a possible extension to social behaviors grounding: the purpose is to have the robots develop not only a lexicon but also the interaction protocol, or language game that they use to create the lexicon. This raises several complex problems that we review here","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129774250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Plans for Developing Real-time Dance Interaction between QRIO and Toddlers in a Classroom Environment","authors":"F. Tanaka, B. Fortenberry, K. Aisaka, J. Movellan","doi":"10.1109/DEVLRN.2005.1490963","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490963","url":null,"abstract":"This paper introduces the early stages of a study designed to understand the development of dance interactions between QRIO and toddlers in a classroom environment. The study is part of a project to explore the potential use of interactive robots as instructional tools in education. After 3 months observation period, we are starting the experiment. After explaining the experimental environment, component technologies used in it are described: an interactive dance with visual feedback, exploiting the active detection of contingency and robotic emotion expression","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125787951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-development of motor abilities resulting from the growth of a neural network reinforced by pleasure and tensions","authors":"Juan Liu, A. Buller","doi":"10.1109/DEVLRN.2005.1490956","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490956","url":null,"abstract":"We present a novel method of machine learning toward emergent motor behaviors. The method is based on a growing neural network that initially produces senseless signals but later associates rewarding signals and quasi-rewarding signals with recent perceptions and motor activities and, based on these data, incorporates new cells and creates new connections. The rewarding signals are produced in a device that plays a role of a \"pleasure center\", whereas the quasi-rewarding signals (that represent pleasure expectation) are generated by the network itself. The network was tested using a simulated mobile robot equipped with a pair of motors, a set of touch sensors, and a camera. Despite a lack of innate wiring for a useful behavior, the robot learned without an external guidance how to avoid obstacles and approach an object of interest, which is fundamental for creatures and usually handcrafted in traditional robotic systems","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117170562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distinguishing Intentional Actions from Accidental Actions","authors":"K. Harui, N. Oka, Y. Yamada","doi":"10.1109/DEVLRN.2005.1490972","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490972","url":null,"abstract":"Summary form only given. Although even human infants have the ability to recognize intention by Meltzoff (1995) and Tomasello (1997), its engineering realization has not been established yet. It is important to realize a man-machine interface which can adapt naturally to human by guessing whether the behavior of human is intentional or accidental. Various information, for example, voice, facial expression, and gesture can be used to distinguish whether a behavior is intentional or not, we however pay attention to the prosody and the timing of utterances in this study, because when one did an accidental movement, we think that he tends to utter words, e.g. `oops', in a characteristic fashion unintentionally. In this study, a video game was built in which one can play an agent with a ball and recorded the interaction between a subject and the agent. Then, a system was built using a decision tree by Quinlan (1996) that learns to distinguish intentional actions of subjects from accidental ones, and analyzed the precision of the trees. Continuous inputs for C4.5 algorithm, and discretized inputs at regular intervals for ID3 algorithm were used. The difference in inputs is the cause of the difference in the precision in table I","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128375713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Infomax Controller for Real Time Detection of Social Contingency","authors":"J. Movellan","doi":"10.1109/DEVLRN.2005.1490937","DOIUrl":"https://doi.org/10.1109/DEVLRN.2005.1490937","url":null,"abstract":"We present a model of behavior according to which organisms react to the environment in a manner that maximizes the information gained about events of interest. We call the approach \"Infomax control\" for it combines the theory of optimal control with information maximization models of perception. The approach is reactive, not cognitive, in that it is better described as a continuous \"dance\" of actions and reactions with the world, rather than a turn-taking inferential process like chess-playing. The approach however is intelligent in that it produces behaviors that optimize long-term information gain. We illustrate how Infomax control can be used to understand the detection of social contingency in 10 month old infants. The results suggest that, while lacking language, by this age infants actively \"ask questions\" to the environment, i.e., schedule their actions in a manner that maximizes the expected information return. A real time Infomax controller was implemented on a humanoid robot to detect people using contingency information. The system worked robustly requiring little bandwidth and computational cost","PeriodicalId":297121,"journal":{"name":"Proceedings. The 4nd International Conference on Development and Learning, 2005.","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131352155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}