{"title":"Robot's behavior expressions according to the sentence types and emotions with modification by personality","authors":"Jong-Chan Park, Hyunsoo Song, S. Koo, Young-Min Kim, D. Kwon","doi":"10.1109/ARSO.2010.5680043","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5680043","url":null,"abstract":"Expression has become one of important parts in human-robot interaction as an intuitive communication channel between humans and robots. However it is very difficult to construct robot's behaviors one by one. Developers consider how to make various motions of the robot easily. Therefore we propose an useful behavior expression method according to the sentence types and emotions. In this paper, robots express behaviors using motion sets of multi-modalities described as a combination of sentence types and emotions. In order to gather the data of multi-modal motion sets, we used video analysis of the actress for human modalities and did user-test for non-human modalities. We developed a behavior edit-toolkit to make and modify robot's behaviors easily. And also we proposed stereotyped actions according to the robot's personality for diversifying behavior expressions. Defined 25 behaviors based on the sentence types and emotions are applied to Silbot, a test-bed robot in CIR of Korea, and used for the English education.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129087450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A ubiquitous Smart Parenting and Customized Education service robot","authors":"Ho-Joon Lee, Jong C. Park","doi":"10.1109/ARSO.2010.5679634","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679634","url":null,"abstract":"In this paper, we introduce a u-SPACE service robot, designed to help children who may be left alone while their caregivers are away from home. In order to protect children from indoor dangers, this service robot provides customized guiding messages taking into account the location information and behavioral patterns of a child, after the detection of dangerous objects and situations. And these guiding messages are vocalized by our emotional speech generation system. This emotional speech generation system is also being put to use in reading fairy tales to a child, as a part of a home education service. The outward appearance of the u-SPACE service robot is modeled on a teddy bear, in order to provide a safe and comforting environment for children. Two touch sensors designed for basic interactions between a child and the robot are installed on each hand of the robot, and an RFID tag is placed inside the body. A PDA with a Wi-Fi communication module, a touch screen, and a speaker is used as a main operating device of this u-SPACE service robot.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128390989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"uRON v1.5: A device-independent and reconfigurable robot navigation library","authors":"Sunglok Choi, Jae-Y. Lee, Wonpil Yu","doi":"10.1109/ARSO.2010.5679696","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679696","url":null,"abstract":"Many laboratories and companies are developing a mobile robot with various sensors and actuators. They implement navigation techniques usually tailored to their own robot. In this paper, we introduce a novel robot navigation library, Universal Robot Navigation (uRON). uRON is designed to be portable and independent from robot hardware and operating systems. Users can apply uRON to their robots with small amounts of codes. Moreover, uRON provides reusable navigation components and reconfigurable navigation framework. It contains the navigation components such as localization, path planning, path following, and obstacle avoidance. Users can create their own component using the existing ones. uRON also includes the navigation framework which assembles each component and wraps them as high-level functions. Users can achieve their robot service easily and quickly with this framework. We applied uRON to three service robots in Tomorrow City, Incheon, South Korea. Three robots had different hardwares and performed different services. uRON enables three robots movable and satisfies complex service requirements with less than 500 lines of codes.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"267 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124344409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scene space inference based on stereo vision","authors":"K. Lin, Han-Pang Huang, Sheng-Yen Lo, Chun-Hung Huang","doi":"10.1109/ARSO.2010.5680017","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5680017","url":null,"abstract":"This paper provides an intuitive way to inference the space of a scene using stereo cameras. We first segmented the ground out of the image by adaptively learning the ground model in the image. We then used the convex hull to approximate the scene space. Objects within the scene can also be detected with the stereo cameras. Finally, we organized the scene space and the objects within the scene into a graphical model, and then used particle filters to approximate the solution. Experiments were conducted to test the accuracy of the ground segmentation and the precision and recall of object detection within the scene. The precision and recall of object detection was about 50% in our system. With additional tracking of the object, the recall could improve approximately 5%. The result can be considered as prior knowledge for further image tasks, e.g. obstacle avoidance or object recognition.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126331247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical database based on feature parameters for various multimodal expression generation of robot","authors":"W. Kim, J. Park, Won Hyong Lee, M. Chung","doi":"10.1109/ARSO.2010.5679627","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679627","url":null,"abstract":"In this paper, we propose reliable, diverse, expansible, and usable expression generation system. Proposed system is to generate synchronized multimodal expression automatically based on hierarchical database and context information such as robot's emotional state and sentence robot is trying to say. Compared to prior system, our system based on feature parameters is much easier to generate new expression and modify expressions according to the robot's emotion. In our system, there are sentence module, emotion module, and expression module. We focus on only robot's expression module. In order to generate expressions automatically, we use outputs of the sentence and emotion modules. We have classified robot sentence under 13 types and robot emotion under 3 types. About all 39 categories and body language, we have constructed behavior database with 128 expressions. For the reliability and the variety of expressions, a professional actor's expression data have been obtained and we requested a cartoonist to draw sketch of robot's expressions corresponding to defined categories.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131068841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Control performance of a motion controller for robot-assisted surgery","authors":"Sungchoon Lee, Jeong-Geun Lim, Kyunghwan Kim","doi":"10.1109/ARSO.2010.5679619","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679619","url":null,"abstract":"Total Knee/Hip Replacement(TKR/THR) is one of the most important orthopedic surgical techniques of this century. If patient's whole joint is damaged, an artificial joint (total hip/knee replacement surgery) can relieve patient's pain and help the patient get back normal activities. The goal for TKR/THR is to relieve the pain in the joint caused by the damage done to the cartilage. The surgeon will replace the damaged parts of the joint. For example, in an arthritic knee the damaged ends of the bones and cartilage are replaced with metal and plastic surfaces that are shaped to restore knee movement and function. In an arthritic hip, the damaged ball (the upper end of the femur) is replaced by a metal ball attached to a metal stem fitted into the femur, and a plastic socket is implanted into the pelvis, replacing the damaged socket. Using the “new” joint shortly after the operation is strongly encouraged. After a TKR/TRH, patient will often stand and begin walking the day after surgery.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115515694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trials of cybernetic human HRP-4C toward humanoid business","authors":"K. Miura, Shin'ichiro Nakaoka, Shuji Kajita, K. Kaneko, F. Kanehiro, M. Morisawa, K. Yokoi","doi":"10.1109/ARSO.2010.5679688","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679688","url":null,"abstract":"We have developed a humanoid robot (a cybernetic human called “HRP-4C”) which has the appearance and shape of a human being, can walk and move like one, and interacts with humans using speech recognition. Standing 158 cm tall and weighing 43 kg (including the battery), with the joints and dimensions set to average values for young Japanese females, HRP-4C looks very human-like. In this paper, we present ongoing challenges to create a new bussiness in the contents industry with HRP-4C.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114895152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust robot's attention for human based on the multi-modal sensor and robot behavior","authors":"Sangseok Yun, C. G. Kim, Munsang Kim, Mun-Taek Choi","doi":"10.1109/ARSO.2010.5680037","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5680037","url":null,"abstract":"In this paper, we propose the robust robot's attention for human based on the multi-modal sensor and robot behavior. All of the robot components for attention are operating on the intelligent robot software architecture. Human search collect the human information from the sensing information for vision and voice in various illumination change and dynamic environments. And human tracker follows the face trajectory with efficiency and safety. Unlike common belief, the biggest obstacle and competitive factor in robotics is expected to be human robot interaction. Since robot intelligence is not yet at a practical level, the creation of a general interaction manager using an intelligence system will not be realized for some time. Instead, it focuses on one-way speaking and expressing based on emotion to human. Experimental results show that the proposed scheme works successfully in real environments.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123959597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design of mine detection robot for Korean mine field","authors":"S. Kang, Junho Choi, SeungBeum Suh, Sungchul Kang","doi":"10.1109/ARSO.2010.5679622","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679622","url":null,"abstract":"This paper presents the critical design constraints of mine detection robots for Korean minefield. As a part of a demining robot development project, the environment of Korean minefield was investigated, and the requirements for suitable robot design were determined. Most of landmines in Korean minefield were buried close to the demilitarized zone (DMZ) more than half of a century ago. The areas have not been urbanized at all since the Korea War, and the potential locations of the explosives by military tactics have been covered by vegetation. Therefore, at the initial stage of the demining robot system development, the target areas were investigated and the suitable design for Korean minefield terrain was determined. The design includes a track type main platform with a simple moving arm and a mine detection sensor (consists of a metal detector and a GPR at this stage). In addition, in order to maintain the effective distance between the landmine sensors and ground surface, a distance sensing technique for terrain adaptability was developed and briefly introduced in this paper. The overall design of this robot was determined by considering the speed of the whole mine detection process and a point of economic view to replace human in minefield. Thus, the detail of the conceptual design and the mine detection scenario is presented in this paper.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133640722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye motion generation in a mobile service robot ‘SILBOT II’","authors":"Kyung-Geune Oh, Chan-Yul Jung, Mun-Taek Choi, Seung-Jong Kim","doi":"10.1109/ARSO.2010.5679626","DOIUrl":"https://doi.org/10.1109/ARSO.2010.5679626","url":null,"abstract":"Face of a robot capable of facial expression has a complex structure inside consisting of many actuators, sensors, and other parts. Specially, eyes and its neighbor elements such as upper/lower eyelids, cameras and eyelids, are very densely arranged. In this paper, a compact eyeball module is suggested which is driven with Teflon tube enveloped metal wires, so that one directional motion can be made with one wire pushed or pulled by a motor. And the cylindrical and ball-type pivot parts are used in ends of the tubes and wires, so as to permit their rotation during eye movement. The performance of the suggested module is verified through the comparison between experimental and analytical results. The results are well matched with each other and the degree of positioning precision is also good.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"238 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114187220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}