S. Fujie, D. Watanabe, Yuhi Ichikawa, Hikaru Taniyama, Kosuke Hosoya, Yoichi Matsuyama, Tetsunori Kobayashi
DOI: 10.1109/ICHR.2008.4756014
Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, December 2008. Citations: 6.
Multi-modal integration for personalized conversation: Towards a humanoid in daily life
We propose and develop a humanoid robot with spoken-language communication ability. For a humanoid to live alongside people, spoken-language communication is fundamental, because it is the kind of communication we use every day. However, owing to the difficulty of speech recognition itself and of its implementation on a robot, no robot with such an ability had previously been developed. In this study, we propose a robot that implements techniques to overcome these problems. The proposed system includes three key components: image processing, sound source separation, and turn-taking timing control. Processing images captured by cameras mounted in the robot's eyes allows the robot to find and identify whom it should talk to. Sound source separation enables distant speech recognition, so that people need no special device such as a headset microphone. Turn-taking timing control is often lacking in conventional spoken dialogue systems, yet it is essential because conversation proceeds in real time. Experiments demonstrate the effectiveness of these components and include an example conversation.