Gabriel D. C. Seppelfelt, Tomoki Asaka, T. Nagai, Soh Yukizaki
{"title":"HumanoidBot:全身人形聊天系统","authors":"Gabriel D. C. Seppelfelt, Tomoki Asaka, T. Nagai, Soh Yukizaki","doi":"10.1109/Humanoids53995.2022.10000209","DOIUrl":null,"url":null,"abstract":"State-of-the-art chatbot models have been refined over the past few years, especially in the category of open-domain chatbots, thanks to the development of new model architectures, capable of storing the context of conversations more reliably, or the development of larger data sets to train such models. With such improvements, chatbot text applications can simulate human replies. However, when one of those text applications is implemented in a humanoid robot, that uses mainly sound to communicate, the result may not be as humanlike as the text by itself. In this paper, we develop a full-body humanoid chitchat system: the HumanoidBot. It has the objective to further discuss the influence of gestures in full-body humanoid robots performed simultaneously with speech utterances, aiming to improve its humanlikeness in face-to-face human-robot open-domain dialogues.","PeriodicalId":180816,"journal":{"name":"2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)","volume":"69 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"HumanoidBot: Full-Body Humanoid Chitchat System\",\"authors\":\"Gabriel D. C. Seppelfelt, Tomoki Asaka, T. Nagai, Soh Yukizaki\",\"doi\":\"10.1109/Humanoids53995.2022.10000209\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"State-of-the-art chatbot models have been refined over the past few years, especially in the category of open-domain chatbots, thanks to the development of new model architectures, capable of storing the context of conversations more reliably, or the development of larger data sets to train such models. With such improvements, chatbot text applications can simulate human replies. However, when one of those text applications is implemented in a humanoid robot, that uses mainly sound to communicate, the result may not be as humanlike as the text by itself. In this paper, we develop a full-body humanoid chitchat system: the HumanoidBot. 
It has the objective to further discuss the influence of gestures in full-body humanoid robots performed simultaneously with speech utterances, aiming to improve its humanlikeness in face-to-face human-robot open-domain dialogues.\",\"PeriodicalId\":180816,\"journal\":{\"name\":\"2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)\",\"volume\":\"69 5 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/Humanoids53995.2022.10000209\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/Humanoids53995.2022.10000209","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
State-of-the-art chatbot models have been refined over the past few years, especially in the category of open-domain chatbots, thanks to new model architectures that retain conversational context more reliably and to larger data sets for training such models. With these improvements, text-based chatbot applications can produce convincingly human replies. However, when such a text application is deployed in a humanoid robot that communicates mainly through speech, the result may not feel as humanlike as the text alone. In this paper, we develop a full-body humanoid chitchat system, the HumanoidBot, to further examine how gestures performed by a full-body humanoid robot simultaneously with speech utterances affect its humanlikeness in face-to-face, open-domain human-robot dialogues.
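The central idea of the abstract is pairing each spoken utterance with a gesture that runs at the same time. The following is a minimal sketch of that idea only, not the authors' implementation: the functions speak() and play_gesture(), and the gesture label, are hypothetical stand-ins for a TTS engine and a robot gesture controller, and the two are simply run on concurrent threads so the motion overlaps the utterance.

import threading

# Hypothetical interfaces (assumptions for illustration, not the paper's API):
# speak() stands in for a text-to-speech call, play_gesture() for a motion command.
def speak(text: str) -> None:
    print(f"[speech] {text}")

def play_gesture(name: str) -> None:
    print(f"[gesture] {name}")

def utter_with_gesture(text: str, gesture: str) -> None:
    """Run speech and gesture concurrently so the motion overlaps the utterance."""
    speech_thread = threading.Thread(target=speak, args=(text,))
    gesture_thread = threading.Thread(target=play_gesture, args=(gesture,))
    speech_thread.start()
    gesture_thread.start()
    speech_thread.join()
    gesture_thread.join()

if __name__ == "__main__":
    utter_with_gesture("Nice to meet you!", "wave_right_hand")

In a real system the gesture would be selected to match the utterance and timed against the synthesized audio; the sketch above only illustrates the simultaneity that the paper studies.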