J. F. Maas, Thorsten P. Spexard, J. Fritsch, B. Wrede, G. Sagerer
{"title":"BIRON,主题是什么?一种改进人机交互的多模态主题跟踪器","authors":"J. F. Maas, Thorsten P. Spexard, J. Fritsch, B. Wrede, G. Sagerer","doi":"10.1109/ROMAN.2006.314390","DOIUrl":null,"url":null,"abstract":"Creating robots with extendable social skills and interaction capabilities that suffice their operation in the real world with naive users is a very challenging task. In this paper we present a new approach using topic tracking on multi-modal dialogue to provide a mobile robot with a higher level situation awareness in human-robot interaction. The robot is no longer operating in laboratory surroundings, but in its own real world flat. We describe how our topic tracking approach is implemented in this integrated system, operating on verbal speech input. Different modalities like data from video cameras and laser scans are used as additional cues to a semantic understanding and grouping of user utterances into different topics. Both the amount of topics and the according topic names are created dynamically. Evaluating an offline speech corpus demonstrates the suitability of our approach. It is now possible to ask \"BIRON, what's the topic?\", making the interaction more social","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"31","resultStr":"{\"title\":\"BIRON, what's the topic? A Multi-Modal Topic Tracker for improved Human-Robot Interaction\",\"authors\":\"J. F. Maas, Thorsten P. Spexard, J. Fritsch, B. Wrede, G. Sagerer\",\"doi\":\"10.1109/ROMAN.2006.314390\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Creating robots with extendable social skills and interaction capabilities that suffice their operation in the real world with naive users is a very challenging task. In this paper we present a new approach using topic tracking on multi-modal dialogue to provide a mobile robot with a higher level situation awareness in human-robot interaction. The robot is no longer operating in laboratory surroundings, but in its own real world flat. We describe how our topic tracking approach is implemented in this integrated system, operating on verbal speech input. Different modalities like data from video cameras and laser scans are used as additional cues to a semantic understanding and grouping of user utterances into different topics. Both the amount of topics and the according topic names are created dynamically. Evaluating an offline speech corpus demonstrates the suitability of our approach. 
It is now possible to ask \\\"BIRON, what's the topic?\\\", making the interaction more social\",\"PeriodicalId\":254129,\"journal\":{\"name\":\"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2006-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"31\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ROMAN.2006.314390\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROMAN.2006.314390","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
BIRON, what's the topic? A Multi-Modal Topic Tracker for improved Human-Robot Interaction
Creating robots with extendable social skills and interaction capabilities sufficient for operation in the real world with naive users is a very challenging task. In this paper we present a new approach that uses topic tracking on multi-modal dialogue to provide a mobile robot with a higher level of situation awareness in human-robot interaction. The robot no longer operates in laboratory surroundings, but in its own real-world flat. We describe how our topic tracking approach is implemented in this integrated system, operating on verbal speech input. Different modalities, such as data from video cameras and laser scans, are used as additional cues for semantically understanding and grouping user utterances into different topics. Both the number of topics and the corresponding topic names are created dynamically. An evaluation on an offline speech corpus demonstrates the suitability of our approach. It is now possible to ask "BIRON, what's the topic?", making the interaction more social.
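The paper itself does not include source code. As an illustration only, the following is a minimal Python sketch of the general idea described in the abstract: grouping incoming utterances into topics, with the number of topics and their names created dynamically. The keyword-overlap similarity measure, the threshold, and all names in the code are assumptions made for this sketch and are not taken from the authors' system, which additionally fuses cues from video cameras and laser scans.

```python
# Minimal illustrative sketch of dynamic topic grouping (NOT the authors' implementation).
# Assumptions: crude keyword-overlap (Jaccard) similarity, a hand-picked threshold,
# and topic names taken from the most frequent keyword of each group.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "what", "whats", "you", "i", "it", "this", "to", "of"}


def keywords(utterance: str) -> set[str]:
    """Lowercase, strip punctuation, and drop stopwords to get a rough keyword set."""
    tokens = [w.strip(".,?!").lower() for w in utterance.split()]
    return {w for w in tokens if w and w not in STOPWORDS}


class TopicTracker:
    """Groups utterances into topics; opens a new topic when similarity is too low."""

    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold          # hypothetical similarity cut-off
        self.topics: list[Counter] = []     # one keyword counter per topic
        self.current: int | None = None     # index of the active topic

    def _similarity(self, words: set[str], topic: Counter) -> float:
        """Jaccard overlap between an utterance and a topic's accumulated keywords."""
        topic_words = set(topic)
        if not words or not topic_words:
            return 0.0
        return len(words & topic_words) / len(words | topic_words)

    def track(self, utterance: str) -> str:
        """Assign the utterance to the best-matching topic or create a new one."""
        words = keywords(utterance)
        scores = [self._similarity(words, t) for t in self.topics]
        if scores and max(scores) >= self.threshold:
            self.current = max(range(len(scores)), key=scores.__getitem__)
        else:
            self.topics.append(Counter())
            self.current = len(self.topics) - 1
        self.topics[self.current].update(words)
        return self.topic_name()

    def topic_name(self) -> str:
        """Answer 'what's the topic?' with the active topic's most frequent keyword."""
        if self.current is None or not self.topics[self.current]:
            return "unknown"
        return self.topics[self.current].most_common(1)[0][0]


if __name__ == "__main__":
    tracker = TopicTracker()
    for line in ["Can you follow me to the kitchen?",
                 "The kitchen is over there.",
                 "Look at this plant, do you like plants?"]:
        print(line, "->", tracker.track(line))
```

This sketch covers only the verbal-input side; in the paper's integrated system, additional modalities such as camera and laser data serve as further cues for the grouping.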