Estimation of User's Internal State before the User's First Utterance Using Acoustic Features and Face Orientation
Yuya Chiba, Masashi Ito, A. Ito
DOI: 10.1109/HSI.2012.13
2012 5th International Conference on Human System Interactions, 2012-06-06
Introducing user models (e.g., models of a user's beliefs, skills, and familiarity with the system) is believed to increase the flexibility of a dialogue system's responses. Conventionally, the internal state is estimated from linguistic information in the previous utterance, but this approach cannot be applied to a user who has not yet made an input utterance. We are therefore developing a method to estimate the internal state of a spoken dialogue system's user before his/her first input utterance. In a previous report, we used three acoustic features and a visual feature based on manual labels. In this paper, we introduce two new features for the estimation: the length of filled pauses and face orientation angles. We then examine the effectiveness of the proposed features experimentally. As a result, we obtained a three-class discrimination accuracy of 85.6% in an open test, 1.5 points higher than the result obtained with the previous feature set.
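As a rough illustration of the kind of pipeline the abstract describes, not the paper's actual implementation, the idea of combining acoustic features, filled-pause length, and face orientation angles into a feature vector for three-class discrimination could be sketched as follows. The feature names and the nearest-centroid classifier here are assumptions; the paper does not specify its feature layout or discriminator in the abstract.

```python
import numpy as np

def make_feature_vector(acoustic_feats, filled_pause_len, face_yaw, face_pitch):
    """Concatenate acoustic statistics with the two proposed features.

    acoustic_feats: illustrative acoustic measurements (e.g. F0/power stats);
    filled_pause_len: duration of a filled pause ("uh", "um") in seconds;
    face_yaw, face_pitch: face orientation angles in degrees.
    All names here are hypothetical, not the paper's exact feature set.
    """
    return np.array(list(acoustic_feats) + [filled_pause_len, face_yaw, face_pitch])

class NearestCentroid:
    """Minimal three-class discriminator as a stand-in classifier."""

    def fit(self, X, y):
        # One centroid (mean feature vector) per internal-state class.
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Assign each sample to the class of its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]
```

In practice the features would need per-dimension normalization before distance-based classification, since angle and duration features live on very different scales.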