Situationally Aware In-Car Information Presentation Using Incremental Speech Generation: Safer, and More Effective
Spyros Kousidis, C. Kennington, Timo Baumann, Hendrik Buschmeier, S. Kopp, David Schlangen
DM@EACL. Published 2014-04-01. DOI: 10.3115/v1/W14-0212 (https://doi.org/10.3115/v1/W14-0212)

Abstract: Holding non-co-located conversations while driving is dangerous (Horrey and Wickens, 2006; Strayer et al., 2006), much more so than conversations with physically present, "situated" interlocutors (Drews et al., 2004). In-car dialogue systems typically resemble non-co-located conversations more, and share their negative impact (Strayer et al., 2013). We implemented and tested a simple strategy for making in-car dialogue systems aware of the driving situation: giving them the capability to interrupt themselves when a dangerous situation is detected, and to resume when it is over. We show that this improves both driving performance and recall of system-presented information, compared to a non-adaptive strategy.
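The core of the strategy above is an incremental presenter that pauses mid-utterance on a danger signal and resumes afterwards. A minimal sketch of that control loop, assuming a chunked utterance and an external danger flag (all names here are hypothetical, not the authors' implementation):

```python
# Hypothetical sketch of a self-interrupting incremental presenter:
# speech is emitted chunk by chunk, presentation pauses while the
# driving situation is flagged as dangerous, and resumes from the
# interrupted chunk once the danger has passed.

class IncrementalPresenter:
    def __init__(self, chunks):
        self.chunks = list(chunks)  # utterance split into incremental units
        self.pos = 0                # index of the next chunk to present
        self.log = []               # chunks actually spoken so far

    def step(self, dangerous: bool) -> bool:
        """Present one chunk unless the situation is dangerous.
        Returns True while there is more left to present."""
        if not dangerous and self.pos < len(self.chunks):
            self.log.append(self.chunks[self.pos])
            self.pos += 1
        return self.pos < len(self.chunks)

presenter = IncrementalPresenter(["Turn left", "at the next", "intersection."])
# Simulated danger signal per time step: danger appears after the first chunk.
for dangerous in [False, True, True, False, False]:
    presenter.step(dangerous)

print(presenter.log)  # all three chunks spoken, with a gap during the danger
```

Because the presenter keeps its position rather than discarding the utterance, no information is lost by the interruption, which is consistent with the improved recall the paper reports.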
Navigation Dialog of Blind People: Recovery from Getting Lost
J. Vystrcil, I. Maly, Jan Balata, Z. Míkovec
DM@EACL. Published 2014-04-01. DOI: 10.3115/v1/W14-0210 (https://doi.org/10.3115/v1/W14-0210)

Abstract: Navigation by blind people differs from navigation by sighted people, and so does the way a blind person recovers from getting lost. In this paper we present a qualitative analysis of dialogs, conducted over a mobile phone, between a lost blind person and a navigator. The research was carried out in two outdoor locations and one indoor location. The analysis revealed several areas where the dialog model must focus on detailed information, such as evaluating the instructions provided by the blind person and his/her ability to reliably locate navigation points.
In-Car Multi-Domain Spoken Dialogs: A Wizard of Oz Study
Sven Reichel, U. Ehrlich, A. Berton, M. Weber
DM@EACL. Published 2014-04-01. DOI: 10.3115/v1/W14-0201 (https://doi.org/10.3115/v1/W14-0201)

Abstract: Mobile Internet access via smartphones puts demands on in-car infotainment systems, as more and more drivers want to access the Internet while driving. Spoken dialog systems support the user with less distracting interaction than visual/haptic-based dialog systems. Developing an intuitive and usable spoken dialog system requires an extensive analysis of the interaction concept. We conducted a Wizard of Oz study to investigate how users carry out tasks involving multiple applications in a speech-only, user-initiative infotainment system while driving. Results show that users are not aware of different applications and use anaphoric expressions when switching tasks. Speaking styles vary and depend on the type of task and the dialog state. Users interact efficiently and provide multiple semantic concepts in one utterance. This places high demands on future spoken dialog systems.