Don't you see them?: towards gaze-based interaction adaptation for driver-vehicle cooperation

Authors: Marcel Walch, David Lehr, Mark Colley, M. Weber
Published: 2019-09-21, Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications: Adjunct Proceedings
DOI: 10.1145/3349263.3351338
Citations: 10

Abstract: Highly automated driving is evolving steadily and is gradually entering public roads. Nevertheless, some driving-related tasks can still be handled more efficiently by humans. Cooperation with the human user at a higher abstraction level of the dynamic driving task has been suggested as a way to overcome operational boundaries. This cooperation includes, for example, deciding whether pedestrians intend to cross the road ahead. We suggest that systems should monitor their users when they have to make such decisions. Moreover, these systems can adapt the interaction to support their users. In particular, they can match the user's gaze direction against objects in their environmental model, such as vulnerable road users, to guide the user's focus towards overlooked objects. We conducted a pilot study to investigate the need for and feasibility of this concept. Our preliminary analysis showed that some participants overlooked pedestrians who intended to cross the road, which such systems could prevent.
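The core idea of matching gaze direction against objects in the vehicle's environmental model can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: it assumes a 2D top-down model, a single eye position, and a simple visual-angle threshold, and all names (`overlooked_objects`, the threshold of 5 degrees, the example coordinates) are illustrative choices.

```python
import math

def angle_between(gaze, target, eye):
    """Angle in degrees between the gaze direction vector and the
    eye-to-target vector (2D, top-down coordinates)."""
    to_target = (target[0] - eye[0], target[1] - eye[1])
    dot = gaze[0] * to_target[0] + gaze[1] * to_target[1]
    norm = math.hypot(*gaze) * math.hypot(*to_target)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def overlooked_objects(eye, gaze_samples, objects, threshold_deg=5.0):
    """Return IDs of objects never fixated: no gaze sample points
    within threshold_deg of the direction to the object."""
    missed = []
    for obj_id, pos in objects.items():
        seen = any(angle_between(g, pos, eye) <= threshold_deg
                   for g in gaze_samples)
        if not seen:
            missed.append(obj_id)
    return missed

# Example: the driver looks mostly straight ahead (+x direction);
# a pedestrian well off to the right is never fixated, so an
# adaptive system could direct the driver's attention to them.
eye = (0.0, 0.0)
gaze_samples = [(1.0, 0.0), (1.0, 0.05)]   # near-straight gaze rays
objects = {"ped_ahead": (20.0, 1.0),       # close to the gaze line
           "ped_right": (10.0, 8.0)}       # ~39 degrees off-axis
print(overlooked_objects(eye, gaze_samples, objects))  # ['ped_right']
```

A production system would instead work with 3D gaze rays from a driver-monitoring camera, per-object dwell times, and the uncertainty of both the gaze estimate and the object tracker, but the matching step reduces to this kind of angular comparison.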