"I am going this way": Gazing Eyes on Self-Driving Car Show Multiple Driving Directions
Xinyue Gui, Koki Toda, S. Seo, Chia-Ming Chang, T. Igarashi
Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 2022-09-17
DOI: 10.1145/3543174.3545251
Citations: 6
Abstract
Modern cars express three moving directions (left, right, straight) using turn signals (i.e., blinkers), which is insufficient when multiple paths lead toward the same side. Drivers therefore give additional cues (e.g., gestures, eye contact) in conventional car-to-pedestrian interaction. As more self-driving cars without drivers join public roads, additional communication channels are needed. In this work, we discuss the problem of self-driving cars expressing their fine-grained moving direction to pedestrians, beyond what blinkers can convey. We built anthropomorphic robotic eyes and mounted them on a real car, applying a gazing technique grounded in common knowledge: one gazes in the direction one is heading. In a formal VR-based user study, we found that the eyes can convey fine-grained directions: participants distinguished five directions with a lower error rate and in less time than with conventional turn signals.
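The core mapping the abstract describes (the eyes gaze toward the car's intended path) can be illustrated with a short sketch. The following is a minimal illustration under assumptions of our own, not the authors' implementation: the function name `pupil_offset`, the parameter values, and the five test angles are all hypothetical stand-ins for whatever the paper's robotic-eye controller actually uses.

```python
import math

# Minimal sketch (assumed, not the authors' code): map a car's intended
# heading angle to a horizontal pupil offset on a round robotic eye, so
# the eye appears to "gaze" where the car is going.

EYE_RADIUS_PX = 100      # radius of the rendered eyeball (assumed)
PUPIL_RADIUS_PX = 30     # radius of the rendered pupil (assumed)
MAX_HEADING_DEG = 90.0   # headings beyond +/-90 degrees are clamped

def pupil_offset(heading_deg: float) -> tuple[float, float]:
    """Return the (x, y) pupil-center offset for a heading angle.

    heading_deg: 0 = straight ahead, negative = left, positive = right.
    The pupil slides along the horizontal axis, clamped so it always
    stays inside the eyeball outline.
    """
    theta = max(-MAX_HEADING_DEG, min(MAX_HEADING_DEG, heading_deg))
    # Maximum travel that keeps the pupil fully inside the eye.
    max_travel = EYE_RADIUS_PX - PUPIL_RADIUS_PX
    # sin() gives a smooth, saturating mapping from angle to displacement.
    x = max_travel * math.sin(math.radians(theta))
    return (x, 0.0)

if __name__ == "__main__":
    # Five example directions, loosely mirroring the study's five-way
    # discrimination task (exact angles in the paper may differ).
    for deg in (-90, -45, 0, 45, 90):
        print(f"heading {deg:+4d} deg -> pupil offset {pupil_offset(deg)}")
```

With this kind of mapping, intermediate headings (e.g., a shallow versus a sharp left turn) produce visibly different pupil positions, which is exactly the fine-grained distinction a binary blinker cannot express.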