Thomas Grah, F. A. Epp, Martin Wuchse, Alexander Meschtscherjakov, Frank Gabler, A. Steinmetz, M. Tscheligi
"Dorsal haptic display: a shape-changing car seat for sensory augmentation of rear obstacles"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799281
Abstract: In the automotive domain, shape-changing interfaces hold potential as an interaction modality because, unlike traditional screen-based interfaces, they do not rely on the driver's vision. In this paper we present a shape-changing car-seat backrest that lets the driver haptically sense obstacles in the rear periphery. We implemented a prototype with a 4 × 4 array of servomotors that changes the shape of the car-seat backrest. In contrast to other approaches, which often use the seat for vibrotactile cues or warning signals, our approach delivers a continuous dorsal sensory augmentation of the rear periphery. The prototype's capability to convey spatial information haptically was evaluated in a pilot lab study with 10 participants. The study showed that the interface is able to induce sensory augmentation, and most participants were able to spatially locate the stimuli.

Missie Smith, J. Streeter, G. Burnett, Joseph L. Gabbard
"Visual search tasks: the effects of head-up displays on driving and task performance"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799291
Abstract: This study investigated the differences between head-up display (HUD) and head-down display (HDD) systems in vehicles, with the ultimate goal of increasing driver performance and safety. Our initial focus was on comparing drivers' visual task performance (and associated driving behavior) while using a HUD versus an HDD. In a medium-fidelity driving simulator, 16 experienced drivers performed two types of visual search task while driving: a structured (text) task and a semi-structured (grid) task. They first completed a baseline drive with no secondary task, followed by four drives under different conditions: HUD-Text, HUD-Grid, HDD-Text, and HDD-Grid. The study returned mixed results. HUDs generally led to more secondary task errors, yet users still preferred HUDs over HDDs. Response times were unevenly affected by display type: there was no difference between HUD and HDD in the grid tasks, but the HUD was associated with significantly faster task performance in the text tasks. These results suggest that HUDs may offer advantages to drivers in some task scenarios, but more research is needed to understand under what conditions they do not add value.

Fabius Steinberger, R. Schroeter, Verena Lindner, Zachary Fitz-Walter, Joshua V. Hall, Daniel M. Johnson
"Zombies on the road: a holistic design approach to balancing gamification and safe driving"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799260
Abstract: This paper explores novel driving experiences that make use of gamification and augmented reality in the car. We discuss our design considerations, which are grounded in road safety psychology and video game design theory, and aim to address the tension between safe driving practices and player engagement. Specifically, we propose a holistic, iterative approach inspired by game design cognition and share the insights generated through applying this process. We present preliminary game concepts that blend digital components with physical elements of the driving environment, and we highlight how the design process helped us iteratively evolve these concepts towards being safer while maintaining fun. These insights, and game design cognition itself, will be useful to the AutomotiveUI community investigating similar novel driving experiences.

{"title":"Investigating a new display format for CarPlay to decrease impact of mode change inputs","authors":"Thomas M. Gable, B. Walker, Andrew S. Amontree","doi":"10.1145/2799250.2799277","DOIUrl":"https://doi.org/10.1145/2799250.2799277","url":null,"abstract":"\"Mobile integration\" operating systems such as CarPlay, Android Auto, and Microsoft Auto allow a mobile phone user to leverage the displays and controls built into a vehicle. As these systems become more prevalent, user interfaces must be designed such that drivers can use them effectively and safely along with the \"native\" infotainment interface in the vehicle. Currently, the mobile operating system takes over all pixels in a vehicle's infotainment touch screen. This forces users to switch between two very different interfaces if they need to access both native and mobile-based applications. This \"mode change\" creates possible confusion and lack of visual momentum, which could affect driving and safety. The current paper compared the \"mode change\" interaction that mobile integration systems are using to a new, \"windowed\" design. We studied infotainment task and driving performance, visual behaviors, workload, and preferences. Results point towards the windowed design improving task performance, decreasing workload, and reducing negative impacts on driving performance. The impact of these findings and further research is discussed.","PeriodicalId":443866,"journal":{"name":"Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115608025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sandra Trösterer, Martin Wuchse, Christine Döttlinger, Alexander Meschtscherjakov, M. Tscheligi
"Light my way: visualizing shared gaze in the car"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799258
Abstract: In demanding driving situations, the front-seat passenger can support the driver by, e.g., monitoring the scene or providing hints about upcoming hazards or turning points. Fast and efficient communication of such spatial information can help the driver react properly and with more foresight. As shown in previous research, this spatial referencing can be facilitated by providing the driver with a visualization of the front-seat passenger's gaze. In this paper, we focus on the question of how the gaze should be visualized for the driver, taking into account the feasibility of implementation in a real car. We present the results of a driving simulator study in which we compared an LED visualization (glowing LEDs on an LED stripe mounted at the bottom of the windshield, indicating the horizontal position of the gaze) with a visualization of the gaze as a dot in the simulated environment. Our results show that the LED visualization brings benefits with regard to driver distraction, but also has disadvantages with regard to accuracy and control for the front-seat passenger.

Sonia Baltodano, S. Sibi, Nikolas Martelaro, Nikhil Gowda, Wendy Ju
"The RRADS platform: a real road autonomous driving simulator"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799288
Abstract: This platform paper introduces a methodology for simulating an autonomous vehicle on open public roads. The paper outlines the technology and protocol needed for running these simulations and describes an instance in which the Real Road Autonomous Driving Simulator (RRADS) was used to evaluate three prototypes in a between-participants study design. Thirty-five participants were interviewed at length before and after entering the RRADS. Although our study did not use overt deception (the consent form clearly states that a licensed driver is operating the vehicle), the protocol was designed to support suspension of disbelief, and several participants who did not read the consent form closely believed strongly that they were interacting with a fully autonomous vehicle. The RRADS platform provides a lens onto the attitudes and concerns that people in real-world autonomous vehicles might have, and also points to ways in which a protocol deliberately using misdirection can elicit ecologically valid reactions from study participants.

Nicole Perterer, Alexander Meschtscherjakov, M. Tscheligi
"Co-Navigator: an advanced navigation system for front-seat passengers"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799265
Abstract: In-car navigation systems are typically intended to support the driver in the navigation task. While the field of automotive HCI research abounds with driver-focused navigation design, relatively little attention has been directed to the front-seat passenger as a source of support for the driver in, e.g., navigation. Based on our ethnographic research, we designed and prototyped a system called Co-Navigator, devised to be operated by the front-seat passenger in order to support the driver in navigation and driving-related tasks. The Co-Navigator is an interactive, tablet-based navigation app that provides different kinds of information, such as landmarks and upcoming hazard warnings (e.g., construction sites or potholes). In this paper, we describe the prototype, its elements, and an explorative in-situ evaluation. Results show that the Co-Navigator is a valuable in-car navigation device for the front-seat passenger; the map overview and hazard warnings in particular were appreciated.

{"title":"Depth discrimination between augmented reality and real-world targets for vehicle head-up displays","authors":"M. Long, G. Burnett, Robert Hardy, H. Allen","doi":"10.1145/2799250.2799292","DOIUrl":"https://doi.org/10.1145/2799250.2799292","url":null,"abstract":"Augmented reality (AR) adds virtual graphics, sounds or data to a real-world environment. Future Head-Up Displays in vehicles will enable AR images to be presented at varying depths, potentially enabling additional cues to be provided to drivers to facilitate task performance. In order to correctly position such AR imagery, it is necessary to know at what point the virtual image is discriminable in depth from a real-world object. In a two-alternative forced-choice psychophysical depth judgment task, 40 observers judged if an AR image (a green diamond) appeared in front or behind a static 'pedestrian' target. Depth thresholds for the AR image were tested with the pedestrian target at 5m, 10m, 20m and 25m and at six locations relative to the pedestrian. The AR image was presented at different heights in the visual field, (above, middle and below the real-world target) and across the horizontal plane (left, middle, right of the real-world target). Participants were more likely to report that the AR image was presented in front of the target rather than behind. Inconsistent with previous findings, no overall effects of height or horizontal position were found. Depth thresholds scaled with distance, with larger thresholds at further distances. Findings also showed large individual differences and slow response times (above 2.5s average), suggesting of difficulties judging AR in depth. Recommendations are made regarding where a HUD image should be located in depth if a designer wishes users to reliably perceive the image to be in front/alongside or behind a real-world object.","PeriodicalId":443866,"journal":{"name":"Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127230078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ariel Telpaz, Brian Rhindress, I. Zelman, Omer Tsimhoni
"Haptic seat for automated driving: preparing the driver to take control effectively"
In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '15), September 2015. DOI: https://doi.org/10.1145/2799250.2799267
Abstract: Drivers' situation awareness is known to be remarkably low during automated driving, which can result in delayed and inefficient responses when drivers are requested to resume control of the vehicle. The present study examined the usefulness of a haptic seat that projects spatial information about approaching vehicles to improve drivers' preparedness to take control. The results of a simulator study with 26 participants, using behavioral and eye-tracking measures, showed that when required to regain control, drivers with the haptic seat reacted faster in scenarios requiring a lane change. The haptic seat also reduced the probability that participants would slow down below acceptable speeds on a freeway. Eye tracking showed that, with the haptic seat, drivers scanned the environment more systematically in the first two seconds following the transition of control. Overall, these findings suggest that the haptic seat can play a significant role in keeping drivers aware of surrounding traffic during automated driving and, consequently, facilitate transitions of control between the vehicle and the driver.

{"title":"Rules of conduct for autonomous vehicles","authors":"Marin Sikkenk, J. Terken","doi":"10.1145/2799250.2799270","DOIUrl":"https://doi.org/10.1145/2799250.2799270","url":null,"abstract":"When developing autonomous vehicles, we should not only focus on technology development but also think about what behaviour to program into such vehicles. This paper addresses the question of whether we should enable users to influence the politeness of autonomous vehicles. A survey was conducted with 58 participants to probe people's willingness to display polite traffic behaviour. Scenarios were shown to people in which the weather conditions and the vulnerability of road users were manipulated, and people were asked to indicate whether they would give right of way or not. It was found that the willingness to give right of way was influenced by the weather conditions, the vulnerability of road users and the driving style of the participants. We conclude that traffic politeness varies across people and situations, and that this finding raises concerns for a development where autonomous vehicles are equipped with a standard uniform behaviour.","PeriodicalId":443866,"journal":{"name":"Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132504182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}