DriVQA: A gaze-based dataset for visual question answering in driving scenarios

Kaavya Rekanar, John M. Joyce, Martin Hayes, Ciarán Eising

Data in Brief, Volume 59, Article 111367. Published 2025-02-03. DOI: 10.1016/j.dib.2025.111367
URL: https://www.sciencedirect.com/science/article/pii/S235234092500099X
Abstract
This paper presents DriVQA, a novel dataset that combines gaze plots and heatmaps with visual question answering (VQA) data from participants who were presented with driving scenarios. VQA is proposed as part of a solution for improving the trustworthiness and interpretability of decision-making by autonomous vehicles. Collected using the Tobii Pro X3-120 eye-tracking device, the DriVQA dataset provides a comprehensive mapping of where participants direct their gaze when presented with images of driving scenes, followed by related questions and answers from every participant. For each scenario, the dataset contains images of driving situations, the associated questions, participant answers, gaze plots, and heatmaps. It is being used to study the subjectivity inherent in VQA. Its detailed gaze-tracking data offers a unique perspective on how individuals perceive and interpret visual scenes, making it an essential resource for training VQA models that rely on human-like attention. The dataset is a valuable tool for investigating human cognition and behaviour in dynamic, real-world scenarios. DriVQA is highly relevant for VQA models, as it allows trained systems to learn from human-like attention behaviour when making decisions based on visual input. The dataset has the potential to drive advancements in VQA research and development by improving the safety and intelligence of driving systems through enhanced visual understanding and interaction. DriVQA has significant potential for reuse in various research areas, including the development of advanced VQA models, attention analysis, and human-computer interaction studies. Its comprehensive gaze plots and heatmaps can also be leveraged to improve applications in autonomous driving, driver assistance systems, and cognitive science research, making it a versatile resource for both academic and industrial purposes.
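As a rough illustration of the per-scenario structure described above (scene image, question, per-participant answers, gaze plots, and heatmaps), the sketch below shows one hypothetical way such records could be organised and loaded in Python. The field names, directory layout, and file formats are assumptions made for illustration only; they do not describe DriVQA's actual schema or distribution format.

```python
# Minimal, hypothetical sketch of a per-scenario DriVQA-style record.
# All paths, file names, and field names are illustrative assumptions.
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List


@dataclass
class ParticipantResponse:
    participant_id: str
    answer: str            # free-text answer to the scenario question
    gaze_plot_path: Path   # per-participant gaze plot image (assumed PNG)
    heatmap_path: Path     # per-participant fixation heatmap (assumed PNG)


@dataclass
class DriVQAScenario:
    scenario_id: str
    scene_image_path: Path  # driving-scene image shown to participants
    question: str           # question asked about the scene
    responses: List[ParticipantResponse] = field(default_factory=list)


def load_scenarios(root: Path) -> Dict[str, DriVQAScenario]:
    """Assemble scenarios from an assumed directory layout:
    root/<scenario_id>/{scene.png, question.txt,
    answers/<pid>.txt, gaze/<pid>.png, heatmaps/<pid>.png}."""
    scenarios: Dict[str, DriVQAScenario] = {}
    for scenario_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        scenario = DriVQAScenario(
            scenario_id=scenario_dir.name,
            scene_image_path=scenario_dir / "scene.png",
            question=(scenario_dir / "question.txt").read_text().strip(),
        )
        for answer_file in sorted((scenario_dir / "answers").glob("*.txt")):
            pid = answer_file.stem
            scenario.responses.append(
                ParticipantResponse(
                    participant_id=pid,
                    answer=answer_file.read_text().strip(),
                    gaze_plot_path=scenario_dir / "gaze" / f"{pid}.png",
                    heatmap_path=scenario_dir / "heatmaps" / f"{pid}.png",
                )
            )
        scenarios[scenario.scenario_id] = scenario
    return scenarios
```

Under these assumptions, a scenario's responses can then be iterated to pair each participant's answer with their gaze plot and heatmap, which is the pairing a VQA model would use to learn attention-conditioned answering.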
Journal introduction:
Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.