{"title":"Eye Tracking Data Collection Protocol for VR for Remotely Located Subjects using Blockchain and Smart Contracts","authors":"Efe Bozkir, Shahram Eivazi, Mete Akgün, Enkelejda Kasneci","doi":"10.1109/AIVR50618.2020.00083","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00083","url":null,"abstract":"Eye tracking data collection in the virtual reality context is typically carried out in laboratory settings, which usually limits the number of participants or consumes at least several months of research time. In addition, under laboratory settings, subjects may not behave naturally due to being recorded in an uncomfortable environment. In this work, we propose a proof-of-concept eye tracking data collection protocol and its implementation to collect eye tracking data from remotely located subjects, particularly for virtual reality using Ethereum blockchain and smart contracts. With the proposed protocol, data collectors can collect high quality eye tracking data from a large number of human subjects with heterogeneous socio-demographic characteristics. The quality and the amount of data can be helpful for various tasks in datadriven human-computer interaction and artificial intelligence.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124996107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comfort of Aircraft Seats for Customers of Size using Digital Human Model in Virtual Reality","authors":"Sara Panicker, T. Huysmans","doi":"10.1109/AIVR50618.2020.00045","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00045","url":null,"abstract":"This paper looks at how Digital Human models can be used with Virtual Reality to understand the seat comfort in airplane economy class seats from the perspective of obese passengers. Participants were placed in a virtual environment similar to an economy class cabin and were asked to rate their perception of the space and other comfort parameters. The results showed that the participants experienced space crunch when they saw through the perspective of an obese person. This paper holds the future for a step towards ergonomics analyses using Digital Human Modeling and Virtual Reality.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116799640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Style-transfer GANs for bridging the domain gap in synthetic pose estimator training","authors":"Pavel Rojtberg, Thomas Pollabauer, Arjan Kuijper","doi":"10.1109/AIVR50618.2020.00039","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00039","url":null,"abstract":"Given the dependency of current CNN architectures on a large training set, the possibility of using synthetic data is alluring as it allows generating a virtually infinite amount of labeled training data. However, producing such data is a nontrivial task as current CNN architectures are sensitive to the domain gap between real and synthetic data.We propose to adopt general-purpose GAN models for pixellevel image translation, allowing to formulate the domain gap itself as a learning problem. The obtained models are then used either during training or inference to bridge the domain gap. Here, we focus on training the single-stage YOLO6D [20] object pose estimator on synthetic CAD geometry only, where not even approximate surface information is available. 
When employing paired GAN models, we use an edge-based intermediate domain and introduce different mappings to represent the unknown surface properties.Our evaluation shows a considerable improvement in model performance when compared to a model trained with the same degree of domain randomization, while requiring only very little additional effort.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126253251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shooting Labels: 3D Semantic Labeling by Virtual Reality","authors":"Pierluigi Zama Ramirez, Claudio Paternesi, Daniele De Gregorio, L. D. Stefano","doi":"10.1109/AIVR50618.2020.00027","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00027","url":null,"abstract":"Availability of a few, large-size, annotated datasets, like ImageNet, Pascal VOC and COCO, has lead deep learning to revolutionize computer vision research by achieving astonishing results in several vision tasks. We argue that new tools to facilitate generation of annotated datasets may help spreading data-driven AI throughout applications and domains. In this work we propose Shooting Labels, the first 3D labeling tool for dense 3D semantic segmentation which exploits Virtual Reality to render the labeling task as easy and fun as playing a video-game. Our tool allows for semantically labeling large scale environments very expeditiously, whatever the nature of the 3D data at hand (e.g. point clouds, mesh). Furthermore, Shooting Labels efficiently integrates multiusers annotations to improve the labeling accuracy automatically and compute a label uncertainty map. Besides, within our framework the 3D annotations can be projected into 2D images, thereby speeding up also a notoriously slow and expensive task such as pixel-wise semantic labeling. 
We demonstrate the accuracy and efficiency of our tool in two different scenarios: an indoor workspace provided by Matterport3D and a large-scale outdoor environment reconstructed from 1000+ KITTI images.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125138750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}