Predicting the visual saliency of the people with VIMS
Jiawei Yang, Guangtao Zhai, Huiyu Duan
DOI: 10.1109/VCIP47243.2019.8965925
2019 IEEE Visual Communications and Image Processing (VCIP), December 2019
Visually induced motion sickness (VIMS) is commonly experienced in virtual environments. Understanding the visual attention of people with VIMS contributes to related research in virtual reality (VR) content design and psychology. In this paper, we construct a saliency prediction database for people with VIMS (SPPV), the first of its kind. The database consists of 80 omnidirectional images and the corresponding eye-tracking data collected from 30 individuals. We analyze the performance of five state-of-the-art deep neural network (DNN)-based saliency prediction algorithms on our database, using both their original networks and networks fine-tuned on our data. To the best of our knowledge, this is the first work to predict the atypical visual attention of people with VIMS, and we obtain relatively good saliency prediction results on this task.
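Evaluating a saliency model against eye-tracking ground truth, as described above, is typically done with metrics such as the linear correlation coefficient (CC) and Normalized Scanpath Saliency (NSS). The abstract does not state which metrics the paper uses, so the following is only an illustrative sketch of two common ones, with array shapes and names chosen for the example:

```python
import numpy as np

def cc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Linear correlation coefficient between a predicted saliency map
    and a ground-truth (fixation-density) map of the same shape."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt - gt.mean()) / (gt.std() + 1e-8)
    return float((p * g).mean())

def nss(pred: np.ndarray, fixations: np.ndarray) -> float:
    """Normalized Scanpath Saliency: mean of the z-scored predicted
    saliency values at fixated pixels (fixations is a binary mask)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return float(p[fixations > 0].mean())
```

A perfect prediction yields CC close to 1, while a prediction that is anticorrelated with the ground truth yields CC close to -1; NSS is positive when predicted saliency is above average at the pixels viewers actually fixated.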