Audio-Visual-Olfactory Resource Allocation for Tri-modal Virtual Environments.

IF 4.7 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING
E Doukakis, K Debattista, T Bashford-Rogers, A Dhokia, A Asadipour, A Chalmers, C Harvey
{"title":"三模态虚拟环境的视听嗅觉资源分配。","authors":"E Doukakis,&nbsp;K Debattista,&nbsp;T Bashford-Rogers,&nbsp;A Dhokia,&nbsp;A Asadipour,&nbsp;A Chalmers,&nbsp;C Harvey","doi":"10.1109/TVCG.2019.2898823","DOIUrl":null,"url":null,"abstract":"<p><p>Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications which require realistic representations of real world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme in order to achieve an optimal perceptual experience within the given computational budgets. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants ( N=25) were asked, across a fixed number of budgets ( M=5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality compared to other sensory stimuli. However, as the budget size is increased, users prefer a balanced distribution of resources with an increased preference for having smell impulses in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1865-1875"},"PeriodicalIF":4.7000,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898823","citationCount":"8","resultStr":"{\"title\":\"Audio-Visual-Olfactory Resource Allocation for Tri-modal Virtual Environments.\",\"authors\":\"E Doukakis,&nbsp;K Debattista,&nbsp;T Bashford-Rogers,&nbsp;A Dhokia,&nbsp;A Asadipour,&nbsp;A Chalmers,&nbsp;C Harvey\",\"doi\":\"10.1109/TVCG.2019.2898823\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications which require realistic representations of real world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme in order to achieve an optimal perceptual experience within the given computational budgets. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. 
In the second experiment, participants ( N=25) were asked, across a fixed number of budgets ( M=5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality compared to other sensory stimuli. However, as the budget size is increased, users prefer a balanced distribution of resources with an increased preference for having smell impulses in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.</p>\",\"PeriodicalId\":13376,\"journal\":{\"name\":\"IEEE Transactions on Visualization and Computer Graphics\",\"volume\":\" \",\"pages\":\"1865-1875\"},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2019-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1109/TVCG.2019.2898823\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Visualization and Computer Graphics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/TVCG.2019.2898823\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2019/2/14 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Visualization and Computer Graphics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TVCG.2019.2898823","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2019/2/14 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 8

Abstract


Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications which require realistic representations of real world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) is a task that requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme in order to achieve an optimal perceptual experience within the given computational budgets. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants (N=25) were asked, across a fixed number of budgets (M=5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality compared to other sensory stimuli. However, as the budget size is increased, users prefer a balanced distribution of resources with an increased preference for having smell impulses in the VE. Based on the collected data, a quality prediction model is proposed and its accuracy is validated against previously unused budgets and an untested scenario in a third and final experiment.
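The abstract describes distributing a fixed computational budget across visual, aural and olfactory rendering, but does not give the allocation procedure itself. The sketch below is a purely illustrative toy example, not the authors' model: the cost and utility tables, the additive utility assumption, and the exhaustive search are all hypothetical placeholders, whereas the paper's quality prediction model is fitted to user-study data.

```python
"""Illustrative sketch only: a toy budget-allocation search across three
sensory modalities. The cost and utility tables are hypothetical values,
not figures from Doukakis et al. (2019)."""

from itertools import product

# Hypothetical per-modality quality levels and their computational costs
# (arbitrary units); real costs would come from profiling the renderer,
# audio engine and scent-delivery hardware.
COSTS = {
    "visual":    [0, 2, 4, 8, 16],
    "audio":     [0, 1, 2, 4, 8],
    "olfactory": [0, 1, 2, 3, 4],
}

# Hypothetical diminishing-returns utility per quality level per modality.
UTILITY = {
    "visual":    [0.0, 0.40, 0.60, 0.72, 0.80],
    "audio":     [0.0, 0.25, 0.38, 0.46, 0.50],
    "olfactory": [0.0, 0.15, 0.24, 0.29, 0.32],
}


def best_allocation(budget: float) -> dict:
    """Exhaustively pick the quality level per modality that maximises the
    summed utility while the total cost stays within the budget."""
    modalities = list(COSTS)
    best, best_score = None, float("-inf")
    for levels in product(*(range(len(COSTS[m])) for m in modalities)):
        cost = sum(COSTS[m][lvl] for m, lvl in zip(modalities, levels))
        if cost > budget:
            continue
        score = sum(UTILITY[m][lvl] for m, lvl in zip(modalities, levels))
        if score > best_score:
            best, best_score = dict(zip(modalities, levels)), score
    return best


if __name__ == "__main__":
    # Arbitrary example budgets; the study itself used M=5 fixed budgets.
    for budget in (4, 8, 16):
        print(budget, best_allocation(budget))
```

With such a model, small budgets favour spending almost everything on visual quality, while larger budgets shift toward a more balanced allocation, which is consistent with the preference trend reported in the abstract.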

Source journal
IEEE Transactions on Visualization and Computer Graphics
Category: Engineering & Technology, Computer Science: Software Engineering
CiteScore: 10.40
Self-citation rate: 19.20%
Articles published: 946
Review time: 4.5 months
Aims and scope: TVCG is a scholarly, archival journal published monthly. Its Editorial Board strives to publish papers that present important research results and state-of-the-art seminal papers in computer graphics, visualization, and virtual reality. Specific topics include, but are not limited to: rendering technologies; geometric modeling and processing; shape analysis; graphics hardware; animation and simulation; perception, interaction and user interfaces; haptics; computational photography; high-dynamic range imaging and display; user studies and evaluation; biomedical visualization; volume visualization and graphics; visual analytics for machine learning; topology-based visualization; visual programming and software visualization; visualization in data science; virtual reality, augmented reality and mixed reality; advanced display technology, (e.g., 3D, immersive and multi-modal displays); applications of computer graphics and visualization.