{"title":"An event architecture for distributed interactive multisensory rendering","authors":"T. Edmunds, D. Pai","doi":"10.1109/ISMAR.2006.297814","DOIUrl":null,"url":null,"abstract":"We describe an architecture for coping with latency and asynchrony of multisensory events in interactive virtual environments. We propose to decompose multisensory interactions into a series of discrete, perceptually significant events, and structure the application architecture within this event-based context. We analyze the sources of latency, and develop a framework for event prediction and scheduling. Our framework decouples synchronization from latency, and uses prediction to reduce latency when possible. We evaluate the performance of the architecture using vision-based motion sensing and multisensory rendering using haptics, sounds, and graphics. The architecture makes it easy to achieve good performance using commodity off-the-shelf hardware.","PeriodicalId":332844,"journal":{"name":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2006 IEEE/ACM International Symposium on Mixed and Augmented Reality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMAR.2006.297814","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
We describe an architecture for coping with latency and asynchrony of multisensory events in interactive virtual environments. We propose to decompose multisensory interactions into a series of discrete, perceptually significant events, and structure the application architecture within this event-based context. We analyze the sources of latency, and develop a framework for event prediction and scheduling. Our framework decouples synchronization from latency, and uses prediction to reduce latency when possible. We evaluate the performance of the architecture using vision-based motion sensing and multisensory rendering using haptics, sounds, and graphics. The architecture makes it easy to achieve good performance using commodity off-the-shelf hardware.
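To make the idea of event-based scheduling concrete, below is a minimal sketch (not the authors' code) of how a sensed interaction might be turned into a discrete event with a predicted occurrence time, with each modality's renderer started early by its own output latency so haptic, audio, and visual responses coincide at the event time. All class names, latency values, and API choices here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of event prediction + per-modality scheduling, in the spirit
# of the abstract. Names and latency figures are assumptions for illustration.
import heapq
import time
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass(order=True)
class ScheduledAction:
    fire_at: float                                    # wall-clock time to start the renderer
    action: Callable[[], None] = field(compare=False)  # renderer callback (excluded from ordering)


class MultisensoryScheduler:
    """Schedules per-modality actions so their outputs line up at a common event time."""

    def __init__(self) -> None:
        self._queue: List[ScheduledAction] = []

    def schedule_event(self, predicted_time: float,
                       renderers: List[Tuple[Callable[[], None], float]]) -> None:
        # Each renderer is paired with its known output latency; start it that
        # much earlier so every modality completes at predicted_time.
        for render, latency in renderers:
            heapq.heappush(self._queue, ScheduledAction(predicted_time - latency, render))

    def run(self) -> None:
        # Dispatch actions in time order; a real system would use a real-time loop
        # rather than sleeping on a single thread.
        while self._queue:
            item = heapq.heappop(self._queue)
            delay = item.fire_at - time.monotonic()
            if delay > 0:
                time.sleep(delay)
            item.action()


if __name__ == "__main__":
    sched = MultisensoryScheduler()
    contact_time = time.monotonic() + 0.5             # predicted contact 500 ms from now (assumed)
    sched.schedule_event(contact_time, [
        (lambda: print("haptic pulse"), 0.001),        # ~1 ms haptic latency (assumed)
        (lambda: print("impact sound"), 0.020),        # ~20 ms audio latency (assumed)
        (lambda: print("visual flash"), 0.050),        # ~50 ms graphics latency (assumed)
    ])
    sched.run()
```

The key property this sketch tries to reflect is the decoupling described in the abstract: synchronization is expressed as a shared predicted event time, while each modality's latency is compensated independently when its action is scheduled.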