SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications: Latest Publications

Shading language compiler implementation for mobile ray tracing accelerator
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669068
S. Hwang, Ankur Deshwal, Donghoon Yoo, Won-Jong Lee, Youngsam Shin, J. D. Lee, Soojung Ryu, Jeongwook Kim
Abstract: This paper presents a shading language compiler optimized for a mobile ray tracing accelerator, SGRT (Samsung GPU Ray Tracing). The compiler dramatically improves application development productivity: 1) as an application-specific abstraction layer, the shading language lets developers implement ray generators and shaders far more easily and intuitively than a general low-level language such as standard C (up to 81.6% fewer source lines); 2) high performance is achievable without a deep understanding of the programmable shader architecture, which is based on a CGRA (Coarse-Grained Reconfigurable Array) and is complex to optimize, especially in the presence of many conditional branches (up to 1.58 times higher throughput).
Citations: 1
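The abstract contrasts writing ray generators in the shading language against hand-written low-level C. As a rough illustration of what a ray generator computes (not SGRT's shading language or its API, neither of which is shown in the abstract), the Python sketch below builds a primary camera ray per pixel under an assumed pinhole camera model.

```python
import math

def generate_primary_ray(x, y, width, height, fov_deg=60.0):
    """Return (origin, direction) for the camera ray through pixel (x, y).

    Hypothetical pinhole camera at the origin looking down -z; SGRT's real
    ray-generator interface is not reproduced here."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) * 0.5)
    # Map the pixel centre to normalized device coordinates in [-1, 1].
    ndc_x = (2.0 * (x + 0.5) / width - 1.0) * aspect * scale
    ndc_y = (1.0 - 2.0 * (y + 0.5) / height) * scale
    direction = (ndc_x, ndc_y, -1.0)
    length = math.sqrt(sum(c * c for c in direction))
    direction = tuple(c / length for c in direction)
    return (0.0, 0.0, 0.0), direction

if __name__ == "__main__":
    # Centre pixel of an 800x600 image: a ray pointing straight down -z.
    print(generate_primary_ray(400, 300, 800, 600))
```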
Magic lenses for revealing device interactions in smart environments (demo abstract)
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2684187
S. Mayer, Yassin Nasir Hassan, Gábor Sörös
Abstract: We present a tool for visualizing device interactions in smart environments as a magic lens by augmenting the live camera view of a tablet with relevant connections between recognized devices in the camera's field of view.
Citations: 0
AdaptControl: an adaptive mobile touch control for games
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669081
Leonardo Torok, M. Pelegrino, Jefferson Lessa, D. Trevisan, E. Clua
Abstract: The key aspect that defines the experience of playing a video game is the effectiveness and intuitiveness of its gameplay, allowing an inexperienced player to quickly learn the main aspects of the game and start playing immediately. While older games were usually simple and could be operated with one or two buttons and a short learning curve, modern games allow a wide variety of actions that demand a more complex control scheme, sometimes with 10 or 15 buttons, resulting in unintuitive controls. Other methods of interaction, such as touch, motion controls and voice, offer a more intuitive way to play but have never reached the precision of regular controllers. To make interaction with games easier while maintaining precision and quick response, this work proposes AdaptControl: a virtual controller on an Android touchscreen device that communicates with a PC and acts as a regular joystick, displaying only the buttons a given game needs in a simplified interface. This flexibility creates another challenge: the lack of physical feedback to the user. To solve this, AdaptControl uses machine learning algorithms to detect when the user is missing buttons and corrects their position and size toward an optimal configuration. This intelligence brings a further benefit: although the controller starts with a generic configuration for a game, it can change its own layout to match each user's ergonomic needs, resulting in a personal controller tailored to the player.
Citations: 9
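The abstract does not spell out the adaptation algorithm, so the following Python sketch only illustrates one plausible form of it (an assumption, not the authors' implementation): nudging a virtual button's centre and radius toward the running statistics of recent touches intended for it.

```python
import statistics

class AdaptiveButton:
    """Virtual button that drifts toward where the player actually taps.

    Illustrative only; the paper's machine-learning approach is not specified
    in the abstract, so this uses a simple running-mean update rule."""

    def __init__(self, x, y, radius, learning_rate=0.2):
        self.x, self.y, self.radius = x, y, radius
        self.learning_rate = learning_rate
        self.recent_touches = []

    def register_touch(self, tx, ty):
        # Record the touch and move the button centre a fraction of the way
        # toward it, so systematic misses gradually re-centre the button.
        self.recent_touches.append((tx, ty))
        self.x += self.learning_rate * (tx - self.x)
        self.y += self.learning_rate * (ty - self.y)
        if len(self.recent_touches) >= 10:
            # Grow the radius to cover the observed spread of touch points.
            spread_x = statistics.pstdev(t[0] for t in self.recent_touches)
            spread_y = statistics.pstdev(t[1] for t in self.recent_touches)
            self.radius = max(self.radius, 2.0 * max(spread_x, spread_y))
            self.recent_touches.clear()

# Three taps landing to the upper-right of the button pull its centre that way.
button = AdaptiveButton(x=100.0, y=400.0, radius=40.0)
for touch in [(112, 395), (118, 390), (120, 388)]:
    button.register_touch(*touch)
print(round(button.x, 1), round(button.y, 1))
```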
Efficient interactive visualization of crowd scenes on mobile devices
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669067
I. Cheng, Amirhossein Firouzmanesh, A. Basu
Abstract: The ability to view crowded public spaces in real time has a variety of applications including virtual tourism, surveillance and sports. In addition, given the prevalence of handheld wireless devices, visualization on mobile handheld devices benefits a large proportion of users. In this work we summarize our research on crowd visualization on wireless and handheld devices, combining compression, 3D modeling and interactive visualization. We also synthesize crowds to validate our preliminary implementations. To achieve very high compression rates we use a novel perceptually adaptive approach to motion capture data compression. Our visualization interface can be used for simulating crowds and for evaluating the design and crowd-capacity tolerances of public transit systems.
Citations: 2
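The abstract names a perceptually adaptive approach to motion capture compression without giving its details, so the Python sketch below is only an assumed illustration of the general idea: channels with a higher perceptual weight are allowed a tighter error tolerance and therefore keep more keyframes.

```python
def compress_channel(samples, perceptual_weight, base_tolerance=0.05):
    """Keep only keyframes of one motion-capture channel whose omission would
    cause an error above a perceptually weighted tolerance.

    A minimal sketch of lossy, perceptually adaptive keyframe reduction; the
    paper's actual criterion and bit allocation are assumptions here."""
    tolerance = base_tolerance / max(perceptual_weight, 1e-6)
    kept = [(0, samples[0])]
    for i in range(1, len(samples) - 1):
        # Predict this frame by interpolating between the last kept frame and
        # the next frame; keep it only if the prediction error is too large.
        prev_i, prev_v = kept[-1]
        next_v = samples[i + 1]
        t = (i - prev_i) / (i + 1 - prev_i)
        predicted = prev_v + t * (next_v - prev_v)
        if abs(samples[i] - predicted) > tolerance:
            kept.append((i, samples[i]))
    kept.append((len(samples) - 1, samples[-1]))
    return kept

# A joint the viewer attends to closely (high weight) keeps more keyframes
# than one judged perceptually unimportant (low weight).
signal = [0.0, 0.1, 0.18, 0.4, 0.42, 0.41, 0.2, 0.0]
print(len(compress_channel(signal, perceptual_weight=4.0)))
print(len(compress_channel(signal, perceptual_weight=0.5)))
```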
Speech invaders & yak-man: retrogames for speech therapy
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669078
Chek Tien Tan, Andrew Johnston, Andrew Bluff, Sam Ferguson, K. Ballard
Abstract: Speech therapy is used for the treatment of speech disorders and commonly involves a patient attending clinical sessions with a speech pathologist, as well as performing prescribed practice exercises at home [Ruggero et al. 2012]. Clinical sessions are very effective -- the speech pathologist can carefully guide and monitor the patient's speech exercises -- but they are also costly and time-consuming. However, the more inexpensive and convenient home practice component is often not as effective, as it is hard to maintain sufficient motivation to perform the rigid repetitive exercises.
Citations: 3
Multimodal mobile-ambient transmedial twirling with environmental lighting to complement fluid perspective with phase-perturbed affordance projection
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669080
Michael Cohen, Rasika Ranaweera, Bektur Ryskeldiev, Tomohiro Oyama, A. Hashimoto, Naoki Tsukida, T. Miyaji
Abstract: To illuminate the alignment between mixed reality juggling toys and ambidextrous vactors twirling a projection of those toys, roomware lighting control is deployed to show the modeled position of a virtual camera spinning around each player, even while the affordances are whirled. "Tworlds" is a mixed reality multimodal toy using twirled juggling-style affordances built using mobile devices -- smartphones, phablets, and tablets -- to modulate various displays, including 3D models and, now, environmental lighting. A unique feature of the projection is the preservation of logical alignment even when the virtual camera moves continuously around an avatar between frontal and dorsal views in an "inspection gesture," phase-locked rotation and revolution (like the face of the moon pointing at the Earth). For example, a right-handed user would prefer to see their self-identified puppet holding an affordance in the right hand for dorsal (tethered) views, but would rather see the puppet switch hands for a frontal (mirrored) perspective. Because the projected phase of the toy must be modulated in order to preserve such visual correspondence, even while the prop is being whirled, and to elucidate the inspection gesture, we use networked lighting (Philips Hue Wi-Fi networked bulbs) to indicate the position of the virtual camera. Even though a toy might be twirled too fast for such lights to track in the real world, so that only computer graphic "eye candy" effects are practical, the speed of the orbiting of the virtual camera can be adjusted to accommodate even sluggish lighting switching.
Citations: 4
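The abstract states that networked Philips Hue bulbs indicate the position of the virtual camera as it orbits the player. The Python sketch below shows one way such a mapping could look; the bridge address, API key, bulb IDs, and the four-sector layout are all assumptions for illustration, not details taken from the paper.

```python
import math
import requests  # third-party: pip install requests

# Hypothetical setup: four Hue bulbs ring the player, one per 90-degree sector.
BRIDGE = "http://192.168.1.10"                     # assumed bridge address
USER = "example-app-key"                           # assumed authorized API key
SECTOR_LIGHTS = {0: "1", 1: "2", 2: "3", 3: "4"}   # sector index -> bulb id

def light_for_camera(azimuth_rad):
    """Turn on only the bulb in the sector the virtual camera currently occupies."""
    sector = int((azimuth_rad % (2 * math.pi)) / (math.pi / 2))
    for idx, light_id in SECTOR_LIGHTS.items():
        state = {"on": idx == sector}
        if idx == sector:
            state["bri"] = 254  # full brightness for the active sector
        requests.put(f"{BRIDGE}/api/{USER}/lights/{light_id}/state",
                     json=state, timeout=1.0)

# Example: a camera azimuth of 100 degrees lights the second bulb.
light_for_camera(math.radians(100))
```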
A feasibility study of ray tracing on mobile GPUs
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669071
Yunbo Wang, Chunfeng Liu, Yangdong Deng
Abstract: Ray tracing is considered a promising technology for enhancing the visual experience of future graphics applications. This work investigates the feasibility of ray tracing on mobile GPUs. A ray tracer was developed by integrating state-of-the-art construction and traversal algorithms and was implemented in both CUDA and OpenCL. We then performed a detailed characterization of the ray tracing workload in terms of runtime, memory usage, and power consumption on both NVIDIA Tegra K1 and PowerVR SGX 544-MP3 GPUs. The results are compared against mobile CPU and desktop GPU implementations. They show that the Tegra K1 GPU already allows constructing the acceleration structure of a 1M-triangle scene in around 120 ms and performing traversal at a throughput of 15 to 70 million rays per second.
Citations: 6
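The abstract refers to state-of-the-art acceleration-structure construction and traversal without naming the algorithms. As a hedged sketch of the kind of kernel the traversal phase runs, the Python code below implements a stack-based BVH walk with a slab-test ray/AABB intersection; the node layout is an assumption for illustration and is not the authors' CUDA/OpenCL implementation.

```python
BIG = 1e30  # stands in for infinity while avoiding 0 * inf = NaN in the slab test

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: return the entry distance if the ray hits the box, else None."""
    t_near, t_far = 0.0, BIG
    for axis in range(3):
        t0 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t1 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return None
    return t_near

def traverse(bvh, origin, direction):
    """Iterative, stack-based BVH traversal returning candidate leaf primitives.

    `bvh` is a dict {node_id: node}; each node has 'min' and 'max' bounds plus
    either 'children' (inner node) or 'prims' (leaf). This layout is assumed."""
    inv_dir = tuple(1.0 / d if d != 0.0 else BIG for d in direction)
    stack, hits = [0], []          # start at the root, node id 0
    while stack:
        node = bvh[stack.pop()]
        if ray_aabb_hit(origin, inv_dir, node["min"], node["max"]) is None:
            continue
        if "prims" in node:
            hits.extend(node["prims"])   # leaf: collect primitives for exact tests
        else:
            stack.extend(node["children"])
    return hits

# Tiny two-leaf example: the ray enters only the left child's box.
example_bvh = {
    0: {"min": (0, 0, 0), "max": (2, 1, 1), "children": [1, 2]},
    1: {"min": (0, 0, 0), "max": (1, 1, 1), "prims": ["tri-A"]},
    2: {"min": (1, 0, 0), "max": (2, 1, 1), "prims": ["tri-B"]},
}
print(traverse(example_bvh, origin=(0.5, 0.5, -1.0), direction=(0.0, 0.0, 1.0)))
```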
Dynamic augmented reality X-Ray on Google Glass
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669087
D. Rompapas, Nicholas Sorokin, Arno in Wolde Lübke, Takafumi Taketomi, Goshiro Yamamoto, C. Sandor, H. Kato
Abstract: In recent years, research on sophisticated Augmented Reality (AR) X-Ray visualization techniques such as [Dey and Sandor 2014], which permit the user to see through real-world objects, has created the possibility for mobile applications to show occluded information in an innovative and intuitive fashion. A popular application is navigation assistance in inner-city environments. We think that Google Glass, a hands-free, wearable mobile device which allows immediate access to web services, is a promising platform for such X-Ray systems.
Citations: 4
Toe detection with leg model for wearable input/output interface
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2669086
Fumihiro Sato, Nobuchika Sakata, S. Nishida
Abstract: In recent years, mobile terminals such as smartphones have become widespread. As a result, we tend to use information services frequently and at a glance, for example to find a route, check e-mail, or see updates on social networks. However, a hand-held mobile terminal must be retrieved from a pocket and held in at least one hand while in use, which makes it difficult to use when both hands are occupied.
Citations: 0
3D mobile interactions for public displays
SIGGRAPH Asia 2014 Mobile Graphics and Interactive Applications, Pub Date: 2014-11-24, DOI: 10.1145/2669062.2684184
Mayra Donaji Barrera Machuca, W. Chinthammit, Yi Yang, H. Duh
Abstract: Public displays are becoming common in public spaces such as bus stops, movie theatres and workplaces. Getting and maintaining the user's attention is one of the main struggles of public displays [Müller et al. 2010]. There is therefore a need to investigate public display interactions and to propose interactions that help engage users with the display. In this paper we investigate collaboration as a way to improve user engagement with public displays, based on previous results in which creating a shared experience for an audience made the participants engaged with the activity [Scheible and Ojala 2005]. We propose the use of 3D user interfaces to give users tools to collaborate with other users, because controllers that allow natural movements have the potential to offer greater affordances for social interaction [Lindley et al. 2008]. To address users' privacy concerns we utilize mobile interactions, where a mobile screen can display private and personalized information. In the proposed 3D mobile interaction we use the mobile device screen to augment 3D content on the public display via an Augmented Reality (AR) tracking approach. We propose that 3D content gives public displays a new layer of depth for collaboration between users. With this new layer, users can create novel interactions that consider not only the content on the public display but also the positions of the users around it.
Citations: 5