International Conference on Societal Automation: Latest Publications

Fingers tale
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2542398.2542472
Luca Schenato, Sinem Vardarli
{"title":"Fingers tale","authors":"Luca Schenato, Sinem Vardarli","doi":"10.1145/2542398.2542472","DOIUrl":"https://doi.org/10.1145/2542398.2542472","url":null,"abstract":"An unusual adventure of a team of toes.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114694730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Social reverse geocoding studies: describing city images using geotagged social tagging
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543679
Koh Sueda
{"title":"Social reverse geocoding studies: describing city images using geotagged social tagging","authors":"Koh Sueda","doi":"10.1145/2543651.2543679","DOIUrl":"https://doi.org/10.1145/2543651.2543679","url":null,"abstract":"Owing to the increasing use of social networking in the mobile environment, people today share more than a million geotagged objects that include objects with social tagging on a daily basis. In this paper, we propose social reverse geocoding (SRG). Social reverse geocoding (SRG) provides highly descriptive geographical information to mobile users. GPS provides the user with latitude and longitude values; however, these values are cumbersome for determining a precise location. A traditional reverse geocoding (conversion of the abovementioned values into street addresses) provides location information based on administrative labeling, but people often do not recognize locations or their surrounding environs from street addresses alone. To address this problem with location recognition, we have created SRG, a reverse geocoding system that enhances location data with user-generated information and provides assistance through a mobile interface [Sueda, et al. 2012]. Through a user study of SRG, we found a clear correlation between the number of tags and the locality of the residents. The obtained result indicates that the residents define the area of a city through SRG as closer than that defined by its street address. Further, the result reveals the potential of developing location-based services based on the image of the city obtained using social tagging on the Internet.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116887761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
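As an illustration of the reverse-geocoding idea described in the abstract above, the following is a minimal sketch that describes a location by ranking social tags attached to nearby geotagged posts by frequency. The function names, the 300 m radius, and the sample posts are illustrative assumptions, not the paper's SRG pipeline.

```python
import math
from collections import Counter

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def describe_location(lat, lon, tagged_posts, radius_m=300, top_n=5):
    """Rank social tags attached to nearby geotagged posts by frequency.

    `tagged_posts` is assumed to be an iterable of (lat, lon, [tags]) tuples
    harvested from a social-media API; frequency ranking is an illustrative
    stand-in for the paper's SRG system, not its actual method.
    """
    counts = Counter()
    for plat, plon, tags in tagged_posts:
        if haversine_m(lat, lon, plat, plon) <= radius_m:
            counts.update(t.lower() for t in tags)
    return [tag for tag, _ in counts.most_common(top_n)]

# Example with a handful of fabricated posts around Shibuya crossing.
posts = [
    (35.6595, 139.7005, ["shibuya", "crossing", "tokyo"]),
    (35.6597, 139.7008, ["shibuya", "hachiko"]),
    (35.6650, 139.7100, ["harajuku"]),   # outside the 300 m radius
]
print(describe_location(35.6595, 139.7004, posts))  # ['shibuya', 'crossing', 'tokyo', 'hachiko']
```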
Twirled affordances, self-conscious avatars, & inspection gestures
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543691
Michael Cohen, Rasika Ranaweera, Kensuke Nishimura, Y. Sasamoto, Tomohiro Oyama, Tetsunobu Ohashi, Anzu Nakada, J. Villegas, Yong Ping Chen, Sascha Holesch, Jun Yamadera, Hayato Ito, Yasuhiko Saito, Akira Sasaki
{"title":"Twirled affordances, self-conscious avatars, & inspection gestures","authors":"Michael Cohen, Rasika Ranaweera, Kensuke Nishimura, Y. Sasamoto, Tomohiro Oyama, Tetsunobu Ohashi, Anzu Nakada, J. Villegas, Yong Ping Chen, Sascha Holesch, Jun Yamadera, Hayato Ito, Yasuhiko Saito, Akira Sasaki","doi":"10.1145/2543651.2543691","DOIUrl":"https://doi.org/10.1145/2543651.2543691","url":null,"abstract":"Contemporary smartphones and tablets have magnetometers that can be used to detect yaw, which data can be distributed to adjust ambient media. We have built haptic interfaces featuring smartphones and tablets that use compass-derived orientation sensing to modulate virtual displays. Embedding mobile devices into pointing, swinging, and flailing affordances allows \"padiddle\"-style interfaces, finger spinning, and \"poi\"-style interfaces, whirling tethered devices, for novel interaction techniques [Cohen et al. 2013].","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130706706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
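The abstract above relies on compass-derived yaw sensing. As a rough sketch of how yaw can be obtained from raw sensor values, the function below applies a common tilt-compensation formula to magnetometer and accelerometer readings; the formula, function name, and sample values are generic assumptions, not the authors' implementation.

```python
import math

def heading_degrees(mag, acc):
    """Tilt-compensated compass heading (yaw) in degrees, 0 = magnetic north.

    `mag` and `acc` are (x, y, z) tuples from the magnetometer and
    accelerometer in the device frame; this is the standard textbook
    tilt-compensation, and sign conventions vary between platforms.
    """
    ax, ay, az = acc
    # Roll and pitch estimated from the gravity vector.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetic vector back into the horizontal plane.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return (math.degrees(math.atan2(-yh, xh)) + 360.0) % 360.0

# Device lying flat with the magnetic field along +x: heading is ~0 degrees.
print(heading_degrees(mag=(30.0, 0.0, -40.0), acc=(0.0, 0.0, 9.81)))
```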
GPU-based large-scale visualization
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2542266.2542273
M. Hadwiger, J. Krüger, J. Beyer, S. Bruckner
{"title":"GPU-based large-scale visualization","authors":"M. Hadwiger, J. Krüger, J. Beyer, S. Bruckner","doi":"10.1145/2542266.2542273","DOIUrl":"https://doi.org/10.1145/2542266.2542273","url":null,"abstract":"Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size.\u0000 The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections.\u0000 You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization.\u0000 We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous processing and data streaming on CPUs.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130832161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
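The course abstract above hinges on making effort proportional to on-screen data through display-aware multi-resolution working sets. The sketch below shows one generic way to pick a resolution level for a data brick from its projected pixel footprint; the function name, parameters, and the one-voxel-per-pixel heuristic are assumptions for illustration, not the course's specific criterion.

```python
import math

def select_lod(view_distance, fov_y_deg, viewport_height_px,
               finest_voxel_size, num_levels):
    """Pick the coarsest multi-resolution level whose voxels still project
    to roughly one pixel or less, so the working set scales with what is
    visible rather than with the full data size.

    A generic LOD heuristic for out-of-core / virtual-texturing schemes.
    """
    # World-space extent covered by one pixel at this viewing distance.
    pixels_per_world_unit = viewport_height_px / (
        2.0 * view_distance * math.tan(math.radians(fov_y_deg) / 2.0))
    world_per_pixel = 1.0 / pixels_per_world_unit
    # Level 0 is the finest; each coarser level doubles the voxel size.
    level = 0
    voxel = finest_voxel_size
    while level + 1 < num_levels and voxel * 2.0 <= world_per_pixel:
        voxel *= 2.0
        level += 1
    return level

# A distant brick can be fetched from a much coarser level than a nearby one.
print(select_lod(2.0, 45.0, 1080, finest_voxel_size=0.001, num_levels=8))    # 0
print(select_lod(200.0, 45.0, 1080, finest_voxel_size=0.001, num_levels=8))  # 7
```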
Human-engine: viewing 4D mesh captures on mobile devices
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543674
M. Poswal, K. Hecker, Debra Isaac Downing, G. Downing, V. Bohossian
{"title":"Human-engine: viewing 4D mesh captures on mobile devices","authors":"M. Poswal, K. Hecker, Debra Isaac Downing, G. Downing, V. Bohossian","doi":"10.1145/2543651.2543674","DOIUrl":"https://doi.org/10.1145/2543651.2543674","url":null,"abstract":"Human-Engine is an innovative new approach to 3D asset creation, using 4D scan data to create lifelike virtual humans, clothing or anything else you can put in front of a camera. Our goal is to bridge the gap between traditional video capture and existing CG technology by creating accurate scans of humans and objects in motion that combine the realism of video footage with the flexibility of CG models.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"469 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117215385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A haptic device based on an approximate plane
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2542302.2542337
Anzu Kawazoe, Kazuo Ikeshiro, H. Imamura
{"title":"A haptic device based on an approximate plane","authors":"Anzu Kawazoe, Kazuo Ikeshiro, H. Imamura","doi":"10.1145/2542302.2542337","DOIUrl":"https://doi.org/10.1145/2542302.2542337","url":null,"abstract":"In recent years, research of haptic interface has been attracting attentions of researchers. By using haptic devices, people can easily handle 3D objects. Therefore, it is expected to be used for applications such as simulation in medical fields or remote control of robots. Falcon [1] and PHANToM [2] are one of the famous haptic devices. These devices have controller or pen to let user touch virtual objects. Furthermore, these devices can also provide a sense of force by each point of virtual objects for user. These haptic devices are classified as one point contact type haptic device. Users can experience as if he pokes virtual objects by using these haptic devices. However, point contact type haptic devices cannot perform a sense of force and a sense of touch at the same time. We define this sense of touch as the sense of friction caused by different materials. In order to perform more realistic sense of touch, we attempt to perform both a sense of force and a sense of touch at the same time. To realize this purpose, we developed a novel haptic device.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"307 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117228396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Generating abstract paintings in Kandinsky style
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2542256.2542257
Kang Zhang, Jinhui Yu
{"title":"Generating abstract paintings in Kandinsky style","authors":"Kang Zhang, Jinhui Yu","doi":"10.1145/2542256.2542257","DOIUrl":"https://doi.org/10.1145/2542256.2542257","url":null,"abstract":"This paper presents a recent project on automatic generation of Kandinsky style of abstract paintings using the programming language Processing. It first offers an analysis of Kandinsky's paintings based on his art theories and the author's own understanding and observation. The generation process is described in details and sample generated images styled on four of Kandinsky's paintings are also demonstrated and discussed. Our approach is highly scalable, limited only by the memory space set in Processing. Using random generation, every styled image generated can be unique. A selection of the images generated in the required resolution is also submitted and 70 images are made into a video companion.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133425481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
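The paper above relies on seeded random generation so that every styled image is unique. The following sketch illustrates only that idea, composing random circles and line segments into an SVG from a reproducible seed; it is not the authors' Processing program, and the palette, shape rules, and file name are illustrative assumptions.

```python
import random

def generative_composition_svg(width=800, height=600, n_shapes=40, seed=None):
    """Return an SVG string of randomly placed circles and line segments.

    A toy demonstration of seeded, reproducible random composition; it does
    not attempt to reproduce the paper's analysis of Kandinsky's style.
    """
    rng = random.Random(seed)
    palette = ["#1d3557", "#e63946", "#f4a261", "#2a9d8f", "#000000"]
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">',
             f'<rect width="{width}" height="{height}" fill="#f1ebdd"/>']
    for _ in range(n_shapes):
        color = rng.choice(palette)
        if rng.random() < 0.5:   # filled circle
            r = rng.randint(10, 80)
            cx, cy = rng.randint(0, width), rng.randint(0, height)
            parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r}" '
                         f'fill="{color}" fill-opacity="0.8"/>')
        else:                    # straight line segment
            x1, y1 = rng.randint(0, width), rng.randint(0, height)
            x2, y2 = rng.randint(0, width), rng.randint(0, height)
            parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
                         f'stroke="{color}" stroke-width="{rng.randint(2, 8)}"/>')
    parts.append("</svg>")
    return "\n".join(parts)

# A different seed yields a different, yet reproducible, composition.
with open("composition_42.svg", "w") as f:
    f.write(generative_composition_svg(seed=42))
```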
Inside "elysium": from earth to the ring 在“极乐空间”内部:从地球到魔戒
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2542398.2542424
A. Kaufman
{"title":"Inside \"elysium\": from earth to the ring","authors":"A. Kaufman","doi":"10.1145/2542398.2542424","DOIUrl":"https://doi.org/10.1145/2542398.2542424","url":null,"abstract":"The visual effects of Neill Blomkamp's latest sci-fi epic, \"Elysium\".","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127885778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Hyak-Ki Men: a study of framework for creating mixed reality entertainment
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2542302.2542335
Toshikazu Ohshima, Y. Shibata, K. Isshiki, Ko Hayami, Chiharu Tanaka
{"title":"Hyak-Ki Men: a study of framework for creating mixed reality entertainment","authors":"Toshikazu Ohshima, Y. Shibata, K. Isshiki, Ko Hayami, Chiharu Tanaka","doi":"10.1145/2542302.2542335","DOIUrl":"https://doi.org/10.1145/2542302.2542335","url":null,"abstract":"\"Hyak-Ki Men\" is one of our Mixed Reality (MR) Entertainment Project. The goal is to realize innovative and high quality entertainment which provides impressive experience for people with various interest and wide age by applying MR technology. \"Hyak-Ki Men\" is MR Ninja Entertainment in which a player becomes a Ninja (Figure 1a) and the mission is to defeat virtual Ogres in MR field. The player can enjoy exciting battle with the ogres using virtual Katana (Figure 1b) and Throwing Stars (Figure 1c) by natural gestural user interface with multisensory feedbacks.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114253143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
Real time reliability: mixed reality public transportation
International Conference on Societal Automation Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543689
Antti Nurminen, J. Järvi
{"title":"Real time reliability: mixed reality public transportation","authors":"Antti Nurminen, J. Järvi","doi":"10.1145/2543651.2543689","DOIUrl":"https://doi.org/10.1145/2543651.2543689","url":null,"abstract":"In public transportation, quality of service is of paramount importance. In a study on public transportation reliability, approximately half of the riders reduced their use of services due to unreliability, switching to other modes of transportation [Carrel et al. 2013]. It is also known that the perceived wait time at a bus stop is greater than the actual wait time and a real time information diminishes this difference [Mishalani et al. 2006]. However, when real time data itself is unreliable, this is felt particularly frustrating [Carrel et al. 2013].","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129542674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0