International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems — Latest Publications

The design of artifacts for augmenting intellect
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459263
Cassandra Xia, P. Maes
Abstract: Fifty years ago, Doug Engelbart created a conceptual framework for augmenting human intellect in the context of problem-solving. We expand upon Engelbart's framework and use his concepts of process hierarchies and artifact augmentation for the design of personal intelligence augmentation (IA) systems within the domains of memory, motivation, decision making, and mood. This paper proposes a systematic design methodology for personal IA devices, organizes existing IA research within a logical framework, and uncovers underexplored areas of IA that could benefit from the invention of new artifacts.
Citations: 33
Recovering 3-D gaze scan path and scene structure from inside-out camera
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459270
Yuto Goto, H. Fujiyoshi
Abstract: First-Person Vision (FPV) is a wearable sensor that takes images from a user's visual field and interprets them, with available information about the user's head motion and gaze, through eye tracking [1]. Measuring the 3-D gaze trajectory of a user moving dynamically in 3-D space is valuable for understanding the user's intention and behavior. In this paper, we present a system for recovering the 3-D scan path and scene structure in 3-D space on the basis of ego-motion computed from an inside-out camera. Experimental results show that the 3-D scan paths of a user moving in complex dynamic environments were successfully recovered.
Citations: 2
Geometrically consistent mobile AR for 3D interaction
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459275
Hikari Uchida, T. Komuro
Abstract: In this study, we propose a method to present an image that maintains geometric consistency between the actual scene outside the mobile display and the camera image. We expect this to make interaction with virtual objects through the mobile display more intuitive and to improve operability. Cameras mounted on the front and back of the mobile display obtain the user's face position and the distance to the subject. Using this information, the system can present an image that maintains geometric consistency between the inside and outside of the display depending on the user's viewpoint.
Citations: 7
Using RFID tags as reference for phone location and orientation in daily life
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459269
Florian Wahl, O. Amft
Abstract: This paper investigates a novel approach to obtain location and orientation annotation for smartphones in real-life recordings. We attached RFID tags to places where phones are located in daily life, such as pockets and backpacks. The RFID reader integrated in modern smartphones was used to continuously scan for registered tags. In a first evaluation across several full-day recordings and using nine locations, our approach achieved an accuracy of 80% when compared to a manual diary. Only 5.3% of all tags were missed. We conclude that RFID-based location and orientation tagging is a viable option for obtaining ground-truth reference for real-life activity recognition algorithm development.
Citations: 5
Tangential force sensing system on forearm
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459242
Yasutoshi Makino, Yuta Sugiura, Masa Ogata, M. Inami
Abstract: In this paper, we propose a sensing system that can detect one-dimensional tangential force on a forearm. Some previous tactile sensors can detect touch conditions when a user touches a human skin surface, but those sensors are usually attached to a fingernail, so a user cannot touch the skin with two fingers or with their palm. In the cosmetics field, for example, companies want to measure contact forces when a customer applies their products to the skin; in this case, it is preferable that the sensor detect contact forces under many different ways of touching. In this paper, we restrict the target area to the forearm. Since the forearm has a cylindrical shape, its surface deformation propagates to neighboring areas around the wrist and elbow, and this deformation can be used to estimate tangential force on the forearm. Our system does not require any equipment on the active side (i.e., fingers or a palm), so a user can touch the forearm in arbitrary ways. We show basic numerical simulations and experimental results indicating that the proposed system can detect tangential force on the forearm. We also show possible applications that use the forearm as a human-computer interface device.
Citations: 20
A system for practicing formations in dance performance supported by self-propelled screen
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459266
Shuhei Tsuchida, T. Terada, M. Tsukamoto
Abstract: A collapsed formation in a group dance greatly reduces the quality of the performance even if the dancing is synchronized with the music. Learning the formation of a group dance is therefore as important as learning its choreography. However, if a member cannot participate in practice, it is difficult for the remaining members to gain a sense of the proper formation. We propose a practice-support system that uses a self-propelled screen to rehearse formations smoothly even when a dance partner is absent. We developed a prototype of the system and investigated whether the sense of presence provided by each practice method was close to the sense obtained when dancing with a human. The results verified that dancing with a projected video was closest to dancing with a real dancer, and that the trajectory information from dancing with a self-propelled robot was close to that from dancing with a dancer; combining these two methods allows practicing in situations similar to real ones. Furthermore, we investigated whether the self-propelled screen combined the advantages of both methods and found that it offered only the advantages of dancing with projected video.
Citations: 10
EyeRing: a finger-worn input device for seamless interactions with our surroundings
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459240
Suranga Nanayakkara, Roy Shilkrot, Kian Peen Yeo, P. Maes
Abstract: Finger-worn interfaces remain a vastly unexplored space for user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. In this paper we present design guidelines and the implementation of a finger-worn I/O device, the EyeRing, which leverages the universal and natural gesture of pointing. We present use cases of EyeRing for both visually impaired and sighted people. We discuss initial reactions from visually impaired users, which suggest that EyeRing may indeed offer a more seamless solution for dealing with their immediate surroundings than the solutions they currently use. We also report on a user study demonstrating how EyeRing reduces effort and disruption for a sighted user. We conclude that this highly promising form factor offers both audiences enhanced, seamless interaction with information related to objects in the environment.
Citations: 79
A system for visualizing human behavior based on car metaphors
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459274
Hiroaki Sasaki, T. Terada, M. Tsukamoto
Abstract: Many accidents, such as collisions between pedestrians, occur in crowded places. One reason is that it is difficult for each person to predict the behavior of others. Cars, on the other hand, implicitly communicate with other cars by presenting their contexts through equipment such as brake lights and turn signals. In this paper, we propose a system for visualizing the user's context using information-presentation methods based on those found in cars, such as wearable LEDs acting as brake lights, which can be seen by surrounding people. Evaluation results with our prototype system confirmed that our method visually and intuitively presents the user's context. In addition, we evaluated the visibility effects of changing the mounting position of the wearable devices.
Citations: 3
Communication pedometer: a discussion of gamified communication focused on frequency of smiles
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459272
Yukari Hori, Yutaka Tokuda, Takahiro Miura, Atsushi Hiyama, M. Hirose
Abstract: Communication skills are essential in our everyday lives, yet it can be difficult for people with communication disorders to improve these skills without professional help. Quantifying communication and providing feedback advice in an automated manner would significantly improve that process. We therefore propose a method to monitor communication that employs life-logging technology to evaluate parameters related to communication skills. In our study, we measured the frequency of smiles as a metric for smooth communication; notably, smiling can improve happiness even when a smile is mimicked. Finally, we provided feedback results to users in a gamified form and investigated the effects of this feedback on communication.
Citations: 23
Sonification of images for the visually impaired using a multi-level approach
International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems Pub Date : 2013-03-07 DOI: 10.1145/2459236.2459264
M. Banf, V. Blanz
Abstract: This paper presents a system that strives to give visually impaired persons direct perceptual access to images via an acoustic signal. The user explores the image actively on a touch screen and receives auditory feedback about the image content at the current position. The design of such a system involves two major challenges: what is the most useful and relevant image information, and how can as much information as possible be captured in an audio signal. We address both problems and propose a general approach that combines low-level information, such as color, edges, and roughness, with mid- and high-level information obtained from machine learning algorithms. This includes object recognition and the classification of regions into the categories "man made" versus "natural". We argue that this multi-level approach gives users direct access to what is where in the image, while still exploiting the potential of recent developments in computer vision and machine learning.
Citations: 27