Title: Improving floor localization accuracy in 3D spaces using barometer
Authors: Dipyaman Banerjee, Sheetal K. Agarwal, Parikshit Sharma
DOI: https://doi.org/10.1145/2802083.2802089
Abstract: Technologies such as WiFi and BLE have proven effective for indoor localization in two-dimensional spaces with sufficiently good accuracy, but the same techniques have large margins of error in three-dimensional spaces. Popular 3D spaces such as malls or airports are marked by distinct structural features, such as atriums/hollow spaces and large corridors, which reduce the spatial variability of WiFi and BLE signal strengths and lead to erroneous location prediction. A large fraction of these errors can be attributed to vertical jumps, where the predicted location has the same horizontal coordinates as the actual location but differs in the vertical coordinate. Smartphones now come equipped with a barometer sensor that can be used to solve this problem and enable more accurate 3D localization. Research shows that the barometer can determine relative vertical movement and its direction with nearly 100% accuracy. However, exact floor prediction requires repeated calibration of the barometer measurements, as pressure values vary significantly across devices, time, and locations. In this paper we present a method for automatically calibrating smartphone-embedded barometers to provide accurate 3D localization. Our method combines a probabilistic learning method with a pressure-drift elimination algorithm. We also show that when the floor value is accurately predicted, WiFi localization accuracy improves by 25% in 3D spaces. We validate our techniques in a real shopping mall and provide valuable insights from practical experience.
{"title":"Creating gaze annotations in head mounted displays","authors":"D. Mardanbegi, Pernilla Qvarfordt","doi":"10.1145/2802083.2808404","DOIUrl":"https://doi.org/10.1145/2802083.2808404","url":null,"abstract":"To facilitate distributed communication in mobile settings, we developed GazeNote for creating and sharing gaze annotations in head mounted displays (HMDs). With gaze annotations it possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks out the information to be associated with the object. The gaze location is recorded and visualized with a marker. The voice is transcribed using speech recognition. Gaze annotations can be shared. Our study showed that users found that gaze annotations add precision and expressiveness compared to annotations of the image as a whole.","PeriodicalId":372395,"journal":{"name":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","volume":"175 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122810182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating general model for activity recognition with minimum labelled data","authors":"Jiahui Wen, Mingyang Zhong, J. Indulska","doi":"10.1145/2802083.2808399","DOIUrl":"https://doi.org/10.1145/2802083.2808399","url":null,"abstract":"Since people perform activities differently, to avoid overfitting, creating a general model with activity data of various users is required before the deployment for personal use. However, annotating a large amount of activity data is expensive and time-consuming. In this paper, we create a general model for activity recognition with a limited amount of labelled data. We combine Latent Dirichlet Allocation (LDA) and AdaBoost to jointly train a general activity model with partially labelled data. After that, when AdaBoost is used for online prediction, we combine it with graphical models (such as HMM and CRF) to exploit the temporal information in human activities to smooth out accidental misclassifications. Experiments on publicly available datasets show that we are able to obtain the accuracy of more than 90% with 1% labelled data.","PeriodicalId":372395,"journal":{"name":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130752377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Addressing dresses: user interface allowing for interdisciplinary design and calibration of LED embedded garments","authors":"Z. Cochran, C. Zeagler, S. McCall","doi":"10.1145/2802083.2808403","DOIUrl":"https://doi.org/10.1145/2802083.2808403","url":null,"abstract":"Wearable technology projects afford the opportunity to work within interdisciplinary teams to create truly innovative solutions. Sometimes it is difficult for teams of designers and engineers to work together because of process differences and communication issues. Here we present a case study that describes how one team developed a system to overcome these obstacles and propose viewing interdisciplinary collaboration tools as boundary objects. The system described here allows designers to work with programmers to create full color light effects in real time, through a calibration process and interface that allows designers an easy entry into discussions about the placement of electronics in an LED-embedded garments.","PeriodicalId":372395,"journal":{"name":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123317867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Comparing order picking assisted by head-up display versus pick-by-light with explicit pick confirmation
Authors: Xiaolong Wu, Malcolm Haynes, Yixin Zhang, Ziyi Jiang, Zhengyang Shen, Anhong Guo, Thad Starner, Scott M. Gilliland
DOI: https://doi.org/10.1145/2802083.2808408
Abstract: Manual order picking is an important part of distribution, and many techniques have been proposed to improve pick efficiency and accuracy. Previous studies compared pick-by-HUD (head-up display) with pick-by-light, but without the explicit pick confirmation that is typical in industrial environments. We compare a pick-by-light system designed to emulate deployed systems with a pick-by-HUD system using Google Glass. The pick-by-light system was 50% slower than pick-by-HUD and required a higher workload. The number of errors committed and picker preference showed no statistically significant difference.
Title: Glass-physics: using google glass to support high school physics experiments
Authors: P. Lukowicz, Andreas Poxrucker, Jens Weppner, B. Bischke, J. Kuhn, M. Hirth
DOI: https://doi.org/10.1145/2802083.2808407
Abstract: We demonstrate how smart glasses can support high school science experiments. The vision is to (1) reduce the "technical" effort involved in conducting the experiments (measuring, generating plots, etc.) and (2) allow students to interactively see and manipulate the theoretical representation of the relevant phenomena while interacting with them in the real world. As a use case, we have implemented a Google Glass app for a standard high school acoustics experiment: determining the relationship between the frequency of the tone produced by striking a glass filled with water and the amount of water in the glass. We evaluated the system with a group of 36 high school students, split into a group using our application and a control group using an existing tablet-based system. We show a statistically significant advantage in experiment execution speed, cognitive load, and curiosity.
Title: A framework for early event detection for wearable systems
Authors: Eva Dorschky, D. Schuldhaus, Harald Koerger, B. Eskofier
DOI: https://doi.org/10.1145/2802083.2808389
Abstract: A considerable number of wearable system applications necessitate early event detection (EED), defined as the detection of an event with as much lead time as possible. Applications include physiological (e.g., epileptic seizure or heart stroke) and biomechanical (e.g., fall movement or sports event) monitoring systems. EED for wearable systems is under-investigated in the literature. We therefore introduce a novel EED framework for wearable systems based on hybrid Hidden Markov Models. Our study specifically targets EED based on inertial measurement unit (IMU) signals in sports: we investigate the early detection of high-intensity soccer kicks, with the possible pre-kick adaptation of a soccer shoe before the shoe-ball impact in mind. We conducted a study with ten subjects, recorded 226 kicks using a custom IMU placed in a soccer shoe cavity, and evaluated our framework in terms of EED accuracy and EED latency. In conclusion, our framework delivers the required accuracy and lead times for EED of soccer kicks and can be straightforwardly adapted to other wearable system applications that necessitate EED.
Title: Proceedings of the 2015 ACM International Symposium on Wearable Computers
Authors: K. Mase, Marc Langheinrich, D. Gática-Pérez, Kristof Van Laerhoven, T. Terada
DOI: https://doi.org/10.1145/2802083
Abstract: Welcome to UbiComp/ISWC 2015, the premier international forum for pervasive, ubiquitous, and wearable computing, which takes place September 9-11, 2015 in Osaka, Japan.

The 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp 2015) is the third installment of the merged format between "Pervasive" and "UbiComp", the two most renowned conferences in the field. As in previous years, it is co-located with the 19th International Symposium on Wearable Computers (ISWC 2015), the premier forum for cutting-edge research results in wearable computing and on-body mobile technologies. ISWC and UbiComp have been co-located with great success since 2013 in Zurich. This year, the two conferences feature two independent technical programs but offer a single adjunct track, featuring joint posters, demos, workshops, tutorials, and a common doctoral school. As in previous years, ISWC and UbiComp operate as a single event, i.e., attendees are free to attend sessions from both conferences interchangeably.
{"title":"Magnetic input for mobile virtual reality","authors":"Boris Smus, Christopher J. Riederer","doi":"10.1145/2802083.2808395","DOIUrl":"https://doi.org/10.1145/2802083.2808395","url":null,"abstract":"Modern smartphones can create compelling virtual reality (VR) experiences through the use of VR enclosures, devices that encase the phone and project stereoscopic renderings through lenses into the user's eyes. Since the touch screen in such designs is typically hidden inside an enclosure, the main interaction mechanism of the device is not accessible. We present a new magnetic input mechanism for mobile VR devices which is wireless, unpowered, inexpensive, provides physical feedback, requires no calibration, and works reliably on the majority of modern smartphones. This is the main input mechanism for Google Cardboard, of which there are over one million units. We show robust gesture recognition, at an accuracy of greater than 95% across smartphones and assess the capabilities, accuracy and limitations of our technique through a user study.","PeriodicalId":372395,"journal":{"name":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121935691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust in-situ data reconstruction from poisson noise for low-cost, mobile, non-expert environmental sensing","authors":"M. Budde, M. Köpke, M. Beigl","doi":"10.1145/2802083.2808406","DOIUrl":"https://doi.org/10.1145/2802083.2808406","url":null,"abstract":"Personal and participatory environmental sensing, especially of air quality, is a topic of increasing importance. However, as the employed sensors are often cheap, they are prone to erroneous readings, e.g. due to sensor aging or low selectivity. Additionally, non-expert users make mistakes when handling equipment. We present an elegant approach that deals with such problems on the sensor level. Instead of characterizing systematic errors to remove them from the noisy signal, we reconstruct the true signal solely from its Poisson noise. Our approach can be applied to data from any phenomenon that can be modeled as particles and is robust against both offset and drift, as well to a certain extent against cross-sensitivity. We show its validity on two real-world datasets.","PeriodicalId":372395,"journal":{"name":"Proceedings of the 2015 ACM International Symposium on Wearable Computers","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123270744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}