Adjunct Proceedings of the 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2020 ACM International Symposium on Wearable Computers: Latest Publications

Accurate trajectory prediction in a smart building using recurrent neural networks
Anooshmita Das, Emil Stubbe Kolvig Raun, M. Kjærgaard
DOI: 10.1145/3410530.3414319
Abstract: Occupant behavioral patterns, once extracted, can reveal cues about activities and space usage that building systems can exploit to achieve energy savings. The ability to accurately predict the trajectories of occupants inside a room divided into different zones has many notable and compelling applications, for example efficient space utilization and floor planning, intelligent building operations, crowd management, a comfortable indoor environment, security, and evacuation or personnel management. This paper proposes future occupant trajectory prediction using state-of-the-art time-series prediction methods, namely Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models. These models are implemented and compared to forecast occupant trajectories at a given time and location in a non-intrusive and reliable manner. The test space for data collection is a multi-utility area in an instrumented public building. The deployed 3D stereo-vision cameras capture spatial location coordinates (x and y) from a bird's-eye view without eliciting any other information that could reveal confidential data or uniquely identify a person. Our results showed that the GRU model's forecasts were considerably more accurate than the LSTM model's: for multiple occupant trajectories within the monitored area, the GRU model achieved a Mean Squared Error (MSE) of 30.72 cm between actual and predicted location coordinates, while the LSTM model achieved an MSE of 47.13 cm. Under a second evaluation metric, Mean Absolute Error (MAE), the GRU model achieved 3.14 cm and the LSTM model 4.07 cm. The GRU model thus delivers higher-fidelity occupant trajectory prediction than the baseline LSTM model.
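As a concrete illustration of the two reported metrics, the sketch below computes MSE and MAE over predicted versus actual (x, y) coordinates; the coordinate values are invented for the example, not taken from the paper's dataset.

```python
import numpy as np

def trajectory_errors(actual, predicted):
    """Compute MSE and MAE between actual and predicted 2-D coordinates.

    actual, predicted: arrays of shape (n, 2) holding (x, y) positions in cm.
    """
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    mse = float(np.mean(err ** 2))      # mean squared error over all coordinates
    mae = float(np.mean(np.abs(err)))   # mean absolute error over all coordinates
    return mse, mae

# Toy example with made-up coordinates (not the paper's data):
actual = [[10.0, 20.0], [12.0, 22.0], [14.0, 25.0]]
predicted = [[12.0, 19.0], [12.0, 23.0], [13.0, 25.0]]
mse, mae = trajectory_errors(actual, predicted)
```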
Citations: 3
Nurse care activity recognition based on convolution neural network for accelerometer data
Md. Golam Rasul, Mashrur Hossain Khan, Lutfun Nahar Lota
DOI: 10.1145/3410530.3414335
Abstract: Human activity recognition from sensor data plays a vital role in health monitoring and elderly-care service monitoring. Although tremendous progress has been made in using sensor technology to collect activity data, recognition remains challenging due to the pervasive nature of the activities. In this paper, we present the Convolutional Neural Network (CNN) model entered by our team DataDrivers_BD in "The 2nd Nurse Care Activity Recognition Challenge Using Lab and Field Data", a challenge made difficult by the similarity among the tasks and, conversely, the dissimilarity among users' working patterns for a given task. Since a CNN can extract informative features automatically, it has become one of the most prominent methods in activity recognition. Our extensive experiments on the nurse care activity recognition challenge dataset achieved a significant accuracy of 91.59%, outperforming existing state-of-the-art algorithms.
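The abstract does not publish the architecture, so the following is only a minimal numpy sketch of the core CNN ingredient such models rely on: a multi-channel 1-D convolution with ReLU sliding over a window of 3-axis accelerometer samples. The filter count, width, and window length are illustrative assumptions.

```python
import numpy as np

def conv1d_valid(window, kernels, bias):
    """Minimal 'valid' 1-D convolution over a multi-channel sensor window.

    window:  (T, C)    T time steps x C channels (e.g. 3 accelerometer axes)
    kernels: (K, W, C) K filters of width W spanning all C channels
    bias:    (K,)
    Returns (T - W + 1, K) feature maps after a ReLU.
    """
    T, C = window.shape
    K, W, _ = kernels.shape
    out = np.zeros((T - W + 1, K))
    for t in range(T - W + 1):
        patch = window[t:t + W]  # (W, C) slice under the filter
        # Dot each filter with the patch over both the time and channel axes.
        out[t] = np.tensordot(kernels, patch, axes=([1, 2], [0, 1])) + bias
    return np.maximum(out, 0.0)  # ReLU activation
```

In a full model, several such layers would be stacked and followed by pooling and a softmax classifier over the nurse-care activity classes.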
Citations: 5
Spinal curve assessment of idiopathic scoliosis with a small dataset via a multi-scale keypoint estimation approach
Tianyun Liu, Yukang Yang, Yu Wang, Ming Sun, Wenhui Fan, Cheng Wu, C. Bunger
DOI: 10.1145/3410530.3414317
Abstract: Idiopathic scoliosis (IS) is the most common type of spinal deformity, leading to severe pain and potential heart and lung damage. Clinical diagnosis and treatment strategies for IS depend heavily on radiographic assessment of the spinal curve. With improvements in image recognition via deep learning, learning-based methods can be applied to facilitate clinical decision-making. However, these methods usually require sufficiently large training datasets with precise annotation, which is laborious and time-consuming, especially for medical images. Moreover, medical images of severe IS often contain blurry and occluded regions, making strict annotation of the spinal curve even more difficult. To address these challenges, we use a dot-annotation approach to annotate the medical images simply rather than precisely. We then design a multi-scale keypoint estimation approach that incorporates Squeeze-and-Excitation (SE) blocks to improve the representational capacity of the model, achieving spinal-curve assessment without a large dataset. The proposed approach is the first to use a pose-estimation framework to detect spine keypoints from simple annotations and a small dataset. Finally, we conduct experiments on a collected clinical dataset; the results show that our approach outperforms mainstream approaches.
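A Squeeze-and-Excitation block, as cited here, recalibrates channels by global-average-pooling the feature map, passing the pooled vector through a two-layer bottleneck, and scaling each channel by the resulting sigmoid weight. Below is a minimal numpy sketch of that generic mechanism; the projection shapes and reduction ratio are illustrative, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel recalibration (numpy sketch).

    feature_map: (H, W, C)
    w1: (C, C // r)  squeeze projection for reduction ratio r
    w2: (C // r, C)  excitation projection back to C channels
    """
    # Squeeze: global average pooling over the spatial dims -> (C,)
    z = feature_map.mean(axis=(0, 1))
    # Excitation: bottleneck MLP, ReLU then sigmoid -> per-channel weights in (0, 1)
    s = sigmoid(np.maximum(z @ w1, 0.0) @ w2)
    # Recalibrate: scale each channel of the feature map by its weight
    return feature_map * s
```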
Citations: 3
OfficeBP: noninvasive continuous blood pressure monitoring based on PPT in office environment
M. Guo, Hongbo Ni, Alex Q. Chen
DOI: 10.1145/3410530.3414398
Abstract: Blood pressure (BP), a crucial vital sign, reflects the physical state of the cardiovascular system. Currently, BP is mainly measured by capturing changes in vessel pressure with cuff sensors, a manual operation that cannot provide continuous monitoring. In this work, we developed OfficeBP, a novel non-intrusive BP monitoring system for a typical office environment. OfficeBP relies on measuring the pulse transit time (PTT), the time the pulse takes to propagate from a proximal arterial site to a distal site within one heartbeat. To calculate the PTT, the user's face and thumb fingertip are regarded as the start and end points, respectively. A twin-channel PPG sensing system is presented: the fingertip pulse is recorded by photoplethysmography (PPG) using a low-cost photoelectric sensor integrated into a mouse, while the facial pulse is acquired via remote PPG (rPPG) by applying image processing to facial video frames captured by a commercial off-the-shelf camera. OfficeBP was evaluated on 11 participants under different working conditions, covering external illumination and personal internal factors, and achieved an RMSE of 4.81 mmHg for diastolic blood pressure and 5.35 mmHg for systolic blood pressure, demonstrating the feasibility of the system in an office environment.
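A common way to obtain PTT from two pulse waveforms is to take the lag that maximizes their cross-correlation, then map PTT to BP with a per-user calibrated inverse-PTT model. The sketch below illustrates that generic recipe, not OfficeBP's exact pipeline; the calibration coefficients `a` and `b` are hypothetical placeholders that a real system would fit during calibration.

```python
import numpy as np

def pulse_transit_time(proximal, distal, fs):
    """Estimate PTT (seconds) as the lag maximizing the cross-correlation
    between the proximal (e.g. face rPPG) and distal (e.g. fingertip PPG)
    waveforms, both sampled at fs Hz; the distal pulse lags the proximal one.
    """
    proximal = proximal - np.mean(proximal)
    distal = distal - np.mean(distal)
    corr = np.correlate(distal, proximal, mode="full")
    lag = np.argmax(corr) - (len(proximal) - 1)  # samples distal lags proximal
    return lag / fs

def bp_from_ptt(ptt, a, b):
    """Inverse-PTT blood pressure model: BP = a / PTT + b.
    a, b are per-user calibration coefficients (hypothetical here)."""
    return a / ptt + b
```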
Citations: 1
Lifelog visualization based on social and physical activities
Akane Okuno, Y. Sumi
DOI: 10.1145/3410530.3414377
Abstract: This paper presents the visualization of a lifelog based on the amount of social and physical activity, for well-being. The motivation is to make users aware of their social, physical, and moderate activities, supporting behavioral change toward a more comfortable way of spending their days. We conducted three experiments to examine the feasibility of measuring and visualizing daily activities. We classified one student's various daily activities to observe trends in activity levels and classes, examined individual differences among three people sharing the same spatiotemporal space, and finally examined how one student's activity changes over half a day can be visualized.
Citations: 3
CML-IOT 2020: the second workshop on continual and multimodal learning for internet of things
Susu Xu, Shijia Pan, Tong Yu
DOI: 10.1145/3410530.3414613
Abstract: With the deployment of the Internet of Things (IoT), large numbers of sensors are connected to the Internet, providing high-volume, streaming, multimodal data. These data have distinct statistical characteristics across time and sensing modalities, which traditional learning methods hardly capture. Continual and multimodal learning allows the integration, adaptation, and generalization of knowledge learned from heterogeneous experiential data to new situations, and is therefore an important step toward efficient ubiquitous computing on IoT devices. The major challenges in combining continual and multimodal learning on real-world data include: 1) how to fuse and transfer knowledge between multimodal data under constrained computational resources; 2) how to learn continually despite missing, imbalanced, or noisy data under constrained computational resources; 3) how to effectively preserve privacy and retain security when learning from streaming, multimodal data collected by multiple stakeholders; and 4) how to develop large-scale distributed learning systems that learn efficiently from continual, multimodal data. We organized this workshop to bring together people working in different disciplines to tackle these challenges. The workshop explores the intersection and combination of continual machine learning and multimodal modeling, with applications in the Internet of Things, and welcomes work addressing these issues in different applications and domains as well as algorithmic and systematic approaches to leveraging continual learning on multimodal data. We further seek to develop a community that systematically handles the streaming multimodal data widely available in real-world ubiquitous computing systems. In 2019, we held the First Workshop on Continual and Multimodal Learning for Internet of Things (https://cmliot2019.github.io/) with UbiComp 2019 in London, UK. The first workshop accepted 12 papers from 17 submissions; its one-day agenda included 3 sessions and attracted around 20 attendees from academia and industry.
Citations: 1
Personal laughter archives: reflection through visualization and interaction
Kimiko Ryokai, Julia Park, Wesley Hanwen Deng
DOI: 10.1145/3410530.3414419
Abstract: We present our ongoing effort to capture, represent, and interact with the sounds of our loved ones' laughter, offering unique opportunities to celebrate the positive affect in our shared lived experiences. We present an informal evaluation of laughter visualizations and argue for applications in ubiquitous computing scenarios, including Mobile Augmented Reality (MAR).
Citations: 3
Handwriting behavior as a self-confidence discriminator
Takanori Maruichi, Taichi Uragami, Andrew W. Vargo, K. Kise
DOI: 10.1145/3410530.3414383
Abstract: Feedback based on the combination of self-confidence and the correctness of an answer can help learners improve their learning efficiency. In this study, we propose a self-confidence estimation method using simple touch up/move/down events that can be measured in a classroom environment. We recorded handwriting behavior while participants answered vocabulary questions with a tablet and stylus, and estimated their self-reported confidence. The resulting method predicts the user's self-confidence with up to 73% accuracy.
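A minimal sketch of how per-answer features might be derived from touch down/up timestamps. The feature set (stroke count, writing time, pause ratio) is our illustration, not the paper's; `move` events, which could feed stroke-speed features, are ignored here.

```python
def stroke_features(events):
    """Per-answer handwriting features from (timestamp, type) touch events,
    where type is 'down', 'move', or 'up'. Timestamps are in seconds."""
    strokes, pauses = [], []
    down_t = last_up = None
    for t, kind in events:
        if kind == "down":
            if last_up is not None:
                pauses.append(t - last_up)  # gap between pen-up and next pen-down
            down_t = t
        elif kind == "up" and down_t is not None:
            strokes.append(t - down_t)      # duration of one pen stroke
            last_up = t
    total = sum(strokes) + sum(pauses)
    return {
        "stroke_count": len(strokes),
        "write_time": sum(strokes),
        "pause_ratio": (sum(pauses) / total) if total else 0.0,
    }
```

Such features could then be fed to any off-the-shelf classifier trained against self-reported confidence labels.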
Citations: 3
Pose evaluation for dance learning application using joint position and angular similarity
Jae-Jun Lee, Jong-Hyeok Choi, Tserenpurev Chuluunsaikhan, A. Nasridinov
DOI: 10.1145/3410530.3414402
Abstract: In this paper, we propose a dance-pose evaluation method for a smartphone dance learning application. Methods have previously been proposed for classifying and comparing dance gestures using 3-D joint information obtained from a 3-D camera, but they are ill-suited to accurate dance-pose evaluation: they simply compare the similarity between dance gestures without evaluating the exact pose. To solve this problem, we propose a new method, runnable on a smartphone, that performs an affine transformation and then evaluates both joint-position and joint-angle information. In addition, through comparative experiments on a smartphone with real-world datasets, we show that the proposed method is suitable for dance learning applications.
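To make the joint-position-plus-joint-angle idea concrete, the sketch below normalizes two 2-D poses (a simplified translation-and-scale stand-in for the paper's affine transformation), then combines mean joint-position error with mean joint-angle difference. The normalization, angle triples, and weights are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def normalize_pose(joints):
    """Translate a pose to its centroid and scale to unit RMS joint distance."""
    joints = np.asarray(joints, dtype=float)
    centered = joints - joints.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

def joint_angle(a, b, c):
    """Angle at joint b formed by segments b->a and b->c, in radians."""
    u = np.asarray(a) - np.asarray(b)
    v = np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def pose_score(ref, test, triples, w_pos=0.5, w_ang=0.5):
    """Weighted positional + angular dissimilarity; 0 means a perfect match.
    triples: (i, j, k) joint indices defining the angles to compare."""
    r, s = normalize_pose(ref), normalize_pose(test)
    pos_err = np.linalg.norm(r - s, axis=1).mean()
    ang_err = np.mean([abs(joint_angle(*r[list(t)]) - joint_angle(*s[list(t)]))
                       for t in triples])
    return w_pos * pos_err + w_ang * ang_err
```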
Citations: 6
Blink rate variability: a marker of sustained attention during a visual task
R. Gavas, M. B. Sheshachala, D. Chatterjee, R. K. Ramakrishnan, V. Viraraghavan, Achanna Anil Kumar, M. Chandra
DOI: 10.1145/3410530.3414431
Abstract: Eye blinks are vital components of human gaze and are used to assess human behaviour. We analyze the variability of inter-blink durations, termed blink rate variability (BRV), to study sustained attention in visual tasks. A uniformly sampled BRV series is reconstructed from gaze data recorded with an eye tracker, and a number of features are extracted from it, including a new feature we propose based on the Pareto principle. Results show that skewness, kurtosis, mean frequency, and Pareto frequency are good indicators of sustained attention. We observed that as attention increases, the power of the BRV series tends toward a normal distribution while the mean and Pareto frequencies decrease. Results were generated on a small dataset as a proof of concept for our hypothesis that BRV is a potential biomarker of sustained attention in visual tasks.
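The inter-blink-interval statistics named above can be sketched as follows. Note that the "Pareto fraction" here is our reading of an 80/20-style feature, since the abstract does not give the paper's exact definition; the feature names are likewise illustrative.

```python
import numpy as np

def brv_features(blink_times):
    """Statistics of the blink-rate-variability (inter-blink interval) series.

    blink_times: sorted blink timestamps in seconds (at least 3 blinks,
    with non-identical intervals so the standard deviation is non-zero).
    """
    ibi = np.diff(np.asarray(blink_times, dtype=float))  # inter-blink intervals
    mu, sd = ibi.mean(), ibi.std()
    skew = np.mean(((ibi - mu) / sd) ** 3)               # distribution asymmetry
    kurt = np.mean(((ibi - mu) / sd) ** 4) - 3.0         # excess kurtosis
    # "Pareto fraction": smallest fraction of intervals that covers 80% of
    # the total elapsed time (an assumption standing in for the paper's
    # Pareto-principle feature).
    sorted_ibi = np.sort(ibi)[::-1]
    cum = np.cumsum(sorted_ibi) / ibi.sum()
    pareto_frac = (np.argmax(cum >= 0.8) + 1) / len(ibi)
    return {"mean_ibi": mu, "skewness": skew,
            "excess_kurtosis": kurt, "pareto_fraction": pareto_frac}
```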
Citations: 5