Proceedings of the 5th International Workshop on Sensor-based Activity Recognition and Interaction: Latest Publications

Theodor: A Step Towards Smart Home Applications with Electronic Noses
C. Dang, A. Seiderer, E. André
DOI: 10.1145/3266157.3266215
Abstract: This paper presents preliminary results of the ongoing project TheOdor, which explores the potential of electronic noses built from commodity gas sensors (MOS, MEMS) for smart-home applications, for example, classifying human activities based on the odors those activities generate. We describe the system and its components and report classification results from first validation experiments.
Citations: 12
Activity Recognition using Head Worn Inertial Sensors
Johann-Peter Wolff, Florian Grützmacher, A. Wellnitz, C. Haubelt
DOI: 10.1145/3266157.3266218
Abstract: Human activity recognition using inertial sensors is an increasingly common feature of smartphones and smartwatches, providing information on the sports and physical activities of each individual. But while the position in which a smartphone is worn varies between persons and circumstances, a smartwatch moves constantly, in rhythm with its user's arms. Both problems make activity recognition less reliable. Attaching an inertial sensor to the head provides reliable information on the movements of the whole body while not being superimposed by many additional movements; this can be achieved by fixing sensors to glasses, helmets, or headphones. In this paper, we present a system using head-mounted inertial sensors for human activity recognition. We compare it to existing research work and show possible advantages and disadvantages of positioning a single sensor on the head to recognize physical activities. Furthermore, we evaluate the benefits of using different sensor configurations for activity recognition.
Citations: 8
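A typical first stage for this kind of inertial-sensor activity recognition is to cut the accelerometer stream into overlapping windows and compute simple statistics per window. The sketch below is a generic illustration of that windowed-feature step, not the authors' pipeline; the window length, hop, and feature set are assumptions.

```python
import numpy as np

def window_features(acc, fs=50, win_s=2.0, hop_s=1.0):
    """Extract simple per-window statistics from a 3-axis accelerometer
    stream of shape (n_samples, 3). Returns one feature vector per window."""
    win, hop = int(win_s * fs), int(hop_s * fs)
    feats = []
    for start in range(0, len(acc) - win + 1, hop):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)  # per-sample acceleration magnitude
        feats.append(np.concatenate([
            w.mean(axis=0),                                  # mean per axis
            w.std(axis=0),                                   # std per axis
            [mag.mean(), mag.std(), mag.max() - mag.min()],  # magnitude stats
        ]))
    return np.asarray(feats)
```

The resulting feature matrix would then be fed to any standard classifier; 10 s of 50 Hz data yields 9 windows of 9 features each with these settings.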
Combining off-the-shelf Image Classifiers with Transfer Learning for Activity Recognition
Amit Kumar, Kristina Yordanova, T. Kirste, Mohit Kumar
DOI: 10.1145/3266157.3266219
Abstract: Human Activity Recognition (HAR) plays an important role in many real-world applications, and various techniques have been proposed for sensor-based HAR in daily health monitoring, rehabilitative training, and disease prevention. However, non-visual sensors in general, and wearable sensors in particular, have several limitations: acceptability and willingness to wear them, battery life, ease of use, size, and sensor effectiveness. Vision-based human activity recognition is therefore a more viable option, since its versatility would allow applications to be deployed in a wide range of domains. The most popular vision-based technique, deep learning, however requires huge domain-specific datasets for training, which are time-consuming and expensive to collect. To address this problem, this paper proposes a transfer learning technique for vision-based HAR that reuses already trained deep learning models. A new stochastic model is developed, borrowing the concept of Dirichlet allocation from Latent Dirichlet Allocation (LDA), to infer the posterior distribution of the variables relating the deep learning classifiers' predicted labels to the corresponding activities. Results show that the model achieves an average training accuracy of 95.43%, compared to 74.88% for a decision tree and 61.4% for an SVM.
Citations: 3
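The core idea of relating a pretrained classifier's labels to activities can be illustrated with a much simpler probabilistic sketch than the paper's LDA-inspired model: treat observed object labels as evidence and apply Bayes' rule. The labels, activities, and likelihood values below are hypothetical, chosen only for illustration.

```python
import numpy as np

# Hypothetical labels from an off-the-shelf image classifier, and activities.
labels = ["cup", "stove", "book"]
activities = ["drinking", "cooking", "reading"]

# Assumed P(label | activity); each row sums to 1.
likelihood = np.array([
    [0.7, 0.2, 0.1],   # drinking
    [0.2, 0.7, 0.1],   # cooking
    [0.1, 0.1, 0.8],   # reading
])
prior = np.full(len(activities), 1 / 3)  # uniform activity prior

def posterior(observed):
    """P(activity | observed labels), assuming labels are conditionally
    independent given the activity (a naive-Bayes simplification)."""
    post = prior.copy()
    for lab in observed:
        post *= likelihood[:, labels.index(lab)]
    return post / post.sum()

p = posterior(["cup", "cup", "stove"])
```

With these made-up numbers, two "cup" sightings and one "stove" sighting put most of the posterior mass on "drinking"; the paper's model replaces the fixed likelihood table with an inferred Dirichlet-based distribution.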
Fewer Samples for a Longer Life Span: Towards Long-Term Wearable PPG Analysis
Florian Wolling, Kristof Van Laerhoven
DOI: 10.1145/3266157.3266209
Abstract: Photoplethysmography (PPG) sensors have become a prevalent feature of current wearables, as the cost and size of PPG modules have dropped significantly. Research in the analysis of PPG data has recently expanded beyond fast and accurate heart rate characterization, into the adaptive handling of artifacts within the signal and even the capture of respiration rate. In this paper, we instead explore using state-of-the-art PPG sensor modules for long-term wearable deployment and the observation of trends over minutes rather than seconds. By focusing specifically on lowering the sampling rate and analyzing the frequency spectrum alone, our approach minimizes the costly illumination-based sensing; it can detect the dominant frequencies of heart rate and respiration rate, and also makes it possible to infer on the activity of the sympathetic nervous system. We show in two experiments that such detections and measurements can still be achieved at sampling rates as low as 10 Hz, within a power-efficient platform. This approach enables miniature sensor designs that monitor average heart rate, respiration rate, and sympathetic nerve activity over longer stretches of time.
Citations: 7
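The spectral approach described here, picking dominant frequencies out of a low-rate PPG recording, can be sketched with a plain FFT. This is a minimal illustration of the general technique, not the authors' implementation; the band limits and synthetic signal are assumptions.

```python
import numpy as np

def dominant_freqs(x, fs, bands):
    """Return the dominant frequency within each (lo, hi) band of the
    amplitude spectrum of signal x sampled at fs Hz."""
    spec = np.abs(np.fft.rfft(x - x.mean()))   # remove DC, amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    out = []
    for lo, hi in bands:
        m = (freqs >= lo) & (freqs <= hi)
        out.append(float(freqs[m][np.argmax(spec[m])]))
    return out

# Synthetic 60 s PPG-like signal at only 10 Hz:
# 1.2 Hz pulse component (72 bpm) plus a 0.25 Hz respiration component.
fs = 10
t = np.arange(0, 60, 1 / fs)
x = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)
hr, rr = dominant_freqs(x, fs, bands=[(0.7, 3.5), (0.1, 0.5)])
```

Since both heart rate and respiration sit well below the 5 Hz Nyquist limit of a 10 Hz stream, a minute-long window resolves both peaks cleanly, which is the intuition behind trading sampling rate for battery life.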
Real-Time Joint Axes Estimation of the Hip and Knee Joint during Gait using Inertial Sensors
Markus Nordén, Philipp Müller, T. Schauer
DOI: 10.1145/3266157.3266213
Abstract: Inertial Measurement Units (IMUs) have proven to be a promising candidate for joint kinematics assessment during human locomotion. The benefits of IMU-based joint angle measurement are ease of handling, flexibility, and low cost. A known limitation, however, is that the joint axes must be identified in terms of the attached IMUs' coordinate frames before IMU measurements can be decomposed into joint angles; conventionally, this requires careful alignment of the IMUs with respect to the body segments and/or calibration motions. In this paper, a novel approach is proposed to estimate the joint axes of the hip and knee joint during gait. Our method is easy to use, self-calibrating, and real-time capable, working solely on the IMU data obtained during gait. Going beyond prior methods, the algorithm exploits the periodicity of gait to handle motions with three rotational degrees of freedom (3-DoF). Experiments with 8 healthy subjects walking on a motor-driven treadmill have been conducted. The joint axes converged onto the expected axes in all trials, and the convergence times averaged less than 15 seconds.
Citations: 4
A Machine Learning Approach to Violin Bow Technique Classification: a Comparison Between IMU and MOCAP systems
D. Dalmazzo, S. Tassani, R. Ramírez
DOI: 10.1145/3266157.3266216
Abstract: Motion capture (MOCAP) systems have been used to analyze body motion and postures in biomedicine, sports, rehabilitation, and music. To compare the precision of low-cost motion-tracking devices (e.g. the Myo) with that of MOCAP systems in the context of music performance, we recorded MOCAP and Myo data of a top professional violinist executing four fundamental bowing techniques (Détaché, Martelé, Spiccato, and Ricochet). Using the recorded data, we applied machine learning techniques to train models to classify the four bowing techniques. Despite the intrinsic differences between the MOCAP and low-cost data, the Myo-based classifier achieved slightly higher accuracy than the MOCAP-based classifier. This result shows that it is possible to develop music-gesture learning applications based on low-cost technology that can be used in home environments by self-learning practitioners.
Citations: 6
Respiration Rate Estimation with Depth Cameras: An Evaluation of Parameters
Jochen Kempfle, Kristof Van Laerhoven
DOI: 10.1145/3266157.3266208
Abstract: Depth cameras are known to be capable of picking up the small changes in distance from a user's torso needed to estimate respiration rate. Several studies have shown that, under certain conditions, the respiration rate of a stationary user facing the camera can be accurately estimated from parts of the depth data. To date, however, it is not clear what factors might hinder deploying this technology in arbitrary settings, which areas of the torso need to be observed, and how readings are affected for persons at larger distances from the RGB-D camera. In this paper, we present a benchmark dataset consisting of point cloud data from a depth camera monitoring 7 volunteers at variable distances, with variable methods to pinpoint the person's torso, and at variable breathing rates. Our findings show that the respiration signal's signal-to-noise ratio becomes debilitating as the distance to the person approaches 4 metres, and that larger windows over the person's chest work particularly well. The sampling rate of the depth camera was also found to significantly impact signal quality.
Citations: 16
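One simple way to turn a chest window of a depth stream into a respiration rate, offered here purely as an illustrative sketch and not as the paper's method, is to average the depth values inside the region of interest per frame and read the breathing period off the autocorrelation of that 1-D signal. The ROI layout and frame rate below are assumptions.

```python
import numpy as np

def respiration_rate(depth_frames, roi, fs):
    """Estimate respiration rate (breaths/min) from a stack of depth frames
    of shape (n_frames, rows, cols). roi = (r0, r1, c0, c1) is the chest
    window; the per-frame mean depth forms a 1-D signal whose first
    autocorrelation peak gives the breathing period."""
    r0, r1, c0, c1 = roi
    sig = depth_frames[:, r0:r1, c0:c1].mean(axis=(1, 2))
    sig = sig - sig.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Search beyond a 1 s lag: breathing is slower than 60 breaths/min.
    lag = int(fs) + int(np.argmax(ac[int(fs):]))
    return 60.0 * fs / lag
```

Averaging over a larger ROI is exactly the "bigger windows work well" effect the abstract reports: it suppresses per-pixel depth noise before the periodicity estimate.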
Dense 3D Optical Flow Co-occurrence Matrices for Human Activity Recognition
Rawya Al-Akam, D. Paulus
DOI: 10.1145/3266157.3266220
Abstract: In this paper, a new activity recognition technique is introduced based on gray level co-occurrence matrices (GLCM) computed from the 3D dense optical flow of input RGB and depth videos. These matrices are one of the earliest techniques used for image texture analysis, representing the distribution of intensities and information about the relative positions of neighboring pixels in an image. We propose a new method to extract feature vectors, using the well-known Haralick features of GLCM matrices to describe the flow pattern: meaningful properties such as energy, contrast, homogeneity, entropy, correlation, and sum average capture the local spatial and temporal characteristics of the motion through the neighboring optical flow orientations and magnitudes. To evaluate the proposed method, we apply a recognition pipeline built on a bag of local spatial and temporal features, and compare the recognition accuracy of our method across three types of machine learning classifiers: random forest, support vector machine, and K-nearest neighbor. Experimental results on two well-known datasets (the Gaming dataset (G3D) and the Cornell Activity Dataset (CAD-60)) demonstrate that our method outperforms several widely employed spatial and temporal feature descriptors.
Citations: 6
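For readers unfamiliar with GLCMs, the construction and a few of the Haralick features named in the abstract can be written out in a few lines. This is a textbook 2-D sketch over a quantized image (here the authors apply the same idea to quantized optical-flow orientation/magnitude fields); the offset and feature subset are illustrative choices.

```python
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Normalized gray-level co-occurrence matrix of an integer-valued
    image for the pixel offset (dr, dc): P[i, j] is the probability that
    a pixel of level i has a neighbor of level j at that offset."""
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

def haralick(P):
    """A few classic Haralick texture features of a normalized GLCM."""
    i, j = np.indices(P.shape)
    nz = P > 0  # avoid log(0) in the entropy term
    return {
        "energy": float((P ** 2).sum()),
        "contrast": float(((i - j) ** 2 * P).sum()),
        "homogeneity": float((P / (1 + (i - j) ** 2)).sum()),
        "entropy": float(-(P[nz] * np.log2(P[nz])).sum()),
    }
```

Concatenating such features over several offsets yields the kind of flow-pattern descriptor the paper feeds into its bag-of-features pipeline.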
Exploring Accelerometer-based Step Detection by using a Wheeled Walking Frame
G. Bieber, Marian Haescher, Paul Hanschmann, Denys J. C. Matthies
DOI: 10.1145/3266157.3266212
Abstract: Step detection with accelerometers is a very common feature that smart wearables already include. However, when a wheeled walking frame (rollator) is used, current algorithms may be of limited use, since a different type of motion is being exerted. In this paper, we uncover these limitations of current wearables with a pilot study. Furthermore, we investigated accelerometer-based step detection for wheeled walking frame use, with an accelerometer mounted both to the frame and at the user's wrist. Our findings include knowledge on the signal propagation of each axis, on the required sensor quality, and on the impact of different surfaces and floor types. In conclusion, we outline a new step detection algorithm based on accelerometer input data. Our algorithm can significantly empower future off-the-shelf wearables to reliably detect the steps of elderly people using a wheeled walking frame. This can help to assess a person's state of health with regard to behavior and the motor system, and even to track the progress of certain diseases.
Citations: 7
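The baseline that such work improves upon, the step counter built into most wearables, is essentially a threshold detector on the acceleration magnitude with a refractory period. The sketch below shows that baseline (which fails for rollator gait, per the abstract), not the authors' new algorithm; the threshold and gap values are assumptions.

```python
import numpy as np

def count_steps(acc_mag, fs, thresh=1.2, min_gap_s=0.3):
    """Count steps as upward threshold crossings of the accelerometer
    magnitude signal (in g), enforcing a refractory gap so that one
    step impact is not counted twice."""
    min_gap = int(min_gap_s * fs)
    steps, last = 0, -min_gap
    for n in range(1, len(acc_mag)):
        crossed = acc_mag[n - 1] < thresh <= acc_mag[n]
        if crossed and n - last >= min_gap:
            steps += 1
            last = n
    return steps
```

With a rollator, the wrist rests on the handles and the impact peaks this detector relies on largely disappear, which is why the paper moves the sensor to the frame and redesigns the detection.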
Towards a task-driven framework for multimodal fatigue analysis during physical and cognitive tasks
K. Tsiakas, Michalis Papakostas, J. Ford, F. Makedon
DOI: 10.1145/3266157.3266222
Abstract: This paper outlines the development of a task-driven framework for multimodal fatigue analysis during physical and cognitive tasks. While fatigue is a common symptom across several chronic neurological diseases, such as multiple sclerosis (MS), traumatic brain injury (TBI), and cerebral palsy (CP), it remains poorly understood, for various reasons including its subjectivity and its variability amongst individuals. Towards this end, we propose a task-driven data collection framework for multimodal fatigue analysis in the domain of MS, combining behavioral, sensory, and subjective measures while users perform a set of both physical and cognitive tasks, including assessment tests and Activities of Daily Living (ADLs).
Citations: 4