Proceedings of the 4th International Workshop on Sensor-based Activity Recognition and Interaction: Latest Publications

Co-Creating Emotionally Aligned Smart Homes Using Social Psychological Modeling
Julie M. Robillard, Aaron W. Li, Shilpa Jacob, Dan Wang, Xin Zou, J. Hoey
DOI: 10.1145/3134230.3134242
Abstract: Smart homes have long been proposed as a viable mechanism to promote independent living for older adults in the home environment. Despite tremendous progress on the technology front, there has been limited uptake by end-users. A critical barrier to the adoption of smart home technology by older adults is the lack of engagement of end-users in the development process and the resulting one-size-fits-all solutions that fail to recognize the specific needs of the older adult demographic. In this paper, we propose a novel online platform aimed at closing the gap between older adults and technology developers: ASPIRE (Alignment of Social Personas in Inclusive Research Engagement). ASPIRE is an online collaborative network (OCN) that allows older adults, care partners, and developers to engage in the design and development of a joint shared product: the smart-home solution. To promote the adoption of the OCN and the alignment of this collaborative network with the values and emotional needs of its end-users, ASPIRE harnesses a social-psychological theory of identity. This paper presents ASPIRE as a conceptual model, with a preliminary implementation.
Citations: 4

The SPHERE Experience
I. Craddock
DOI: 10.1145/3134230.3135629
Abstract: The talk will describe the experience of researchers and the public alike in co-producing and deploying, at scale, a bespoke wearable, video, and environmental sensor system for activity monitoring at home. It will consider the health requirements that drove the development; the design constraints imposed by users, technology, and budgets; and how the initial design, production, and installation have progressed. Data from a number of local family homes will be presented, along with an early view of what can be seen in the analysed data.
Citations: 0

Preliminary Evaluation of a Framework for Overhead Skeleton Tracking in Factory Environments using Kinect
M. M. Marinho, Yuki Yatsushima, T. Maekawa, Y. Namioka
DOI: 10.1145/3134230.3134232
Abstract: This paper presents a preliminary evaluation of a framework that allows an overhead RGBD camera to segment and track workers' skeletons in an unstructured factory environment. The default Kinect skeleton-tracking algorithm was developed using front-view artificial depth images generated from a 3D model of a person in an empty room. The proposed framework is inspired by this concept and works by capturing motion data of worker movements while performing a real factory task. That motion data is matched to the 3D model of the worker. In a novel approach, the largest elements in the workspace (e.g., desks, racks) are modeled with simple shapes, and the artificial depth images are generated in a "simplified workspace" rather than an "empty workspace". We show in preliminary experiments that adding the simplified models during training can increase, ceteris paribus, segmentation accuracy by over 3 times and recall by about 1.5 times when the workspace is highly cluttered. Evaluation uses real depth images obtained in a factory environment, with manually segmented images as ground truth.
Citations: 2

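The reported gains are measured as per-pixel segmentation accuracy and recall against the manually segmented ground truth. A minimal sketch of how such metrics can be computed with NumPy; the function and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray):
    """Per-pixel accuracy and recall for binary worker/background masks.

    pred, truth: boolean arrays of equal shape, True = worker pixel.
    """
    tp = np.logical_and(pred, truth).sum()   # worker pixels found
    fn = np.logical_and(~pred, truth).sum()  # worker pixels missed
    accuracy = (pred == truth).mean()
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return accuracy, recall

# Toy 4x4 example: the predictor finds half of the worker's pixels.
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:2] = True
print(segmentation_metrics(pred, truth))  # (0.875, 0.5)
```
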
Bottom-up Investigation: Human Activity Recognition Based on Feet Movement and Posture Information
Rafael de Pinho André, Pedro Diniz, H. Fuks
DOI: 10.1145/3134230.3134240
Abstract: Human Activity Recognition (HAR) research on feet posture and movement information has seen intense growth during the last five years, drawing the attention of fields such as healthcare systems and context inference. In this work, we tested our six-class machine learning HAR classifier using a foot-based wearable device in an experiment involving 11 volunteers. The classifier uses a Random Forest algorithm with leave-one-out cross-validation, achieving an average accuracy of 93.34%. Aiming at replicable research, we provide full hardware information, system source code, and a public-domain dataset consisting of 800,000 samples.
Citations: 16

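The abstract names the classifier and validation scheme outright, so a sketch of that pipeline in scikit-learn is straightforward. Here leave-one-out is read as leave-one-subject-out across the 11 volunteers; the features and labels are random placeholders rather than the published dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1100, 12))         # per-window foot-sensor features (placeholder)
y = rng.integers(0, 6, size=1100)       # six activity classes
groups = np.repeat(np.arange(11), 100)  # which of the 11 volunteers produced each window

# Train on 10 volunteers, test on the held-out one, rotate through all 11.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy: {scores.mean():.2%}")
```
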
Knowledge Extraction from Task Narratives
Kristina Yordanova, Carlos Monserrat Aranda, David Nieves, J. Hernández-Orallo
DOI: 10.1145/3134230.3134234
Abstract: One of the major difficulties in activity recognition stems from the lack of a model of the world in which activities and events are to be recognised. When the domain is fixed and repetitive, we can manually include this information using some kind of ontology or set of constraints. On many occasions, however, there are many new situations for which only some knowledge is common and many other domain-specific relations have to be inferred. Humans are able to do this from short descriptions in natural language describing the scene or the particular task to be performed. In this paper, we apply a tool that extracts situation models and rules from natural-language descriptions to a series of exercises in a surgical domain, in which we want to identify the sequences of events that are not possible, those that are possible (but incorrect according to the exercise), and those that correspond to the exercise or plan expressed by the description in natural language. The preliminary results show that a large amount of valuable knowledge can be extracted automatically, which could be used to express domain knowledge and exercise descriptions in languages such as event calculus, helping to bridge these high-level descriptions with the low-level events recognised from videos.
Citations: 2

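To make the three-way distinction concrete (impossible, possible but incorrect, matching the plan), here is a toy rule check over event sequences. The surgical events and precondition rules below are invented for illustration; the paper's point is that such rules can be extracted automatically from the natural-language narrative:

```python
# Hypothetical preconditions a rule-extraction tool might produce:
# an event is possible only if its preconditions hold in the current state.
PRECONDITIONS = {"grasp": set(), "cut": {"grasped"}, "release": {"grasped"}}

def classify(sequence, plan):
    state = set()
    for event in sequence:
        if not PRECONDITIONS[event] <= state:
            return "impossible"
        if event == "grasp":
            state.add("grasped")
        elif event == "release":
            state.discard("grasped")
    return "matches plan" if sequence == plan else "possible but incorrect"

plan = ["grasp", "cut", "release"]
print(classify(["cut"], plan))                      # impossible: nothing grasped yet
print(classify(["grasp", "release"], plan))         # possible but incorrect
print(classify(["grasp", "cut", "release"], plan))  # matches plan
```
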
Exercise Monitoring On Consumer Smart Phones Using Ultrasonic Sensing
Biying Fu, Dinesh Vaithyalingam Gangatharan, Arjan Kuijper, Florian Kirchbuchner, Andreas Braun
DOI: 10.1145/3134230.3134238
Abstract: Quantified self has been a trend over the last several years. An increasing number of people use devices such as smartwatches or smartphones to log activities of daily life, including step count or vital information. However, most of these devices have to be worn by the user during the activities, as they rely on integrated motion sensors. Our goal is to create a technology that enables similar precision with remote sensing, based on common sensors installed in every smartphone, in order to enable ubiquitous application. We have created a system that uses the Doppler effect in ultrasound frequencies to detect motion around the smartphone. We propose a novel use case to track exercises, based on several feature extraction methods and machine learning classification. We conducted a study with 14 users, achieving an accuracy between 73% and 92% for the different exercises.
Citations: 17

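The sensing principle is that a tone played from the phone's speaker reflects off a moving body with a Doppler shift that the microphone can pick up. A minimal sketch of extracting one such feature, the energy in the sidebands around the pilot tone; the tone frequency, sampling rate, window size, and band limits are assumptions for illustration, not the paper's parameters:

```python
import numpy as np

FS = 44_100       # microphone sampling rate (Hz), assumed
F_TONE = 20_000   # emitted near-ultrasonic pilot tone (Hz), assumed
WINDOW = 4096     # samples per analysis frame

def doppler_band_energy(frame: np.ndarray, max_shift_hz: float = 200.0) -> float:
    """Energy in the Doppler sidebands around the pilot tone for one frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / FS)
    offset = np.abs(freqs - F_TONE)
    sidebands = (offset > 30) & (offset < max_shift_hz)  # skip the tone itself
    return spectrum[sidebands].sum()

# Simulated frame: pilot tone plus a weak reflection shifted by +60 Hz,
# as a limb moving toward the phone would produce.
t = np.arange(WINDOW) / FS
frame = np.sin(2 * np.pi * F_TONE * t) + 0.05 * np.sin(2 * np.pi * (F_TONE + 60) * t)
print(doppler_band_energy(frame))
```
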
Smartwatch based Respiratory Rate and Breathing Pattern Recognition in an End-consumer Environment
John Trimpop, Hannes Schenk, G. Bieber, Friedrich Lämmel, Paul Burggraf
DOI: 10.1145/3134230.3134235
Abstract: Smartwatches have become part of everyday life and offer the practical and technical means to collect medical body parameters alongside the usual fitness data. In this paper, we present an evaluation of the respiratory rate detection of the &gesund system. &gesund is a health assistance system that automatically records detailed long-term health data with end-consumer smartwatches. The &gesund core is based on technology exclusively licensed from the Fraunhofer Institute of applied research. In our study, we compare the &gesund algorithms for respiration parameter detection during low-amplitude activities against data recorded from actual sleep laboratory patients. The results show accuracies of up to 89%. We are confident that wearable technologies will be used for medical health assistance in the near future.
Citations: 11

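The &gesund algorithms themselves are licensed and not described here, but a common baseline for wrist-worn respiration sensing is to detrend the accelerometer signal and take the dominant frequency in the typical breathing band. A sketch under that assumption (sampling rate, band limits, and signal are illustrative):

```python
import numpy as np

FS = 25.0  # assumed smartwatch accelerometer sampling rate (Hz)

def respiratory_rate_bpm(acc: np.ndarray) -> float:
    """Breaths per minute from the dominant 0.1-0.5 Hz component
    (6-30 breaths/min) of a detrended single-axis accelerometer signal."""
    acc = acc - acc.mean()
    spectrum = np.abs(np.fft.rfft(acc * np.hanning(len(acc))))
    freqs = np.fft.rfftfreq(len(acc), d=1.0 / FS)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Simulated 60 s of quiet lying: 0.25 Hz chest motion plus sensor noise.
t = np.arange(int(60 * FS)) / FS
acc = 0.02 * np.sin(2 * np.pi * 0.25 * t) + 0.005 * np.random.randn(len(t))
print(respiratory_rate_bpm(acc))  # ~15 breaths per minute
```
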
Detecting Process Transitions from Wearable Sensors: An Unsupervised Labeling Approach
S. Böttcher, P. Scholl, Kristof Van Laerhoven
DOI: 10.1145/3134230.3134233
Abstract: Authoring protocols for manual tasks such as following recipes, manufacturing processes, or laboratory experiments requires significant effort. This paper presents a system that estimates individual procedure transitions from the user's physical movement and gestures, recorded with inertial motion sensors. Combined with egocentric or external video recordings, this facilitates efficient review and annotation of video databases. We investigate different clustering algorithms on wearable inertial sensor data recorded in parallel with video data, to automatically create transition marks between task steps. The goal is to match these marks to the transitions given in a description of the workflow, thus creating navigation cues to browse video repositories of manual work. To evaluate the performance of the unsupervised clustering algorithms, the automatically generated marks are compared to labels created by human experts on publicly available datasets. Additionally, we tested the approach on a novel dataset from a manufacturing lab environment, describing an existing sequential manufacturing process.
Citations: 3

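As a minimal sketch of the unsupervised labeling idea: cluster sliding-window statistics of an inertial signal and mark a candidate transition wherever the cluster label changes. Window length, features, and cluster count below are illustrative choices, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

def transition_marks(signal: np.ndarray, fs: float, win_s: float = 2.0, k: int = 3):
    """Return times (s) where the cluster label of consecutive windows changes."""
    win = int(win_s * fs)
    windows = signal[: len(signal) // win * win].reshape(-1, win)
    feats = np.c_[windows.mean(axis=1), windows.std(axis=1)]  # two simple features
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return (np.flatnonzero(np.diff(labels)) + 1) * win_s

# Three synthetic "process steps" with different motion intensity.
rng = np.random.default_rng(1)
sig = np.concatenate([rng.normal(0.0, s, 500) for s in (0.2, 1.0, 0.5)])
print(transition_marks(sig, fs=50.0))  # ~[10.0, 20.0] seconds
```
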
Smarter Smart Homes with Social and Emotional Intelligence
J. Hoey
DOI: 10.1145/3134230.3134243
Abstract: Pervasive intelligent assistive technologies promise to alleviate some of the increasing burden of care for persons with age-related cognitive disabilities, such as Alzheimer's disease. However, despite tremendous progress, many attempts to develop and implement real-world applications have failed to become widely adopted. In this talk, I will argue that a key barrier to the adoption of these technologies is a lack of alignment, on a social and emotional level, between the technology and its users. I argue that products which do not deeply embed social and emotional intelligence will fail to align with the needs and values of target end-users, and will thereby have only limited utility. I will then introduce a socio-cultural reasoning engine called "BayesACT" that can be used to provide this level of affective reasoning. BayesACT arises from the symbolic interactionist tradition in sociological social psychology, in which culturally shared affective and cognitive meanings provide powerful predictive insights into human action. BayesACT can learn these shared meanings during an interaction and can tailor interventions to specific individuals in a way that ensures smoother and more effective uptake and response. I will give an introduction to this reasoning engine and discuss how affective reasoning could be used to create truly adaptive assistive technologies.
Citations: 0

Real-time Embedded Recognition of Sign Language Alphabet Fingerspelling in an IMU-Based Glove
Chaithanya Kumar Mummadi, Frederic Philips Peter Leo, Keshav Deep Verma, Shivaji Kasireddy, P. Scholl, Kristof Van Laerhoven
DOI: 10.1145/3134230.3134236
Abstract: Data gloves have numerous applications, including enabling novel human-computer interaction and automated recognition of large sets of gestures, such as those used for sign language. For most of these applications, it is important to build mobile and self-contained systems that run without the need for frequent communication with additional services on a back-end server. We present in this paper a data glove prototype, based on multiple small Inertial Measurement Units (IMUs), with a glove-embedded classifier for French Sign Language. In an extensive set of experiments with 57 participants, our system was tested by repeatedly fingerspelling the French Sign Language (LSF) alphabet. Results show that our system is capable of detecting the LSF alphabet with a mean accuracy of 92% and an F1 score of 91%, with all detections performed on the glove within 63 milliseconds.
Citations: 22

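For reference, the reported mean accuracy and F1 can be computed per fingerspelled letter with scikit-learn; the letter sequences below are toy stand-ins for the glove's ground truth and predictions:

```python
from sklearn.metrics import accuracy_score, f1_score

truth = list("bonjour")
pred  = list("bonjoup")  # one confusion: the final r is read as p

print(accuracy_score(truth, pred))                              # 6/7 ≈ 0.857
print(f1_score(truth, pred, average="macro", zero_division=0))  # ≈ 0.714
```
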