{"title":"AmITV","authors":"N. Anyfantis, Evangelos Kalligiannakis, Achilleas Tsiolkas, A. Leonidis, Maria Korozi, Prodromos Lilitsis, M. Antona, C. Stephanidis","doi":"10.1145/3197768.3201548","DOIUrl":"https://doi.org/10.1145/3197768.3201548","url":null,"abstract":"The proliferation of Internet of Things (IoT) devices and services and their integration in Ambient Intelligence (AmI) Environments revealed a new range of roles that TVs are expected to play so as to improve quality of life. This work introduces AmITV, an integrated multimodal system that permits end-users to use the TV not only as a traditional entertainment center, but also as (i) a control center for manipulating any intelligent device, (ii) an intervention host that presents appropriate content when they need help or support, (iii) an intelligent agent that communicates with the users in a natural manner and assists them throughout their daily activities, (iv) a notification medium that informs them about interesting or urgent events, and (v) a communication hub that permits them to exchange messages in real-time or asynchronously. This paper presents two motivational scenarios inspired from Home and Hotel Intelligent Environments and the infrastructure behind AmITV. 
Additionally, it describes how it realizes the newly emerged roles of TVs as a multimodal, intelligent and versatile interaction hub with the ambient facilities of the entire technologically-augmented environment.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126157457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Evaluation of Unobtrusive Methods for Heart-Rate Measurement","authors":"Eirini Mathe, E. Spyrou","doi":"10.1145/3197768.3197784","DOIUrl":"https://doi.org/10.1145/3197768.3197784","url":null,"abstract":"We present a user evaluation of 3 unobtrusive methods for heart-rate measurement. More specifically, we implement a state-of-the-art method that uses the web camera of a typical computer, we use a low-cost bracelet with an integrated photoplethysmography sensor and also a freely available Android mobile app which uses the phone's camera and flash. All methods are thoroughly evaluated a) for their accuracy; and b) by real users using a questionnaire.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126886990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Emotion Recognition System for monitoring Shopping Experience","authors":"S. Ceccacci, Andrea Generosi, Luca Giraldi, M. Mengoni","doi":"10.1145/3197768.3201518","DOIUrl":"https://doi.org/10.1145/3197768.3201518","url":null,"abstract":"The present work introduces an emotional tracking system to monitor Shopping Experience at different touchpoints in a store, based on the elaboration of the information extracted from biometric data and facial expressions. Preliminary tests suggest that the proposed system can be effectively used in a retail context.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116981492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bicycles and Wheelchairs for Locomotion Control of a Simulated Telerobot Supported by Gaze- and Head-Interaction","authors":"Katsumi Minakata, Martin Thomsen, J. P. Hansen","doi":"10.1145/3197768.3201573","DOIUrl":"https://doi.org/10.1145/3197768.3201573","url":null,"abstract":"We present an interface for control of a telerobot that supports field-of-view panning, mode selections and keyboard typing by head- and gaze-interaction. The utility of the interface was tested by 19 able-bodied participants controlling a virtual telerobot from a wheelchair mounted on rollers which measure its wheel rotations, and by 14 able-bodied participants controlling the telerobot with a exercise bike. Both groups tried the interface twice: with head- and with gaze-interaction. Comparing wheelchair and bike locomotion control, the wheelchair simulator was faster and more manoeuvrable. Comparing gaze- and head-interaction, the two input methods were preferred by an equal number of participants. However, participants made more errors typing with gaze than with head. We conclude that virtual reality is a viable way of specifying and testing interfaces for telerobots and an effective probe for eliciting peoples subjective experiences.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114516180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Human Activities Using Behaviour Trees in Smart Homes","authors":"B. Bouchard, S. Gaboury, K. Bouchard, Yannick Francillette","doi":"10.1145/3197768.3201522","DOIUrl":"https://doi.org/10.1145/3197768.3201522","url":null,"abstract":"With the aging population, researchers around the world are investigating technological solutions to help seniors stay at home as long as possible. One of them is the concept of smart home, which is an intelligent house equipped with sensors and actuators. Aging people often suffers from physical and cognitive impairments, which limit their abilities to perform their Activities of Daily Living (ADL). Therefore, the smart home needs to be able to assist its resident in carrying out their ADL, when it is required. Recognising the ongoing ADL constitutes then a key challenge of the assistive services. Being able to simulate users' behaviour is also an important issue, as well as being able to find an assistive step-by-step solution when something goes wrong. However, all theses challenges need to rely on a knowledge base of activities' models. In the past, many researchers tried to make use of some logical encoding of the activities by exploiting, for instance, first order logic. These approaches work fine for the inferential process but they are very rigid, complex and time consuming. More recently, scientists in the field tried to represent the activities using stochastic models, such as Bayesian Networks or Markov Model. These probabilistic methods do not represent activities very naturally and are very static state-transition models. In this paper, we propose the use of Behaviour Trees (BT) as a means to represent the user's ADL in a smart home. BTs are mainly used in the video game industry as a powerful tool to model the behaviour of non-player characters. BTs allow the modelling of activities with a flexible, well-defined approach. 
We will present a first exploitation of the behaviour trees in a smart home simulator.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124235793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
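The behaviour-tree idea in the abstract above can be illustrated with a minimal sketch. This is not the authors' implementation; the node types (Sequence, Action) are the standard BT composites, and the "prepare coffee" activity and its sensor names are hypothetical:

```python
# Minimal behaviour-tree sketch of an ADL (hypothetical activity/sensor names).
class Action:
    """Leaf node: succeeds once its sensed condition holds, else keeps running."""
    def __init__(self, name, done):
        self.name, self.done = name, done
    def tick(self):
        return "SUCCESS" if self.done() else "RUNNING"

class Sequence:
    """Composite node: ticks children in order; stops at the first
    child that is not yet successful and reports its status."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != "SUCCESS":
                return status
        return "SUCCESS"

# A simulated smart-home sensor state and an ADL modelled as a sequence.
sensors = {"kettle_on": True, "cup_taken": False}
make_coffee = Sequence(
    Action("boil water", lambda: sensors["kettle_on"]),
    Action("take cup", lambda: sensors["cup_taken"]),
)
print(make_coffee.tick())  # the activity is still in progress: RUNNING
```

Ticking the tree repeatedly as sensor events arrive yields the ongoing-activity status, which is what makes the representation convenient for both recognition and simulation.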
{"title":"Exploiting the Open Innovation Model in Assistive Technologies","authors":"B. Bouchard, S. Gaboury, K. Bouchard","doi":"10.1145/3197768.3201519","DOIUrl":"https://doi.org/10.1145/3197768.3201519","url":null,"abstract":"The recent marriage of the fields of artificial intelligence, sensors' technology, big data and health sciences allowed the emergence of a new exciting research area called health assistive technology. With the aging population around the world and the diminishing resources for healthcare, this field is growing in importance. In United States, assistive technology now represents a market of about 60 billion dollars. In the last decade, many academic research teams worldwide addressed the issue and produced great scientific contributions to the field. However, despite the huge potential of the market, many of the research done by academic teams seems to fail to produce real useful assistive devices available for customers. This observation leads to the fundamental question of this paper: how can we improve the impact and the exploitation of research's output? In this paper, we will try to investigate this problem and propose some guidelines based on our experience. 
More specifically, we propose the exploitation of an adapted version of the Open Innovation Model, developed at the Harvard School of Business, as a potential avenue of solution to help addressing this important issue.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"322 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132336686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Hierarchical Approach in Food and Drink Intake Recognition Using Wearable Inertial Sensors","authors":"Dario Ortega Anderez, Ahmad Lotfi, C. Langensiepen","doi":"10.1145/3197768.3201542","DOIUrl":"https://doi.org/10.1145/3197768.3201542","url":null,"abstract":"Despite the increasing attention given to inertial sensors for Human Activity Recognition (HAR), efforts are principally focused on fitness applications where quasi-periodic activities like walking or running are studied. In contrast, activities like eating or drinking cannot be considered periodic or quasi-periodic. Instead, they are composed of sporadic occurring gestures in continuous data streams. This paper presents an approach to gesture recognition for an Ambient Assisted Living (AAL) environment. Specifically, food and drink intake gestures are studied. To do so, firstly, waist-worn tri-axial accelerometer data is used to develop a low computational model to recognize whether a person is at moving, sitting or standing estate. With this information, data from a wrist-worn tri-axial Micro-Electro-Mechanical (MEM) system was used to recognize a set of similar eating and drinking gestures. The promising preliminary results show that states can be recognized with 100% classification accuracy with the use of a low computational model on a reduced 4-dimensional feature vector. Additionally, the recognition rate achieved for eating and drinking gestures was above 99%. Altogether suggests that it is possible to develop a continuous monitoring system based on a bi-nodal inertial unit. 
This work is part of a bigger project that aims at developing a self-neglect detection continuous monitoring system for older adults living independently.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134621322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
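The hierarchical structure described in this abstract can be sketched as a two-stage pipeline: a cheap coarse-state classifier gates a costlier gesture classifier. The toy thresholds and function names below are illustrative assumptions, not the paper's actual models or features:

```python
# Hypothetical two-stage sketch of hierarchical intake recognition.
def classify_state(waist_features):
    """Stage 1: coarse body state from a small waist-accelerometer
    feature vector (a toy energy threshold stands in for the real model)."""
    energy = sum(f * f for f in waist_features)
    return "moving" if energy > 1.0 else "sitting/standing"

def classify_gesture(wrist_window):
    """Stage 2: gesture label from a wrist inertial-data window
    (a toy peak threshold stands in for the real classifier)."""
    return "intake" if max(wrist_window) > 0.8 else "other"

def recognise(waist_features, wrist_window):
    state = classify_state(waist_features)
    if state == "moving":
        return state, None  # skip the gesture stage while the user is moving
    return state, classify_gesture(wrist_window)
```

The gating is the point of the hierarchy: the expensive wrist-gesture model only runs in states where intake gestures can plausibly occur, which keeps the continuous monitoring cheap.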
{"title":"A Framework for Programming a Swarm of UAVs","authors":"Dimitris Dedousis, V. Kalogeraki","doi":"10.1145/3197768.3197772","DOIUrl":"https://doi.org/10.1145/3197768.3197772","url":null,"abstract":"In recent years, sensing systems in urban environments are being replaced by Unmanned Aerial Vehicles (UAVs). UAVs, also known as drones, have shown great potential in executing different kinds of sensing missions, such as search and rescue, object tracking, inspection, etc. The UAVs' sensing capabilities and their agile mobility can replace existing complex solutions for such missions. However, coordinating a swarm of drones for mission accomplishment is not a trivial task. Existing works in the literature focus solely on managing the swarm and do not provide options for automating entire missions. In this paper, we present PaROS (PROgramming Swarm), a novel framework for programming a swarm of UAVs. PaROS provides a set of programming primitives for orchestrating a swarm of drones along with automating certain types of missions. These primitives, referred as abstract swarms, control every drone in the swarm, hiding the complexity of low level details from a programmer such as assigning flight plans, task partitioning, failure recovery and area division. 
Our experimental evaluation proves that our approach is stable, time-efficient and practical.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131324265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
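One of the low-level details the abstract says PaROS hides, area division, can be illustrated with a minimal sketch. This is not the PaROS API; the function name, signature and equal-strip strategy are assumptions made for illustration:

```python
# Hypothetical sketch of an area-division primitive: split a rectangular
# survey area into equal vertical strips, one per drone in the swarm.
def divide_area(x_min, x_max, y_min, y_max, n_drones):
    """Return one (x0, x1, y_min, y_max) strip per drone."""
    width = (x_max - x_min) / n_drones
    return [
        (x_min + i * width, x_min + (i + 1) * width, y_min, y_max)
        for i in range(n_drones)
    ]

# Three drones covering a 90 m x 30 m field, one 30 m strip each;
# each drone would then fly its own lawnmower pattern inside its strip.
strips = divide_area(0.0, 90.0, 0.0, 30.0, 3)
```

An abstract-swarm primitive in the framework's sense would layer flight-plan generation and failure recovery (reassigning a failed drone's strip) on top of a partition like this.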
{"title":"Sign Language technologies in view of Future Internet accessibility services","authors":"E. Efthimiou, Stavroula-Evita Fotinea, Theodore Goulas, A. Vacalopoulou, Kiki Vasilaki, Athanasia-Lida Dimou","doi":"10.1145/3197768.3201546","DOIUrl":"https://doi.org/10.1145/3197768.3201546","url":null,"abstract":"In this paper, we touch upon the requirement for accessibility via Sign Language as regards dynamic composition and exchange of new content in the context of natural language based human interaction, and also accessibility of web services and electronic content in written text by deaf and hard-of-hearing individuals. In this framework, one key issue remains the option for composition of signed \"text\", along with the ability for reuse of pre-existing signed \"text\" by exploiting basic editing facilities similar to those available for written text serving vocal language representation. An equally critical related issue is accessibility of vocal language text by born or early deaf signers, as well as the use of web based facilities via sign language supported interfaces, taking into account that the majority of native signers present limited reading skills. It is, thus, demonstrated how sign language technologies and resources may be integrated in human-centered applications enabling web services and content accessibility in the education and everyday communication context, in order to facilitate integration of signer populations in a societal environment that is strongly defined by smart life style conditions. 
This potential is also demonstrated by end user-evaluation results.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128086909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social Telepresence Robots: The role of gesture for collaboration over a distance","authors":"Christoph Stahl, D. Anastasiou, T. Latour","doi":"10.1145/3197768.3203180","DOIUrl":"https://doi.org/10.1145/3197768.3203180","url":null,"abstract":"In this position paper, we refer to the concept of telepresence and give an overview about current solutions that provide a sense of being in a different place. We focus on Mobile Robotic Telepresence and summarize arguments from literature regarding the importance of formal and informal gesture and body language for communication and collaboration. We argue that humanoid telepresence robots with a capability to express gestures could play an important role in teleworking and collaboration over a distance, which will gain importance in reducing the need for mobility and traffic. Based on current work in our research group on collaborative problem solving on tangible user interfaces, we sketch a scenario where avatar robots represent remote team members and mirror their actions, gaze and gestures. 
Our aim is to foster a discussion about technical and social questions related to the acceptance of avatar robots at work: which properties should such a robot have; to what extend is the current state of the art in social robotics applicable, and which additional components need to be developed in the future.","PeriodicalId":130190,"journal":{"name":"Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125001691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}