{"title":"GPS Trace Mining for Discovering Behaviour Patterns","authors":"Weijun Qiu, A. Bandara","doi":"10.1109/IE.2015.17","DOIUrl":"https://doi.org/10.1109/IE.2015.17","url":null,"abstract":"There are diverse sensor applications built into different personal devices, which have the ability to record data related to various aspects of the user. With the ever increasing popularity and lowering costs of such personal devices such as Smart Phones, collecting data from the mobile sensors available in these devices becomes feasible. A wealth of information can be gleaned from such data collected from these sensors which reveals various aspects of the individual's behaviour and activity. Existing approaches for analyzing such data mainly focuses on inferring semantic context and detecting associations from such data. For example, GPS enabled devices allow users to record their movements in the form of spatio-temporal stream points, and meaningful information can be extracted based on different research objectives. In this paper, we have investigated a computation framework in order to identify users' activity categories and their event's associations from GPS trajectory data. This framework has several progressive stages and is designed based on different approaches in each stage, which will facilitate to analyse people's everyday lifestyles that are related to outdoor behaviours. Moreover, we have proposed an approach to improve the performance of the semantic annotation process of this framework, by combining different sources of mobile sensor data (i.e. GPS and audio data). The proposed framework and approaches have been validated on actual data sets which include the Microsoft's Geolife data set and a data set collected by ourselves.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122917664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OIoT: A Platform to Manage Opportunistic IoT Communities","authors":"David Andres Lopez Nuevo, D. Royo, Esunly Medina, Roc Meseguer","doi":"10.1109/IE.2015.22","DOIUrl":"https://doi.org/10.1109/IE.2015.22","url":null,"abstract":"Opportunistic Internet of Things (IoT) extends the concept of opportunistic networking combining human users carrying mobile devices and smart things. It explores the relationships between humans and the opportunistic connection of smart objects. This paper presents a software infrastructure, named Opportunistic IoT Platform (OIoT), which helps developers to create and manage opportunistic IoT communities between smart devices. The platform enables the creation of opportunistic IoT communities that support the AllJoyn communications framework, for IoT devices and applications. Results from a preliminary evaluation of the OIoT platform indicate that this infrastructure is useful to manage and share data across opportunistic IoT communities.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123098799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards the Separation of Rigid and Non-rigid Motions for Facial Expression Analysis","authors":"Georg Layher, Stephan Tschechne, R. Niese, A. Al-Hamadi, H. Neumann","doi":"10.1109/IE.2015.38","DOIUrl":"https://doi.org/10.1109/IE.2015.38","url":null,"abstract":"In intelligent environments, computer systems not solely serve as passive input devices waiting for user interaction but actively analyze their environment and adapt their behaviour according to changes in environmental parameters. One essential ability to achieve this goal is to analyze the mood, emotions and dispositions a user experiences while interacting with such intelligent systems. Features allowing to infer such parameters can be extracted from auditive, as well as visual sensory input streams. For the visual feature domain, in particular facial expressions are known to contain rich information about a user's emotional state and can be detected by using either static and/or dynamic image features. During interaction facial expressions are rarely performed in isolation, but most of the time co-occur with movements of the head. Thus, optical flow based facial features are often compromised by additional motions. Parts of the optical flow may be caused by rigid head motions, while other parts reflect deformations resulting from facial expressivity (non-rigid motions). In this work, we propose the first steps towards an optical flow based separation of rigid head motions from non-rigid motions caused by facial expressions. We suggest that after their separation, both, head movements and facial expressions can be used as a basis for the recognition of a user's emotions and dispositions and thus allow a technical system to effectively adapt to the user's state.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133085212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automating the Generation of Privacy Policies for Context-Sharing Applications","authors":"Wolfgang Apolinarski, M. Handte, P. Marrón","doi":"10.1109/IE.2015.18","DOIUrl":"https://doi.org/10.1109/IE.2015.18","url":null,"abstract":"Enabling the automated recognition and sharing of a user's context is a primary motivation for many pervasive computing applications. In the past, a significant amount of research has been focusing on the aspect of effective and efficient recognition. Yet, when context is shared with others, the resulting disclosure of personal information can have undesirable privacy implications. A common solution to this problem is the manual creation of an application-specific privacy policy that defines which information may be shared with whom. However, as the number of applications increases, such a manual approach becomes increasingly cumbersome and over time, it is likely to lead to incomplete or even inconsistent policies. In this paper, we discuss how a privacy policy can be derived automatically by analyzing the user's sharing behaviour when using online collaboration tools. Our approach retrieves shared content and the associated sharing settings, detects context types and automatically derives a privacy policy that reflects the user's past sharing behaviour. To validate our approach, we have implemented it as an extensible software library for the Android platform and we have developed plug-ins for two popular collaboration tools, namely Google Calendar and Facebook.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123152060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SparkXS: Efficient Access Control for Intelligent and Large-Scale Streaming Data Applications","authors":"D. Preuveneers, W. Joosen","doi":"10.1109/IE.2015.21","DOIUrl":"https://doi.org/10.1109/IE.2015.21","url":null,"abstract":"The exponential data growth in intelligent environments fuelled by the Internet of Things is not only a major push behind distributed programming frameworks for big data, it also magnifies security and privacy concerns about unauthorized access to data. The huge diversity and the streaming nature of data raises the demand for new enabling technologies for scalable access control that can deal with the growing velocity, volume and variety of volatile data. This paper presents SparkXS, an attribute-based access control solution with the ability to define access control policies on streaming latent data, i.e. hidden information made explicit through data analytics, such as aggregation, transformation and filtering. Experimental results show that SparkXS can enforce access control in a horizontally scalable way with minimal performance overheads.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133657770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SmartWalker: Towards an Intelligent Robotic Walker for the Elderly","authors":"Jiwon Shin, David Itten, A. Rusakov, B. Meyer","doi":"10.1109/IE.2015.10","DOIUrl":"https://doi.org/10.1109/IE.2015.10","url":null,"abstract":"This paper presents SmartWalker and evaluates the appropriateness and usefulness of the walker and its gesture-based interface for the elderly. As a high-tech extension of a regular walker, the SmartWalker aims to assist its user intelligently and navigate around its environment autonomously. Equipped with sensors and actuators, the prototype accepts gesture commands and navigates around accordingly. The gesture-based interface uses a k-nearest neighbours classifier with dynamic time warping to recognize gestures and the Viola and Jones face detector to locate the user. We evaluated the walker with 23 residents and eight staff members at five different retirement homes in Zurich. The elderly found the SmartWalker useful and exciting, but few were willing to replace their walkers by robotic walkers. Their reluctance may stem from the walker's size and weight and their unfamiliarity with technology.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123291157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of Decision-Making in Artificial Life Model Based on Fuzzy Cognitive Maps","authors":"Tomáš Nacházel","doi":"10.1109/IE.2015.28","DOIUrl":"https://doi.org/10.1109/IE.2015.28","url":null,"abstract":"The paper describes a new approach to the modelling of the individual-based artificial life model based on fuzzy cognitive maps (FCM). The proposed concept focuses on the optimization of artificial intelligence of individuals in multi-agent models and their adaptation to environment. In this process of optimization, emphasis is put on the decision-making method. FCM offers great complexity and learning through evolutionary algorithms. However, too large FCMs suffer from performance issues. Therefore, this paper presents a possibility to replace a decision-making part of large FCM with the analytic hierarchy process (AHP) method, which is widely used, especially for decision support. In comparison with the large FCM model, a combination with AHP provides a model with lower computational demands while keeping nearly the same complexity.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124625773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Detection of Perceived Stress in Campus Students Using Smartphones","authors":"M. Gjoreski, H. Gjoreski, M. Luštrek, M. Gams","doi":"10.1109/IE.2015.27","DOIUrl":"https://doi.org/10.1109/IE.2015.27","url":null,"abstract":"This paper presents an approach to detecting perceived stress in students using data collected with smartphones. The goal is to develop a machine-learning model that can unobtrusively detect the stress level in students using data from several smartphone sources: accelerometers, audio recorder, GPS, Wi-Fi, call log and light sensor. From these, features were constructed describing the students' deviation from usual behaviour. As ground truth, we used the data obtained from stress level questionnaires with three possible stress levels: \"Not stressed\", \"Slightly stressed\" and \"Stressed\". Several machine learning approaches were tested: a general models for all the students, models for cluster of similar students, and student-specific models. Our findings show that the perceived stress is highly subjective and that only person-specific models are substantially better than the baseline.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123271975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an Intelligent Fisheye Camera","authors":"M. Bassford, B. Painter","doi":"10.1109/IE.2015.34","DOIUrl":"https://doi.org/10.1109/IE.2015.34","url":null,"abstract":"Intelligent cameras, or smart cameras as they are often referred to, appear ubiquitously in our everyday lives, both at home and at work. In our pockets, bags, cars and homes we are now experiencing and interacting with a new generation of smart cameras that far surpass the ability to merely capture images - they can provide high-level descriptions of the environment and analyse what they see. Recently, researchers at De Montfort University have developed a low-cost, low-power intelligent camera system that incorporates a 2.8\" touchscreen and a fisheye lens, is capable of capturing visual environment data, performing algorithms to extract metrics and wirelessly transmit them to the cloud for further analysis. Having the advantage of a short focal length and wide field of view, our fisheye camera system can provide information for a multidisciplinary audience including image processing, camera technology and embedded systems, and support a wide variety of applications such as surveillance, home monitoring, motion analysis, facial identification and intelligent transportation systems. This paper describes a first generation prototype camera system that gathers luminance data by capturing High Dynamic Range (HDR) imagery that is comparable to the data acquired with costly and cumbersome research equipment. The paper also explores the requirements for, benefits of, and challenges faced when developing an even smarter intelligent camera system.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126365937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SiAM - Situation-Adaptive Multimodal Interaction for Innovative Mobility Concepts of the Future","authors":"Monika Mitrevska, M. Moniri, Robert Neßelrath, Tim Schwartz, M. Feld, Yannick Körber, Matthieu Deru, Christian A. Müller","doi":"10.1109/IE.2015.39","DOIUrl":"https://doi.org/10.1109/IE.2015.39","url":null,"abstract":"What does situation-adaptive technology mean for car drivers and how can it improve their lives? Why is multimodal interaction in the cockpit a critical ingredient? This contribution summarizes several important technological results of the three-year research project SiAM, which investigated these questions. Motivated by the story of an urban commuter, we illustrate three use cases for situation adaptivity: multimodal control of car functions, cognitive load aware interaction with the environment, and a persuasive intermodal travel assistant.","PeriodicalId":228285,"journal":{"name":"2015 International Conference on Intelligent Environments","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128889661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}