{"title":"Digits Recognition with Quadrant Photodiode and Convolutional Neural Network","authors":"Kamil Janczyk, Krzysztof Czuszyński, J. Rumiński","doi":"10.1109/HSI.2018.8431246","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431246","url":null,"abstract":"In this paper we have investigated the capabilities of a quadrant photodiode based gesture sensor in the recognition of digits drawn in the air. The sensor consisting of 4 active elements, 4 LEDs and a pinhole was considered as input interface for both discrete and continuous gestures. Index finger and a round pointer were used as navigating mediums for the sensor. Experiments performed with 5 volunteers allowed to record 300 examples of each digit from 0 to 9, which were drawn in the air. Digits were converted from a list of recorded coordinates into images processed as in the MNIST database. Three approaches for recognition of digits recorded by quadrant photodiode were considered: convolutional neural network trained only on examples from the MNIST database, network trained on mixed data of MNIST with examples recorded using quadrant photodiode (4/1 proportions) and trained on the MNIST with examples recorded using the elaborated sensor but after the arbitral rejection of 20% of worst quality data (4/1 proportions preserved). The application of the third approach in comparison to the first one allowed to increase the overall accuracy of digits classification from 34.4% to 86% for testing data recorded with the use of the pointer and from 32% to 81.2% for data recorded with the use of a finger (for 50Hz sampling frequency).","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123868699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration in Multichannel Emotion Recognition","authors":"Grzegorz Brodny, A. Landowska","doi":"10.1109/HSI.2018.8431343","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431343","url":null,"abstract":"The paper concerns integration of results provided by automatic emotion recognition algorithms. It presents both the challenges and the approaches to solve them. Paper shows experimental results of integration. The paper might be of interest to researchers and practitioners who deal with automatic emotion recognition and use more than one solution or multichannel observation.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124267113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Concept Verification of Antagonistic Pneumatic Driven and Inflatable Arm Joint","authors":"Hirofusa Ogasawara, S. Yokota, A. Matsumoto, D. Chugo, H. Hashimoto","doi":"10.1109/HSI.2018.8431215","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431215","url":null,"abstract":"The purpose of this research is to develop the soft robotic arm by using a pneumatic system. To activate the arm, the joint is the key component. Therefore, this paper focuses on the structure of the joint and angle control. This joint has a helical tube outside to change the stiffness, and two pairs of air bags made of polyethylene inside for configuration antagonistic driven system. The features of the proposed joint are that: totally made with soft material, inflatable structure and can be change the stiffness of the joint by controlling the pressure in the helical tube. By controlling the air pressures in the air bags, the joint angle can be controlled. By the experimental result, it was confirmed that the driving concept of the joint was well functioned, and the joint angle control was also fairlv realized by open loop control.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122958660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design Rules, Implementation and Testing of User Interfaces for Mixed Reality Applications","authors":"Agata Lis-Marciniak, Jan Tomiakowski, Paweł Kapusta","doi":"10.1109/HSI.2018.8431347","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431347","url":null,"abstract":"Mixed Reality is a rapidly growing branch of Computer Science. In this article we will show how important it is to test innovative functionalities early in development phase as it might not be what end-users expect. This will be done based on game prepared for Mixed Reality Platform.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129698148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classifying Cognitive Workload Based on Brain Waves Signal in the Arithmetic Tasks' Study","authors":"M. Plechawska-Wójcik, M. Borys, Mikhail Tokovarov, Monika Kaczorowska, Kinga Wesolowska, Martyna Wawrzyk","doi":"10.1109/HSI.2018.8431105","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431105","url":null,"abstract":"Cognitive workload is a quantitative usage measure of the limited amount of working memory. Its measuring is of great importance for understanding human mental effort processing, evaluating information systems or supporting diagnosis and treatment of patients. The paper presents the results of cognitive workload classification of electroencephalographic (EEG) data. The performed study covered arithmetic tasks realised in several intervals with the increasing difficulty level. Brain waves data in the form of EEG signal were gathered and processed in the form of frequency spectra. The paper discusses the process of features selection performed with several methods including ranking methods (K-Fisher), Feature Selection By Eigenvector Centrality (ECFS) and Mitinffs mutual information-based approach. What is more, the paper presents results of participant cognitive workload classification based on such methods as Support Vector Machines (SVM), boosted trees and k-nearest neighbours (KNN) algorithm. The paper discusses the efficiency of features selection methods and accuracy of applied classification methods.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124055289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ANFIS Post-Processing for Real Time Gait Detection and Classification","authors":"Jakub Dabros, M. Iwaniec, M. Patyk, Xavier Sulkowski, Jacek Wesol","doi":"10.1109/HSI.2018.8431095","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431095","url":null,"abstract":"Gait detection and distinction from other movement patterns like descending the stairs is a crucial task for an exoskeleton supporting user movement. Active or quasi-passive exoskeletons should enhance wearer's limbs only in a manner of not interfering with natural gait patterns. Common solutions for this problem are numerous gait detection algorithms that among other sensors use force sensing resistors. In this paper, we propose using an adaptive neuro-fuzzy inference system (ANFIS) classifier that can be trained on a stationary computer and only evaluated in a real time microprocessor control system. What is more, we propose altering the ANFIS outcome with five post-processing algorithms. Each network and algorithm combination is evaluated, results are compared and the best combined classifier is chosen.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127577577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Study of Driver's Brain Activity and Behavior Using fNIRS During Actual Car Driving","authors":"Kouji Yamamoto, Hideki Takahashi, T. Sugimachi, Kimihiko Nakano, Y. Suda, Toshinori Kato","doi":"10.1109/HSI.2018.8431026","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431026","url":null,"abstract":"In this study, based on the measurement of brain activity and the change of accelerator and brake stroke, we tried to grasp the interaction between the driver's reaction and the driver's following behavior at the time when driver watched Variable Message Sign on actual car driving. Specifically, using fNIRS, we analyzed same driver's brain activity and driving behavior during actual car driving, and then we evaluated the interaction of both items. As a result, we confirmed that parietal association cortex and prefrontal area activated in the case of driving with recognition and judgment for the information which a driver collected from the environment during driving. Then it was suggested that it was needed to expand parietal association cortex in order to measure the brain activity. Furthermore, it was suggested that both on and off accelerator stroke were connected with the activity of prefrontal area. As a result, it was suggested that it was valid to confirm the driver's reaction at every steps, such as “recognition”, “judgment”, “behavior”, by means of being approached from a neuroscience.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125629372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpretable Data-Driven Modeling in Biomass Preprocessing","authors":"Daniel L. Marino, Matthew Anderson, K. Kenney, M. Manic","doi":"10.1109/HSI.2018.8431156","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431156","url":null,"abstract":"Data-driven models provide a powerful and flexible modeling framework for decision making and controls in industry. However, extracting knowledge from these models requires development of easily interpretable visualizations. In this paper, we present a data-driven methodology for modeling and visualization of relative equipment workload in a biomass feedstock preprocessing plant. The methodology is designed to serve in two main fronts: (1) knowledge discovery and data-mining from instrumentation data, (2) improving situational awareness during monitoring and control of the plant. We used Gaussian Processes to create a model of the expected current overload rate of for each of the electric motors involved in the plant. The expected number of overloads on each equipment was used to quantify and visualize the relative workload of the different components of the system. The visualization is presented in the form of an intuitive directed graph, whose properties (node size, position, colors) are driven by overload rates estimations.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"63 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132338099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Convolutional Neural Network Based Femur Stabilization for X-Ray Image Sequences","authors":"Marta Drążkowska, Tomasz Gawron, K. Kozlowski","doi":"10.1109/HSI.2018.8430840","DOIUrl":"https://doi.org/10.1109/HSI.2018.8430840","url":null,"abstract":"Sequence stabilization of medical images is an important aspect of diagnosis, therapy, joint movement kinematic analysis, and cancer detection. Typically, when image frames are recorded, the body is not rigidly fixed as a result of e.g. respiration, thus the position of its segments may vary. Simple image analysis methods (e.g. gradient based, scale-space based) tend to have problems with discerning the key-points in this specific task, due to large diversity of bone structure and highly visible soft tissue. In this paper, we propose a specialized algorithm for stabilization of femur in a sequence of single plane fluoroscopic images. The method estimates the positions of several easily-detectable femur key-points using gradient-based image analysis methods. For other key-points, which are located in the regions of bone with saliency prohibiting effective detection, we use feedforward Convolutional Neural Network as a position estimator. All the key-point positions are used in a stabilization process performed with the ICP (Iterative Closest Point) algorithm. The overall stabilization accuracy is evaluated for two uncorrelated X-ray image sequences, where manual stabilization (i.e., the results for image alignment performed by a human operator without access to key-points) constitutes the ground truth.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134151942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"GymApp: A Real Time Physical Activity Trainner on Wearable Devices","authors":"Paula Viana, Tiago Ferreira, L. Castro, Márcio Soares, José P. Pinto, M. T. Andrade, Pedro Carvalho","doi":"10.1109/HSI.2018.8431358","DOIUrl":"https://doi.org/10.1109/HSI.2018.8431358","url":null,"abstract":"Technological advances are pushing into the mass market innovative wearable devices featuring increasing processing and sensing capacity, non-intrusiveness and ubiquitous use. Sensors built-in those devices, enable acquiring different types of data and by taking advantage of the available processing power, it is possible to run intelligent applications that process the sensed data to offer added-value to the user in multiple domains. Although not new to the modern society, it is unquestionable that the present exercise boom is rapidly spreading across all age groups. However, in a great majority of cases, people perform their physical activity on their own, either due to time or budget constraints and may easily get discouraged if they do not see results or perform exercises inadequately. This paper presents an application, running on a wearable device, aiming at operating as a personal trainer that validates a set of proposed exercises in a sports session. The developed solution uses inertial sensors of an Android Wear smartwatch and, based on a set of pattern recognition algorithms, detects the rate of success in the execution of a planned workout. The fact that all processing can be executed on the device is a differentiator factor to other existing solutions.","PeriodicalId":441117,"journal":{"name":"2018 11th International Conference on Human System Interaction (HSI)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114833000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}