{"title":"Effects of the Size of Mixed-Reality Person Representations on Stress and Presence in Telecommunication","authors":"M. Joachimczak, Juan Liu, H. Ando","doi":"10.1142/s1793351x19400130","DOIUrl":"https://doi.org/10.1142/s1793351x19400130","url":null,"abstract":"We study how mixed reality (MR) telepresence can enhance long-distance human interaction and how altering 3D representations of a remote person can be used to modulate stress and anxiety during social interactions. To do so, we developed an MR telepresence system employing commodity depth sensors and Microsoft’s Hololens. A textured, polygonal 3D model of a person was reconstructed in real time and transmitted over network for rendering in remote location using HoloLens. In this study, we used mock job interview paradigm to induce stress in human–subjects interacting with an interviewer presented as an MR hologram. Participants were exposed to three different types of real-time reconstructed virtual holograms of the interviewer, a natural-sized 3D reconstruction (NR), a miniature 3D reconstruction (SR) and a 2D-display representation (LCD). Participants reported their subjective experience through questionnaires, while their biophysical responses were recorded. We found that the size of 3D representation of a remote interviewer had a significant effect on participants’ stress levels and their sense of presence. The questionnaire data showed that NR condition induced more stress and presence than SR condition and was significantly different from LCD condition. We also found consistent patterns in the biophysical data.","PeriodicalId":217956,"journal":{"name":"Int. J. 
Semantic Comput.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129839830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segmentation-driven Hierarchical RetinaNet for Detecting Protozoa in Micrograph","authors":"Khoa Pho, Muhamad Kamal Mohammed Amin, A. Yoshitaka","doi":"10.1142/s1793351x19400178","DOIUrl":"https://doi.org/10.1142/s1793351x19400178","url":null,"abstract":"Protozoa detection and identification play important roles in many practical domains such as parasitology, scientific research, biological treatment processes, and environmental quality evaluation. Traditional laboratory methods for protozoan identification are time-consuming and require expert knowledge and expensive equipment. Another approach is using micrographs to identify the species of protozoans that can save a lot of time and reduce the cost. However, the existing methods in this approach only identify the species when the protozoan are already segmented. These methods study features of shapes and sizes. In this work, we detect and identify the images of cysts and oocysts of various species such as: Giardia lamblia, Iodamoeba butschilii, Toxoplasma gondi, Cyclospora cayetanensis, Balantidium coli, Sarcocystis, Cystoisospora belli and Acanthamoeba, which have round shapes in common and affect human and animal health seriously. We propose Segmentation-driven Hierarchical RetinaNet to automatically detect, segment, and identify protozoans in their micrographs. By applying multiple techniques such as transfer learning, and data augmentation techniques, and dividing training samples into life-cycle stages of protozoans, we successfully overcome the lack of data issue in applying deep learning for this problem. Even though there are at most 5 samples per life-cycle category in the training data, our proposed method still achieves promising results and outperforms the original RetinaNet on our protozoa dataset.","PeriodicalId":217956,"journal":{"name":"Int. J. 
Semantic Comput.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123529006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learners' Technological Acceptance of VR Content Development: A Sequential 3-Part Use Case Study of Diverse Post-Secondary Students","authors":"V. Nguyen, R. Hite, Tommy Dang","doi":"10.1142/s1793351x19400154","DOIUrl":"https://doi.org/10.1142/s1793351x19400154","url":null,"abstract":"Web-based virtual reality (VR) development tools are in ubiquitous use by software developers, and now, university (undergraduate) students, to move beyond using, to creating new and energizing VR content. Web-based VR (WebVR), among other libraries and frameworks, have risen as a low-cost platform for users to create rich and intuitive VR content and applications. However, the success of WebVR as an instructional tool relies on post-secondary students technological acceptance (TA), the intersectionality of a user’s perceived utility (PU) and perceived ease of use (PEOU, or convenience) with said technological tool. Yet, there is a dearth of exploratory studies of students’ experiences with the AR/VR development technologies to infer their TA. To ascertain the viability of WebVR tools for software engineering undergraduates in the classroom, this paper presents a 3-case contextual investigation of 38 undergraduate students tasked with creating VR content. In each use case, students were provided increasing freedom in their VR content development parameters. Results indicated that students demonstrated elements of technological acceptance in their selection of webVR and other platforms, and not only successfully creating rich and robust VR content (PU), but also executing these projects in a short period (PEOU). Other positive externalities observed were students exhibitions of soft skills (e.g. creativity, critical thinking) and different modes of demonstrating coding knowledge, which suggest further study. Discussed are the lessons learned from the WebVR and VR/AR interventions and recommendations for WebVR instruction. 
This work may be helpful for both learners and teachers using VR/AR in selecting, designing, and developing coursework materials, tools, and libraries.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114760418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ontology Summarization: Graph-Based Methods and Beyond","authors":"Seyedamin Pouriyeh, M. Allahyari, Qingxia Liu, Gong Cheng, H. Arabnia, M. Atzori, F. Mohammadi, K. Kochut","doi":"10.1142/S1793351X19300012","DOIUrl":"https://doi.org/10.1142/S1793351X19300012","url":null,"abstract":"Ontologies have been widely used in numerous and varied applications, e.g. to support data modeling, information integration, and knowledge management. With the increasing size of ontologies, ontology understanding, which is playing an important role in different tasks, is becoming more difficult. Consequently, ontology summarization, as a way to distill key information from an ontology and generate an abridged version to facilitate a better understanding, is getting growing attention. In this survey paper we review existing ontology summarization techniques and focus mainly on graph-based methods, which represent an ontology as a graph and apply centrality-based and other measures to identify the most important elements of an ontology as its summary. After analyzing their strengths and weaknesses, we highlight a few potential directions for future research.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133611127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ASIC STA Path Verification Using Semi-Supervised Learning","authors":"James Obert, T. Mannos","doi":"10.1142/S1793351X19400105","DOIUrl":"https://doi.org/10.1142/S1793351X19400105","url":null,"abstract":"To counter manufacturing irregularities and ensure ASIC design integrity, it is essential that robust design verification methods are employed. It is possible to ensure such integrity using ASIC static timing analysis (STA) and machine learning. In this research, uniquely devised machine and statistical learning methods which quantify anomalous variations in Register Transfer Level (RTL) or Graphic Design System II (GDSII) format are discussed. To measure the variations in ASIC analysis data, the timing delays in relation to path electrical characteristics are explored. It is shown that semi-supervised learning techniques are powerful tools in characterizing variations within STA path data and have much potential for identifying anomalies in ASIC RTL and GDSII design data.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128428226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gait Pattern Recognition Using a Smartwatch Assisting Postoperative Physiotherapy","authors":"Athanasios I. Kyritsis, G. Willems, Michel Deriaz, D. Konstantas","doi":"10.1142/S1793351X19400117","DOIUrl":"https://doi.org/10.1142/S1793351X19400117","url":null,"abstract":"Postoperative rehabilitation is led by physiotherapists and is a vital program that re-establishes joint motion and strengthens the muscles around the joint after an orthopedic surgery. Modern smart devices have affected every aspect of human life. Newly developed technologies have disrupted the way various industries operate, including the healthcare one. Extensive research has been carried out on how smartphone inertial sensors can be used for activity recognition. However, there are very few studies on systems that monitor patients and detect different gait patterns in order to assist the work of physiotherapists during the said rehabilitation phase, even outside the time-limited physiotherapy sessions. In this paper, we are presenting a gait recognition system that was developed to detect different gait patterns. The proposed system was trained, tested and validated with data of people who have undergone lower body orthopedic surgery, recorded by Hirslanden Clinique La Colline, an orthopedic clinic in Geneva, Switzerland. Nine different gait classes were labeled by professional physiotherapists. After extracting both time and frequency domain features from the time series data, several machine learning models were tested including a fully connected neural network. Raw time series data were also fed into a convolutional neural network.","PeriodicalId":217956,"journal":{"name":"Int. J. 
Semantic Comput.","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125531163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of a Model-driven Knowledge Storage and Retrieval IDE for Interactive HRI Systems","authors":"N. Köster, S. Wrede, P. Cimiano","doi":"10.1142/S1793351X19400099","DOIUrl":"https://doi.org/10.1142/S1793351X19400099","url":null,"abstract":"Efficient storage and querying of long-term human–robot interaction data requires application developers to have an in-depth understanding of the involved domains. Creating syntactically and semantically correct queries in the development process is an error prone task which can immensely impact the interaction experience of humans with robots and artificial agents. To address this issue, we present and evaluate a model-driven software development approach to create a long-term storage system to be used in highly interactive HRI scenarios. We created multiple domain-specific languages that allow us to model the domain and seamlessly embed its concepts into a query language. Along with corresponding model-to-model and model-to-text transformations, we generate a fully integrated workbench facilitating data storage and retrieval. It supports developers in the query design process and allows in-tool query execution without the need to have prior in-depth knowledge of the domain. We evaluated our work in an extensive user study and can show that the generated tool yields multiple advantages compared to the usual query design approach.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"55 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114060206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous Object Pick-and-Sort Procedure for Industrial Robotics Application","authors":"Lianjun Li, Yizhe Zhang, M. Ripperger, J. Nicho, M. Veeraraghavan, A. Fumagalli","doi":"10.1142/S1793351X19400075","DOIUrl":"https://doi.org/10.1142/S1793351X19400075","url":null,"abstract":"This paper describes an industrial robotics application, named Gilbreth, for autonomously picking up objects of different types from a moving conveyor belt and sorting the objects into bins according to their type. The environment, which consists of a moving conveyor belt, a break beam sensor, a 3D camera Kinect sensor, a UR10 industrial robot arm with a vacuum gripper, and different object types such as pulleys, disks, gears, and piston rods, is inspired by the NIST ARIAC competition. A first version of the Gilbreth application is implemented leveraging a number of Robot Operating System (ROS) and ROS-Industrial (ROS-I) packages. The Gazebo package is used to simulate the environment, and six external ROS nodes have been implemented to execute the required functions. Experimental measurements of CPU usage and processing times of the ROS nodes are discussed. In particular, the object recognition ROS package requires the highest processing times and offers an opportunity for designing an iterative method with the aim to fasten completion time. Its processing time is found to be on par with the time required by the robot arm to execute its movement between four poses: pick approach, pick, pick retreat and place.","PeriodicalId":217956,"journal":{"name":"Int. J. 
Semantic Comput.","volume":"476 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134438821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation of Subjective Closed Captioning Quality Assessment Using Prediction Models","authors":"Somang Nam, D. Fels","doi":"10.1142/S1793351X19400038","DOIUrl":"https://doi.org/10.1142/S1793351X19400038","url":null,"abstract":"As a primary user group, Deaf or Hard of Hearing (D/HOH) audiences use Closed Captioning (CC) service to enjoy the TV programs with audio by reading text. However, the D/HOH communities are not completely satisfied with the quality of CC even though the government regulators entail certain rules in the CC quality factors. The measure of the CC quality is often interpreted as an accuracy on translation and regulators use the empirical models to assess. The need of a subjective quality scale comes from the gap in between current empirical assessment models and the audience perceived quality. It is possible to fill the gap by including the subjective assessment by D/HOH audiences. This research proposes a design of an automatic quality assessment system for CC which can predict the D/HOH audience subjective ratings. A simulated rater is implemented based on literature and the CC quality factor representative value extraction algorithm is developed. Three prediction models are trained with a set of CC quality values and corresponding rating scores, then they are compared to find the feasible prediction model.","PeriodicalId":217956,"journal":{"name":"Int. J. Semantic Comput.","volume":"53 56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129862852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge Extraction of Adaptive Structural Learning of Deep Belief Network for Medical Examination Data","authors":"Shin Kamada, T. Ichimura, T. Harada","doi":"10.1142/S1793351X1940004X","DOIUrl":"https://doi.org/10.1142/S1793351X1940004X","url":null,"abstract":"Deep learning has a hierarchical network structure to represent multiple features of input data. The adaptive structural learning method of Deep Belief Network (DBN) can reach the high classification capability while searching the optimal network structure during the training. The method can find the optimal number of hidden neurons for given input data in a Restricted Boltzmann Machine (RBM) by neuron generation–annihilation algorithm, and generate a new hidden layer in DBN by the extension of the algorithm. In this paper, the proposed adaptive structural learning of DBN (Adaptive DBN) was applied to the comprehensive medical examination data for cancer prediction. The developed prediction system showed higher classification accuracy for test data (99.5% for the lung cancer and 94.3% for the stomach cancer) than the several learning methods such as traditional RBM, DBN, Non-Linear Support Vector Machine (SVM), and Convolutional Neural Network (CNN). Moreover, the explicit knowledge that makes the inference process of the trained DBN is required in deep learning. The binary patterns of activated neurons for given input in RBM and the hierarchical structure of DBN can represent the relation between input and output signals. These binary patterns were classified by C4.5 for knowledge extraction. Although the extracted knowledge showed slightly lower classification accuracy than the trained DBN network, it was able to improve inference speed by about 1/40. We report that the extracted IF-THEN rules from the trained DBN for medical examination data showed some interesting features related to initial condition of cancer.","PeriodicalId":217956,"journal":{"name":"Int. J. 
Semantic Comput.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130074628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}