{"title":"Quantifying the Impact of the Physical Setup of Stereo Camera Systems on Distance Estimations","authors":"A. J. Golkowski, M. Handte, Peter Roch, P. Marrón","doi":"10.1109/IRC.2020.00041","DOIUrl":"https://doi.org/10.1109/IRC.2020.00041","url":null,"abstract":"The ability to perceive the environment accurately is a core requirement for autonomous navigation. In the past, researchers and practitioners have explored a broad spectrum of sensors that can be used to detect obstacles or to recognize navigation targets. Due to their low hardware cost and high fidelity, stereo camera systems are often considered to be a particularly versatile sensing technology. Consequently, there has been a lot of work on integrating them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close the gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. Thereby, we vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras as well as the field of view. Based on the results, we derive several guidelines on how to choose the parameters for an application.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128926880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
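The camera parameters studied in the record above (baseline, angle, field of view) all enter the standard triangulation relation for a rectified parallel stereo pair, where depth is inversely proportional to disparity. A minimal illustrative sketch of that relation; the focal length, baseline, and disparity values below are assumptions for the example, not measurements from the paper:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point seen by a rectified parallel stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 12 cm baseline, 20 px disparity.
depth_m = stereo_depth(700.0, 0.12, 20.0)  # 4.2 m
```

The formula makes the paper's trade-off visible: widening the baseline B increases disparity for a given depth and so improves resolution, at the cost of a smaller overlapping field of view.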
{"title":"Head-Orientation-Prediction Based on Deep Learning on sEMG for Low-Latency Virtual Reality Application","authors":"T. Sugiarto, Chun-Lung Hsu, Chi-Tien Sun, Shu-Hao Ye, Kuan-Ting Lu, W. Hsu","doi":"10.1109/IRC.2020.00036","DOIUrl":"https://doi.org/10.1109/IRC.2020.00036","url":null,"abstract":"Reducing end-to-end latency in virtual reality systems is important since it can remove several negative effects such as motion sickness, and head-orientation prediction is one solution to achieve this. In this study, signals from surface electromyography (sEMG) were utilized to predict future head orientation with models trained using various deep learning algorithms. A total of 20 subjects participated, with 6 neck muscles recorded for training purposes. The results showed that for both the intra-subject and inter-subject methods, models with pre-processed sEMG signal + IMU input outperformed models with sEMG features + IMU input. The results of the inter-subject testing method in this study extend the opportunity for real-world applications in which the user's data has never been included in the training database.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128755596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Partner Selection for Agents: A Utility Theory Approach","authors":"Hyeonae Jang, E. Matson","doi":"10.1109/IRC.2020.00087","DOIUrl":"https://doi.org/10.1109/IRC.2020.00087","url":null,"abstract":"This paper proposes an approach to modeling matching between agents based on utility theory. The basic idea comes from the stable matching problem, focusing on how an agent makes the final decision when finding a matching agent, based on the utilities that other agents offer as well as the agent's preferences for each utility option. The paper begins with introductions to agent organization, partner selection in Agent-Based Models, and utility theories. The proposed approach covers two aspects: directions and applications. Regarding directions, pre-defined concepts such as utility options, perceptual and actual utility, and deal-breakers are introduced to make the applications easier to understand. Two possible scenarios, of basic and high complexity, are handled as applications of partner matching between agents based on utilities. The proposed approach has three merits: (1) both the agent's benefit and its potential partner agent's benefit are considered for mate selection; (2) by employing the Congregation Organizational Paradigm, an agent's partner selection approach can be applied in dynamic environments by seeking its optimal partner in another networking cluster; and (3) by employing thresholds and deal-breakers, a rational agent can avoid making a decision that lowers its level of happiness or benefits.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116774535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
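The utility-based selection with thresholds and deal-breakers described in the record above can be sketched as a toy decision rule. This is an illustrative reading of the abstract's concepts, not the paper's formal model; all names, weights, and values below are assumptions:

```python
def select_partner(candidates, weights, threshold, deal_breakers):
    """Pick the candidate agent with the highest weighted utility,
    rejecting any candidate that violates a deal-breaker floor or whose
    total score falls below the acceptance threshold.
    Toy sketch of utility-based partner selection; not the paper's model."""
    best, best_score = None, float("-inf")
    for name, utilities in candidates.items():
        if any(utilities.get(option, 0.0) < floor
               for option, floor in deal_breakers.items()):
            continue  # deal-breaker violated: reject outright
        score = sum(weights.get(option, 0.0) * value
                    for option, value in utilities.items())
        if score >= threshold and score > best_score:
            best, best_score = name, score
    return best

# Agent "a" fails the trust deal-breaker; agent "b" is selected.
chosen = select_partner(
    candidates={"a": {"speed": 0.9, "trust": 0.2},
                "b": {"speed": 0.5, "trust": 0.8}},
    weights={"speed": 1.0, "trust": 1.0},
    threshold=0.5,
    deal_breakers={"trust": 0.3},
)  # → "b"
```

Returning None when every candidate is filtered out mirrors the abstract's third merit: a rational agent can decline to match rather than accept a partner that lowers its benefit.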
{"title":"A Vision-based Dual-axis Positioning System with YOLOv4 and Improved Genetic Algorithms","authors":"Shan-Ling Chen, Shang-Chih lin, Yennun Huang, Chia-Wei Jen, Zheng-Long Lin, S. Su","doi":"10.1109/IRC.2020.00027","DOIUrl":"https://doi.org/10.1109/IRC.2020.00027","url":null,"abstract":"This research aims to construct a vision-based dual-axis positioning system and make quality inspection more automated. The main tasks of the system are object detection and path planning. First, the latest object detection algorithms are applied to metal object detection and to the definition of feasible inspection regions. Since any object may have multiple feasible inspection regions, the center point of each feasible inspection region becomes a target of the path planning algorithm. The experimental results show that You Only Look Once (YOLO) and Improved Genetic Algorithms (IGA) realize object detection and path planning, respectively, with the best performance. The former is not affected by the angle and distance of the object; the latter uses optimization methods to reduce computational costs and achieve optimal path planning. In future research, we will expand the scope to let the algorithm handle more complex situations, such as considering the depth information of metal objects and performing quality inspection based on deep learning. In this way, the results of this article can help upgrade industrial technology and move towards smart manufacturing.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117193479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
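The path-planning step in the record above amounts to ordering the inspection-region center points so the total travel distance is minimized, which a genetic algorithm can search for. A heavily simplified sketch (mutation-only evolution with elitism, not the paper's Improved Genetic Algorithm; all parameter values are illustrative assumptions):

```python
import random

def tour_length(points, order):
    """Total Euclidean length of a closed tour visiting points in `order`."""
    n = len(order)
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return sum(dist(points[order[i]], points[order[(i + 1) % n]])
               for i in range(n))

def evolve_order(points, pop_size=30, generations=300, seed=1):
    """Toy evolutionary search over visit orders: keep the best half each
    generation, derive children by a single swap mutation. A stand-in for
    the paper's IGA, shown only to illustrate the idea."""
    rng = random.Random(seed)
    n = len(points)
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: tour_length(points, o))
        survivors = population[: pop_size // 2]  # elitism
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda o: tour_length(points, o))

# Four inspection centers at the corners of a unit square; the optimal
# closed tour is the perimeter of length 4.0.
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
best = evolve_order(square)
```

A real IGA would add crossover and smarter operators; the point here is only the fitness function (tour length over inspection-region centers) that both share.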
{"title":"Monocular 3D Object Detection Using Feature Map Transformation: Towards Learning Perspective-Invariant Scene Representations","authors":"Enrico Schröder, Mirko Mählisch, Julien Vitay, F. Hamker","doi":"10.1109/IRC.2020.00066","DOIUrl":"https://doi.org/10.1109/IRC.2020.00066","url":null,"abstract":"In this paper we propose to use a feature map transformation network for the task of monocular 3D object detection. Given a monocular camera image, the transformation network encodes features of the scene in an abstract, perspective-invariant latent representation. This latent representation can then be decoded into a bird's-eye view representation to estimate objects' position and rotation in 3D space. In our experiments on the Kitti object detection dataset we show that our model is able to learn to estimate objects' 3D position from a monocular camera image alone without having any explicit geometric model or other prior information on how to perform the transformation. While performing slightly worse than networks which are purpose-built for this task, our approach allows feeding the same bird's-eye view object detection network with input data from different sensor modalities. This can increase redundancy in a safety-critical environment. We present additional experiments to gain insight into the properties of the learned perspective-invariant abstract scene representation.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126900798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cluster-based scan registration for vehicle localization in urban environments","authors":"Javier Guevara, F. A. Cheeín","doi":"10.1109/IRC.2020.00021","DOIUrl":"https://doi.org/10.1109/IRC.2020.00021","url":null,"abstract":"Scan registration can estimate the pose of a vehicle based on information acquired by range sensors. These techniques can obtain optimal results when applied in indoor environments. Nevertheless, their performance decreases in unstructured environments because of the vast range of operating conditions. This work provides a computational approach to improve the results of the well-known iterative closest point (ICP) approach and its variants in an urban scenario. The proposed method describes a pre-processing stage in which the point cloud information is divided into several groups. Then, the rigid transformation matrix associated with vehicle motion is obtained by minimizing the sum of squared registration errors among the most significant groups. This methodology was validated using the Ford and Kitti datasets. The results showed that the proposal performed better in the long term for the point-to-point version in comparison with the original implementation, while with the point-to-plane version it obtained results similar to the original implementation. Nevertheless, the consistency analysis of the Z-axis showed better performance for the cluster-based proposal in all the point-to-plane implementations. These outcomes suggest that the proposed approach could improve the performance of localization techniques in urban scenarios based on separable groups of data.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"2016 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121487769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
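At the core of ICP and its variants, as discussed in the record above, is the step that recovers the rigid transformation minimizing the sum of squared registration errors between matched point sets. A minimal self-contained sketch of that inner step (the standard Kabsch/SVD solution, shown for background; it is not the paper's clustering pipeline):

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid alignment (Kabsch/SVD) of src onto dst, both (N, 3).
    Returns rotation R and translation t minimizing sum ||R @ s + t - d||^2."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Demo: recover a known 0.3 rad yaw rotation and a fixed translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
yaw = 0.3
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([1.0, 2.0, 3.0])
R_est, t_est = rigid_transform(src, src @ R_true.T + t_true)
```

The cluster-based proposal in the record changes what is fed into this step (errors summed over the most significant groups of points rather than all correspondences), while the closed-form solve itself stays the same.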
{"title":"Runtime reconfiguration of robot control systems: a ROS-based case study","authors":"D. Brugali","doi":"10.1109/IRC.2020.00047","DOIUrl":"https://doi.org/10.1109/IRC.2020.00047","url":null,"abstract":"Autonomous robots are complex examples of context-aware self-adaptive systems, a type of software systems that exploit knowledge about the environment to trigger runtime adaptation. This paper analyses architectural issues related to runtime reconfiguration of robot control systems, presents an architectural model of a reconfigurable navigation system for service robots, and exemplifies its implementation in ROS.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132771252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data system model for easy human-machine interactions over communication interfaces","authors":"A. Zielonka, A. Sikora, M. Woźniak","doi":"10.1109/IRC.2020.00096","DOIUrl":"https://doi.org/10.1109/IRC.2020.00096","url":null,"abstract":"Advances in technology provide new possibilities for communication and data management in IoT systems. We are able to collect and analyze various kinds of information from our devices. However, to increase the efficiency of such systems, a humanized model of interactions can help. In this paper we present a model of a data management system for web and mobile apps. The system is designed to support a human-like communication model through operations on the data sets. The proposed model is tested on data visualization for sensor readings from smart interfaces.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132281563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Context Aware Feature Interaction based Recommendation System","authors":"Pengcheng Ma, Qian Gao, Jun Fan","doi":"10.1109/IRC.2020.00092","DOIUrl":"https://doi.org/10.1109/IRC.2020.00092","url":null,"abstract":"Additional context information provides strong support for all kinds of recommendation systems, so context-aware recommendation systems have received wide attention in recent years. The existing mainstream context-aware recommendation models adopt neural network methods, which combine the user's long-term and short-term preferences with the input query to carry out personalized product recommendation. Based on the matrix factorization model, this paper proposes a new interaction-mode network model, which consists of three modules: a context-user/item interaction module, an attention mechanism module, and an overall context environment module. In this model, we use a bilinear function to establish the interaction between context and user/item, and add an attention mechanism to distinguish the importance of different context information. Finally, we add a user score bias and an item score bias, both of which vary with the context environment, to the traditional matrix factorization method. Combining the above methods, we set up a matrix factorization recommendation model based on context-aware feature interaction, named the “feature interactive network model” (FINM). Experiments on data sets show that the algorithm proposed in this paper is superior to general recommendation algorithms.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"227 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134322996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
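The record above builds on the traditional biased matrix-factorization prediction, which can be stated compactly. A background sketch of that baseline only; in the paper's FINM the two bias terms additionally depend on the context environment, and all numbers below are illustrative assumptions:

```python
import numpy as np

def predict_rating(mu, b_user, b_item, p_user, q_item):
    """Classic biased matrix-factorization prediction:
    r_hat = mu + b_u + b_i + p_u . q_i  (global mean, user bias, item bias,
    plus the dot product of the latent user and item factor vectors).
    In FINM the biases vary with context; here they are plain scalars."""
    return mu + b_user + b_item + float(np.dot(p_user, q_item))

r_hat = predict_rating(3.5, 0.2, -0.1,
                       np.array([0.5, 1.0]), np.array([1.0, 0.2]))  # 4.3
```

The paper's contribution sits on top of this formula: the interaction and attention modules reshape the factors, and the biases become context-dependent.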
{"title":"Human Emotional State Estimation Evaluation using Heart Rate Variability and Activity Data","authors":"F. Y. Setiono, A. Elibol, N. Chong","doi":"10.1109/IRC.2020.00035","DOIUrl":"https://doi.org/10.1109/IRC.2020.00035","url":null,"abstract":"Human-Robot Interaction (HRI) is one of the most rapidly emerging fields in robotic applications. One direction of improvement in the HRI field is adding the capability of emotional understanding, a fundamental part of human-human interaction. Human emotion understanding has recently been studied through the well-known Heart Rate Variability (HRV) analysis. In this paper, two different classification methods are proposed to find the relations between activity, heart rate, and emotional states. In the first method, two individual k-Nearest Neighbor (kNN)-based classifiers are applied to the datasets of pre-processed accelerometer data and HRV data, both aiming to estimate the user's emotion and activity at the same time. In the second method, the features of the frequency-domain HRV data and the user's activity data are combined into a new dataset, and two different classifiers, Multilayer Perceptron (MLP) and Support Vector Machines (SVM), are used in the experimental evaluations. Performance comparisons are presented to show the efficiency. Results from both methods are analyzed and reported in this paper.","PeriodicalId":232817,"journal":{"name":"2020 Fourth IEEE International Conference on Robotic Computing (IRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129050319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
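The kNN classification used in the record's first method can be sketched in a few lines. This is a generic majority-vote kNN on synthetic toy features, not the authors' pipeline or data; the feature meanings and label names are illustrative assumptions:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Majority-vote k-nearest-neighbor classification under Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Illustrative features: [mean heart rate (bpm), activity magnitude];
# labels 0 = calm, 1 = stressed (assumed for this toy example).
X = np.array([[60.0, 0.1], [62.0, 0.2], [95.0, 1.1], [98.0, 1.3]])
y = np.array([0, 0, 1, 1])
label = knn_predict(X, y, np.array([90.0, 1.0]))  # → 1
```

In practice the features would need scaling to a common range before the distance computation, since heart rate in bpm would otherwise dominate the activity magnitude.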