{"title":"Synchronous Dual-Arm Manipulation by Adult-Sized Humanoid Robot","authors":"Hanjaya Mandala, Saeed Saeedvand, J. Baltes","doi":"10.1109/ARIS50834.2020.9205783","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205783","url":null,"abstract":"This paper introduces a synchronous dual-arm manipulation with obstacle avoidance trajectory planning by an adult-size humanoid robot. In this regard, we propose a high precision 3D object coordinate tracking using LiDAR point cloud data and adopting Gaussian distribution into robot manipulation trajectory planning. We derived our 3D object detection into three methods included auto K-means clustering, deep learning object classification, and convex hull localization. Therefore, a lightweight 3D object classification based on a convolutional neural network (CNN) has been proposed that reached 91% accuracy with 0.34ms inference time on CPU. In empirical experiments, the Gaussian manipulation trajectory planning is applied adult-sized dual-arm robot, which shows efficient object placement with obstacle avoidance.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114975078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Landing Site Inspection and Autonomous Pose Correction for Unmanned Aerial Vehicles","authors":"Min-Fan Ricky Lee, A. J., K. Saurav, D. Anshuman","doi":"10.1109/ARIS50834.2020.9205773","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205773","url":null,"abstract":"Large number of disturbances and uncertainties in the environment makes landing one of the tricky maneuvers in all the phases of flying an unmanned aerial vehicle. The situation even worsens at the time of emergencies. To allow safe landing of the UAVs on rough terrains with a lot of ground objects, an automatic landing site inspection and real-time pose correction system while landing is in demand in current world situation. This paper presents a method of detection of designated landing sites and autonomously landing in a safe environment. The airborne vision system is utilized with fully convolution neural network to recognize the landing markers on the landing site and object detection. Automatic pose correction algorithm is developed to position the drone for landing in a safe zone and as near to the landing marker as possible. The information from the onboard visual sensors and Inertial Measurement Unit (IMU) is utilized to estimate pose for the perfect landing trajectory. A series of experiments are presented to test and optimize the proposed method.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122203061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and Implementation of Novel Six-Sided Automated Optical Inspection for Metallic Objects","authors":"Fauzy Satrio Wibowo, Y. R. Wahyudi, Hsien-I Lin","doi":"10.1109/ARIS50834.2020.9205786","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205786","url":null,"abstract":"This paper proposes an inspection system based on the Automated Optical Inspection System (AOI) to inspect six-sided metallic objects. The objective is to develop a system that can provide a good quality of images, with the objects moving on a production line. The proposed system comprises of an industrial robotic arm and a set of cameras. Also, the scanning system provides six-sided inspection, that divided into two stages, i.e., (1) a main-frame inspection (5-side) and (2) an external frame inspection (1-side). An industrial robotic arm is involved to pick-up the object from the production line. Then, the system detected the orientation, shifted the position of the picked object, and calibrated them to the reference orientation and position accordingly. To validate the quality of the images, we use pixel differences to analyze the repeatability of the object pose. According to the experimental results, the system not only provides clear, and it has good performance position repeatability of 4.95 mm.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128526636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CAD-based offline programming platform for welding applications using 6-DOF and 2-DOF robots","authors":"Amit Kumar Bedaka, Chyi-Yeu Lin","doi":"10.1109/ARIS50834.2020.9205784","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205784","url":null,"abstract":"The main objective of this research is to design and develop an offline programming (OLP) simulation platform for welding applications. The proposed platform was developed using OPEN CASCADE libraries in C++ integration environment to perform a given task on a 6-DOF and 2-DOF robots. In this paper, the welding path is generated autonomously using the CAD features and all the calculations are done within the platform. The OLP simulation environment consists of loading CAD files, kinematics analysis, welding path-planning, welding parameters, motion planning, simulation, and robot execution file. In addition, the proposed platform is capable of generating a collision avoidance path before mapping to a real site.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123288522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-model Fusion on Real-time Drowsiness Detection for Telemetric Robotics Tracking Applications","authors":"R. Luo, Chin-Hao Hsu, Yu-Cheng Wen","doi":"10.1109/ARIS50834.2020.9205780","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205780","url":null,"abstract":"Drowsiness of driver is one of the common causes resulting in road crashes. According to the research, there have been twenty percent of the road accidents which are related to the drowsiness of drivers. Nowadays, with the development technology, various approaches are introduced to detect the drowsiness of drivers. In this paper, we propose a multi-model fusion system which is composed of the three models to capture driver’s face and detect drowsiness in the real-time for telemetric robotics tracking applications. The sensor device we used is an RGB camera which is mounted in front of driver to obtain the facial image. Then, we combine the results based on the state of the eye blink, yawn and head deviation to determine whether the driver is drowsy. We test our models to obtain the weighting factors in drowsy value. In the experiment, we show that our system has the high accuracy of detection.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121976182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation and Control of a Robotic Arm Using MATLAB, Simulink and TwinCAT","authors":"Wei-chen Lee, Shih-an Kuo","doi":"10.1109/ARIS50834.2020.9205777","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205777","url":null,"abstract":"It is challenging to develop robot applications without viewing the robot movement. Besides, it is tedious to establish motion paths and adjust controller parameters using the real robot if there is no simulation program available. To resolve the issues for a low-cost robot, we developed a system that integrated kinematics and motion control simulation using MATLAB and Simulink. The system can then be connected to a real robot by using TwinCAT to verify the simulation results. Case studies were conducted to demonstrate that the system worked well and can be applied to those robotic arms without simulators.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120981866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Landing Area Recognition using Deep Learning for Unammaned Aerial Vehicles","authors":"Min-Fan Ricky Lee, Asep Nugroho, Tuan-Tang Le, Bahrudin, Saul Nieto Bastida","doi":"10.1109/ARIS50834.2020.9205793","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205793","url":null,"abstract":"The lack of an automated Unmanned Aerial Vehicles (UAV) landing site detection system has been identified as one of the main impediments to allow UAV flight over populated areas in civilian airspace to develop tasks in the logistical transport scenario. This research proposes landing area localization and obstruction detection for UAVs that are based on deep learning faster R-CNN and feature matching algorithm. Which output decides if the landing area is safe or not. The final result has been deployed on the Aerial Mobile Robot Platform and was successfully performed effectively.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129477947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"One-stage Vehicle Engine Number Recognition System","authors":"Cheng-Hsiung Yang, Han-Shen Feng","doi":"10.1109/ARIS50834.2020.9205775","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205775","url":null,"abstract":"This study proposes a one-stage vehicle engine number recognition system which avoids using the traditional three-stage recognition procedures of positioning, segmentation, and then character recognition, without the needs of image preprocessing procedures, we directly locate and recognizes the text targets in the vehicle engine image. The experiment using 926 labeled images via transfer learning to train our prediction model and then using this prediction model to test another 2310 unlabeled images, the overall accuracy achieved 99.48% and the execution time for recognize a single image is 234ms.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133371301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Path Following for Autonomous Tractor under Various Soil Conditions and Unstable Lateral Dynamic","authors":"Min-Fan Ricky Lee, Asep Nugroho, W. Purbowaskito, Saul Nieto Bastida, Bahrudin","doi":"10.1109/ARIS50834.2020.9205792","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205792","url":null,"abstract":"Lighten the job of the agricultural vehicle operators by providing some autonomous functions is an important field of research, whose most important challenges are to keep the accuracy and optimize the yields. Autonomous navigation of a tractor involves the control of different kinematic and dynamic subsystems, such as the tractor positions, the yaw angle and the longitudinal speed dynamics. The dynamic behavior is highly correlated with the soil conditions of the agricultural field. This paper proposes a Lyapunov’s stability theorem (LST) based kinematic controller for path following in autonomous tractor. Moreover, a Fuzzy-PID controller is employed to control the longitudinal dynamic, and a linear quadratic regulator (LQR) based state-feedback controller to handle the lateral dynamic behavior. Numerical simulation results in MATLAB software show the proposed algorithms can handle the uncertainty of the soil conditions represented by the variations of the rolling friction coefficient.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130389613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of Photosynthetic Growth Signature at the Canopy Scale Using New Genetic Algorithm-Modified Visible Band Triangular Greenness Index","authors":"Ronnie S. Concepcion, Sandy C. Lauguico, Rogelio Ruzcko Tobias, E. Dadios, A. Bandala, E. Sybingco","doi":"10.1109/ARIS50834.2020.9205787","DOIUrl":"https://doi.org/10.1109/ARIS50834.2020.9205787","url":null,"abstract":"Greenness index has been proven sensitive to vegetation properties for multispectral and hyperspectral imaging. However, most controlled microclimatic cultivation chambers are equipped with low-cost RGB camera for crop growth monitoring. The lack of camera credentials specially the wavelength sensitivity of visible band provides added challenge in materializing greenness index. The proposed method in this study compensates the unavailability of generic camera peak wavelength sensitivities by employing genetic algorithm (GA) to derive a visible band triangular greenness index (TGI) based on green waveband signal normalized TGI model called gvTGI. The selection, mutation and crossover rates used in configuring the GA model are 0.2, 0.01 and 0.8 respectively. Lettuce images are captured from an aquaponic cultivation chamber for 6-week crop life cycle. The annotated and extracted gvTGI channels are inputted to deep learning models of MobileNetV2, ResNetl01 and InceptionResNetV2 for estimation of photosynthetic growth signatures at canopy scale. In predicting cultivation period in weeks after germination, MobileNetV2 bested other image classification models with accuracy of 80.56%. In estimating canopy area, MobileNetV2 bested other image regression models with $mathrm{R}^{2}$ of 0.9805. 
The proposed gvTGI proved to be highly accurate on estimation of photosynthetic growth signatures by using generic RGB camera, thus, providing a low-cost alternative for crop phenotyping.","PeriodicalId":423389,"journal":{"name":"2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127331079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}