{"title":"Android-Based Smartphone Authentication System Using Biometric Techniques: A Review","authors":"Xinman Zhang, Tingting He, Xuebin Xu","doi":"10.1109/CRC.2019.00029","DOIUrl":"https://doi.org/10.1109/CRC.2019.00029","url":null,"abstract":"As the technological progress of mobile Internet, smartphone based on Android OS accounts for the vast majority of market share. The traditional encryption technology cannot resolve the dilemma in smartphone information leakage, and the Android-based authentication system in view of biometric recognition emerge to offer more reliable information assurance. In this paper, we summarize several biometrics providing their attributes. Furthermore, we also review the algorithmic framework and performance index acting on authentication techniques. Thus, typical identity authentication systems including their experimental results are concluded and analyzed in the survey. The article is written with an intention to provide an in-depth overview of Android-based biometric verification systems to the readers.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"2 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130244468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RGB Compensation Based on Background Shadow Subtraction for Low-Luminance Pill Recognition","authors":"S. Chokchaitam, Phakdee Sukpornsawan, Nutcha Pungpiboon, Saowalak Tharawut","doi":"10.1109/CRC.2019.00032","DOIUrl":"https://doi.org/10.1109/CRC.2019.00032","url":null,"abstract":"Pill color is applied as one of important features for pill recognition; however, most of pill color is white or cream. It's difficult to classify two similar color pills because pill's color is sensitive to luminance intensity. However, when its luminance intensity is increased, Y value of background is also increased but Y value of pill shadow is slightly decreased. Therefore, difference between Y value of background and its shadow is effectively represented luminance intensity. In this report, we propose RGB compensation based on background shadow subtraction to compensate luminance-intensity effect in RGB value. Experimental results confirm an effectiveness of our proposed compensation method.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128675682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Multi-Mode Control Strategy for Spaceborne Antenna Pointing Mechanism","authors":"Yuan Li, Zhijuan Liu, Huanqiang Chen, Chengshan Liu, Ke Zhu","doi":"10.1109/CRC.2019.00014","DOIUrl":"https://doi.org/10.1109/CRC.2019.00014","url":null,"abstract":"The key problem of the spaceborne antenna pointing mechanism is to achieve high static pointing accuracy and realize fast dynamic tracking within the pointing range. In order to improve the static pointing and dynamic tracking accuracy, this paper utilizes a single-position closed-loop control algorithm to control the spaceborne antenna pointing mechanism, which works in the preset mode and tracking mode, meanwhile introduces feedforward control strategy. Finally, based on Matlab/Simulink, the simulation platform of the pointing control system is built to verify the control performance of the algorithm. The simulation results show that the steady-state error can reach 10-4° when kp=0.03, and the tracking accuracy can reach 10-4° after stabilization. This algorithm can satisfy the requirement of control error which should be less than 0.02°. Furthermore it can shorten the adjusting time with feedforward control strategy.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132030300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Time-Sensitive Network Profile Service for Enhanced In-Vehicle Stream Reservation","authors":"Juho Lee, Sungkwon Park","doi":"10.1109/CRC.2019.00035","DOIUrl":"https://doi.org/10.1109/CRC.2019.00035","url":null,"abstract":"Experiments and standardization work on time-sensitive networks have been actively performed as research for autonomous driving. In the in-vehicle network, layer 1 standard of IEEE 802.3 and layer 2 standard of IEEE 802.1 have been developed or are being published with the introduction of Ethernet for vehicles. However, as these standards are newly established, there are some parts where interconnection with existing standards is lacking and technical issues that need to be solved emerged. One of them is about using Stream Reservation Protocol in vehicle. In this paper, we propose a method to utilize the TSN profile to solve the problem. This enables mutual compatibility between the existing standard and the new standard, and makes it possible to efficiently perform the existing stream reservation. Furthermore, we also proposed ways to extend the TSN profile to outside the vehicle and utilize it in remote driving.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133922194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Robot Augmented Environment Based on ROS Multi-Agent Structure","authors":"S. Vorapojpisut, Matus Lhongpol, Ratchagree Amornlikitsin, Tienake Phuapaiboon","doi":"10.1109/CRC.2019.00020","DOIUrl":"https://doi.org/10.1109/CRC.2019.00020","url":null,"abstract":"This paper presents how to construct an augmented environment for a robot controller software. First, a multiagent software architecture based on the Robot Operating System (ROS) platform is purposed as a framework to superimpose real-world, virtual and software environments. Then, key settings in the ROS framework that affect the robot-environment interaction are discussed. To resolve such issues, message aggregation/dissemination in the proposed framework are implemented using the Simulink-based time-triggered architecture. Finally, a collision detection problem is demonstrated a built robot interacts with the proposed augmented environment.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133506734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Autonomous Step Climbing Strategy Using a Wheelchair and Care Robot","authors":"H. Ikeda, Takafumi Tohyama, Daisuke Maki, Keisuke Sato, E. Nakano","doi":"10.1109/CRC.2019.00024","DOIUrl":"https://doi.org/10.1109/CRC.2019.00024","url":null,"abstract":"This report describes a cooperative step climbing strategy using an electric wheelchair and autonomous robot. The robot, which was developed by our research group, has a wheeled travel mechanism and dual manipulators. The wheelchair is a commercially available model modified with added sensors, circuits, and batteries. When the wheelchair and robot encounter a step, the robot grasps the wheelchair and they help each other to ascend the step. In the step climbing process, the wheelchair front wheels are lifted using the difference between the wheelchair and robot velocities, and the front wheels are placed on the step. To make the rear wheels of the wheelchair climb the step, the robot upper arms push against the back of the wheelchair, which is like the motion of man pushing a wheelchair up the step. Similarly, the robot front and rear wheels climb the step using assistance from the wheelchair. We developed an automatic control system that realizes the cooperative step climbing of the wheelchair and robot and also simplifies the operation of both vehicles. 
An experiment was conducted to demonstrate that the wheelchair and robot can successfully maneuver up a step.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129306190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recognizing Malaysia Traffic Signs with Pre-Trained Deep Convolutional Neural Networks","authors":"Tze How Dickson Neoh, K. Sahari, Yew Cheong Hou, Omar Gumaan Saleh Basubeit","doi":"10.1109/CRC.2019.00030","DOIUrl":"https://doi.org/10.1109/CRC.2019.00030","url":null,"abstract":"An essential component in the race towards the self-driving car is automatic traffic sign recognition. The capability to automatically recognize road signs allow self-driving cars to make prompt decisions such as adhering to speed limits, stopping at traffic junctions and so forth. Traditionally, feature-based computer vision techniques were employed to recognize traffic signs. However, recent advancements in deep learning techniques have shown to outperform traditional color and shape based detection methods. Deep convolutional neural network (DCNN) is a class of deep learning method that is most commonly applied to vision-related tasks such as traffic sign recognition. For DCNN to work well, it is imperative that the algorithm is given a vast amount of training data. However, due to the scarcity of a curated dataset of the Malaysian traffic signs, training DCNN to perform well can be very challenging. In this demonstrate that DCNN can be trained with little training data with excellent accuracy by using transfer learning. We retrain various pre-trained DCNN from other image recognition tasks by fine-tuning only the top layers on our dataset. 
Experiment results confirm that by using as little as 100 image samples for 5 different classes, we are able to classify hitherto traffic signs with above 90% accuracy for most pre-trained models and 98.33% for the DenseNet169 pre-trained model.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127585917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detection of Bughole on Concrete Surface with Convolutional Neural Network","authors":"G. Yao, Fujia Wei, Yang Yang, Yujia Sun","doi":"10.1109/CRC.2019.00045","DOIUrl":"https://doi.org/10.1109/CRC.2019.00045","url":null,"abstract":"Bugholes are surface imperfections found on the surface of concrete structures. The presence of bugholes not only affects the appearance of the concrete structure, but may even affect the durability of the structure. Traditional measurement methods are carried out by in-situ manual inspection, and the detection process is time-consuming and difficult. Although various image processing technologies (IPT) have been implemented to detect defects in the appearance quality of concrete to partially replace manual on-site inspections, the wide variety of realities may limit the widespread adoption of IPTs. In order to overcome these limitations, this paper proposes a detector based on Convolutional Neural Network (CNN) to recognizing bugholes on concrete surfaces. The proposed CNN was trained on 4,000 images and tested on 800 images which were not used for training and validation; the recognition accuracy reached 94.37%. 
The image test results and comparative study with traditional methods showed that the proposed method exhibits excellent performance and indeed can detect the bugholes on the concrete surfaces under actual conditions.","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116662241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the Local Chair","authors":"","doi":"10.1109/crc.2018.00006","DOIUrl":"https://doi.org/10.1109/crc.2018.00006","url":null,"abstract":"","PeriodicalId":414946,"journal":{"name":"2019 4th International Conference on Control, Robotics and Cybernetics (CRC)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129151807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}