{"title":"An Integrated Classification Model for Massive Short Texts with Few Words","authors":"Xuetao Tang, Yi Zhu, Xuegang Hu, Peipei Li","doi":"10.1145/3366715.3366734","DOIUrl":"https://doi.org/10.1145/3366715.3366734","url":null,"abstract":"Short text classification has achieved excellent performance in recent years. However, massive short texts with few words, such as invoice data, differ from traditional short texts such as tweets in that they lack contextual information and carry less semantic content, which hinders the application of conventional classification algorithms. To address these problems, we propose an integrated classification model for massive short texts with few words. More specifically, a word embedding model is introduced to train word vectors for massive short texts with few words to form the feature space, and the vector representation of each instance is then trained based on sentence embedding. With this integrated model, higher-level representations are learned from massive short texts with few words, which can boost the performance of subsequent base classifiers such as K-Nearest Neighbor. Extensive experiments conducted on a dataset including 16 million real records demonstrate the superior classification performance of our proposed model compared with all competing state-of-the-art models.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"15 44","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113963076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Chinese Character CAPTCHA Recognition Based on Convolutional Neural Network","authors":"Xiangyun Zhang, Jin Zhang, Shuiping Zhang","doi":"10.1145/3366715.3366724","DOIUrl":"https://doi.org/10.1145/3366715.3366724","url":null,"abstract":"To achieve effective recognition of Chinese character CAPTCHAs, we propose a convolutional neural network model designed with reference to LeNet-5. The number of convolution kernels is increased to extract features more efficiently, dropout layers are added to prevent overfitting, and normalization layers are added to prevent gradient explosion. The model takes grayscaled, binarized, and segmented CAPTCHA images as input and outputs a 3,500-dimensional vector indicating the probability of each Chinese character. After training, the model achieves a recognition rate of 99.6%. The experiments also compare the model with an existing model; the results show that our model identifies Chinese character CAPTCHAs more effectively.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127237515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SYG-Net: A New High-Precision Vehicle Detection Network","authors":"Zhang Yang, Dengfeng Yao","doi":"10.1145/3366715.3366726","DOIUrl":"https://doi.org/10.1145/3366715.3366726","url":null,"abstract":"To improve the accuracy of vehicle detection, we propose SYG-Net, a vehicle detection neural network that takes the YOLOv3 network as its main body and combines it with generalized Intersection over Union (GIoU) and a spatial pyramid pooling (SPP) module. The backbone of SYG-Net is the basic network structure of YOLOv3; however, an SPP layer is added before the main feature-extraction structure, namely the darknet and YOLO layers, so that the features fed into the YOLO layer capture spatial information. GIoU is used as the bounding-box regression loss at the end of the network, and the model is tested on the UA-DETRAC dataset. Results show that the mAP and recall values of SYG-Net increase substantially, while the loss and the average GIoU converge quickly with good effect. SYG-Net is 0.75% and 0.7% more accurate than YOLOv3 and YOLOv3-SPP, respectively. The results show that SYG-Net detects vehicles effectively. This paper looks forward to combining SYG-Net with other modules.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132223032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Sensor Fusion Quadrotor Attitude and Altitude Estimation","authors":"Lei Zhang, Yajie Ma, Gouqing Liu, Yi Yu","doi":"10.1145/3366715.3366716","DOIUrl":"https://doi.org/10.1145/3366715.3366716","url":null,"abstract":"Attitude and altitude estimation is one of the core issues in quadrotor aircraft research; attitude and altitude are mainly measured by sensors such as gyroscopes, accelerometers, and barometers [1]. In view of the zero drift, noise interference, and information delay of these sensors, two filters are designed: a balance (complementary) filter that fuses gyroscope and accelerometer data for horizontal attitude calculation, and a Kalman filter that fuses accelerometer and barometer data for altitude estimation. Finally, the algorithms are implemented on a physical experiment platform. The attitude data obtained by the balance filtering algorithm and the altitude data obtained by the Kalman filter algorithm are compared with the attitude and altitude obtained from single-sensor data, and the reliability of the algorithms is verified.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134532555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Visual and Inertia Fusion Odometry Based on PROSAC Mismatched Culling Algorithm","authors":"Lingxing Deng, Xun Li, Yanduo Zhang","doi":"10.1145/3366715.3366725","DOIUrl":"https://doi.org/10.1145/3366715.3366725","url":null,"abstract":"A localization method based on Progressive Sample Consensus (PROSAC) combining monocular visual and inertial navigation is proposed, which focuses on solving the problem of self-positioning of low-cost devices in an unknown environment. This paper uses the PROSAC algorithm, and the Inertial Measurement Unit (IMU) calculates the relative motion distance of the camera by pre-integration to assist positioning. The PROSAC mismatch culling algorithm is added to the visual-inertial odometry, and its performance is compared with traditional methods (VIORB and VINS) on the EuRoC datasets, proving the effectiveness of the method. The average error is 0.069 m, which is 11.1% and 7.7% lower than the two algorithms, respectively.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115330451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Source Deep Residual Fusion Network for Depth Image Super-resolution","authors":"Xiaohui Hao, T. Lu, Yanduo Zhang, Zhongyuan Wang, Hui Chen","doi":"10.1145/3366715.3366731","DOIUrl":"https://doi.org/10.1145/3366715.3366731","url":null,"abstract":"Compared with color images, depth images often lack high-quality texture information. Depth image super-resolution provides an efficient solution to enhance the high-frequency information of low-resolution (LR) depth images. In this paper, we propose a novel multi-source residual fusion neural network named \"MSRFN\", which fully exploits the rich texture information of color images to guide depth image reconstruction. Initially, color and depth images are used to extract residual features in a two-branch network. Then, the color residual and depth residual are fused by a fusion network. Finally, the high-resolution (HR) depth map is reconstructed by fusing multi-source high-frequency information. Experimental results on the MPI Sintel and Middlebury public databases show that MSRFN outperforms some state-of-the-art approaches in subjective and objective measures.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131547804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on Energy-Regeneration Simulation of An Electromechanical Active-Suspension System","authors":"Q. Lin, Zhicheng Wu, Yuzhuang Zhao","doi":"10.1145/3366715.3366720","DOIUrl":"https://doi.org/10.1145/3366715.3366720","url":null,"abstract":"Active suspensions have not been widely used due to their substantial energy consumption. A kind of electromechanical energy-regenerative suspension is proposed in this paper. The suspension dynamic equations are modeled in MATLAB/Simulink. To obtain optimal suspension performance, several indices are discussed with three control curves on a motor MAP diagram. The results indicate that the optimal control strategy depends on the chosen objective: for energy recovery, the performance contour is the best choice, while if the objective is a balance of all indices, the middle torque curve is ideal. An active-control suspension and the energy-regenerative suspension are also compared in this paper. It can be concluded that the active-control suspension obtains good dynamic effects but consumes considerable energy.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"151 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114845103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-driven Face Hallucination by Inverse Degradation Neural Network","authors":"Ruobo Xu, Jiaming Wang, T. Lu","doi":"10.1145/3366715.3366744","DOIUrl":"https://doi.org/10.1145/3366715.3366744","url":null,"abstract":"Face hallucination refers to the technology of inferring the potential corresponding high-resolution (HR) image from an input low-resolution (LR) facial image. At present, most face hallucination algorithms improve reconstruction performance by optimizing models. However, these common approaches break down on more complex problems; e.g., when the input image contains degraded pixels (noise), their reconstruction performance drops sharply. To solve this problem, we propose an inverse degradation neural network (IDNN), which can mine the essential features of images in a data-driven manner. In this network, we design different network structures for different task stages. First, a more accurate face structure is generated by the denoising network in the LR space; however, facial details are still lacking at this stage. To further enhance the facial details, we utilize a reconstruction network to restore the missing details. The experimental results on the FEI face database show that IDNN outperforms some state-of-the-art approaches in subjective and objective measures.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127895375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Prediction of Lysine Succinylation Sites by SVR and Weighted Down-sampling","authors":"Kai Wang, P. Liang, Junda Hu","doi":"10.1145/3366715.3366735","DOIUrl":"https://doi.org/10.1145/3366715.3366735","url":null,"abstract":"Succinylation is a post-translational modification (PTM) that changes the chemical structure of lysine and results in significant changes in the structure and function of proteins. Lysine succinylation plays an important role in coordinating various biological processes, and it is also associated with some diseases. Accurately identifying lysine succinylation sites in proteins is of significant importance for basic research and drug development. Lysine succinylation site prediction is a typical imbalanced and fragmentary learning problem, so directly applying traditional machine learning approaches to this task is not suitable. To circumvent this problem, based on extracting the features of protein sequences by a sliding window and a mirror effect, weighted under-sampling is developed to make the samples complete and balanced. Finally, based on an SVR prediction model and a corresponding suitable threshold, the effectiveness of the proposed method was validated by experimental results in comparison with several state-of-the-art related methods.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131524925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Traffic Sign Recognition Model with Only 140 KB","authors":"Luo Dawei, Fang Jianjun, Yao Dengfeng","doi":"10.1145/3366715.3366723","DOIUrl":"https://doi.org/10.1145/3366715.3366723","url":null,"abstract":"To design a traffic sign recognition model with low computational complexity and a low parameter count, we use Group Convolution to compress the parameters and design an extreme block to solve the problems that the number of input channels of a Group Convolution must equal the number of output channels and that features cannot be extracted across channels. In this paper, the number of convolution kernels is set according to the number of classes. Finally, the original 30 MB CifarNet is compressed into a 140 KB classification model, which we tested on the BelgiumTS Dataset. The experimental results show that after the model size is compressed to 1/220 of the original, the top-1 score is not reduced but increased (87.31%), and the top-5 score is increased by 0.5%. The experiments prove that the compression strategy is effective, and they also explore the relationship between the number of convolution kernels and the number of classes.","PeriodicalId":425980,"journal":{"name":"Proceedings of the 2019 International Conference on Robotics Systems and Vehicle Technology - RSVT '19","volume":"3436 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127504745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}