International Conference on Artificial Intelligence and Pattern Recognition: Latest Publications

Hybrid CNN-GRU model for high efficient handwritten digit recognition
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357276
Vantruong Nguyen, Jueping Cai, Jie Chu
{"title":"Hybrid CNN-GRU model for high efficient handwritten digit recognition","authors":"Vantruong Nguyen, Jueping Cai, Jie Chu","doi":"10.1145/3357254.3357276","DOIUrl":"https://doi.org/10.1145/3357254.3357276","url":null,"abstract":"Recognition of handwritten digits is a challenging research topic in Optical Character Recognition (OCR) in recent years. In this paper, a hybrid model combining convolutional neural network (CNN) and gate recurrent units (GRU) is proposed, in which GRU is used to replace the CNN fully connected layer part to achieve high recognition accuracy with lower running time. In this model, the features of the original image are firstly extracted by the CNN, and then they are dynamically classified by the GRU. Experiment performed on MNIST handwritten digit dataset suggests that the recognition accuracy of 99.21% while the training time and testing time is only 57.91s and 3.54s, respectively.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123071792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
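The abstract describes the architecture only at a high level. As an illustration of the CNN-followed-by-GRU idea, here is a minimal Keras sketch for 28x28 MNIST digits; the layer sizes, the choice of treating feature-map rows as GRU time steps, and the optimizer are assumptions for illustration, not the configuration reported in the paper.

# Minimal sketch: CNN feature extractor followed by a GRU classifier for MNIST.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn_gru(num_classes=10):
    inputs = layers.Input(shape=(28, 28, 1))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)                      # -> (14, 14, 32)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)                      # -> (7, 7, 64)
    # Treat each of the 7 rows of the feature map as one time step,
    # so the GRU stands in for the usual fully connected layers.
    x = layers.Reshape((7, 7 * 64))(x)
    x = layers.GRU(128)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_cnn_gru()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])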
A correlation analysis of the sentiment analysis scores and numerical ratings of the students in the faculty evaluation
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357287
Jay-ar P. Lalata, B. Gerardo, R. Medina
{"title":"A correlation analysis of the sentiment analysis scores and numerical ratings of the students in the faculty evaluation","authors":"Jay-ar P. Lalata, B. Gerardo, R. Medina","doi":"10.1145/3357254.3357287","DOIUrl":"https://doi.org/10.1145/3357254.3357287","url":null,"abstract":"This paper aims to analyze the relationship between the students' numerical rating and the qualitative measure of the students' written comments in the faculty evaluation using sentiment analysis. The dataset which consists of the numerical ratings and students' feedback obtained from the faculty evaluation system was used in the experiment. An ensemble model which consists of five machine learning algorithms was used to analyze and identify the polarity of the written comments of the students. The overall sentiment score was computed for each faculty and was compared to the numerical score using the statistical technique, Pearson's correlation coefficient. The result indicates that there is significance but very small relationship between the numerical rating and the overall sentiment scores. Based on the result, universities and colleges should exploit written comments since it is rich with observations and insights about the performance and effectiveness of a teacher. Moreover, sentiment analysis technique can be used to identify students' feeling towards teaching.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124462554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
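For readers unfamiliar with the statistical step, the comparison between per-faculty sentiment scores and numerical ratings reduces to a Pearson correlation. A minimal sketch with SciPy follows; the numbers are placeholders, not data from the study.

# Sketch: Pearson correlation between per-faculty sentiment scores and ratings.
from scipy.stats import pearsonr

sentiment_scores = [0.62, 0.41, 0.75, 0.55, 0.33, 0.68]   # one ensemble sentiment score per faculty member (placeholder values)
numerical_ratings = [4.5, 4.1, 4.7, 4.3, 3.9, 4.4]         # mean student rating per faculty member (placeholder values)

r, p_value = pearsonr(sentiment_scores, numerical_ratings)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
# A small but statistically significant r would correspond to the paper's
# "significant but very small" relationship.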
Face pose estimation with ensemble multi-scale representations
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357278
Zhaocui Han, Weiwei Song, X. Yang, Zongying Ou
{"title":"Face pose estimation with ensemble multi-scale representations","authors":"Zhaocui Han, Weiwei Song, X. Yang, Zongying Ou","doi":"10.1145/3357254.3357278","DOIUrl":"https://doi.org/10.1145/3357254.3357278","url":null,"abstract":"Face pose estimation plays important roles in broad applications such as visual based surveillance, face authentication, human-computer intelligent interactions, etc. However, face pose estimation is also a challenge issue, especially under complicated real application environments. In this paper, we proposed a novel face pose estimation approach with integrating two multi-scale representations. The first one is multi-scale VGG-Face representations, which using VGG-Face CNN as backbone three middle scale layer outputs are extracted and go through additional transfer learning. The second one is multi-scale Curvelet representations. These two sub multi-scale representations are integrated and then several dense layers processing are added to form the entire ensemble system which is used for the prediction of face pose. The experiment results show that the proposed approach achieved mean absolute errors (MAE) of 0.33° and 0.23° for yaw and pitch angle on CAS-PEAL pose database, and achieved mean absolute errors of 3.88° and 1.98° for yaw and pitch angle on Pointing'04 database.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"275 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115942875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
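As a rough illustration of fusing several intermediate-scale backbone outputs for yaw/pitch regression, the sketch below uses Keras VGG16 as a stand-in for VGG-Face; the tapped layers and head sizes are assumptions, and the paper's multi-scale Curvelet branch is omitted entirely.

# Sketch: fuse three middle-scale backbone outputs and regress [yaw, pitch].
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

backbone = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
taps = ["block3_pool", "block4_pool", "block5_pool"]   # three middle-scale outputs (assumed choice)
feats = [layers.GlobalAveragePooling2D()(backbone.get_layer(n).output) for n in taps]

x = layers.Concatenate()(feats)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
pose = layers.Dense(2)(x)                               # [yaw, pitch] in degrees

model = models.Model(backbone.input, pose)
model.compile(optimizer="adam", loss="mae")             # MAE matches the reported metric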
Semantic image inpainting with boundary equilibrium GAN
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357260
Yuhang Jia, Yan Xing, Cheng Peng, Chao Jing, Congzhang Shao, Yifan Wang
{"title":"Semantic image inpainting with boundary equilibrium GAN","authors":"Yuhang Jia, Yan Xing, Cheng Peng, Chao Jing, Congzhang Shao, Yifan Wang","doi":"10.1145/3357254.3357260","DOIUrl":"https://doi.org/10.1145/3357254.3357260","url":null,"abstract":"Recently, due to the vigorous development of deep learning, many methods in the field of image inpainting have been proposed which are different from the traditional image inpainting methods. This paper uses the high-quality image generation technology of BEGAN to complete the image inpainting task. Firstly, the image generation model is obtained by pretraining the generator and discriminator of BEGAN. Then this paper redesigns the loss function and finds the generated image suitable for the image inpainting task via gradient descent algorithm. By using the information contained in the undamaged part of the original image to be repaired, the BEGAN model can generate an image that is closest to the original image. Finally, the generated image is used to fill the lost area of the original image to be repaired, and the image inpainting task is completed. This paper confirms the validity of the method through the experiments on the CelebA and LFW datasets.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134424691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
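The inpainting-by-latent-search step can be sketched as follows: with a pretrained generator frozen, gradient descent on the latent code minimizes a reconstruction loss computed only over the undamaged pixels, and the generated image then fills the hole. `generator` here is a hypothetical pretrained BEGAN-style generator, and the loss, step count, and learning rate are assumptions rather than the paper's redesigned loss.

# Sketch: inpainting by optimizing the latent code of a frozen generator.
import torch

def inpaint(generator, corrupted, mask, z_dim=64, steps=500, lr=0.05):
    # corrupted: (1, 3, H, W) image with the damaged region zeroed out
    # mask:      (1, 1, H, W) tensor, 1 where pixels are known, 0 where missing
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        gen = generator(z)
        # contextual loss: match only the known (undamaged) pixels
        loss = torch.abs((gen - corrupted) * mask).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        gen = generator(z)
    # fill the missing region with generated content, keep the known pixels
    return corrupted * mask + gen * (1 - mask)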
An investigation of gradient as a feature cue for saliency detection
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357281
Christopher Cooley, S. Coleman, B. Gardiner, B. Scotney
{"title":"An investigation of gradient as a feature cue for saliency detection","authors":"Christopher Cooley, S. Coleman, B. Gardiner, B. Scotney","doi":"10.1145/3357254.3357281","DOIUrl":"https://doi.org/10.1145/3357254.3357281","url":null,"abstract":"Salient object detection is a prominent research topic, based on a human's ability to selectively process conspicuous objects/regions within a scene. With many low-level features being adopted into saliency models, gradient is often overlooked. We investigate the effectiveness of gradient as a feature, applying and evaluating multiple image gradient operators. Scale is also addressed via the use of different sizes of convolutional masks and by varying the neighbour region to calculate gradient contrast. Finally, we present and evaluate a single scale saliency model with the respective gradient cue from each operator, for the detection of salient objects. Each model is evaluated on the publicly available MSRA10K salient object dataset.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130896036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
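A minimal version of a gradient cue for saliency can be sketched with OpenCV: compute a Sobel gradient magnitude map and score each pixel by its contrast against the mean gradient of a surrounding neighbourhood. The mask size and neighbourhood radius below are illustrative choices, not the settings evaluated in the paper.

# Sketch: gradient-contrast saliency from a Sobel gradient magnitude map.
import cv2
import numpy as np

def gradient_saliency(gray, ksize=3, neighbourhood=15):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # local mean gradient over the neighbour region
    local_mean = cv2.blur(mag, (neighbourhood, neighbourhood))
    saliency = np.abs(mag - local_mean)
    return cv2.normalize(saliency, None, 0, 1, cv2.NORM_MINMAX)

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
saliency_map = gradient_saliency(gray)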
Vehicle model recognition using SRGAN for low-resolution vehicle images
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357284
Joo-Sung Kim, JoungWoo Lee, Kwangho Song, Yoo-Sung Kim
{"title":"Vehicle model recognition using SRGAN for low-resolution vehicle images","authors":"Joo-Sung Kim, JoungWoo Lee, Kwangho Song, Yoo-Sung Kim","doi":"10.1145/3357254.3357284","DOIUrl":"https://doi.org/10.1145/3357254.3357284","url":null,"abstract":"An enhanced vehicle model recognizer for low-resolution images is proposed in where SRGAN (Super Resolution Generative Adversarial Network) is used to enhance the image quality and CNN (Convolutional Neural Network) is used to classify the vehicle model from the enhanced images. Many previous vehicle model classifiers trained with only the high-resolution front-images of vehicles have low accuracy against the low-quality images captured by CCTV cameras in real environments. To correctly classify the vehicle model from the low-quality images of arbitrary directions, SRGAN is first used to transform the low-resolution image into the corresponding high-resolution image. Then the direction of the vehicle in the image is determined and the vehicle model is recognized based on the pre-determined direction. The accuracy of the proposed vehicle model classifier is evaluated as 78%, higher than that of the classification without SRGAN.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125042295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
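The two-stage pipeline (super-resolve, then classify) can be sketched as below. `sr_generator` and `vehicle_classifier` are hypothetical pretrained models standing in for the paper's SRGAN and CNN; they are not artifacts released with the paper.

# Sketch: super-resolve a low-resolution crop, then classify the vehicle model.
import torch

def recognize_vehicle(lr_image, sr_generator, vehicle_classifier):
    # lr_image: (1, 3, h, w) low-resolution crop from a CCTV frame
    with torch.no_grad():
        hr_image = sr_generator(lr_image)          # SRGAN-style upscaling
        logits = vehicle_classifier(hr_image)      # CNN over the enhanced image
    return logits.argmax(dim=1)                    # predicted vehicle-model class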
Dangerous behaviors detection based on deep learning
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357267
Yue Chang, Zecheng Du, Jie Sun
{"title":"Dangerous behaviors detection based on deep learning","authors":"Yue Chang, Zecheng Du, Jie Sun","doi":"10.1145/3357254.3357267","DOIUrl":"https://doi.org/10.1145/3357254.3357267","url":null,"abstract":"Deep learning has a high degree of popularity in recent years. It is widely used in computer vision, artificial intelligence and other fields. Sites with high safety needs, such as gas stations, have a high demand for monitoring of dangerous behaviors such as smoking. Under normal circumstances, gas stations will employ corresponding personnel to inspect and supervise, but such labor costs are higher, and the monitoring effect is not good. This article is to use an object detection system based on deep learning technology to detect the dangerous behavior of gas stations. This article mainly solves several problems for gas stations to detect dangerous behaviors: first, what technology is used to achieve object detection; secondly, how to increase the speed of detection as much as possible; and thirdly, how to improve the accuracy of detecting dangerous behavior. To solve the above problems, this article will introduce how to implement an object detection system based on deep learning technology. First, a data set containing dangerous goods is established, then the convolutional neural network is trained, and finally the test results of the training results are checked and transplanted. The results prove that the gas station dangerous behavior detection system based on deep learning technology realized can accurately and quickly detect dangerous objects (cigarettes, etc.) in the image.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126561216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
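The abstract does not name a specific detector. As one illustration of applying a deep-learning object detector to gas-station footage, the sketch below runs torchvision's pretrained Faster R-CNN; detecting cigarettes in practice would require fine-tuning on the custom dangerous-goods dataset the paper describes, and the 0.5 score threshold is an assumption.

# Sketch: run an off-the-shelf detector on one frame of gas-station footage.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = to_tensor(Image.open("station_frame.jpg").convert("RGB"))
with torch.no_grad():
    detections = model([frame])[0]          # dict with boxes, labels, scores

keep = detections["scores"] > 0.5           # simple confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])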
L1 norm based double-prior learning for image denoising
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357286
Weinan Du, Yanfeng Sun, Yongli Hu
{"title":"L1 norm based double-prior learning for image denoising","authors":"Weinan Du, Yanfeng Sun, Yongli Hu","doi":"10.1145/3357254.3357286","DOIUrl":"https://doi.org/10.1145/3357254.3357286","url":null,"abstract":"Image denoising problem has attracted a large number of researchers. Generally speaking, there are two kinds of image priors considering the source of training sets, external priors and internal priors. The realistic image priors can be obtained from a large number of external example images or the corrupted internal image itself. However, external priors cannot give accurate image representations towards various corrupted images because the total number of example images is limited. While internal priors may bring too much noise along with useful information for denoising, which leads to unexpected denoising results. The most common assumption in denoising problem is that the image noise obeys Gaussian distribution, which is simple and ideal. If there are outliers in the corrupted images, Laplace distribution is more suitable to model the image noise. This paper proposes a denoising model towards image noise in Laplace distribution utilizing both external priors and internal priors. Gaussian Mixture Model (GMM) is used to model external priors and l1 norm is aimed to deal with outliers. Experiments on some publicly available databases show the performance of proposed method, resulting in denoised image of high quality.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133411850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
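The full model (GMM external prior plus internal prior with an l1 data term) cannot be reconstructed from the abstract. The sketch below only illustrates why the l1 term suits Laplace noise: for a per-pixel objective |y - x| + lam*(x - r)^2, where r stands in for a prior estimate (for example a GMM patch mean), the closed-form minimizer follows the observation while it stays close to the prior and is clipped otherwise, so isolated outliers cannot dominate the result. The quadratic prior is a stand-in assumption, not the paper's learned priors.

# Sketch: robustness of an l1 data-fidelity term under Laplace noise.
import numpy as np

def l1_robust_denoise(y, r, lam=0.5):
    # Per-pixel closed form of argmin_x |y - x| + lam * (x - r)^2.
    # The estimate equals y while |y - r| <= 1/(2*lam); larger deviations
    # (outliers under Laplace noise) are clipped back toward the prior r.
    t = 1.0 / (2.0 * lam)
    return r + np.clip(y - r, -t, t)

# toy usage: the outlier pixel (value 9.0) is pulled toward the prior
y = np.array([0.10, 0.20, 9.00, 0.15])
r = np.array([0.12, 0.18, 0.20, 0.14])
print(l1_robust_denoise(y, r, lam=0.5))   # -> [0.1, 0.2, 1.2, 0.15]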
Automatic detection of intracranial aneurysm from digital subtraction angiography with cascade networks
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3357258
Junhua Liao, Haihan Duan, Huming Dai, Yunzhi Huang, Lunxin Liu, Liangyin Chen, Liangxue Zhou
{"title":"Automatic detection of intracranial aneurysm from digital subtraction angiography with cascade networks","authors":"Junhua Liao, Haihan Duan, Huming Dai, Yunzhi Huang, Lunxin Liu, Liangyin Chen, Liangxue Zhou","doi":"10.1145/3357254.3357258","DOIUrl":"https://doi.org/10.1145/3357254.3357258","url":null,"abstract":"Automatic detection of intracranial aneurysm based on Digital Subtraction Angiography (DSA) images is a challenging task for the following reasons: 1) effectively leverage the temporal information of the DSA sequence; 2) effectively extract features by avoiding unnecessary interference in the raw DSA images of large resolution; 3) effectively distinguish the vascular overlap from intracranial aneurysm in DSA images. To better identify intracranial aneurysm from DSA images, this paper proposed an automatic detection framework with cascade networks. This framework is consisted of a region localization stage (RLS) and an intracranial aneurysm detection stage (IADS). The RLS stage can significantly reduce the interference from unrelated regions and determine the coarse effective region. The IADS stage fully employed the spatial and temporal features to accurately detect aneurysm from DSA sequence. This method was verified in the posterior communicating artery (PCoA) region of internal carotid artery (ICA). In clinical trials, the accuracy of the baseline method was 62.5% with area under curve (AUC) of 0.650, and the time cost of the detection was approximately 62.546s. However, the accuracy of this method was 85.5% with AUC of 0.918, and the time cost of detection was about 3.664s. The experimental results showed that the proposed method significantly improved the accuracy and speed of intracranial aneurysm automatic detection.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131847417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
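The cascade can be sketched as a two-stage inference pipeline: a region localization model crops a coarse region of interest from the high-resolution DSA frames, and a spatio-temporal detection model consumes the cropped sequence. `rls_model` and `iads_model` below are hypothetical placeholders; the paper's actual network architectures are not reproduced here.

# Sketch: two-stage cascade inference over a DSA frame sequence.
import torch

def detect_aneurysm(dsa_sequence, rls_model, iads_model):
    # dsa_sequence: (T, 1, H, W) temporal stack of DSA frames
    with torch.no_grad():
        x0, y0, x1, y1 = rls_model(dsa_sequence)          # coarse effective region (integer bounds)
        cropped = dsa_sequence[:, :, y0:y1, x0:x1]        # drop unrelated regions
        prob = iads_model(cropped.unsqueeze(0))           # spatio-temporal aneurysm score
    return prob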
Fault diagnosis of diesel generator set based on deep believe network
International Conference on Artificial Intelligence and Pattern Recognition Pub Date : 2019-08-16 DOI: 10.1145/3357254.3358601
Qinsheng Yun, Chuan-qing Zhang, Tianyuan Ma
{"title":"Fault diagnosis of diesel generator set based on deep believe network","authors":"Qinsheng Yun, Chuan-qing Zhang, Tianyuan Ma","doi":"10.1145/3357254.3358601","DOIUrl":"https://doi.org/10.1145/3357254.3358601","url":null,"abstract":"As a kind of power supply equipment, diesel generator set has the characteristics of good mobility, fast start, stable power supply, convenient operation and maintenance. Diesel generator set is very important for power supply applications. The research on automatic fault diagnosis of diesel generator set is of great significance for monitoring the operation status of diesel generator and timely maintenance. Compared with traditional neural networks, deep believe network improves the learning efficiency of multi-layer networks by introducing restricted Boltzmann machine. A deep believe network based fault diagnosis for diesel generator set is developed. The sensor data collected from diesel generator set are processed to form a training dataset, and deep believe network is designed. The experimental results show that the deep believe network based method has the best fault diagnosis performance in recall, precision, accuracy and F1-score than other learning based methods.","PeriodicalId":361892,"journal":{"name":"International Conference on Artificial Intelligence and Pattern Recognition","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114559889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
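A DBN-style classifier can be approximated with off-the-shelf tools by greedily pretraining stacked restricted Boltzmann machines and attaching a supervised head, as in the scikit-learn sketch below. Layer sizes and learning rates are assumptions, the sensor features are assumed to be scaled to [0, 1] for the Bernoulli RBMs, and unlike a full DBN there is no joint fine-tuning of the RBM weights.

# Sketch: stacked RBMs pretrained greedily, with a logistic-regression head.
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

dbn = Pipeline([
    ("scale", MinMaxScaler()),                                          # sensor features -> [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),                         # fault-class head
])
# X: (n_samples, n_sensor_features) processed generator-set sensor readings
# y: fault class labels
# dbn.fit(X_train, y_train); y_pred = dbn.predict(X_test)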