{"title":"Robust Image Watermarking Method in Wavelet Domain Based on SIFT Features","authors":"R. Ahmad, Xiaoming Yao, S. Nawaz, U. Bhatti, Anum Mehmood, M. Bhatti, Mohammad Usman Shaukat","doi":"10.1145/3430199.3430243","DOIUrl":"https://doi.org/10.1145/3430199.3430243","url":null,"abstract":"To protect the ownership of the digital content, the robustness of the watermarking algorithm is the most important metric to assess its affectiveness. However few state-of-the-art watermarking algorithms can resist the combinations of the conventional attacks such as jpeg compression and geometric transformation. In this paper, an improved robust image watermarking algorithm is thus proposed to address this issue. The watermark information is embedded in the low frequency domain of the wavelet transform by a quantization modulation method. When using watermark detection, use matching the position information of the SIFT key points is used to calculate the affine transformation parameters and the edge point parameters, and then inversely transform and reposition the detected image to recover the watermark synchronization information. Theoretical analysis and experimental results show that the proposed algorithm has high correlation accuracy and stable performance, and can effectively recover the watermark synchronization of watermark images subjected to rotation, scaling and translation attacks, so that the watermark algorithm can correctly detect or extract watermarks.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127269898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","authors":"","doi":"10.1145/3430199","DOIUrl":"https://doi.org/10.1145/3430199","url":null,"abstract":"","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128096203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Applying Social Network Extraction With Named Entity Recognition to the Examination of Political Bias Within Online News Articles","authors":"K. Lin, C. Tsai","doi":"10.1145/3430199.3430219","DOIUrl":"https://doi.org/10.1145/3430199.3430219","url":null,"abstract":"We aim to expand the application of social network extraction with NER tools, which to date is largely limited to fiction. With the premise that news articles resemble mini-stories, this study explores the extraction of social networks from online United States news articles to examine relationships between political bias and network features. We find statistical significance with most trends, and find no substantial differences between Liberal and Conservative bias, but bias and neutrality. Furthermore, this study identifies several issues with social network analysis, proposing a more rigorous examination of textual characteristics that affect network features.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134514668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"People Counting Based on Multi-scale Region Adaptive Segmentation and Depth Neural Network","authors":"Feng Min, Yansong Wang, Sicheng Zhu","doi":"10.1145/3430199.3430201","DOIUrl":"https://doi.org/10.1145/3430199.3430201","url":null,"abstract":"People counting based on surveillance camera is the basis of the important tasks, such as the analysis of crowd behavior, the optimal allocation of resources and public security. Aiming at the low accuracy of the people counting method based on object detection, a people counting method based on multi-scale region adaptive segmentation and deep neural network is proposed in this paper. The idea originates from the analysis and research of multi-scale objects, and it is found that the detection accuracy will be improved if the multi-scale objects match the size of multi-scale anchors. In this method, K-means is used to cluster the detection results of Faster-RCNN model. Then the image is segmented adaptively according to the clustered results. Finally, Faster-RCNN model is used to detect the segmented images. The experimental results show that the average accuracy of this method is 45.78% on mall dataset, which is higher than Faster-RCNN about 3.59%.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116901405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Single Shot Detector for Industrial Cracks by Feature Resolution Analysis","authors":"Shengxiang Qi, Yaming Dong, Qing Mao","doi":"10.1145/3430199.3430204","DOIUrl":"https://doi.org/10.1145/3430199.3430204","url":null,"abstract":"Although the single shot detector (SSD) is effective for object detection in natural images, it is not suitable for special tasks such as the industrial crack detection. The difficulty lies in the wide diversity of crack sizes and shapes that is usually unpredictable. To solve this problem, we improve the SSD model by feature resolution analysis. The classical SSD network extracts several convolutional feature layers with degressive scales, and then classifies and locates targets by a series of prior boxes with default sizes and aspect ratios regarding to each scale. Therefore, the key is whether the design of these prior boxes is consistent with the real target characteristics. In this paper, we improve the architecture of SSD network via statistically analyzing the distribution of sizes and shapes from our collected crack samples. According to the resolution analysis of the targets at each feature scale, only a fewer number of valid feature layers are carefully extracted, and some more accurate prior boxes are designed relative to each scale. Finally, experimental results demonstrate that the proposed method could not only achieve significantly better prediction accuracy, but also acquire higher computational efficiency, which outperform the state-of-the-art methods.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127067583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An LSTM-based Traffic Prediction Algorithm with Attention Mechanism for Satellite Network","authors":"Feiyu Zhu, Lixiang Liu, Teng Lin","doi":"10.1145/3430199.3430208","DOIUrl":"https://doi.org/10.1145/3430199.3430208","url":null,"abstract":"Due to the response to the topological time-varying of satellite network, the satellite management system puts forward higher requirements for the network traffic prediction algorithm. The traffic prediction algorithm of ground network is not suitable for satellite network. In this manuscript, a neural network model of long and short-term memory with attention mechanism is proposed. Considering that the input and output of traffic prediction is a sequence, the long short-term Memory (LSTM) model in this manuscript balances the effects of different parts of input on output by adding attention mechanism. The simulation results show that compared with ARIMA, random forest and traditional Recurrent Neural Network (RNN), the prediction accuracy of this model is significantly improved. Meanwhile, compared with the model after removing the attention network, the model verifies the effectiveness of the attention network.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129211041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mammography Registration for Unsupervised Learning Based on CC and MLO Views","authors":"Jiyun Li, Xiaomeng Wang, Chen Qian","doi":"10.1145/3430199.3430238","DOIUrl":"https://doi.org/10.1145/3430199.3430238","url":null,"abstract":"Mammography image usually contains two views in different orientations---Cranial Caudal (CC) and Mediolateral Oblique (MLO). In clinical decision making, the location of the lesions on the CC and MLO views are usually different. And the shape of breast varies greatly among patients, therefore, two views are necessary for evaluating the information in a comprehensively manner. In this paper, we propose an unsupervised registration algorithm based on CC and MLO views of mammography, which learns the deformation function through a Convolutional Neural Network (CNN). This function maps the input image to the corresponding deformation field and generates an image with the same shape as the template image after deformation, so that the doctor can better observe the two views. According to the radiologist's assessment, our work can contribute to medical image analysis and processing while providing novel guidance in learning-based registration and its applications.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128076091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Offline Handwritten Chinese Character Recognition Based on Improved Googlenet","authors":"Feng Min, Sicheng Zhu, Yansong Wang","doi":"10.1145/3430199.3430202","DOIUrl":"https://doi.org/10.1145/3430199.3430202","url":null,"abstract":"Aiming at the problem of misrecognition in offline handwritten Chinese character recognition, this paper proposed an improved shallow GoogLeNet and an error elimination algorithm. Compared with the shallow GoogLeNet, the improved shallow GoogLeNet not only reduced the number of training parameters, but also maintained the depth of the Inception structure. According to the error elimination algorithm, the confidence of the samples in the test results was calculated and the erroneous samples in the dataset were removed. Then the dataset was divided into multiple similar character sets and one dissimilar character set. When the recognition result was in the dissimilar character set, it can be used as the final result. Otherwise, the final result could be obtained by the secondary recognition on the corresponding similar character set. The training and testing of the experiment were carried out on the CISIA-HWDB1.1 dataset. The accuracy of the method was 97.48%, which was 6.68% higher than that of the GoogLeNet network.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129659316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey of Research on Image Data Sources Forensics","authors":"Xu Meng, Kun Meng, Wenbao Qiao","doi":"10.1145/3430199.3430241","DOIUrl":"https://doi.org/10.1145/3430199.3430241","url":null,"abstract":"The development of technologies such as smart terminals and mobile Internet has made image data one of the most important forms of data in the Internet and personal storage media, and has grown at an alarming rate. As the most effective expression of information, image data can record various information when image content appears, and it can play an unparalleled role in restoring the truth of things. Therefore, the aim of efficiently and accurately identify the source of image is to determine the device that generated the data. It is an effective means of clustering data from the same device, and become a key step in helping to understand the full content. It is one of the core technologies for conducting electronic data forensic evidence. On the basis of summarizing and analyzing the image generation process, this paper analyzes the data shape and acquisition steps of the potential image generation device information, and then obtains the method of image data source identification. It also summarizes the existing related technologies and methods, comparative analysis of their applicability and potential development direction.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117218873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Segmentation Based on Finite IBL Mixture Model with a Dirichlet Compound Multinomial Prior","authors":"Z. Guo, Wentao Fan","doi":"10.1145/3430199.3430207","DOIUrl":"https://doi.org/10.1145/3430199.3430207","url":null,"abstract":"In this paper, we propose a novel image segmentation approach based on finite inverted Beta-Liouville (IBL) mixture model with a Dirichlet Compound Multinomial prior. The merits of this work can be summarized as follows: 1) Our image segmentation approach is based on a finite mixture model in which each mixture component can be responsible for interpreting a particular segment within a given image; 2) We adopt IBL distribution as the basic distribution in our mixture model, which has demonstrated better modeling capabilities than Gaussian distribution for non-Gaussian data in recent research works; 3) The contextual mixing proportions (i.e., the probabilities of class labels) of our model are assumed to have a Dirichlet Compound Multinomial prior, which makes our model more robust against noise; 4) We develop a variational Bayes (VB) method that can effectively learn model parameters in closed form. The performance of the proposed image segmentation approach is compared with other related segmentation approaches to demonstrate its advantages.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127846337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}