International Conference on Image and Graphics Processing: Latest Publications

Densely connected AutoEncoders for image compression
Zebang Song, S. Kamata. International Conference on Image and Graphics Processing. DOI: 10.1145/3313950.3313965
Abstract: Image compression, a form of data compression applied to digital images, has been a fundamental research topic for decades. Modern imaging techniques produce very large amounts of data, which can make storing and transmitting images prohibitively costly without compression. However, traditional compression methods such as JPEG may introduce compression artifacts. Recently, deep learning has achieved great success in many computer vision tasks and is gradually being applied to image compression. To address the artifact problem, this paper presents a lossy image compression architecture that exploits the advantages of existing deep learning methods to achieve high coding efficiency. We design a densely connected autoencoder structure for lossy image compression. First, the densely connected autoencoder extracts richer feature information from the image, which helps compression. Second, a U-Net-like network reduces the distortion caused by compression. Finally, an improved binarizer quantizes the output of the encoder. In low-bit-rate image compression, experiments show that our method significantly outperforms JPEG and JPEG2000 and produces better visual results with sharp edges, rich textures, and fewer artifacts.
Citations: 8
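The dense-connectivity pattern described in the abstract (each layer consuming the concatenation of all earlier feature maps, with a binarizer quantizing the encoder output) can be sketched as follows. This is an illustrative NumPy mock-up with random placeholder weights, not the authors' trained network; the layer sizes and the hard-sign binarizer are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, n_layers, growth):
    """Dense connectivity: each layer sees the concatenation of all
    previous outputs. Weights are random placeholders, not a trained model."""
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)
        w = rng.standard_normal((inp.shape[-1], growth)) * 0.1
        features.append(np.maximum(inp @ w, 0.0))  # linear layer + ReLU
    return np.concatenate(features, axis=-1)

def binarize(code):
    """Hard sign binarizer on the encoder output (forward pass only)."""
    return np.where(code >= 0.0, 1.0, -1.0)

x = rng.standard_normal((4, 8))                # 4 patches, 8 features each
code = dense_block(x, n_layers=3, growth=4)    # 8 + 3*4 = 20 channels out
bits = binarize(code)
print(code.shape)  # (4, 20)
```

Note how the channel count grows additively (8 + 3×4 = 20): every layer's output stays available to all later layers, which is the "richer feature information" the abstract refers to.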
Pedestrian Behavior Anomaly Detection Based on Dynamic Mode Decomposition and One-Class SVM
Weixi Zhang, Shuai Dong, Kun Zou, Wensheng Li. International Conference on Image and Graphics Processing. DOI: 10.1145/3447587.3447589
Abstract: Anomaly detection of human behavior, widely applied in video surveillance in recent years, aims to recognize human behaviors that fall outside the normal scope at a high semantic level. With the development of deep learning (DL), many DL-based methods for behavior recognition have been developed. However, the lack of negative samples hinders the application of DL-based methods. This work therefore proposes a new framework based on dynamic mode decomposition (DMD) that is free from the requirement for negative samples. First, key points of the human body in each frame of a video are extracted by deep neural networks (DNNs), a problem addressed in recent studies. Second, the key points are projected with DMD into a mode space of much lower dimension. Finally, detection results are obtained with a one-class SVM, a classification method that does not require negative samples. The data set used in the experiments is the CMU motion data set. Experiments show that DMD combined with support vector classification (SVC) achieves 85% accuracy while DMD combined with a one-class SVM achieves 81%, and the proposed framework can distinguish anomalous behavior with a small number of samples.
Citations: 0
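The DMD step of the pipeline above (snapshot matrix of per-frame key-point vectors reduced to a few mode dynamics) can be sketched with plain NumPy. The snapshot data, rank `r`, and the choice of eigenvalues as the low-dimensional descriptor are illustrative assumptions; the resulting features would then be fed to a one-class classifier such as scikit-learn's `OneClassSVM`.

```python
import numpy as np

def dmd_eigs(X, r):
    """Exact DMD: fit the r-rank linear operator advancing one snapshot
    to the next, and return its eigenvalues (the mode dynamics).
    X: (d, T) matrix whose columns are per-frame key-point vectors."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.T @ X2 @ Vh.T @ np.diag(1.0 / s)   # r x r reduced operator
    eigvals = np.linalg.eigvals(A_tilde)
    return eigvals

# toy "skeleton": 6 key-point coordinates oscillating over 50 frames
t = np.linspace(0, 2 * np.pi, 50)
X = np.vstack([np.sin(k * t) for k in range(1, 7)])
lam = dmd_eigs(X, r=4)
print(lam.shape)  # (4,)
```

A descriptor such as `np.abs(lam)` summarizes each clip in only `r` numbers, which is what makes a sample-efficient one-class decision boundary feasible.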
MGF: An Algorithm for Compressed Sensing MRI with Gradient Domain Guided Image Filtering
Peixian Zhuang. International Conference on Image and Graphics Processing. DOI: 10.1145/3447587.3447619
Abstract: In this paper, we develop an algorithm for magnetic resonance imaging (MRI) reconstruction with gradient domain guided image filtering, called MGF. We first recast the MRI reconstruction problem as two-phase objective functions: in the first phase, a latent image is generated to serve as the guidance image for gradient domain guided image filtering (GGF); in the second phase, GGF integrates fine structures of the latent image into the ideal solution, while a norm prior is simply yet effectively imposed on the GGF constraint for the error between the latent and ideal images in the image gradient domain. An efficient optimization scheme is then derived to solve the proposed model by iteratively alternating latent-image reconstruction, GGF and norm approximations, and ideal-image reconstruction. Experiments on real-valued and complex-valued MR images demonstrate the satisfactory performance of MGF in MRI reconstruction, and our method outperforms several well-known compressed sensing (CS) reconstruction approaches in both subjective results and objective assessments.
Citations: 0
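The filtering primitive underlying GGF is the classic guided filter, which this abstract builds on. A minimal sketch of the base guided filter follows (He et al.'s formulation with a constant regularizer `eps`; the gradient-domain variant replaces `eps` with an edge-aware weight, which is omitted here). The box-filter implementation and parameter values are assumptions for illustration.

```python
import numpy as np

def box(x, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via 2-D cumulative sums."""
    n = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    c = np.pad(xp.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / (n * n)

def guided_filter(I, p, r=2, eps=1e-2):
    """Base guided filter: locally fit p ~ a*I + b, then smooth a and b.
    Edges of the guide I are transferred into the filtered output."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

flat = np.full((8, 8), 3.0)
out = guided_filter(flat, flat)   # a flat guide/input passes through unchanged
print(out.shape)  # (8, 8)
```

In an MGF-style loop, the latent reconstruction would act as `I` and the current iterate as `p`, so fine structures of the guide are folded into each update.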
Accurate Thoracic Disease Classification via Ensemble Networks
Arren Matthew C. Antioquia. International Conference on Image and Graphics Processing. DOI: 10.1145/3512388.3512417
Abstract: Convolutional Neural Networks (CNNs) are being adapted to various computer-aided diagnosis applications, including recognizing thoracic diseases. To improve classification performance, recent solutions alter the structure of existing networks or require additional prior information for training. In this paper, we propose an ensemble network that accurately recognizes thoracic diseases without additional layers or extra input data for training. To this end, we perform an exhaustive set of experiments with various single CNN models for thoracic disease classification, then form an ensemble from the most accurate classifiers found in these experiments. Our approach achieves a state-of-the-art average AUROC of 79.32% on the ChestX-ray14 dataset, 2.08% higher than the previous best result. We also attain the highest AUROC on 12 of the 14 classes. Our code and trained models are publicly available at https://github.com/arvention/ChestXRay14-Classification-PyTorch.
Citations: 1
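Score-level ensembling of the kind this abstract describes can be sketched in a few lines: each model emits per-class probabilities for the 14 thoracic-disease labels, and the ensemble averages them. Mean averaging is an assumption here; the paper's exact combination rule may differ, and the model outputs below are random stand-ins.

```python
import numpy as np

def ensemble_scores(score_list):
    """Average per-class probabilities from independently trained models
    (simple mean ensembling over the model axis)."""
    return np.mean(np.stack(score_list), axis=0)

rng = np.random.default_rng(2)
# three hypothetical models scoring 5 chest X-rays over 14 disease classes
scores = [rng.random((5, 14)) for _ in range(3)]
avg = ensemble_scores(scores)
print(avg.shape)  # (5, 14)
```

Averaging calibrated probabilities tends to reduce variance without touching any single network's architecture, which matches the paper's "no additional layers, no extra input data" constraint.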
Aircraft Detection of High-Resolution Remote Sensing Image Based on Faster R-CNN Model and SSD Model
Qifang Xie, Guoqing Yao, Ping Liu. International Conference on Image and Graphics Processing. DOI: 10.1145/3191442.3191443
Abstract: With the continuous improvement of spatial resolution in remote sensing images, rapid and accurate detection in high-resolution remote sensing images has become a hotspot in the field of remote sensing applications. For nearly 10 years, deep learning has made outstanding achievements in feature extraction from raw images and has attracted the attention of many scholars. In particular, the convolutional neural network (CNN) has made breakthrough progress in image classification and detection, overcoming three shortcomings of earlier remote sensing detection methods: low detection efficiency, redundant human-resource input, and flawed feature selection. In this paper, a Faster R-CNN model and an SSD model are trained on high-resolution remote sensing images. The appropriate training time is determined from the detection results on the validation set and the loss function. The trained models are then used to detect the test-set images, and the precision and recall of the two models are calculated by visual interpretation. The experimental results show that both the Faster R-CNN model and the SSD model can be applied to aircraft detection in high-resolution remote sensing images. The SSD model detects single-scene aircraft quickly and accurately; the Faster R-CNN model has higher accuracy but cannot meet the requirement of real-time detection. Moreover, the precision and recall of the Faster R-CNN model are significantly higher than those of the SSD model in complex scenes, and the Faster R-CNN model has a clear advantage in detecting small aircraft.
Citations: 1
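The precision and recall figures this abstract reports are computed by matching detections to ground-truth boxes. A minimal sketch of that evaluation step follows, using a standard IoU threshold of 0.5 and greedy one-to-one matching; the boxes and threshold are illustrative assumptions, not the paper's data.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each detection to an unused ground-truth box
    at IoU >= thr; matched pairs are true positives."""
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= thr:
                matched.add(i)
                tp += 1
                break
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(gts) if gts else 0.0
    return prec, rec

gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]  # one hit, one false alarm
print(precision_recall(preds, gts))  # (0.5, 0.5)
```

Precision penalizes the SSD-style false alarms in complex scenes, while recall penalizes missed small aircraft, which is exactly the trade-off the abstract's comparison reflects.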