2022 International Conference on Machine Vision and Image Processing (MVIP): Latest Publications

Optimized SVM using AdaBoost and PSO to Classify Brain Images of MR
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738549
Authors: Farzaneh Elahifasaee
Abstract: This paper proposes a technique that improves an AdaBoost-based feature detection model with a weighted support vector machine (WSVM), using particle swarm optimization (PSO) for feature selection in Alzheimer's disease (AD) classification. The main contribution is that this method is applied for the first time to the classification of brain magnetic resonance (MR) images, with very good classification accuracy. The proposed scheme is also well suited to high-dimensional (sparse) data when classifying brain images. The dataset consists of 198 Alzheimer's disease (AD) and 229 normal control (NC) subjects, used for training and testing. The results show that the proposed method reaches a classification accuracy of 93%, which is a promising performance.
Citations: 2
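The abstract does not spell out the WSVM, AdaBoost, or PSO settings, so the following Python sketch only illustrates the general idea: a binary PSO searches for a feature subset whose fitness is the cross-validated accuracy of an AdaBoost-boosted SVM (a plain SVC standing in for the authors' weighted SVM). All parameter values and the data arrays `X` and `y` are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def subset_fitness(mask, X, y):
    """Cross-validated accuracy of a boosted SVM on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = AdaBoostClassifier(                      # 'estimator=' needs scikit-learn >= 1.2
        estimator=SVC(kernel="linear", probability=True), n_estimators=10)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso_select(X, y, n_particles=10, n_iters=20, seed=0):
    """Binary PSO over feature masks; returns the best boolean mask found."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) < 0.5).astype(float)   # binary positions as 0/1 floats
    vel = rng.normal(0.0, 1.0, (n_particles, d))
    pbest = pos.copy()
    pbest_fit = np.array([subset_fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, d))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random((n_particles, d)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fit = np.array([subset_fitness(p, X, y) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)

# X: (n_subjects, n_features) MR-derived feature matrix, y: AD/NC labels (assumed to exist).
# selected = binary_pso_select(X, y)
# final_clf = AdaBoostClassifier(estimator=SVC(kernel="linear", probability=True),
#                                n_estimators=10).fit(X[:, selected], y)
```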
Underwater Image Enhancement using a Light Convolutional Neural Network and 2D Histogram Equalization
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738773
Authors: Ali Khandouzi, M. Ezoji
Abstract: Underwater images usually have low contrast, blurring, and extreme color distortion because light is refracted, scattered, and absorbed as it passes through water. These characteristics make image-based processing and analysis challenging. In this paper, a two-step method based on a deep convolutional network is proposed to address these problems and enhance underwater images. First, a light global-local network performs the initial enhancement and partially corrects the color distortion and degradation. Then, two-dimensional histogram equalization is applied as a complement to the network; it produces clear results and prevents excessive contrast. The results show that the proposed method outperforms other methods in this field in terms of both qualitative and quantitative criteria.
Citations: 0
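Neither the paper's CNN stage nor its specific 2D histogram equalization is reproduced here; as a rough, hedged illustration of the contrast post-processing idea, the sketch below applies CLAHE to the luminance channel with OpenCV, which is only a stand-in for the paper's 2D histogram equalization step. The input path is hypothetical.

```python
import cv2

def enhance_underwater(bgr):
    """Contrast post-processing sketch: CLAHE on the L channel in LAB space.

    Stand-in for the paper's 2D histogram equalization stage; the CNN
    enhancement stage that precedes it in the paper is not reproduced here.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge([l_eq, a, b]), cv2.COLOR_LAB2BGR)

img = cv2.imread("underwater.jpg")            # hypothetical input path
out = enhance_underwater(img)
cv2.imwrite("underwater_enhanced.jpg", out)
```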
ARGAN: Fast Converging GAN for Animation Style Transfer
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738752
Authors: Amirhossein Douzandeh Zenoozi, K. Navi, Babak Majidi
Abstract: Transforming real images into animated images is one of the most challenging tasks in artistic style transfer. In this paper, a novel architecture for Generative Adversarial Networks (GANs) yields faster and more accurate style transfer. There are three common problems in animation style transfer. First, the original content of an image is lost while the network generates new images. Second, the generated image does not have an apparent animated style. Finally, the networks are not fast enough and require a large amount of memory to process the images. This paper proposes ARGAN, a lightweight and fast GAN for animation style transfer. To improve the quality of the output images, three loss functions are proposed, covering grayscale style, content, and reconstruction of the color spectrum of each image. The training phase also does not require paired data. The proposed method transforms real-world images into animated-style images significantly faster than similar methods.
Citations: 0
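ARGAN's exact loss formulations are not given in this abstract; the PyTorch sketch below shows one plausible shape for the three terms (content, grayscale style via Gram matrices, and a crude color term), loosely following common animation-GAN practice rather than the paper's definitions. The VGG19 feature extractor and all weighting choices are assumptions (torchvision >= 0.13 is assumed for the `weights=` argument).

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19
from torchvision.transforms.functional import rgb_to_grayscale

# Frozen VGG19 slice used as a fixed feature extractor for content/style terms.
vgg = vgg19(weights="DEFAULT").features[:21].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def content_loss(real, fake):
    # Keep the content of the input photo in the generated image.
    return F.l1_loss(vgg(fake), vgg(real))

def grayscale_style_loss(anime, fake):
    # Style compared on 3-channel grayscale copies so color does not leak into style.
    g_anime = rgb_to_grayscale(anime, num_output_channels=3)
    g_fake = rgb_to_grayscale(fake, num_output_channels=3)
    return F.l1_loss(gram(vgg(g_fake)), gram(vgg(g_anime)))

def color_loss(real, fake):
    # Crude color-reconstruction term: match per-channel means (placeholder only).
    return F.l1_loss(fake.mean(dim=(2, 3)), real.mean(dim=(2, 3)))
```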
Building Detection Using Very High Resolution SAR Images With Multi-Direction Based on Weighted-Morphological Indexes
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738776
Authors: Fateme Amjadipour, H. Ghassemian, M. Imani
Abstract: Technological advances have made radar images with high spatial resolution widely available, and the interpretation and processing of such images have grown significantly. Building extraction in urban areas is one of the most challenging applications of VHR SAR imagery and is used to estimate population and urban development. Detecting individual buildings in an urban context receives considerable attention from researchers because of the complexity of interpreting radar images of these scenes. A main source of this complexity in the scattering received from buildings is the change in building orientation relative to the horizon, which is correlated with the look angle. Other influential factors are geometric distortions, including layover and shadow effects; in some cases, shadow acts as an auxiliary cue that increases detection accuracy. In this paper, buildings are extracted from high-spatial-resolution SAR images using a fuzzy fusion of two morphological indexes, SI and DI, which represent the shadow and bright areas, respectively. Because SAR imaging geometry affects ground targets, structuring elements of different sizes and directions are applied to the image, and weighting the indexes by size is proposed. On a TerraSAR-X image, the method achieves a detection ratio of 95.3%.
Citations: 2
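As a hedged illustration of multi-size, multi-direction morphological indexing (not the paper's SI/DI definitions or its fuzzy fusion), the sketch below computes a bright-area response by averaging white top-hat transforms over linear structuring elements of several lengths and orientations; the element sizes, angles, and input file name are all assumptions.

```python
import numpy as np
import cv2

def directional_tophat_index(img, lengths=(9, 15, 21), angles=(0, 45, 90, 135)):
    """Bright-area index sketch: white top-hat with linear structuring elements
    of several sizes and directions, averaged with equal weights (the paper
    uses weighted fusion and a separate shadow index as well)."""
    responses = []
    for length in lengths:
        for angle in angles:
            # Build a rotated line-shaped structuring element.
            se = np.zeros((length, length), np.uint8)
            se[length // 2, :] = 1
            rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle, 1.0)
            se = cv2.warpAffine(se, rot, (length, length), flags=cv2.INTER_NEAREST)
            tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
            responses.append(tophat.astype(np.float32))
    return np.mean(responses, axis=0)

sar = cv2.imread("sar_tile.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
bright_index = directional_tophat_index(sar)
```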
Facial Expression Recognition: a Comparison with Different Classical and Deep Learning Methods
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738553
Authors: Amir Mohammad Hemmatiyan-Larki, Fatemeh Rafiee-Karkevandi, M. Yazdian-Dehkordi
Abstract: Facial Expression Recognition (FER), also known as facial emotion recognition, is an active topic in computer vision and machine learning. This paper analyzes different feature extraction and classification methods to propose an efficient facial expression recognition system. Several feature extraction methods are studied, including Histogram of Oriented Gradients (HOG), face encoding, and features extracted by a VGG16 network. For classification, classical classifiers, including Support Vector Machines (SVM), Adaptive Boosting (AdaBoost), and logistic regression, are evaluated with these features. In addition, a ResNet50 model is trained from scratch, and a ResNet50 pre-trained on the VGGFace2 dataset is fine-tuned. Finally, a part-based ensemble classifier focusing on different parts of face images is also proposed. Experimental results on the FER-2013 dataset show that the fine-tuned ResNet50 with the complete face image achieves higher performance than the other methods.
Citations: 0
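Among the classical baselines compared in the paper, HOG features with an SVM are straightforward to outline; the sketch below shows one such pipeline with scikit-image and scikit-learn. The HOG parameters are illustrative, and `X_train`/`y_train` are hypothetical arrays standing in for FER-2013 images and labels.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def hog_features(images_48x48):
    # FER-2013 images are 48x48 grayscale; these HOG parameters are illustrative.
    return np.array([hog(img, orientations=8, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images_48x48])

# X_train / y_train are assumed to hold FER-2013 images and the emotion labels.
# clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
# clf.fit(hog_features(X_train), y_train)
```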
An adaptive background subtraction approach based on frame differences in video surveillance
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738762
Authors: Panteha Alipour, A. Shahbahrami
Abstract: In the past decades, the deployment of cameras for surveillance has increased, producing a huge amount of data that cannot all be categorized and stored. Algorithms are therefore needed that automatically process big data and track objects of interest. Many methods rely on the observation that moving objects cause differences between frames of a video while the background remains motionless. Continuous dynamic behavior in the background degrades object detection performance; conversely, a good background extraction model helps produce good foreground detection results. The goal of this paper is an algorithm that recovers a clean background from video sequences. The proposed approach extracts the background of complex and crowded scenes using the pixel differences between two consecutive frames. Experimental results show that the proposed approach performs significantly better than several previous techniques.
Citations: 3
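The paper's exact update rule is not given in the abstract; the sketch below shows a common form of the frame-difference idea, updating a running background estimate only at pixels where consecutive frames barely change. The threshold and learning-rate values are illustrative assumptions.

```python
import cv2
import numpy as np

def estimate_background(video_path, alpha=0.05, motion_thresh=15):
    """Adaptive background sketch: update a running background only at pixels
    where two consecutive frames barely differ (assumed static)."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read video")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY).astype(np.float32)
    background = prev.copy()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        static = np.abs(gray - prev) < motion_thresh      # pixels without motion
        background[static] = (1 - alpha) * background[static] + alpha * gray[static]
        prev = gray
    cap.release()
    return background.astype(np.uint8)
```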
Diagnosis of COVID-19 Cases from Chest X-ray Images Using Deep Neural Network and LightGBM
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738760
Authors: Mobina Ezzoddin, H. Nasiri, M. Dorrigiv
Abstract: The coronavirus was detected in Wuhan, China in late 2019 and then led to a pandemic with a rapid worldwide outbreak, and the number of infected people has been increasing swiftly since then. This study therefore proposes a new and efficient method for automatic diagnosis of the disease from X-ray images using Deep Neural Networks (DNNs). In the proposed method, DenseNet169 is used to extract features from patients' Chest X-Ray (CXR) images. The extracted features are passed to a feature selection algorithm (ANOVA) to select a subset of them, and the selected features are classified with the LightGBM algorithm. The approach was evaluated on the ChestX-ray8 dataset and reached accuracies of 99.20% and 94.22% in the two-class (COVID-19 and no-findings) and multi-class (COVID-19, pneumonia, and no-findings) classification problems, respectively.
Citations: 14
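A minimal sketch of the described pipeline (DenseNet169 features, ANOVA feature selection, LightGBM classification) is shown below using Keras, scikit-learn, and LightGBM. Image preprocessing details, the number of selected features, and the LightGBM settings are assumptions, and the training arrays are hypothetical placeholders.

```python
import lightgbm as lgb
from tensorflow.keras.applications import DenseNet169
from tensorflow.keras.applications.densenet import preprocess_input
from sklearn.feature_selection import SelectKBest, f_classif

# Pretrained DenseNet169 as a frozen feature extractor (global average pooled).
backbone = DenseNet169(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(224, 224, 3))

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3); grayscale CXR images are
    # assumed to have been resized and stacked to 3 channels beforehand.
    return backbone.predict(preprocess_input(images), verbose=0)

# Hypothetical arrays standing in for a ChestX-ray8-based training split.
# X_feat = extract_features(train_images)
# selector = SelectKBest(f_classif, k=100).fit(X_feat, train_labels)   # ANOVA selection
# clf = lgb.LGBMClassifier(n_estimators=200).fit(selector.transform(X_feat), train_labels)
```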
Face Detection from Blurred Images based on Convolutional Neural Networks
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738783
Authors: Katayoon Mohseni Roozbahani, H. S. Zadeh
Abstract: This paper proposes robust and efficient face detection from blurry and noisy images using convolutional neural networks. It also demonstrates that detecting faces in blurred images with convolutional neural networks is superior to the other methods considered in terms of precision-recall and discontinuity and continuity scores. Face detection is the foundation of face recognition and appears in many applications, including traffic surveillance, stereo videos, finding criminals in large crowds after terrorist incidents, calibrated stereo images, face alignment across sensors with heterogeneous wavelengths, driving license photos, and animation. Difficulties of this task include the lack of joint datasets, movements (displacements), changing expression, intense illumination, the likelihood of overfitting when using high-dimensional data, and the presence of heavy blur of various kinds. Several methods exist for face detection in blurry images, such as blur kernel estimation and complex Fourier coefficients of a trained neural network using metrics that identify difficult patches. This paper performs face detection in noisy and blurry images with convolutional neural networks, making face detection more practical by combining two CNN-based techniques for blur removal and face detection.
Citations: 1
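The paper's two CNN stages are not specified in the abstract, so the sketch below is only a classical stand-in for the same deblur-then-detect idea: unsharp masking followed by OpenCV's Haar-cascade face detector. It is not the authors' network, and the input path is hypothetical.

```python
import cv2

def unsharp(img, sigma=2.0, amount=1.5):
    # Crude deblurring stand-in (unsharp masking); the paper instead uses a
    # learned CNN stage for blur removal.
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("blurry_photo.jpg")                       # hypothetical input
gray = cv2.cvtColor(unsharp(img), cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```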
The Effect of Perceptual Loss for Video Super-Resolution
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/MVIP53647.2022.9738742
Authors: Marzieh Hosseinkhani, Azadeh Mansouri
Abstract: Intensity-based loss, which measures pixel-wise differences, is commonly used in most learning-based super-resolution approaches. Since errors in different components have different impacts on the human visual system, a structural loss that measures the error of the perceptually influential components is proposed as the loss function for video super-resolution. The proposed loss is based on the JPEG compression algorithm and the effect of the quantization matrix on the resulting output, and it can be used in place of the traditional MSE loss. This paper explores the effect of this perceptual loss on the VESPCN method. Experimental results show better outputs in terms of average PSNR, average SSIM, and VQM.
Citations: 0
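A rough PyTorch sketch of a JPEG-inspired perceptual loss is shown below: an 8x8 block DCT of the luminance channel, with coefficient errors weighted by the inverse of the standard JPEG luminance quantization table. This is an assumed formulation for illustration, not the paper's exact loss.

```python
import math
import torch
import torch.nn.functional as F

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix.
    d = torch.zeros(n, n)
    for k in range(n):
        for i in range(n):
            a = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            d[k, i] = a * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
    return d

# Standard JPEG luminance quantization table; smaller entries mark frequencies
# the eye is most sensitive to, so they receive larger loss weights here.
JPEG_Q = torch.tensor([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]], dtype=torch.float32)

def perceptual_dct_loss(sr, hr):
    """Quantization-weighted 8x8 block-DCT L1 loss on luminance tensors of
    shape (B, 1, H, W) with H and W divisible by 8. A rough sketch only."""
    d = dct_matrix(8).to(sr.device)
    w = (1.0 / JPEG_Q).to(sr.device)                      # perceptual weights

    def block_dct(x):
        blocks = F.unfold(x, kernel_size=8, stride=8)     # (B, 64, L)
        blocks = blocks.transpose(1, 2).reshape(-1, 8, 8)  # one 8x8 block per row
        return d @ blocks @ d.t()                          # per-block 2D DCT

    return (w * (block_dct(sr) - block_dct(hr)).abs()).mean()
```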
MVIP 2022 Cover Page
2022 International Conference on Machine Vision and Image Processing (MVIP) | Pub Date: 2022-02-23 | DOI: 10.1109/mvip53647.2022.9738761
Citations: 0