2022 International Conference on Machine Vision and Image Processing (MVIP): Latest Publications

Hardware Implementation of Moving Object Detection using Adaptive Coefficient in Performing Background Subtraction Algorithm
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738764
Ali Rahiminezhad, Mohammad Reza Tavakoli, Sayed Masoud Sayedi
Abstract: Moving object detection is an essential process in many surveillance systems, autonomous navigation systems, and computer vision applications. A hardware architecture for motion detection based on background subtraction, with an adaptive background update coefficient, is proposed. The architecture is implemented on a Kintex-7 FPGA device. It operates at 250 MHz for a 360×640 video frame size, with an average processing time of 2.304 ms per frame, a 130 fps processing rate, and a power consumption of 140 mW. The architecture achieves high-speed performance with relatively low resource utilization.
Citations: 1
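The core update rule behind background subtraction with an adaptive coefficient can be sketched as follows. The paper's exact adaptation rule is not given in the abstract; here `THRESHOLD`, `ALPHA_BG`, and `ALPHA_FG` are illustrative values, and the coefficient simply shrinks for pixels currently classified as foreground so that moving objects are absorbed into the background model more slowly:

```python
# Sketch of background subtraction with an adaptive update coefficient.
# The adaptation scheme below (smaller alpha for foreground pixels) is an
# assumption for illustration, not the paper's specific rule.

THRESHOLD = 25        # intensity difference that marks a pixel as foreground
ALPHA_BG = 0.05       # fast update for background pixels
ALPHA_FG = 0.005      # slow update for foreground pixels (assumed scheme)

def process_frame(frame, background):
    """Classify pixels and update the background model in place.

    frame, background: flat lists of grayscale intensities (0-255).
    Returns a foreground mask as a list of 0/1 values.
    """
    mask = []
    for i, pixel in enumerate(frame):
        is_fg = abs(pixel - background[i]) > THRESHOLD
        mask.append(1 if is_fg else 0)
        alpha = ALPHA_FG if is_fg else ALPHA_BG
        # Running-average background update: B <- (1 - a) * B + a * I
        background[i] = (1 - alpha) * background[i] + alpha * pixel
    return mask

background = [100.0, 100.0, 100.0, 100.0]
mask = process_frame([102, 98, 200, 100], background)
print(mask)   # -> [0, 0, 1, 0]: only the third pixel differs by more than 25
```

In a hardware realization this per-pixel loop maps naturally onto a streaming pipeline, which is what makes the approach attractive for an FPGA.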
Optimized Quantum Circuits in Quantum Image Processing Using Qiskit
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738550
Zahra Boreiri, Alireza Norouzi Azad, N. Majd
Abstract: Quantum image representation is an essential component of quantum image processing and plays a critical role in quantum information processing. The Flexible Representation of Quantum Images (FRQI) encodes pixel colors and their associated locations as a quantum state to represent images on quantum computers. A fundamental part of a quantum image processing system is quantum image compression (QIC), which is used to store and retrieve binary images; this compression minimizes the number of controlled rotation gates in the quantum circuits. In this paper, optimized quantum circuits based on minimum Boolean expressions were designed and simulated with Qiskit on a real quantum computer to retrieve 8×4 binary single-digit images. To demonstrate the feasibility and efficacy of quantum image representation, quantum circuits for the images were developed using FRQI, and experiments were run on IBM Quantum Experience (IBMQ). The prepared image information was visualized by performing quantum measurements. Without this method, the number of controlled rotation gates equals the number of pixels in the image; we showed that the QIC algorithm decreases the number of gates significantly. On these images, the maximum and minimum compression ratios of QIC are 90.63% and 68.75%, respectively.
Citations: 1
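The gate-count argument can be made concrete. In FRQI one controlled rotation is needed per pixel, so after Boolean minimisation the compression ratio reduces to the fraction of rotations removed. The minimised term counts of 3 and 10 below are inferred from the reported ratios for 32-pixel (8×4) images; they are not stated in the abstract:

```python
# For an FRQI circuit over a binary image, one controlled rotation per pixel
# is needed before compression. QIC merges same-colour pixels via Boolean
# minimisation, leaving one gate per minimised product term.

def qic_compression_ratio(num_pixels, num_minimized_terms):
    """Percentage of controlled rotations removed by QIC."""
    return (1 - num_minimized_terms / num_pixels) * 100

# 8x4 single-digit binary images: 32 pixels each.
print(qic_compression_ratio(32, 3))    # 90.625, reported as 90.63 %
print(qic_compression_ratio(32, 10))   # 68.75, the reported minimum
```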
Employing a Hybrid Technique to Detect Tumor in Medical Images
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738739
Leyla Aqhaei
Abstract: In this article, a hybrid approach using watershed, genetic, and support vector machine algorithms is presented to detect brain tumors in medical images. With this method, the images are segmented properly and the brain tumor is detected with high accuracy. First, grayscale conversion and median filters are used to pre-process the images for noise removal. Then, the watershed algorithm is applied to segment the image, and features are extracted using a genetic algorithm. Finally, the SVM algorithm learns the extracted features and diagnoses brain tumors with high accuracy. Considering accuracy, precision, and recall, the evaluation results indicate that the proposed method segments and classifies the images well and outperforms conventional algorithms, with an accuracy of 95% and a precision of 97%.
Citations: 0
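The pre-processing stage (grayscale conversion followed by median filtering) can be sketched in pure Python. The 3×3 kernel size and the BT.601 luma weights are standard choices assumed here; the abstract does not specify them:

```python
# Minimal sketch of the pre-processing stage: grayscale conversion followed
# by a 3x3 median filter for impulse-noise removal.

import statistics

def to_grayscale(rgb_image):
    # rgb_image: 2-D list of (r, g, b) tuples; ITU-R BT.601 luma weights.
    return [[0.299 * r + 0.587 * g + 0.114 * b for r, g, b in row]
            for row in rgb_image]

def median_filter(image):
    # image: 2-D list of intensities; border pixels are left unchanged.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],      # impulse noise in the centre
         [10, 10, 10]]
print(median_filter(noisy)[1][1])   # -> 10: the outlier is removed
```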
A Learning Based Contrast Specific no Reference Image Quality Assessment Algorithm
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738784
Moliamadali Mahmoodpour, Abdolah Amirany, M. H. Moaiyeri, Kian Jafari
Abstract: Contrast is one of the most important visual characteristics of an image and has a significant effect on image understanding; however, due to varying imaging conditions and poor devices, image quality degrades in terms of contrast. Only a limited number of methods have addressed quality assessment of contrast-distorted images. Proper contrast enhancement can increase the perceptual quality of most contrast-distorted images. In this paper, assuming that the output of a contrast enhancement algorithm has a quality comparable to a reference image, a learning-based, contrast-specific, no-reference image quality assessment method is proposed. The image closest in quality to the reference is selected using a pre-trained classification network, and quality assessment is then performed by comparing the enhanced image with the distorted image using the structural similarity (SSIM) index. The functionality of the proposed method has been validated on three well-known contrast-distorted image datasets (CSIQ, CCID2014, and TID2013).
Citations: 1
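The final comparison step can be reproduced with the single-window (global) SSIM formula with the usual constants. SSIM in practice is usually computed over sliding windows and averaged, so this is a simplification:

```python
# Global SSIM between two grayscale signals, used here as the quality score
# between the enhanced (pseudo-reference) image and the distorted input.

def ssim(x, y, data_range=255):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    # Standard SSIM: luminance and contrast/structure terms combined.
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [52, 55, 61, 59, 79, 61, 76, 97]
print(ssim(img, img))                        # identical signals score 1.0
print(ssim(img, [v + 20 for v in img]))      # brightness shift lowers SSIM
```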
A Hybrid of Inference and Stacked Classifiers to Indoor Scenes Classification of RGB-D Images
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738755
Shokouh S. Ahmadi, Hassan Khotanlou
Abstract: Scene classification facilitates semantic scene understanding and aids further processing and inference by assigning pre-defined classes. With this motivation, we propose an approach to classifying indoor scene objects. The proposed method utilizes a stacked classifier model and refines the classification results by enforcing segment consistency. Furthermore, the challenging, cluttered indoor scenes encountered in daily life are addressed. This approach obtains desirable classification results simply and affordably.
Citations: 1
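One plausible reading of the segment-consistency refinement is a per-segment majority vote over pixel labels; this interpretation, and the helper below, are assumptions for illustration rather than the paper's stated procedure:

```python
# Assumed sketch of segment-consistency refinement: after per-pixel
# classification, each segment is forced to a single label by majority vote
# over the pixels it contains.

from collections import Counter

def refine_by_segment(labels, segments):
    """labels[i]: predicted class of pixel i; segments[i]: its segment id."""
    votes = {}
    for lab, seg in zip(labels, segments):
        votes.setdefault(seg, Counter())[lab] += 1
    majority = {seg: c.most_common(1)[0][0] for seg, c in votes.items()}
    return [majority[seg] for seg in segments]

labels   = ["chair", "chair", "table", "chair", "table", "table"]
segments = [1, 1, 1, 2, 2, 2]
print(refine_by_segment(labels, segments))
# -> ['chair', 'chair', 'chair', 'table', 'table', 'table']
# segment 1 votes 2-1 for "chair", segment 2 votes 2-1 for "table"
```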
The Detection of Blastocyst Embryo In Vitro Fertilization (IVF)
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738768
Kimiya Samie Dehkordi, M. Ebrahimi Moghaddam
Abstract: One of the most important stages in the fate of the embryo in in vitro fertilization (IVF) is the blastocyst stage, and there is currently no automated way to detect it. In this study, embryos in the blastocyst state were detected using ResNet and U-Net networks. The proposed method is trained on a dataset of 40392 samples, with 24365 used for training and 5814 for validation, and is tested on 10263 samples obtained from various sources. The results show an accuracy of 92.9%, a precision of 93.7%, and a recall of 92.1%, confirming that the proposed method is well able to detect when the embryo is in the blastocyst state.
Citations: 0
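The reported metrics all follow from a binary confusion matrix. The counts below are illustrative only, since the paper's confusion matrix is not given in the abstract:

```python
# Accuracy, precision, and recall from a binary confusion matrix
# (tp/fp/fn/tn counts are made up for illustration).

def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=920, fp=62, fn=79, tn=939)
print(f"accuracy={acc:.4f} precision={prec:.4f} recall={rec:.4f}")
```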
FFDR: Design and implementation framework for face detection based on raspberry pi
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738788
Dhafer Alhajim, G. Akbarizadeh, K. Ansari-Asl
Abstract: In today's world we are surrounded by data of many types, and the abundance of available image and video data provides the datasets that face recognition technology needs to function. Face recognition is a critical component of security and surveillance systems that analyze visual data and millions of pictures. In this article, we investigate combining standard face detection and identification techniques, such as machine learning and deep learning, with face detection on the Raspberry Pi, since the Raspberry Pi makes the system cost-effective and easy to use while improving performance. Furthermore, images of a selected individual were captured with a camera and a Python program in order to perform face recognition. This paper proposes a facial recognition system, called FFDR, that can detect faces in both direct and indirect images; it achieves high speed and accuracy because it uses the Raspberry Pi 4 together with up-to-date libraries and advanced environments in the Python language.
Citations: 0
An Augmented Reality Framework for Eye Muscle Education
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738780
Asiyeh Bahaloo, Arman Ali Mohammadi, Mohammad Reza Mohammadi, M. Soryani
Abstract: Due to the COVID-19 pandemic, the need for remote education is felt more than ever. New technologies such as Augmented Reality (AR) can improve students' training experiences and directly affect the learning process, especially in remote education. By using AR in medical education, we no longer need to worry about patient safety during training, because AR lets students see inside the human body without needing to cut human flesh in the real world. In this paper, we present an augmented reality framework that can add a virtual eye muscle to a person's face in a single photo or a video. We go one step further: rather than just showing the eye muscle, we customize it for each person by modeling the person's face with a 3D morphable model (3DMM).
Citations: 0
Automated Cell Tracking Using Adaptive Multi-stage Kalman Filter In Time-laps Images
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738793
Hane Naghshbandi, Yaser Baleghi Damavandi
Abstract: Segmenting living cells and tracking their movement in microscopy images is significant in biological studies and plays a crucial role in disease diagnosis, targeted therapy, drug delivery, and many other medical applications. Given the large amount of time-lapse image data, automated image analysis is a proper alternative to manual analysis, which is unreasonably time-consuming. However, low-resolution microscopic images, unpredictable cell behavior, and multiple cell divisions make automated cell tracking challenging. In this paper, we propose a novel multi-object tracking approach guided by a two-stage adaptive Kalman forecast. Cell segmentation is performed using an edge detector combined with various morphological operations. The tracking section comprises two stages. First, a Kalman filter with constant speed estimates the position of each cell in consecutive frames. This primary Kalman filter detects a significant percentage of cells, but the high rate of cell division and the migration of cells into or out of the field of view cause errors in the final result. In the second stage, a secondary Kalman filter, with parameters modified using the results of the initial tracking, estimates the position of cells in each frame, decreasing errors and improving the tracking results. Experimental results indicate that our method is 94.37% accurate in segmenting cells. The validity of the whole method has been assessed by comparing its results with manual tracking results, which demonstrates its efficiency.
Citations: 1
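The first tracking stage, a constant-velocity Kalman filter, can be sketched for one image axis as follows. The noise parameters `q` and `r` are illustrative, and the paper's second-stage re-tuning rule, which adjusts such parameters from the initial tracking results, is not specified in the abstract:

```python
# One axis of a constant-velocity Kalman tracker: state is [position,
# velocity], the measurement is the detected cell position in each frame.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

class KalmanCV:
    def __init__(self, pos, dt=1.0, q=0.01, r=1.0):
        self.x = [pos, 0.0]                    # position, velocity
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.F = [[1.0, dt], [0.0, 1.0]]       # constant-velocity transition
        self.q, self.r = q, r                  # process / measurement noise

    def predict(self):
        p, v = self.x
        self.x = [p + self.F[0][1] * v, v]     # p' = p + v*dt, v' = v
        Ft = [[self.F[0][0], self.F[1][0]], [self.F[0][1], self.F[1][1]]]
        self.P = matmul(matmul(self.F, self.P), Ft)
        self.P[0][0] += self.q                 # simplified Q = q * I
        self.P[1][1] += self.q

    def update(self, z):
        y = z - self.x[0]                      # innovation
        s = self.P[0][0] + self.r              # innovation covariance
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]

kf = KalmanCV(pos=0.0)
for z in [1.0, 2.1, 2.9, 4.0]:                 # cell moving ~1 px per frame
    kf.predict()
    kf.update(z)
print(round(kf.x[1], 2))                       # estimated velocity, near 1
```

The second stage would run the same filter again with `q` and `r` adapted from the residuals of this first pass.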
Computer-aided Brain Age Estimation via Ensemble Learning of 3D Convolutional Neural Networks
Pub Date: 2022-02-23, DOI: 10.1109/MVIP53647.2022.9738758
Ali Bahari Malayeri, Mohammad Mahdi Moradi, Kian Jafari Dinani
Abstract: Predicting brain age using magnetic resonance imaging (MRI), and its difference from chronological age, is useful for detecting Alzheimer's disease in its early stages. Deep learning can play an active role in accurate brain age prediction from MRI, but its performance depends heavily on the amount of data and the compute and memory available. In this paper, a deep 3D convolutional neural network model is proposed to approximate brain age as accurately as possible from T1-weighted structural MRI. Furthermore, techniques such as data normalization and ensemble learning are applied to the model to obtain more accurate results. The system is trained and tested on the IXI database, normalized with SPM12. Finally, the model is assessed with the mean absolute error (MAE) metric, and the results demonstrate that it estimates subject age with an MAE of 5.07 years.
Citations: 0
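The ensemble step and the MAE metric can be sketched as follows; the predictions and ages below are made-up illustrative values, not the paper's data:

```python
# Sketch of the ensemble step: predictions of several independently trained
# 3D CNNs are averaged per subject, then scored with mean absolute error.

def ensemble_mae(model_predictions, true_ages):
    n_models = len(model_predictions)
    averaged = [sum(p[i] for p in model_predictions) / n_models
                for i in range(len(true_ages))]
    mae = sum(abs(a - t) for a, t in zip(averaged, true_ages)) / len(true_ages)
    return averaged, mae

preds = [[34.0, 61.0, 47.0],     # model 1
         [30.0, 67.0, 45.0],     # model 2
         [32.0, 63.0, 49.0]]     # model 3
averaged, mae = ensemble_mae(preds, true_ages=[31.0, 65.0, 50.0])
print(averaged, mae)
```

Averaging tends to cancel the uncorrelated errors of the individual networks, which is why the ensemble usually scores a lower MAE than any single model.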