2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) — Latest Publications

Absolute and Relative Pose Estimation in Refractive Multi View
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00290
Xiao Hu, F. Lauze, K. S. Pedersen, J. Mélou
Abstract: This paper investigates absolute and relative pose estimation under refraction, essential problems for refractive structure from motion. We first present an absolute pose estimation algorithm that leverages an efficient iterative refinement. Then, we derive a novel refractive epipolar constraint for relative pose estimation. The epipolar constraint is established via a virtual camera transformation, yielding a succinct form that can be optimized efficiently. Evaluations of the proposed algorithms on synthetic data show accuracy and computational efficiency superior to state-of-the-art methods. For further validation, we demonstrate the performance on real data and show an application to 3D reconstruction of objects under refraction.
Cited: 5
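Refractive camera models in this line of work start from Snell's law at the interface. A minimal vector-form refraction of a ray at a planar interface can be sketched as follows; this is a pure-Python illustration of the underlying optics, not the authors' code, and `eta` denotes the assumed refractive-index ratio n1/n2:

```python
import math

def refract(d, n, eta):
    """Refract unit ray d at a surface with unit normal n (pointing toward
    the incident medium); eta = n1/n2. Returns the refracted unit direction,
    or None on total internal reflection. Vector form of Snell's law."""
    cos_i = -(d[0]*n[0] + d[1]*n[1] + d[2]*n[2])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    k = eta * cos_i - cos_t
    return tuple(eta * d[i] + k * n[i] for i in range(3))
```

A ray entering water head-on is undeviated, an oblique ray bends toward the normal, and a steep ray leaving a dense medium is totally internally reflected — all three cases fall out of the same formula.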
An Anomaly Detection System via Moving Surveillance Robots with Human Collaboration
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00293
M. Zaheer, Arif Mahmood, M. H. Khan, M. Astrid, Seung-Ik Lee
Abstract: Autonomous anomaly detection is a fundamental step in visual surveillance systems, and we have accordingly witnessed great progress in the form of various promising algorithms. Nonetheless, the majority of prior algorithms assume static surveillance cameras, which severely restricts the coverage of the system unless the number of cameras is greatly increased, raising both installation and monitoring costs. In this work we propose an anomaly detection system based on mobile surveillance cameras, i.e., moving robots that continuously navigate a target area. We compare newly acquired test images with a database of normal images using geo-tags. For anomaly detection, a Siamese network is trained to analyse two input images for anomalies while ignoring viewpoint differences. Further, our system can update the normal-image database with human collaboration. Finally, we propose a new test dataset captured by repeated visits of the robot over a constrained outdoor industrial target area. Our experiments demonstrate the effectiveness of the proposed system for anomaly detection using mobile surveillance robots.
Cited: 13
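The geo-tag comparison presupposes a retrieval step: before the Siamese network sees a pair, the system must find the stored normal image closest to the robot's current position. A minimal sketch of that lookup, assuming geo-tags are (lat, lon) pairs; the record layout here is hypothetical, not the authors' format:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def nearest_normal(query_tag, database):
    """Return the normal-image record whose geo-tag is closest to the query."""
    return min(database, key=lambda rec: haversine_m(*query_tag, *rec["tag"]))
```

The retrieved record's image and the fresh capture would then form the input pair for the viewpoint-tolerant Siamese comparison.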
Boosting Fairness for Masked Face Recognition
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00178
Jun Yu, Xinlong Hao, Zeyu Cui, Peng He, Tongliang Liu
Abstract: Face recognition has achieved excellent performance in recent years, but its potential for unfairness is raising alarm; for example, the recognition rate for East Asian faces is comparatively low. Much effort has been spent on improving the fairness of face recognition. During the COVID-19 pandemic, masked face recognition has become a hot topic, but it brings new challenges for fair face recognition: the mouth and nose are important for recognizing Asian faces, so masks further reduce the recognition rate for this group. To this end, this paper proposes a fair masked face recognition system. First, an appropriate masking method is used to generate masked faces. Then, a data re-sampling approach is employed to balance the data distribution and reduce bias, based on an analysis of the training data. Moreover, we propose an asymmetric-arc-loss, a combination of arc-face loss and circle-loss, which is useful for increasing the recognition rate and reducing bias. Integrating these techniques, we obtain fairer and better face recognition results on masked faces.
Cited: 0
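The asymmetric-arc-loss builds on the additive angular margin of arc-face. A minimal numerical sketch of that margin is below; the circle-loss component and the asymmetry are omitted, and the hyper-parameters `s` and `m` are common illustrative defaults, not necessarily the paper's values:

```python
import math

def arc_margin_logits(cosines, label, s=64.0, m=0.5):
    """ArcFace-style logits: add angular margin m to the target class's
    angle, then scale all cosines by s. `cosines` are the cosine
    similarities between a feature and each class weight vector."""
    out = []
    for i, c in enumerate(cosines):
        c = max(-1.0, min(1.0, c))  # guard acos against rounding
        if i == label:
            out.append(s * math.cos(math.acos(c) + m))  # penalise target angle
        else:
            out.append(s * c)
    return out

def softmax_ce(logits, label):
    """Cross-entropy of a softmax over the logits for the true label."""
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]
    return -math.log(exps[label] / sum(exps))
```

Because the margin shrinks the target-class logit, the loss is strictly larger than plain softmax cross-entropy on the same cosines, forcing a wider angular gap between classes at convergence.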
Convolutional Neural Networks Based Remote Sensing Scene Classification under Clear and Cloudy Environments
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00085
Huiming Sun, Yuewei Lin, Q. Zou, Shaoyue Song, Jianwu Fang, Hongkai Yu
Abstract: Remote sensing (RS) scene classification has wide applications in environmental monitoring and geological survey. In real-world applications, RS scene images taken by satellite fall into two scenarios: clear and cloudy environments. However, most existing methods do not consider both environments simultaneously. In this paper, we assume that global and local features are discriminative in either clear or cloudy environments. Many existing Convolutional Neural Network (CNN) based models have achieved excellent image classification results, yet they somewhat ignore global and local features in their network structure. We therefore propose a new CNN-based network (named GLNet) with a Global Encoder and a Local Encoder to extract discriminative global and local features for RS scene classification, where constraints for inter-class dispersion and intra-class compactness are embedded in the GLNet training. Experimental results on two public RS scene classification datasets show that the proposed GLNet achieves better performance with many existing CNN backbones under both clear and cloudy environments.
Cited: 12
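The intra-class compactness constraint is reminiscent of a center-loss-style penalty: pull each embedding toward its class centre (dispersion can then be encouraged by keeping the centres apart). A minimal sketch under that interpretation — this is an assumption about the form of the constraint, not the paper's published formulation:

```python
def center_loss(features, labels, centers):
    """Mean squared Euclidean distance from each feature vector to its
    class centre: small values mean compact classes. `centers` maps a
    class label to its (running-average) centre vector."""
    total = 0.0
    for f, y in zip(features, labels):
        c = centers[y]
        total += sum((fi - ci) ** 2 for fi, ci in zip(f, c))
    return total / len(features)
```

In training such a penalty is typically added to the classification loss with a small weight, and the centres are updated as running means of their class's embeddings.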
Fast Unsupervised MRI Reconstruction Without Fully-Sampled Ground Truth Data Using Generative Adversarial Networks
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00444
Elizabeth K. Cole, Frank Ong, S. Vasanawala, J. Pauly
Abstract: Most deep learning (DL) magnetic resonance imaging (MRI) reconstruction approaches rely on supervised training algorithms, which require access to high-quality, fully-sampled ground truth datasets. In MRI, acquiring fully-sampled data is time-consuming, expensive, and, in some cases, impossible due to limits on data acquisition speed. We present a DL framework for MRI reconstruction that does not require any fully-sampled data, using unsupervised generative adversarial networks. We test the proposed method on 2D knee MRI data and 2D+time abdominal dynamic contrast enhanced (DCE) MRI data. In the DCE-MRI dataset, as with many dynamic MRI sequences, ground truth could not be acquired, so supervised DL reconstruction was not feasible. We show that our unsupervised method produces reconstructions that are better than compressed sensing in terms of image metrics and recovery of anatomical structure, with faster inference. In contrast to most DL reconstruction techniques, which are supervised, this method needs no fully-sampled data, enabling accelerated imaging and accurate reconstruction in applications where fully-sampled datasets are difficult or impossible to obtain.
Cited: 14
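Accelerated acquisition of the kind targeted here means sampling only a fraction of k-space lines. A minimal sketch of a 1-D Cartesian undersampling mask with a fully sampled centre band, the usual starting point for such experiments; the parameter names and defaults are illustrative, not taken from the paper:

```python
import random

def undersample_mask(n_lines, accel=4, center_frac=0.08, seed=0):
    """Boolean mask over phase-encode lines: always keep a fully sampled
    centre band (low frequencies), then randomly keep outer lines so that
    roughly n_lines / accel lines are sampled in total."""
    rng = random.Random(seed)
    mask = [False] * n_lines
    n_center = max(1, int(round(center_frac * n_lines)))
    start = (n_lines - n_center) // 2
    for i in range(start, start + n_center):
        mask[i] = True  # fully sampled centre
    target = max(n_center, n_lines // accel)
    outer = [i for i in range(n_lines) if not mask[i]]
    for i in rng.sample(outer, target - n_center):
        mask[i] = True  # randomly sampled outer lines
    return mask
```

Applying such a mask to fully-sampled k-space retrospectively simulates an accelerated scan; in the prospective setting of the paper, the unsampled lines simply never exist, which is why no fully-sampled ground truth is available.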
3D mask presentation attack detection via high resolution face parts
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00100
O. Grinchuk, Aleksandr Parkin, Evgenija Glazistova
Abstract: 3D mask presentation attack detection (PAD) is a long-standing challenge in face anti-spoofing due to the high fidelity of attack artifacts and the limited number of samples available for training and evaluation. With the recent release of the large-scale and diverse CASIA-SURF HiFiMask dataset [19], it has become possible to address 3D mask PAD with deep neural networks. This paper introduces a new one-shot method for 3D mask PAD that extracts fine-grained information from appropriate parts of the human face and uses it to identify subtle differences between real and fake samples. The proposed method achieves a state-of-the-art result of 3% ACER on the CASIA-SURF HiFiMask test set.
Cited: 4
An Algorithmic Approach to Quantifying GPS Trajectory Error
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00437
Matthew Plaudis, Muhammad Azam, Derek Jacoby, M. Drouin, Y. Coady
Abstract: The alignment of aerial and satellite imagery with ground sensor data is an ongoing research challenge. In dense urban environments, part of this challenge stems from the positioning error of the Global Positioning System (GPS). Despite the potential for error, many studies use GPS to infer road networks because GPS data is inexpensive and can be acquired quickly. Major transit organizations freely provide data on the real-time positions of their buses as well as ground-truth route trajectories. This work exploits such geospatial open data to construct a database of historical GPS fixes from bus routes. Using this database, the GPS error map along the main arteries of major cities can be reconstructed. Extracting such error maps is highly relevant for planning and for the joint exploitation of airborne and ground-based imagery. In this work, we use bus routes in downtown Victoria, BC, Canada and Adelaide, Australia to demonstrate the extraction of GPS error maps.
Cited: 3
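Once fixes and routes are projected into a local metric frame, the per-fix GPS error against a ground-truth route reduces to a point-to-polyline distance. A minimal planar sketch, assuming the projection from (lat, lon) to local metres has already been done:

```python
import math

def point_segment_dist(p, a, b):
    """Planar distance from point p to segment ab; coordinates are assumed
    to be in a local metric frame (e.g. metres east/north of a reference)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)  # degenerate segment
    # parameter of the closest point on the infinite line, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def gps_error(fix, route):
    """Cross-track error of one GPS fix against a ground-truth polyline:
    distance to the nearest route segment."""
    return min(point_segment_dist(fix, route[i], route[i + 1])
               for i in range(len(route) - 1))
```

Aggregating this error per road segment over many bus passes is what would produce a per-artery error map of the kind the paper describes.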
A Computer Vision-Based Attention Generator using DQN
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00329
Jordan B. Chipka, Shuqing Zeng, Thanura R. Elvitigala, P. Mudalige
Abstract: A significant obstacle to achieving autonomous driving (AD) and advanced driver-assistance system (ADAS) functionality in passenger vehicles is high-fidelity perception at a sufficiently low cost of computation and sensors. One line of research addressing this challenge takes inspiration from human foveal vision by using attention-based sensing. This work presents an end-to-end, computer vision-based Deep Q-Network (DQN) technique that intelligently selects a priority region of an image on which to place greater attention, improving perception performance. The method is evaluated on the Berkeley Deep Drive (BDD) dataset. Results demonstrate that a substantial improvement in perception performance can be attained, compared to a baseline method, at minimal cost in time and processing.
Cited: 0
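A DQN that picks a priority region treats the candidate regions as a discrete action set. A minimal sketch of the two generic ingredients, epsilon-greedy action selection and a Bellman backup; the tabular update here stands in for the gradient step on the network's TD error and is illustrative only, not the authors' architecture:

```python
import random

def select_region(q_values, epsilon, rng=random):
    """Epsilon-greedy choice over candidate priority regions: explore a
    random region with probability epsilon, otherwise exploit the argmax."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def q_update(q, a, reward, q_next, gamma=0.99, lr=0.1):
    """One Bellman backup toward reward + gamma * max_a' Q(s', a').
    In a real DQN this target drives a gradient step instead."""
    target = reward + gamma * max(q_next)
    q[a] += lr * (target - q[a])
    return q
```

The reward in such a setup would come from the downstream perception gain achieved by attending to the chosen region.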
Residual Dilated U-net For The Segmentation Of COVID-19 Infection From CT Images
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00056
Alyaa Amer, Xujiong Ye, Faraz Janan
Abstract: Medical imaging such as computed tomography (CT) plays a critical role in the global fight against COVID-19, and computer-aided platforms have emerged to help radiologists diagnose and track disease prognosis. In this paper, we introduce an automated deep-learning segmentation model that builds upon the standard U-net model while leveraging the strengths of both long and short skip connections. We complement the long skip connections with a cascaded dilated convolution module that learns multi-scale context information, compensates for the reduction in receptive field, and reduces the disparity between encoded and decoded features. The short connections come from using residual blocks as the basic building blocks of our model; they ease the training process, reduce the degradation problem, and propagate low-level fine details, enabling the model to capture smaller regions of interest. Furthermore, each residual block is followed by a squeeze-and-excitation unit, which stimulates informative features and suppresses less important ones, improving the overall feature representation. After extensive experimentation on a dataset of 1705 COVID-19 axial CT images, we demonstrate that performance gains can be achieved when deep learning modules are integrated with the basic U-net model. Experimental results show that our model outperformed the basic U-net and ResDUnet models by 8.1% and 1.9% in dice similarity, respectively, achieving a dice similarity of 85.3% with only a slight increase in trainable parameters and thus demonstrating strong potential for use in the clinical domain.
Cited: 4
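The cascaded dilated convolution module enlarges the receptive field by spacing the kernel taps apart rather than adding parameters. A minimal 1-D sketch of a dilated "valid" convolution showing the mechanism; this is an illustration of the operation, not the paper's 2-D implementation:

```python
def dilated_conv1d(x, w, dilation=1):
    """'Valid' 1-D convolution with a dilated kernel: tap k of w is applied
    at offset k * dilation, so a kernel of length K covers a span of
    (K - 1) * dilation + 1 inputs with only K weights."""
    span = (len(w) - 1) * dilation
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span)]
```

Cascading several such convolutions with growing dilation rates multiplies the effective receptive field while keeping the parameter count fixed, which is what lets the module gather multi-scale context along the long skip connections.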
Mask Aware Network for Masked Face Recognition in the Wild
2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW) Pub Date : 2021-10-01 DOI: 10.1109/ICCVW54120.2021.00168
Kai Wang, Shuo Wang, Jianfei Yang, Xiaobo Wang, Baigui Sun, Hao Li, Yang You
Abstract: Face recognition is one of the most important research topics for intelligent security systems, especially in the COVID-19 era. Medical research has shown that wearing a mask is the most effective way to reduce the risk of COVID-19, yet classic face recognition systems often fail when dealing with masked faces. It is therefore essential to design a method that is robust for Masked Face Recognition (MFR). In this paper, to relieve the degraded performance of MFR, we propose the Mask Aware Network (MAN), comprising a mask generation module and a loss-function searching module. The mask generation module utilizes face landmarks to obtain more realistic and reliable masked faces for training. The loss-function searching module finds the most suitable loss for face recognition. In the ICCV MFR challenge, our team victor-2021 achieved 5 first places (including 3 championships in standard face recognition and 2 in masked face recognition) and 1 third place as of 3rd August 2021. These results demonstrate the robustness and generalization of our method in both standard and masked face recognition tasks.
Cited: 8