{"title":"Object Detection Based on CNN and Vision-Transformer: A Survey","authors":"Jinfeng Cao, Bo Peng, Mingzhong Gao, Haichun Hao, Xinfang Li, Hongwei Mou","doi":"10.1049/cvi2.70028","DOIUrl":"https://doi.org/10.1049/cvi2.70028","url":null,"abstract":"<p>Object detection is the most crucial and challenging task of computer vision and has been used in various fields in recent years, such as autonomous driving and industrial inspection. Traditional object detection methods are mainly based on the sliding windows and the handcrafted features, which have problems such as insufficient understanding of image features and low accuracy of detection. With the rapid advancements in deep learning, convolutional neural networks (CNNs) and vision transformers have become fundamental components in object detection models. These components are capable of learning more advanced and deeper image properties, leading to a transformational breakthrough in the performance of object detection. In this review, we comprehensively review the representative object detection models from deep learning periods, tracing their architectural shifts and technological breakthroughs. Furthermore, we discuss key challenges and promising research directions in the object detection. This review aims to provide a comprehensive foundation for practitioners to enhance their understanding of object detection technologies.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70028","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144179248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FastVDT: Fast Transformer With Optimised Attention Masks and Positional Encoding for Visual Dialogue","authors":"Qiangqiang He, Shuwei Qian, Chongjun Wang","doi":"10.1049/cvi2.70022","DOIUrl":"https://doi.org/10.1049/cvi2.70022","url":null,"abstract":"<p>The visual dialogue task requires computers to comprehend image content and preceding question-and-answer history to accurately answer related questions, with each round of dialogue providing the necessary historical context for subsequent interactions. Existing research typically processes multiple questions related to a single image as independent samples, which results in redundant modelling of the images and their captions and substantially increases computational costs. To address the challenges above, we introduce a fast transformer for visual dialogue, termed FastVDT, which utilises novel attention masks and continuous positional encoding. FastVDT models multiple image-related questions as an integrated entity, accurately processing prior conversation history in each dialogue round while predicting answers to multiple questions. Our method effectively captures the interrelations among questions and significantly reduces computational overhead. Experimental results demonstrate that our method delivers outstanding performance on the VisDial v0.9 and v1.0 datasets. FastVDT achieves comparable performance to VD-BERT and VU-BERT while reducing computational costs by 80% and 56%, respectively.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70022","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144171519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"End-to-End Cascaded Image Restoration and Object Detection for Rain and Fog Conditions","authors":"Peng Li, Jun Ni, Dapeng Tao","doi":"10.1049/cvi2.70021","DOIUrl":"https://doi.org/10.1049/cvi2.70021","url":null,"abstract":"<p>Adverse weather conditions in real-world scenarios can degrade the performance of deep learning-based object detection models. A commonly used approach is to apply image restoration before object detection to improve degraded images. However, there is no direct correlation between the visual quality of image restoration and the object detection accuracy. Furthermore, image restoration and object detection have potential conflicting objectives, making joint optimisation difficult. To address this, we propose an end-to-end object detection network specifically designed for rainy and foggy conditions. Our approach cascades an image restoration subnetwork with a detection subnetwork and optimises them jointly through a shared objective. Specifically, we introduce an expanded dilated convolution block and a weather attention block to enhance the effectiveness and robustness of the restoration network under various weather degradations. Additionally, we incorporate an auxiliary alignment branch with feature alignment loss to align the features of restored and clean images within the detection backbone, enabling joint optimisation of both subnetworks. A novel training strategy is also proposed to further improve object detection performance under rainy and foggy conditions. Extensive experiments on the vehicle-rain-fog, VOC-fog and real-world fog datasets demonstrate that our method outperforms recent state-of-the-art approaches in image restoration quality and detection accuracy. The code is available at https://github.com/HappyPessimism/RainFog-Restoration-Detection.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144091874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LRCM: Enhancing Adversarial Purification Through Latent Representation Compression","authors":"Yixin Li, Xintao Luo, Weijie Wu, Minjia Zheng","doi":"10.1049/cvi2.70030","DOIUrl":"https://doi.org/10.1049/cvi2.70030","url":null,"abstract":"<p>In the current context of the extensive use of deep neural networks, it has been observed that neural network models are vulnerable to adversarial perturbations, which may lead to unexpected results. In this paper, we introduce an Adversarial Purification Model rooted in latent representation compression, aimed at enhancing the robustness of deep learning models. Initially, we employ an encoder-decoder architecture inspired by the U-net to extract features from input samples. Subsequently, these features undergo a process of information compression to remove adversarial perturbations from the latent space. To counteract the model's tendency to overly focus on fine-grained details of input samples, resulting in ineffective adversarial sample purification, an early freezing mechanism is introduced during the encoder training process. We tested our model's ability to purify adversarial samples generated from the CIFAR-10, CIFAR-100, and ImageNet datasets using various methods. These samples were then used to test ResNet, an image recognition classifiers. Our experiments covered different resolutions and attack types to fully assess LRCM's effectiveness against adversarial attacks. We also compared LRCM with other defence strategies, demonstrating its strong defensive capabilities.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70030","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144085290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Geometric Edge Modelling in Self-Supervised Learning for Enhanced Indoor Depth Estimation","authors":"Niclas Joswig, Laura Ruotsalainen","doi":"10.1049/cvi2.70026","DOIUrl":"https://doi.org/10.1049/cvi2.70026","url":null,"abstract":"<p>Recently, the accuracy of self-supervised deep learning models for indoor depth estimation has approached that of supervised models by improving the supervision in planar regions. However, a common issue with integrating multiple planar priors is the generation of <i>oversmooth</i> depth maps, leading to unrealistic and erroneous depth representations at edges. Despite the fact that edge pixels only cover a small part of the image, they are of high significance for downstream tasks such as visual odometry, where image features, essential for motion computation, are mostly located at edges. To improve erroneous depth predictions at edge regions, we delve into the self-supervised training process, identifying its limitations and using these insights to develop a geometric edge model. Building on this, we introduce a novel algorithm that utilises the smooth depth predictions of existing models and colour image data to accurately identify edge pixels. After finding the edge pixels, our approach generates targeted self-supervision in these zones by interpolating depth values from adjacent planar areas towards the edges. We integrate the proposed algorithms into a novel loss function that encourages neural networks to predict sharper and more accurate depth edges in indoor scenes. To validate our methodology, we incorporated the proposed edge-enhancing loss function into a state-of-the-art self-supervised depth estimation framework. Our results demonstrate a notable improvement in the accuracy of edge depth predictions and a 19% improvement in visual odometry when using our depth model to generate RGB-D input, compared to the baseline model.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70026","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143938938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adapting the Re-ID Challenge for Static Sensors","authors":"Avirath Sundaresan, Jason Parham, Jonathan Crall, Rosemary Warungu, Timothy Muthami, Jackson Miliko, Margaret Mwangi, Jason Holmberg, Tanya Berger-Wolf, Daniel Rubenstein, Charles Stewart, Sara Beery","doi":"10.1049/cvi2.70027","DOIUrl":"https://doi.org/10.1049/cvi2.70027","url":null,"abstract":"<p>The Grévy's zebra, an endangered species native to Kenya and southern Ethiopia, has been the target of sustained conservation efforts in recent years. Accurately monitoring Grévy's zebra populations is essential for ecologists to evaluate ongoing conservation initiatives. Recently, in both 2016 and 2018, a full census of the Grévy's zebra population was enabled by the Great Grévy's Rally (GGR), a citizen science event that combines teams of volunteers to capture data with computer vision algorithms that help experts estimate the number of individuals in the population. A complementary, scalable, cost-effective and long-term Grévy's population monitoring approach involves deploying a network of camera traps, which we have done at the Mpala Research Centre in Laikipia County, Kenya. In both scenarios, a substantial majority of the images of zebras are not usable for individual identification due to ‘in-the-wild’ imaging conditions—occlusions from vegetation or other animals, oblique views, low image quality and animals that appear in the far background and are thus too small to identify. Camera trap images, without an intelligent human photographer to select the framing and focus on the animals of interest, are of even poorer quality, with high rates of occlusion and high spatiotemporal similarity within image bursts. We employ an image filtering pipeline incorporating animal detection, species identification, viewpoint estimation, quality evaluation and temporal subsampling to compensate for these factors and obtain individual crops from camera trap and GGR images of suitable quality for re-ID. We then employ the local clusterings and their alternatives (LCA) algorithm, a hybrid computer vision and graph clustering method for animal re-ID, on the resulting high-quality crops. Our method processed images taken during GGR-16 and GGR-18 in Meru County, Kenya, into 4142 highly comparable annotations, requiring only 120 contrastive same-vs-different-individual decisions from a human reviewer to produce a population estimate of 349 individuals (within 4.6<span></span><math>\u0000 <semantics>\u0000 <mrow>\u0000 <mi>%</mi>\u0000 </mrow>\u0000 <annotation> $%$</annotation>\u0000 </semantics></math> of the ground truth count in Meru County). Our method also efficiently processed 8.9M unlabelled camera trap images from 70 camera traps at Mpala over 2 years into 685 encounters of 173 unique individuals, requiring only 331 contrastive decisions from a human reviewer.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70027","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143925963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Texture-Aware Network for Enhancing Inner Smoke Representation in Visual Smoke Density Estimation","authors":"Xue Xia, Yajing Peng, Zichen Li, Jinting Shi, Yuming Fang","doi":"10.1049/cvi2.70023","DOIUrl":"https://doi.org/10.1049/cvi2.70023","url":null,"abstract":"<p>Smoke often appears before visible flames in the early stages of fire disasters, making accurate pixel-wise detection essential for fire alarms. Although existing segmentation models effectively identify smoke pixels, they generally treat all pixels within a smoke region as having the same prior probability. This assumption of rigidity, common in natural object segmentation, fails to account for the inherent variability within smoke. We argue that pixels within smoke exhibit a probabilistic relationship with both smoke and background, necessitating density estimation to enhance the representation of internal structures within the smoke. To this end, we propose enhancements across the entire network. First, we improve the backbone by adaptively integrating scene information into texture features through separate paths, enabling smoke-tailored feature representation for further exploit. Second, we introduce a texture-aware head with long convolutional kernels to integrate both global and orientation-specific information, enhancing representation for intricate smoke structure. Third, we develop a dual-task decoder for simultaneous density and location recovery, with the frequency-domain alignment in the final stage to preserve internal smoke details. Extensive experiments on synthetic and real smoke datasets demonstrate the effectiveness of our approach. Specifically, comparisons with 17 models show the superiority of our method, with mean IoU improvements of 4.88%, 2.63%, and 3.17% on three test sets. (The code will be available on https://github.com/xia-xx-cv/TANet_smoke).</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70023","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143914029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Angle Metric Learning for Discriminative Features on Vehicle Re-Identification","authors":"Yutong Xie, Shuoqi Zhang, Lide Guo, Yuming Liu, Rukai Wei, Yanzhao Xie, Yangtao Wang, Maobin Tang, Lisheng Fan","doi":"10.1049/cvi2.70015","DOIUrl":"https://doi.org/10.1049/cvi2.70015","url":null,"abstract":"<p>Vehicle re-identification (Re-ID) facilitates the recognition and distinction of vehicles based on their visual characteristics in images or videos. However, accurately identifying a vehicle poses great challenges due to (i) the pronounced intra-instance variations encountered under varying lighting conditions such as day and night and (ii) the subtle inter-instance differences observed among similar vehicles. To address these challenges, the authors propose <b>A</b>ngle <b>M</b>etric learning for <b>D</b>iscriminative <b>F</b>eatures on vehicle Re-ID (termed as AMDF), which aims to maximise the variance between visual features of different classes while minimising the variance within the same class. AMDF comprehensively measures the angle and distance discrepancies between features. First, to mitigate the impact of lighting conditions on intra-class variation, the authors employ CycleGAN to generate images that simulate consistent lighting (either day or night), thereby standardising the conditions for distance measurement. Second, Swin Transformer was integrated to help generate more detailed features. At last, a novel angle metric loss based on cosine distance is proposed, which organically integrates angular metric and 2-norm metric, effectively maximising the decision boundary in angular space. Extensive experimental evaluations on three public datasets including VERI-776, VERI-Wild, and VEHICLEID, indicate that the method achieves state-of-the-art performance. The code of this project is released at https://github.com/ZnCu-0906/AMDF.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143905080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tran-GCN: A Transformer-Enhanced Graph Convolutional Network for Person Re-Identification in Monitoring Videos","authors":"Xiaobin Hong, Tarmizi Adam, Masitah Ghazali","doi":"10.1049/cvi2.70025","DOIUrl":"https://doi.org/10.1049/cvi2.70025","url":null,"abstract":"<p>Person re-identification (Re-ID) has gained popularity in computer vision, enabling cross-camera pedestrian recognition. Although the development of deep learning has provided a robust technical foundation for person Re-ID research, most existing person Re-ID methods overlook the potential relationships among local person features, failing to adequately address the impact of pedestrian pose variations and local body parts occlusion. Therefore, we propose a transformer-enhanced graph convolutional network (Tran-GCN) model to improve person re-identification performance in monitoring videos. The model comprises four key components: (1) a pose estimation learning branch is utilised to estimate pedestrian pose information and inherent skeletal structure data, extracting pedestrian key point information; (2) a transformer learning branch learns the global dependencies between fine-grained and semantically meaningful local person features; (3) a convolution learning branch uses the basic ResNet architecture to extract the person's fine-grained local features; and (4) a Graph convolutional module (GCM) integrates local feature information, global feature information and body information for more effective person identification after fusion. Quantitative and qualitative analysis experiments conducted on three different datasets (Market-1501, DukeMTMC-ReID and MSMT17) demonstrate that the Tran-GCN model can more accurately capture discriminative person features in monitoring videos, significantly improving identification accuracy.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70025","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143889172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CNN-Based Flank Predictor for Quadruped Animal Species","authors":"Vanessa Suessle, Marco Heurich, Colleen T. Downs, Andreas Weinmann, Elke Hergenroether","doi":"10.1049/cvi2.70024","DOIUrl":"https://doi.org/10.1049/cvi2.70024","url":null,"abstract":"<p>The bilateral asymmetry of flanks, where the sides of an animal with unique visual markings are independently patterned, complicates tasks such as individual identification. Automatically generating additional information on the visible side of the animal would improve the accuracy of individual identification. In this study, we used transfer learning on popular convolutional neural network (CNN) image classification architectures to train a flank predictor that predicted the visible flank of quadruped mammalian species in images. We automatically derived the data labels from existing datasets initially labelled for animal pose estimation. The developed models were evaluated across various scenarios involving unseen quadruped species in familiar and unfamiliar habitats. As a real-world scenario, we used a dataset of manually labelled Eurasian lynx (<i>Lynx lynx</i>) from camera traps in the Bavarian Forest National Park, Germany, to evaluate the model. The best model on data obtained in the field was trained on a MobileNetV2 architecture. It achieved an accuracy of 91.7% for the unseen/untrained species lynx in a complex unseen/untrained habitat with challenging light conditions. The developed flank predictor was designed to be embedded as a preprocessing step for automated analysis of camera trap datasets to enhance tasks such as individual identification.</p>","PeriodicalId":56304,"journal":{"name":"IET Computer Vision","volume":"19 1","pages":""},"PeriodicalIF":1.5,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/cvi2.70024","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143889173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}