NPVForensics: Learning VA correlations in non-critical phoneme–viseme regions for deepfake detection
Yu Chen, Yang Yu, Rongrong Ni, Haoliang Li, Wei Wang, Yao Zhao
Image and Vision Computing, vol. 156, Article 105461, DOI: 10.1016/j.imavis.2025.105461 (published 2025-02-23)
Abstract: Advanced deepfake technology enables the manipulation of visual and audio signals within videos, leading to visual–audio (VA) inconsistencies. Current multimodal detectors rely primarily on VA contrastive learning to identify such inconsistencies, particularly in critical phoneme–viseme (PV) regions. However, state-of-the-art deepfake techniques now align critical PV pairs, reducing the inconsistency traces on which existing methods rely. Due to technical constraints, forgers cannot fully synchronize VA signals in non-critical phoneme–viseme (NPV) regions, so we exploit inconsistencies in NPV regions as a general cue for deepfake detection. We propose NPVForensics, a two-stage VA correlation learning framework designed to detect VA inconsistencies in the NPV regions of deepfake videos. First, to better extract unimodal VA features, we use the Swin Transformer to capture long-term global dependencies, and a Local Feature Aggregation (LFA) module aggregates local features along the spatial and channel dimensions to preserve more comprehensive and subtle information. Second, the VA Correlation Learning (VACL) module strengthens intra-modal augmentation and inter-modal information interaction, exploring the intrinsic correlations between the two modalities; Representation Alignment is introduced for real videos to narrow the modality gap and extract VA correlations more effectively. Finally, the model is pre-trained on real videos with a self-supervised strategy and fine-tuned for the deepfake detection task. Extensive experiments on six widely used deepfake datasets (FaceForensics++, FakeAVCeleb, Celeb-DF-v2, DFDC, FaceShifter, and DeeperForensics-1.0) show that the method achieves state-of-the-art cross-manipulation generalization and robustness. Notably, it also performs well on VA-coordinated datasets such as A2V, T2V-L, and T2V-S, indicating that VA inconsistencies in NPV regions serve as a general cue for deepfake detection.

Feature Field Fusion for few-shot novel view synthesis
Junting Li, Yanghong Zhou, Jintu Fan, Dahua Shou, Sa Xu, P.Y. Mok
Image and Vision Computing, vol. 156, Article 105465, DOI: 10.1016/j.imavis.2025.105465 (published 2025-02-22)
Abstract: Reconstructing neural radiance fields from limited or sparse views has shown very promising potential. Previous methods usually constrain the reconstruction process with additional priors, e.g., semantic-based or patch-based regularization. However, such regularization is applied to the synthesis of unseen views and may not effectively assist the learning of the radiance field, particularly when the training views are sparse. Instead, this paper proposes a feature Field Fusion (FFusion) NeRF that learns structure and finer details from features extracted by pre-trained neural networks on the sparse training views and uses them as an extra guide for training the RGB field. With this extra feature guidance, FFusion predicts more accurate color and density when synthesizing novel views. Experimental results show that FFusion effectively improves the quality of synthesized novel views given only limited or sparse inputs.

PolarDETR: Polar Parametrization for vision-based surround-view 3D detection
Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang, Chang Huang, Wenyu Liu
Image and Vision Computing, vol. 156, Article 105438, DOI: 10.1016/j.imavis.2025.105438 (published 2025-02-21)
Abstract: 3D detection based on a surround-view camera system is a critical and promising technique for autonomous driving. In this work, we exploit the view symmetry of the surround-view camera system as an inductive bias to improve optimization and boost performance. We parameterize each object's position in polar coordinates and decompose its velocity along the radial and tangential directions; the perception range, label assignment, and loss function are correspondingly reformulated in the polar coordinate system. This Polar Parametrization scheme establishes explicit associations between image patterns and prediction targets. Based on it, we propose a surround-view 3D detection method, termed PolarDETR, which achieves competitive performance on the nuScenes dataset. Thorough ablation studies validate its effectiveness.

Multispectral images reconstruction using median filtering based spectral correlation
Vishwas Rathi, Abhilasha Sharma, Amit Kumar Singh
Image and Vision Computing, vol. 156, Article 105462, DOI: 10.1016/j.imavis.2025.105462 (published 2025-02-21)
Abstract: Multispectral images are widely used in computer vision applications because they capture more information than traditional color images. Multispectral imaging systems use a multispectral filter array (MFA), an extension of the color filter array found in standard RGB cameras, which provides an efficient, cost-effective, and practical way to capture multispectral images. The primary challenge of MFA-based systems is the significant undersampling of spectral bands in the mosaicked image: a multispectral mosaic contains more spectral bands than an RGB mosaic, so the sampling density per band is lower. A multispectral demosaicing algorithm is therefore required to reconstruct the complete multispectral image from the mosaicked image, and the effectiveness of demosaicing relies heavily on exploiting the spatial and spectral correlations inherent in mosaicked images. In the proposed method, a binary-tree-based MFA pattern is employed to capture the mosaicked image. Rather than directly using the spectral correlations between bands, median filtering is applied to the spectral differences to mitigate the impact of noise on these correlations. Experimental results demonstrate that the proposed method achieves average improvements of 1.03 dB and 0.92 dB on 5-band to 10-band multispectral images from the widely used TokyoTech and CAVE datasets, respectively.
{"title":"Gait recognition via View-aware Part-wise Attention and Multi-scale Dilated Temporal Extractor","authors":"Xu Song , Yang Wang , Yan Huang , Caifeng Shan","doi":"10.1016/j.imavis.2025.105464","DOIUrl":"10.1016/j.imavis.2025.105464","url":null,"abstract":"<div><div>Gait recognition based on silhouette sequences has made significant strides in recent years through the extraction of body shape and motion features. However, challenges remain in achieving accurate gait recognition under covariate changes, such as variations in view and clothing. To tackle these issues, this paper introduces a novel methodology incorporating a View-aware Part-wise Attention (VPA) mechanism and a Multi-scale Dilated Temporal Extractor (MDTE) to enhance gait recognition. Distinct from existing techniques, VPA mechanism acknowledges the differential sensitivity of various body parts to view changes, applying targeted attention weights at the feature level to improve the efficacy of view-aware constraints in areas of higher saliency or distinctiveness. Concurrently, MDTE employs dilated convolutions across multiple scales to capture the temporal dynamics of gait at diverse levels, thereby refining the motion representation. Comprehensive experiments on the CASIA-B, OU-MVLP, and Gait3D datasets validate the superior performance of our approach. Remarkably, our method achieves a 91.0% accuracy rate under clothing-change conditions on the CASIA-B dataset using solely silhouette information, surpassing the current state-of-the-art (SOTA) techniques. These results underscore the effectiveness and adaptability of our proposed strategy in overcoming the complexities of gait recognition amidst covariate changes.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105464"},"PeriodicalIF":4.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143527562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FRoundation: Are foundation models ready for face recognition?","authors":"Tahar Chettaoui , Naser Damer , Fadi Boutros","doi":"10.1016/j.imavis.2025.105453","DOIUrl":"10.1016/j.imavis.2025.105453","url":null,"abstract":"<div><div>Foundation models are predominantly trained in an unsupervised or self-supervised manner on highly diverse and large-scale datasets, making them broadly applicable to various downstream tasks. In this work, we investigate for the first time whether such models are suitable for the specific domain of face recognition (FR). We further propose and demonstrate the adaptation of these models for FR across different levels of data availability, including synthetic data. Extensive experiments are conducted on multiple foundation models and datasets of varying scales for training and fine-tuning, with evaluation on a wide range of benchmarks. Our results indicate that, despite their versatility, pre-trained foundation models tend to underperform in FR in comparison with similar architectures trained specifically for this task. However, fine-tuning foundation models yields promising results, often surpassing models trained from scratch, particularly when training data is limited. For example, after fine-tuning only on 1K identities, DINOv2 ViT-S achieved average verification accuracy on LFW, CALFW, CPLFW, CFP-FP, and AgeDB30 benchmarks of 87.10%, compared to 64.70% achieved by the same model and without fine-tuning. While training the same model architecture, ViT-S, from scratch on 1k identities reached 69.96%. With access to larger-scale FR training datasets, these performances reach 96.03% and 95.59% for the DINOv2 and CLIP ViT-L models, respectively. In comparison to the ViT-based architectures trained from scratch for FR, fine-tuned same architectures of foundation models achieve similar performance while requiring lower training computational costs and not relying on the assumption of extensive data availability. We further demonstrated the use of synthetic face data, showing improved performances over both pre-trained foundation and ViT models. Additionally, we examine demographic biases, noting slightly higher biases in certain settings when using foundation models compared to models trained from scratch. We release our code and pre-trained models’ weights at <span><span>github.com/TaharChettaoui/FRoundation</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"156 ","pages":"Article 105453"},"PeriodicalIF":4.2,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143510399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Vehicle re-identification with large separable kernel attention and hybrid channel attention
Xuezhi Xiang, Zhushan Ma, Xiaoheng Li, Lei Zhang, Xiantong Zhen
Image and Vision Computing, vol. 155, Article 105442, DOI: 10.1016/j.imavis.2025.105442 (published 2025-02-17)
Abstract: With the rapid development of intelligent transportation systems and the spread of smart-city infrastructure, vehicle re-identification (Re-ID) has become an important research field. A key challenge of the vehicle Re-ID task is the high similarity between different vehicles. Existing methods use additional detection or segmentation models to extract discriminative local features, but they either rely on additional annotations or greatly increase the computational cost. Using attention mechanisms to capture global and local features is crucial for addressing this high inter-class similarity. In this paper, we propose LSKA-ReID, which combines large separable kernel attention with hybrid channel attention. Specifically, large separable kernel attention (LSKA) draws on the advantages of self-attention while also benefiting from convolution, allowing the global and local features of a vehicle to be extracted more comprehensively; we also compare the performance of LSKA and large kernel attention (LKA) on the vehicle Re-ID task. In addition, hybrid channel attention (HCA) combines channel attention with spatial information, so the model can better focus on informative channels and feature regions while ignoring background and other distracting information. Extensive experiments on three popular datasets, VeRi-776, VehicleID, and VERI-Wild, demonstrate the effectiveness of LSKA-ReID; in particular, on the VeRi-776 dataset it reaches 86.78% mAP and 98.09% Rank-1.

Innovative underwater image enhancement algorithm: Combined application of adaptive white balance color compensation and pyramid image fusion to submarine algal microscopy
Yi-Ning Fan, Geng-Kun Wu, Jia-Zheng Han, Bei-Ping Zhang, Jie Xu
Image and Vision Computing, vol. 156, Article 105466, DOI: 10.1016/j.imavis.2025.105466 (published 2025-02-16)
Abstract: Real-time microscopic images of harmful algal blooms (HABs) collected in coastal areas often suffer from significant color deviation and loss of fine cellular detail. To address these issues, this paper proposes a method for enhancing underwater microscopic images of marine algae based on Adaptive White Balance Color Compensation (AWBCC) and Image Pyramid Fusion (IPF). First, an effective Adaptive Cyclic Channel Compensation (ACCC) algorithm, based on the gray-world assumption, is proposed to enhance the color of underwater images, and a Maximum Color Channel Attention Guidance (MCCAG) method is employed to reduce the color disturbance caused by ignoring light absorption. An Empirical Contrast Enhancement (ECH) module based on multi-scale IPF, tailored to underwater microscopic images of algae, is introduced for global contrast enhancement, texture-detail enhancement, and noise control. Second, the paper proposes a network based on a diffusion probability model for edge detection in HABs that considers both the high-order and low-order features extracted from images, enriching the semantic information of the feature maps and improving edge-detection accuracy; this edge detector achieves an ODS of 0.623 and an OIS of 0.683. Experimental evaluations demonstrate that the underwater algae microscopic image enhancement method amplifies local texture features while preserving the original image structure, significantly improving the accuracy of edge detection and key-point matching. Compared with several state-of-the-art underwater image enhancement methods, the approach achieves the highest values in contrast, average gradient, entropy, and Enhancement Measure Estimation (EME), and also delivers competitive results in image noise control.
{"title":"Two-modal multiscale feature cross fusion for hyperspectral unmixing","authors":"Senlong Qin, Yuqi Hao, Minghui Chu, Xiaodong Yu","doi":"10.1016/j.imavis.2025.105445","DOIUrl":"10.1016/j.imavis.2025.105445","url":null,"abstract":"<div><div>Hyperspectral images (HSI) possess rich spectral characteristics but suffer from low spatial resolution, which has led many methods to focus on extracting more spatial information from HSI. However, the spatial information that can be extracted from a single HSI is limited, making it difficult to distinguish objects with similar materials. To address this issue, we propose a multimodal unmixing network called MSFF-Net. This network enhances unmixing performance by integrating the spatial information from light detection and ranging (LiDAR) data into the unmixing process. To ensure a more comprehensive fusion of features from the two modalities, we introduce a multi-scale cross-fusion method, providing a new approach to multimodal data fusion. Additionally, the network employs attention mechanisms to enhance channel-wise and spatial features, boosting the model's representational capacity. Our proposed model effectively consolidates multimodal information, significantly improving its unmixing capability, especially in complex environments, leading to more accurate unmixing results and facilitating further analysis of HSI. We evaluate our method using two real-world datasets. Experimental results demonstrate that our proposed approach outperforms other state-of-the-art methods in terms of both stability and effectiveness.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"155 ","pages":"Article 105445"},"PeriodicalIF":4.2,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143445920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proactive robot task sequencing through real-time hand motion prediction in human–robot collaboration","authors":"Shyngyskhan Abilkassov , Michael Gentner , Almas Shintemirov , Eckehard Steinbach , Mirela Popa","doi":"10.1016/j.imavis.2025.105443","DOIUrl":"10.1016/j.imavis.2025.105443","url":null,"abstract":"<div><div>Human–robot collaboration (HRC) is essential for improving productivity and safety across various industries. While reactive motion re-planning strategies are useful, there is a growing demand for proactive methods that predict human intentions to enable more efficient collaboration. This study addresses this need by introducing a framework that combines deep learning-based human hand trajectory forecasting with heuristic optimization for robotic task sequencing. The deep learning model advances real-time hand position forecasting using a multi-task learning loss to account for both hand positions and contact delay regression, achieving state-of-the-art performance on the Ego4D Future Hand Prediction benchmark. By integrating hand trajectory predictions into task planning, the framework offers a cohesive solution for HRC. To optimize task sequencing, the framework incorporates a Dynamic Variable Neighborhood Search (DynamicVNS) heuristic algorithm, which allows robots to pre-plan task sequences and avoid potential collisions with human hand positions. DynamicVNS provides significant computational advantages over the generalized VNS method. The framework was validated on a UR10e robot performing a visual inspection task in a HRC scenario, where the robot effectively anticipated and responded to human hand movements in a shared workspace. Experimental results highlight the system’s effectiveness and potential to enhance HRC in industrial settings by combining predictive accuracy and task planning efficiency.</div></div>","PeriodicalId":50374,"journal":{"name":"Image and Vision Computing","volume":"155 ","pages":"Article 105443"},"PeriodicalIF":4.2,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143429776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}