Latest articles from IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

CloCap-GS: Clothed Human Performance Capture With 3D Gaussian Splatting
IF 13.7
Kangkan Wang;Chong Wang;Jian Yang;Guofeng Zhang
DOI: 10.1109/TIP.2025.3592534 | Vol. 34, pp. 5200-5214 | Published: 2025-07-30
Abstract: Capturing the human body and clothing from videos has seen significant progress in recent years, but several challenges remain. Previous methods either reconstruct 3D bodies and garments from videos of self-rotating human motions or capture the body and clothing separately using neural implicit fields. However, methods designed for self-rotating motions can produce unstable tracking on dynamic videos with arbitrary human motions, while implicit-field-based methods suffer from inefficient rendering and low-quality synthesis. To address these problems, we propose CloCap-GS, a new method for clothed human performance capture with 3D Gaussian Splatting. Specifically, we align 3D Gaussians with the deforming geometries of the body and clothing, and leverage photometric constraints, formed by matching Gaussian renderings against the input video frames, to recover temporal deformations of the dense template geometry. The geometric deformations and Gaussian properties of both the body and clothing are optimized jointly, achieving dense geometry tracking and novel-view synthesis simultaneously. In addition, we introduce a physics-aware, material-varying cloth model, pre-trained in a self-supervised manner without requiring prepared training data, to preserve physically plausible cloth dynamics and body-clothing interactions. Compared with existing methods, our method improves the accuracy of dense geometry tracking and the quality of novel-view synthesis for a variety of daily garment types (e.g., loose clothes). Extensive quantitative and qualitative experiments demonstrate the effectiveness of CloCap-GS on real sparse-view and monocular videos.
Citations: 0
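The abstract above drives the template deformation with a photometric constraint between Gaussian renderings and the input frames. Below is a minimal sketch of such a loss, assuming a differentiable Gaussian Splatting rasterizer is supplied as `render_fn`; the function names and the L1/D-SSIM weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def photometric_loss(render_fn, gaussians, camera, frame, ssim_fn=None, lambda_ssim=0.2):
    """Compare a rendering of the deformed template Gaussians with a video frame.

    render_fn is a placeholder for a differentiable 3D Gaussian Splatting
    rasterizer; frame is the ground-truth image of shape (3, H, W) in [0, 1].
    """
    rendered = render_fn(gaussians, camera)        # (3, H, W), differentiable w.r.t. Gaussians
    l1 = F.l1_loss(rendered, frame)                # per-pixel photometric term
    if ssim_fn is not None:                        # optional structural term, 3DGS-style mix
        return (1 - lambda_ssim) * l1 + lambda_ssim * (1 - ssim_fn(rendered, frame))
    return l1
```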
Escaping Modal Interactions: An Efficient DESANet for Multi-Modal Object Re-Identification
IF 13.7
Wenjiao Dong;Xi Yang;De Cheng;Nannan Wang;Xinbo Gao
DOI: 10.1109/TIP.2025.3592575 | Vol. 34, pp. 5068-5083 | Published: 2025-07-30
Abstract: Multi-modal object Re-ID aims to leverage the complementary information provided by multiple modalities to overcome challenging conditions and achieve high-quality object matching. However, existing multi-modal methods typically rely on various modality-interaction modules for information fusion, which can reduce the efficiency of real-time monitoring systems. In addition, practical challenges such as low-quality multi-modal data or missing modalities further complicate the application of object Re-ID. To address these issues, we propose the Complementary Data Enhancement and Modal-Aware Soft Alignment Network (DESANet), which is designed to be independent of interaction networks and adaptable to scenarios with missing modalities, yielding simple, effective, and efficient multi-modal object Re-ID. DESANet consists of three key components. First, the Dual-Color Space Data Enhancement (DCDE) module enhances multi-modal data by performing patch rotation in the RGB space and improving image quality in the HSV space. Second, the Salient Feature ReConstruction (SFRC) module addresses missing modalities by reconstructing the features of one modality from the other two. Third, the Modal-Aware Soft Alignment (MASA) module integrates multi-source data to avoid the blind fusion of features and prevents the propagation of noise from reconstructed modalities. Our approach achieves state-of-the-art performance on both person and vehicle datasets. Source code is available at https://github.com/DWJ11/DESANet
Citations: 0
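The DCDE module described above enhances data via patch rotation in RGB and quality improvement in HSV. The sketch below is one hedged reading of that idea; the patch size, rotation choices, and brightness gain are assumptions rather than the paper's settings.

```python
import numpy as np
import cv2

def dcde_like_augment(img_rgb, patch=64, v_gain=1.2, rng=np.random.default_rng()):
    """Toy dual-color-space augmentation: rotate one RGB patch, then brighten in HSV.

    img_rgb: uint8 array of shape (H, W, 3). Patch size and gain are illustrative.
    """
    out = img_rgb.copy()
    h, w = out.shape[:2]
    # RGB space: rotate a random square patch by 90/180/270 degrees
    y = rng.integers(0, h - patch)
    x = rng.integers(0, w - patch)
    k = int(rng.integers(1, 4))                     # number of 90-degree rotations
    out[y:y + patch, x:x + patch] = np.rot90(out[y:y + patch, x:x + patch], k)
    # HSV space: scale the value (brightness) channel to improve visibility
    hsv = cv2.cvtColor(out, cv2.COLOR_RGB2HSV).astype(np.float32)
    hsv[..., 2] = np.clip(hsv[..., 2] * v_gain, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2RGB)
```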
Decentralized Nonconvex Low-rank Matrix Recovery
IF 13.7
Junzhuo Gao;Heng Lian
DOI: 10.1109/TIP.2025.3588719 | Vol. 34, pp. 4806-4813 | Published: 2025-07-29
Abstract: For the low-rank matrix recovery problem, algorithms that directly manipulate the low-rank matrix typically require computing the top singular values/vectors of the matrix and thus are computationally expensive. Matrix factorization is a computationally efficient nonconvex approach for low-rank matrix recovery, utilizing an alternating minimization or a gradient descent algorithm, and its theoretical properties have been investigated in recent years. However, the behavior of the factorization-based matrix recovery problem in the decentralized setting is still unknown when data are distributed on multiple nodes. In this paper, we consider the distributed gradient descent algorithm and establish its (local) linear convergence up to the approximation error. Numerical results are also presented to illustrate the convergence of the algorithm over a general network.
Citations: 0
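The paper above studies distributed gradient descent for factorization-based low-rank recovery. As a rough illustration (not the paper's algorithm or analysis setting), the sketch below runs decentralized gradient descent for matrix completion: each node mixes its factor copies with its neighbours via a mixing matrix and then takes a local gradient step on its own observed entries.

```python
import numpy as np

def decentralized_lowrank_completion(M, masks, W, r, eta=0.01, iters=500, seed=0):
    """Toy decentralized matrix completion via the factorization M ≈ U V^T.

    Node i observes the entries of M selected by masks[i] (0/1 arrays of M's
    shape); W is a doubly stochastic mixing matrix over the node network.
    Step size, rank, and iteration count are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    n_nodes = len(masks)
    m, n = M.shape
    U = [0.1 * rng.standard_normal((m, r)) for _ in range(n_nodes)]
    V = [0.1 * rng.standard_normal((n, r)) for _ in range(n_nodes)]
    for _ in range(iters):
        # consensus step: each node averages its factors with its neighbours'
        U_mix = [sum(W[i, j] * U[j] for j in range(n_nodes)) for i in range(n_nodes)]
        V_mix = [sum(W[i, j] * V[j] for j in range(n_nodes)) for i in range(n_nodes)]
        for i in range(n_nodes):
            R = masks[i] * (U_mix[i] @ V_mix[i].T - M)   # residual on locally observed entries
            U[i] = U_mix[i] - eta * (R @ V_mix[i])       # local gradient steps on the factors
            V[i] = V_mix[i] - eta * (R.T @ U_mix[i])
    return U, V
```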
ReviveDiff: A Universal Diffusion Model for Restoring Images in Adverse Weather Conditions
IF 13.7
Wenfeng Huang;Guoan Xu;Wenjing Jia;Stuart Perry;Guangwei Gao
DOI: 10.1109/TIP.2025.3587578 | Vol. 34, pp. 4706-4720 | Published: 2025-07-23
Abstract: Images captured in challenging environments, such as nighttime, smoke, rainy weather, and underwater scenes, often suffer from significant degradation, resulting in a substantial loss of visual quality. Effective restoration of these degraded images is critical for subsequent vision tasks. While many existing approaches have successfully incorporated specific priors for individual tasks, these tailored solutions limit their applicability to other degradations. In this work, we propose a universal network architecture, dubbed "ReviveDiff", which can address various degradations and restore images to their original quality by enhancing and restoring their details. Our approach is inspired by the observation that, unlike degradation caused by motion or electronic issues, quality degradation under adverse conditions primarily stems from natural media (such as fog, water, and low luminance), which generally preserve the original structures of objects. To restore the quality of such images, we leverage the latest advances in diffusion models and develop ReviveDiff to restore image quality at both macro and micro levels across key factors that determine image quality, such as sharpness, distortion, noise level, dynamic range, and color accuracy. We rigorously evaluate ReviveDiff on seven benchmark datasets covering five types of degrading conditions: rainy, underwater, low-light, smoke, and nighttime hazy. Our experimental results demonstrate that ReviveDiff outperforms state-of-the-art methods both quantitatively and visually.
Citations: 0
Selective Cross-View Topology for Deep Incomplete Multi-View Clustering
IF 13.7
Zhibin Dong;Dayu Hu;Jiaqi Jin;Siwei Wang;Xinwang Liu;En Zhu
DOI: 10.1109/TIP.2025.3587586 | Vol. 34, pp. 4792-4805 | Published: 2025-07-23
Abstract: Incomplete multi-view clustering has gained significant attention due to the prevalence of incomplete multi-view data in real-world scenarios. However, existing methods often overlook the critical role of inter-view relationships. In unsupervised settings, selectively leveraging cross-view topological relationships can effectively guide view completion and representation learning. To address this challenge, we propose a novel framework called Selective Cross-View Topology Incomplete Multi-View Clustering (SCVT). Our approach constructs a view topology graph using the Optimal Transport (OT) distance between views. This graph helps identify neighboring views for those with missing data, enabling the inference of topological relationships and accurate completion of missing samples. Additionally, we introduce the Max View Graph Contrastive Alignment module to facilitate information transfer and alignment across neighboring views. Furthermore, we propose the View Graph Weighted Intra-View Contrastive Learning module, which enhances representation learning by pulling representations of samples within the same cluster closer, while applying varying degrees of enhancement across different views based on the view graph. Our method achieves state-of-the-art performance on seven benchmark datasets, significantly outperforming existing incomplete multi-view clustering methods and demonstrating its effectiveness.
Citations: 0
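SCVT builds a view-topology graph from Optimal Transport distances between views. The sketch below shows one generic way to compute such a graph with the third-party POT library, treating each view's features as an empirical distribution with uniform weights; the neighbour count and the exact-OT solver are illustrative assumptions, not SCVT's construction.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def view_topology_graph(views, k=2):
    """Build a simple view-topology graph from pairwise OT distances.

    views: list of per-view feature matrices of shape (n_samples_v, d), already
    projected to a shared dimension d. For each view, the k views with the
    smallest OT distance become its neighbours.
    """
    V = len(views)
    D = np.zeros((V, V))
    for i in range(V):
        for j in range(i + 1, V):
            a = np.full(len(views[i]), 1.0 / len(views[i]))   # uniform weights
            b = np.full(len(views[j]), 1.0 / len(views[j]))
            M = ot.dist(views[i], views[j])                   # pairwise squared Euclidean costs
            D[i, j] = D[j, i] = ot.emd2(a, b, M)              # exact OT cost between the two views
    neighbours = {i: np.argsort(D[i])[1:k + 1].tolist() for i in range(V)}
    return D, neighbours
```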
Zero-Shot Skeleton-Based Action Recognition With Prototype-Guided Feature Alignment
Kai Zhou;Shuhai Zhang;Zeng You;Jinwu Hu;Mingkui Tan;Fei Liu
DOI: 10.1109/TIP.2025.3586487 | Vol. 34, pp. 4602-4617 | Published: 2025-07-18
Abstract: Zero-shot skeleton-based action recognition aims to classify unseen skeleton-based human actions without prior exposure to such categories during training. This task is extremely challenging due to the difficulty in generalizing from known to unknown actions. Previous studies typically use two-stage training: pre-training skeleton encoders on seen action categories using cross-entropy loss and then aligning pre-extracted skeleton and text features, enabling knowledge transfer to unseen classes through skeleton-text alignment and language models' generalization. However, their efficacy is hindered by 1) insufficient discrimination for skeleton features, as the fixed skeleton encoder fails to capture necessary alignment information for effective skeleton-text alignment; and 2) the neglect of alignment bias between skeleton and unseen text features during testing. To this end, we propose a prototype-guided feature alignment paradigm for zero-shot skeleton-based action recognition, termed PGFA. Specifically, we develop an end-to-end cross-modal contrastive training framework to improve skeleton-text alignment, ensuring sufficient discrimination for skeleton features. Additionally, we introduce a prototype-guided text feature alignment strategy to mitigate the adverse impact of the distribution discrepancy during testing. We provide a theoretical analysis to support our prototype-guided text feature alignment strategy and empirically evaluate our overall PGFA on three well-known datasets. Compared with the top competitor SMIE method, our PGFA achieves absolute accuracy improvements of 22.96%, 12.53%, and 18.54% on the NTU-60, NTU-120, and PKU-MMD datasets, respectively.
Citations: 0
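PGFA trains skeleton and text encoders end to end with a cross-modal contrastive objective. The snippet below is a generic CLIP-style symmetric InfoNCE between paired skeleton and text embeddings, shown only to illustrate what such an alignment loss looks like; PGFA's actual loss and its prototype guidance may differ.

```python
import torch
import torch.nn.functional as F

def skeleton_text_contrastive_loss(skel_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired skeleton/text embeddings.

    skel_emb, text_emb: (B, d) tensors where row i of each forms a matched pair
    (a skeleton sequence and the text of its action label).
    """
    skel = F.normalize(skel_emb, dim=-1)
    text = F.normalize(text_emb, dim=-1)
    logits = skel @ text.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(skel.size(0), device=skel.device)
    return 0.5 * (F.cross_entropy(logits, targets) +       # skeleton-to-text direction
                  F.cross_entropy(logits.t(), targets))    # text-to-skeleton direction
```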
SQLNet: Scale-Modulated Query and Localization Network for Few-Shot Class-Agnostic Counting
Hefeng Wu;Yandong Chen;Lingbo Liu;Tianshui Chen;Keze Wang;Liang Lin
DOI: 10.1109/TIP.2025.3588255 | Vol. 34, pp. 4631-4645 | Published: 2025-07-17
Abstract: The class-agnostic counting (CAC) task has recently been proposed to solve the problem of counting all objects of an arbitrary class given several exemplars in the input image. To address this challenging task, existing leading methods all resort to density map regression, which renders them impractical for downstream tasks that require object locations and restricts their ability to fully explore the scale information of exemplars for supervision. Meanwhile, they generally model the interaction between the input image and the exemplars in an exemplar-by-exemplar way, which is inefficient and may not fully synthesize information from all exemplars. To address these limitations, we propose a novel localization-based CAC approach, termed Scale-modulated Query and Localization Network (SQLNet). It fully explores the scales of exemplars in both the query and localization stages and achieves effective counting by accurately locating each object and predicting its approximate size. Specifically, during the query stage, rich discriminative representations of the target class are acquired by the Hierarchical Exemplars Collaborative Enhancement (HECE) module from the few exemplars through multi-scale exemplar cooperation with equifrequent size prompt embedding. These representations are then fed into the Exemplars-Unified Query Correlation (EUQC) module to interact with the query features in a unified manner and produce the correlated query tensor. In the localization stage, the Scale-aware Multi-head Localization (SAML) module utilizes the query tensor to predict the confidence, location, and size of each potential object. Moreover, a scale-aware localization loss is introduced, which exploits flexible location associations and exemplar scales for supervision to optimize the model performance. Extensive experiments demonstrate that SQLNet outperforms state-of-the-art methods on popular CAC benchmarks, achieving excellent performance not only in counting accuracy but also in localization and bounding box generation.
Citations: 0
Evaluation of Alpha-Trees for Hierarchical Segmentation by Horizontal Cuts
IF 13.7
Xiaoxuan Zhang;Michael H. F. Wilkinson
DOI: 10.1109/TIP.2025.3588250 | Vol. 34, pp. 4646-4659 | Published: 2025-07-17
Abstract: Alpha-trees, and the α-ω-hierarchies derived from them, are powerful tools for hierarchical image representation in computer vision. However, the quality of α-ω-hierarchies has not been fully evaluated, limiting their further development and application. In this study, we propose an algorithm for evaluating the quality of α-ω-hierarchies based on horizontal-cut filters. With the aim of automatically selecting optimal parameters and dissimilarity measures for constructing α-ω-hierarchies, key factors including the maximum accuracy, construction complexity, and efficiency of α-ω-hierarchies are considered systematically. Experiments on remote sensing images demonstrate the usefulness of this algorithm. In addition, the algorithm can potentially be extended to assess other types of hierarchical trees, making it useful for the automatic selection of optimal hierarchical segmentation methods.
Citations: 0
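A horizontal cut of an alpha-tree at level α yields the α-connected components of the image (pixels joined by paths whose edge dissimilarities never exceed α). The sketch below computes a single such cut directly with union-find on 4-neighbour intensity differences; it is illustrative only, since the paper evaluates whole α-ω-hierarchies rather than one cut.

```python
import numpy as np

def horizontal_cut_labels(img, alpha):
    """Label the α-connected components of a grayscale image.

    Merges 4-neighbours whose absolute intensity difference is <= alpha, which
    is equivalent to taking the horizontal cut of the alpha-tree at level alpha.
    """
    h, w = img.shape
    parent = np.arange(h * w)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    f = img.astype(np.int64)
    for y in range(h):
        for x in range(w):
            p = y * w + x
            if x + 1 < w and abs(f[y, x] - f[y, x + 1]) <= alpha:
                union(p, p + 1)             # merge with right neighbour
            if y + 1 < h and abs(f[y, x] - f[y + 1, x]) <= alpha:
                union(p, p + w)             # merge with bottom neighbour
    return np.array([find(p) for p in range(h * w)]).reshape(h, w)
```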
Multi-View Disparity Estimation Using the Gradient Consistency Model
James Lyndon Gray;Aous Thabit Naman;David S. Taubman
DOI: 10.1109/TIP.2025.3588322 | Vol. 34, pp. 4676-4690 | Published: 2025-07-17
Abstract: Variational approaches to disparity estimation typically use a linearised brightness constancy constraint, which only applies in smooth regions and over small distances. Accordingly, current variational approaches rely on a schedule to progressively include image data. This paper proposes the use of Gradient Consistency information to assess the validity of the linearisation; this information is used to determine the weights applied to the data term as part of an analytically inspired Gradient Consistency Model. The Gradient Consistency Model penalises the data term for view pairs that have a mismatch between the spatial gradients in the source view and the spatial gradients in the target view. Instead of relying on a tuned or learned schedule, the Gradient Consistency Model is self-scheduling, since the weights evolve as the algorithm progresses. We show that the Gradient Consistency Model outperforms standard coarse-to-fine schemes and the recently proposed progressive inclusion of views approach in both rate of convergence and accuracy.
Citations: 0
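The Gradient Consistency Model down-weights the data term where the source- and target-view gradients disagree. The sketch below is a simple heuristic stand-in for that weighting, assuming the target-view gradients have already been warped into the source frame; it is not the paper's analytically derived model.

```python
import numpy as np

def gradient_consistency_weight(grad_src, grad_tgt, eps=1e-6):
    """Per-pixel data-term weight from the mismatch between view gradients.

    grad_src, grad_tgt: arrays of shape (2, H, W) holding spatial gradients
    (dx, dy) of the source view and of the target view warped into the source
    frame. The weight decays toward 0 where the two gradients disagree, so the
    linearised brightness-constancy term contributes little there.
    """
    diff = np.linalg.norm(grad_src - grad_tgt, axis=0)
    mag = np.linalg.norm(grad_src, axis=0) + np.linalg.norm(grad_tgt, axis=0) + eps
    return 1.0 - np.clip(diff / mag, 0.0, 1.0)
```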
ED4: Explicit Data-Level Debiasing for Deepfake Detection
Jikang Cheng;Ying Zhang;Qin Zou;Zhiyuan Yan;Chao Liang;Zhongyuan Wang;Chen Li
DOI: 10.1109/TIP.2025.3588323 | Vol. 34, pp. 4618-4630 | Published: 2025-07-17
Abstract: Learning intrinsic bias from limited data has been considered the main reason deepfake detection fails to generalize. Beyond the previously identified content and specific-forgery biases, we reveal a novel spatial bias, in which detectors habitually expect structural forgery clues to appear at the image center; this also leads to the poor generalization of existing methods. We present ED4, a simple and effective strategy that addresses the aforementioned biases explicitly at the data level in a unified framework, rather than through implicit disentanglement via network design. In particular, we develop ClockMix, which produces facial-structure-preserving mixtures of arbitrary samples, allowing the detector to learn from an exponentially extended data distribution with far more diverse identities, backgrounds, local manipulation traces, and co-occurrences of multiple forgery artifacts. We further propose the Adversarial Spatial Consistency Module (AdvSCM) to prevent the extraction of spatially biased features: it adversarially generates spatially inconsistent images and constrains their extracted features to be consistent. As a model-agnostic debiasing strategy, ED4 is plug-and-play: it can be integrated with various deepfake detectors for significant gains. Extensive experiments demonstrate its effectiveness and superiority over existing deepfake detection approaches. Code is available at https://github.com/beautyremain/ED4.
Citations: 0
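AdvSCM counters spatial bias by generating spatially inconsistent images and constraining their features to match. The sketch below is a simplified, non-adversarial stand-in for that idea: it shuffles a patch grid with a random permutation and penalizes the feature distance between the original and shuffled images; the grid size and the MSE penalty are assumptions, not the module's actual design.

```python
import torch
import torch.nn.functional as F

def spatial_consistency_loss(encoder, images, grid=4):
    """Penalize features that depend on where content sits in the image.

    images: (B, C, H, W) with H and W divisible by grid. The image is split
    into a grid x grid patch grid, the patches are randomly permuted, and the
    encoder's features for the original and permuted images are pushed together.
    """
    b, c, h, w = images.shape
    ph, pw = h // grid, w // grid
    patches = images.unfold(2, ph, ph).unfold(3, pw, pw)          # (b, c, g, g, ph, pw)
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, grid * grid, c, ph, pw)
    perm = torch.randperm(grid * grid, device=images.device)
    shuffled = patches[:, perm]                                   # same permutation for the whole batch
    shuffled = shuffled.reshape(b, grid, grid, c, ph, pw).permute(0, 3, 1, 4, 2, 5)
    shuffled = shuffled.reshape(b, c, h, w)                       # reassemble the shuffled image
    return F.mse_loss(encoder(images), encoder(shuffled))
```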