IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society): Latest Articles

Partial Domain Adaptation via Importance Sampling-Based Shift Correction
IF 13.7
Cheng-Jun Guo;Chuan-Xian Ren;You-Wei Luo;Xiao-Lin Xu;Hong Yan
DOI: 10.1109/TIP.2025.3593115 | Vol. 34, pp. 5009-5022 | Published: 2025-08-01
Abstract: Partial domain adaptation (PDA) is a challenging task in real-world machine learning scenarios. It aims to transfer knowledge from a labeled source domain to a related unlabeled target domain, where the support set of the source label distribution subsumes the target one. Previous PDA works corrected the label distribution shift by weighting samples in the source domain. However, the simple reweighting technique cannot explore the latent structure or sufficiently use the labeled data, so models are prone to overfitting on the source domain. In this work, we propose a novel importance sampling-based shift correction (IS2C) method, where new labeled data are sampled from a built sampling domain, whose label distribution is assumed to match the target domain's, to characterize the latent structure and enhance the generalization ability of the model. We provide theoretical guarantees for IS2C by proving that the generalization error can be bounded under IS2C. In particular, by implementing sampling with a mixture distribution, the extent of shift between the source and sampling domains can be connected to the generalization error, which provides an interpretable way to build IS2C. To improve knowledge transfer, an optimal transport-based independence criterion is proposed for conditional distribution alignment, where the computation of the criterion can be adjusted to reduce the complexity from $\mathcal{O}(n^{3})$ to $\mathcal{O}(n^{2})$ in realistic PDA scenarios. Extensive experiments on PDA benchmarks validate the theoretical results and demonstrate the effectiveness of IS2C over existing methods.
Citations: 0
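The shift-correction idea behind this family of methods can be illustrated with plain importance weighting: each source sample gets weight $w(y) = p_T(y)/p_S(y)$, so classes outside the target's label support are weighted out. This is a minimal sketch of the general principle, not the paper's IS2C algorithm; the function name and toy data are illustrative.

```python
from collections import Counter

def importance_weights(source_labels, target_label_dist):
    """Per-sample importance weights w(y) = p_T(y) / p_S(y).

    In the PDA setting the target label support is a subset of the
    source's, so classes absent from the target get weight 0.
    """
    n = len(source_labels)
    p_s = {y: c / n for y, c in Counter(source_labels).items()}
    return [target_label_dist.get(y, 0.0) / p_s[y] for y in source_labels]

# Source covers classes {0, 1, 2}; the target only contains {0, 1}.
src = [0, 0, 1, 1, 2, 2]
w = importance_weights(src, {0: 0.75, 1: 0.25})
print(w)  # samples of class 2 receive weight 0
```

Sampling a new labeled set proportionally to these weights yields a "sampling domain" whose label distribution matches the target's, which is the starting point IS2C builds on.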
Multi-Source Domain Generalization for Learned Lossless Volumetric Biomedical Image Compression
IF 13.7
Dongmei Xue;Siqi Wu;Li Li;Dong Liu;Zhu Li
DOI: 10.1109/TIP.2025.3592549 | Vol. 34, pp. 4896-4907 | Published: 2025-07-30
Abstract: Learned lossless compression methods for volumetric biomedical images have achieved significant performance improvements compared with traditional ones. However, they often perform poorly when applied to unseen domains due to domain gap issues. To address this problem, we propose a multi-source domain generalization method to handle two main sources of domain gap: modality and structure differences. To address modality differences, we develop an adaptive modality transfer (AMT) module, which predicts a set of modality-specific parameters from the original image and embeds them into the bit stream. These parameters control the weights of a mixture of experts to create a dynamic convolution, which is then used for entropy coding to facilitate modality transfer. To address structure differences, we design an adaptive structure transfer (AST) module, which decomposes high dynamic range biomedical images into least significant bits (LSB) and most significant bits (MSB) in the wavelet domain. The MSB information, which is unique to the test image, is then used to predict an additional set of dynamic convolutions to enable structure transfer. Experimental results show that our approach reduces performance degradation caused by the domain gap to within 3% across various volumetric biomedical modalities. This paves the way for practical end-to-end biomedical image compression.
Citations: 0
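The mixture-of-experts mechanism described above amounts to blending several expert kernels into one dynamic kernel, $k = \sum_i \alpha_i k_i$, with image-specific weights $\alpha_i$, then convolving with the result. A minimal 1-D sketch of that mechanism (not the paper's AMT module; kernels and weights here are toy values):

```python
def mix_experts(experts, alphas):
    """Blend expert kernels into one dynamic kernel: k = sum_i a_i * k_i."""
    assert len(experts) == len(alphas)
    size = len(experts[0])
    return [sum(a * k[j] for a, k in zip(alphas, experts)) for j in range(size)]

def conv1d_valid(signal, kernel):
    """Plain 'valid' 1-D convolution (correlation form) with the mixed kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

experts = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]   # two toy "experts"
alphas  = [0.5, 0.5]                           # modality-specific weights
kernel  = mix_experts(experts, alphas)         # -> [0.5, 0.0, 0.5]
print(conv1d_valid([1.0, 2.0, 3.0, 4.0], kernel))  # [2.0, 3.0]
```

In the paper's setting the $\alpha_i$ are predicted from the input image and written into the bit stream, so the decoder can rebuild the same dynamic kernel for entropy coding.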
Associate Everything Detected: Facilitating Tracking-by-Detection to the Unknown
IF 13.7
Zimeng Fang;Chao Liang;Xue Zhou;Shuyuan Zhu;Xi Li
DOI: 10.1109/TIP.2025.3592524 | Vol. 34, pp. 4830-4842 | Published: 2025-07-30
Abstract: Multi-object tracking (MOT) is a pivotal and highly promising branch of computer vision. Classical closed-vocabulary MOT (CV-MOT) methods aim to track objects of predefined categories. Recently, some open-vocabulary MOT (OV-MOT) methods have successfully addressed the problem of tracking unknown categories. However, we found that CV-MOT and OV-MOT methods each struggle to excel in the tasks of the other. In this paper, we present a unified framework, Associate Everything Detected (AED), that simultaneously tackles CV-MOT and OV-MOT by integrating with any off-the-shelf detector and supports unknown categories. Different from existing tracking-by-detection MOT methods, AED dispenses with prior knowledge (e.g., motion cues) and relies solely on highly robust feature learning to handle complex trajectories in OV-MOT tasks while keeping excellent performance in CV-MOT tasks. Specifically, we model the association task as a similarity decoding problem and propose a sim-decoder with an association-centric learning mechanism. The sim-decoder calculates similarities in three aspects: spatial, temporal, and cross-clip. Subsequently, association-centric learning leverages these threefold similarities to ensure that the extracted features are appropriate for continuous tracking and robust enough to generalize to unknown categories. Compared with existing powerful OV-MOT and CV-MOT methods, AED achieves superior performance on TAO, SportsMOT, and DanceTrack without any prior knowledge. Our code is available at https://github.com/balabooooo/AED
Citations: 0
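Appearance-only association of the kind described reduces, at its simplest, to comparing embedding similarities between existing tracks and new detections and matching greedily. This sketch illustrates that baseline idea only, not AED's sim-decoder; all names and vectors are illustrative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (nu * nv)

def greedy_associate(tracks, dets, thresh=0.5):
    """Match each track to its most similar not-yet-used detection."""
    pairs, used = {}, set()
    for t_id, t_emb in tracks.items():
        best, best_s = None, thresh
        for d_id, d_emb in dets.items():
            s = cosine(t_emb, d_emb)
            if d_id not in used and s > best_s:
                best, best_s = d_id, s
        if best is not None:
            pairs[t_id] = best
            used.add(best)
    return pairs

tracks = {"t1": [1.0, 0.0], "t2": [0.0, 1.0]}
dets   = {"d1": [0.9, 0.1], "d2": [0.1, 0.9]}
print(greedy_associate(tracks, dets))  # {'t1': 'd1', 't2': 'd2'}
```

Because nothing here depends on category labels or motion models, the same matching logic applies unchanged to detections of unknown classes, which is the property the paper's feature learning is designed to make robust.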
Multi-Level Contextual Prototype Modulation for Compositional Zero-Shot Learning
IF 13.7
Yang Liu;Xinshuo Wang;Xinbo Gao;Jungong Han;Ling Shao
DOI: 10.1109/TIP.2025.3592560 | Vol. 34, pp. 4856-4868 | Published: 2025-07-30
Abstract: Compositional Zero-Shot Learning (CZSL) aims to recognize unseen attribute-object compositions by leveraging prior knowledge of known primitives. However, real-world visual features of attributes and objects are often entangled, causing distribution shifts between seen and unseen combinations. Existing methods often ignore intrinsic variations and interactions among primitives, leading to poor feature discrimination and biased predictions. To address these challenges, we propose Multi-level Contextual Prototype Modulation (MCPM), a transformer-based framework with a hierarchical structure that effectively integrates attributes and objects to generate richer visual embeddings. At the feature level, we apply contrastive learning to improve discriminability across compositional tasks. At the prototype level, a subclass-driven modulator captures fine-grained attribute-object interactions, enabling better adaptation to long-tail distributions. Additionally, we introduce a Minority Attribute Enhancement (MAE) strategy that synthesizes virtual samples by mixing attribute classes, further mitigating data imbalance. Experiments on four benchmark datasets (MIT-States, C-GQA, UT-Zappos, and VAW-CZSL) show that MCPM brings significant performance improvements, verifying its effectiveness in complex compositional scenes.
Citations: 0
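Synthesizing virtual samples by mixing attribute classes is, in its simplest form, a mixup-style convex combination biased toward rare classes. A minimal sketch of that idea under stated assumptions (the function names, the convex-combination form, and the "rarest class" selection rule are illustrative, not the paper's exact MAE strategy):

```python
import random

def mix_minority(x_a, x_b, lam=0.7):
    """Virtual sample as a convex combination of two feature vectors."""
    return [lam * a + (1 - lam) * b for a, b in zip(x_a, x_b)]

def augment_minority(samples_by_attr, counts, k=2, seed=0):
    """Synthesize k virtual samples anchored on the rarest attribute class."""
    rng = random.Random(seed)
    rare = min(counts, key=counts.get)          # least-represented attribute
    pool = samples_by_attr[rare]
    others = [s for a, ss in samples_by_attr.items() if a != rare for s in ss]
    return [mix_minority(rng.choice(pool), rng.choice(others)) for _ in range(k)]

virtual = mix_minority([1.0, 0.0], [0.0, 1.0], lam=0.7)  # roughly [0.7, 0.3]
print(virtual)
```

The point of anchoring the mix on the minority class is that every virtual sample carries some of its signal, partially rebalancing a long-tail attribute distribution.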
AdaAugment: A Tuning-Free and Adaptive Approach to Enhance Data Augmentation
IF 13.7
Suorong Yang;Peijia Li;Xin Xiong;Furao Shen;Jian Zhao
DOI: 10.1109/TIP.2025.3592538 | Vol. 34, pp. 4843-4855 | Published: 2025-07-30
Abstract: Data augmentation (DA) is widely employed to improve the generalization performance of deep models. However, most existing DA methods apply augmentation operations with fixed or random magnitudes throughout the training process. While this fosters data diversity, it can also introduce uncontrolled variability in the augmented data, which may become misaligned with the evolving training status of the target model. Both theoretical and empirical findings suggest that this misalignment increases the risks of both underfitting and overfitting. To address these limitations, we propose AdaAugment, an innovative and tuning-free adaptive augmentation method that leverages reinforcement learning to dynamically adjust augmentation magnitudes for individual training samples based on real-time feedback from the target network. Specifically, AdaAugment features a dual-model architecture consisting of a policy network and a target network, jointly optimized to adapt augmentation magnitudes in accordance with the model's training progress: the policy network optimizes the variability within the augmented data, while the target network trains on the adaptively augmented samples. The two networks mutually reinforce each other. Extensive experiments across benchmark datasets and deep architectures demonstrate that AdaAugment consistently outperforms other state-of-the-art DA methods in effectiveness while maintaining remarkable efficiency. Code is available at https://github.com/Jackbrocp/AdaAugment
Citations: 0
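The feedback loop described, where augmentation strength tracks the model's training status, can be caricatured without any reinforcement learning: widen augmentation when the train/validation gap signals overfitting, narrow it when the signal points to underfitting. This is a deliberately simplified proportional controller, not AdaAugment's policy network; the thresholds and step size are made-up illustrative values.

```python
def update_magnitude(mag, train_loss, val_loss, step=0.05, margin=0.1):
    """Nudge augmentation strength using the generalization gap as feedback.

    A large val - train gap hints at overfitting -> stronger augmentation;
    a negative gap hints at underfitting -> weaker augmentation.
    """
    gap = val_loss - train_loss
    if gap > margin:
        mag += step
    elif gap < 0:
        mag -= step
    return min(1.0, max(0.0, mag))          # keep magnitude in [0, 1]

m = 0.5
m = update_magnitude(m, train_loss=0.2, val_loss=0.6)  # overfitting signal
print(m)  # magnitude increased by one step
```

AdaAugment replaces this crude rule with a learned per-sample policy, but the direction of adjustment it must discover is the same.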
Contour Flow Constraint: Preserving Global Shape Similarity for Deep Learning-Based Image Segmentation
IF 13.7
Shengzhe Chen;Zhaoxuan Dong;Jun Liu
DOI: 10.1109/TIP.2025.3592545 | Vol. 34, pp. 5054-5067 | Published: 2025-07-30
Abstract: For effective image segmentation, it is crucial to employ constraints informed by prior knowledge about the characteristics of the areas to be segmented. However, existing methods have primarily focused on priors of specific properties or shapes, lacking consideration of general global shape similarity from a contour flow perspective. Furthermore, naturally integrating this contour flow prior into deep convolutional segmentation networks through their activation functions has not previously been explored. In this paper, we establish a concept of global shape similarity based on the premise that two shapes exhibit comparable contours, and mathematically derive a contour flow constraint that ensures the preservation of global shape similarity. We propose two implementations to integrate the constraint with deep neural networks. First, the constraint is converted to a shape loss, which can be seamlessly incorporated into the training phase of any learning-based segmentation framework. Second, we add the constraint to a variational segmentation model and derive its iterative solution schemes; the scheme is then unrolled to obtain the architecture of the proposed CFSSnet. Validation experiments are conducted on diverse datasets with classic benchmark deep segmentation networks. The results indicate a great improvement in segmentation accuracy and shape similarity for the proposed shape loss, showcasing its general adaptability regardless of specific network architectures. CFSSnet shows robustness in segmenting noise-contaminated images and an inherent capability to preserve global shape similarity.
Citations: 0
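One way to get intuition for "two shapes exhibit comparable contours" is to compare the edge directions of two polygonal contours: if corresponding edges point the same way, the shapes differ only by scale and translation. This is an illustrative toy measure only, not the paper's contour flow constraint or its derivation; the functions below are hypothetical.

```python
import math

def contour_directions(points):
    """Unit direction of each edge along a closed polygonal contour."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        dx, dy = x1 - x0, y1 - y0
        n = math.hypot(dx, dy)
        dirs.append((dx / n, dy / n))
    return dirs

def shape_dissimilarity(c1, c2):
    """Mean misalignment (1 - cos) between corresponding edge directions."""
    d1, d2 = contour_directions(c1), contour_directions(c2)
    return sum(1 - (u[0] * v[0] + u[1] * v[1]) for u, v in zip(d1, d2)) / len(d1)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
bigger = [(0, 0), (2, 0), (2, 2), (0, 2)]   # same shape, scaled by 2
print(shape_dissimilarity(square, bigger))   # 0.0: all directions agree
```

A differentiable variant of such a contour-direction penalty is the kind of quantity that can serve as an auxiliary shape loss during segmentation training.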
Fourier Boundary Features Network With Wider Catchers for Glass Segmentation
IF 13.7
Xiaolin Qin;Jiacen Liu;Qianlei Wang;Shaolin Zhang;Fei Zhu;Zhang Yi
DOI: 10.1109/TIP.2025.3592522 | Vol. 34, pp. 5038-5053 | Published: 2025-07-30
Abstract: Glass largely blurs the boundary between the real world and its reflection. Its particular transmittance and reflectance properties confound semantic tasks in machine vision. Therefore, clarifying the boundary created by glass, and avoiding over-capturing features as false-positive information in deep layers, matters for constraining the segmentation of reflective surfaces and light-penetrating glass. We propose the Fourier Boundary Features Network with Wider Catchers (FBWC), which may be the first attempt to use sufficiently wide, horizontal shallow branches, without vertical deepening, to guide fine-grained segmentation boundaries with primary glass semantic information. Specifically, we design Wider Coarse-Catchers (WCC) for anchoring large-area segmentation and reducing excessive extraction from a structural perspective. We embed fine-grained features by Cross Transpose Attention (CTA), introduced to avoid incomplete areas within the boundary caused by reflection noise. To extract glass features and balance high- and low-level context, a learnable Fourier Convolution Controller (FCC) is proposed to regulate information integration robustly. The proposed method is validated on three public glass segmentation datasets. Experimental results reveal that the proposed method yields better segmentation performance than state-of-the-art (SOTA) methods in glass image segmentation.
Citations: 0
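Frequency-domain regulation of features, as a Fourier-based controller performs, can be pictured with a tiny 1-D example: transform a signal, scale its low- and high-frequency bands by separate gains, and transform back. This is only a generic illustration of frequency-band gating, not the paper's FCC module; the band split at `n // 4` is an arbitrary illustrative choice.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real 1-D signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def frequency_gate(x, low_gain=1.0, high_gain=0.0):
    """Scale low vs. high frequency bands of a 1-D signal."""
    X = dft(x)
    n = len(X)
    gains = [low_gain if min(k, n - k) <= n // 4 else high_gain
             for k in range(n)]
    return idft([g * c for g, c in zip(gains, X)])

x = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0, 1.0, 2.0]   # DC plus highest frequency
y = frequency_gate(x)                           # suppress the high band
print([round(v, 6) for v in y])                 # only the mean (1.5) remains
```

Making the per-band gains learnable, as the FCC does, turns this fixed filter into a trainable controller over how much high-frequency (boundary-like) information passes through.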
Image Denoising Using Green Channel Prior
IF 13.7
Zhaoming Kong;Fangxi Deng;Xiaowei Yang
DOI: 10.1109/TIP.2025.3592550 | Vol. 34, pp. 4869-4884 | Published: 2025-07-30
Abstract: Image denoising is an appealing and challenging task, in that the noise statistics of real-world observations may vary with local image content and across image channels. Specifically, the green channel usually has twice the sampling rate in raw data. To handle noise variances and leverage this channel-wise prior information, we propose a simple and effective green channel prior-based image denoising (GCP-ID) method, which integrates the GCP into the classic patch-based denoising framework. Briefly, we exploit the green channel to guide the search for similar patches, aiming to improve patch grouping quality and encourage sparsity in the transform domain. The grouped image patches are then reformulated into RGGB arrays to explicitly characterize the density of green samples. Furthermore, to enhance the adaptivity of GCP-ID to various image contents, we cast the noise estimation problem as a classification task and train an effective estimator based on convolutional neural networks (CNNs). Experiments on real-world datasets demonstrate the competitive performance of the proposed GCP-ID method for image and video denoising applications in both raw and sRGB spaces. Our code is available at https://github.com/ZhaomingKong/GCP-ID
Citations: 0
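The green-channel-guided patch search can be sketched directly: measure patch distances on the G channel only (the least noisy channel in raw data, thanks to its doubled sampling rate) and group the nearest patches. A minimal illustration of that grouping step, not GCP-ID itself; the tiny two-pixel "patches" are toy data.

```python
def green_distance(p, q):
    """Squared distance between two RGB patches using the G channel only."""
    return sum((a[1] - b[1]) ** 2 for a, b in zip(p, q))

def group_similar(ref, patches, k=2):
    """Indices of the k patches closest to `ref` under the green-channel metric."""
    order = sorted(range(len(patches)),
                   key=lambda i: green_distance(ref, patches[i]))
    return order[:k]

# Flattened 2-pixel RGB "patches"; only green values drive the grouping.
ref     = [(10, 100, 10), (10, 120, 10)]
patches = [[(0, 101, 0), (0, 119, 0)],     # green nearly identical
           [(10, 10, 10), (10, 20, 10)],   # green very different
           [(90, 98, 90), (90, 123, 90)]]  # R/B differ, green close
print(group_similar(ref, patches))  # [0, 2]
```

Because the metric ignores the noisier red and blue channels, patches with genuinely similar structure are grouped even when chroma noise differs, which improves sparsity after the transform step.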
Hypergraph Mamba Reasoning-Based Social Relation Recognition
IF 13.7
Wang Tang;Linbo Qing;Pingyu Wang;Lindong Li;Ce Zhu
DOI: 10.1109/TIP.2025.3592551 | Vol. 34, pp. 4814-4829 | Published: 2025-07-30
Abstract: Recognizing social relations from images is crucial for improving machine perception of social interactions. Current studies mainly focus on single-type relation reasoning frameworks, such as the relations between father, mother, and son in a family. However, real-world scenarios often involve complex hybrid relations, such as friendships and professional relations, which pose a challenge for current methods due to the difficulty of establishing robust logical connections between these relations. In this hybrid social relation recognition setting, the interactions extend beyond dyadic to multipartite structures. To effectively explore these multipartite interactions, we propose a novel Hypergraph Mamba (HGM) framework. Specifically, we construct two hypergraphs, i.e., Person-Person Hypergraphs (PPH) and Person-Object Hypergraphs (POH), to model these high-order multipartite interactions. The HGM module performs social relation reasoning within these hypergraph structures, and includes a Vertex Selection Algorithm to mitigate inference confusion by filtering out confounders, and a Vertex Interaction Operator to find optimal global vertex neighborhoods by capturing long-range vertex dependencies. In addition, a Multilevel Transformer is proposed to adaptively align the PPH- and POH-inferred knowledge with visual signals to facilitate information fusion. We validate the effectiveness of our proposed HGM model on several public datasets and perform extensive ablation studies to elucidate the reasons for its superior performance. Experimental results indicate that our HGM model achieves superior accuracy in predicting social relations compared to state-of-the-art methods. Codes and datasets are available at: https://github.com/tw-repository/HGM-SRR
Citations: 0
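A hyperedge connects an arbitrary subset of vertices, which is what lets it encode a multipartite interaction (a whole group of people, or a person-object cluster) rather than a pairwise one. A minimal vertex-to-hyperedge-to-vertex averaging round, using scalar features for brevity, illustrates the propagation pattern; it is a generic hypergraph message-passing sketch, not the HGM module.

```python
def hypergraph_step(features, hyperedges):
    """One round of vertex -> hyperedge -> vertex averaging."""
    # Hyperedge message: mean of its member vertices' features.
    edge_msgs = [sum(features[v] for v in e) / len(e) for e in hyperedges]
    new = []
    for v in range(len(features)):
        # Vertex update: mean over the messages of its incident hyperedges.
        incident = [m for e, m in zip(hyperedges, edge_msgs) if v in e]
        new.append(sum(incident) / len(incident) if incident else features[v])
    return new

feats = [1.0, 3.0, 5.0, 100.0]
# A "person-person" hyperedge {0,1,2} and a "person-object" hyperedge {2,3}.
print(hypergraph_step(feats, [{0, 1, 2}, {2, 3}]))  # [3.0, 3.0, 27.75, 52.5]
```

Vertex 2, the only member of both hyperedges, blends group context from each, which is the high-order mixing that pairwise graph edges cannot express directly.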
Dynamic Personalized Federated Learning for Cross-Spectral Palmprint Recognition
IF 13.7
Shuyi Li;Jianian Hu;Bob Zhang;Xin Ning;Lifang Wu
DOI: 10.1109/TIP.2025.3592508 | Vol. 34, pp. 4885-4895 | Published: 2025-07-30
Abstract: Palmprint recognition has recently garnered attention due to its high accuracy, strong robustness, and high security. Existing deep learning-based palmprint recognition methods usually require large amounts of data for centralized training, facing the challenge of privacy disclosure. In addition, the non-independent and identically distributed (non-IID) nature of multi-spectral palmprint images generally degrades recognition performance. To tackle these problems, this paper proposes a dynamic personalized federated learning model for cross-spectral palmprint recognition, called DPFed-Palm. Specifically, for each client's local training, we present a new combination of loss functions to enforce the constraints of the local models and effectively enhance their feature representation capability. Subsequently, DPFed-Palm aggregates the trained local models using the combined aggregation strategies of Federated Averaging (FedAvg) and Personalized Federated Learning (PFL) to obtain the best personalized global model for each client. For this selection, we develop a dynamic weight selection strategy that obtains the optimal weights of the local and global models by cross-spectral (cross-client) testing. Extensive experimental results on the three public PolyU multispectral, IITD, and CASIA datasets show that the proposed method outperforms existing techniques in privacy preservation and recognition performance.
Citations: 0
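The two aggregation ingredients named above are easy to sketch: FedAvg forms a data-size-weighted average of client parameters, and personalization interpolates each client's local model with that global model. The blending weight `alpha` below stands in for the paper's dynamically selected weight; the rest is the standard FedAvg recipe, shown on toy 2-parameter "models".

```python
def fedavg(client_weights, client_sizes):
    """Data-size-weighted average of client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * n for w, n in zip(client_weights, client_sizes)) / total
            for j in range(dim)]

def personalize(local_w, global_w, alpha):
    """Per-client blend of local and global models; alpha would be chosen
    dynamically (e.g., by cross-client validation) in DPFed-Palm."""
    return [alpha * l + (1 - alpha) * g for l, g in zip(local_w, global_w)]

clients = [[1.0, 2.0], [3.0, 4.0]]
g = fedavg(clients, client_sizes=[1, 3])   # second client has 3x the data
print(g)                                   # [2.5, 3.5]
print(personalize(clients[0], g, 0.5))     # [1.75, 2.75]
```

Only model parameters, never raw palmprint images, leave a client, which is how the federated setup addresses the privacy concern raised in the abstract.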