IEEE Transactions on Image Processing (A Publication of the IEEE Signal Processing Society): Latest Articles

USB-Net: Unfolding Split Bregman Method With Multi-Phase Feature Integration for Compressive Imaging
Zhen Guo;Hongping Gan
{"title":"USB-Net: Unfolding Split Bregman Method With Multi-Phase Feature Integration for Compressive Imaging","authors":"Zhen Guo;Hongping Gan","doi":"10.1109/TIP.2025.3533198","DOIUrl":"10.1109/TIP.2025.3533198","url":null,"abstract":"Existing unfolding-based compressive imaging approaches always suffer from certain issues, including inefficient feature extraction and information loss during iterative reconstruction phases, which become particularly evident at low sampling ratios, i.e., significant detail degradation and distortion in reconstructed images. To mitigate these challenges, we propose USB-Net, a deep unfolding method inspired by the renowned Split Bregman algorithm and multi-phase feature integration strategy, for compressive imaging reconstruction. Specifically, we use a customized Depthwise Attention Block as a fundamental block for feature extraction, but also to address the sparse induction-related splitting operator within Split Bregman method. Based on this, we introduce three Auxiliary Iteration Modules: <inline-formula> <tex-math>${mathrm {X}}^{(k)}$ </tex-math></inline-formula>, <inline-formula> <tex-math>${mathrm {D}}^{(k)}$ </tex-math></inline-formula>, and <inline-formula> <tex-math>${mathrm {B}}^{(k)}$ </tex-math></inline-formula> to reinforce the effectiveness of Split Bregman’s decomposition strategy for problem breakdown and Bregman iterations. Moreover, we introduce two categories of Iterative Fusion Modules to seamlessly harmonize and integrate insights across iterative reconstruction phases, enhancing the utilization of crucial features, such as edge information and textures. In general, USB-Net can fully harness the advantages of traditional Split Bregman approach, manipulating multi-phase iterative insights to enhance feature extraction, optimize data fidelity, and achieve high-quality image reconstruction. 
Extensive experiments show that USB-Net significantly outperforms current state-of-the-art methods on image compressive sensing, CS-magnetic resonance imaging, and snapshot compressive imaging tasks, demonstrating superior generalizability. Our code is available at USB-Net.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"925-938"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143057119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
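The Split Bregman scheme that USB-Net unfolds alternates a data-fidelity solve, a shrinkage step, and a Bregman update. As a point of reference, here is a minimal classical (non-learned) sketch on 1-D anisotropic TV denoising; the problem setup, parameter values, and function names are illustrative and are not taken from the paper.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: closed-form solution of the d-subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_tv1d(f, mu=10.0, lam=1.0, n_iter=50):
    """Split Bregman iteration for 1-D anisotropic TV denoising:
       min_x (mu/2)||x - f||^2 + lam*||Dx||_1, with the splitting d = Dx."""
    n = f.size
    # Forward-difference operator D ((n-1) x n).
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = mu * np.eye(n) + lam * D.T @ D  # normal matrix of the x-subproblem
    x = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        # X-update: quadratic data-fidelity solve.
        x = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        # D-update: shrinkage on the split gradient variable.
        d = shrink(D @ x + b, 1.0 / lam)
        # B-update: Bregman variable accumulates the splitting residual.
        b = b + D @ x - d
    return x

# Noisy piecewise-constant step signal: TV denoising recovers the flat segments.
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
x = split_bregman_tv1d(f)
```

USB-Net's $\mathrm{X}^{(k)}$, $\mathrm{D}^{(k)}$, and $\mathrm{B}^{(k)}$ modules correspond to learned versions of these three updates.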
Hyperspectral Image Classification via Cascaded Spatial Cross-Attention Network
Bo Zhang;Yaxiong Chen;Shengwu Xiong;Xiaoqiang Lu
{"title":"Hyperspectral Image Classification via Cascaded Spatial Cross-Attention Network","authors":"Bo Zhang;Yaxiong Chen;Shengwu Xiong;Xiaoqiang Lu","doi":"10.1109/TIP.2025.3533205","DOIUrl":"10.1109/TIP.2025.3533205","url":null,"abstract":"In hyperspectral images (HSIs), different land cover (LC) classes have distinct reflective characteristics at various wavelengths. Therefore, relying on only a few bands to distinguish all LC classes often leads to information loss, resulting in poor average accuracy. To address this problem, we propose a method called Cascaded Spatial Cross-Attention Network (CSCANet) for HSI classification. We design a cascaded spatial cross-attention module, which first performs cross-attention on local and global features in the spatial context, then uses a group cascade structure to sequentially propagate important spatial regions within the different channels, and finally obtains joint attention features to improve the robustness of the network. Moreover, we also design a two-branch feature separation structure based on spatial-spectral features to separate different LC Tokens as much as possible, thereby improving the distinguishability of different LC classes. Extensive experiments demonstrate that our method achieves excellent performance in enhancing classification accuracy and robustness. 
The source code can be obtained from <uri>https://github.com/WUTCM-Lab/CSCANet</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"899-913"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056577","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Deep Label Propagation With Nuclear Norm Maximization for Visual Domain Adaptation
Wei Wang;Hanyang Li;Cong Wang;Chao Huang;Zhengming Ding;Feiping Nie;Xiaochun Cao
{"title":"Deep Label Propagation With Nuclear Norm Maximization for Visual Domain Adaptation","authors":"Wei Wang;Hanyang Li;Cong Wang;Chao Huang;Zhengming Ding;Feiping Nie;Xiaochun Cao","doi":"10.1109/TIP.2025.3533199","DOIUrl":"10.1109/TIP.2025.3533199","url":null,"abstract":"Domain adaptation aims to leverage abundant label information from a source domain to an unlabeled target domain with two different distributions. Existing methods usually rely on a classifier to generate high-quality pseudo-labels for the target domain, facilitating the learning of discriminative features. Label propagation (LP), as an effective classifier, propagates labels from the source domain to the target domain by designing a smooth function over a similarity graph, which represents structural relationships among data points in feature space. However, LP has not been thoroughly explored in deep neural network-based domain adaptation approaches. Additionally, the probability labels generated by LP are low-confident and LP is sensitive to class imbalance problem. To address these problems, we propose a novel approach for domain adaptation named deep label propagation with nuclear norm maximization (DLP-NNM). Specifically, we employ the constraint of nuclear norm maximization to enhance both label confidence and class diversity in LP and propose an efficient algorithm to solve the corresponding optimization problem. Subsequently, we utilize the proposed LP to guide the classifier layer in a deep discriminative adaptation network using the cross-entropy loss. As such, the network could produce more reliable predictions for the target domain, thereby facilitating more effective discriminative feature learning. 
Extensive experimental results on three cross-domain benchmark datasets demonstrate that the proposed DLP-NNM surpasses existing state-of-the-art domain adaptation approaches.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1246-1258"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
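DLP-NNM builds on classical graph-based label propagation. The sketch below is a minimal closed-form LP in the style of the smooth-function formulation the abstract refers to, without the paper's nuclear norm term or deep features; the toy graph, seed choice, and parameter value are invented for illustration.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9):
    """Closed-form label propagation over a similarity graph:
       F* = (1 - alpha) (I - alpha * S)^{-1} Y,
       where S is the symmetrically normalized affinity matrix."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ W @ D_inv_sqrt      # normalized similarity graph
    n = W.shape[0]
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)              # hard pseudo-labels

# Toy graph: two 3-node clusters joined by one edge, one labeled node each.
W = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
Y = np.zeros((6, 2))
Y[0, 0] = 1.0   # seed: node 0 -> class 0
Y[5, 1] = 1.0   # seed: node 5 -> class 1
labels = label_propagation(W, Y)
```

The labels propagate from the two seeds across the graph, so each unlabeled node inherits the class of its cluster.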
Temporal Fusion: Continuous-Time Light Field Video Factorization
Li-De Chen;Li-Qun Weng;Hao-Chien Cheng;An-Yu Cheng;Chao-Tsung Huang
{"title":"Temporal Fusion: Continuous-Time Light Field Video Factorization","authors":"Li-De Chen;Li-Qun Weng;Hao-Chien Cheng;An-Yu Cheng;Chao-Tsung Huang","doi":"10.1109/TIP.2025.3533203","DOIUrl":"10.1109/TIP.2025.3533203","url":null,"abstract":"A factored display emits full-parallax dense-view light fields for a glasses-free 3D experience without sacrificing the spatial resolution of a liquid-crystal display (LCD). For static light fields, it achieves high-quality reconstruction by applying frame-based low-rank factorization to time-multiplexed sub-frame contents of stacked LCDs. However, for light field videos such frame-based factorization could introduce reconstruction artifacts and visual flickers and further cause human discomfort. The artifacts mainly come from incomplete constraints for the emitted light fields that are actually perceived in continuous time, instead of discrete frames. In particular, the perceived light fields are related to the persistence-of-vision (POV) effect of human eyes and the refresh rates of LCD displays, which is not well explored in previous work. In this work, we introduce a light-field video factorization framework—temporal fusion (TF)—to resolve these issues. To begin with, we explicitly formulate the continuous-time POV effect into a global factorization objective functional to eliminate visual flickers and enhance image quality. We further show that this optimization problem can be solved by sequence-level iterative updates on LCD sub-frames. Then, to tackle the enormous requirement of memory access for the sequence-level processing flow, we devise an efficient cuboid-wise factorization algorithm which enables practical GPU implementation. We also devise another lightweight causal framework, TF-C, for supporting low-latency applications. Finally, extensive experiments are performed to verify the effectiveness. 
Compared to the plain frame-based factorization, TF/TF-C can improve temporal consistency by reducing flicker values by 85%/91% and enhance reconstruction quality by increasing PSNR values by 5.0dB/3.7dB. In addition, we present a prototype dual-layer factored display, which was built with two 240-Hz high-refresh-rate LCDs, to demonstrate the visual quality for real-life applications.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"955-968"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143057122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A New Cross-Space Total Variation Regularization Model for Color Image Restoration With Quaternion Blur Operator
Zhigang Jia;Yuelian Xiang;Meixiang Zhao;Tingting Wu;Michael K. Ng
{"title":"A New Cross-Space Total Variation Regularization Model for Color Image Restoration With Quaternion Blur Operator","authors":"Zhigang Jia;Yuelian Xiang;Meixiang Zhao;Tingting Wu;Michael K. Ng","doi":"10.1109/TIP.2025.3533209","DOIUrl":"10.1109/TIP.2025.3533209","url":null,"abstract":"The cross-channel deblurring problem in color image processing is difficult to solve due to the complex coupling and structural blurring of color pixels. Until now, there are few efficient algorithms that can reduce color artifacts in deblurring process. To solve this challenging problem, we present a novel cross-space total variation (CSTV) regularization model for color image deblurring by introducing a quaternion blur operator and a cross-color space regularization functional. The existence and uniqueness of the solution are proved and a new L-curve method is proposed to find a balance of regularization terms on different color spaces. The Euler-Lagrange equation is derived to show that CSTV has taken into account the coupling of all color channels and the local smoothing within each color channel. A quaternion operator splitting method is firstly proposed to enhance the ability of color artifacts reduction of the CSTV regularization model. This strategy also applies to the well-known color deblurring models. Numerical experiments on color image databases illustrate the efficiency and effectiveness of the new model and algorithms. 
The color images restored by them successfully maintain the color and spatial information and are of higher quality in terms of PSNR, SSIM, MSE and CIEde2000 than the restorations of the-state-of-the-art methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"995-1008"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Self-Supervised Monocular Depth Estimation With Dual-Path Encoders and Offset Field Interpolation
Cheng Feng;Congxuan Zhang;Zhen Chen;Weiming Hu;Ke Lu;Liyue Ge
{"title":"Self-Supervised Monocular Depth Estimation With Dual-Path Encoders and Offset Field Interpolation","authors":"Cheng Feng;Congxuan Zhang;Zhen Chen;Weiming Hu;Ke Lu;Liyue Ge","doi":"10.1109/TIP.2025.3533207","DOIUrl":"10.1109/TIP.2025.3533207","url":null,"abstract":"Although self-supervised learning approaches have demonstrated tremendous potential in multi-frame depth estimation scenarios, existing methods struggle to perform well in cases involving dynamic targets and static ego-camera conditions. To address this issue, we propose a self-supervised monocular depth estimation method featuring dual-path encoders and learnable offset interpolation (LOI). First, we construct a dual-path encoding scheme that utilizes residual and transformer blocks to extract both single- and multi-frame features from the input frames. We design a contrastive learning strategy to effectively decouple single- and multi-frame features, enabling weighted fusion guided by a confidence map. Next, we explore two distinct decoding heads for simultaneously generating low-resolution predictions and offset fields. We then design an LOI module to directly upsample a low-resolution depth map to a full-resolution map. This one-step decoding framework enables accurate and efficient depth prediction. Finally, we evaluate our proposed method on the KITTI and Cityscapes benchmarks, conducting a comprehensive comparison with state-of-the-art approaches. 
The experimental results demonstrate that our DualDepth method achieves competitive performance in terms of both estimation accuracy and efficiency.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"939-954"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Adaptive Bit Selection for Scalable Deep Hashing
Min Wang;Wengang Zhou;Xin Yao;Houqiang Li
{"title":"Adaptive Bit Selection for Scalable Deep Hashing","authors":"Min Wang;Wengang Zhou;Xin Yao;Houqiang Li","doi":"10.1109/TIP.2025.3533215","DOIUrl":"10.1109/TIP.2025.3533215","url":null,"abstract":"Deep Hashing is one of the most important methods for generating compact feature representation in content-based image retrieval. However, in various application scenarios, it requires training different models with diversified memory and computational resource costs. To address this problem, in this paper, we propose a new scalable deep hashing framework, which aims to generate binary codes with different code lengths by adaptive bit selection. Specifically, the proposed framework consists of two alternative steps, i.e., bit pool generation and adaptive bit selection. In the first step, a deep feature extraction model is trained to output binary codes by optimizing retrieval performance and bit properties. In the second step, we select informative bits from the generated bit pool with reinforcement learning algorithm, in which the same retrieval performance and bit properties are directly used in computing reward. The bit pool can be further updated by fine-tuning the deep feature extraction model with more attention on the selected bits. Hence, these two steps are alternatively iterated until convergence is achieved. Notably, most existing binary hashing methods can be readily integrated into our framework to generate scalable binary codes. 
Experiments on four public image datasets prove the effectiveness of the proposed framework for image retrieval tasks.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1048-1059"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143056631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
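The alternation described above (generate a bit pool, then select informative bits) can be illustrated with a deliberately simplified selector. The greedy balance-and-decorrelation heuristic below is a hypothetical stand-in for the paper's reinforcement learning selection, not its actual method; the scoring rule and all names are invented.

```python
import numpy as np

def select_bits(codes, k):
    """Greedily pick k bits from a pool of binary codes (n_samples x n_bits):
       prefer balanced bits (signed mean near 0) that are weakly correlated
       with the bits already selected."""
    n, m = codes.shape
    signed = 2.0 * codes - 1.0                 # map {0,1} -> {-1,+1}
    balance = -np.abs(signed.mean(axis=0))     # 0 = perfectly balanced bit
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(m):
            if j in selected:
                continue
            # Penalize correlation with already-chosen bits.
            corr = 0.0 if not selected else np.abs(
                signed[:, selected].T @ signed[:, j]).mean() / n
            score = balance[j] - corr
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Bit 0 is constant (useless); bits 1-3 are balanced. The selector skips bit 0.
codes = np.array([[1, 0, 0, 1], [1, 1, 0, 0], [1, 0, 1, 1], [1, 1, 1, 0],
                  [1, 0, 0, 0], [1, 1, 0, 1], [1, 0, 1, 0], [1, 1, 1, 1]],
                 dtype=float)
selected = select_bits(codes, 2)
```

A real selector would score bits by retrieval performance, as the paper's reward does; this sketch only captures the "informative, non-redundant subset" idea.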
SARATR-X: Toward Building a Foundation Model for SAR Target Recognition
Weijie Li;Wei Yang;Yuenan Hou;Li Liu;Yongxiang Liu;Xiang Li
{"title":"SARATR-X: Toward Building a Foundation Model for SAR Target Recognition","authors":"Weijie Li;Wei Yang;Yuenan Hou;Li Liu;Yongxiang Liu;Xiang Li","doi":"10.1109/TIP.2025.3531988","DOIUrl":"10.1109/TIP.2025.3531988","url":null,"abstract":"Despite the remarkable progress in synthetic aperture radar automatic target recognition (SAR ATR), recent efforts have concentrated on detecting and classifying a specific category, e.g., vehicles, ships, airplanes, or buildings. One of the fundamental limitations of the top-performing SAR ATR methods is that the learning paradigm is supervised, task-specific, limited-category, closed-world learning, which depends on massive amounts of accurately annotated samples that are expensively labeled by expert SAR analysts and have limited generalization capability and scalability. In this work, we make the first attempt towards building a foundation model for SAR ATR, termed SARATR-X. SARATR-X learns generalizable representations via self-supervised learning (SSL) and provides a cornerstone for label-efficient model adaptation to generic SAR target detection and classification tasks. Specifically, SARATR-X is trained on 0.18 M unlabelled SAR target samples, which are curated by combining contemporary benchmarks and constitute the largest publicly available dataset till now. Considering the characteristics of SAR images, a backbone tailored for SAR ATR is carefully designed, and a two-step SSL method endowed with multi-scale gradient features was applied to ensure the feature diversity and model scalability of SARATR-X. The capabilities of SARATR-X are evaluated on classification under few-shot and robustness settings and detection across various categories and scenes, and impressive performance is achieved, often competitive with or even superior to prior fully supervised, semi-supervised, or self-supervised algorithms. 
Our SARATR-X and the curated dataset are released at <uri>https://github.com/waterdisappear/SARATR-X</uri> to foster research into foundation models for SAR image interpretation.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"869-884"},"PeriodicalIF":0.0,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10856784","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143055091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Leveraging Mixture Alignment for Multi-Source Domain Adaptation
Aveen Dayal;Shrusti S.;Linga Reddy Cenkeramaddi;C. Krishna Mohan;Abhinav Kumar
{"title":"Leveraging Mixture Alignment for Multi-Source Domain Adaptation","authors":"Aveen Dayal;Shrusti S.;Linga Reddy Cenkeramaddi;C. Krishna Mohan;Abhinav Kumar","doi":"10.1109/TIP.2025.3532094","DOIUrl":"10.1109/TIP.2025.3532094","url":null,"abstract":"In a conventional Domain Adaptation (DA) setting, we only have one source and target domain, whereas, in many real-world applications, data is often collected from several related sources in different conditions. This has led to a more practical and challenging knowledge transfer problem called Multi-source Domain Adaptation (MDA). Several methodologies, such as prototype matching, explicit distance discrepancy, adversarial learning, etc., have been considered to tackle the MDA problem in recent years. Among them, the adversarial-based learning framework is a popular methodology for transferring knowledge from multiple sources to target domains using a min-max optimization strategy. Despite the advances in adversarial-based methods, several limitations exist, such as the need for a classifier-aware discrepancy metric to align the domains and the need to consider target samples’ consistency and semantic information while aligning the domains. To mitigate these issues, in this work, we propose a novel adversarial learning MDA algorithm, MDAMA, which aligns the target domain with a mixture distribution that consists of source domains. MDAMA uses margin-based discrepancy and augmented intermediate distributions to align the domains effectively. We also propose consistency of target samples by confidence thresholding and transfer of semantic information from multiple source domains to the augmented target domain to further improve the performance of the target domain. We extensively experiment with the MDAMA algorithm on popular real-world MDA datasets such as OfficeHome, Office31, PACS, Office-Caltech, and DomainNet. 
We evaluate the MDAMA model on these benchmark datasets and demonstrate top performance in all of them.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"885-898"},"PeriodicalIF":0.0,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143055092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification
Lanyun Zhu;Tianrun Chen;Deyi Ji;Jieping Ye;Jun Liu
{"title":"Not Every Patch is Needed: Toward a More Efficient and Effective Backbone for Video-Based Person Re-Identification","authors":"Lanyun Zhu;Tianrun Chen;Deyi Ji;Jieping Ye;Jun Liu","doi":"10.1109/TIP.2025.3531299","DOIUrl":"10.1109/TIP.2025.3531299","url":null,"abstract":"This paper proposes a new effective and efficient plug-and-play backbone for video-based person re-identification (ReID). Conventional video-based ReID methods typically use CNN or transformer backbones to extract deep features for every position in every sampled video frame. Here, we argue that this exhaustive feature extraction could be unnecessary, since we find that different frames in a ReID video often exhibit small differences and contain many similar regions due to the relatively slight movements of human beings. Inspired by this, a more selective, efficient paradigm is explored in this paper. Specifically, we introduce a patch selection mechanism to reduce computational cost by choosing only the crucial and non-repetitive patches for feature extraction. Additionally, we present a novel network structure that generates and utilizes pseudo frame global context to address the issue of incomplete views resulting from sparse inputs. By incorporating these new designs, our backbone can achieve both high performance and low computational cost. 
Extensive experiments on multiple datasets show that our approach reduces the computational cost by 74% compared to ViT-B and 28% compared to ResNet50, while the accuracy is on par with ViT-B and outperforms ResNet50 significantly.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"785-800"},"PeriodicalIF":0.0,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143049682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
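The core observation here, that consecutive frames differ only in a few regions, suggests a simple difference-based selector. The sketch below is a hypothetical illustration of that idea, not the paper's learned patch selection mechanism; the patch size and threshold are invented.

```python
import numpy as np

def select_patches(prev_frame, cur_frame, patch=4, tau=0.05):
    """Return top-left coordinates of the patches in cur_frame whose mean
       absolute difference from prev_frame exceeds tau; unchanged patches
       are skipped, mimicking 'not every patch is needed'."""
    H, W = cur_frame.shape
    keep = []
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            diff = np.abs(cur_frame[i:i + patch, j:j + patch]
                          - prev_frame[i:i + patch, j:j + patch]).mean()
            if diff > tau:
                keep.append((i, j))
    return keep

# Two 8x8 frames identical except the top-left patch: only that patch is kept.
prev = np.zeros((8, 8))
cur = prev.copy()
cur[:4, :4] = 1.0
print(select_patches(prev, cur))   # -> [(0, 0)]
```

In the paper, the skipped regions are instead summarized through the pseudo frame global context, so the backbone still sees a complete view.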