Latest articles — IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Adapting Few-Shot Classification via In-Process Defense
Xi Yang;Dechen Kong;Ren Lin;Nannan Wang;Xinbo Gao
DOI: 10.1109/TIP.2024.3458858
Abstract: Most few-shot learning methods employ either adaptive approaches or parameter amortization techniques. However, their reliance on pre-trained models presents a significant vulnerability: when an attacker's trigger activates a hidden backdoor, images may be misclassified, profoundly affecting the model's performance. In our research, we explore adaptive defenses against backdoor attacks for few-shot learning. We introduce a specialized stochastic process, tailored to task characteristics, that safeguards the classification model against attack-induced incorrect feature extraction. This process functions during forward propagation and is thus termed an "in-process defense." Our method employs an adaptive strategy, effectively generating task-level representations, enabling rapid adaptation to pre-trained models, and proving effective in few-shot classification scenarios for countering backdoor attacks. We apply latent stochastic processes to approximate task distributions and derive task-level representations from the support set. This task-level representation guides feature extraction, leading to backdoor-trigger mismatching and forming the foundation of our parameter defense strategy. Benchmark tests on Meta-Dataset reveal that our approach not only withstands backdoor attacks but also shows improved adaptation in addressing few-shot classification tasks.
IEEE Transactions on Image Processing, vol. 33, pp. 5232–5245, published 2024-09-17.
Cited by: 0
Progressive Learning With Cross-Window Consistency for Semi-Supervised Semantic Segmentation
Bo Dang;Yansheng Li;Yongjun Zhang;Jiayi Ma
DOI: 10.1109/TIP.2024.3458854
Abstract: Semi-supervised semantic segmentation focuses on exploiting a small amount of labeled data together with a large amount of unlabeled data, which is more in line with the demands of real-world image understanding applications. However, it is still hindered by the inability to fully and effectively leverage unlabeled images. In this paper, we reveal that cross-window consistency (CWC) is helpful for comprehensively extracting auxiliary supervision from unlabeled data, and we propose a novel CWC-driven progressive learning framework to optimize the deep network by mining weak-to-strong constraints from massive unlabeled data. More specifically, this paper presents a biased cross-window consistency (BCC) loss with an importance factor, which helps the deep network explicitly constrain confidence maps from overlapping regions in different windows to maintain semantic consistency with larger contexts. In addition, we propose a dynamic pseudo-label memory bank (DPM) to provide high-consistency, high-reliability pseudo-labels to further optimize the network. Extensive experiments on three representative datasets of urban views, medical scenarios, and satellite scenes, with consistent performance gains, demonstrate the superiority of our framework. Our code is released at https://jack-bo1220.github.io/project/CWC.html.
IEEE Transactions on Image Processing, vol. 33, pp. 5219–5231, published 2024-09-17.
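The core of cross-window consistency — requiring predictions from two overlapping windows to agree on their shared region — can be sketched as a simple loss. The confidence-based importance weighting and the `bias` parameter below are assumptions for illustration, not the paper's exact BCC formulation:

```python
import numpy as np

def cross_window_consistency_loss(conf_a, conf_b, overlap_a, overlap_b, bias=2.0):
    """Toy biased cross-window consistency loss.

    conf_a, conf_b: softmax confidence maps (H, W, C) from two windows.
    overlap_a, overlap_b: index tuples selecting the shared pixels in each window.
    bias: importance factor favoring the larger-context window (assumed).
    """
    p_a = conf_a[overlap_a]          # predictions on the overlap, window A
    p_b = conf_b[overlap_b]          # the same pixels as seen from window B
    # Weight the squared disagreement by the peak confidence of window B,
    # so confidently predicted pixels constrain the other window more.
    w = bias * p_b.max(axis=-1, keepdims=True)
    return float((w * (p_a - p_b) ** 2).mean())
```

When the two windows already agree on the overlap, the loss is zero; disagreement is penalized more where the reference window is confident.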
Cited by: 0
Simultaneous Temperature Estimation and Nonuniformity Correction From Multiple Frames
Navot Oz;Omri Berman;Nir Sochen;David Mendlovic;Iftach Klapp
DOI: 10.1109/TIP.2024.3458861
Abstract: IR cameras are widely used for temperature measurement in applications including agriculture, medicine, and security. Low-cost IR cameras have immense potential to replace expensive radiometric cameras in these applications; however, low-cost microbolometer-based IR cameras are prone to spatially variant nonuniformity and to drift in temperature measurements, which limit their usability in practical scenarios. To address these limitations, we propose a novel approach for simultaneous temperature estimation and nonuniformity correction (NUC) from multiple frames captured by low-cost microbolometer-based IR cameras. We leverage the camera's physical image-acquisition model and incorporate it into a deep-learning architecture termed a kernel prediction network (KPN), which enables us to combine multiple frames despite imperfect registration between them. We also propose a novel offset block that incorporates the ambient temperature into the model and enables us to estimate the offset of the camera, a key factor in temperature estimation. Our findings demonstrate that the number of frames has a significant impact on the accuracy of the temperature estimation and NUC. Moreover, introducing the offset block significantly improves performance compared to the vanilla KPN. The method was tested on real data collected by a low-cost IR camera mounted on an unmanned aerial vehicle, showing only a small average error of 0.27–0.54 °C relative to costly scientific-grade radiometric cameras. Real data collected horizontally resulted in similar errors of 0.48–0.68 °C. Our method provides an accurate and efficient solution for simultaneous temperature estimation and NUC, which has important implications for a wide range of practical applications.
IEEE Transactions on Image Processing, vol. 33, pp. 5246–5259, published 2024-09-17. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10682482
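The multi-frame idea can be illustrated with a toy version of the acquisition model: each raw frame is the scene temperature plus an ambient-temperature-dependent camera offset plus noise. The plain averaging and the caller-supplied offset model below are simplifications standing in for the paper's learned offset block and KPN fusion:

```python
import numpy as np

def estimate_temperature(frames, t_ambient, offset_model):
    """Multi-frame temperature estimate with ambient-offset correction.

    frames: list of (H, W) arrays of raw temperature readings.
    t_ambient: ambient temperature recorded for each frame.
    offset_model: callable t_amb -> scalar camera offset (a stand-in for
        the learned offset block; its exact form is an assumption here).
    """
    corrected = [f - offset_model(t) for f, t in zip(frames, t_ambient)]
    # Fusing several offset-corrected frames suppresses per-frame noise;
    # the paper fuses with a kernel prediction network, here a plain mean.
    return np.mean(corrected, axis=0)
```

With a known linear drift model, the ambient-dependent bias cancels exactly and the scene temperature is recovered from the fused frames.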
Cited by: 0
Cross-Attention Regression Flow for Defect Detection
Binhui Liu;Tianchu Guo;Bin Luo;Zhen Cui;Jian Yang
DOI: 10.1109/TIP.2024.3457236
Abstract: Defect detection from images is a crucial and challenging task in industrial scenarios due to the scarcity and unpredictability of anomalous samples. However, existing defect detection methods exhibit low detection performance on small-size defects. In this work, we propose a Cross-Attention Regression Flow (CARF) framework to model a compact distribution of normal visual patterns for separating outliers. To retain rich scale information about defects, we build an interactive cross-attention pattern flow module to jointly transform and align distributions of multi-layer features, which is beneficial for detecting small-size defects that may be annihilated in high-level features. To handle the complexity of multi-layer feature distributions, we introduce a layer-conditional autoregression module to improve the fitting capacity of data likelihoods on multi-layer features. By transforming the multi-layer feature distributions into a latent space, we can better characterize normal visual patterns. Extensive experiments on four public datasets and our collected industrial dataset demonstrate that the proposed CARF outperforms state-of-the-art methods, particularly in detecting small-size defects.
IEEE Transactions on Image Processing, vol. 33, pp. 5183–5193, published 2024-09-16.
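Scoring a sample by its likelihood under a flow-fitted distribution of normal patterns reduces to the change-of-variables formula: the log-likelihood of the input is the base-distribution log-density of its latent image plus the flow's log-determinant. The sketch below assumes a standard-normal base distribution and is a generic flow score, not the paper's CARF architecture:

```python
import numpy as np

def flow_anomaly_score(z, log_det_jacobian):
    """Negative log-likelihood anomaly score for a normalizing flow.

    z: latent vector produced by the flow transform of a feature.
    log_det_jacobian: accumulated log |det df/dx| of the flow.
    """
    d = z.size
    # log N(z; 0, I) = -0.5 * ||z||^2 - (d/2) * log(2*pi)
    log_pz = -0.5 * (z @ z) - 0.5 * d * np.log(2 * np.pi)
    # Higher score = lower likelihood under normal patterns = more anomalous.
    return float(-(log_pz + log_det_jacobian))
```

Thresholding this score separates outliers (defects) from the compact normal distribution the flow was trained on.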
Cited by: 0
Adaptive Log-Euclidean Metrics for SPD Matrix Learning
Ziheng Chen;Yue Song;Tianyang Xu;Zhiwu Huang;Xiao-Jun Wu;Nicu Sebe
DOI: 10.1109/TIP.2024.3451930
Abstract: Symmetric Positive Definite (SPD) matrices have received wide attention in machine learning due to their intrinsic capacity to encode underlying structural correlation in data. Many successful Riemannian metrics have been proposed to reflect the non-Euclidean geometry of SPD manifolds. However, most existing metric tensors are fixed, which might lead to sub-optimal performance for SPD matrix learning, especially for deep SPD neural networks. To remedy this limitation, we leverage commonly encountered pullback techniques and propose Adaptive Log-Euclidean Metrics (ALEMs), which extend the widely used Log-Euclidean Metric (LEM). Compared with previous Riemannian metrics, our metrics contain learnable parameters, which can better adapt to the complex dynamics of Riemannian neural networks with minor extra computation. We also present a complete theoretical analysis to support our ALEMs, including algebraic and Riemannian properties. The experimental and theoretical results demonstrate the merit of the proposed metrics in improving the performance of SPD neural networks. The efficacy of our metrics is further showcased on a set of recently developed Riemannian building blocks, including Riemannian batch normalization, Riemannian residual blocks, and Riemannian classifiers.
IEEE Transactions on Image Processing, vol. 33, pp. 5194–5205, published 2024-09-16.
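For reference, the Log-Euclidean Metric that ALEMs extend maps SPD matrices into the tangent space via the matrix logarithm and measures Frobenius distance there: d(A, B) = ||log A − log B||_F. The optional `weights` argument below is a simplified stand-in for the learnable parameters of an adaptive metric, not the paper's exact parameterization:

```python
import numpy as np

def spd_log(A):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)          # SPD => real positive eigenvalues
    return (V * np.log(w)) @ V.T

def adaptive_lem_distance(A, B, weights=None):
    """Log-Euclidean distance with an optional entrywise weighting.

    With weights=None this is the standard LEM:
        d(A, B) = ||log(A) - log(B)||_F.
    `weights` rescales the tangent-space difference, a toy version of
    making the metric learnable.
    """
    diff = spd_log(A) - spd_log(B)
    if weights is not None:
        diff = weights * diff
    return float(np.linalg.norm(diff, "fro"))
```

Because the distance is computed in a flat tangent space, learnable reweighting there stays cheap, which matches the "minor extra computation" claim in the abstract.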
Cited by: 0
Change Representation and Extraction in Stripes: Rethinking Unsupervised Hyperspectral Image Change Detection With an Untrained Network
Bin Yang;Yin Mao;Licheng Liu;Leyuan Fang;Xinxin Liu
DOI: 10.1109/TIP.2024.3438100
Abstract: Deep learning-based hyperspectral image (HSI) change detection (CD) approaches have a strong ability to leverage spectral-spatial-temporal information through automatic feature extraction, and currently dominate the research field. However, their efficiency and universality are limited by their dependency on labeled data. Although newly applied untrained networks can avoid the need for labeled data, their feature volatility in the simple difference space easily leads to inaccurate CD results. Inspired by the interesting finding that salient changes appear as bright "stripes" in a new feature space, we propose a novel unsupervised CD method that represents and models changes in stripes for HSIs (named StripeCD), which integrates optimization modeling into an untrained network. The StripeCD method constructs a new feature space that represents change features in stripes and models them in a novel optimization manner. It consists of three main parts: 1) a dual-branch untrained convolutional network, which extracts deep difference features from bitemporal HSIs and is combined with a two-stage channel selection strategy to emphasize the channels that contribute most to CD; 2) a multiscale forward-backward segmentation framework for salient change representation, which transforms deep difference features into a new feature space by exploiting the structure information of ground objects and associates salient changes with the stripe-shaped change component; and 3) a stripe-shaped change extraction model, which characterizes the global sparsity and local discontinuity of salient changes, explores the intrinsic properties of deep difference features, and constructs model-based constraints to better identify changed regions in a controllable manner. The proposed StripeCD method outperformed state-of-the-art unsupervised CD approaches on three widely used datasets, and indicates the potential of untrained networks for facilitating reliable CD.
IEEE Transactions on Image Processing, vol. 33, pp. 5098–5113, published 2024-09-13.
Cited by: 0
Neural Degradation Representation Learning for All-in-One Image Restoration
Mingde Yao;Ruikang Xu;Yuanshen Guan;Jie Huang;Zhiwei Xiong
DOI: 10.1109/TIP.2024.3456583
Abstract: Existing methods have demonstrated effective performance on a single degradation type. In practical applications, however, the degradation is often unknown, and a mismatch between the model and the degradation results in a severe performance drop. In this paper, we propose an all-in-one image restoration network that tackles multiple degradations. Because of the heterogeneous nature of different degradation types, it is difficult to process multiple degradations in a single network. To this end, we propose to learn a neural degradation representation (NDR) that captures the underlying characteristics of various degradations. The learned NDR adaptively decomposes different types of degradations, similar to a neural dictionary that represents basic degradation components. Subsequently, we develop a degradation query module and a degradation injection module to effectively approximate and utilize the specific degradation based on the NDR, enabling all-in-one restoration across multiple degradations. Moreover, we propose a bidirectional optimization strategy that drives the NDR to learn the degradation representation by optimizing the degradation and restoration processes alternately. Comprehensive experiments on representative degradation types (including noise, haze, rain, and downsampling) demonstrate the effectiveness and generalizability of our method. Code is available at https://github.com/mdyao/NDR-Restore.
IEEE Transactions on Image Processing, vol. 33, pp. 5408–5423, published 2024-09-13.
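The "neural dictionary" framing suggests an attention-style query: project the degraded image's feature against the learned degradation components and compose the matching degradation from the attention weights. The dot-product formulation below is an illustrative assumption, not the paper's exact query module:

```python
import numpy as np

def query_degradation(feature, ndr_dict):
    """Query a degradation dictionary with an image feature.

    feature: (d,) feature vector of the degraded image.
    ndr_dict: (k, d) learned basis of basic degradation components
        (a stand-in for the NDR; its training is out of scope here).
    Returns (attention weights over components, composed degradation vector).
    """
    logits = ndr_dict @ feature
    logits -= logits.max()                       # numerical stability
    attn = np.exp(logits) / np.exp(logits).sum() # softmax over components
    # The composed degradation is a convex combination of dictionary rows,
    # which a restoration branch could then "inject" and undo.
    return attn, attn @ ndr_dict
```

A feature aligned with one dictionary component receives the highest weight for that component, mimicking how the query module selects the relevant degradation type.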
Cited by: 0
Image-Level Adaptive Adversarial Ranking for Person Re-Identification
Xi Yang;Huanling Liu;Nannan Wang;Xinbo Gao
DOI: 10.1109/TIP.2024.3456000
Abstract: The potential vulnerability of deep neural networks and the complexity of pedestrian images greatly limit the application of person re-identification (ReID) techniques in smart security. Current attack methods often focus either on generating carefully crafted adversarial samples or on disrupting the metric distances between targets and similar pedestrians; however, both aspects are crucial for evaluating the security of person re-identification methods. For this reason, we propose an image-level adaptive adversarial ranking method that considers both aspects, adapts to real-world changes in pedestrians, and effectively evaluates the robustness of models in adversarial environments. To generate more refined adversarial samples, our image-representation enhancement module leverages channel-wise information entropy, assigning varying weights to different channels to produce images with richer information content, together with a generative adversarial network that creates the adversarial samples. Subsequently, for adaptive perturbation of the ranking, an adaptive weight confusion ranking loss calculates the weights of distances between positive or negative samples and query samples. It endeavors to push positive samples away from query samples and bring negative samples closer, thereby interfering with the system's ranking. Notably, this method requires no additional hyperparameter tuning or extra training data, making it an adaptive attack strategy. Experimental results on large-scale datasets such as Market1501, CUHK03, and DukeMTMC demonstrate the effectiveness of our method in attacking ReID systems.
IEEE Transactions on Image Processing, vol. 33, pp. 5172–5182, published 2024-09-12.
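Channel-wise information entropy as a weighting signal can be sketched directly: estimate each channel's value distribution with a histogram, compute its Shannon entropy, and normalize the entropies into channel weights. The binning choice and softmax normalization are illustrative assumptions, not the paper's exact enhancement module:

```python
import numpy as np

def entropy_channel_weights(features, bins=16):
    """Per-channel information-entropy weights for a feature map.

    features: (C, H, W) activations. Channels whose value distribution
    carries higher Shannon entropy receive larger weights.
    """
    entropies = []
    for ch in features:
        hist, _ = np.histogram(ch, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins before the log
        entropies.append(-(p * np.log(p)).sum())
    e = np.array(entropies)
    w = np.exp(e - e.max())                # stable softmax over channels
    return w / w.sum()
```

A constant (information-poor) channel gets a low weight while a widely spread channel gets a high one, steering the enhancement toward informative channels.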
Cited by: 0
Disentangled Sample Guidance Learning for Unsupervised Person Re-Identification
Haoxuanye Ji;Le Wang;Sanping Zhou;Wei Tang;Gang Hua
DOI: 10.1109/TIP.2024.3456008
Abstract: Unsupervised person re-identification (Re-ID) is challenging due to the lack of ground-truth labels. Most existing methods employ iterative clustering to generate pseudo-labels for unlabeled training data to guide the learning process. However, how to select samples that are both associated with high-confidence pseudo-labels and hard (discriminative) enough remains a critical problem. To address this issue, a disentangled sample guidance learning (DSGL) method is proposed for unsupervised Re-ID. The method consists of disentangled sample mining (DSM) and discriminative feature learning (DFL). DSM disentangles (unlabeled) person images into identity-relevant and identity-irrelevant factors, which are used to construct disentangled positive/negative groups that contain sufficiently discriminative information. DFL incorporates the mined disentangled sample groups into model training through a surrogate disentangled learning loss and a disentangled second-order similarity regularization, helping the model better distinguish the characteristics of different persons. With the DSGL training strategy, the mAP on Market-1501 and MSMT17 increases by 6.6% and 10.1% with the ResNet50 framework, and by 0.6% and 6.9% with the Vision Transformer (ViT) framework, respectively, validating the effectiveness of the DSGL method. Moreover, DSGL surpasses previous state-of-the-art methods, achieving higher Top-1 accuracy and mAP on the Market-1501, MSMT17, PersonX, and VeRi-776 datasets. The source code for this paper is available at https://github.com/jihaoxuanye/DiseSGL.
IEEE Transactions on Image Processing, vol. 33, pp. 5144–5158, published 2024-09-12.
Cited by: 0
Convex Hull Prediction for Adaptive Video Streaming by Recurrent Learning
Somdyuti Paul;Andrey Norkin;Alan C. Bovik
DOI: 10.1109/TIP.2024.3455989
Abstract: Adaptive video streaming relies on the construction of efficient bitrate ladders to deliver the best possible visual quality to viewers under bandwidth constraints. The traditional method of content-dependent bitrate ladder selection requires a video shot to be pre-encoded with multiple encoding parameters to find the optimal operating points, given by the convex hull of the resulting rate-quality curves. However, this pre-encoding step amounts to an exhaustive search over the space of possible encoding parameters, which causes significant overhead in both computation and time. To reduce this overhead, we propose a deep-learning-based method of content-aware convex hull prediction. We employ a recurrent convolutional network (RCN) to implicitly analyze the spatiotemporal complexity of video shots in order to predict their convex hulls. A two-step transfer learning scheme is adopted to train the proposed RCN-Hull model, which ensures sufficient content diversity for analyzing scene complexity while also making it possible to capture the scene statistics of pristine source videos. Our experimental results reveal that the proposed model yields better approximations of the optimal convex hulls and offers competitive time savings compared to existing approaches. On average, our method reduced pre-encoding time by 53.8%, while the average Bjøntegaard delta bitrate (BD-rate) of the predicted convex hulls against the ground truth was 0.26%, and the mean absolute deviation of the BD-rate distribution was 0.57%.
IEEE Transactions on Image Processing, vol. 33, pp. 5114–5128, published 2024-09-12.
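The convex hull being predicted is, for a fixed shot, the upper hull of the shot's (bitrate, quality) operating points: encodes that fall below a chord between two others are dominated and dropped from the ladder. A minimal monotone-chain construction of that hull (independent of the paper's RCN predictor):

```python
def rate_quality_convex_hull(points):
    """Upper convex hull of (bitrate, quality) operating points.

    points: iterable of (bitrate, quality) pairs from candidate encodes.
    Returns the points on the upper hull in ascending bitrate, i.e. the
    Pareto-efficient operating points a bitrate ladder would select.
    """
    pts = sorted(set(points))
    hull = []
    for px, py in pts:
        # Pop the last hull point while it lies on or below the chord
        # from the second-to-last point to the new point (non-right turn).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull
```

Exhaustive pre-encoding fills in all candidate points before running this step; RCN-Hull's contribution is predicting the hull without producing most of those encodes.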
Cited by: 0