IEEE Transactions on Pattern Analysis and Machine Intelligence: Latest Publications

Learning to Explore Sample Relationships.
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-03-10. DOI: 10.1109/TPAMI.2025.3549300
Zhi Hou, Baosheng Yu, Chaoyue Wang, Yibing Zhan, Dacheng Tao
Abstract: Despite their great success, deep learning technologies usually suffer from data scarcity in real-world applications, and existing methods mainly explore sample relationships in a vanilla way from the perspective of either the input or the loss function. In this paper, we propose a batch transformer module, BatchFormerV1, to equip deep neural networks themselves with the ability to explore sample relationships in a learnable way. The proposed method enables data collaboration; for example, head-class samples also contribute to the learning of tail classes. Considering that exploring instance-level relationships has very limited impact on dense prediction, we generalize the proposed module as BatchFormerV2, which further enables exploring sample relationships for pixel-/patch-level dense representations. In addition, to address the train-test inconsistency, where a mini-batch of data samples is neither necessary nor desirable during inference, we devise a two-stream training pipeline: a shared model is jointly optimized with and without BatchFormerV2, which is then removed during testing. The proposed module is plug-and-play and requires no extra inference cost. Lastly, we evaluate the proposed method on over ten popular datasets, covering 1) different data scarcity settings such as long-tailed recognition, zero-shot learning, domain generalization, and contrastive learning; and 2) different visual recognition tasks ranging from image classification to object detection and panoptic segmentation. Code is available at https://zhihou7.github.io/BatchFormer.
Citations: 0
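The abstract's core mechanism, letting samples in a mini-batch attend to one another, can be illustrated with a short PyTorch sketch: a standard transformer encoder layer applied along the batch dimension between the backbone and the classifier. Module names and sizes here are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the BatchFormerV1 idea: a transformer encoder layer
# applied along the batch dimension, so samples in a mini-batch can
# attend to each other before classification.
import torch
import torch.nn as nn

class BatchFormerSketch(nn.Module):
    def __init__(self, dim=512, num_heads=4):
        super().__init__()
        # batch_first=False (the default): the first input dimension is the
        # sequence, which we deliberately make the batch of samples.
        self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads)

    def forward(self, features):
        # features: (B, C) per-sample embeddings from any backbone.
        x = features.unsqueeze(1)   # (B, 1, C): the B samples form one sequence
        x = self.encoder(x)         # attention mixes information across samples
        return x.squeeze(1)         # (B, C)

backbone_out = torch.randn(16, 512)          # dummy batch of embeddings
refined = BatchFormerSketch()(backbone_out)
print(refined.shape)                         # torch.Size([16, 512])
```

Consistent with the abstract's two-stream pipeline, both `backbone_out` and `refined` would be fed to a shared classifier during training, so the module can be dropped at test time without changing the inference path.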
2024 Reviewers List*
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-03-06. DOI: 10.1109/TPAMI.2025.3537400
Vol. 47, no. 4, pp. 3183-3199. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10916530
Citations: 0
Class-Agnostic Repetitive Action Counting Using Wearable Devices.
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-03-05. DOI: 10.1109/TPAMI.2025.3548131
Duc Duy Nguyen, Lam Thanh Nguyen, Yifeng Huang, Cuong Pham, Minh Hoai
Abstract: We present Class-agnostic Repetitive action Counting (CaRaCount), a novel approach to counting repetitive human actions in the wild using time series data from wearable devices. CaRaCount is the first few-shot class-agnostic method: it can count repetitions of any action class given only a short exemplar sequence containing a few examples from the action class of interest. To develop and evaluate this method, we collect a large-scale time series dataset of repetitive human actions in various contexts, containing smartwatch data from 10 subjects performing 50 different activities. Experiments on this dataset and three other activity counting datasets, namely Crossfit, Recofit, and MM-Fit, show that CaRaCount counts repetitive actions with low error and outperforms other baselines and state-of-the-art action counting methods. Finally, we evaluate the usability of our real-time implementation with a user experience study. Our results highlight the efficiency and effectiveness of our approach when deployed outside laboratory environments.
Citations: 0
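The abstract does not spell out the counting mechanism, so the sketch below shows one generic exemplar-based baseline for the same problem: score every window of the stream by normalized cross-correlation against a single exemplar cycle, then count correlation peaks. Function names, thresholds, and the synthetic data are assumptions for illustration, not CaRaCount itself.

```python
# Generic few-shot repetition counting on a 1-D wearable signal:
# correlate the stream with one exemplar cycle and count the peaks.
import numpy as np
from scipy.signal import find_peaks

def count_repetitions(signal, exemplar, height=0.8):
    # signal: (T,) sensor stream; exemplar: (L,) one repetition of the action.
    ex = (exemplar - exemplar.mean()) / (exemplar.std() + 1e-8)
    L = len(ex)
    scores = []
    for start in range(len(signal) - L + 1):
        w = signal[start:start + L]
        w = (w - w.mean()) / (w.std() + 1e-8)
        scores.append(float(w @ ex) / L)   # normalized correlation in [-1, 1]
    # Each repetition appears as a local correlation maximum near 1.
    peaks, _ = find_peaks(np.array(scores), height=height, distance=L // 2)
    return len(peaks)

rng = np.random.default_rng(0)
one_rep = np.sin(np.linspace(0, 2 * np.pi, 50)) + 0.05 * rng.normal(size=50)
stream = np.concatenate([0.05 * rng.normal(size=30),
                         np.tile(one_rep, 8),
                         0.05 * rng.normal(size=100)])
print(count_repetitions(stream, one_rep))   # expected: 8
```

A learned encoder would replace the raw correlation in practice; the point of the sketch is only the exemplar-matching structure that few-shot counting implies.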
Rate-Distortion Theory in Coding for Machines and its Applications.
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-03-05. DOI: 10.1109/TPAMI.2025.3548516
Alon Harell, Yalda Foroutan, Nilesh Ahuja, Parual Datta, Bhavya Kanzariya, V Srinivasa Somayazulu, Omesh Tickoo, Anderson de Andrade, Ivan V Bajic
Abstract: Recent years have seen tremendous growth in both the capability and popularity of automatic machine analysis of media, especially images and video. As a result, a growing need has emerged for efficient compression methods optimised for machine vision rather than human vision. To meet this demand, significant developments have been made in image and video coding for machines. Unfortunately, while there is a substantial body of knowledge regarding rate-distortion theory for human vision, the same cannot be said of machine analysis. In this paper, we greatly extend the current rate-distortion theory for machines, providing insight into important design considerations for machine-vision codecs. We then utilise this newfound understanding to improve several methods for learned image coding for machines. Our proposed methods achieve state-of-the-art rate-distortion performance on several computer vision tasks: classification, instance and semantic segmentation, and object detection.
Citations: 0
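As background for the setting the abstract describes, the classical rate-distortion function can be written with the distortion measured on the output of a task model rather than on pixels; the formulation below is the textbook one, with symbols chosen here for illustration rather than taken from the paper.

```latex
% Rate-distortion function with a task-side distortion: X is the source
% image, \hat{X} its reconstruction, T a fixed machine-vision task model.
\begin{align}
  R(D) \;=\; \min_{p(\hat{x}\mid x)\;:\;
      \mathbb{E}\left[d\bigl(T(X),\,T(\hat{X})\bigr)\right] \,\le\, D}
      \; I(X;\hat{X})
\end{align}
% Coding for human viewing takes d on pixels, d(X, \hat{X}); coding for
% machines replaces it with a distortion on task features or predictions,
% e.g. d(T(x), T(\hat{x})) = \lVert T(x) - T(\hat{x}) \rVert_2^2.
```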
Visible-Thermal Tiny Object Detection: A Benchmark Dataset and Baselines.
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-03-05. DOI: 10.1109/TPAMI.2025.3544621
Xinyi Ying, Chao Xiao, Wei An, Ruojing Li, Xu He, Boyang Li, Xu Cao, Zhaoxu Li, Yingqian Wang, Mingyuan Hu, Qingyu Xu, Zaiping Lin, Miao Li, Shilin Zhou, Weidong Sheng, Li Liu
Abstract: Visible-thermal small object detection (RGBT SOD) is a significant yet challenging task with a wide range of applications, including video surveillance, traffic monitoring, and search and rescue. However, existing studies mainly focus on either the visible or the thermal modality, and RGBT SOD is rarely explored. Although some RGBT datasets have been developed, their insufficient quantity, limited diversity, narrow application scope, misaligned images, and large target sizes mean they cannot provide an impartial benchmark for evaluating RGBT SOD algorithms. In this paper, we build the first large-scale, high-diversity benchmark for RGBT SOD (namely RGBT-Tiny), including 115 paired sequences, 93K frames, and 1.2M manual annotations. RGBT-Tiny contains abundant objects (7 categories) and high-diversity scenes (8 types covering different illumination and density variations). Notably, over 81% of the objects are smaller than 16×16 pixels, and we provide paired bounding box annotations with tracking IDs, offering an extremely challenging benchmark with wide-ranging applications such as RGBT image fusion, object detection, and tracking. In addition, we propose a scale adaptive fitness (SAFit) measure that is highly robust for both small and large objects. The proposed SAFit provides reasonable performance evaluation and promotes detection performance. Based on the proposed RGBT-Tiny dataset, extensive evaluations have been conducted with the IoU and SAFit metrics, covering 32 recent state-of-the-art algorithms of four different types (i.e., visible generic detection, visible SOD, thermal SOD, and RGBT object detection). The project is available at https://github.com/XinyiYing/RGBT-Tiny.
Citations: 0
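The abstract characterizes SAFit only as a scale-adaptive measure that stays robust across object sizes. The sketch below is a hypothetical construction in that spirit: a size-gated blend of IoU (reliable for large boxes) and a smooth Gaussian-Wasserstein-style similarity (reliable for tiny boxes). The gate, constants, and formulas are assumptions, not the paper's definition of SAFit.

```python
# Hypothetical scale-adaptive box fitness: sigmoid-gated blend of IoU and
# a smooth distance-based similarity. Illustrative only, not the paper's SAFit.
import math

def iou(a, b):
    # Boxes as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def gw_similarity(a, b, c=12.0):
    # Gaussian-Wasserstein-style similarity: smooth in center distance and
    # size difference, so near misses on tiny boxes are not scored as zero.
    acx, acy, aw, ah = (a[0]+a[2])/2, (a[1]+a[3])/2, a[2]-a[0], a[3]-a[1]
    bcx, bcy, bw, bh = (b[0]+b[2])/2, (b[1]+b[3])/2, b[2]-b[0], b[3]-b[1]
    w2 = (acx-bcx)**2 + (acy-bcy)**2 + ((aw-bw)**2 + (ah-bh)**2) / 4
    return math.exp(-math.sqrt(w2) / c)

def safit_like(pred, gt, tau=16.0):
    # Size gate: tiny ground-truth boxes lean on the smooth term,
    # large boxes lean on IoU. tau is an assumed scale constant.
    size = math.sqrt((gt[2] - gt[0]) * (gt[3] - gt[1]))
    w = 1.0 / (1.0 + math.exp(-(size - tau)))
    return w * iou(pred, gt) + (1.0 - w) * gw_similarity(pred, gt)

tiny_gt, tiny_pred = (100, 100, 108, 108), (103, 103, 111, 111)
print(round(iou(tiny_pred, tiny_gt), 3))        # ~0.243: harsh for an 8x8 box
print(round(safit_like(tiny_pred, tiny_gt), 3)) # ~0.702: credits the near miss
```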
On the Upper Bounds of Number of Linear Regions and Generalization Error of Deep Convolutional Neural Networks.
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-03-05. DOI: 10.1109/TPAMI.2025.3548620
Degang Chen, Jiayu Liu, Xiaoya Che
Abstract: Understanding the effect of the hyperparameters of the network structure on the performance of Convolutional Neural Networks (CNNs) remains a fundamental and urgent issue in deep learning, and we address this issue based on the piecewise linear (PWL) nature of CNNs in this paper. First, the convolution, ReLU, and max pooling operations in a CNN are represented, for a fixed sample, as the multiplication of multiple matrices, yielding an algebraic expression of the CNN; this expression clearly shows that CNNs are PWL functions. Although such a representation has high time complexity, it provides a more convenient and intuitive way to study the mathematical properties of CNNs. Second, we develop a tight bound on the number of linear regions and upper bounds on the generalization error of CNNs, both taking into account factors such as the number of layers, the dimension of pooling, and the width of the network. These results provide possible guidance for designing and training CNNs.
Citations: 0
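For context on the quantity being bounded, here is the classical counting argument for fully connected ReLU networks; the paper's contribution is a tighter, CNN-specific version that also accounts for pooling, which is not reproduced here. Notation is standard rather than the paper's.

```latex
% One hidden layer of n ReLU units partitions the input space R^d with n
% hyperplanes, so by Zaslavsky's hyperplane-arrangement bound:
\begin{align}
  \#\,\text{linear regions} \;\le\; \sum_{j=0}^{d} \binom{n}{j}.
\end{align}
% Composing layers multiplies per-layer counts, giving the standard
% product-form upper bound for a deep ReLU network with input dimension
% n_0 and layer widths n_1, ..., n_L (cf. Serra et al., 2018):
\begin{align}
  \#\,\text{linear regions} \;\le\; \prod_{l=1}^{L} \sum_{j=0}^{n_0} \binom{n_l}{j}.
\end{align}
```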
Replay Without Saving: Prototype Derivation and Distribution Rebalance for Class-Incremental Semantic Segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-02-25. DOI: 10.1109/TPAMI.2025.3545966
Jinpeng Chen, Runmin Cong, Yuxuan Luo, Horace Ho Shing Ip, Sam Kwong
Abstract: Research on class-incremental semantic segmentation (CISS) seeks to enhance semantic segmentation methods by enabling the progressive learning of new classes while preserving knowledge of previously learned ones. A significant yet often neglected challenge in this domain is class imbalance. In CISS, each task focuses on different foreground classes, and the training set for each task exclusively comprises images that contain these currently focused classes. This results in an overrepresentation of those classes within the single-task training set, leading to a classification bias towards them. To address this issue, we propose a novel CISS method named STAR, whose core principle is to reintegrate the missing proportions of previous classes into the current single-task training samples by replaying their prototypes. Moreover, we develop a prototype derivation technique that enables the deduction of past-class prototypes by integrating the recognition patterns of the classifiers and the extraction patterns of the feature extractor. With this technique, replay is accomplished without using any storage to save prototypes. Complementing our method, we devise two loss functions to enforce cross-task feature constraints: the Old-Class Features Maintaining (OCFM) loss and the Similarity-Aware Discriminative (SAD) loss. The OCFM loss stabilizes the feature space of old classes, preserving previously acquired knowledge without compromising the ability to learn new classes. The SAD loss enhances feature distinctions between similar old-new class pairs, minimizing potential confusion. Our experiments on two public datasets, Pascal VOC 2012 and ADE20K, demonstrate that STAR achieves state-of-the-art performance.
Citations: 0
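The storage-free replay idea can be sketched in a few lines of PyTorch: deduce an old-class prototype from the old classifier instead of storing features, then replay it alongside current-task training. The derivation rule below (normalized classifier weight vectors scaled to a typical feature norm) is an illustrative assumption; the paper integrates classifier and feature-extractor patterns in a more elaborate way.

```python
# Storage-free prototype replay sketch for class-incremental learning:
# derive old-class "prototype" features from the old classifier's weights
# and replay them with the current task. Illustrative assumption only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def derive_prototypes(old_classifier: nn.Linear) -> torch.Tensor:
    # Hypothetical rule: a class's prototype points along its weight vector,
    # rescaled to an assumed typical feature norm.
    w = old_classifier.weight.detach()      # (num_old_classes, feat_dim)
    return F.normalize(w, dim=1) * 10.0     # assumed feature norm

def replay_loss(classifier, prototypes, noise=0.1):
    # Perturb prototypes to mimic a feature distribution, then ask the
    # current classifier to still predict the matching old classes.
    feats = prototypes + noise * torch.randn_like(prototypes)
    labels = torch.arange(prototypes.size(0))
    return F.cross_entropy(classifier(feats), labels)

feat_dim, old_classes, all_classes = 256, 10, 15
old_head = nn.Linear(feat_dim, old_classes)
new_head = nn.Linear(feat_dim, all_classes)   # old + 5 new classes
protos = derive_prototypes(old_head)
loss = replay_loss(new_head, protos)          # added to the current-task loss
print(loss.item())
```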
Frozen CLIP-DINO: A Strong Backbone for Weakly Supervised Semantic Segmentation
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-02-18. DOI: 10.1109/TPAMI.2025.3543191
Bingfeng Zhang, Siyue Yu, Jimin Xiao, Yunchao Wei, Yao Zhao
Abstract: Weakly supervised semantic segmentation has witnessed great achievements with image-level labels. Several recent approaches use the CLIP model to generate pseudo labels for training an individual segmentation model, but there has been no attempt to apply the CLIP model as the backbone for directly segmenting objects with image-level labels. In this paper, we propose WeCLIP and its advanced version WeCLIP+ to build a single-stage pipeline for weakly supervised semantic segmentation. In WeCLIP, the frozen CLIP model is applied as the backbone for semantic feature extraction, and a new light decoder is designed to interpret the extracted semantic features for the final prediction. Meanwhile, we utilize the frozen backbone to generate pseudo labels for training the decoder. Such labels are fixed during training; we then propose a refinement module (RFM) to optimize them dynamically. For WeCLIP+, we introduce the frozen DINO model to achieve more comprehensive semantic feature extraction: frozen DINO is combined with frozen CLIP as the backbone, followed by a shared decoder that makes predictions at a lower training cost. Moreover, a strengthened refinement module (RFM+) revises online pseudo labels with extra guidance from DINO features. Extensive experiments show that both WeCLIP and WeCLIP+ significantly outperform other approaches at a lower training cost. In particular, WeCLIP+ achieves an mIoU of 83.9% on the VOC 2012 test set and 56.3% on the COCO val set. Additionally, both approaches obtain promising results in fully supervised settings.
Vol. 47, no. 5, pp. 4198-4214.
Citations: 0
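The single-stage layout the abstract describes, a frozen backbone feeding a small trainable decoder, can be sketched as below. The decoder design, feature shapes, and the convolutional stand-in for the frozen CLIP/DINO encoder are illustrative assumptions, not the authors' architecture.

```python
# Sketch of the frozen-backbone pipeline: dense patch features from a
# frozen encoder (CLIP/DINO stand-in), a small trainable decoder that
# turns them into per-class segmentation logits.
import torch
import torch.nn as nn

class LightDecoder(nn.Module):
    def __init__(self, feat_dim=768, num_classes=21):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(feat_dim, 256, 1), nn.ReLU(),
            nn.Conv2d(256, num_classes, 1))

    def forward(self, patch_feats):
        # patch_feats: (B, feat_dim, H, W) from the frozen backbone;
        # returns (B, num_classes, H, W) segmentation logits.
        return self.proj(patch_feats)

frozen_backbone = nn.Conv2d(3, 768, 16, stride=16)   # stand-in for a ViT patchifier
for p in frozen_backbone.parameters():
    p.requires_grad = False                          # backbone stays frozen

decoder = LightDecoder()
images = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    feats = frozen_backbone(images)                  # (2, 768, 14, 14)
logits = decoder(feats)                              # only this path trains
print(logits.shape)                                  # torch.Size([2, 21, 14, 14])
```

In the paper's pipeline, the pseudo labels supervising `logits` also come from the frozen backbone and are refined online by the RFM; that part is omitted here.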
Adversarially Robust Neural Architectures
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-02-17. DOI: 10.1109/TPAMI.2025.3542350
Minjing Dong, Yanxi Li, Yunhe Wang, Chang Xu
Abstract: Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. Existing methods are devoted to developing various robust training strategies or regularizations to update the weights of the neural network. But beyond the weights, the overall structure and information flow in the network are explicitly determined by the neural architecture, which remains unexplored. This paper therefore aims to improve the adversarial robustness of the network from the architectural perspective. We explore the relationship among adversarial robustness, the Lipschitz constant, and the architecture parameters, and show that an appropriate constraint on the architecture parameters can reduce the Lipschitz constant and thereby further improve robustness. The importance of architecture parameters can vary from operation to operation or connection to connection. We approximate the Lipschitz constant of the entire network through a univariate log-normal distribution whose mean and variance are related to the architecture parameters; the desired confidence can then be enforced by formulating a constraint on the distribution parameters based on its cumulative distribution function. Compared with adversarially trained neural architectures searched by various NAS algorithms, as well as efficient human-designed models, our algorithm empirically achieves the best performance among all models under various attacks on different datasets.
Vol. 47, no. 5, pp. 4183-4197.
Citations: 0
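A one-line worked step shows why architecture parameters have a handle on the Lipschitz constant in the differentiable-NAS setting this line of work builds on; the notation is illustrative, not the paper's exact derivation.

```latex
% A NAS edge computes a weighted mixture of candidate operations o_i with
% softmax-normalized architecture weights alpha_i >= 0. By the triangle
% inequality and homogeneity of the Lipschitz seminorm:
\begin{align}
  f(x) = \sum_{i} \alpha_i\, o_i(x)
  \quad\Longrightarrow\quad
  \mathrm{Lip}(f) \;\le\; \sum_{i} \alpha_i\, \mathrm{Lip}(o_i),
\end{align}
% so shifting architecture weight toward low-Lipschitz candidates tightens
% the bound on Lip(f); a constraint on the alpha_i is therefore a direct
% lever on the network-level Lipschitz constant, and hence on robustness.
```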
Generalized Conditional Similarity Learning via Semantic Matching
IEEE Transactions on Pattern Analysis and Machine Intelligence. Pub Date: 2025-02-13. DOI: 10.1109/TPAMI.2025.3535730
Yi Shi, Rui-Xiang Li, Le Gan, De-Chuan Zhan, Han-Jia Ye
Abstract: The inherent complexity of image semantics engenders a fascinating variability in the relationships between images. For instance, under one condition two images may be similar, while under different circumstances the same pair may be entirely dissimilar. A single feature space is therefore insufficient for capturing the nuanced semantic relationships that exist between samples. Conditional Similarity Learning (CSL) addresses this gap by learning multiple distinct feature spaces. Existing approaches in CSL often fail to capture the intricate similarity relationships between samples across different semantic conditions, particularly in weakly-supervised settings where condition labels are absent during training. To address this limitation, we introduce the Distance Induced Semantic COndition VERification NETwork (DiscoverNet), a unified framework designed to cater to a range of CSL scenarios: supervised CSL (sCSL), weakly-supervised CSL (wsCSL), and semi-supervised CSL (ssCSL). In addition to traditional linear projections, we introduce a prompt learning technique utilizing a transformer encoding layer to create diverse embedding spaces. Our framework incorporates a Condition Match Module (CMM) that dynamically matches training triplets with corresponding embedding spaces, adapting to varying levels of supervision. We also shed light on existing evaluation biases in wsCSL and introduce two novel criteria for a more robust evaluation. Through extensive experiments and visualizations on benchmark datasets such as UT-Zappos-50k and Celeb-A, we substantiate the efficacy and interpretability of DiscoverNet.
Vol. 47, no. 5, pp. 3847-3862.
Citations: 0
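As background on what "multiple distinct feature spaces" means in CSL, here is a minimal sketch in the style of classic conditional similarity networks: a shared encoder plus a learned mask per condition, trained with a per-condition triplet loss. This is a generic illustration, not DiscoverNet's CMM or its prompt-based encoder.

```python
# Generic conditional-similarity sketch: a shared embedding plus a learned
# mask per condition selects the subspace in which similarity is judged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalEmbedder(nn.Module):
    def __init__(self, in_dim=512, emb_dim=64, num_conditions=4):
        super().__init__()
        self.encoder = nn.Linear(in_dim, emb_dim)
        # One non-negative mask per condition over embedding dimensions.
        self.masks = nn.Parameter(torch.rand(num_conditions, emb_dim))

    def forward(self, x, cond):
        z = self.encoder(x) * F.relu(self.masks[cond])
        return F.normalize(z, dim=-1)

def triplet_loss(model, anchor, pos, neg, cond, margin=0.2):
    # Pull anchor toward positive and away from negative, but only within
    # the embedding subspace selected by the given condition.
    a, p, n = (model(t, cond) for t in (anchor, pos, neg))
    d_ap = (a - p).pow(2).sum(-1)
    d_an = (a - n).pow(2).sum(-1)
    return F.relu(d_ap - d_an + margin).mean()

model = ConditionalEmbedder()
x_a, x_p, x_n = (torch.randn(8, 512) for _ in range(3))
print(triplet_loss(model, x_a, x_p, x_n, cond=2).item())
```

In supervised CSL the condition index `cond` is given per triplet; the weakly- and semi-supervised settings the paper targets must instead infer which space a triplet belongs to, which is the role its Condition Match Module plays.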