IEEE Transactions on Circuits and Systems for Video Technology: Latest Articles

WeaFU: Weather-Informed Image Blind Restoration via Multi-Weather Distribution Diffusion
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-28 | DOI: 10.1109/TCSVT.2024.3450971
Bodong Cheng; Juncheng Li; Jun Shi; Yingying Fang; Guixu Zhang; Yin Chen; Tieyong Zeng; Zhi Li
Abstract: The extraction of distribution information from images captured under diverse weather conditions is crucial for enhancing the robustness of visual algorithms. When addressing image degradation caused by different weather, accurately perceiving the data distribution of weather-informed degradation is a fundamental challenge. However, given the highly stochastic nature of weather, modelling its distribution is a formidable task. In this paper, we propose a novel multi-weather distribution diffusion blind restoration model, named WeaFU. First, the model employs representation learning to map the image distribution into a latent space. Subsequently, WeaFU uses a diffusion-based approach, with the assistance of a Diffusion Distribution Generator (DDG), to perceive and extract the corresponding weather distribution. This strategy injects the data distribution into the recovery process, significantly enhancing the robustness of the model in diverse weather scenarios. Finally, a Conditional Distribution-Aware Transformer (CDAT) is constructed to align the distribution information with pixels, thereby obtaining clear images. Extensive experiments on real and synthetic datasets demonstrate that WeaFU achieves superior performance.
Volume 34, Issue 12, pp. 13530-13542
Citations: 0
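The abstract gives no implementation details, so the following is only an illustrative sketch of how a "distribution-aware" transformer block might inject a latent weather/distribution code into pixel features via cross-attention. All module names, shapes, and the cross-attention design are assumptions, not the WeaFU architecture.

```python
import torch
import torch.nn as nn

class DistributionAwareBlock(nn.Module):
    """Hypothetical block: pixel tokens attend to a latent distribution code."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, pixel_tokens, dist_code):
        # pixel_tokens: (B, N, dim) flattened image features
        # dist_code:    (B, M, dim) latent distribution tokens (e.g., from a diffusion generator)
        q = self.norm(pixel_tokens)
        attn_out, _ = self.cross_attn(q, dist_code, dist_code)
        x = pixel_tokens + attn_out            # fuse distribution information into pixel tokens
        return x + self.ffn(self.norm(x))      # position-wise refinement

# usage: y = DistributionAwareBlock(64)(torch.randn(2, 1024, 64), torch.randn(2, 8, 64))
```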
Reallocating and Evolving General Knowledge for Few-Shot Learning
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-28 | DOI: 10.1109/TCSVT.2024.3450861
Yuling Su; Xueliang Liu; Zhen Huang; Jun He; Richang Hong; Meng Wang
Abstract: Large-scale vision-language pre-trained models like CLIP are widely employed in few-shot tasks due to their strong generalization capabilities. Existing methods usually incorporate additional techniques to acquire knowledge for new tasks on top of the general knowledge in CLIP. However, they overlook that the task-related knowledge may already be implicitly embedded within the well-learned general knowledge. In this paper, we propose a novel framework that reallocates and evolves the general knowledge for specific few-shot tasks (REGK), mimicking the human "Attention Allocation" cognitive mechanism. With a learnable mask-tuning selection, REGK focuses on selecting the task-related parameters of CLIP while learning task-specific few-shot knowledge, without altering CLIP's underlying framework. Specifically, we first observe that inheriting CLIP's strong knowledge-representation capability is more advantageous for few-shot learning than inheriting its task-solving ability. We then introduce a two-stage tuning framework to reallocate and control mask-tuning across different tasks, allowing the model to automatically perform mask-tuning on different few-shot tasks with selective sparsity training. In this way, we achieve reliable transfer of task-related knowledge and effective exploration of new knowledge from limited data to enhance few-shot learning. Extensive experiments validate the superiority and potential of our model.
Volume 34, Issue 12, pp. 13518-13529
Citations: 0
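As a rough illustration of the general idea of selecting task-related parameters with a learnable mask over frozen weights (not the REGK implementation; the straight-through masking and the sparsity penalty below are assumptions), one could gate a frozen linear layer as follows:

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Frozen linear layer whose weights are gated by a learnable (sparse) mask."""
    def __init__(self, linear: nn.Linear, init_logit: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach(), requires_grad=False)
        self.bias = None if linear.bias is None else nn.Parameter(linear.bias.detach(), requires_grad=False)
        self.mask_logits = nn.Parameter(torch.full_like(self.weight, init_logit))

    def forward(self, x):
        soft = torch.sigmoid(self.mask_logits)
        hard = (soft > 0.5).float()
        mask = hard + soft - soft.detach()   # straight-through estimator: hard mask, soft gradient
        return nn.functional.linear(x, self.weight * mask, self.bias)

    def sparsity_penalty(self):
        return torch.sigmoid(self.mask_logits).mean()   # encourages dropping unneeded parameters

# usage: layer = MaskedLinear(nn.Linear(512, 512)); loss = task_loss + 1e-3 * layer.sparsity_penalty()
```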
Dual Teacher Knowledge Distillation With Domain Alignment for Face Anti-Spoofing
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-28 | DOI: 10.1109/TCSVT.2024.3451294
Zhe Kong; Wentian Zhang; Tao Wang; Kaihao Zhang; Yuexiang Li; Xiaoying Tang; Wenhan Luo
Abstract: Face recognition systems have raised concerns due to their vulnerability to different presentation attacks, and system security has become an increasingly critical issue. Although many face anti-spoofing (FAS) methods perform well in intra-dataset scenarios, their generalization remains a challenge. To address this issue, some methods adopt domain adversarial training (DAT) to extract domain-invariant features. In contrast, in this paper we propose a domain adversarial attack (DAA) method that adds perturbations to the input images, making them indistinguishable across domains and thereby enabling domain alignment. Moreover, since models trained on limited data and attack types cannot generalize well to unknown attacks, we propose a dual perceptual and generative knowledge distillation framework for face anti-spoofing that exploits pre-trained face-related models containing rich face priors. Specifically, we adopt two different face-related models as teachers to transfer knowledge to the target student model. The pre-trained teacher models are not taken from the face anti-spoofing task but from perceptual and generative tasks, respectively, which implicitly augments the data. By combining DAA and dual-teacher knowledge distillation, we develop a dual teacher knowledge distillation with domain alignment framework (DTDA) for face anti-spoofing. The advantage of the proposed method is verified through extensive ablation studies and comparison with state-of-the-art methods on public datasets across multiple protocols.
Volume 34, Issue 12, pp. 13177-13189
Citations: 0
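The abstract does not specify the distillation losses, but the core idea of pulling a student toward two frozen face-related teachers can be sketched as below; the feature-matching objective, common projection dimension, and weights are assumptions, not the DTDA formulation.

```python
import torch
import torch.nn.functional as F

def dual_teacher_distill_loss(student_feat, perceptual_feat, generative_feat,
                              w_perc: float = 1.0, w_gen: float = 1.0):
    """Toy distillation objective: pull student features toward two frozen teachers.

    student_feat, perceptual_feat, generative_feat: (B, D) embeddings assumed to be
    projected to a common dimension; the actual DTDA losses are not given in the abstract.
    """
    loss_perc = F.mse_loss(student_feat, perceptual_feat.detach())   # perceptual teacher
    loss_gen = F.mse_loss(student_feat, generative_feat.detach())    # generative teacher
    return w_perc * loss_perc + w_gen * loss_gen

# usage: total_loss = spoof_classification_loss + dual_teacher_distill_loss(s, t_perc, t_gen)
```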
Fixed Relative Pose Prior for Camera Array Self-Calibration
IF 8.4, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-27 | DOI: 10.1109/tcsvt.2024.3450706
Yaning Zhang, Yingqian Wang, Tianhao Wu, Jungang Yang, Wei An
(No abstract available.)
Citations: 0
UniFRD: A Unified Method for Facial Image Restoration Based on Diffusion Probabilistic Model
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-27 | DOI: 10.1109/TCSVT.2024.3450493
Muwei Jian; Rui Wang; Xiaoyang Yu; Feng Xu; Hui Yu; Kin-Man Lam
Abstract: This paper presents a unified facial image and video restoration method based on the diffusion probabilistic model (UniFRD), designed to effectively address both single- and multi-type image degradation. The noise predictor in UniFRD consists of a ViT-based encoder and a novel Separation Fusion Decoding Module (SFDM). The flexible feature-optimization strategy allows complex conditional noise to be decoded without being limited by degradation patterns. Specifically, SFDM adjusts and refines the channel correlation and expressive power of high-dimensional features step by step, enabling the network to more accurately perceive and enhance the interaction between the posterior probabilities and the conditional inputs. This process is crucial for improving the visual quality and stability of the restoration results. Extensive experiments demonstrate that even when facial images suffer from both pixel-level and image-level degradation, UniFRD can still restore rich details and maintain attribute consistency. In summary, compared to existing methods, the proposed solution offers greater generality and adaptability for facial restoration tasks, and it has high practical value for applications involving faces in complex and unconstrained outdoor scenarios.
Volume 34, Issue 12, pp. 13494-13506
Citations: 0
BatchNorm-Based Weakly Supervised Video Anomaly Detection
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-27 | DOI: 10.1109/TCSVT.2024.3450734
Yixuan Zhou; Yi Qu; Xing Xu; Fumin Shen; Jingkuan Song; Heng Tao Shen
Abstract: In weakly supervised video anomaly detection (WVAD), where only video-level labels indicating the presence or absence of abnormal events are available, the primary challenge arises from the inherent ambiguity in temporal annotations of abnormal occurrences. Inspired by the statistical insight that temporal features of abnormal events often exhibit outlier characteristics, we propose a novel method, BN-WVAD, which incorporates BatchNorm into WVAD. In BN-WVAD, we leverage the Divergence of Feature from the Mean vector (DFM) of BatchNorm as a reliable abnormality criterion to discern potential abnormal snippets in abnormal videos. The proposed DFM criterion is discriminative for anomaly recognition and resilient to label noise, serving as an additional anomaly score that amends the prediction of the anomaly classifier, which is susceptible to noisy labels. Moreover, a batch-level selection strategy is devised to filter more abnormal snippets in videos where more abnormal events occur. BN-WVAD demonstrates state-of-the-art performance on UCF-Crime with an AUC of 87.24% and on XD-Violence, where AP reaches 84.93%. Our code implementation is available at https://github.com/cool-xuan/BN-WVAD.
Volume 34, Issue 12, pp. 13642-13654
Citations: 0
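A minimal sketch of a "divergence from the BatchNorm mean" style score, assuming snippet features are compared against a BatchNorm layer's running statistics; the exact DFM definition used in BN-WVAD may differ (see the linked repository).

```python
import torch
import torch.nn as nn

def dfm_score(features: torch.Tensor, bn: nn.BatchNorm1d) -> torch.Tensor:
    """Illustrative divergence-from-mean score (not the exact BN-WVAD formulation).

    features: (B, T, C) snippet features; bn: a BatchNorm1d layer over C channels whose
    running statistics summarize the (mostly normal) training distribution.
    """
    mean = bn.running_mean                       # (C,)
    std = torch.sqrt(bn.running_var + bn.eps)    # (C,)
    # normalized distance of each snippet from the BatchNorm mean, averaged over channels
    return ((features - mean) / std).abs().mean(dim=-1)   # (B, T) per-snippet abnormality

# usage: scores = dfm_score(torch.randn(4, 32, 512), nn.BatchNorm1d(512))
```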
CFAN-SDA: Coarse-Fine Aware Network With Static-Dynamic Adaptation for Facial Expression Recognition in Videos
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-27 | DOI: 10.1109/TCSVT.2024.3450652
Dongliang Chen; Guihua Wen; Pei Yang; Huihui Li; Chuyun Chen; Bao Wang
Abstract: Video-based facial expression recognition (FER) is a challenging task due to the dynamic emotional changes across frames in video sequences. This paper proposes a novel coarse-fine aware network with static-dynamic adaptation (CFAN-SDA) for in-the-wild video-based FER. From coarse to fine, our method leverages a cross-domain static FER database to boost video-based FER performance and then explores hierarchical spatial-temporal feature learning. Specifically, unlike existing methods, we design static-dynamic adaptation learning to transfer knowledge from labeled static images to unlabeled video frames, capturing coarse-grained emotion features that identify the important expression-related frames. Furthermore, we present hierarchical spatial-temporal transformers to better learn fine-grained expression features, consisting of a multi-view spatial transformer and a frame-clip temporal transformer. The former captures multi-view spatial region information from global to local, and the latter performs cross-frame and cross-clip temporal interaction to select and fuse key frame-level and clip-level multi-scale temporal information. Extensive experimental results on dynamic FER databases indicate that CFAN-SDA achieves superior performance compared to state-of-the-art models.
Volume 34, Issue 12, pp. 13507-13517
Citations: 0
Deep Lifelong Cross-Modal Hashing
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-27 | DOI: 10.1109/TCSVT.2024.3450490
Liming Xu; Hanqi Li; Bochuan Zheng; Weisheng Li; Jiancheng Lv
Abstract: Hashing methods have made significant progress in cross-modal retrieval tasks thanks to fast query speed and low storage cost. Among them, deep learning-based hashing achieves better performance on large-scale data due to its excellent ability to extract and represent nonlinear heterogeneous features. However, two main challenges remain: catastrophic forgetting when data with new categories arrive continuously, and the time-consuming retraining required by non-continual hashing retrieval to stay up to date. To this end, we propose a novel deep lifelong cross-modal hashing method that achieves lifelong hashing retrieval instead of repeatedly retraining the hash functions whenever new data arrive. Specifically, we design a lifelong learning strategy that updates the hash functions by training directly on the incremental data rather than retraining new hash functions on all accumulated data, which significantly reduces training time. We then propose a lifelong hashing loss that lets the original hash codes participate in lifelong learning while remaining invariant, and further preserves the similarity and dissimilarity between original and incremental hash codes to maintain performance. Additionally, considering the distribution heterogeneity that arises when new data arrive continuously, we introduce an enhanced-semantic similarity to supervise hash learning, and detailed analysis shows that this similarity improves performance. Experimental results on benchmark datasets show that our proposed method achieves competitive performance compared with recent state-of-the-art cross-modal hashing methods, yielding average gains of over 20% in retrieval accuracy while reducing training time by approximately 80% when new data arrive continuously.
Volume 34, Issue 12, pp. 13478-13493
Citations: 0
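As an illustration of a similarity-preserving objective in which original hash codes stay fixed while incremental codes are trained against them, here is a toy version; the actual lifelong hashing loss and enhanced-semantic similarity in the paper are not reproduced here.

```python
import torch

def lifelong_hash_loss(new_codes: torch.Tensor, old_codes: torch.Tensor, sim: torch.Tensor):
    """Toy similarity-preserving objective between incremental and frozen original codes.

    new_codes: (N, K) relaxed hash codes (e.g., tanh outputs) for incremental data
    old_codes: (M, K) fixed binary codes of the original data (kept invariant)
    sim:       (N, M) label similarity, +1 for same-category pairs, -1 otherwise
    """
    K = new_codes.shape[1]
    inner = new_codes @ old_codes.detach().t() / K   # normalized inner product in [-1, 1]
    return ((inner - sim) ** 2).mean()               # push code similarity toward label similarity

# usage: loss = lifelong_hash_loss(torch.tanh(hash_net(x_new)), old_codes, sim_matrix)
```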
Video Quality Assessment for Online Processing: From Spatial to Temporal Sampling
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-26 | DOI: 10.1109/TCSVT.2024.3450085
Jiebin Yan; Lei Wu; Yuming Fang; Xuelin Liu; Xue Xia; Weide Liu
Abstract: With the rapid development of multimedia processing and deep learning technologies, especially in the field of video understanding, video quality assessment (VQA) has achieved significant progress. Although researchers have moved from designing efficient quality-mapping models to various other research directions, in-depth exploration of the effectiveness-efficiency trade-offs of spatio-temporal modeling in VQA models remains insufficient. Given that videos contain highly redundant information, this paper investigates the problem from the perspective of joint spatial and temporal sampling, aiming to answer how little information we can keep when feeding videos into VQA models while incurring an acceptable performance sacrifice. To this end, we drastically sample the video's information along both the spatial and temporal dimensions and feed the heavily squeezed video into a stable VQA model. Comprehensive experiments on joint spatial and temporal sampling are conducted on six public video quality databases, and the results demonstrate acceptable performance of the VQA model even when most of the video information is discarded. Furthermore, with the proposed joint spatial and temporal sampling strategy, we make an initial attempt to design an online VQA model, instantiated with a spatial feature extractor, a temporal feature fusion module, and a global quality regression module that are kept as simple as possible. Through quantitative and qualitative experiments, we verify the feasibility of the online VQA model by simplifying the model and reducing its input.
Volume 34, Issue 12, pp. 13441-13451
Citations: 0
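A toy example of joint spatial and temporal sampling, keeping only a few uniformly spaced frames and a single crop per clip; the paper's sampling rates and crop policy are not specified in the abstract, so the numbers below are placeholders.

```python
import torch

def sample_video(video: torch.Tensor, num_frames: int = 8, crop: int = 224) -> torch.Tensor:
    """Toy joint spatio-temporal sampling: keep a few frames and one spatial crop.

    video: (T, C, H, W) full video tensor with H, W >= crop; the heavily reduced clip
    is what would be fed to the VQA model.
    """
    T, C, H, W = video.shape
    idx = torch.linspace(0, T - 1, num_frames).long()      # temporal sampling: uniform frame picks
    frames = video[idx]
    top = torch.randint(0, H - crop + 1, (1,)).item()      # spatial sampling: one random crop
    left = torch.randint(0, W - crop + 1, (1,)).item()
    return frames[:, :, top:top + crop, left:left + crop]  # (num_frames, C, crop, crop)

# usage: clip = sample_video(torch.randn(300, 3, 720, 1280))
```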
Diff-Privacy: Diffusion-Based Face Privacy Protection
IF 8.3, Tier 1 (Engineering & Technology)
IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-08-26 | DOI: 10.1109/TCSVT.2024.3449290
Xiao He; Mingrui Zhu; Dongxin Chen; Nannan Wang; Xinbo Gao
Abstract: Privacy protection has become a top priority due to the widespread collection and misuse of personal data. Anonymization and visual identity information hiding are two crucial tasks in face privacy protection, both striving to alter identifying characteristics of face images to prevent privacy leakage. However, the goals of the two tasks are not entirely the same, so training a single model to perform both proves challenging. In this paper, we propose Diff-Privacy, a novel face privacy protection method based on diffusion models that unifies anonymization and visual identity information hiding. Specifically, we present a Multi-Scale image Inversion module (MSI) that, through training, generates a set of Stable Diffusion (SD) format conditional embeddings for the original image. With these conditional embeddings, we design corresponding embedding scheduling strategies and formulate distinct energy functions during the inference process to achieve anonymization and visual identity information hiding, respectively. Extensive experiments demonstrate the effectiveness of the proposed method in protecting face privacy.
Volume 34, Issue 12, pp. 13164-13176
Citations: 0
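The energy-guided inference described in the abstract resembles generic classifier/energy guidance in diffusion sampling. The sketch below shows only that generic pattern; the identity energy, its scale, and the scheduling used by Diff-Privacy are assumptions.

```python
import torch

def energy_guided_step(x: torch.Tensor, noise_pred: torch.Tensor, identity_energy, scale: float = 1.0):
    """Illustrative guidance step: steer the sample using the gradient of an energy function.

    x: current noisy latent (requires_grad enabled); noise_pred: the model's predicted noise;
    identity_energy: callable mapping x to a scalar (e.g., distance to the source identity).
    """
    energy = identity_energy(x)
    grad = torch.autograd.grad(energy, x)[0]
    # guided noise estimate passed to the sampler; sign and scale depend on the
    # sampler's parameterization and the direction the energy should be optimized
    return noise_pred + scale * grad

# usage: eps = energy_guided_step(latent.requires_grad_(True), eps_hat, lambda z: z.pow(2).mean())
```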