Latest Articles: IEEE Transactions on Information Forensics and Security

Hypergraph-Driven Anomaly Detection in Dynamic Noisy Graphs
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-15 | DOI: 10.1109/TIFS.2025.3610063
Guanghua Liu; Chenlong Wang; Zhiguo Gong; Jia Zhang; Shuqi Tang; Huan Wang
Abstract: As interactions among elements in applications such as social networks, transaction networks, and IP-IP networks evolve dynamically, anomaly detection in dynamic graphs has become increasingly important for mitigating potentially threatening interactions. Existing methods often assume noise-free graph structures and focus primarily on monitoring structural changes to discover anomalies. Practical applications, however, often involve inaccurate information, individual non-response and dropout, and sampling biases. These factors make dynamic noisy graphs containing structural noise pervasive, which makes anomaly detection more challenging. To address this issue, we propose a novel Hypergraph-driven Anomaly Detection Framework (HADF), which resists the interference of structural noise and adapts to dynamic noisy graphs. HADF consists of a hyper encoder and an embedding enhancer. The hyper encoder leverages inter-edge correlations to generate hyperedges and design their noise-resistant weights, then employs hypergraph convolutional layers to extract basic hyper-embeddings of edges. The embedding enhancer exploits temporal structural correlation and reconstructs multi-head attention to achieve noise-resistant enhancement of the basic hyper-embeddings. Extensive experiments show that HADF resists structural noise and outperforms state-of-the-art methods in identifying anomalous edges in dynamic noisy graphs.
Volume 20, pp. 9848-9863.
Citations: 0
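The hypergraph convolution step mentioned in the abstract can be sketched as follows. This is a generic HGNN-style layer with random-walk degree normalization, not HADF's actual encoder; the incidence matrix, weights, and dimensions are illustrative assumptions.

```python
import numpy as np

def hypergraph_conv(X, H, w):
    """One hypergraph convolution step over a weighted hypergraph.

    X: (n_nodes, d) node features.
    H: (n_nodes, n_edges) incidence matrix (H[i, e] = 1 if node i
       belongs to hyperedge e).
    w: (n_edges,) hyperedge weights (playing the role of the abstract's
       noise-resistant hyperedge weights).
    """
    W = np.diag(w)
    Dv = np.diag(H @ w)          # weighted node degrees
    De = np.diag(H.sum(axis=0))  # hyperedge degrees
    # Propagate features node -> hyperedge -> node, degree-normalized.
    return np.linalg.inv(Dv) @ H @ W @ np.linalg.inv(De) @ H.T @ X

H = np.array([[1, 0],
              [1, 1],
              [0, 1]], dtype=float)  # 3 nodes, 2 hyperedges
X = np.eye(3)                        # one-hot node features
w = np.array([1.0, 0.5])
out = hypergraph_conv(X, H, w)
print(out)  # each row mixes the features of the node's hyperedge neighbors
```

With this normalization, each output row is a convex combination of neighbor features, so rows sum to one; down-weighting a hyperedge (here the second, at 0.5) reduces how much its member nodes influence each other.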
Securing Federated Learning Against Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection on Local Updates
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-15 | DOI: 10.1109/TIFS.2025.3608671
Edoardo Gabrielli; Dimitri Belli; Zoe Matrullo; Vittorio Miori; Gabriele Tolomei
Abstract: Current defense mechanisms against model poisoning attacks in federated learning (FL) systems have proven effective only up to a certain fraction of malicious clients (e.g., 25% to 50%). In this work, we introduce FLANDERS, a novel pre-aggregation filter for FL that is resilient to large-scale model poisoning attacks, i.e., when malicious clients far outnumber legitimate participants. FLANDERS treats the sequence of local models sent by clients in each FL round as a matrix-valued time series. It then identifies malicious client updates as outliers in this time series by comparing actual observations with estimates generated by a matrix autoregressive forecasting model maintained by the server. Experiments conducted in several non-IID FL setups show that FLANDERS significantly improves robustness across a wide spectrum of attacks when paired with standard and robust aggregation methods.
Volume 20, pp. 9610-9624.
Citations: 0
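The core idea of the abstract, flagging client updates whose distance from a server-side forecast is anomalously large, can be illustrated with a minimal sketch. The simple per-client mean forecast and the median-based cutoff below are illustrative stand-ins, not the paper's matrix autoregressive estimator.

```python
import numpy as np

def flag_outlier_updates(history, current, threshold=2.0):
    """Flag client updates that deviate strongly from a forecast.

    history: (rounds, clients, params) array of past local updates.
    current: (clients, params) array of this round's local updates.
    Returns a boolean mask; True marks a suspected poisoned update.
    """
    # Naive forecast: predict each client's update as the mean of its
    # past updates (a stand-in for a matrix autoregressive model).
    forecast = history.mean(axis=0)                      # (clients, params)
    errors = np.linalg.norm(current - forecast, axis=1)  # per-client error
    # Flag clients whose forecast error is far above the median error.
    return errors > np.median(errors) * threshold

rng = np.random.default_rng(0)
history = rng.normal(0.0, 0.1, size=(5, 8, 16))  # 5 rounds, 8 clients
current = rng.normal(0.0, 0.1, size=(8, 16))
current[3] += 5.0                                # one poisoned update
print(flag_outlier_updates(history, current))
```

A median-based cutoff is the natural choice here: unlike the mean, it stays stable even when a large share of updates are poisoned, which is the extreme regime the paper targets.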
FedG3FA: Three-Stage GAN-Aided Target Feature Alignment for Secure Data Sharing in Federated Learning System
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-12 | DOI: 10.1109/TIFS.2025.3609664
Qingxia Li; Yuchen Jiang; Ray Y. Zhong; Xiaochun Cao
Abstract: Federated learning (FL) allows distributed clients to train a model collaboratively without sharing their original data. However, exchanging private model updates often makes traditional FL systems susceptible to privacy leakage. In addition, the performance of existing FL methods is often limited by statistical heterogeneity. To solve both the privacy leakage and statistical heterogeneity problems, we propose a three-stage targeted feature alignment FL framework named FedG3FA. In the first stage, each client trains a generator through generative adversarial training, and this generator is used for data interaction in place of the private model. In the second stage, the generators are aligned by our proposed Domain Pulling Network and then aggregated into a global one. In the third stage, the global generator is used to train the private model of each client. The effectiveness of our method is verified in medical care and computer vision scenarios covering five datasets. The experimental results suggest that our method not only achieves a high level of privacy protection but also maintains competitive classification accuracy.
Volume 20, pp. 9806-9817.
Citations: 0
Privacy-preserving Universal Adversarial Defense for Black-box Models
IF 6.8 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-11 | DOI: 10.1109/tifs.2025.3609104
Qiao Li, Cong Wu, Jing Chen, Zijun Zhang, Kun He, Ruiying Du, Xinxin Wang, Qingchuang Zhao, Yang Liu
Citations: 0
AdvNeRF: Generating 3D Adversarial Meshes With NeRF to Fool Driving Vehicles
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-11 | DOI: 10.1109/TIFS.2025.3609180
Boyuan Zhang; Jiaxu Li; Yucheng Shi; Yahong Han; Qinghua Hu
Abstract: Adversarial attacks on deep neural networks (DNNs) have raised significant concerns, particularly in safety-critical applications such as autonomous driving. Autonomous vehicles rely on both vision and LiDAR sensors for accurate 3D perception of their surroundings, and adversarial vulnerabilities in these models pose serious risks, as they can lead to misinterpretation of sensor data and ultimately endanger safety. While substantial research has been devoted to image-level adversarial attacks, these efforts are predominantly confined to 2D pixel spaces and lack physical realism and applicability in the 3D world. To address these limitations, we introduce AdvNeRF, an approach for generating 3D adversarial meshes that target both vision and LiDAR models simultaneously. AdvNeRF is a transferable targeted adversarial attack that leverages Neural Radiance Fields (NeRF). NeRF ensures the creation of high-quality adversarial objects and enhances attack performance by maintaining consistency across unseen viewpoints, making the adversarial examples robust from multiple angles. Experimental results validate the superior performance of AdvNeRF, demonstrating its ability to degrade the accuracy of 3D object detectors under various conditions. These findings highlight AdvNeRF's potential to consistently undermine the perception systems of autonomous vehicles across different perspectives, marking an advancement in the fields of adversarial attacks and 3D perception security.
Volume 20, pp. 9673-9684.
Citations: 0
Tactics and Techniques Text Classification Based on Adversarial Contrastive Learning and Meta-Path
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-11 | DOI: 10.1109/TIFS.2025.3609218
Yuchun Han; Weiping Wang; Zhe Qu; Shigeng Zhang
Abstract: Tactics and techniques information in Cyber Threat Intelligence (CTI) represents the objectives of attackers and the means through which those objectives are achieved. The classification of tactics and techniques descriptions in CTI has been extensively studied to help security experts interpret attack patterns. Although many recent studies apply deep learning methods to enhance classification performance, they mainly aim to improve average or top-line performance. However, the imbalance between tactic and technique tag samples, as well as text sparsity, can degrade model performance, and this issue has been under-explored. To address it, we propose a tactics and techniques classification model based on adversarial contrastive learning and meta-paths (TTC-ACLM). In TTC-ACLM, a novel text representation learning module is first designed; it combines a pre-trained language model (PLM) with contrastive adversarial methods, adapting better to categories with smaller sample sizes while obtaining stronger text representations. Then, heterogeneous information networks model the rich relationships between texts and labels (tactics and techniques), merging additional information, e.g., processes and tools, to address text sparsity. Next, we define a meta-path-based classifier learning module that maps text, tactics, and meta-path-based context to a set of classifiers, which are applied to the learned text representations for better classification. Finally, classification performance is further improved through a tactics and techniques correlation enhancement matrix. Through in-depth analysis, we demonstrate that the proposed model effectively addresses the impact of sample imbalance and text sparsity, and extensive experimental results indicate that TTC-ACLM achieves state-of-the-art performance.
Volume 20, pp. 10098-10113.
Citations: 0
BESA: Boosting Encoder Stealing Attack With Perturbation Recovery
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-11 | DOI: 10.1109/TIFS.2025.3608665
Xuhao Ren; Haotian Liang; Yajie Wang; Chuan Zhang; Zehui Xiong; Liehuang Zhu
Abstract: To boost encoder stealing attacks against perturbation-based defenses that hinder attack performance, we propose BESA, a boosting encoder stealing attack with perturbation recovery. The core of BESA consists of two modules, perturbation detection and perturbation recovery, which can be combined with canonical encoder stealing attacks. The perturbation detection module uses the feature vectors returned by the target encoder to infer the defense mechanism employed by the service provider. Once the defense mechanism is identified, the perturbation recovery module leverages a well-designed generative model to restore a clean feature vector from the perturbed one. Extensive evaluations on various datasets demonstrate that BESA improves the surrogate encoder accuracy of existing encoder stealing attacks by up to 24.63% when facing state-of-the-art defenses and combinations of multiple defenses.
Volume 20, pp. 10007-10018.
Citations: 0
FADE: A Dataset for Detecting Falling Objects Around Buildings in Video
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-10 | DOI: 10.1109/TIFS.2025.3607254
Zhigang Tu; Zhengbo Zhang; Zitao Gao; Chunluan Zhou; Junsong Yuan; Bo Du
Abstract: Objects falling from buildings, a frequent occurrence in daily life, can cause severe injuries to pedestrians due to the high impact force they exert. Surveillance cameras are often installed around buildings to detect falling objects, but such detection remains challenging due to the small size and fast motion of the objects. Moreover, the field of falling object detection around buildings (FODB) lacks a large-scale dataset for training learning-based detection methods and for standardized evaluation. To address these challenges, we propose FADE, a large and diverse video benchmark dataset. FADE contains 2,611 videos from 25 scenes, featuring 8 falling object categories, 4 weather conditions, and 4 video resolutions. We also develop a novel FODB detection method that effectively leverages motion information and generates small yet high-quality detection proposals. We evaluate our method on FADE against state-of-the-art approaches in generic object detection, video object detection, and moving object detection. The dataset and code are publicly available at https://fadedataset.github.io/FADE.github.io/
Volume 20, pp. 9746-9759.
Citations: 0
sBugChecker: A Systematic Framework for Detecting Solidity Compiler-Introduced Bugs
IF 8.0 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-10 | DOI: 10.1109/TIFS.2025.3608660
Fei Tong; Zihao Li; Guang Cheng; Yujian Zhang; Heng Li
Abstract: A compiler converts smart contract source code into bytecode and is expected to preserve behavioral consistency between them. However, as the compiler is itself a program, it may contain bugs that break this consistency, known as Compiler-Introduced Bugs (CIBs). Of the latest 4,857 verified smart contracts written in Solidity, approximately 58% still use compilers containing at least one CIB. Attackers can exploit CIBs to bypass security checks or inject malicious data, leading to serious security issues; this is even more severe for smart contracts because they cannot be modified after deployment on the blockchain. To this end, this paper proposes sBugChecker, to the best of our knowledge the first systematic framework for automatically and effectively detecting CIBs in Solidity smart contracts. sBugChecker can be readily extended through the rule customization suite we propose, which is based on a domain-specific language. It employs two static analysis methods, pattern matching and symbolic execution, to identify CIBs' triggering conditions and confirm their impact, broadening its detection scope and improving its efficiency. To evaluate sBugChecker, we construct a CIB-mutated smart contract dataset, the first publicly available one for this task. On this dataset, sBugChecker achieves average detection precision, recall, and F-measure of 96.6%, 95.5%, and 96.0%, respectively. Moreover, sBugChecker has successfully discovered real-world deployed smart contracts capable of triggering CIBs.
Volume 20, pp. 9760-9775.
Citations: 0
Stealthy and Effective Clean-Label Backdoor Attack via Adaptive Frequency-Domain Suppression and Trigger Combination
IF 6.8 | CAS Zone 1 | Computer Science
IEEE Transactions on Information Forensics and Security | Pub Date: 2025-09-10 | DOI: 10.1109/tifs.2025.3607257
Chaoying Yuan, Jingpeng Bai, Shumei Yuan, Ni Wei
Citations: 0