Latest Publications from the 2022 IEEE International Conference on Assured Autonomy (ICAA)

Adversarial Email Generation against Spam Detection Models through Feature Perturbation
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00019
Qi Cheng, Anyi Xu, Xiangyang Li, Leah Ding
Abstract: Machine learning-based spam detection models learn from a set of labeled training data and detect spam emails after the training phase. We study a class of vulnerabilities of such detection models, where an attacker can manipulate a trained model into misclassifying maliciously crafted spam emails at the detection phase. However, feature extraction methods often make it difficult to translate a change in the feature space into a change in the textual email space. This paper proposes a new attack method that makes guided changes to text data by taking advantage of generated adversarial examples that purposely modify the features representing an email. We study different feature extraction methods using various Natural Language Processing (NLP) techniques. We develop effective methods to translate adversarial perturbations in the feature space back to a set of “magic words”, or malicious words, in the text space, which can cause misclassifications that are desirable from the attacker’s perspective. We show that our attacks are effective across different datasets and various machine learning methods in white-box, gray-box, and black-box attack settings. Finally, we discuss a preliminary exploration of how to counter such attacks. We hope our findings and analysis will allow future work to perform additional studies of defensive solutions against this new class of attacks.
Citations: 2
Layer-Wise Analysis of Neuron Activation Values for Performance Verification of Artificial Neural Network Classifiers
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00016
Darryl Hond, H. Asgari, Leonardo Symonds, M. Newman
Abstract: Object classification in dynamic, uncontrolled environments is one of the functional elements of safety-critical Autonomous Systems. It is crucial to develop methods for the specification and verification of these elements, and the associated algorithms, in order to gain confidence in the overall safety of Autonomous Systems and their functional and behavioural correctness and adequacy. Artificial Neural Network (ANN) object classifiers must therefore be assured and need to be verified with respect to requirements. A classifier might be required to generalize to a satisfactory extent, in the sense that its classification performance must be maintained at an acceptable level when the input data differs from the training data. This requirement would apply when data received during operation is drawn from a different distribution to the training data. The specification and verification of classifier generalization capability can be based on measures of the dissimilarity between operational and training data. A requirement could state the permitted forms of the relationship between classification performance and a data dissimilarity measure. We have previously proposed such a dissimilarity measure, which we have termed the Neuron Region Distance (NRD). The NRD is a function of network activation values. In this paper, we analyze neuron activation values layer-by-layer across a neural network, in order to advance towards a novel, generalized form of the NRD. This new measure is called the Per Neuron Ranking (PNR) measure. The activation value analysis provides insight into the required formulation of the PNR measure.
Citations: 1
On Using Real-Time Reachability for the Safety Assurance of Machine Learning Controllers
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00010
Patrick Musau, Nathaniel P. Hamilton, Diego Manzanas Lopez, Preston K. Robinette, Taylor T. Johnson
Abstract: Over the last decade, advances in machine learning and sensing technology have paved the way for the belief that safe, accessible, and convenient autonomous vehicles may be realized in the near future. Despite the prolific competencies of machine learning models for learning the nuances of sensing, actuation, and control, they are notoriously difficult to assure. The challenge here is that some models, such as neural networks, are “black box” in nature, making verification and validation difficult, and sometimes infeasible. Moreover, these models are often tasked with operating in uncertain and dynamic environments where design time assurance may only be partially transferable. Thus, it is critical to monitor these components at runtime. One approach for providing runtime assurance of systems with unverified components is the simplex architecture, where an unverified component is wrapped with a safety controller and a switching logic designed to prevent dangerous behavior. In this paper, we propose the use of a real-time reachability algorithm for the implementation of such an architecture for the safety assurance of a 1/10 scale open source autonomous vehicle platform known as F1/10. The reachability algorithm (a) provides provable guarantees of safety, and (b) is used to detect potentially unsafe scenarios. In our approach, the need to analyze the underlying controller is abstracted away, instead focusing on the effects of the controller’s decisions on the system’s future states. We demonstrate the efficacy of our architecture through experiments conducted both in simulation and on an embedded hardware platform.
Citations: 12
Adversarially Robust Edge-Based Object Detection for Assuredly Autonomous Systems
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00021
Robert Canady, Xingyu Zhou, Yogesh D. Barve, D. Balasubramanian, A. Gokhale
Abstract: Edge-based, autonomous deep learning computer vision applications, such as those used in surveillance or traffic management, must be assuredly correct and performant. However, realizing these applications in practice incurs a number of challenges. First, the constraints on edge resources preclude the use of large deep learning computer vision models. Second, the heterogeneity in edge resource types causes different execution speeds and energy consumption during model inference. Third, deep learning models are known to be vulnerable to adversarial perturbations, which can make them ineffective or lead to incorrect inferences. Although some research that addresses the first two challenges exists, defending against adversarial attacks at the edge remains mostly an unresolved problem. To that end, this paper presents techniques to realize robust, edge-based deep learning computer vision applications, thereby providing a level of assured autonomy. We utilize state-of-the-art (SOTA) object detection attacks from the TOG (adversarial objectness gradient attacks) suite to design a generalized adversarial robustness evaluation procedure. It enables fast robustness evaluations of popular object detection architectures, namely YOLOv3, YOLOv3-tiny, and Faster R-CNN with different image classification backbones. We explore two variations of adversarial training. The first variant augments the training data with multiple types of attacks. The second variant exchanges a clean image in the training set for a randomly chosen adversarial image. Our solutions are then evaluated using the PASCAL VOC dataset. Using the first variant, we improve the robustness of YOLOv3-tiny models by 1–2% mean average precision (mAP), and YOLOv3 realized an improvement of up to 17% mAP on attacked data. The second variant saw even better results in some cases, with improvements in robustness of over 25% for YOLOv3. The Faster R-CNN models also saw improvement, though less substantial, at around 10–15%; their mAP improved on clean data as well.
Citations: 1
Explainable Forecasts of Disruptive Events using Recurrent Neural Networks
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00017
A. Buczak, Benjamin D. Baugher, Adam J. Berlier, Kayla E. Scharfstein, Christine S. Martin
Abstract: This paper describes the Crystal Cube method we developed for forecasting disruptive events around the world, specifically Irregular Leadership Change. Crystal Cube uses a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units for forecasting. In this paper, special emphasis is put on explaining the network's forecasts. We use SHapley Additive exPlanations (SHAP) for individual forecast explanations and aggregate the explanations separately for True Positives, False Positives, True Negatives, and False Negatives. The method can be extended to Deep Reinforcement Learning models for self-driving cars or unmanned fighter jets.
Citations: 1
Hallmarks of an Autonomous Space System’s Development and V&V
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00025
M. Feather
Abstract: NASA’s deep space missions are dependent on autonomy when conditions require a faster response than can be directed by communication to and from Earth. Verification and Validation (V&V) of the autonomy is an essential step for missions to be confident in its use. This paper provides an overview of the development and V&V of one such autonomous system, DIMES (Descent Image Motion Estimation System), used successfully to reduce a critical mission risk during the landings of NASA’s two Mars Exploration Rovers on Mars in 2004.
Citations: 0
Reference architectures for autonomous on-orbit servicing, assembly and manufacturing (OSAM) mission resilience
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00024
Nathaniel G. Gordon, Gregory Falco
Abstract: On-orbit servicing, assembly and manufacturing (OSAM) missions promise to help reduce space debris and prolong the life of space vehicles. OSAM systems will require increasing degrees of autonomy given the complexity of servicing missions. This complexity exposes the system to a variety of failures that could be precipitated by mechanical faults, software bugs, environmental factors, or adversaries. These resource-intensive and risk-prone missions will require a high degree of assurance to be operationally feasible. This paper proposes a series of autonomous OSAM reference architectures that can be engaged to evaluate assurance challenges spanning faults to cyber resilience. The future success of these missions will require a high degree of mission resilience so that space vehicles can adapt to and mitigate consequences in a highly dynamic environment. Opportunities to address assurance challenges and enable mission resilience are also discussed as future work.
Citations: 2
A Mapping of Assurance Techniques for Learning Enabled Autonomous Systems to the Systems Engineering Lifecycle
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00013
Christian Ellis, Maggie B. Wigness, L. Fiondella
Abstract: Learning enabled autonomous systems provide increased capabilities compared to traditional systems. However, the complexity and probabilistic nature of the underlying methods enabling such capabilities present challenges for current systems engineering processes for assurance and for test, evaluation, verification, and validation (TEVV). This paper provides a preliminary attempt to map recently developed technical approaches in the assurance and TEVV of learning enabled autonomous systems (LEAS) literature to a traditional systems engineering v-model. The mapping categorizes such techniques into three main approaches: development, acquisition, and sustainment. It reviews the latest techniques for developing safe, reliable, and resilient learning enabled autonomous systems, without recommending radical or impractical changes to existing systems engineering processes. By performing this mapping, we seek to assist acquisition professionals by (i) informing comprehensive test and evaluation planning, and (ii) objectively communicating risk to leaders.
Citations: 1
Focusing on the Ethical Challenges of Data Breaches and Applications
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00018
Karen Joisten, Nicole Thiemer, Tobias J. Renner, Anke Janssen, Alexander Scheffler
Abstract: Ethical challenges of the human lifeworld that are caused by data breaches and applications are steadily increasing. Therefore, a new ethical concept must be brought into focus: Technoethics for Emerging Digital Systems (TEDS). TEDS is presented as an integrative and innovative approach committed to an interdisciplinary perspective. Thereby, TEDS reflects all social areas of the human lifeworld in their ethical scope. With recourse to phenomenological methods, TEDS helps to address ethical implications which arise from deep interference of autonomous systems with the human lifeworld. The meaning of intentional structures and the problem of appresentations in the phenomenological sense still represent a gap in the current ethical discourse on the problem of data breaches. The findings provide methods for dealing with ethical challenges and explain the problem area of appropriate technoethical use in the lifeworld. In this way, problems can already be avoided in the development process of artificial intelligence systems and their applications by specifically searching for blind spots in a technical and ethical manner. Furthermore, this approach helps to assure a technoethical use of autonomous systems in an appropriate way and ultimately limits the damage that may occur in the case of malfunctions and data breaches of artificial intelligence systems in the lifeworld of humans. Our contribution is to introduce TEDS as a new ethical concept that has not existed before. This new concept focuses on the application of phenomenological methods to detect ethical errors in digital systems.
Citations: 0
Discovery of AI/ML Supply Chain Vulnerabilities within Automotive Cyber-Physical Systems
Pub Date: 2022-03-01 | DOI: 10.1109/ICAA52185.2022.00020
Daniel Williams, Chelece Clark, Rachel McGahan, Bradley Potteiger, Daniel Cohen, Patrick Musau
Abstract: Steady advancement in Artificial Intelligence (AI) development over recent years has caused AI systems to become more readily adopted across industry and military use-cases globally. As powerful as these algorithms are, there are still open questions regarding their security and reliability. Beyond adversarial machine learning, software supply chain vulnerabilities and model backdoor injection exploits are emerging as potential threats to the physical safety of AI-reliant CPS such as autonomous vehicles. In this work-in-progress paper, we introduce the concept of AI supply chain vulnerabilities along with a proof-of-concept autonomous exploitation framework. We investigate the viability of algorithm backdoors and software third-party library dependencies for applicability in modern AI attack kill chains. We leverage an autonomous vehicle case study to demonstrate the applicability of our offensive methodologies within a realistic AI CPS operating environment.
Citations: 0