Applied AI letters: Latest Publications

Generating visual explanations with natural language
Applied AI letters Pub Date : 2021-11-22 DOI: 10.1002/ail2.55
Lisa Anne Hendricks, Anna Rohrbach, Bernt Schiele, Trevor Darrell, Zeynep Akata
{"title":"Generating visual explanations with natural language","authors":"Lisa Anne Hendricks,&nbsp;Anna Rohrbach,&nbsp;Bernt Schiele,&nbsp;Trevor Darrell,&nbsp;Zeynep Akata","doi":"10.1002/ail2.55","DOIUrl":"10.1002/ail2.55","url":null,"abstract":"<p>We generate natural language explanations for a fine-grained visual recognition task. Our explanations fulfill two criteria. First, explanations are <i>class discriminative</i>, meaning they mention attributes in an image which are important to identify a class. Second, explanations are <i>image relevant</i>, meaning they reflect the actual content of an image. Our system, composed of an explanation sampler and phrase-critic model, generates class discriminative and image relevant explanations. In addition, we demonstrate that our explanations can help humans decide whether to accept or reject an AI decision.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.55","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48200035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
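The sample-then-rerank structure described in this abstract (a sampler proposes candidate sentences, a phrase critic keeps the most image-relevant one) can be sketched as below. This is a minimal illustration assuming the sampler and critic already exist as callables; the function and parameter names are placeholders, not the authors' released code.

```python
# Hypothetical sketch: sample candidate explanations, then keep the one the
# phrase critic scores as most image-relevant and class-discriminative.
from typing import Callable, List, Tuple

def explain(image_feats, predicted_class: str,
            sample_explanation: Callable[[object, str], str],
            critic_score: Callable[[object, str], float],
            num_samples: int = 10) -> Tuple[str, float]:
    """Sample several candidate explanations and rerank them with a critic.

    sample_explanation: draws one candidate sentence conditioned on the image
        features and the predicted class (e.g., from a recurrent decoder).
    critic_score: returns a scalar rating how well the sentence is grounded in
        the actual image content (higher = more image-relevant).
    """
    candidates: List[str] = [
        sample_explanation(image_feats, predicted_class) for _ in range(num_samples)
    ]
    scored = [(critic_score(image_feats, s), s) for s in candidates]
    best_score, best_sentence = max(scored)
    return best_sentence, best_score
```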
Explaining autonomous drones: An XAI journey
Applied AI letters Pub Date : 2021-11-22 DOI: 10.1002/ail2.54
Mark Stefik, Michael Youngblood, Peter Pirolli, Christian Lebiere, Robert Thomson, Robert Price, Lester D. Nelson, Robert Krivacic, Jacob Le, Konstantinos Mitsopoulos, Sterling Somers, Joel Schooler
{"title":"Explaining autonomous drones: An XAI journey","authors":"Mark Stefik,&nbsp;Michael Youngblood,&nbsp;Peter Pirolli,&nbsp;Christian Lebiere,&nbsp;Robert Thomson,&nbsp;Robert Price,&nbsp;Lester D. Nelson,&nbsp;Robert Krivacic,&nbsp;Jacob Le,&nbsp;Konstantinos Mitsopoulos,&nbsp;Sterling Somers,&nbsp;Joel Schooler","doi":"10.1002/ail2.54","DOIUrl":"10.1002/ail2.54","url":null,"abstract":"<p>COGLE (<i>CO</i>mmon <i>G</i>round <i>L</i>earning and <i>E</i>xplanation) is an explainable artificial intelligence (XAI) system where autonomous drones deliver supplies to field units in mountainous areas. The mission risks vary with topography, flight decisions, and mission goals. The missions engage a human plus AI team where users determine which of two AI-controlled drones is better for each mission. This article reports on the technical approach and findings of the project and reflects on challenges that complex combinatorial problems present for users, machine learning, user studies, and the context of use for XAI systems. COGLE creates explanations in multiple modalities. Narrative “What” explanations compare what each drone does on a mission and “Why” based on drone competencies determined from experiments using counterfactuals. Visual “Where” explanations highlight risks on maps to help users to interpret flight plans. One branch of the research studied whether the explanations helped users to predict drone performance. In this branch, a model induction user study showed that <i>post-decision explanations</i> had only a small effect in teaching users to determine by themselves which drone is better for a mission. Subsequent reflection suggests that supporting human plus AI decision making with <i>pre-decision explanations</i> is a better context for benefiting from explanations on combinatorial tasks.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.54","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47270045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Generating and evaluating explanations of attended and error-inducing input regions for VQA models
Applied AI letters Pub Date : 2021-11-12 DOI: 10.1002/ail2.51
Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas
{"title":"Generating and evaluating explanations of attended and error-inducing input regions for VQA models","authors":"Arijit Ray,&nbsp;Michael Cogswell,&nbsp;Xiao Lin,&nbsp;Kamran Alipour,&nbsp;Ajay Divakaran,&nbsp;Yi Yao,&nbsp;Giedrius Burachas","doi":"10.1002/ail2.51","DOIUrl":"https://doi.org/10.1002/ail2.51","url":null,"abstract":"<p>Attention maps, a popular heatmap-based explanation method for Visual Question Answering, are supposed to help users understand the model by highlighting portions of the image/question used by the model to infer answers. However, we see that users are often misled by current attention map visualizations that point to relevant regions despite the model producing an incorrect answer. Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err. Error maps can indicate when a correctly attended region may be processed incorrectly leading to an incorrect answer, and hence, improve users' understanding of those cases. To evaluate our new explanations, we further introduce a metric that simulates users' interpretation of explanations to evaluate their potential helpfulness to understand model correctness. We finally conduct user studies to see that our new explanations help users understand model correctness better than baselines by an expected 30% and that our proxy helpfulness metrics correlate strongly (<math>\u0000 <mi>ρ</mi>\u0000 <mo>&gt;</mo>\u0000 <mn>0.97</mn></math>) with how well users can predict model correctness.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.51","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137830967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
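The ρ > 0.97 figure is a rank correlation between the proxy helpfulness metric and how accurately users predict model correctness. A minimal sketch of that kind of check, assuming one proxy score and one measured user accuracy per explanation condition (scipy supplies the Spearman correlation); the function name and inputs are illustrative, not the paper's evaluation code.

```python
# Hypothetical sketch: compare a simulated-user helpfulness score per
# explanation condition with the accuracy humans achieved when predicting
# whether the VQA model answered correctly in that condition.
from scipy.stats import spearmanr

def proxy_vs_user_correlation(proxy_helpfulness, user_prediction_accuracy):
    """Spearman rank correlation between the proxy helpfulness metric and
    measured human accuracy at predicting model correctness.

    Both arguments are sequences with one entry per explanation condition
    (e.g., attention maps only, attention + error maps, no explanation).
    """
    rho, p_value = spearmanr(proxy_helpfulness, user_prediction_accuracy)
    return rho, p_value
```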
Computer Vision and Machine Learning Techniques for Quantification and Predictive Modeling of Intracellular Anti‐Cancer Drug Delivery by Nanocarriers
Applied AI letters Pub Date : 2021-11-10 DOI: 10.1002/ail2.50
S. Goswami, Kshama D. Dhobale, R. Wavhale, B. Goswami, S. Banerjee
{"title":"Computer Vision and Machine Learning Techniques for Quantification and Predictive Modeling of Intracellular\u0000 Anti‐Cancer\u0000 Drug Delivery by Nanocarriers","authors":"S. Goswami, Kshama D. Dhobale, R. Wavhale, B. Goswami, S. Banerjee","doi":"10.1002/ail2.50","DOIUrl":"https://doi.org/10.1002/ail2.50","url":null,"abstract":"","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45346424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving users' mental model with attention-directed counterfactual edits
Applied AI letters Pub Date : 2021-11-06 DOI: 10.1002/ail2.47
Kamran Alipour, Arijit Ray, Xiao Lin, Michael Cogswell, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas
{"title":"Improving users' mental model with attention-directed counterfactual edits","authors":"Kamran Alipour,&nbsp;Arijit Ray,&nbsp;Xiao Lin,&nbsp;Michael Cogswell,&nbsp;Jurgen P. Schulze,&nbsp;Yi Yao,&nbsp;Giedrius T. Burachas","doi":"10.1002/ail2.47","DOIUrl":"https://doi.org/10.1002/ail2.47","url":null,"abstract":"<p>In the domain of visual question answering (VQA), studies have shown improvement in users' mental model of the VQA system when they are exposed to examples of how these systems answer certain image-question (IQ) pairs. In this work, we show that showing controlled counterfactual IQ examples are more effective at improving the mental model of users as compared to simply showing random examples. We compare a generative approach and a retrieval-based approach to show counterfactual examples. We use recent advances in generative adversarial networks to generate counterfactual images by deleting and inpainting certain regions of interest in the image. We then expose users to changes in the VQA system's answer on those altered images. To select the region of interest for inpainting, we experiment with using both human-annotated attention maps and a fully automatic method that uses the VQA system's attention values. Finally, we test the user's mental model by asking them to predict the model's performance on a test counterfactual image. We note an overall improvement in users' accuracy to predict answer change when shown counterfactual explanations. While realistic retrieved counterfactuals obviously are the most effective at improving the mental model, we show that a generative approach can also be equally effective.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.47","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137648971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
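A minimal sketch of the automatic variant described above: pick the most-attended region, delete and inpaint it, and compare the VQA answer before and after. The inpaint and vqa_answer callables stand in for a GAN inpainter and a VQA model; the patch size and masking details are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of attention-directed counterfactual editing.
import numpy as np

def counterfactual_edit(image: np.ndarray, question: str,
                        attention: np.ndarray, inpaint, vqa_answer,
                        patch: int = 64):
    """attention: 2D map over the image (same H x W); higher = more attended."""
    # Center the removed patch on the attention peak.
    y, x = np.unravel_index(np.argmax(attention), attention.shape)
    h, w = attention.shape
    y0, y1 = max(0, y - patch // 2), min(h, y + patch // 2)
    x0, x1 = max(0, x - patch // 2), min(w, x + patch // 2)

    mask = np.zeros((h, w), dtype=bool)
    mask[y0:y1, x0:x1] = True

    edited = inpaint(image, mask)                 # fill the deleted region plausibly
    original_answer = vqa_answer(image, question)
    edited_answer = vqa_answer(edited, question)  # did the answer change?
    return edited, original_answer, edited_answer
```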
Remembering for the right reasons: Explanations reduce catastrophic forgetting
Applied AI letters Pub Date : 2021-11-05 DOI: 10.1002/ail2.44
Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell
{"title":"Remembering for the right reasons: Explanations reduce catastrophic forgetting","authors":"Sayna Ebrahimi,&nbsp;Suzanne Petryk,&nbsp;Akash Gokul,&nbsp;William Gan,&nbsp;Joseph E. Gonzalez,&nbsp;Marcus Rohrbach,&nbsp;Trevor Darrell","doi":"10.1002/ail2.44","DOIUrl":"https://doi.org/10.1002/ail2.44","url":null,"abstract":"<p>The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting. Previous work has shown that leveraging memory in the form of a replay buffer can reduce performance degradation on prior tasks. We hypothesize that forgetting can be further reduced when the model is encouraged to remember the <i>evidence</i> for previously made decisions. As a first step towards exploring this hypothesis, we propose a simple novel training paradigm, called Remembering for the Right Reasons (RRR), that additionally stores visual model explanations for each example in the buffer and ensures the model has “the right reasons” for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time. Without this constraint, there is a drift in explanations and increase in forgetting as conventional continual learning algorithms learn new tasks. We demonstrate how RRR can be easily added to any memory or regularization-based approach and results in reduced forgetting, and more importantly, improved model explanations. We have evaluated our approach in the standard and few-shot settings and observed a consistent improvement across various CL approaches using different architectures and techniques to generate model explanations and demonstrated our approach showing a promising connection between explainability and continual learning. Our code is available at https://github.com/SaynaEbrahimi/Remembering-for-the-Right-Reasons.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.44","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137488003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
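A minimal sketch of the RRR objective as the abstract describes it, assuming a PyTorch model and a user-chosen saliency function (for example Grad-CAM): the replay buffer stores the explanation saved when each example was learned, and an extra term penalizes drift away from it. This is a reconstruction from the abstract, not the reference implementation linked above.

```python
# Illustrative RRR-style replay loss: keep remembering both the label and the
# saliency map ("the reason") that was saved when the example was first learned.
import torch
import torch.nn.functional as F

def rrr_replay_loss(model, saliency_fn, buffer, lam: float = 1.0):
    """buffer: list of (x, y, saved_saliency) tuples stored during earlier tasks."""
    total = torch.tensor(0.0)
    for x, y, saved_saliency in buffer:
        logits = model(x)
        task_loss = F.cross_entropy(logits, y)          # remember the decision
        current_saliency = saliency_fn(model, x, y)     # explanation produced now
        explanation_loss = F.l1_loss(current_saliency, saved_saliency)
        total = total + task_loss + lam * explanation_loss  # remember the reason too
    return total / max(len(buffer), 1)
```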
Patching interpretable And-Or-Graph knowledge representation using augmented reality
Applied AI letters Pub Date : 2021-10-20 DOI: 10.1002/ail2.43
Hangxin Liu, Yixin Zhu, Song-Chun Zhu
{"title":"Patching interpretable And-Or-Graph knowledge representation using augmented reality","authors":"Hangxin Liu,&nbsp;Yixin Zhu,&nbsp;Song-Chun Zhu","doi":"10.1002/ail2.43","DOIUrl":"10.1002/ail2.43","url":null,"abstract":"<p>We present a novel augmented reality (AR) interface to provide effective means to diagnose a robot's erroneous behaviors, endow it with new skills, and patch its knowledge structure represented by an And-Or-Graph (AOG). Specifically, an AOG representation of opening medicine bottles is learned from human demonstration and yields a hierarchical structure that captures the spatiotemporal compositional nature of the given task, which is highly interpretable for the users. Through a series of psychological experiments, we demonstrate that the explanations of a robotic system, inherited from and produced by the AOG, can better foster human trust compared to other forms of explanations. Moreover, by visualizing the knowledge structure and robot states, the AR interface allows human users to intuitively understand what the robot knows, supervise the robot's task planner, and interactively teach the robot with new actions. Together, users can quickly identify the reasons for failures and conveniently patch the current knowledge structure to prevent future errors. This capability demonstrates the interpretability of our knowledge representation and the new forms of interactions afforded by the proposed AR interface.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.43","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46548240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
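For readers unfamiliar with And-Or-Graphs, a minimal sketch of the structure follows: AND nodes decompose a task into ordered sub-steps, OR nodes list alternative ways of achieving the same step, and terminals are primitive actions. The field names and the plan-enumeration helper are illustrative, not the paper's representation.

```python
# Illustrative And-Or-Graph skeleton and a helper that expands it into the
# concrete action sequences the graph encodes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AOGNode:
    name: str
    kind: str                      # "and" (ordered decomposition), "or" (alternatives), "terminal" (primitive action)
    children: List["AOGNode"] = field(default_factory=list)

def enumerate_plans(node: AOGNode) -> List[List[str]]:
    """Expand the graph into every concrete action sequence it represents."""
    if node.kind == "terminal":
        return [[node.name]]
    if node.kind == "or":
        return [plan for child in node.children for plan in enumerate_plans(child)]
    # "and" node: concatenate one plan choice per child, preserving order
    plans: List[List[str]] = [[]]
    for child in node.children:
        plans = [p + q for p in plans for q in enumerate_plans(child)]
    return plans
```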
Explainable, interactive content-based image retrieval
Applied AI letters Pub Date : 2021-10-19 DOI: 10.1002/ail2.41
Bhavan Vasu, Brian Hu, Bo Dong, Roddy Collins, Anthony Hoogs
{"title":"Explainable, interactive content-based image retrieval","authors":"Bhavan Vasu,&nbsp;Brian Hu,&nbsp;Bo Dong,&nbsp;Roddy Collins,&nbsp;Anthony Hoogs","doi":"10.1002/ail2.41","DOIUrl":"10.1002/ail2.41","url":null,"abstract":"<p>Quantifying the value of explanations in a human-in-the-loop (HITL) system is difficult. Previous methods either measure explanation-specific values that do not correspond to user tasks and needs or poll users on how useful they find the explanations to be. In this work, we quantify how much explanations help the user through a utility-based paradigm that measures change in task performance when using explanations vs not. Our chosen task is content-based image retrieval (CBIR), which has well-established baselines and performance metrics independent of explainability. We extend an existing HITL image retrieval system that incorporates user feedback with similarity-based saliency maps (SBSM) that indicate to the user which parts of the retrieved images are most similar to the query image. The system helps the user understand what it is paying attention to through saliency maps, and the user helps the system understand their goal through saliency-guided relevance feedback. Using the MS-COCO dataset, a standard object detection and segmentation dataset, we conducted extensive, crowd-sourced experiments validating that SBSM improves interactive image retrieval. Although the performance increase is modest in the general case, in more difficult cases such as cluttered scenes, using explanations yields an 6.5% increase in accuracy. To the best of our knowledge, this is the first large-scale user study showing that visual saliency map explanations improve performance on a real-world, interactive task. Our utility-based evaluation paradigm is general and potentially applicable to any task for which explainability can be incorporated.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.41","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"102959774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
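A minimal sketch of an occlusion-style similarity-based saliency map in the spirit of SBSM: blank out windows of the retrieved image and record how much its feature similarity to the query drops. The embed function, window size, and masking scheme are assumptions for illustration; the paper's exact formulation may differ.

```python
# Illustrative similarity-based saliency: regions whose removal hurts the
# query-to-retrieved similarity the most are marked as most similar/salient.
import numpy as np

def similarity_saliency(query_img, retrieved_img, embed, win: int = 32, stride: int = 16):
    q = embed(query_img)
    base = embed(retrieved_img)
    cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    base_sim = cos(q, base)

    h, w = retrieved_img.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            occluded = retrieved_img.copy()
            occluded[y:y + win, x:x + win] = 0          # blank out one window
            drop = base_sim - cos(q, embed(occluded))   # similarity lost without this region
            saliency[y:y + win, x:x + win] += drop
            counts[y:y + win, x:x + win] += 1
    return saliency / np.maximum(counts, 1)             # average drop per pixel
```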
User-guided global explanations for deep image recognition: A user study
Applied AI letters Pub Date : 2021-10-19 DOI: 10.1002/ail2.42
Mandana Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, Prasad Tadepalli
{"title":"User-guided global explanations for deep image recognition: A user study","authors":"Mandana Hamidi-Haines,&nbsp;Zhongang Qi,&nbsp;Alan Fern,&nbsp;Fuxin Li,&nbsp;Prasad Tadepalli","doi":"10.1002/ail2.42","DOIUrl":"https://doi.org/10.1002/ail2.42","url":null,"abstract":"<p>We study a user-guided approach for producing global explanations of deep networks for image recognition. The global explanations are produced with respect to a test data set and give the overall frequency of different “recognition reasons” across the data. Each reason corresponds to a small number of the most significant human-recognizable visual concepts used by the network. The key challenge is that the visual concepts cannot be predetermined and those concepts will often not correspond to existing vocabulary or have labeled data sets. We address this issue via an interactive-naming interface, which allows users to freely cluster significant image regions in the data into visually similar concepts. Our main contribution is a user study on two visual recognition tasks. The results show that the participants were able to produce a small number of visual concepts sufficient for explanation and that there was significant agreement among the concepts, and hence global explanations, produced by different participants.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.42","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"137863524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
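A minimal sketch of how such a global explanation could be tallied once users have clustered and named regions into concepts: count, over the test set, how often each user-named concept appears among an image's significant regions. The names and data structures here are illustrative, not the authors' code.

```python
# Illustrative aggregation of per-image "recognition reasons" into a global
# frequency table over user-named visual concepts.
from collections import Counter
from typing import Dict, List, Tuple

def global_explanation(per_image_concepts: Dict[str, List[str]]) -> List[Tuple[str, float]]:
    """per_image_concepts maps an image id to the user-named concepts found among
    its most significant regions; returns each concept's frequency across the data."""
    counts = Counter()
    for concepts in per_image_concepts.values():
        counts.update(set(concepts))          # count each concept at most once per image
    total = len(per_image_concepts)
    return [(concept, n / total) for concept, n in counts.most_common()]
```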
XAITK: The explainable AI toolkit
Applied AI letters Pub Date : 2021-10-18 DOI: 10.1002/ail2.40
Brian Hu, Paul Tunison, Bhavan Vasu, Nitesh Menon, Roddy Collins, Anthony Hoogs
{"title":"XAITK: The explainable AI toolkit","authors":"Brian Hu,&nbsp;Paul Tunison,&nbsp;Bhavan Vasu,&nbsp;Nitesh Menon,&nbsp;Roddy Collins,&nbsp;Anthony Hoogs","doi":"10.1002/ail2.40","DOIUrl":"10.1002/ail2.40","url":null,"abstract":"<p>Recent advances in artificial intelligence (AI), driven mainly by deep neural networks, have yielded remarkable progress in fields, such as computer vision, natural language processing, and reinforcement learning. Despite these successes, the inability to predict how AI systems will behave “in the wild” impacts almost all stages of planning and deployment, including research and development, verification and validation, and user trust and acceptance. The field of explainable artificial intelligence (XAI) seeks to develop techniques enabling AI algorithms to generate explanations of their results; generally these are human-interpretable representations or visualizations that are meant to “explain” how the system produced its outputs. We introduce the Explainable AI Toolkit (XAITK), a DARPA-sponsored effort that builds on results from the 4-year DARPA XAI program. The XAITK has two goals: (a) to consolidate research results from DARPA XAI into a single publicly accessible repository; and (b) to identify operationally relevant capabilities developed on DARPA XAI and assist in their transition to interested partners. We first describe the XAITK website and associated capabilities. These place the research results from DARPA XAI in the wider context of general research in the field of XAI, and include performer contributions of code, data, publications, and reports. We then describe the XAITK analytics and autonomy software frameworks. These are Python-based frameworks focused on particular XAI domains, and designed to provide a single integration endpoint for multiple algorithm implementations from across DARPA XAI. Each framework generalizes APIs for system-level data and control while providing a plugin interface for existing and future algorithm implementations. The XAITK project can be followed at: https://xaitk.org.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":"2 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.40","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48237805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
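The plugin-interface design mentioned at the end of this abstract (one abstract API, many interchangeable algorithm implementations behind it) is a common Python pattern; a generic sketch follows. It illustrates the pattern only and is NOT the actual XAITK API; see https://xaitk.org for the real interfaces.

```python
# Generic plugin-interface sketch: callers depend on one abstract endpoint,
# while concrete algorithms register themselves by name.
from abc import ABC, abstractmethod
import numpy as np

class SaliencyGenerator(ABC):
    """Single integration endpoint: callers depend only on this interface."""

    @abstractmethod
    def generate(self, image: np.ndarray, model) -> np.ndarray:
        """Return a heatmap with the same spatial shape as `image`."""

_REGISTRY: dict = {}

def register(name: str):
    """Class decorator that makes an implementation discoverable by name."""
    def deco(cls):
        _REGISTRY[name] = cls
        return cls
    return deco

@register("random-baseline")
class RandomSaliency(SaliencyGenerator):
    """Trivial stand-in implementation used here only to show the wiring."""
    def generate(self, image: np.ndarray, model) -> np.ndarray:
        return np.random.rand(*image.shape[:2])

def make_generator(name: str) -> SaliencyGenerator:
    return _REGISTRY[name]()
```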