Proceedings of the AAAI Conference on Human Computation and Crowdsourcing: Latest Publications

StreamCollab: A Streaming Crowd-AI Collaborative System to Smart Urban Infrastructure Monitoring in Social Sensing
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18950
Authors: Yang Zhang, Lanyu Shang, Ruohan Zong, Zengwu Wang, Ziyi Kou, Dong Wang
Abstract: Social sensing has emerged as a pervasive and scalable sensing paradigm to collect observations of the physical world from human sensors. A key advantage of social sensing is its infrastructure-free nature. In this paper, we focus on a streaming urban infrastructure monitoring (Streaming UIM) problem in social sensing. The goal is to automatically detect urban infrastructure damage from the streaming imagery data posted on social media by exploring the collective power of both AI and human intelligence from crowdsourcing systems. Our work is motivated by the limitation of current AI and crowdsourcing solutions, which either fail in many critical time-sensitive UIM application scenarios or are not easily generalizable to monitoring damage to different types of urban infrastructure. We identify two critical challenges in solving our problem: i) it is difficult to dynamically integrate AI and crowd intelligence to effectively identify and fix the failure cases of AI solutions; ii) it is non-trivial to obtain accurate human intelligence from unreliable crowd workers in streaming UIM applications. In this paper, we propose StreamCollab, a streaming crowd-AI collaborative system that explores the collaborative intelligence of AI and the crowd to solve the streaming UIM problem. Evaluation results on a real-world urban infrastructure imagery dataset collected from social media demonstrate that StreamCollab consistently outperforms both state-of-the-art AI and crowd-AI baselines in UIM accuracy while maintaining the lowest computational cost.
Citations: 3
Iterative Human-in-the-Loop Discovery of Unknown Unknowns in Image Datasets
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18941
Authors: Lei Han, Xiao-Lu Dong, Gianluca Demartini
Abstract: Automatic predictions (e.g., recognizing objects in images) may result in systematic errors if certain classes are not well represented by training instances (these errors are called unknowns). When a model assigns high confidence scores to these wrong predictions (this type of error is called unknown unknowns), it becomes challenging to identify them automatically. In this paper, we present the first work on leveraging human intelligence to discover unknown unknowns (UUs) in an iterative way. The proposed methodology first differentiates the feature space generated by crowd workers labelling instances (e.g., images) in an active learning fashion from the space learned by the prediction model over a batch training phase, and thus identifies the predictions most likely to be UUs. Next, we add crowd labels collected for these discovered UUs to the training set and re-train the model with this extended dataset. This process is then repeated iteratively to discover more instances of both unknown and under-represented classes. Our experimental results show that the proposed methodology is able to (1) efficiently discover UUs, (2) significantly improve the quality of model predictions, and (3) push UUs into known unknowns (i.e., the model still makes mistakes, but its classification confidence on those instances is low, so those predictions can be discarded or post-processed) for further investigation. We additionally discuss the trade-off between prediction quality improvements and the human effort required to achieve them. Our results have implications for building cost-effective systems to discover UUs with humans in the loop.
Citations: 6
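The core distinction in this abstract (unknown unknowns are predictions that are both wrong and highly confident, while known unknowns are wrong but low-confidence) can be sketched in a few lines. This is an illustrative sketch only; the function name and the fixed confidence threshold are assumptions, not the paper's actual active-learning procedure:

```python
import numpy as np

def unknown_unknown_candidates(probs, labels, threshold=0.9):
    """Flag predictions that are wrong yet highly confident.

    probs:  (n_samples, n_classes) predicted class probabilities
    labels: (n_samples,) reference labels (e.g., from crowd workers)
    """
    preds = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    # Unknown unknowns: confident AND wrong.
    # (Wrong but unconfident predictions are "known unknowns".)
    return np.where((preds != labels) & (confidence >= threshold))[0]

probs = np.array([[0.95, 0.05],   # confident and wrong  -> UU candidate
                  [0.55, 0.45],   # unconfident and wrong -> known unknown
                  [0.10, 0.90]])  # confident and correct
labels = np.array([1, 1, 1])
print(unknown_unknown_candidates(probs, labels))  # -> [0]
```

The paper's iterative loop would then send such candidates to crowd workers, add their labels to the training set, and re-train.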
On the Bayesian Rational Assumption in Information Design
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18945
Authors: Wei Tang, Chien-Ju Ho
Abstract: We study the problem of information design in human-in-the-loop systems, where the sender (the system) aims to design an information disclosure policy to influence the receiver (the user) in making decisions. This problem is ubiquitous in systems with humans in the loop: recommendation systems might choose whether to present others' reviews to encourage users to follow recommendations; online retailers might choose which set of product features to present to persuade buyers to make a purchase. Among the flourishing literature on information design, Bayesian persuasion has been one of the most prominent efforts to formalize this problem and has spurred various research studies in both economics and computer science. While there has been significant progress in characterizing optimal information disclosure policies and the corresponding computational complexity, one common assumption in this line of research is that the receiver is Bayesian rational, i.e., the receiver processes information in a Bayesian manner and takes actions to maximize her expected utility. However, as empirically observed in the literature, this assumption might not hold in real-world scenarios. In this work, we relax this common Bayesian rational assumption in information design in the persuasion setting. In particular, we develop an alternative framework for information design based on discrete choice models and probability weighting to account for this relaxation. Moreover, we conduct online behavioral experiments on Amazon Mechanical Turk and demonstrate that our framework better explains real-world user behavior and leads to more effective information design policies.
Citations: 6
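"Probability weighting" in this abstract refers to the well-documented tendency of people to over-weight small probabilities and under-weight large ones. One standard one-parameter form is Prelec's weighting function; the sketch below uses it purely as an illustration of the kind of non-Bayesian distortion the framework accounts for, not as the paper's exact model:

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec's probability weighting function, w(p) = exp(-(-ln p)^alpha).

    For alpha < 1 this over-weights small probabilities and under-weights
    large ones, a common empirical deviation from Bayesian rationality.
    The alpha value here is an illustrative choice.
    """
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-((-math.log(p)) ** alpha))

# A 1% event is perceived as far more likely than 1%;
# a 99% event as somewhat less likely than 99%:
print(round(prelec_weight(0.01), 3), round(prelec_weight(0.99), 3))
```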
Utility of Crowdsourced User Experiments for Measuring the Central Tendency of User Performance to Evaluate Error-Rate Models on GUIs
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18948
Authors: Shota Yamanaka
Abstract: The usage of crowdsourcing to recruit numerous participants has been recognized as beneficial in the human-computer interaction (HCI) field, such as for designing user interfaces and validating user performance models. In this work, we investigate its effectiveness for evaluating an error-rate prediction model in target pointing tasks. In contrast to models for operational times, a clicking error (i.e., missing a target) occurs by chance at a certain probability, e.g., 5%. Therefore, in traditional laboratory-based experiments, many repetitions are needed to measure the central tendency of error rates. We hypothesize that recruiting many workers enables the number of repetitions per worker to be kept much smaller. We collected data from 384 workers and found that existing models of operational time and error rate showed good fits (both R^2 > 0.95). A simulation in which we varied the number of participants N_P and the number of repetitions N_repeat showed that the time prediction model was robust against small N_P and N_repeat, although the error-rate model fitness was considerably degraded. These findings empirically demonstrate a new utility of crowdsourced user experiments for recruiting numerous participants, which should be of great use to HCI researchers for their evaluation studies.
Citations: 5
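The statistical intuition behind the hypothesis above is that what matters for pinning down a rare-event rate is the total trial count N_P x N_repeat, so many workers can substitute for many repetitions per worker. A minimal simulation of that pooling idea (a simplification: each click misses independently at a fixed rate, which is not the paper's full pointing model):

```python
import random

def pooled_error_rate(n_workers, n_repeats, true_rate, seed=0):
    """Estimate a condition's error rate by pooling a few trials
    from many workers (total trials = n_workers * n_repeats)."""
    rng = random.Random(seed)
    total = n_workers * n_repeats
    misses = sum(rng.random() < true_rate for _ in range(total))
    return misses / total

# 384 workers x 10 repetitions pools 3840 trials, enough to estimate a
# ~5% miss rate closely, even though any single worker's 10 trials
# would be far too noisy on their own.
est = pooled_error_rate(n_workers=384, n_repeats=10, true_rate=0.05)
print(round(est, 3))
```

The standard error of a proportion shrinks as 1/sqrt(total trials), which is why the pooled estimate stabilizes regardless of how the trials are split between workers and repetitions.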
Human+AI Crowd Task Assignment Considering Result Quality Requirements
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18943
Authors: Masaki Kobayashi, Kei Wakabayashi, Atsuyuki Morishima
Abstract: This paper addresses the problem of dynamically assigning tasks to a crowd consisting of AI and human workers. Currently, crowdsourcing the creation of AI programs is a common practice. To apply such AI programs to a set of tasks, we often take the "all-or-nothing" approach of waiting for the AI to become good enough. However, this approach may prevent us from exploiting the answers provided by the AI until the process is completed, and it also prevents the exploration of different AI candidates. Therefore, integrating the created AI, both with other AIs and with human computation, to obtain a more efficient human-AI team is not trivial. In this paper, we propose a method that addresses these issues by adopting a "divide-and-conquer" strategy for AI worker evaluation. Here, the assignment is optimal when the number of task assignments to humans is minimal, as long as the final results satisfy a given quality requirement. This paper presents theoretical analyses of the proposed method and an extensive set of experiments conducted with open benchmarks and real-world datasets. The results show that the algorithm can assign many more tasks than the baselines to AI when it is difficult for AIs to satisfy the quality requirement for the whole set of tasks. They also show that it can flexibly change the number of tasks assigned to multiple AI workers in accordance with the performance of the available AI workers.
Citations: 5
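The assignment objective described above (minimize human assignments subject to a quality requirement) can be caricatured as a per-subset routing rule: send a subset of tasks to an AI only where its estimated accuracy meets the requirement. Everything below is a hypothetical sketch; the category-based split, the accuracy estimates, and all names are illustrative, not the paper's divide-and-conquer algorithm:

```python
def assign_tasks(tasks, ai_accuracy, quality_req):
    """Route each task to AI if the AI's estimated accuracy on that task's
    category meets the quality requirement; otherwise fall back to humans.

    tasks:       list of (task_id, category) pairs
    ai_accuracy: category -> estimated AI accuracy (e.g., from a held-out
                 evaluation sample), standing in for AI-worker evaluation
    """
    ai_tasks, human_tasks = [], []
    for task_id, category in tasks:
        if ai_accuracy.get(category, 0.0) >= quality_req:
            ai_tasks.append(task_id)
        else:
            human_tasks.append(task_id)
    return ai_tasks, human_tasks

tasks = [("img1", "cat"), ("img2", "dog"), ("img3", "cat")]
ai_acc = {"cat": 0.97, "dog": 0.80}
ai, human = assign_tasks(tasks, ai_acc, quality_req=0.95)
print(ai, human)  # -> ['img1', 'img3'] ['img2']
```

The point of the divide step is visible even here: the AI need not clear the quality bar on the whole task set to be useful, only on some partition of it.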
Pterodactyl: Two-Step Redaction of Images for Robust Face Deidentification
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18937
Authors: Abdullah B. Alshaibani, Alexander J. Quinn
Abstract: Redacting faces in images is trivial when the number of faces is small and the annotator is trusted. For large batches, automated face detection has been the only viable solution, yet even the best ML-based solutions have error rates that would be unacceptable for sensitive applications. Crowd-based face detection/redaction systems exist, yet their process and cost make them infeasible. We present Pterodactyl, a system for detecting (and redacting) faces at scale. It uses the AdaptiveFocus filter, which splits the image into smaller regions and uses machine learning to select a median filter for each region, hiding the facial identities in the image while still allowing those faces to be detected by crowd workers. The filter uses a convolutional neural network trained on images associated with the median filter level that allows detection and prevents identification. This filter allows Pterodactyl to achieve human-level detection with just 14% of the crowd labor of another recent crowd-based face detection/redaction system (IntoFocus). Our evaluation found that the redaction accuracy was higher than a commercial machine-based application and on par with IntoFocus while requiring 86% less crowd work (in number of comparable tasks).
Citations: 1
Iterative Quality Control Strategies for Expert Medical Image Labeling
Pub Date: 2021-10-04 | DOI: 10.1609/hcomp.v9i1.18940
Authors: B. Freeman, N. Hammel, Sonia Phene, Abigail E. Huang, Rebecca Ackermann, Olga Kanzheleva, Miles Hutson, Caitlin Taggart, Q. Duong, R. Sayres
Abstract: Data quality is a key concern for artificial intelligence (AI) efforts that rely on crowdsourced data collection. In the domain of medicine in particular, labeled data must meet high quality standards, or the resulting AI may perpetuate biases or lead to patient harm. What are the challenges involved in expert medical labeling? How do AI practitioners address such challenges? In this study, we interviewed members of teams developing AI for medical imaging in four subdomains (ophthalmology, radiology, pathology, and dermatology) about their quality-related practices. We describe one instance of low-quality labeling being caught by automated monitoring. The more proactive strategy, however, is to partner with experts in a collaborative, iterative process prior to the start of high-volume data collection. Best practices, including 1) co-designing labeling tasks and instructional guidelines with experts, 2) piloting and revising the tasks and guidelines, and 3) onboarding workers, enable teams to identify and address issues before they proliferate.
Citations: 8
Explaining Autonomous Decisions in Swarms of Human-on-the-Loop Small Unmanned Aerial Systems
Pub Date: 2021-09-05 | DOI: 10.1609/hcomp.v9i1.18936
Authors: Ankit Agrawal, J. Cleland-Huang
Abstract: Rapid advancements in artificial intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control. These are commonly referred to as Human-on-the-Loop (HotL) systems. Transparency of HotL systems necessitates clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous and mobile, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with information and negatively affect their understanding of the situation. Therefore, explaining the autonomous behavior of multiple robots creates a design tension that demands careful investigation. This paper examines the user interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human awareness of the situation. We conducted multiple user studies with both inexperienced and expert sUAS operators, and present our design solution and initial guidelines for designing the HotL multi-sUAS interface.
Citations: 8
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
Pub Date: 2021-04-27 | DOI: 10.1609/hcomp.v9i1.18938
Authors: David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan
Abstract: We take inspiration from the study of human explanation to inform the design and evaluation of interpretability methods in machine learning. First, we survey the literature on human explanation in philosophy, cognitive science, and the social sciences, and propose a list of design principles for machine-generated explanations that are meaningful to humans. Using the concept of weight of evidence from information theory, we develop a method for generating explanations that adhere to these principles. We show that this method can be adapted to handle high-dimensional, multi-class settings, yielding a flexible framework for generating explanations. We demonstrate that these explanations can be estimated accurately from finite samples and are robust to small perturbations of the inputs. We also evaluate our method through a qualitative user study with machine learning practitioners, observing that the resulting explanations are usable despite some participants struggling with background concepts like prior class probabilities. Finally, we conclude by surfacing design implications for interpretability tools in general.
Citations: 15
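"Weight of evidence" here is I. J. Good's information-theoretic quantity: the log-likelihood ratio of the evidence under a hypothesis versus its negation. The sketch below shows that definition and the additivity property that makes such explanations decomposable; the per-feature probabilities are made-up numbers, and the paper's method involves more than this bare formula:

```python
import math

def weight_of_evidence(p_e_given_h, p_e_given_not_h):
    """Good's weight of evidence in favor of hypothesis h given evidence e:
    woe(h : e) = log( P(e | h) / P(e | not h) ).

    Positive values favor h, negative values favor not-h. Under
    independence assumptions, per-feature weights simply add up.
    """
    return math.log(p_e_given_h / p_e_given_not_h)

# Two features, each twice as likely under h as under not-h,
# so each contributes ln(2) and the total evidence is 2*ln(2):
w1 = weight_of_evidence(0.6, 0.3)
w2 = weight_of_evidence(0.4, 0.2)
print(round(w1 + w2, 3))  # -> 1.386
```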
A Case for Soft Loss Functions
Pub Date: 2020-10-01 | DOI: 10.1609/hcomp.v8i1.7478
Authors: Alexandra Uma, Tommaso Fornaciari, Dirk Hovy, Silviu Paun, Barbara Plank, Massimo Poesio
Abstract: Recently, Peterson et al. provided evidence of the benefits of using probabilistic soft labels generated from crowd annotations for training a computer vision model, showing that such labels maximize the performance of the models over unseen data. In this paper, we generalize these results by showing that training with soft labels is an effective method for using crowd annotations in several other AI tasks besides the one studied by Peterson et al., and also when compared against state-of-the-art methods for learning from crowdsourced data.
Citations: 28
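The soft-label idea in this abstract has a simple core: instead of collapsing crowd annotations to a one-hot majority vote, normalize the annotation counts into a probability distribution and train against it with cross-entropy. A minimal sketch, assuming the common counts-to-distribution construction (the paper also compares other soft-labeling schemes):

```python
import numpy as np

def soft_labels(annotation_counts):
    """Turn per-item crowd annotation counts into probabilistic soft labels,
    e.g., 8 votes vs. 2 votes -> [0.8, 0.2]."""
    counts = np.asarray(annotation_counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def soft_cross_entropy(pred_probs, soft_targets, eps=1e-12):
    """Cross-entropy against a soft target distribution rather than a
    one-hot label -- the 'soft loss' in question."""
    return -np.mean(np.sum(soft_targets * np.log(pred_probs + eps), axis=1))

# An item the crowd split 8-2 on, and one with full agreement:
targets = soft_labels([[8, 2], [10, 0]])
preds = np.array([[0.7, 0.3], [0.95, 0.05]])
print(round(soft_cross_entropy(preds, targets), 4))
```

Unlike a hard loss, this preserves the crowd's disagreement signal: a model that hedges on genuinely ambiguous items is penalized less than one that is confidently one-sided.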