arXiv - CS - Computation and Language: Latest Publications
Propaganda to Hate: A Multimodal Analysis of Arabic Memes with Multi-Agent LLMs

Firoj Alam, Md. Rafiul Biswas, Uzair Shah, Wajdi Zaghouani, Georgios Mikros
arXiv:2409.07246 · Pub Date: 2024-09-11

Abstract: In the past decade, social media platforms have been used for information dissemination and consumption. While a major portion of the content is posted to promote citizen journalism and public awareness, some content is posted to mislead users. Among different content types such as text, images, and videos, memes (text overlaid on images) are particularly prevalent and can serve as powerful vehicles for propaganda, hate, and humor. In the current literature, there have been efforts to detect each of these phenomena in memes individually, but the study of their intersection is very limited. In this study, we explore the intersection between propaganda and hate in memes using a multi-agent LLM-based approach. We extend the propagandistic meme dataset with coarse- and fine-grained hate labels. Our findings suggest that there is an association between propaganda and hate in memes. We provide detailed experimental results that can serve as a baseline for future studies, and we will make the experimental resources publicly available to the community.

Citations: 0
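One way to picture the multi-agent labeling setup the abstract describes is a set of role-specialized LLM calls whose votes are aggregated. The concrete roles, cue strings, and majority-vote rule below are illustrative assumptions, not the paper's actual pipeline:

```python
from collections import Counter

def multi_agent_label(meme_text, agents):
    """Toy multi-agent labeling: each `agent` stands in for an LLM prompted
    with a different role; the final label is a simple majority vote."""
    votes = [agent(meme_text) for agent in agents]
    return Counter(votes).most_common(1)[0][0]

# hypothetical role-specialized agents (stand-ins for prompted LLM calls)
propaganda_agent = lambda t: "hate" if "<target group>" in t else "not_hate"
hate_agent = lambda t: "hate" if "<slur>" in t or "<target group>" in t else "not_hate"
fallback_agent = lambda t: "not_hate"

label = multi_agent_label("meme with <target group>",
                          [propaganda_agent, hate_agent, fallback_agent])
# two of three agents vote "hate", so the majority label is "hate"
```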
Cross-Refine: Improving Natural Language Explanation Generation by Learning in Tandem

Qianli Wang, Tatiana Anikina, Nils Feldhus, Simon Ostermann, Sebastian Möller, Vera Schmitt
arXiv:2409.07123 · Pub Date: 2024-09-11

Abstract: Natural language explanations (NLEs) are vital for elucidating the reasoning behind large language model (LLM) decisions. Many techniques have been developed to generate NLEs using LLMs. However, like humans, LLMs might not always produce optimal NLEs on the first attempt. Inspired by human learning processes, we introduce Cross-Refine, which employs role modeling by deploying two LLMs as generator and critic, respectively. The generator outputs a first NLE and then refines this initial explanation using feedback and suggestions provided by the critic. Cross-Refine does not require any supervised training data or additional training. We validate Cross-Refine across three NLP tasks using three state-of-the-art open-source LLMs through automatic and human evaluation. We select Self-Refine (Madaan et al., 2023) as the baseline, which only utilizes self-feedback to refine explanations. Our findings from automatic evaluation and a user study indicate that Cross-Refine outperforms Self-Refine. Meanwhile, Cross-Refine can perform effectively with less powerful LLMs, whereas Self-Refine only yields strong results with ChatGPT. Additionally, we conduct an ablation study to assess the importance of feedback and suggestions; both play an important role in refining explanations. We further evaluate Cross-Refine on a bilingual dataset in English and German.

Citations: 0
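The generator-critic loop the abstract describes can be sketched as follows. Here `generate`, `critique`, and `refine` are hypothetical stand-ins for prompted calls to the two LLMs, not the paper's actual interface:

```python
def cross_refine(task_input, generate, critique, refine, max_rounds=1):
    """Sketch of a generator-critic refinement loop: the generator drafts an
    explanation, the critic gives feedback, and the generator revises."""
    explanation = generate(task_input)
    for _ in range(max_rounds):
        feedback = critique(task_input, explanation)
        explanation = refine(task_input, explanation, feedback)
    return explanation

# toy stand-ins for the two models
gen = lambda x: f"draft explanation for {x}"
crit = lambda x, e: "add the missing premise"
ref = lambda x, e, f: e + " (revised: " + f + ")"

result = cross_refine("why A entails B", gen, crit, ref)
```

No supervised training is involved, matching the abstract's claim that the method is training-free: everything happens at inference time through the two models' exchanges.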
Understanding Knowledge Drift in LLMs through Misinformation

Alina Fastowski, Gjergji Kasneci
arXiv:2409.07085 · Pub Date: 2024-09-11

Abstract: Large Language Models (LLMs) have revolutionized numerous applications, making them an integral part of our digital ecosystem. However, their reliability becomes critical, especially when these models are exposed to misinformation. We primarily analyze the susceptibility of state-of-the-art LLMs to factual inaccuracies when they encounter false information in a QnA scenario, an issue that can lead to a phenomenon we refer to as *knowledge drift*, which significantly undermines the trustworthiness of these models. We evaluate the factuality and the uncertainty of the models' responses using entropy, perplexity, and token-probability metrics. Our experiments reveal that an LLM's uncertainty can increase by up to 56.6% when a question is answered incorrectly due to exposure to false information. At the same time, repeated exposure to the same false information can decrease the model's uncertainty again (-52.8% w.r.t. the answers on the untainted prompts), potentially manipulating the underlying model's beliefs and introducing a drift from its original knowledge. These findings provide insights into LLMs' robustness and vulnerability to adversarial inputs, paving the way for developing more reliable LLM applications across various domains. The code is available at https://github.com/afastowski/knowledge_drift.

Citations: 0
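The uncertainty metrics named in the abstract can be computed directly from per-token probabilities. The sketch below shows the standard definitions of entropy and perplexity; the paper's exact normalization and aggregation may differ:

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution;
    higher entropy means a more uncertain prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def perplexity(token_probs):
    """Perplexity of a generated answer from the probability the model
    assigned to each emitted token (geometric-mean inverse probability)."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

confident_answer = [0.9, 0.8, 0.95]   # probabilities on an untainted prompt
uncertain_answer = [0.4, 0.3, 0.5]    # hypothetical probabilities after misinformation
# the poisoned answer has higher perplexity, i.e. higher uncertainty
```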
AdaCAD: Adaptively Decoding to Balance Conflicts between Contextual and Parametric Knowledge

Han Wang, Archiki Prasad, Elias Stengel-Eskin, Mohit Bansal
arXiv:2409.07394 · Pub Date: 2024-09-11

Abstract: Knowledge conflict arises from discrepancies between information in the context of a large language model (LLM) and the knowledge stored in its parameters. This can hurt performance when using standard decoding techniques, which tend to ignore the context. Existing test-time contrastive methods seek to address this by comparing the LLM's output distribution with and without the context and adjusting the model according to the contrast between them. However, we find that these methods frequently misjudge the degree of conflict and struggle to handle instances that vary in their amount of conflict, with static methods over-adjusting when conflict is absent. We propose a fine-grained, instance-level approach called AdaCAD, which dynamically infers the weight of adjustment based on the degree of conflict, as measured by the Jensen-Shannon divergence between distributions representing contextual and parametric knowledge. Our experiments across four models on six diverse question-answering (QA) datasets and three summarization tasks demonstrate that our training-free adaptive method consistently outperforms other decoding methods on QA, with average accuracy gains of 14.21% (absolute) over a static contrastive baseline, and improves the factuality of summaries by 5.59 (AlignScore). Furthermore, our analysis shows that while decoding with contrastive baselines hurts performance when conflict is absent, AdaCAD mitigates these losses, making it more applicable to real-world datasets in which some examples have conflict and others do not.

Citations: 0
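The adaptive weighting the abstract describes can be illustrated with a small sketch: the adjustment weight is taken to be the (normalized) Jensen-Shannon divergence between the context-aware and parametric next-token distributions, plugged into a CAD-style contrastive score. The exact combination rule here is an assumption for illustration, not necessarily the paper's formula:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence, normalized by ln 2 to lie in [0, 1]."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)) / math.log(2)

def adacad_step(p_ctx, p_param):
    """One decoding step: weight the contrastive adjustment by the measured
    conflict alpha = JSD(p_ctx, p_param). Assumes strictly positive probs."""
    alpha = jsd(p_ctx, p_param)
    scores = [(1 + alpha) * math.log(pc) - alpha * math.log(pp)
              for pc, pp in zip(p_ctx, p_param)]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores], alpha

# identical distributions -> alpha = 0, output = p_ctx (no over-adjustment),
# which is exactly the failure mode of static contrastive methods this avoids
dist, alpha = adacad_step([0.7, 0.3], [0.7, 0.3])
```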
Ontology-Free General-Domain Knowledge Graph-to-Text Generation Dataset Synthesis using Large Language Model

Daehee Kim, Deokhyung Kang, Sangwon Ryu, Gary Geunbae Lee
arXiv:2409.07088 · Pub Date: 2024-09-11

Abstract: Knowledge Graph-to-Text (G2T) generation involves verbalizing structured knowledge graphs into natural language text. Recent advancements in Pretrained Language Models (PLMs) have improved G2T performance, but their effectiveness depends on datasets with precise graph-text alignment. However, the scarcity of high-quality, general-domain G2T datasets restricts progress in general-domain G2T generation research. To address this issue, we introduce the Wikipedia Ontology-Free Graph-text dataset (WikiOFGraph), a new large-scale G2T dataset generated using a novel method that leverages a Large Language Model (LLM) and Data-QuestEval. Our new dataset, which contains 5.85M general-domain graph-text pairs, offers high graph-text consistency without relying on external ontologies. Experimental results demonstrate that PLMs fine-tuned on WikiOFGraph outperform those trained on other datasets across various evaluation metrics. Our method proves to be a scalable and effective solution for generating high-quality G2T data, significantly advancing the field of G2T generation.

Citations: 0
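The generate-then-filter synthesis the abstract describes can be pictured as follows: an LLM verbalizes each set of triples, and a QuestEval-style consistency metric filters out poorly aligned pairs. The function names, the filtering threshold, and the stand-in scorers below are all illustrative assumptions, not the paper's actual pipeline:

```python
def synthesize_g2t_pairs(triple_sets, verbalize, quality_score, threshold=0.5):
    """Sketch of ontology-free G2T data synthesis: verbalize each graph
    (set of subject-predicate-object triples) into text, then keep only
    pairs whose graph-text consistency score clears the threshold."""
    dataset = []
    for triples in triple_sets:
        text = verbalize(triples)
        if quality_score(triples, text) >= threshold:
            dataset.append((triples, text))
    return dataset

# toy stand-ins for the LLM verbalizer and the consistency metric
verbalize = lambda ts: "; ".join(f"{s} {p} {o}" for s, p, o in ts)
quality_score = lambda ts, txt: 1.0 if all(s in txt for s, _, _ in ts) else 0.0

pairs = synthesize_g2t_pairs([[("Paris", "capital_of", "France")]],
                             verbalize, quality_score)
```

The filtering step is what buys the "high graph-text consistency" the abstract claims: misaligned generations are discarded rather than kept.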
Enhancing adversarial robustness in Natural Language Inference using explanations

Alexandros Koulakos, Maria Lymperaiou, Giorgos Filandrianos, Giorgos Stamou
arXiv:2409.07423 · Pub Date: 2024-09-11

Abstract: The surge of state-of-the-art Transformer-based models has undoubtedly pushed the limits of NLP model performance, excelling in a variety of tasks. We cast the spotlight on the underexplored task of Natural Language Inference (NLI), since models trained on popular, well-suited datasets are susceptible to adversarial attacks that allow subtle input interventions to mislead the model. In this work, we validate the use of natural language explanations as a model-agnostic defence strategy through extensive experimentation: merely by fine-tuning a classifier on the explanation rather than on the premise-hypothesis inputs, robustness under various adversarial attacks is achieved in comparison to explanation-free baselines. Moreover, since there is no standard strategy for testing the semantic validity of generated explanations, we study the correlation of widely used language-generation metrics with human perception, so that they can serve as a proxy toward robust NLI models. Our approach is resource-efficient and reproducible without significant computational limitations.

Citations: 0
Agent Workflow Memory

Zora Zhiruo Wang, Jiayuan Mao, Daniel Fried, Graham Neubig
arXiv:2409.07429 · Pub Date: 2024-09-11

Abstract: Despite the potential of language-model-based agents to solve real-world tasks such as web navigation, current methods still struggle with long-horizon tasks that have complex action trajectories. In contrast, humans can flexibly solve complex tasks by learning reusable task workflows from past experiences and using them to guide future actions. To build agents that can similarly benefit from this process, we introduce Agent Workflow Memory (AWM), a method for inducing commonly reused routines, i.e., workflows, and selectively providing workflows to the agent to guide subsequent generations. AWM flexibly applies to both offline and online scenarios, where agents induce workflows either from training examples beforehand or from test queries on the fly. We experiment on two major web-navigation benchmarks -- Mind2Web and WebArena -- that collectively cover 1000+ tasks from 200+ domains across travel, shopping, and social media, among others. AWM substantially improves the baseline results by 24.6% and 51.1% relative success rate on Mind2Web and WebArena, respectively, while reducing the number of steps taken to solve WebArena tasks successfully. Furthermore, online AWM robustly generalizes in cross-task, cross-website, and cross-domain evaluations, surpassing baselines by 8.9 to 14.0 absolute points as train-test task distribution gaps widen.

Citations: 0
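The induction step the abstract describes, extracting commonly reused routines from past trajectories, can be illustrated with a toy sketch. The simple frequency count over contiguous action subsequences below is only a stand-in for the paper's actual induction procedure:

```python
from collections import Counter

def induce_workflows(trajectories, min_support=2, min_len=2):
    """Toy workflow induction: find contiguous action subsequences that
    recur across past trajectories at least `min_support` times."""
    counts = Counter()
    for traj in trajectories:
        for i in range(len(traj)):
            for j in range(i + min_len, len(traj) + 1):
                counts[tuple(traj[i:j])] += 1
    return [list(w) for w, c in counts.items() if c >= min_support]

# two past web-navigation trajectories sharing a common prefix routine
past = [
    ["open_site", "search", "click_result", "checkout"],
    ["open_site", "search", "click_result", "read_reviews"],
]
workflows = induce_workflows(past)
# the shared routine open_site -> search -> click_result recurs and is kept;
# task-specific tails ("checkout", "read_reviews") occur once and are dropped
```

Induced workflows would then be injected into the agent's context to guide subsequent action generation, both offline (from training examples) and online (from test queries on the fly).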
Think Together and Work Better: Combining Humans' and LLMs' Think-Aloud Outcomes for Effective Text Evaluation

SeongYeub Chu, JongWoo Kim, MunYong Yi
arXiv:2409.07355 · Pub Date: 2024-09-11

Abstract: This study introduces InteractEval, a framework that integrates human expertise and Large Language Models (LLMs) using the Think-Aloud (TA) method to generate attributes for checklist-based text evaluation. By combining human flexibility and reasoning with LLM consistency, InteractEval outperforms traditional non-LLM-based and LLM-based baselines across four distinct dimensions: Coherence, Fluency, Consistency, and Relevance. The experiments also investigate the effectiveness of the TA method, showing that it promotes divergent thinking in both humans and LLMs, leading to the generation of a wider range of relevant attributes and enhancing text-evaluation performance. Comparative analysis reveals that humans excel at identifying attributes related to internal quality (Coherence and Fluency), while LLMs perform better on attributes related to external alignment (Consistency and Relevance). Consequently, leveraging humans and LLMs together produces the best evaluation outcomes. In other words, this study emphasizes the necessity of effectively combining humans and LLMs in an automated checklist-based text-evaluation framework. The code is available at https://github.com/BBeeChu/InteractEval.git.

Citations: 0
Learning Efficient Recursive Numeral Systems via Reinforcement Learning

Jonathan D. Thomas, Andrea Silvi, Devdatt Dubhashi, Emil Carlsson, Moa Johansson
arXiv:2409.07170 · Pub Date: 2024-09-11

Abstract: The emergence of mathematical concepts, such as number systems, is an understudied area in AI for mathematics and reasoning. It has previously been shown (Carlsson et al., 2021) that, by using reinforcement learning (RL), agents can derive simple approximate and exact-restricted numeral systems. However, it is a major challenge to show how more complex recursive numeral systems, similar to the one used in English, could arise via a simple learning mechanism such as RL. Here, we introduce an approach towards deriving a mechanistic explanation of the emergence of recursive number systems, in which we consider an RL agent that directly optimizes a lexicon under a given meta-grammar. Using a slightly modified version of the seminal meta-grammar of Hurford (1975), we demonstrate that our RL agent can effectively modify the lexicon towards Pareto-optimal configurations comparable to those observed in human numeral systems.

Citations: 0
Native vs Non-Native Language Prompting: A Comparative Analysis

Mohamed Bayan Kmainasi, Rakif Khan, Ali Ezzat Shahroor, Boushra Bendou, Maram Hasanain, Firoj Alam
arXiv:2409.07054 · Pub Date: 2024-09-11

Abstract: Large language models (LLMs) have shown remarkable abilities in different fields, including standard Natural Language Processing (NLP) tasks. To elicit knowledge from LLMs, prompts, consisting of natural language instructions, play a key role. Most open- and closed-source LLMs are trained on available labeled and unlabeled resources -- digital content such as text, images, audio, and video. Hence, these models have better knowledge of high-resource languages but struggle with low-resource languages. Since prompts play a crucial role in understanding model capabilities, the language used for prompting remains an important research question. Although there has been significant research in this area, it is still limited, and little has been explored for medium- to low-resource languages. In this study, we investigate different prompting strategies (native vs. non-native) on 11 different NLP tasks associated with 12 different Arabic datasets (9.7K data points). In total, we conducted 197 experiments involving 3 LLMs, 12 datasets, and 3 prompting strategies. Our findings suggest that, on average, the non-native prompt performs best, followed by mixed and native prompts.

Citations: 0