Transactions of the Association for Computational Linguistics: Latest Publications

Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2024-05-01. DOI: 10.1162/tacl_a_00660
Liangming Pan, Michael Stephen Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang
{"title":"Automatically Correcting Large Language Models: Surveying the Landscape of Diverse Automated Correction Strategies","authors":"Liangming Pan, Michael Stephen Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, William Yang Wang","doi":"10.1162/tacl_a_00660","DOIUrl":"https://doi.org/10.1162/tacl_a_00660","url":null,"abstract":"Abstract While large language models (LLMs) have shown remarkable effectiveness in various NLP tasks, they are still prone to issues such as hallucination, unfaithful reasoning, and toxicity. A promising approach to rectify these flaws is correcting LLMs with feedback, where the LLM itself is prompted or guided with feedback to fix problems in its own output. Techniques leveraging automated feedback—either produced by the LLM itself (self-correction) or some external system—are of particular interest as they make LLM-based solutions more practical and deployable with minimal human intervention. This paper provides an exhaustive review of the recent advances in correcting LLMs with automated feedback, categorizing them into training-time, generation-time, and post-hoc approaches. We also identify potential challenges and future directions in this emerging field.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141030487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
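The survey's taxonomy is easiest to picture in the post-hoc case: the model drafts an answer, an automated critic produces feedback, and the model revises. Below is a minimal sketch of such a generate-critique-refine loop; the `generate` and `critique` callables are hypothetical stand-ins for an LLM API and a feedback source (a self-critique prompt, an external verifier, etc.), not components defined in the paper.

```python
from typing import Callable

def post_hoc_correct(
    question: str,
    generate: Callable[[str], str],       # hypothetical LLM call: prompt -> text
    critique: Callable[[str, str], str],  # hypothetical critic: (question, answer) -> feedback ("" if acceptable)
    max_rounds: int = 3,
) -> str:
    """Generate an answer, then iteratively revise it using automated feedback."""
    answer = generate(question)
    for _ in range(max_rounds):
        feedback = critique(question, answer)
        if not feedback:  # critic is satisfied; stop early
            break
        # Re-prompt the model with its own output and the feedback attached.
        revision_prompt = (
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            f"Feedback: {feedback}\n"
            "Revise the answer to address the feedback."
        )
        answer = generate(revision_prompt)
    return answer
```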
Simultaneous Selection and Adaptation of Source Data via Four-Level Optimization
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2024-05-01. DOI: 10.1162/tacl_a_00658
Pengtao Xie, Xingchen Zhao, Xuehai He
{"title":"Simultaneous Selection and Adaptation of Source Data via Four-Level Optimization","authors":"Pengtao Xie, Xingchen Zhao, Xuehai He","doi":"10.1162/tacl_a_00658","DOIUrl":"https://doi.org/10.1162/tacl_a_00658","url":null,"abstract":"Abstract In many NLP applications, to mitigate data deficiency in a target task, source data is collected to help with target model training. Existing transfer learning methods either select a subset of source examples that are close to the target domain or try to adapt all source examples into the target domain, then use selected or adapted source examples to train the target model. These methods either incur significant information loss or bear the risk that after adaptation, source examples which are originally already in the target domain may be outside the target domain. To address the limitations of these methods, we propose a four-level optimization based framework which simultaneously selects and adapts source data. Our method can automatically identify in-domain and out-of-domain source examples and apply example-specific processing methods: selection for in-domain examples and adaptation for out-of-domain examples. Experiments on various datasets demonstrate the effectiveness of our proposed method.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141030855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
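The framework itself stacks four nested optimization levels; the sketch below only illustrates the core dichotomy it automates: score each source example for target-domain membership, keep in-domain examples as-is, and route out-of-domain ones through an adaptation step. The `domain_score` and `adapt` functions are hypothetical placeholders (e.g., a domain classifier and a rewriting model), not the paper's learned components.

```python
from typing import Callable, List

def select_and_adapt(
    source_examples: List[str],
    domain_score: Callable[[str], float],  # hypothetical: P(example is in the target domain)
    adapt: Callable[[str], str],           # hypothetical: map an example toward the target domain
    threshold: float = 0.5,
) -> List[str]:
    """Example-specific processing: selection for in-domain, adaptation for out-of-domain."""
    processed = []
    for ex in source_examples:
        if domain_score(ex) >= threshold:
            processed.append(ex)         # already in-domain: select unchanged
        else:
            processed.append(adapt(ex))  # out-of-domain: adapt instead of discarding
    return processed
```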
The Thai Discourse Treebank: Annotating and Classifying Thai Discourse Connectives
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2024-05-01. DOI: 10.1162/tacl_a_00650
Ponrawee Prasertsom, Apiwat Jaroonpol, Attapol T. Rutherford
{"title":"The Thai Discourse Treebank: Annotating and Classifying Thai Discourse Connectives","authors":"Ponrawee Prasertsom, Apiwat Jaroonpol, Attapol T. Rutherford","doi":"10.1162/tacl_a_00650","DOIUrl":"https://doi.org/10.1162/tacl_a_00650","url":null,"abstract":"Abstract Discourse analysis is a highly applicable area of natural language processing. In English and other languages, resources for discourse-based tasks are widely available. Thai, however, has hitherto lacked such resources. We present the Thai Discourse Treebank, the first, large Thai corpus annotated in the style of the Penn Discourse Treebank. The resulting corpus has over 10,000 sentences and 18,000 instances of connectives in 33 different relations. We release the corpus alongside our list of 148 potentially polysemous discourse connectives with a total of 340 form-sense pairs and their classification criteria to facilitate future research. We also develop models for connective identification and classification tasks. Our best models achieve an F1 of 0.96 in the identification task and 0.46 on the sense classification task. Our results serve as benchmarks for future models for Thai discourse tasks.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141024464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2024-04-01. DOI: 10.1162/tacl_a_00646
Dingmin Wang, Qiuyuan Huang, M. Jackson, Jianfeng Gao
{"title":"Retrieve What You Need: A Mutual Learning Framework for Open-domain Question Answering","authors":"Dingmin Wang, Qiuyuan Huang, M. Jackson, Jianfeng Gao","doi":"10.1162/tacl_a_00646","DOIUrl":"https://doi.org/10.1162/tacl_a_00646","url":null,"abstract":"Abstract An open-domain question answering (QA) system usually follows a retrieve-then-read paradigm, in which a retriever is used to retrieve relevant passages from a large corpus, and then a reader generates answers based on the retrieved passages and the original question. In this paper, we propose a simple and novel mutual learning framework to improve the performance of retrieve-then-read-style models via an intermediate module named the knowledge selector, which we train with reinforcement learning. The key benefits of our proposed intermediate module are: 1) no requirement for additional annotated question-passage pairs; 2) improvements in both retrieval and QA performance, as well as computational efficiency, compared to prior competitive retrieve-then-read models; 3) with no finetuning, improvement in the zero-shot performance of large-scale pre-trained language models, e.g., ChatGPT, by encapsulating the input with relevant knowledge without violating the input length constraint.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140788096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
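In a standard retrieve-then-read pipeline, the proposed knowledge selector sits between the retriever and the reader and keeps only the passages worth reading. The sketch below shows where such a module plugs in; `retrieve`, `select_score`, and `read` are hypothetical stand-ins, and the reinforcement-learning training of the selector described in the paper is omitted entirely.

```python
from typing import Callable, List

def answer_question(
    question: str,
    retrieve: Callable[[str, int], List[str]],  # hypothetical retriever: (query, k) -> passages
    select_score: Callable[[str, str], float],  # hypothetical selector: (question, passage) -> relevance
    read: Callable[[str, List[str]], str],      # hypothetical reader: (question, passages) -> answer
    n_retrieved: int = 100,
    n_selected: int = 5,
) -> str:
    """Retrieve broadly, select narrowly, then read: the selector filters retriever output."""
    passages = retrieve(question, n_retrieved)
    # Rank retrieved passages by the selector's score and keep only the top few,
    # so the reader's input stays within its length budget.
    ranked = sorted(passages, key=lambda p: select_score(question, p), reverse=True)
    return read(question, ranked[:n_selected])
```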
Unifying Structured Data as Graph for Data-to-Text Pre-Training
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2024-01-02. DOI: 10.1162/tacl_a_00641
Shujie Li, Liang Li, Ruiying Geng, Min Yang, Binhua Li, Guanghu Yuan, Wanwei He, Shao Yuan, Can Ma, Fei Huang, Yongbin Li
{"title":"Unifying Structured Data as Graph for Data-to-Text Pre-Training","authors":"Shujie Li, Liang Li, Ruiying Geng, Min Yang, Binhua Li, Guanghu Yuan, Wanwei He, Shao Yuan, Can Ma, Fei Huang, Yongbin Li","doi":"10.1162/tacl_a_00641","DOIUrl":"https://doi.org/10.1162/tacl_a_00641","url":null,"abstract":"Abstract Data-to-text (D2T) generation aims to transform structured data into natural language text. Data-to-text pre-training has proved to be powerful in enhancing D2T generation and yields impressive performance. However, previous pre-training methods either oversimplified structured data into a sequence without considering input structures or designed training objectives tailored for a specific data structure (e.g., table or knowledge graph). In this paper, we unify different types of structured data (i.e., table, key-value data, knowledge graph) into the graph format and cast different D2T generation tasks as graph-to-text generation. To effectively exploit the structural information of the input graph, we propose a structure-enhanced pre-training method for D2T generation by designing a structure-enhanced Transformer. Concretely, we devise a position matrix for the Transformer, encoding relative positional information of connected nodes in the input graph. In addition, we propose a new attention matrix to incorporate graph structures into the original Transformer by taking the available explicit connectivity structure into account. Extensive experiments on six benchmark datasets show the effectiveness of our model. Our source codes are available at https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/unid2t.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140515261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
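The two structural ingredients (a relative position matrix over connected nodes and a connectivity-aware attention matrix) can be illustrated with plain NumPy. The sketch below derives both from a node adjacency matrix: shortest-path distances serve as relative positions, and an additive attention bias blocks attention between node pairs that are disconnected or farther apart than a cap. This illustrates the idea only, not the paper's exact parameterization.

```python
import numpy as np
from collections import deque

def graph_attention_inputs(adj: np.ndarray, max_dist: int = 8):
    """From an adjacency matrix, build (1) a relative-position matrix of
    shortest-path distances (capped at max_dist) and (2) an additive attention
    bias that is -inf for pairs at or beyond the cap, including disconnected pairs."""
    n = adj.shape[0]
    dist = np.full((n, n), max_dist)
    for src in range(n):  # BFS from every node
        dist[src, src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(adj[u])[0]:
                if dist[src, v] > dist[src, u] + 1:
                    dist[src, v] = dist[src, u] + 1
                    queue.append(v)
    # 0 where the distance is below the cap, -inf otherwise, so softmax
    # assigns zero attention weight across disconnected components.
    bias = np.where(dist < max_dist, 0.0, -np.inf)
    return dist, bias

# Tiny example: a 3-node path graph plus one isolated node.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
dist, bias = graph_attention_inputs(adj)
print(dist)  # pairwise shortest-path distances (capped)
print(bias)  # -inf entries keep attention within connected components
```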
Metric-Free Learning Network with Dual Relations Propagation for Few-Shot Aspect Category Sentiment Analysis
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2024-01-01. DOI: 10.1162/tacl_a_00635
Shiman Zhao, Yutao Xie, Wei Chen, Tengjiao Wang, Jiahui Yao, Jiabin Zheng
{"title":"Metric-Free Learning Network with Dual Relations Propagation for Few-Shot Aspect Category Sentiment Analysis","authors":"Shiman Zhao, Yutao Xie, Wei Chen, Tengjiao Wang, Jiahui Yao, Jiabin Zheng","doi":"10.1162/tacl_a_00635","DOIUrl":"https://doi.org/10.1162/tacl_a_00635","url":null,"abstract":"Abstract Few-shot Aspect Category Sentiment Analysis (ACSA) is a crucial task for aspect-based sentiment analysis, which aims to detect sentiment polarity for a given aspect category in a sentence with limited data. However, few-shot learning methods focus on distance metrics between the query and support sets to classify queries, heavily relying on aspect distributions in the embedding space. Thus, they suffer from overlapping distributions of aspect embeddings caused by irrelevant sentiment noise among sentences with multiple sentiment aspects, leading to misclassifications. To solve the above issues, we propose a metric-free method for few-shot ACSA, which models the associated relations among the aspects of support and query sentences by Dual Relations Propagation (DRP), addressing the passive effect of overlapping distributions. Specifically, DRP uses the dual relations (similarity and diversity) among the aspects of support and query sentences to explore intra-cluster commonality and inter-cluster uniqueness for alleviating sentiment noise and enhancing aspect features. Additionally, the dual relations are transformed from support-query to class-query to promote query inference by learning class knowledge. Experiments show that we achieve convincing performance on few-shot ACSA, especially an average improvement of 2.93% accuracy and 2.10% F1 score in the 3-way 1-shot setting.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140522757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
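At its core, relation propagation classifies a query aspect from its pairwise relations to support aspects rather than from raw distances to class prototypes. Below is a minimal sketch of similarity-relation propagation; the paper's full method also models diversity relations and a class-query transformation, both omitted here, and the embeddings are assumed to come from some upstream encoder.

```python
import numpy as np

def propagate_similarity(
    support_emb: np.ndarray,     # (n_support, d) aspect embeddings from some encoder
    support_labels: np.ndarray,  # (n_support,) integer class ids
    query_emb: np.ndarray,       # (n_query, d)
    n_classes: int,
) -> np.ndarray:
    """Score each query against each class by aggregating its similarity
    relations to that class's support aspects (similarity relation only)."""
    # Cosine similarity between every query and every support aspect.
    s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    sim = q @ s.T  # (n_query, n_support)
    # Average the relations within each class to get class scores.
    scores = np.zeros((query_emb.shape[0], n_classes))
    for c in range(n_classes):
        scores[:, c] = sim[:, support_labels == c].mean(axis=1)
    return scores  # argmax over classes gives the prediction

# 3-way 1-shot toy example with random embeddings.
rng = np.random.default_rng(0)
support = rng.normal(size=(3, 16))
labels = np.array([0, 1, 2])
query = rng.normal(size=(2, 16))
print(propagate_similarity(support, labels, query, n_classes=3).argmax(axis=1))
```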
Learning More from Mixed Emotions: A Label Refinement Method for Emotion Recognition in Conversations
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2023-12-01. DOI: 10.1162/tacl_a_00614
Jintao Wen, Geng Tu, Rui Li, Dazhi Jiang, Wenhua Zhu
{"title":"Learning More from Mixed Emotions: A Label Refinement Method for Emotion Recognition in Conversations","authors":"Jintao Wen, Geng Tu, Rui Li, Dazhi Jiang, Wenhua Zhu","doi":"10.1162/tacl_a_00614","DOIUrl":"https://doi.org/10.1162/tacl_a_00614","url":null,"abstract":"Abstract One-hot labels are commonly employed as ground truth in Emotion Recognition in Conversations (ERC). However, this approach may not fully encompass all the emotions conveyed in a single utterance, leading to suboptimal performance. Regrettably, current ERC datasets lack comprehensive emotionally distributed labels. To address this issue, we propose the Emotion Label Refinement (EmoLR) method, which utilizes context- and speaker-sensitive information to infer mixed emotional labels. EmoLR comprises an Emotion Predictor (EP) module and a Label Refinement (LR) module. The EP module recognizes emotions and provides context/speaker states for the LR module. Subsequently, the LR module calculates the similarity between these states and ground-truth labels, generating a refined label distribution (RLD). The RLD captures a more comprehensive range of emotions than the original one-hot labels. These refined labels are then used for model training in place of the one-hot labels. Experimental results on three public conversational datasets demonstrate that our EmoLR achieves state-of-the-art performance.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139015511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
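The refinement step can be pictured as blending the one-hot label with a similarity-derived distribution over emotion classes, then training against the blend as a soft target. The sketch below is a generic illustration of that idea: the state vector, per-class label embeddings, and mixing weight are all assumptions for the example, not the paper's learned quantities.

```python
import numpy as np

def refine_labels(
    state: np.ndarray,      # (d,) context/speaker-sensitive utterance state
    label_emb: np.ndarray,  # (n_classes, d) one embedding per emotion class
    onehot: np.ndarray,     # (n_classes,) original ground-truth label
    mix: float = 0.3,       # illustrative weight for the similarity term
) -> np.ndarray:
    """Build a refined label distribution (RLD): a softmax over the state's
    similarity to each class embedding, blended with the one-hot label."""
    sims = label_emb @ state
    sims = sims - sims.max()                  # numerical stability
    soft = np.exp(sims) / np.exp(sims).sum()  # similarity -> distribution
    return (1 - mix) * onehot + mix * soft    # still sums to 1

def soft_ce(pred_logprobs: np.ndarray, target: np.ndarray) -> float:
    """Cross-entropy against the refined (soft) labels, replacing one-hot CE."""
    return float(-(target * pred_logprobs).sum())
```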
MissModal: Increasing Robustness to Missing Modality in Multimodal Sentiment Analysis
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2023-12-01. DOI: 10.1162/tacl_a_00628
Ronghao Lin, Haifeng Hu
{"title":"MissModal: Increasing Robustness to Missing Modality in Multimodal Sentiment Analysis","authors":"Ronghao Lin, Haifeng Hu","doi":"10.1162/tacl_a_00628","DOIUrl":"https://doi.org/10.1162/tacl_a_00628","url":null,"abstract":"Abstract When applying multimodal machine learning in downstream inference, both joint and coordinated multimodal representations rely on the complete presence of modalities as in training. However, modal-incomplete data, where certain modalities are missing, greatly reduces performance in Multimodal Sentiment Analysis (MSA) due to varying input forms and semantic information deficiencies. This limits the applicability of the predominant MSA methods in the real world, where the completeness of multimodal data is uncertain and variable. The generation-based methods attempt to generate the missing modality, yet they require complex hierarchical architecture with huge computational costs and struggle with the representation gaps across different modalities. Diversely, we propose a novel representation learning approach named MissModal, devoting to increasing robustness to missing modality in a classification approach. Specifically, we adopt constraints with geometric contrastive loss, distribution distance loss, and sentiment semantic loss to align the representations of modal-missing and modal-complete data, without impacting the sentiment inference for the complete modalities. Furthermore, we do not demand any changes in the multimodal fusion stage, highlighting the generality of our method in other multimodal learning systems. Extensive experiments demonstrate that the proposed method achieves superior performance with minimal computational costs in various missing modalities scenarios (flexibility), including severely missing modality (efficiency) on two public MSA datasets.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138988172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
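The core of the approach is aligning the representation of a modal-incomplete input with the representation the model produces for the corresponding complete input, via several complementary losses. The sketch below combines a generic contrastive term, a crude distribution-distance term, and a prediction-consistency term; the paper's specific loss forms (geometric contrastive loss, etc.) and weights differ, so treat these as stand-ins.

```python
import torch
import torch.nn.functional as F

def alignment_loss(
    z_missing: torch.Tensor,   # (B, d) representations from modal-incomplete inputs
    z_complete: torch.Tensor,  # (B, d) representations from modal-complete inputs
    p_missing: torch.Tensor,   # (B, C) sentiment logits from incomplete inputs
    p_complete: torch.Tensor,  # (B, C) sentiment logits from complete inputs
    temperature: float = 0.1,
) -> torch.Tensor:
    """Align missing-modality representations with complete ones via three
    generic terms: InfoNCE-style contrastive, batch-statistics distance, and
    KL consistency between the two sentiment predictions."""
    zm = F.normalize(z_missing, dim=1)
    zc = F.normalize(z_complete, dim=1)
    # Contrastive: each incomplete sample should match its own complete view.
    logits = zm @ zc.t() / temperature
    targets = torch.arange(zm.size(0), device=zm.device)
    contrastive = F.cross_entropy(logits, targets)
    # Distribution distance: crude proxy via batch mean/variance matching.
    dist = (zm.mean(0) - zc.mean(0)).pow(2).mean() + (zm.var(0) - zc.var(0)).pow(2).mean()
    # Semantic consistency: incomplete predictions track complete ones.
    kl = F.kl_div(F.log_softmax(p_missing, dim=1),
                  F.softmax(p_complete, dim=1), reduction="batchmean")
    return contrastive + dist + kl
```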
Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2023-12-01. DOI: 10.1162/tacl_a_00622
Biru Zhu, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, Ming Gu
{"title":"Removing Backdoors in Pre-trained Models by Regularized Continual Pre-training","authors":"Biru Zhu, Ganqu Cui, Yangyi Chen, Yujia Qin, Lifan Yuan, Chong Fu, Yangdong Deng, Zhiyuan Liu, Maosong Sun, Ming Gu","doi":"10.1162/tacl_a_00622","DOIUrl":"https://doi.org/10.1162/tacl_a_00622","url":null,"abstract":"Abstract Recent research has revealed that pre-trained models (PTMs) are vulnerable to backdoor attacks before the fine-tuning stage. The attackers can implant transferable task-agnostic backdoors in PTMs, and control model outputs on any downstream task, which poses severe security threats to all downstream applications. Existing backdoor-removal defenses focus on task-specific classification models and they are not suitable for defending PTMs against task-agnostic backdoor attacks. To this end, we propose the first task-agnostic backdoor removal method for PTMs. Based on the selective activation phenomenon in backdoored PTMs, we design a simple and effective backdoor eraser, which continually pre-trains the backdoored PTMs with a regularization term in an end-to-end approach. The regularization term removes backdoor functionalities from PTMs while the continual pre-training maintains the normal functionalities of PTMs. We conduct extensive experiments on pre-trained models across different modalities and architectures. The experimental results show that our method can effectively remove backdoors inside PTMs and preserve benign functionalities of PTMs with a few downstream-task-irrelevant auxiliary data, e.g., unlabeled plain texts. The average attack success rate on three downstream datasets is reduced from 99.88% to 8.10% after our defense on the backdoored BERT. The codes are publicly available at https://github.com/thunlp/RECIPE.","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139013302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
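The defense reduces to a modified continual pre-training objective: keep the usual pre-training loss so normal capabilities survive, and add a regularization term that suppresses the abnormal activation patterns backdoors rely on. The sketch below shows one training step under that objective; `pretrain_loss_fn` and the L1 activation penalty are generic illustrations standing in for the paper's specific loss and regularizer.

```python
import torch

def regularized_continual_step(
    model: torch.nn.Module,
    batch,                    # a batch of downstream-task-irrelevant plain text
    pretrain_loss_fn,         # hypothetical: (model, batch) -> (loss, hidden activations)
    optimizer: torch.optim.Optimizer,
    reg_weight: float = 0.1,  # illustrative trade-off coefficient
) -> float:
    """One step of continual pre-training with a regularizer: the pre-training
    term preserves normal functionality while the penalty term (here a generic
    L1 activation-sparsity penalty) discourages backdoor-related activations."""
    optimizer.zero_grad()
    loss_pt, hidden = pretrain_loss_fn(model, batch)  # e.g., an MLM loss plus activations
    reg = hidden.abs().mean()                         # generic activation penalty
    loss = loss_pt + reg_weight * reg
    loss.backward()
    optimizer.step()
    return loss.item()
```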
General then Personal: Decoupling and Pre-training for Personalized Headline Generation
IF 10.9, CAS Q1, Computer Science
Transactions of the Association for Computational Linguistics. Pub Date: 2023-12-01. DOI: 10.1162/tacl_a_00621
Yun-Zhu Song, Yi-Syuan Chen, Lu Wang, Hong-Han Shuai
{"title":"General then Personal: Decoupling and Pre-training for Personalized Headline Generation","authors":"Yun-Zhu Song, Yi-Syuan Chen, Lu Wang, Hong-Han Shuai","doi":"10.1162/tacl_a_00621","DOIUrl":"https://doi.org/10.1162/tacl_a_00621","url":null,"abstract":"Abstract Personalized Headline Generation aims to generate unique headlines tailored to users’ browsing history. In this task, understanding user preferences from click history and incorporating them into headline generation pose challenges. Existing approaches typically rely on predefined styles as control codes, but personal style lacks explicit definition or enumeration, making it difficult to leverage traditional techniques. To tackle these challenges, we propose General Then Personal (GTP), a novel framework comprising user modeling, headline generation, and customization. We train the framework using tailored designs that emphasize two central ideas: (a) task decoupling and (b) model pre-training. With the decoupling mechanism separating the task into generation and customization, two mechanisms, i.e., information self-boosting and mask user modeling, are further introduced to facilitate the training and text control. Additionally, we introduce a new evaluation metric to address existing limitations. Extensive experiments conducted on the PENS dataset, considering both zero-shot and few-shot scenarios, demonstrate that GTP outperforms state-of-the-art methods. Furthermore, ablation studies and analysis emphasize the significance of decoupling and pre-training. Finally, the human evaluation validates the effectiveness of our approaches.1","PeriodicalId":33559,"journal":{"name":"Transactions of the Association for Computational Linguistics","volume":null,"pages":null},"PeriodicalIF":10.9,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138985900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
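The decoupling idea is a two-stage pipeline: first produce a general headline from the article alone, then rewrite it conditioned on a user representation learned from click history. A minimal sketch follows; `generate_general`, `encode_user`, and `personalize` are hypothetical components standing in for the paper's trained modules, and its self-boosting and mask-user-modeling mechanisms are not shown.

```python
from typing import Callable, List

def personalized_headline(
    article: str,
    click_history: List[str],
    generate_general: Callable[[str], str],        # hypothetical: article -> generic headline
    encode_user: Callable[[List[str]], list],      # hypothetical: click history -> user vector
    personalize: Callable[[str, str, list], str],  # hypothetical: (article, headline, user) -> headline
) -> str:
    """General then personal: a generic headline is produced first, then
    customized to the user, so generation and personalization stay decoupled."""
    general = generate_general(article)    # stage 1: user-independent generation
    user_vec = encode_user(click_history)  # user modeling from browsing history
    return personalize(article, general, user_vec)  # stage 2: customization
```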