Computer Speech and Language: Latest Articles

Augmentative and alternative speech communication (AASC) aid for people with dysarthria
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2025-01-22 · DOI: 10.1016/j.csl.2025.101777
Authors: Mariya Celin T.A., Vijayalakshmi P., Nagarajan T., Mrinalini K.
Abstract: Speech assistive aids are designed to enhance the intelligibility of speech, particularly for individuals with speech impairments such as dysarthria, by utilizing speech recognition and speech synthesis systems. The development of these devices promotes independence and employability for dysarthric individuals, facilitating natural communication. However, the availability of speech assistive aids is limited by several challenges: the need to train a dysarthric speech recognition system tailored to the errors of dysarthric speakers, the portability required for use by dysarthric individuals with motor disorders, the need to sustain an adequate speech communication rate, and the cost of developing such aids. To address this, the current work develops a portable, affordable, and personalized augmentative and alternative speech communication aid tailored to each dysarthric speaker's needs. The dysarthric speech recognition system used in this aid is trained with a transfer learning approach, in which the source model is trained on normal speakers' speech data and the target model on augmented dysarthric speech data. Data augmentation for the dysarthric speech is performed using a virtual microphone and multi-resolution-based feature extraction approach (VM-MRFE), previously proposed by the authors, to increase the quantity of target speech data and improve recognition accuracy. The recognized text is synthesized into intelligible speech using a hidden Markov model (HMM)-based text-to-speech synthesis system. To enhance accessibility, the recognizer and synthesizer are ported onto the Raspberry Pi platform, along with a collar microphone and loudspeaker. The real-time performance of the aid with dysarthric users is examined: recognition is achieved in under 3 s and synthesis in 1.4 s, giving a speech delivery time of roughly 4.4 s.
Citations: 0
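The transfer-learning recipe described in the abstract above (a source acoustic model trained on typical speech, then adapted on augmented dysarthric data) can be sketched as follows. This is a minimal illustration under assumed shapes, hyperparameters, and a CTC objective; it is not the authors' implementation, and the commented checkpoint path is hypothetical.

```python
# Illustrative transfer-learning recipe: start from a source acoustic model trained
# on typical speech, freeze its lower layers, and fine-tune the rest on (augmented)
# dysarthric data. All shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class TinyAcousticModel(nn.Module):
    def __init__(self, n_feats=40, n_tokens=30):
        super().__init__()
        self.encoder = nn.LSTM(n_feats, 128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, n_tokens)      # per-frame token logits

    def forward(self, feats):
        out, _ = self.encoder(feats)
        return self.head(out)

source_model = TinyAcousticModel()
# source_model.load_state_dict(torch.load("source_normal_speech.pt"))  # hypothetical checkpoint

# Freeze the encoder (knowledge from normal speech) and adapt only the output head.
for p in source_model.encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in source_model.parameters() if p.requires_grad], lr=1e-4)
ctc_loss = nn.CTCLoss(blank=0)

# Stand-in batch for augmented dysarthric data (VM-MRFE features in the paper).
feats = torch.randn(4, 200, 40)                          # (batch, frames, features)
targets = torch.randint(1, 30, (4, 20))                  # token ids
feat_lens = torch.full((4,), 200, dtype=torch.long)
target_lens = torch.full((4,), 20, dtype=torch.long)

logits = source_model(feats).log_softmax(-1).transpose(0, 1)  # (frames, batch, tokens)
loss = ctc_loss(logits, targets, feat_lens, target_lens)
loss.backward()
optimizer.step()
```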
Generative aspect-based sentiment analysis with a grid tagging matching auxiliary task
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2025-01-09 · DOI: 10.1016/j.csl.2025.101776
Authors: Linan Zhu, Xiaolei Guo, Zhechao Zhu, Yifei Xu, Zehai Zhou, Xiangfan Chen, Xiangjie Kong
Abstract: Aspect-based sentiment analysis has gained significant attention in recent years. In particular, the use of generative models to address the Aspect-Category-Opinion-Sentiment (ACOS) quadruple extraction task has emerged as a prominent research focus. However, existing studies have not thoroughly explored the inherent connections among sentiment elements, which could enhance the extraction capabilities of the model. To this end, we propose a novel Generative Model with a Grid Tagging Matching auxiliary task, dubbed GM-GTM. First, to fully harness the logical interactions among sentiment elements, a new output template is designed for the generative extraction task that conforms to causality and human intuition. In addition, we introduce a grid tagging matching auxiliary task. Specifically, a grid tagging matrix is designed in which various tags represent different relationships among sentiment elements. In this way, a comprehensive understanding of the relationships among sentiment elements is obtained. Consequently, the model's reasoning ability is enhanced, enabling it to make more informed inferences about new sentiment elements based on existing ones. Extensive experimental results on ACOS datasets demonstrate the superior performance of our model compared with existing state-of-the-art methods.
Citations: 0
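A grid tagging matrix of the kind the abstract above mentions can be illustrated as a token-by-token grid whose cells encode relationships among sentiment elements. The tag inventory and span pairing below are assumptions for illustration, not the paper's exact scheme.

```python
# Illustrative grid-tagging matrix: cell (i, j) carries the relationship between
# tokens i and j (inside an aspect span, inside an opinion span, or an
# aspect-opinion pair). The tag set here is an assumed toy inventory.
import numpy as np

TAGS = {"NONE": 0, "ASPECT": 1, "OPINION": 2, "PAIR": 3}

def build_grid(n_tokens, aspect_span, opinion_span):
    grid = np.full((n_tokens, n_tokens), TAGS["NONE"], dtype=np.int64)
    a0, a1 = aspect_span
    o0, o1 = opinion_span
    grid[a0:a1, a0:a1] = TAGS["ASPECT"]     # tokens inside the aspect span
    grid[o0:o1, o0:o1] = TAGS["OPINION"]    # tokens inside the opinion span
    grid[a0:a1, o0:o1] = TAGS["PAIR"]       # aspect token paired with opinion token
    return grid

# "The pasta was absolutely delicious": aspect "pasta" (token 1), opinion "delicious" (token 4).
grid = build_grid(5, aspect_span=(1, 2), opinion_span=(4, 5))
print(grid)
```

Training against such a grid as an auxiliary objective gives the generative model an explicit view of element-to-element relations alongside the sequence it generates.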
EMGVox-GAN: A transformative approach to EMG-based speech synthesis, enhancing clarity, and efficiency via extensive dataset utilization
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-12-24 · DOI: 10.1016/j.csl.2024.101754
Authors: Sara Sualiheen, Deok-Hwan Kim
Abstract: This study introduces EMGVox-GAN, a synthesis approach that combines electromyography (EMG) signals with deep learning techniques to produce speech, departing from conventional vocoder technology. EMGVox-GAN is built around a Scale-Adaptive-Frequency-Enhanced Discriminator (SAFE-Disc) composed of three sub-discriminators, each specializing in signals at a different frequency scale. Each sub-discriminator includes two downsampling blocks, strengthening its ability to discriminate between real and generated audio. The proposed EMGVox-GAN was validated on a speech dataset (LJSpeech) and three EMG datasets (Silent Speech, CSL-EMG-Array, and XVoice_Speech_EMG). Speech quality improved significantly, reaching a Mean Opinion Score (MOS) of 4.14 on the largest dataset, and the Word Error Rate (WER), as defined in the state-of-the-art work, was reduced from 47% to 36%, underscoring the improved clarity of the synthesized speech. By using silent EMG signals to generate intelligible, high-quality speech, the approach marks a substantial shift in speech synthesis, and its integration of EMG signals opens new possibilities for assistive technology, human-computer interaction, and other domains where clear and efficient speech synthesis is crucial.
Citations: 0
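The multi-scale discriminator idea behind SAFE-Disc (several sub-discriminators, each judging the signal at a different frequency scale) can be sketched as below. Channel sizes, kernel widths, and the pooling scheme are assumptions, not the paper's architecture.

```python
# Sketch of a multi-scale waveform discriminator: each sub-discriminator sees a
# progressively downsampled (coarser-frequency) view of the same audio.
import torch
import torch.nn as nn

class SubDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv1d(1, 32, 15, stride=4, padding=7), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv1d(32, 64, 15, stride=4, padding=7), nn.LeakyReLU(0.2))
        self.out = nn.Conv1d(64, 1, 3, padding=1)   # per-frame real/fake score

    def forward(self, wav):
        return self.out(self.down2(self.down1(wav)))

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, n_scales=3):
        super().__init__()
        self.subs = nn.ModuleList([SubDiscriminator() for _ in range(n_scales)])
        self.pool = nn.AvgPool1d(4, stride=2, padding=1)  # halves the sampling rate

    def forward(self, wav):
        scores = []
        for sub in self.subs:
            scores.append(sub(wav))
            wav = self.pool(wav)     # next sub-discriminator sees a coarser signal
        return scores

disc = MultiScaleDiscriminator()
fake_wav = torch.randn(2, 1, 16000)  # (batch, channel, samples)
print([s.shape for s in disc(fake_wav)])
```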
Introduction: Explainability, AI literacy, and language development
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-12-13 · DOI: 10.1016/j.csl.2024.101766
Authors: Gyu-Ho Shin, Natalie Parde
Citations: 0
Knowledge-enhanced meta-prompt for few-shot relation extraction
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-12-13 · DOI: 10.1016/j.csl.2024.101762
Authors: Jinman Cui, Fu Xu, Xinyang Wang, Yakun Li, Xiaolong Qu, Lei Yao, Dongmei Li
Abstract: Few-shot relation extraction (RE) aims to identify and extract the relation between head and tail entities in a given context using only a few annotated instances. Recent studies have shown that prompt-tuning models can improve few-shot learning by bridging the gap between pre-training and downstream tasks. The core idea of prompt-tuning is to use prompt templates to wrap the original input text into a cloze question and map the output words to corresponding labels via a language verbalizer for prediction. However, designing an appropriate prompt template and language verbalizer for the RE task is cumbersome and time-consuming. Furthermore, the rich prior knowledge and semantic information contained in the relations, which could be used to construct prompts, are easily ignored. To address these issues, we propose a novel Knowledge-enhanced Meta-Prompt (Know-MP) framework, which improves meta-learning capabilities by introducing external knowledge to construct prompts. Specifically, we first inject the entity types of head and tail entities into prompt templates, thereby encoding the prior knowledge contained in the relations into prompt-tuning. Then, we expand rich label words for each relation type from its relation name to construct a knowledge-enhanced soft verbalizer. Finally, we adopt a meta-learning algorithm based on attention mechanisms to mitigate the impact of noisy data on few-shot RE, accurately predict the relations of query instances, and optimize the parameters of the meta-learner. Experiments on FewRel 1.0 and FewRel 2.0, two benchmark datasets for few-shot RE, demonstrate the effectiveness of Know-MP.
Citations: 0
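The prompt-construction idea in the abstract above (injecting head/tail entity types into a cloze template and mapping relations to expanded label words via a verbalizer) can be illustrated as follows. The template wording, entity types, and label words are hypothetical examples, not the paper's exact design.

```python
# Illustrative knowledge-enhanced prompt: entity types are injected into a cloze
# template, and each relation maps to several label words (a simple "verbalizer").
def build_prompt(sentence, head, head_type, tail, tail_type):
    return f"{sentence} The {head_type} {head} is [MASK] of the {tail_type} {tail}."

VERBALIZER = {
    "founder_of": ["founder", "creator", "originator"],
    "member_of": ["member", "part", "employee"],
}

prompt = build_prompt(
    "Steve Jobs started Apple in 1976.",
    head="Steve Jobs", head_type="person",
    tail="Apple", tail_type="organization",
)
print(prompt)
# A masked language model would score each label word at the [MASK] position;
# the relation whose (aggregated) label-word scores are highest is predicted.
```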
Affective knowledge assisted bi-directional learning for Multi-modal Aspect-based Sentiment Analysis
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-12-10 · DOI: 10.1016/j.csl.2024.101755
Authors: Xuefeng Shi, Ming Yang, Min Hu, Fuji Ren, Xin Kang, Weiping Ding
Abstract: As a fine-grained task within Multi-modal Sentiment Analysis (MSA), Multi-modal Aspect-based Sentiment Analysis (MABSA) is challenging, has attracted considerable attention, and has seen prominent progress in recent years. However, effective strategies for feature alignment between modalities are still lacking and need further exploration. This paper therefore proposes a novel MABSA method to enhance sentiment feature alignment, Affective Knowledge-Assisted Bi-directional Learning (AKABL) networks, which learn sentiment information from different modalities through multiple perspectives. Specifically, AKABL obtains textual semantic and syntactic features by encoding the text modality with the pre-trained language model BERT and the syntax parser spaCy, respectively. To strengthen the expression of sentiment information in the syntactic graph, the affective knowledge base SenticNet is introduced to help AKABL comprehend textual sentiment information. On the image side, the pre-trained Vision Transformer (ViT) is employed to extract the necessary image features. To integrate the obtained features, a Single Modality GCN (SMGCN) module produces the joint textual semantic and syntactic representation, and a Double Modalities GCN (DMGCN) module is devised to extract sentiment information from both modalities simultaneously. To bridge the alignment gap between text and image features, a novel alignment strategy builds the relationship between the two representations, measuring their difference with the Jensen-Shannon divergence from bi-directional perspectives. Cross-attention and cosine-distance-based similarity are also applied in AKABL. Extensive experiments on two widely used public benchmark datasets validate the method: AKABL clearly improves task performance, outperforming the strongest baseline by 0.47% and 0.72% accuracy on the two datasets.
Citations: 0
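The bi-directional Jensen-Shannon alignment term mentioned in the abstract above can be written compactly. The sketch below treats the softmax over each modality's pooled features as a distribution, which is an illustrative assumption rather than the paper's exact formulation.

```python
# Minimal Jensen-Shannon divergence alignment term between text and image
# representations; shapes and the softmax-as-distribution step are assumptions.
import torch
import torch.nn.functional as F

def js_divergence(p_logits, q_logits):
    p = F.softmax(p_logits, dim=-1)
    q = F.softmax(q_logits, dim=-1)
    m = 0.5 * (p + q)
    kl_pm = F.kl_div(m.log(), p, reduction="batchmean")   # KL(p || m)
    kl_qm = F.kl_div(m.log(), q, reduction="batchmean")   # KL(q || m)
    return 0.5 * (kl_pm + kl_qm)                          # symmetric, bi-directional

text_repr = torch.randn(8, 256)    # pooled BERT/GCN features (assumed shape)
image_repr = torch.randn(8, 256)   # pooled ViT features (assumed shape)
alignment_loss = js_divergence(text_repr, image_repr)
print(alignment_loss.item())
```

Because the divergence is symmetric in the two modalities, minimizing it pulls the text and image representations toward each other from both directions.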
Modeling correlated causal-effect structure with a hypergraph for document-level event causality identification
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-11-19 · DOI: 10.1016/j.csl.2024.101752
Authors: Wei Xiang, Cheng Liu, Bang Wang
Abstract: Document-level event causality identification (ECI) aims to detect causal relations between event mentions in a document. Existing approaches to document-level ECI detect the causal relation for each pair of event mentions independently, ignoring the latent correlated cause-effect structure in a document, i.e., one cause (effect) with multiple effects (causes). We argue that identifying the causal relation of one event pair may facilitate causality identification for other event pairs. In light of this, we propose to model the correlated causal-effect structure with a hypergraph and to jointly identify multiple causal relations sharing the same cause (effect). In particular, we propose an event-hypergraph neural encoding model, called EHNEM, for document-level event causality identification. In EHNEM, we first initialize event mentions' embeddings via a pre-trained language model and obtain the potential causal relation of each event pair via a multilayer perceptron. To capture causal correlations, we construct a hypergraph by integrating the potential causal relations involving the same event as a hyperedge. On the constructed event hypergraph, we use a hypergraph convolutional network to learn the representation of each event node for final causality identification. Experiments on both the EventStoryLine corpus and the English-MECI corpus show that EHNEM significantly outperforms state-of-the-art algorithms.
Citations: 0
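The hyperedge construction the abstract above describes (grouping all candidate causal pairs that share a cause or an effect) can be illustrated with a toy document; the events, candidate pairs, and the incidence-matrix representation below are assumptions for illustration.

```python
# Illustrative hyperedge construction: candidate causal pairs sharing the same
# cause (or effect) event are merged into one hyperedge, producing an incidence
# matrix that a hypergraph convolution could consume.
from collections import defaultdict
import numpy as np

events = ["earthquake", "collapse", "injuries", "evacuation"]
candidate_pairs = [("earthquake", "collapse"),
                   ("earthquake", "injuries"),
                   ("earthquake", "evacuation"),
                   ("collapse", "injuries")]

hyperedges = defaultdict(set)
for cause, effect in candidate_pairs:
    hyperedges[("cause", cause)].update({cause, effect})    # one cause, many effects
    hyperedges[("effect", effect)].update({cause, effect})  # one effect, many causes

# Incidence matrix H: rows = events, columns = hyperedges, H[v, e] = 1 if event v is in edge e.
edge_list = list(hyperedges.values())
H = np.zeros((len(events), len(edge_list)), dtype=np.int64)
for j, edge in enumerate(edge_list):
    for v in edge:
        H[events.index(v), j] = 1
print(H)
```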
You Are What You Write: Author re-identification privacy attacks in the era of pre-trained language models
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-11-16 · DOI: 10.1016/j.csl.2024.101746
Authors: Richard Plant, Valerio Giuffrida, Dimitra Gkatzia
Abstract: The widespread use of pre-trained language models has revolutionised knowledge transfer in natural language processing tasks. However, there is a concern regarding potential breaches of user trust due to the risk of re-identification attacks, where malicious users could extract Personally Identifiable Information (PII) from other datasets. To assess the extent of extractable personal information in popular pre-trained models, we conduct the first wide-coverage evaluation and comparison of state-of-the-art privacy-preserving algorithms on a large multilingual dataset for sentiment analysis annotated with demographic information (including location, age, and gender). Our results suggest a link between model complexity, pre-training data volume, and the efficacy of privacy-preserving embeddings. We find that privacy-preserving methods are more effective when applied to larger and more complex models, with improvements exceeding 20% over non-private baselines. Additionally, we observe that local differential privacy imposes serious performance penalties of roughly 20% in our test setting, which can be mitigated using hybrid or metric-DP techniques.
Citations: 0
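As a rough illustration of the local differential privacy setting whose utility cost the abstract above reports, the sketch below applies a Laplace mechanism to a sentence embedding. The clipping bound, epsilon, and sensitivity accounting are illustrative assumptions, not the paper's protocol.

```python
# Toy local-DP mechanism on an embedding: clip to bound L1 sensitivity, then add
# Laplace noise calibrated to that sensitivity. Parameters are illustrative only.
import numpy as np

def privatize_embedding(vec, epsilon=5.0, clip_norm=1.0, rng=np.random.default_rng(0)):
    norm = np.linalg.norm(vec, ord=1)
    if norm > clip_norm:
        vec = vec * (clip_norm / norm)        # clip so any two inputs differ by at most 2*clip_norm in L1
    sensitivity = 2.0 * clip_norm
    noise = rng.laplace(scale=sensitivity / epsilon, size=vec.shape)
    return vec + noise

embedding = np.random.default_rng(1).normal(size=768)   # e.g. a pooled sentence vector
private = privatize_embedding(embedding)
print(np.linalg.norm(private - embedding, ord=1))        # utility cost grows as epsilon shrinks
```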
End-to-End Speech-to-Text Translation: A Survey
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-11-14 · DOI: 10.1016/j.csl.2024.101751
Authors: Nivedita Sethiya, Chandresh Kumar Maurya
Abstract: Speech-to-Text (ST) translation is the task of converting speech signals in one language to text in another language. It finds application in domains such as hands-free communication, dictation, video lecture transcription, and translation, to name a few. Automatic Speech Recognition (ASR) and Machine Translation (MT) models play crucial roles in traditional ST translation, enabling the conversion of spoken language in its original form to written text and facilitating cross-lingual communication: ASR recognizes the spoken words, and MT translates the transcribed text into the target language. Such cascaded systems suffer from error propagation as well as high resource and training costs. As a result, researchers have been exploring end-to-end (E2E) models for ST translation. However, to our knowledge, there is no comprehensive review of existing work on E2E ST. The present survey therefore discusses work in this direction, providing a comprehensive review of the models, metrics, and datasets used for ST tasks, together with challenges and future research directions. We believe this review will be helpful to researchers working on the various applications of ST models.
Citations: 0
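The contrast the abstract above draws between cascaded and end-to-end ST can be shown schematically: in a cascade, whatever the ASR stage mis-recognizes is passed on to MT, whereas an E2E model maps audio to target-language text directly. The functions below are toy stand-ins, not real ASR or MT systems.

```python
# Schematic only: toy stand-ins showing why cascade errors propagate.
def toy_asr(audio):
    # Pretend the recognizer confuses "ship" with "sheep".
    return audio["spoken_text"].replace("ship", "sheep")

def toy_mt(text):
    # Word-by-word lookup into a tiny English->German table (illustrative).
    table = {"the": "das", "sheep": "Schaf", "ship": "Schiff", "sails": "segelt"}
    return " ".join(table.get(w, w) for w in text.split())

def cascade_st(audio):
    return toy_mt(toy_asr(audio))      # the ASR mistake becomes an MT mistake

def e2e_st(audio):
    # An E2E model maps audio features straight to target text, with no
    # intermediate transcript to corrupt. (Placeholder output.)
    return "das Schiff segelt"

audio = {"spoken_text": "the ship sails"}
print(cascade_st(audio))   # "das Schaf segelt" - the ASR error propagated
print(e2e_st(audio))       # "das Schiff segelt"
```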
Corpus and unsupervised benchmark: Towards Tagalog grammatical error correction
IF 3.1 · CAS Tier 3 · Computer Science
Computer Speech and Language · Pub Date: 2024-11-14 · DOI: 10.1016/j.csl.2024.101750
Authors: Nankai Lin, Hongbin Zhang, Menglan Shen, Yu Wang, Shengyi Jiang, Aimin Yang
Abstract: Grammatical error correction (GEC) is a challenging task for natural language processing techniques. Many efforts to address GEC have been made for high-resource languages such as English or Chinese, but limited work has been done for low-resource languages because of the lack of large annotated corpora. For low-resource languages, current unsupervised GEC based on language-model scoring performs well, yet the use of pre-trained language models in this setting remains underexplored. This study proposes a BERT-based unsupervised GEC framework that primarily addresses word-level errors, treating GEC as a multi-class classification task. The framework contains three modules: a data-flow construction module, a sentence perplexity scoring module, and an error detecting and correcting module. We propose a novel pseudo-perplexity scoring method to evaluate a sentence's probable correctness and construct a Tagalog corpus for Tagalog GEC research. The framework obtains competitive performance on the self-constructed Tagalog corpus and the open-source Indonesian corpus, demonstrating that it is complementary to baseline methods for low-resource GEC tasks. Our corpus is available at https://github.com/GKLMIP/TagalogGEC.
Citations: 0
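Pseudo-perplexity scoring of the general kind the abstract above builds on can be sketched with a masked language model: mask each token in turn, score the true token, and average. This is the standard pseudo-log-likelihood formulation, not the paper's novel variant, and the English BERT checkpoint is an illustrative assumption (the paper targets Tagalog).

```python
# Minimal masked-LM pseudo-perplexity scorer; lower scores indicate more fluent
# (more probably correct) sentences. Model choice is an assumption for illustration.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_perplexity(sentence):
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    nlls = []
    for i in range(1, len(ids) - 1):              # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        log_probs = logits.log_softmax(-1)
        nlls.append(-log_probs[ids[i]].item())    # negative log-likelihood of the true token
    return float(torch.tensor(nlls).mean().exp())

print(pseudo_perplexity("She go to school yesterday."))
print(pseudo_perplexity("She went to school yesterday."))
```

In an unsupervised GEC loop, candidate corrections for a flagged word would be ranked by how much they lower this score.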