Natural Language Processing Journal: Latest Articles

Adapting language generation to dialogue environments and users for task-oriented dialogue systems
Natural Language Processing Journal Pub Date : 2025-05-07 DOI: 10.1016/j.nlp.2025.100153
Atsumoto Ohashi, Ryuichiro Higashinaka
{"title":"Adapting language generation to dialogue environments and users for task-oriented dialogue systems","authors":"Atsumoto Ohashi,&nbsp;Ryuichiro Higashinaka","doi":"10.1016/j.nlp.2025.100153","DOIUrl":"10.1016/j.nlp.2025.100153","url":null,"abstract":"<div><div>When a natural language generation (NLG) component is implemented in a real-world task-oriented dialogue system, it is necessary to generate not only natural utterances as learned on training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and the user (e.g., users with low levels of understanding ability). Inspired by recent advances in reinforcement learning (RL) for language generation tasks, we propose ANTOR, a method for <strong>A</strong>daptive <strong>N</strong>atural language generation for <strong>T</strong>ask-<strong>O</strong>riented dialogue via <strong>R</strong>einforcement learning. In ANTOR, a natural language understanding (NLU) module, which corresponds to the user’s understanding of system utterances, is incorporated into the objective function of RL. If the NLG’s intentions are correctly conveyed to the NLU, the NLG is given a positive reward. We conducted experiments on the two major task-oriented dialogue datasets, MultiWOZ and Schema-Guided Dialogue, and we confirmed that ANTOR could generate adaptive utterances against speech recognition errors and the different vocabulary levels of users. Further analysis revealed that ANTOR adapts to noisy environments and users with different vocabulary levels by prioritizing words that are less likely to cause speech recognition errors and by using words that match the user’s vocabulary level.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100153"},"PeriodicalIF":0.0,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143928216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
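A minimal sketch of the reward idea described in the abstract above: the generator is rewarded when a (simulated) NLU recovers the dialogue acts it was asked to express. The dialogue-act representation, the F1-style reward, and the toy example are assumptions for illustration; they stand in for the paper's actual NLU module and RL training setup.

```python
# Sketch of an ANTOR-style reward: the NLG earns reward when a (simulated) NLU
# recovers the dialogue acts the NLG was asked to express.
# `target_acts` would come from the dialogue policy and `recovered_acts` from an
# NLU run on the generated utterance in a real system (both are assumptions here).
from typing import Set, Tuple

DialogueAct = Tuple[str, str, str]  # (intent, slot, value)

def reward(target_acts: Set[DialogueAct], recovered_acts: Set[DialogueAct]) -> float:
    """F1 over dialogue acts: 1.0 when the NLU recovers exactly the intended acts."""
    if not target_acts and not recovered_acts:
        return 1.0
    tp = len(target_acts & recovered_acts)
    precision = tp / len(recovered_acts) if recovered_acts else 0.0
    recall = tp / len(target_acts) if target_acts else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: the NLU missed one slot, so the generator gets a partial reward
# that would be used as the RL return for this utterance.
target = {("inform", "restaurant-food", "italian"), ("inform", "restaurant-area", "centre")}
recovered = {("inform", "restaurant-food", "italian")}
print(reward(target, recovered))  # ~0.667
```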
A novel Data Extraction Framework Using Natural Language Processing (DEFNLP) techniques
Natural Language Processing Journal Pub Date : 2025-05-07 DOI: 10.1016/j.nlp.2025.100149
Tayyaba Hussain, Muhammad Usman Akram, Anum Abdul Salam
{"title":"A novel Data Extraction Framework Using Natural Language Processing (DEFNLP) techniques","authors":"Tayyaba Hussain,&nbsp;Muhammad Usman Akram,&nbsp;Anum Abdul Salam","doi":"10.1016/j.nlp.2025.100149","DOIUrl":"10.1016/j.nlp.2025.100149","url":null,"abstract":"<div><div>Evidence through data is critical if government has to address threats faced by the nation, such as pandemics or climate change. Yet several facts about data necessary to inform evidence and science are locked inside publications. We used scientific literature dataset, Coleridge Initiative — Show US the Data, to discover how the data can be used for the public good. In this research, we demonstrate a general Data Extraction Framework Using Natural Language Processing (DEFNLP) Techniques which challenge data scientists to show how publicly funded data has been used to serve science and society. The proposed framework uses NLP libraries and techniques like SpaCy and NER respectively and different huggingface Question Answering (QA) models to predict the datasets used in publications. DEFNLP findings can assist the government in immediate decisions making, accountability, transparent public investments, economic and public health benefits. Until now such an issue having large dataset which belongs to numerous research areas has not been addressed. This approach is domain independent and therefore can be applied to all kind of case studies and scenarios which require data extraction. Our methodology sets the state-of-the-art on Coleridge Initiative dataset, reaching the highest score of 0.554 using salti bert QA model with the less runtime i.e. 417.4 and output of 819 bytes than other QA models e.g., Longformer (runtime: 2710.2, output: 1780 bytes) and BigBird (runtime: 839.4, output: 177020 bytes) with 0.444 and 0.387 score respectively which impressively raised the leaderboard score with an outcome of 0.711. Its computation time to answer each query on CPU is far less i.e. 0.0696s (than 0.3556s and 0.8967s) and has suitable hyperparameters for our dataset as maximum answer length is 64, greater batch size as well as learning rate. In terms of timing and performance, each epoch took around 5 min on average on a computer with output size of 3.27kB which is again far better than other frameworks.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100149"},"PeriodicalIF":0.0,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143942112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
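A rough sketch of the extraction step described above: an extractive QA model reads a publication passage and returns the dataset name it mentions. The checkpoint, question wording, and example passage below are illustrative assumptions; the paper's pipeline additionally uses SpaCy and NER and its own choice of QA checkpoints (e.g., the salti BERT model).

```python
# Sketch of DEFNLP's QA-based dataset extraction using a generic Hugging Face
# extractive QA pipeline. Model name and question wording are illustrative only.
# SpaCy NER could be used upstream to pre-filter candidate passages or entities.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

passage = (
    "We evaluate our approach using the National Education Longitudinal Study "
    "collected by the National Center for Education Statistics."
)

answer = qa(question="Which dataset is used in this study?", context=passage)
print(answer["answer"], answer["score"])  # e.g. 'National Education Longitudinal Study'
```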
Bayesian Q-learning in multi-objective reward model for homophobic and transphobic text classification in low-resource languages: A hypothesis testing framework in multi-objective setting
Natural Language Processing Journal Pub Date : 2025-05-06 DOI: 10.1016/j.nlp.2025.100152
Vivek Suresh Raj , Ruba Priyadharshini , Saranya Rajiakodi , Bharathi Raja Chakravarthi
{"title":"Bayesian Q-learning in multi-objective reward model for homophobic and transphobic text classification in low-resource languages: A hypothesis testing framework in multi-objective setting","authors":"Vivek Suresh Raj ,&nbsp;Ruba Priyadharshini ,&nbsp;Saranya Rajiakodi ,&nbsp;Bharathi Raja Chakravarthi","doi":"10.1016/j.nlp.2025.100152","DOIUrl":"10.1016/j.nlp.2025.100152","url":null,"abstract":"<div><div>Most Reinforcement Learning (RL) algorithms optimize a single-objective function, whereas real-world decision-making involves multiple aspects. For hate comment classification, an agent must balance maximizing the F1-score while minimizing False Positives (FP) to enhance precision and reduce misclassifications. However, such multi-objective optimization introduces uncertainties in decision-making. To address this, we propose a Bayesian Q-Learning framework with a convolutional neural network policy. The policy outputs action logits, integrated with Q-value estimates sampled via Thompson Sampling from a Gaussian posterior. Our reward function combines F1-score (objective 1) and a penalty for misclassification (objective 2) to optimize learning. To validate our framework, firstly we show that our framework classifies the hate-comments comparatively better than other baselines by scoring an F1-score of 83%, 93%, 77% and 71% in English-Tamil, English, Kannada and Malayalam datasets for detecting homophobic and transphobic comments respectively. Secondly, we demonstrate that the variance of Q-value estimates in our Bayesian posterior decreases significantly over time, indicating that the agent has learned an optimal policy that effectively balances the competing objectives. This finding is further supported by statistical t-tests conducted across all datasets, which confirm the significance of the observed variance reduction. Additionally, we observe our agent’s multi-objective optimization path in 3D space, which shows its ability to balance reward (F1-score) and regret. Furthermore, we compare the action selection between our Bayesian approach and non-Bayesian action clustering using K-Means algorithms, where our analysis highlights coherent clustering which indicates structure exploration, while non-Bayesian approach shows premature convergence to suboptimal policies.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100152"},"PeriodicalIF":0.0,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143924105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
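A toy sketch of the Thompson Sampling loop implied by the abstract: Q-values for the two actions are drawn from Gaussian posteriors, and the reward mixes an F1 term with a false-positive penalty. The update rule, penalty weight, and the hard-coded steps are assumptions; the paper's CNN policy and the integration with action logits are omitted.

```python
# Toy Bayesian Q-learning with Thompson Sampling and a two-objective reward.
# All constants (lr, fp_penalty) and the fake data stream are illustrative.
import numpy as np

rng = np.random.default_rng(0)

mu = np.zeros(2)    # posterior means for actions 0 = "not hate", 1 = "hate"
var = np.ones(2)    # posterior variances
lr = 0.1            # update rate (assumption)
fp_penalty = 0.5    # weight of the false-positive penalty (assumption)

def reward(pred: int, label: int, f1_estimate: float) -> float:
    # Objective 1: running F1 estimate; objective 2: penalise false positives.
    return f1_estimate - fp_penalty * float(pred == 1 and label == 0)

for step, (label, f1_estimate) in enumerate([(1, 0.60), (0, 0.62), (0, 0.61), (1, 0.65)]):
    sampled_q = rng.normal(mu, np.sqrt(var))   # Thompson Sampling draw per action
    action = int(np.argmax(sampled_q))
    r = reward(action, label, f1_estimate)
    mu[action] += lr * (r - mu[action])        # move mean toward observed reward
    var[action] *= (1 - lr)                    # shrink variance -> less exploration
    print(step, action, round(r, 3), mu.round(3), var.round(3))
```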
It is all in the [MASK]: Simple instruction-tuning enables BERT-like masked language models as generative classifiers
Natural Language Processing Journal Pub Date : 2025-05-06 DOI: 10.1016/j.nlp.2025.100150
Benjamin Clavié, Nathan Cooper, Benjamin Warner
{"title":"It is all in the [MASK]: Simple instruction-tuning enables BERT-like masked language models as generative classifiers","authors":"Benjamin Clavié,&nbsp;Nathan Cooper,&nbsp;Benjamin Warner","doi":"10.1016/j.nlp.2025.100150","DOIUrl":"10.1016/j.nlp.2025.100150","url":null,"abstract":"<div><div>While encoder-only models such as BERT and ModernBERT are ubiquitous in real-world NLP applications, their conventional reliance on task-specific classification heads can limit their applicability compared to decoder-based large language models (LLMs). In this work, we introduce ModernBERT-Large-Instruct, a 0.4B-parameter encoder model that leverages its masked language modeling (MLM) head for generative classification. We design a simple approach, extracting all single-token answers from the FLAN dataset collection, and re-purposing standard MLM pre-training to only mask this single token answer. Our approach employs an intentionally simple training loop and inference mechanism that requires no heavy pre-processing, heavily engineered prompting, or architectural modifications. ModernBERT-Large-Instruct exhibits strong zero-shot performance on both classification and knowledge-based tasks, outperforming similarly sized LLMs on MMLU and achieving 93% of Llama3-1B’s MMLU performance with 60% less parameters. We also demonstrate that, when fine-tuned, the generative approach using the MLM head matches or even surpasses traditional classification-head methods across diverse NLU tasks. This capability emerges specifically in models trained on contemporary, diverse data mixes, with models trained on lower volume, less-diverse data yielding considerably weaker performance. Although preliminary, these results demonstrate the potential of using the original generative masked language modeling head over traditional task-specific heads for downstream tasks. Our work suggests that further exploration into this area is warranted, highlighting many avenues for future improvements.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100150"},"PeriodicalIF":0.0,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143937154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
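A minimal sketch of classification through the MLM head: score single-token verbalizers at the [MASK] position and pick the highest-scoring label. It uses a generic BERT checkpoint, an ad-hoc prompt, and a two-word verbalizer pair as stand-ins; the paper's ModernBERT-Large-Instruct checkpoint and instruction template will differ.

```python
# Zero-shot classification via the MLM head: compare logits of single-token
# answers at the [MASK] position. Checkpoint, prompt, and verbalizers are
# illustrative assumptions, not the paper's setup.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).eval()

text = "The movie was a complete waste of time."
prompt = f"{text} Overall the review sentiment is {tok.mask_token}."
verbalizers = {"positive": "good", "negative": "bad"}  # single-token answers

inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos[0]]

scores = {label: logits[tok.convert_tokens_to_ids(word)].item()
          for label, word in verbalizers.items()}
print(max(scores, key=scores.get), scores)
```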
Bag-of-Word approach is not dead: A performance analysis on a myriad of text classification challenges
Natural Language Processing Journal Pub Date : 2025-05-06 DOI: 10.1016/j.nlp.2025.100154
Mario Graff , Daniela Moctezuma , Eric S. Téllez
{"title":"Bag-of-Word approach is not dead: A performance analysis on a myriad of text classification challenges","authors":"Mario Graff ,&nbsp;Daniela Moctezuma ,&nbsp;Eric S. Téllez","doi":"10.1016/j.nlp.2025.100154","DOIUrl":"10.1016/j.nlp.2025.100154","url":null,"abstract":"<div><div>The Bag-of-Words (BoW) representation, enhanced with a classifier, was a pioneering approach to solving text classification problems. However, with the advent of transformers and, in general, deep learning architectures, the field has dynamically shifted its focus towards customizing these architectures for various natural language processing tasks, including text classification problems. For a newcomer, it might be impossible to realize that for some text classification problems, the traditional approach is still competitive. This research analyzes the competitiveness of BoW-based representations in different text-classification competitions run in English, Spanish, and Italian. To analyze the performance of these BoW-based representations, we participated in 12 text classification international competitions, summing up 24 tasks comprising five English tasks, seven in Italian, and twelve in Spanish. The results show that the proposed BoW representations have a difference of just 10% w.r.t. the competition winner and less than 2% in three tasks corresponding to author profiling. BoW outperforms BERT solutions and dominates in author profiling tasks.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100154"},"PeriodicalIF":0.0,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143924104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
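For reference, a minimal BoW baseline of the kind the paper benchmarks: TF-IDF n-gram features feeding a linear SVM. The toy data and hyperparameters are placeholders, not the authors' configuration.

```python
# Minimal Bag-of-Words baseline: TF-IDF word n-grams plus a linear SVM.
# Data and hyperparameters are toy placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["I loved this phone", "Terrible battery life", "Great value", "Awful screen"]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True, min_df=1),
    LinearSVC(C=1.0),
)
model.fit(texts, labels)
print(model.predict(["battery is terrible", "great phone"]))
```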
Financial sentiment analysis for pre-trained language models incorporating dictionary knowledge and neutral features
Natural Language Processing Journal Pub Date : 2025-04-23 DOI: 10.1016/j.nlp.2025.100148
Yongyong Sun, Haiping Yuan, Fei Xu
{"title":"Financial sentiment analysis for pre-trained language models incorporating dictionary knowledge and neutral features","authors":"Yongyong Sun,&nbsp;Haiping Yuan,&nbsp;Fei Xu","doi":"10.1016/j.nlp.2025.100148","DOIUrl":"10.1016/j.nlp.2025.100148","url":null,"abstract":"<div><div>With increasing financial market complexity, accurate sentiment analysis of financial texts has become crucial. Traditional methods often misinterpret financial terminology and show high error rates in neutral sentiment recognition. This study aims to improve financial sentiment analysis accuracy through developing EnhancedFinSentiBERT, a model incorporating financial domain pre-training, dictionary knowledge embedding, and neutral feature extraction. Experiments on the FinancialPhraseBank, FiQA and Headline datasets demonstrate the model’s superior performance compared to mainstream methods, particularly in neutral sentiment recognition. Ablation analysis reveals that dictionary knowledge embedding and neutral feature extraction contribute most significantly to model improvement.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100148"},"PeriodicalIF":0.0,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143869774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
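The abstract does not detail how dictionary knowledge and neutral features are injected, so the sketch below only shows one simple flavor of the idea: lexicon hit counts concatenated with TF-IDF features in a linear classifier. The tiny lexicon, the fusion scheme, and the classifier are all assumptions; the paper embeds dictionary knowledge into a pre-trained language model rather than a linear pipeline.

```python
# One simple way to combine dictionary knowledge with text features: count hits
# from a (tiny, illustrative) financial sentiment lexicon and concatenate them
# with TF-IDF features. Everything here is an assumption for illustration only.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, make_pipeline

LEXICON = {"pos": {"growth", "beat", "profit"}, "neg": {"loss", "decline", "miss"}}

class LexiconCounts(BaseEstimator, TransformerMixin):
    """Counts of positive and negative lexicon terms per document."""
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        rows = []
        for text in X:
            toks = set(text.lower().split())
            rows.append([len(toks & LEXICON["pos"]), len(toks & LEXICON["neg"])])
        return np.array(rows, dtype=float)

texts = ["Revenue growth beat expectations",
         "Quarterly loss and decline in sales",
         "The company reported results in line with guidance"]
labels = ["positive", "negative", "neutral"]

model = make_pipeline(
    FeatureUnion([("tfidf", TfidfVectorizer()), ("lex", LexiconCounts())]),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["profit beat guidance", "sales decline continues"]))
```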
OVALYTICS: Enhancing Offensive Video Detection with YouTube Transcriptions and Advanced Language Models
Natural Language Processing Journal Pub Date : 2025-04-21 DOI: 10.1016/j.nlp.2025.100147
Sneha Chinivar , Roopa M.S. , Arunalatha J.S. , Venugopal K.R.
{"title":"OVALYTICS: Enhancing Offensive Video Detection with YouTube Transcriptions and Advanced Language Models","authors":"Sneha Chinivar ,&nbsp;Roopa M.S. ,&nbsp;Arunalatha J.S. ,&nbsp;Venugopal K.R.","doi":"10.1016/j.nlp.2025.100147","DOIUrl":"10.1016/j.nlp.2025.100147","url":null,"abstract":"<div><div>The exponential growth of offensive content online underscores the need for robust content moderation. In response, this work presents OVALYTICS (Offensive Video Analysis Leveraging YouTube Transcriptions with Intelligent Classification System), a comprehensive framework that introduces novel integrations of advanced technologies for offensive video detection. Unlike existing approaches, OVALYTICS uniquely combines Whisper AI for accurate audio-to-text transcription with state-of-the-art large language models (LLMs) such as BERT, ALBERT, XLM-R, MPNet, and T5 for semantic analysis. The framework also features a newly curated dataset tailored for fine-grained evaluation, achieving significant improvements in accuracy and F1-scores over traditional methods and advancing the state of automated content moderation.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100147"},"PeriodicalIF":0.0,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
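A sketch of the transcribe-then-classify pipeline: Whisper converts the video's audio to text, and a text classifier flags offensive content. The checkpoint names and the audio path are illustrative assumptions; the paper fine-tunes BERT, ALBERT, XLM-R, MPNet, and T5 on its own curated dataset rather than using an off-the-shelf toxicity model.

```python
# Transcribe-then-classify sketch: Whisper ASR followed by a text classifier.
# Checkpoints and the audio path are placeholders, not the paper's models.
import whisper                      # pip install openai-whisper
from transformers import pipeline

asr = whisper.load_model("base")    # small, CPU-friendly Whisper checkpoint
result = asr.transcribe("video_audio.mp3")   # placeholder path to extracted audio
transcript = result["text"]

clf = pipeline("text-classification", model="unitary/toxic-bert")
print(clf(transcript[:512]))        # crude truncation; e.g. [{'label': 'toxic', ...}]
```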
GPT-4o in radiology: In-context learning based automatic generation of radiology impressions
Natural Language Processing Journal Pub Date : 2025-04-12 DOI: 10.1016/j.nlp.2025.100145
Mohammed Mahyoub , Yong Wang , Mohammad T. Khasawneh
{"title":"GPT-4o in radiology: In-context learning based automatic generation of radiology impressions","authors":"Mohammed Mahyoub ,&nbsp;Yong Wang ,&nbsp;Mohammad T. Khasawneh","doi":"10.1016/j.nlp.2025.100145","DOIUrl":"10.1016/j.nlp.2025.100145","url":null,"abstract":"<div><div>Translating radiological findings into clinical impressions is critical for effective medical communication but is often labor-intensive and prone to variability. This study investigates the potential of the GPT-4o large language model (LLM) to automate the generation of radiology impressions from reports, using in-context learning techniques to improve accuracy. Using the MIMIC-IV-CXR dataset, the study compares three generative AI approaches: zero-shot generation (ZS), in-context learning with random examples (ICLR), and in-context learning with semantic nearest neighbors (ICLSN). These methods were evaluated using text summarization metrics such as BERT Score, ROUGE, and METEOR. Statistical tests, including the Kruskal–Wallis and Mann–Whitney U tests, were employed to validate the results. The ICLSN approach significantly outperformed ZS and ICLR, achieving the highest precision (0.9002 ± 0.0471), recall (0.8914 ± 0.0501), and F1 scores (0.8952 ± 0.0432) according to BERT Score. ROUGE and METEOR metrics confirmed these findings, with ICLSN showing notable improvements in ROUGE-1, ROUGE-2, and ROUGE-L scores (0.4673 ± 0.2606, 0.3130 ± 0.2863, and 0.4198 ± 0.2674, respectively). METEOR scores also improved significantly with ICLSN (0.4448 ± 0.2804). The study demonstrates that GPT-4o, particularly when using semantic nearest neighbors for in-context learning, can effectively generate clinically relevant radiology impressions. The method enhances the accuracy and reliability of automated clinical text summarization, suggesting a valuable tool for improving the efficiency and consistency of radiological assessments. Future work should explore fine-tuning to further optimize these outcomes and extend applications to other clinical texts.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100145"},"PeriodicalIF":0.0,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143833343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
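A sketch of in-context learning with semantic nearest neighbors: embed a pool of (findings, impression) pairs, retrieve the pairs closest to the new findings, and prepend them to the GPT-4o prompt. The embedding model, the value of k, the prompt wording, and the synthetic example pool are assumptions; the study draws its examples from MIMIC-IV-CXR.

```python
# In-context learning with semantic nearest neighbours: retrieve the most similar
# few-shot examples by sentence embedding, then prompt GPT-4o. The example pool,
# embedder, and prompt template are illustrative assumptions.
from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

pool = [
    ("Heart size normal. No focal consolidation.", "No acute cardiopulmonary process."),
    ("Left basilar opacity, likely atelectasis.", "Left basilar atelectasis."),
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
pool_emb = embedder.encode([f for f, _ in pool], convert_to_tensor=True)

findings = "Low lung volumes. Patchy right lower lobe opacity concerning for pneumonia."
query_emb = embedder.encode(findings, convert_to_tensor=True)
hits = util.semantic_search(query_emb, pool_emb, top_k=2)[0]

shots = "\n\n".join(
    f"Findings: {pool[h['corpus_id']][0]}\nImpression: {pool[h['corpus_id']][1]}"
    for h in hits
)
prompt = f"{shots}\n\nFindings: {findings}\nImpression:"

client = OpenAI()  # requires OPENAI_API_KEY in the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "system", "content": "You write concise radiology impressions."},
              {"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```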
Detecting cognitive engagement in online course forums: A review of frameworks and methodologies
Natural Language Processing Journal Pub Date : 2025-04-11 DOI: 10.1016/j.nlp.2025.100146
Nazmus Sakeef, M. Ali Akber Dewan, Fuhua Lin, Dharamjit Parmar
{"title":"Detecting cognitive engagement in online course forums: A review of frameworks and methodologies","authors":"Nazmus Sakeef,&nbsp;M. Ali Akber Dewan,&nbsp;Fuhua Lin,&nbsp;Dharamjit Parmar","doi":"10.1016/j.nlp.2025.100146","DOIUrl":"10.1016/j.nlp.2025.100146","url":null,"abstract":"<div><div>A key aspect of online learning in higher education involves the utilization of course discussion forums. Assessing the quality of posts, such as cognitive engagement, within online course discussion forums, and determining students’ interest and participation is challenging yet beneficial. This research investigates existing literature on identifying the cognitive engagement of online learners through the analysis of course discussion forums. Essentially, this review examines three educational frameworks - <em>Van Der Meijden’s Knowledge Construction in Synchronous and Asynchronous Discussion Posts (KCSA), Community of Inquiry (CoI), and Interactive, Constructive, Active, and Passive (ICAP)</em>, which have been widely used for students’ cognitive engagement detection analyzing their posts in course discussion forums. This study also examines the natural language processing and deep learning approaches employed and integrated with the above three educational frameworks in the existing literature concerning the detection of cognitive engagement in the context of online learning. The article provides recommendations for enhancing instructional design and fostering student engagement by leveraging cognitive engagement detection. This research underscores the significance of automating the identification of cognitive engagement in online learning and puts forth suggestions for future research directions.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100146"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143848527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Precise length control for large language models
Natural Language Processing Journal Pub Date : 2025-04-01 DOI: 10.1016/j.nlp.2025.100143
Bradley Butcher, Michael O’Keefe, James Titchener
{"title":"Precise length control for large language models","authors":"Bradley Butcher,&nbsp;Michael O’Keefe,&nbsp;James Titchener","doi":"10.1016/j.nlp.2025.100143","DOIUrl":"10.1016/j.nlp.2025.100143","url":null,"abstract":"<div><div>Large Language Models (LLMs) are increasingly used in production systems, powering applications such as chatbots, summarization, and question answering. Despite their success, controlling the length of their response remains a significant challenge, particularly for tasks requiring brevity or specific levels of detail. In this work, we propose a method to adapt pre-trained decoder-only LLMs for precise control of response length. Our approach incorporates a secondary length-difference positional encoding (LDPE) into the input embeddings, which counts down to a user-set response termination length. Fine-tuning with LDPE allows the model to learn to terminate responses coherently at the desired length, achieving mean token errors of less than 3 tokens. We also introduce Max New Tokens++, an extension that enables flexible upper-bound length control, rather than an exact target. Experimental results on tasks such as question answering and document summarization demonstrate that our method enables precise length control without compromising response quality.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100143"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143800015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
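A minimal sketch of what a countdown-style length encoding could look like when added to input embeddings, based only on the abstract's description of LDPE. The learned-embedding formulation, the clamping, and the sizes are assumptions rather than the paper's exact definition.

```python
# Countdown-style length encoding sketch: each position is tagged with the number
# of tokens remaining before the target response length, via a learned embedding
# added to the token embeddings. Formulation and sizes are assumptions.
import torch
import torch.nn as nn

class CountdownLengthEncoding(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        self.remaining = nn.Embedding(max_len + 1, d_model)

    def forward(self, token_embeddings: torch.Tensor, target_len: int) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, d_model)
        seq_len = token_embeddings.size(1)
        positions = torch.arange(seq_len, device=token_embeddings.device)
        remaining = (target_len - positions).clamp(min=0)   # counts down to 0
        return token_embeddings + self.remaining(remaining)

# Toy usage: 2 sequences of 16 tokens, hidden size 32, desired response length 10.
emb = torch.randn(2, 16, 32)
ldpe = CountdownLengthEncoding(d_model=32, max_len=128)
out = ldpe(emb, target_len=10)
print(out.shape)  # torch.Size([2, 16, 32])
```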