Natural Language Processing Journal: Latest Articles

Token and part-of-speech fusion for pretraining of transformers with application in automatic cyberbullying detection
Natural Language Processing Journal Pub Date : 2025-02-07 DOI: 10.1016/j.nlp.2025.100132
Nor Saiful Azam Bin Nor Azmi, Michal Ptaszynski, Fumito Masui, Juuso Eronen, Karol Nowakowski
Cyberbullying detection remains a significant challenge in the context of expanding internet and social media usage. This study proposes a novel pretraining methodology for transformer models, integrating Part-of-Speech (POS) information with a distinctive tokenization scheme. The proposed model, based on the ELECTRA architecture and referred to as ELECTRA_POS, undergoes pretraining and fine-tuning. By leveraging linguistic structures, this approach improves the understanding of context and subtle meaning in text. In evaluations on the GLUE benchmark and a dedicated cyberbullying detection dataset, ELECTRA_POS consistently delivers better performance than conventional transformer models. Key contributions include the introduction of POS-token fusion techniques and their application to cyberbullying detection, as well as insights into how linguistic features influence transformer-based models. The results highlight how integrating POS information into the transformer model improves the detection of harmful online behavior while also benefiting other natural language processing (NLP) tasks. (Volume 10, Article 100132)
Citations: 0
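The POS-token fusion the abstract describes can be illustrated with a minimal sketch; the fusion function and separator below are hypothetical, since the paper's exact scheme is not given here.

```python
# Hypothetical sketch of token/POS fusion for pretraining input: each surface
# token is paired with its part-of-speech tag to form a fused unit that the
# model's tokenizer can learn from. The "_" separator is an assumption.

def fuse_tokens_with_pos(tokens, pos_tags, sep="_"):
    """Fuse each token with its POS tag, e.g. 'pathetic' + 'ADJ' -> 'pathetic_ADJ'."""
    if len(tokens) != len(pos_tags):
        raise ValueError("tokens and POS tags must align one-to-one")
    return [f"{tok}{sep}{tag}" for tok, tag in zip(tokens, pos_tags)]

tokens = ["you", "are", "pathetic"]
pos_tags = ["PRON", "AUX", "ADJ"]  # Universal Dependencies-style tags
fused = fuse_tokens_with_pos(tokens, pos_tags)
print(fused)  # ['you_PRON', 'are_AUX', 'pathetic_ADJ']
```

The fused units would then feed the ELECTRA-style pretraining objective in place of plain tokens, letting the model see syntax alongside surface form.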
A comparative analysis of encoder only and decoder only models for challenging LLM-generated STEM MCQs using a self-evaluation approach
Natural Language Processing Journal Pub Date : 2025-02-05 DOI: 10.1016/j.nlp.2025.100131
Ghada Soliman Ph.D., Hozaifa Zaki, Mohamed Kilany
Large Language Models (LLMs) have demonstrated impressive capabilities in various tasks, including Multiple-Choice Question Answering (MCQA) evaluated on benchmark datasets with few-shot prompting. Given the absence of benchmark Science, Technology, Engineering, and Mathematics (STEM) datasets of Multiple-Choice Questions (MCQs) created by LLMs, we employed various LLMs (e.g., Vicuna-13B, Bard, and GPT-3.5) to generate MCQs on STEM topics curated from Wikipedia. We evaluated open-source models such as Llama 2-7B and Mistral-7B Instruct, along with an encoder model, DeBERTa v3 Large, on inference with added context as well as on fine-tuning with and without context. The results showed that DeBERTa v3 Large and Mistral-7B Instruct outperform Llama 2-7B, highlighting the potential of LLMs with fewer parameters to answer hard MCQs when given appropriate context through fine-tuning. We also benchmarked these models against closed-source models such as Gemini and GPT-4 on inference with context, showing that the gap between open-source and closed-source models can narrow when context is provided. Our work demonstrates the capability of LLMs to create more challenging tasks that can serve as self-evaluation for other models. It also contributes to understanding LLMs' capabilities on STEM MCQ tasks and emphasizes the importance of context for enhancing the performance of LLMs with fewer parameters. (Volume 10, Article 100131)
Citations: 0
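As a rough illustration of few-shot MCQA prompting with added context, the prompt format and helper below are assumptions for demonstration, not the paper's exact setup.

```python
# Assemble a multiple-choice prompt with optional context and few-shot
# exemplars, as commonly done when evaluating LLMs on MCQA.

def build_mcq_prompt(question, choices, context=None, exemplars=()):
    """Build an MCQ prompt; exemplars are (question, choices, answer-letter) triples."""
    parts = []
    for ex_q, ex_choices, ex_answer in exemplars:
        parts.append("Question: " + ex_q)
        parts.extend(f"{label}. {c}" for label, c in zip("ABCD", ex_choices))
        parts.append("Answer: " + ex_answer)
    if context:
        parts.append("Context: " + context)
    parts.append("Question: " + question)
    parts.extend(f"{label}. {c}" for label, c in zip("ABCD", choices))
    parts.append("Answer:")
    return "\n".join(parts)

prompt = build_mcq_prompt(
    "What is the SI unit of force?",
    ["Joule", "Newton", "Watt", "Pascal"],
    context="Force is mass times acceleration.",
    exemplars=[("What is 2+2?", ["3", "4", "5", "6"], "B")],
)
print(prompt)
```

The "with context" conditions in the study correspond to supplying the `context` argument; the "without context" conditions omit it.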
A survey on chatbots and large language models: Testing and evaluation techniques
Natural Language Processing Journal Pub Date : 2025-01-25 DOI: 10.1016/j.nlp.2025.100128
Sonali Uttam Singh, Akbar Siami Namin
Chatbots have developed considerably in recent decades, evolving alongside the field of Artificial Intelligence (AI) and enabling powerful capabilities in tasks such as text generation and summarization, sentiment analysis, and many other Natural Language Processing (NLP) tasks. Advancements in language models (LMs), specifically LLMs, have played an important role in improving the capabilities of chatbots. This survey paper provides a comprehensive overview of chatbots integrated with LLMs, focusing primarily on the testing, evaluation, and performance techniques and frameworks associated with them. The paper discusses the foundational concepts of chatbots and their evolution, and highlights the challenges and opportunities they present by reviewing state-of-the-art papers on chatbot design, testing, and evaluation. The survey also delves into the key components of chatbot systems, including Natural Language Understanding (NLU), dialogue management, and Natural Language Generation (NLG), and examines how LLMs have influenced each of these components. Furthermore, the survey examines the ethical considerations and limitations associated with LLMs, and investigates the evaluation techniques and metrics used to assess the performance and effectiveness of these language models. The paper highlights the need for an appropriate framework for testing and evaluating chatbots and their associated LLMs, in order to provide users with reliable information and to improve quality as the field of machine learning advances. (Volume 10, Article 100128)
Citations: 0
Machine learning vs. rule-based methods for document classification of electronic health records within mental health care—A systematic literature review
Natural Language Processing Journal Pub Date : 2025-01-25 DOI: 10.1016/j.nlp.2025.100129
Emil Rijcken, Kalliopi Zervanou, Pablo Mosteiro, Floortje Scheepers, Marco Spruit, Uzay Kaymak
Document classification is a widely used task for analyzing mental healthcare texts. This systematic literature review focuses on the document classification of electronic health records in mental healthcare. Over the last decade, there has been a shift from rule-based to machine-learning methods. Despite this shift, no systematic comparison of these two approaches exists for mental healthcare applications. This review examines the evolution, applications, and performance of these methods over time. We find that for most of the last decade, rule-based methods have outperformed machine-learning approaches. However, with the development of more advanced machine-learning techniques, performance has improved. In particular, Transformer-based models enable machine-learning approaches to outperform rule-based methods for the first time. (Volume 10, Article 100129)
Citations: 0
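The contrast the review draws can be sketched in miniature; the keyword rule and count-based "learner" below are toy stand-ins for illustration, not methods from any reviewed study.

```python
from collections import defaultdict

def rule_based(note):
    """Rule-based paradigm: a hand-written keyword rule."""
    return "risk" if "suicidal" in note.lower() else "no-risk"

def train_keyword_weights(notes, labels):
    """Machine-learning paradigm in miniature: learn per-word label counts from data."""
    weights = defaultdict(lambda: defaultdict(int))
    for note, label in zip(notes, labels):
        for word in note.lower().split():
            weights[word][label] += 1
    return weights

def ml_classify(note, weights, labels=("risk", "no-risk")):
    """Score each label by the learned word counts and pick the best."""
    scores = {l: sum(weights[w][l] for w in note.lower().split()) for l in labels}
    return max(scores, key=scores.get)

notes = ["patient reports suicidal ideation", "routine checkup no concerns"]
labels = ["risk", "no-risk"]
w = train_keyword_weights(notes, labels)
print(rule_based(notes[0]))                               # risk
print(ml_classify("notes mention suicidal thoughts", w))  # risk
```

The rule is transparent but brittle; the learned weights generalize from examples, which is the trade-off the review traces over the decade.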
Emotion on the edge: An evaluation of feature representations and machine learning models
Natural Language Processing Journal Pub Date : 2025-01-23 DOI: 10.1016/j.nlp.2025.100127
James Thomas Black, Muhammad Zeeshan Shakir
This paper presents a comprehensive analysis of textual emotion classification, employing a tweet-based dataset to classify emotions such as surprise, love, fear, anger, sadness, and joy. We compare the performance of nine distinct machine learning classification models using Bag of Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF) feature representations, as well as a fine-tuned DistilBERT transformer model. We examine the training and inference times of the models to determine the most efficient combination for an edge architecture, investigating each model's performance from training to inference on an edge board. The study underscores the significance of combining models and features in machine learning, detailing how these choices affect performance when low computational power must be considered. The findings reveal that feature representations significantly influence model efficacy, with BoW and TF-IDF models outperforming DistilBERT. The results show that while BoW models tend to have higher accuracy, TF-IDF models perform better overall, requiring less time for fitting, with Stochastic Gradient Descent and Support Vector Machines proving the most efficient in terms of performance and inference times. (Volume 10, Article 100127)
Citations: 0
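The two feature representations compared in the study can be sketched in plain Python. This is a simplified TF-IDF variant for illustration; production systems typically use smoothed IDF and vector normalization.

```python
import math
from collections import Counter

def bow(doc):
    """Bag of Words: raw term counts."""
    return dict(Counter(doc.split()))

def tfidf(doc, corpus):
    """TF-IDF: term frequency scaled by log inverse document frequency."""
    n = len(corpus)
    scores = {}
    for term, tf in Counter(doc.split()).items():
        df = sum(1 for d in corpus if term in d.split())  # document frequency
        scores[term] = tf * math.log(n / df)
    return scores

corpus = ["i love this day", "i fear this storm", "i feel joy"]
print(bow(corpus[0]))           # {'i': 1, 'love': 1, 'this': 1, 'day': 1}
print(tfidf(corpus[0], corpus))
```

Note how TF-IDF zeroes out "i" (it appears in every tweet) while BoW counts it like any other word; down-weighting such uninformative terms is one reason TF-IDF pipelines can fit faster and generalize better on edge hardware.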
RESPECT: A framework for promoting inclusive and respectful conversations in online communications
Natural Language Processing Journal Pub Date : 2025-01-16 DOI: 10.1016/j.nlp.2025.100126
Shaina Raza, Abdullah Y. Muaad, Emrul Hasan, Muskan Garg, Zainab Al-Zanbouri, Syed Raza Bashir
Toxicity and bias in online conversations hinder respectful interactions, leading to issues such as harassment and discrimination. While advancements in natural language processing (NLP) have improved the detection and mitigation of toxicity on digital platforms, the evolving nature of social media conversations demands continuous innovation. Previous efforts have made strides in identifying and reducing toxicity; however, a unified and adaptable framework for managing toxic content across diverse online discourse remains essential. This paper introduces RESPECT, a comprehensive framework designed to effectively identify and mitigate toxicity in online conversations. The framework comprises two components: an encoder-only model for detecting toxicity and a decoder-only model for generating debiased versions of the text. By leveraging the capabilities of transformer-based models, toxicity is addressed as a binary classification problem. Subsequently, open-source and proprietary large language models are used with prompt-based approaches to rewrite toxic text into non-toxic, contextually accurate alternatives. Empirical results demonstrate that this approach significantly reduces toxicity across various conversational styles, fostering safer and more respectful communication in online environments. (Volume 10, Article 100126)
Citations: 0
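A minimal sketch of the two-stage detect-then-rewrite idea follows, with toy stand-ins for the encoder-only classifier and the decoder-only rewriter; the prompt wording is an assumption, not the paper's.

```python
# Stage 1: binary toxicity classification; Stage 2: prompt an LLM to rewrite
# only the text flagged as toxic. Both models are stubbed here.

REWRITE_PROMPT = (
    "Rewrite the following message so it is respectful and non-toxic, "
    "preserving its original meaning:\n\n{text}"
)

def respect_pipeline(text, is_toxic, rewrite):
    """Detect toxicity, then rewrite toxic text; pass non-toxic text through."""
    if is_toxic(text):
        return rewrite(REWRITE_PROMPT.format(text=text))
    return text

# Toy stand-ins for demonstration only:
toxic_words = {"idiot", "stupid"}
is_toxic = lambda t: any(w in t.lower() for w in toxic_words)   # encoder stub
rewrite = lambda prompt: "[rewritten respectfully]"             # decoder/LLM stub

print(respect_pipeline("Have a nice day", is_toxic, rewrite))   # unchanged
print(respect_pipeline("You are an idiot", is_toxic, rewrite))  # rewritten
```

Gating the rewriter behind the classifier keeps benign messages untouched and reserves the expensive generation step for flagged text.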
Sentiment analysis for stock market research: A bibliometric study
Natural Language Processing Journal Pub Date : 2025-01-10 DOI: 10.1016/j.nlp.2025.100125
Xieling Chen, Haoran Xie, Zongxi Li, Han Zhang, Xiaohui Tao, Fu Lee Wang
Sentiment analysis is widely utilized in stock market research. To review the field comprehensively, a bibliometric analysis was performed on 223 articles relating to sentiment analysis for stock markets, published from 2010 to 2022 and collected from the Web of Science database. Specifically, we recognized active affiliations, countries/regions, publication sources, and subject areas, identified the most-cited research articles, visualized scientific collaborations among authors, affiliations, and countries/regions, and revealed the main research topics. Findings indicate that computer science journals are active in publishing work on sentiment analysis-facilitated stock market research, and that this research has attracted significant contributions from researchers across a wide geographic distribution. The intensity of intra-regional collaboration is higher than that of inter-regional collaboration. Thematic topics were detected using keyword mapping, with the following topics widely studied by scholars: deep learning for stock market prediction, stock trend forecasting empowered by financial news sentiment, effects of investor sentiment on financial markets, and microblog sentiment classification for market prediction. These findings help depict the status of the field for researchers and practitioners, raising their awareness of research frontiers when planning projects concerning the application of sentiment analysis to stock markets. (Volume 10, Article 100125)
Citations: 0
Evaluation of open and closed-source LLMs for low-resource language with zero-shot, few-shot, and chain-of-thought prompting
Natural Language Processing Journal Pub Date : 2025-01-03 DOI: 10.1016/j.nlp.2024.100124
Zabir Al Nazi, Md. Rajib Hossain, Faisal Al Mamun
As the global deployment of Large Language Models (LLMs) increases, the demand for multilingual capabilities becomes more crucial. While many LLMs excel in real-time applications for high-resource languages, few are tailored specifically to low-resource languages. The limited availability of text corpora for low-resource languages, coupled with their minimal use during LLM training, hampers the models' ability to perform effectively in real-time applications. Evaluations of LLMs are also significantly less extensive for low-resource languages. This study offers a comprehensive evaluation of both open-source and closed-source multilingual LLMs on a low-resource language, Bengali, which remains notably underrepresented in computational linguistics. Despite the limited number of models pre-trained exclusively on Bengali, we assess the performance of six prominent LLMs, three closed-source (GPT-3.5, GPT-4o, Gemini) and three open-source (Aya 101, BLOOM, LLaMA), across key natural language processing (NLP) tasks, including text classification, sentiment analysis, summarization, and question answering. These tasks were evaluated using three prompting techniques: Zero-Shot, Few-Shot, and Chain-of-Thought (CoT). The study found that the default hyperparameters of these pre-trained models, such as temperature, maximum token limit, and the number of few-shot examples, did not yield optimal outcomes and led to hallucination in many instances. To address these challenges, ablation studies were conducted on key hyperparameters, particularly temperature and the number of shots, to optimize Few-Shot learning and enhance model performance. The focus of this research is on understanding how these LLMs adapt to low-resource downstream tasks, emphasizing their linguistic flexibility and contextual understanding. Experimental results showed that the closed-source GPT-4o model, using Few-Shot learning and Chain-of-Thought prompting, achieved the highest performance across multiple tasks: an F1 score of 84.54% for text classification, 99.00% for sentiment analysis, an F1_bert score of 72.87% for summarization, and 58.22% for question answering. For transparency and reproducibility, all methodologies and code from this study are available at: https://github.com/zabir-nabil/bangla-multilingual-llm-eval. (Volume 10, Article 100124)
Citations: 0
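The three prompting techniques can be sketched as prompt builders; the templates and hyperparameter values below are illustrative assumptions, not the study's exact settings.

```python
def make_prompt(task, question, mode="zero-shot", exemplars=()):
    """Build a prompt in one of the three styles evaluated: zero-shot,
    few-shot (with (question, answer) exemplars), or chain-of-thought."""
    if mode == "zero-shot":
        return f"{task}\nQuestion: {question}\nAnswer:"
    if mode == "few-shot":
        shots = "\n".join(f"Question: {q}\nAnswer: {a}" for q, a in exemplars)
        return f"{task}\n{shots}\nQuestion: {question}\nAnswer:"
    if mode == "chain-of-thought":
        return f"{task}\nQuestion: {question}\nLet's think step by step."
    raise ValueError(f"unknown mode: {mode}")

# Hyperparameters of the kind the study ablated (placeholder values):
generation_config = {"temperature": 0.2, "max_tokens": 256, "num_shots": 3}

p = make_prompt("Classify the sentiment of the Bengali text.",
                "<Bengali sentence>", mode="chain-of-thought")
print(p)
```

The ablation in the study amounts to sweeping values such as `temperature` and `num_shots` in a config like `generation_config` and re-scoring each task.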
Bibliometric analysis of natural language processing using CiteSpace and VOSviewer
Natural Language Processing Journal Pub Date : 2024-12-19 DOI: 10.1016/j.nlp.2024.100123
Xiuming Chen, Wenjie Tian, Haoyun Fang
Natural Language Processing (NLP) holds a pivotal position in the domains of computer science and artificial intelligence (AI). Its focus is on developing theories and methodologies that facilitate seamless and effective communication between humans and computers through natural language. In this paper, the bibliometric analysis tools CiteSpace and VOSviewer (Visualization of Similarities viewer) are employed to summarize the domain of NLP research and gain insight into its core research priorities. The Web of Science (WoS) Core Collection database serves as the primary source for data acquisition; the data comprise 4803 articles on NLP published from 2011 to May 15, 2024. The trends and types of articles reveal the developmental trajectory and current hotspots of NLP. The analysis covers eight aspects: volume of published articles, classification, countries, institutional collaboration, author collaboration networks, cited author networks, co-cited journals, and co-cited references. The applications of NLP are vast, spanning areas such as AI, electronic health records, risk, task analysis, data mining, and computational modeling. The findings suggest that future research ought to focus on areas such as AI, risk, task analysis, and computational modeling. This paper provides learners and practitioners with a comprehensive insight into the current status and emerging trends of NLP. (Volume 10, Article 100123)
Citations: 0
Semantic-based temporal attention network for Arabic Video Captioning
Natural Language Processing Journal Pub Date : 2024-12-18 DOI: 10.1016/j.nlp.2024.100122
Adel Jalal Yousif, Mohammed H. Al-Jammas
In recent years, there has been a surge of research aiming to bridge the gap between computer vision and natural language. In a linguistically diverse region like the Arab world, it is essential to establish mechanisms that facilitate the understanding of visual content in native languages. This paper presents an Arabic video captioning method using an encoder-decoder paradigm based on CNN and LSTM networks. We employ a temporal attention mechanism along with semantic features to align keyframes with relevant semantic tags. Because no Arabic captioning dataset exists, we use Google's machine translation system to generate Arabic captions for the MSVD and MSR-VTT datasets, which can then be used to train end-to-end Arabic video captioning models. The semantic features are extracted from a neural semantic representation network trained specifically on Arabic tags for better understanding. Semitic languages such as Arabic are characterized by complex morphology, which poses challenges for video captioning; we alleviate these difficulties by employing the AraBERT model as a preprocessing tool. Comprehensive experimental results demonstrate the superior performance of the proposed method compared to state-of-the-art models on two widely used benchmarks, achieving a CIDEr score of 72.1% on MSVD and 38.0% on MSR-VTT. (Volume 10, Article 100122)
Citations: 0
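A temporal attention step of the kind described, scoring keyframe features against a decoder query, can be sketched as follows; this is a dot-product variant for illustration, and the paper's exact scoring function may differ.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def temporal_attention(frame_feats, query):
    """Weight each keyframe feature by its alignment with the decoder query,
    returning the attention weights and the weighted context vector."""
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in frame_feats]
    weights = softmax(scores)
    dim = len(frame_feats[0])
    context = [sum(w * feat[d] for w, feat in zip(weights, frame_feats))
               for d in range(dim)]
    return weights, context

# Two toy keyframe features; the query aligns with the first frame.
w, c = temporal_attention([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
print(w, c)
```

At each decoding step the LSTM's state would play the role of `query`, so the caption generator attends to different keyframes for different words.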