Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023): Latest Publications

cTBLS: Augmenting Large Language Models with Conversational Tables
Pub Date: 2023-03-21. DOI: 10.18653/v1/2023.nlp4convai-1.6
Anirudh S. Sundar, Larry Heck
Abstract: Optimizing accuracy and performance while eliminating hallucinations of open-domain conversational large language models (LLMs) is an open research challenge. A particularly promising direction is to augment and ground LLMs with information from structured sources. This paper introduces Conversational Tables (cTBLS), a three-step architecture to retrieve and generate dialogue responses grounded in retrieved tabular information. cTBLS uses Transformer encoder embeddings for Dense Table Retrieval and obtains up to a 125% relative improvement over the retriever in the previous state-of-the-art system on the HybriDialogue dataset. cTBLS then uses a shared process between encoder and decoder models to perform coarse+fine tabular knowledge (e.g., cell) ranking, combined with a GPT-3.5 LLM response generator, to yield a 2x relative improvement in ROUGE scores. Finally, human evaluators prefer cTBLS over 80% of the time (coherency, fluency) and judge its informativeness to be 4x better than the previous state-of-the-art.
Citations: 1
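The Dense Table Retrieval step in the abstract above can be illustrated schematically. The sketch below is not the authors' implementation: it substitutes a toy deterministic bag-of-words embedder for the Transformer encoder embeddings, and ranks candidate tables by cosine similarity to the query.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a Transformer encoder: hash each token into a
    fixed-size bag-of-words vector and L2-normalize it."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[sum(ord(c) for c in tok) % dim] += 1.0  # deterministic token "hash"
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def rank_tables(query: str, tables: list[str]) -> list[int]:
    """Dense retrieval: rank tables by cosine similarity between the query
    embedding and each table's embedding (all vectors are unit-normalized,
    so the dot product is the cosine similarity)."""
    q = embed(query)
    scores = [float(q @ embed(t)) for t in tables]
    return sorted(range(len(tables)), key=lambda i: -scores[i])

# Hypothetical linearized table headers, invented for this example.
tables = [
    "city population area",
    "movie director year box office",
    "player team goals assists",
]
ranking = rank_tables("which director made the movie", tables)
print(ranking)  # the movie table (index 1) ranks first
```

In the real system the embedder is a trained Transformer encoder and the ranking is done over dense index structures, but the retrieve-by-cosine-similarity shape is the same.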
On the Underspecification of Situations in Open-domain Conversational Datasets
DOI: 10.18653/v1/2023.nlp4convai-1.2
Naoki Otani, J. Araki, Hyeongsik Kim, E. Hovy
Abstract: Advances in open-domain conversational systems have been achieved through the creation of numerous conversation datasets. However, many of the commonly used datasets contain little or no information about the conversational situation, such as relevant objects/people, their properties, and relationships. This absence leads to underspecification of the problem space and typically results in undesired dialogue system behavior. This position paper discusses the current state of the field associated with processing situational information. An analysis of response generation using three datasets shows that explicitly provided situational information can improve the coherence and specificity of generated responses, but further experiments reveal that generation systems can be misled by irrelevant information. Our conclusions from this evaluation provide insights into the problem and directions for future research.
Citations: 0
User Simulator Assisted Open-ended Conversational Recommendation System
DOI: 10.18653/v1/2023.nlp4convai-1.8
Qiusi Zhan, Xiaojie Guo, Heng Ji, Lingfei Wu
Abstract: Conversational recommendation systems (CRS) have gained popularity in e-commerce as they can recommend items during user interactions. However, current open-ended CRS have limited recommendation performance due to their short-sighted training process, which only predicts one utterance at a time without considering its future impact. To address this, we propose a User Simulator (US) that communicates with the CRS using natural language based on given user preferences, enabling long-term reinforcement learning. We also introduce a framework that uses reinforcement learning (RL) with two novel rewards, i.e., recommendation and conversation rewards, to train the CRS. This approach considers the long-term goals and improves both the conversation and recommendation performance of the CRS. Our experiments show that our proposed framework improves the recall of recommendations by almost 100%. Moreover, human evaluation demonstrates the superiority of our framework in enhancing the informativeness of generated utterances.
Citations: 0
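The training idea in the abstract above (a policy optimized against a blend of recommendation and conversation rewards) can be sketched as a toy REINFORCE loop. Everything below is illustrative: the three candidate utterances, the reward values and weights, and the learning-rate/baseline constants are all invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: the CRS policy picks one of 3 candidate utterances per turn.
# Utterance 2 both recommends the right item and reads well, so it earns
# the highest combined reward.
REC_HIT = np.array([0.0, 0.0, 1.0])      # recommendation reward per utterance
CONV_SCORE = np.array([0.2, 0.8, 0.9])   # conversation-quality reward

def combined_reward(a: int, w_rec: float = 1.0, w_conv: float = 0.5) -> float:
    """Blend the two reward signals into one scalar return."""
    return w_rec * float(REC_HIT[a]) + w_conv * float(CONV_SCORE[a])

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

theta = np.zeros(3)          # policy logits over the candidate utterances
lr, baseline = 0.5, 0.0
for _ in range(500):
    p = softmax(theta)
    a = int(rng.choice(3, p=p))            # sample an utterance
    r = combined_reward(a)
    baseline += 0.1 * (r - baseline)       # running-average baseline
    grad = -p
    grad[a] += 1.0                         # d log pi(a) / d theta
    theta += lr * (r - baseline) * grad    # REINFORCE update

print(np.round(softmax(theta), 3))
```

The baseline-subtracted update steers the policy toward the utterance with the best long-term blended reward, which is the mechanism (in miniature) that lets the full framework optimize beyond single-utterance prediction.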
Generating Video Game Scripts with Style
DOI: 10.18653/v1/2023.nlp4convai-1.11
Gaetan Lopez Latouche, Laurence Marcotte, Ben Swanson
Abstract: While modern language models can generate a scripted scene in the format of a play, movie, or video game cutscene, the quality of machine-generated text remains behind that of human authors. In this work, we focus on one aspect of this quality gap: generating text in the style of an arbitrary and unseen character. We propose the Style Adaptive Semiparametric Scriptwriter (SASS), which leverages an adaptive weighted style memory to generate dialog lines in accordance with a character's speaking patterns. Using the LIGHT dataset as well as a new corpus of scripts from twenty-three AAA video games, we show that SASS not only outperforms similar models but in some cases can also be used in conjunction with them to yield further improvement.
Citations: 1
A Survey of Challenges and Methods in the Computational Modeling of Multi-Party Dialog
DOI: 10.18653/v1/2023.nlp4convai-1.12
Ananya Ganesh, Martha Palmer, Katharina Kann
Abstract: Advances in conversational AI systems, powered in particular by large language models, have facilitated rapid progress in understanding and generating dialog. Typically, task-oriented or open-domain dialog systems have been designed to work with two-party dialog, i.e., the exchange of utterances between a single user and a dialog system. However, modern dialog systems may be deployed in scenarios such as classrooms or meetings where conversational analysis of multiple speakers is required. This survey will present research around computational modeling of "multi-party dialog", outlining differences from two-party dialog, challenges and issues in working with multi-party dialog, and methods for representing multi-party dialog. We also provide an overview of dialog datasets created for the study of multi-party dialog, as well as tasks that are of interest in this domain.
Citations: 0
Dialogue State Tracking with Sparse Local Slot Attention
DOI: 10.18653/v1/2023.nlp4convai-1.4
Longfei Yang, Jiyi Li, Sheng Li, T. Shinozaki
Abstract: Dialogue state tracking (DST) is designed to track the dialogue state during the conversations between users and systems, which is the core of task-oriented dialogue systems. Mainstream models predict the values for each slot with fully token-wise slot attention from dialogue history. However, such operations may result in overlooking the neighboring relationship. Moreover, it may lead the model to assign probability mass to irrelevant parts, while these parts contribute little. It becomes severe with the increase in dialogue length. Therefore, we investigate sparse local slot attention for DST in this work. Slot-specific local semantic information is obtained at a sub-sampled temporal resolution capturing local dependencies for each slot. Then these local representations are attended with sparse attention weights to guide the model to pay attention to relevant parts of local information for subsequent state value prediction. The experimental results on MultiWOZ 2.0 and 2.4 datasets show that the proposed approach effectively improves the performance of ontology-based dialogue state tracking, and performs better than token-wise attention for long dialogues.
Citations: 0
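The sparse attention mechanism this entry describes can be illustrated in isolation. The sketch below is a generic top-k sparse attention step, not the authors' model: it keeps only the k largest attention logits for a slot, masks the rest to -inf before the softmax, and pools the value vectors with the resulting weights. The paper's sub-sampled local windows are omitted for brevity.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def sparse_slot_attention(logits: np.ndarray, values: np.ndarray, k: int) -> np.ndarray:
    """Top-k sparse attention: mask all but the k largest logits to -inf,
    softmax the survivors, and return the weighted sum of value vectors.
    Positions outside the top-k receive exactly zero attention weight."""
    keep = np.argsort(logits)[-k:]
    masked = np.full_like(logits, -np.inf)
    masked[keep] = logits[keep]
    weights = softmax(masked)
    return weights @ values

# 4 dialogue-history positions; only the two most relevant survive masking.
logits = np.array([0.1, 5.0, 0.2, 4.0])
values = np.eye(4)   # identity values expose the attention weights directly
out = sparse_slot_attention(logits, values, k=2)
print(np.round(out, 3))   # zero weight on positions 0 and 2
```

Because the masked positions get exactly zero weight (rather than a small positive weight), no probability mass leaks to irrelevant parts of the history, which is the failure mode of dense token-wise attention that the abstract highlights for long dialogues.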