Adapting language generation to dialogue environments and users for task-oriented dialogue systems

Atsumoto Ohashi, Ryuichiro Higashinaka
{"title":"Adapting language generation to dialogue environments and users for task-oriented dialogue systems","authors":"Atsumoto Ohashi,&nbsp;Ryuichiro Higashinaka","doi":"10.1016/j.nlp.2025.100153","DOIUrl":null,"url":null,"abstract":"<div><div>When a natural language generation (NLG) component is implemented in a real-world task-oriented dialogue system, it is necessary to generate not only natural utterances as learned on training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and the user (e.g., users with low levels of understanding ability). Inspired by recent advances in reinforcement learning (RL) for language generation tasks, we propose ANTOR, a method for <strong>A</strong>daptive <strong>N</strong>atural language generation for <strong>T</strong>ask-<strong>O</strong>riented dialogue via <strong>R</strong>einforcement learning. In ANTOR, a natural language understanding (NLU) module, which corresponds to the user’s understanding of system utterances, is incorporated into the objective function of RL. If the NLG’s intentions are correctly conveyed to the NLU, the NLG is given a positive reward. We conducted experiments on the two major task-oriented dialogue datasets, MultiWOZ and Schema-Guided Dialogue, and we confirmed that ANTOR could generate adaptive utterances against speech recognition errors and the different vocabulary levels of users. Further analysis revealed that ANTOR adapts to noisy environments and users with different vocabulary levels by prioritizing words that are less likely to cause speech recognition errors and by using words that match the user’s vocabulary level.</div></div>","PeriodicalId":100944,"journal":{"name":"Natural Language Processing Journal","volume":"11 ","pages":"Article 100153"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Natural Language Processing Journal","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949719125000299","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

When a natural language generation (NLG) component is implemented in a real-world task-oriented dialogue system, it is necessary to generate not only natural utterances as learned on training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and the user (e.g., users with low levels of understanding ability). Inspired by recent advances in reinforcement learning (RL) for language generation tasks, we propose ANTOR, a method for Adaptive Natural language generation for Task-Oriented dialogue via Reinforcement learning. In ANTOR, a natural language understanding (NLU) module, which corresponds to the user’s understanding of system utterances, is incorporated into the objective function of RL. If the NLG’s intentions are correctly conveyed to the NLU, the NLG is given a positive reward. We conducted experiments on the two major task-oriented dialogue datasets, MultiWOZ and Schema-Guided Dialogue, and we confirmed that ANTOR could generate adaptive utterances against speech recognition errors and the different vocabulary levels of users. Further analysis revealed that ANTOR adapts to noisy environments and users with different vocabulary levels by prioritizing words that are less likely to cause speech recognition errors and by using words that match the user’s vocabulary level.
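To make the reward design described above concrete, the following is a minimal sketch (not the authors' implementation) of the core idea: the NLG is rewarded when an NLU module, standing in for the user's understanding, recovers the dialogue acts the NLG was asked to express. The `nlu_parse` stub, the `(act, slot, value)` act format, and the F1-based reward are illustrative assumptions; in the actual system the NLU would be a trained parser, possibly applied to an ASR transcript of the synthesized utterance to simulate a noisy environment.

```python
# Sketch of an understanding-based reward for RL-trained NLG (assumed, illustrative).
from typing import Set, Tuple

DialogueAct = Tuple[str, str, str]  # (act type, slot, value) -- assumed format


def nlu_parse(utterance: str) -> Set[DialogueAct]:
    """Hypothetical stand-in for a trained NLU module (the user's understanding)."""
    # A real system would run an intent/slot parser here; this stub uses keywords.
    parsed: Set[DialogueAct] = set()
    if "moderate" in utterance and "price" in utterance:
        parsed.add(("inform", "pricerange", "moderate"))
    if "centre" in utterance or "center" in utterance:
        parsed.add(("inform", "area", "centre"))
    return parsed


def understanding_reward(intended: Set[DialogueAct], utterance: str) -> float:
    """F1 between the intended acts and the acts the NLU recovers (one plausible reward)."""
    recovered = nlu_parse(utterance)
    if not intended and not recovered:
        return 1.0
    tp = len(intended & recovered)
    precision = tp / len(recovered) if recovered else 0.0
    recall = tp / len(intended) if intended else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    acts = {("inform", "pricerange", "moderate"), ("inform", "area", "centre")}
    utterance = "There is a moderate price restaurant in the centre of town."
    print(understanding_reward(acts, utterance))  # 1.0 when both acts are recovered
```

In an RL loop, this score would be computed for each generated utterance and used as the (positive) reward signal, so the generator learns to prefer wordings that the NLU, and by proxy the user, can parse correctly even under speech recognition errors or a limited vocabulary.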