Title: Adapting language generation to dialogue environments and users for task-oriented dialogue systems
Authors: Atsumoto Ohashi, Ryuichiro Higashinaka
DOI: 10.1016/j.nlp.2025.100153
Journal: Natural Language Processing Journal, Volume 11, Article 100153
Published: 2025-05-07 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2949719125000299
Citations: 0
Abstract
When a natural language generation (NLG) component is deployed in a real-world task-oriented dialogue system, it must generate not only natural utterances as learned from training data but also utterances adapted to the dialogue environment (e.g., noise from environmental sounds) and to the user (e.g., users with low levels of understanding ability). Inspired by recent advances in reinforcement learning (RL) for language generation tasks, we propose ANTOR, a method for Adaptive Natural language generation for Task-Oriented dialogue via Reinforcement learning. In ANTOR, a natural language understanding (NLU) module, which stands in for the user’s understanding of system utterances, is incorporated into the RL objective function: if the NLG’s intentions are correctly conveyed to the NLU, the NLG receives a positive reward. We conducted experiments on two major task-oriented dialogue datasets, MultiWOZ and Schema-Guided Dialogue, and confirmed that ANTOR can generate utterances adapted to speech recognition errors and to users with different vocabulary levels. Further analysis revealed that ANTOR achieves this adaptation by prioritizing words that are less likely to cause speech recognition errors and by using words that match the user’s vocabulary level.
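The reward idea described in the abstract can be sketched in a few lines. This is a conceptual illustration only, not the authors' implementation: the function names (`act_f1`, `rl_reward`, `toy_nlu`) and the use of dialogue-act F1 as the reward signal are assumptions for the sake of the sketch. The NLU module plays the role of the user: the NLG is rewarded to the extent that the NLU recovers the dialogue acts the NLG was asked to express.

```python
# Conceptual sketch of an NLU-in-the-loop RL reward (hypothetical names,
# not the authors' code): the NLG earns a positive reward when an NLU
# module recovers the intended dialogue acts from the generated utterance.

def act_f1(intended, recovered):
    """F1 overlap between the intended and the recovered dialogue-act sets."""
    intended, recovered = set(intended), set(recovered)
    if not intended and not recovered:
        return 1.0  # nothing to convey, nothing misread
    tp = len(intended & recovered)
    if tp == 0:
        return 0.0
    precision = tp / len(recovered)
    recall = tp / len(intended)
    return 2 * precision * recall / (precision + recall)

def rl_reward(intended_acts, utterance, nlu):
    """Reward for one generated utterance: how well the NLU (a proxy for
    the user's understanding) recovers the NLG's intended acts."""
    recovered_acts = nlu(utterance)
    return act_f1(intended_acts, recovered_acts)

# Toy NLU standing in for the user's understanding ability:
def toy_nlu(utterance):
    return [("inform", "area", "centre")] if "centre" in utterance else []

intent = [("inform", "area", "centre")]
print(rl_reward(intent, "The restaurant is in the centre.", toy_nlu))  # 1.0
print(rl_reward(intent, "The restaurant is downtown.", toy_nlu))       # 0.0
```

In the paper's setting, such a reward can be made environment- or user-specific, e.g., by passing the utterance through a simulated speech recognizer before the NLU, or by using an NLU with a restricted vocabulary, which is what pushes the NLG toward robust word choices.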