Simulating and Modeling the Risk of Conversational Search

Zhenduo Wang, Qingyao Ai
{"title":"会话搜索风险的模拟和建模","authors":"Zhenduo Wang, Qingyao Ai","doi":"10.1145/3507357","DOIUrl":null,"url":null,"abstract":"In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both natural language processing and information retrieval communities have focused on identifying the need to ask clarifying questions and methodologies of generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. As existing conversational search models are far from perfect, it is possible and common that they could retrieve/generate bad clarifying questions. Asking too many clarifying questions can also drain a user’s patience when the user prefers searching efficiency over correctness. Hence, these models can backfire and harm a user’s search experience due to these risks from asking clarifying questions. In this work, we propose a simulation framework to simulate the risk of asking questions in conversational search and further revise a risk-aware conversational search model to control the risk. We show the model’s robustness and effectiveness through extensive experiments on three conversational datasets — MSDialog, Ubuntu Dialog Corpus, and Opendialkg — in which we compare it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperform all of the baselines in most of our experiments.","PeriodicalId":6934,"journal":{"name":"ACM Transactions on Information Systems (TOIS)","volume":"64 1","pages":"1 - 33"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Simulating and Modeling the Risk of Conversational Search\",\"authors\":\"Zhenduo Wang, Qingyao Ai\",\"doi\":\"10.1145/3507357\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both natural language processing and information retrieval communities have focused on identifying the need to ask clarifying questions and methodologies of generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. As existing conversational search models are far from perfect, it is possible and common that they could retrieve/generate bad clarifying questions. Asking too many clarifying questions can also drain a user’s patience when the user prefers searching efficiency over correctness. Hence, these models can backfire and harm a user’s search experience due to these risks from asking clarifying questions. In this work, we propose a simulation framework to simulate the risk of asking questions in conversational search and further revise a risk-aware conversational search model to control the risk. We show the model’s robustness and effectiveness through extensive experiments on three conversational datasets — MSDialog, Ubuntu Dialog Corpus, and Opendialkg — in which we compare it with multiple baselines. 
We show that the risk-control module can work with two different re-ranker models and outperform all of the baselines in most of our experiments.\",\"PeriodicalId\":6934,\"journal\":{\"name\":\"ACM Transactions on Information Systems (TOIS)\",\"volume\":\"64 1\",\"pages\":\"1 - 33\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Transactions on Information Systems (TOIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3507357\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Information Systems (TOIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3507357","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3

Abstract

In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the natural language processing and information retrieval communities have focused on identifying when clarifying questions are needed and on methodologies for generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. Because existing conversational search models are far from perfect, however, it is possible and common for them to retrieve or generate bad clarifying questions. Asking too many clarifying questions can also drain a user's patience when the user values search efficiency over correctness. Hence, these models can backfire and harm a user's search experience through the risks of asking clarifying questions. In this work, we propose a simulation framework to simulate the risk of asking questions in conversational search and further revise a risk-aware conversational search model to control that risk. We show the model's robustness and effectiveness through extensive experiments on three conversational datasets (MSDialog, the Ubuntu Dialog Corpus, and Opendialkg) in which we compare it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperforms all of the baselines in most of our experiments.
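The core decision the abstract describes is a per-turn choice: return the current best answer, or ask a clarifying question whose payoff is uncertain and whose cost grows as the user's patience drains. The sketch below is a minimal illustration of that trade-off under stated assumptions, not the authors' actual model: the score inputs stand in for re-ranker outputs (the paper plugs its risk-control module into two different re-rankers), and the `risk_penalty` weight and the user-tolerance simulation are hypothetical choices made for this example.

```python
import random

# Hypothetical sketch of a risk-aware ask-vs-answer policy.
# `answer_score` and `question_score` stand in for re-ranker outputs;
# the penalty weight and the tolerance model are assumptions, not the
# paper's method.

def decide_action(answer_score: float,
                  question_score: float,
                  turns_asked: int,
                  risk_penalty: float = 0.2) -> str:
    """Return 'answer' or 'ask'.

    The expected value of asking is discounted by a risk term that
    grows with the number of clarifying questions already asked,
    modeling the user's draining patience.
    """
    expected_ask_value = question_score - risk_penalty * (turns_asked + 1)
    return "ask" if expected_ask_value > answer_score else "answer"

def simulate_session(max_turns: int = 5, tolerance: int = 2) -> str:
    """Toy simulation: the user abandons the session once the agent
    asks more clarifying questions than they will tolerate."""
    turns_asked = 0
    for _ in range(max_turns):
        # Stand-ins for re-ranker scores on the current context.
        answer_score = random.uniform(0.3, 0.9)
        question_score = random.uniform(0.3, 0.9)
        action = decide_action(answer_score, question_score, turns_asked)
        if action == "answer":
            return "answered"
        turns_asked += 1
        if turns_asked > tolerance:
            return "abandoned"  # the risk materialized
    return "answered"

if __name__ == "__main__":
    outcomes = [simulate_session() for _ in range(1000)]
    print("abandon rate:", outcomes.count("abandoned") / len(outcomes))
```

Raising `risk_penalty` in this toy setup makes the policy ask less often and lowers the abandon rate at the cost of answering with less clarified queries, which is the efficiency-versus-correctness tension the abstract highlights; the paper's simulation framework is what supplies realistic scores and user behavior in place of the random stand-ins here.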