{"title":"Investigating Result Presentation in Conversational IR","authors":"Souvick Ghosh","doi":"10.1145/3295750.3298974","DOIUrl":null,"url":null,"abstract":"Recent researches in conversational IR have explored problems related to context enhancement, question-answering, and query reformulations. However, very few researches have focused on result presentation over audio channels. The linear and transient nature of speech makes it cognitively challenging for the user to process a large amount of information. Presenting the search results (from SERP) is equally challenging as it is not feasible to read out the list of results. In this paper, we propose a study to evaluate the users' preference of modalities when using conversational search systems. The study will help us to understand how results should be presented in a conversational search system. As we observe how users search using audio queries, interact with the intermediary, and process the results presented, we aim to develop an insight on how to present results more efficiently in a conversational search setting. We also plan on exploring the effectiveness and consistency of different media in a conversational search setting. Our observations will inform future designs and help to create a better understanding of such systems.","PeriodicalId":187771,"journal":{"name":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","volume":"458 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 Conference on Human Information Interaction and Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3295750.3298974","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Recent research in conversational IR has explored problems related to context enhancement, question answering, and query reformulation. However, very few studies have focused on result presentation over audio channels. The linear and transient nature of speech makes it cognitively challenging for the user to process a large amount of information. Presenting the search results (from a SERP) is equally challenging, as it is not feasible to read out the full list of results. In this paper, we propose a study to evaluate users' preferences for modalities when using conversational search systems. The study will help us understand how results should be presented in a conversational search system. By observing how users search using audio queries, interact with the intermediary, and process the results presented, we aim to develop insights into how to present results more efficiently in a conversational search setting. We also plan to explore the effectiveness and consistency of different media in a conversational search setting. Our observations will inform future designs and help create a better understanding of such systems.