User Preferences for Large Language Model versus Template-Based Explanations of Movie Recommendations: A Pilot Study

Julien Albert, Martin Balfroid, Miriam Doh, Jeremie Bogaert, Luca La Fisca, Liesbet De Vos, Bryan Renard, Vincent Stragier, Emmanuel Jean

arXiv:2409.06297 · arXiv - CS - Human-Computer Interaction · 2024-09-10
{"title":"用户对大语言模型和基于模板的电影推荐解释的偏好:试点研究","authors":"Julien Albert, Martin Balfroid, Miriam Doh, Jeremie Bogaert, Luca La Fisca, Liesbet De Vos, Bryan Renard, Vincent Stragier, Emmanuel Jean","doi":"arxiv-2409.06297","DOIUrl":null,"url":null,"abstract":"Recommender systems have become integral to our digital experiences, from\nonline shopping to streaming platforms. Still, the rationale behind their\nsuggestions often remains opaque to users. While some systems employ a\ngraph-based approach, offering inherent explainability through paths\nassociating recommended items and seed items, non-experts could not easily\nunderstand these explanations. A popular alternative is to convert graph-based\nexplanations into textual ones using a template and an algorithm, which we\ndenote here as ''template-based'' explanations. Yet, these can sometimes come\nacross as impersonal or uninspiring. A novel method would be to employ large\nlanguage models (LLMs) for this purpose, which we denote as ''LLM-based''. To\nassess the effectiveness of LLMs in generating more resonant explanations, we\nconducted a pilot study with 25 participants. They were presented with three\nexplanations: (1) traditional template-based, (2) LLM-based rephrasing of the\ntemplate output, and (3) purely LLM-based explanations derived from the\ngraph-based explanations. Although subject to high variance, preliminary\nfindings suggest that LLM-based explanations may provide a richer and more\nengaging user experience, further aligning with user expectations. This study\nsheds light on the potential limitations of current explanation methods and\noffers promising directions for leveraging large language models to improve\nuser satisfaction and trust in recommender systems.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"49 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"User Preferences for Large Language Model versus Template-Based Explanations of Movie Recommendations: A Pilot Study\",\"authors\":\"Julien Albert, Martin Balfroid, Miriam Doh, Jeremie Bogaert, Luca La Fisca, Liesbet De Vos, Bryan Renard, Vincent Stragier, Emmanuel Jean\",\"doi\":\"arxiv-2409.06297\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recommender systems have become integral to our digital experiences, from\\nonline shopping to streaming platforms. Still, the rationale behind their\\nsuggestions often remains opaque to users. While some systems employ a\\ngraph-based approach, offering inherent explainability through paths\\nassociating recommended items and seed items, non-experts could not easily\\nunderstand these explanations. A popular alternative is to convert graph-based\\nexplanations into textual ones using a template and an algorithm, which we\\ndenote here as ''template-based'' explanations. Yet, these can sometimes come\\nacross as impersonal or uninspiring. A novel method would be to employ large\\nlanguage models (LLMs) for this purpose, which we denote as ''LLM-based''. To\\nassess the effectiveness of LLMs in generating more resonant explanations, we\\nconducted a pilot study with 25 participants. They were presented with three\\nexplanations: (1) traditional template-based, (2) LLM-based rephrasing of the\\ntemplate output, and (3) purely LLM-based explanations derived from the\\ngraph-based explanations. 
Although subject to high variance, preliminary\\nfindings suggest that LLM-based explanations may provide a richer and more\\nengaging user experience, further aligning with user expectations. This study\\nsheds light on the potential limitations of current explanation methods and\\noffers promising directions for leveraging large language models to improve\\nuser satisfaction and trust in recommender systems.\",\"PeriodicalId\":501541,\"journal\":{\"name\":\"arXiv - CS - Human-Computer Interaction\",\"volume\":\"49 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Human-Computer Interaction\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.06297\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.06297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
User Preferences for Large Language Model versus Template-Based Explanations of Movie Recommendations: A Pilot Study
Recommender systems have become integral to our digital experiences, from online shopping to streaming platforms. Still, the rationale behind their suggestions often remains opaque to users. While some systems employ a graph-based approach, offering inherent explainability through paths associating recommended items with seed items, non-experts cannot easily understand these explanations. A popular alternative is to convert graph-based explanations into textual ones using a template and an algorithm, which we denote here as "template-based" explanations. Yet, these can sometimes come across as impersonal or uninspiring. A novel alternative is to employ large language models (LLMs) for this purpose, which we denote as "LLM-based" explanations. To assess the effectiveness of LLMs in generating more resonant explanations, we conducted a pilot study with 25 participants. Each participant was presented with three explanations: (1) a traditional template-based explanation, (2) an LLM-based rephrasing of the template output, and (3) a purely LLM-based explanation derived from the graph-based explanation. Although subject to high variance, preliminary findings suggest that LLM-based explanations may provide a richer and more engaging user experience, aligning more closely with user expectations. This study sheds light on the potential limitations of current explanation methods and offers promising directions for leveraging large language models to improve user satisfaction and trust in recommender systems.
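To make the contrast between the compared explanation styles concrete, here is a minimal, illustrative Python sketch. It is not the authors' code: the graph path format, the template wording, and the prompt text are all assumptions made for illustration.

```python
# Illustrative sketch (not the authors' implementation) of two of the
# explanation styles compared in the study. The path format, template
# wording, and prompt text are assumptions; condition (2) in the study
# would instead send the template output itself to an LLM for rephrasing.

def template_explanation(seed: str, link: str, rec: str) -> str:
    """Condition (1): render a graph path (seed -> link -> recommendation)
    with a fixed, algorithmically filled template."""
    return (
        f'Because you liked "{seed}", and both movies feature {link}, '
        f'we recommend "{rec}".'
    )

def llm_explanation_prompt(seed: str, link: str, rec: str) -> str:
    """Condition (3): verbalize the graph-based explanation directly by
    prompting an LLM with the underlying path."""
    return (
        "In one friendly, personal sentence, explain to a movie fan why "
        f'"{rec}" is recommended, given this connection in the '
        f'recommendation graph: "{seed}" -> {link} -> "{rec}".'
    )

if __name__ == "__main__":
    seed, link, rec = "Heat", "Al Pacino", "The Godfather"
    print(template_explanation(seed, link, rec))
    print(llm_explanation_prompt(seed, link, rec))  # would be sent to an LLM
```

The template path is fully deterministic, which is what can make it feel impersonal; the LLM-based variants trade that determinism for more natural, varied phrasing at the cost of output variability.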