Addressing the notion of trust around ChatGPT in the high-stakes use case of insurance

IF 10.1 · CAS Tier 1 (Sociology) · JCR Q1, SOCIAL ISSUES
Juliane Ressel , Michaele Völler , Finbarr Murphy , Martin Mullins
Journal: Technology in Society
DOI: 10.1016/j.techsoc.2024.102644
Published: 2024-06-20 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S0160791X24001921
Citations: 0

Abstract


The public discourse concerning the level of (dis)trust in ChatGPT and other applications based on large language models (LLMs) is loaded with generic, dread risk terms, while the heterogeneity of relevant theoretical concepts and empirical measurements of trust further impedes in-depth analysis. Thus, a more nuanced understanding of the factors driving the trust judgment call is essential to avoid unwarranted trust. In this commentary paper, we propose that addressing the notion of trust in consumer-facing LLM-based systems across the insurance industry can confer enhanced specificity to this debate. The concept and role of trust are germane to this particular setting due to the highly intangible nature of the product coupled with elevated levels of risk, complexity, and information asymmetry. Moreover, widespread use of LLMs in this sector is to be expected, given the vast array of text documents, particularly general policy conditions or claims protocols. Insurance as a practice is highly relevant to the welfare of citizens and has numerous spillover effects on wider public policy areas. We therefore argue that a domain-specific approach to good AI governance is essential to avoid negative externalities around financial inclusion. Indeed, as a constitutive element of trust, vulnerability is particularly challenging within this high-stakes set of transactions, with the adoption of LLMs adding to the socio-ethical risks. In light of this, our commentary provides a valuable baseline to support regulators and policymakers in unravelling the profound socioeconomic consequences that may arise from adopting consumer-facing LLMs in insurance.

Source journal: Technology in Society
CiteScore: 17.90
Self-citation rate: 14.10%
Articles per year: 316
Review time: 60 days
About the journal: Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.