{"title":"Beware of botshit: How to manage the epistemic risks of generative chatbots","authors":"","doi":"10.1016/j.bushor.2024.03.001","DOIUrl":null,"url":null,"abstract":"<div><p>Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by predicting responses rather than knowing the meaning of their responses. In other words, chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as <em>hallucinations</em>. When humans uncritically use this untruthful content, it becomes what we call <em>botshit</em>. This article focuses on how to use chatbots for content generation work while mitigating the <em>epistemic</em> (i.e., the process of producing knowledge) risks associated with botshit. Drawing on risk management research, we introduce a typology framework that orients how chatbots can be used based on two dimensions: response veracity verifiability and response veracity importance. The framework identifies four modes of chatbot work (<em>authenticated</em>, <em>autonomous</em>, <em>automated</em>, and <em>augmented</em>) with a botshit-related risk (<em>ignorance</em>, <em>miscalibration</em>, <em>routinization,</em> and <em>black boxing</em>). We describe and illustrate each mode and offer advice to help chatbot users guard against the botshit risks that come with each mode.</p></div>","PeriodicalId":48347,"journal":{"name":"Business Horizons","volume":"67 5","pages":"Pages 471-486"},"PeriodicalIF":5.8000,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Business Horizons","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0007681324000272","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0
Abstract
Advances in large language model (LLM) technology enable chatbots to generate and analyze content for our work. Generative chatbots do this work by predicting responses rather than knowing the meaning of their responses. In other words, chatbots can produce coherent-sounding but inaccurate or fabricated content, referred to as hallucinations. When humans uncritically use this untruthful content, it becomes what we call botshit. This article focuses on how to use chatbots for content generation work while mitigating the epistemic (i.e., relating to the process of producing knowledge) risks associated with botshit. Drawing on risk management research, we introduce a typology framework that frames how chatbots can be used based on two dimensions: response veracity verifiability and response veracity importance. The framework identifies four modes of chatbot work (authenticated, autonomous, automated, and augmented), each with an associated botshit-related risk (ignorance, miscalibration, routinization, and black boxing, respectively). We describe and illustrate each mode and offer advice to help chatbot users guard against the botshit risks that come with each mode.
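The typology can be read as a simple 2x2 decision aid: classify a task by whether response veracity is important and whether it is verifiable, then apply the matching mode and watch for its risk. The Python sketch below is a minimal illustration, not the authors' specification: the mode and risk names come from the abstract, but the exact placement of each mode in the 2x2 grid (e.g., that high-importance, high-verifiability work is "authenticated") and the classify_chatbot_work helper are illustrative assumptions.

# Illustrative sketch of the botshit-risk typology as a 2x2 lookup.
# The mode/risk pairings follow the parallel ordering in the abstract;
# the quadrant placements are assumptions made for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class ChatbotWorkMode:
    mode: str  # how the chatbot should be used for the task
    risk: str  # the botshit-related risk to guard against

# Keyed by (veracity_importance_high, veracity_verifiability_high).
TYPOLOGY = {
    (True,  True):  ChatbotWorkMode("authenticated", "ignorance"),
    (False, False): ChatbotWorkMode("autonomous",    "miscalibration"),
    (False, True):  ChatbotWorkMode("automated",     "routinization"),
    (True,  False): ChatbotWorkMode("augmented",     "black boxing"),
}

def classify_chatbot_work(importance_high: bool, verifiability_high: bool) -> ChatbotWorkMode:
    """Return the work mode and its associated risk for a chatbot task."""
    return TYPOLOGY[(importance_high, verifiability_high)]

if __name__ == "__main__":
    # Example: drafting a client-facing report where accuracy matters
    # and the claims can be checked against source documents.
    result = classify_chatbot_work(importance_high=True, verifiability_high=True)
    print(f"Mode: {result.mode}; guard against: {result.risk}")

The point of such a lookup is only that the two dimensions jointly determine the appropriate mode; in practice, judging importance and verifiability is itself the hard, human part of the framework.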
About the journal:
Business Horizons, the bimonthly journal of the Kelley School of Business at Indiana University, is dedicated to publishing original articles that appeal to both business academics and practitioners. Our editorial focus is on covering a diverse array of topics within the broader field of business, with a particular emphasis on identifying critical business issues and proposing practical solutions. Our goal is to inspire readers to approach business practices from new and innovative perspectives. Business Horizons occupies a distinctive position among business publications by offering articles that strike a balance between academic rigor and practical relevance. As such, our articles are grounded in scholarly research yet presented in a clear and accessible format, making them relevant to a broad audience within the business community.