{"title":"我们会使用相对糟糕的(算法)建议吗?性能反馈和建议表示对建议使用的影响","authors":"Stefan Daschner, Robert Obermaier","doi":"10.1002/bdm.70001","DOIUrl":null,"url":null,"abstract":"<p>Algorithms are capable of advising human decision-makers in an increasing number of management accounting tasks such as business forecasts. Due to expected potential of these (intelligent) algorithms, there are growing research efforts to explore ways how to boost algorithmic advice usage in forecasting tasks. However, algorithmic advice can also be erroneous. Yet, the risk of using relatively bad advice is largely ignored in this research stream. Therefore, we conduct two online experiments to examine this risk of using relatively bad advice in a forecasting task. In Experiment 1, we examine the influence of performance feedback (revealing previous relative advice quality) and source of advice on advice usage in business forecasts. The results indicate that the provision of performance feedback increases subsequent advice usage but also the usage of subsequent relatively bad advice. In Experiment 2, we investigate whether advice representation, that is, displaying forecast intervals instead of a point estimate, helps to calibrate advice usage towards relative advice quality. The results suggest that advice representation might be a potential countermeasure to the usage of relatively bad advice. However, the effect of this antidote weakens when forecast intervals become less informative.</p>","PeriodicalId":48112,"journal":{"name":"Journal of Behavioral Decision Making","volume":"37 5","pages":""},"PeriodicalIF":1.8000,"publicationDate":"2024-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/bdm.70001","citationCount":"0","resultStr":"{\"title\":\"Do We Use Relatively Bad (Algorithmic) Advice? The Effects of Performance Feedback and Advice Representation on Advice Usage\",\"authors\":\"Stefan Daschner, Robert Obermaier\",\"doi\":\"10.1002/bdm.70001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Algorithms are capable of advising human decision-makers in an increasing number of management accounting tasks such as business forecasts. Due to expected potential of these (intelligent) algorithms, there are growing research efforts to explore ways how to boost algorithmic advice usage in forecasting tasks. However, algorithmic advice can also be erroneous. Yet, the risk of using relatively bad advice is largely ignored in this research stream. Therefore, we conduct two online experiments to examine this risk of using relatively bad advice in a forecasting task. In Experiment 1, we examine the influence of performance feedback (revealing previous relative advice quality) and source of advice on advice usage in business forecasts. The results indicate that the provision of performance feedback increases subsequent advice usage but also the usage of subsequent relatively bad advice. In Experiment 2, we investigate whether advice representation, that is, displaying forecast intervals instead of a point estimate, helps to calibrate advice usage towards relative advice quality. The results suggest that advice representation might be a potential countermeasure to the usage of relatively bad advice. 
However, the effect of this antidote weakens when forecast intervals become less informative.</p>\",\"PeriodicalId\":48112,\"journal\":{\"name\":\"Journal of Behavioral Decision Making\",\"volume\":\"37 5\",\"pages\":\"\"},\"PeriodicalIF\":1.8000,\"publicationDate\":\"2024-11-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/bdm.70001\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Behavioral Decision Making\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/bdm.70001\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PSYCHOLOGY, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Behavioral Decision Making","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/bdm.70001","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PSYCHOLOGY, APPLIED","Score":null,"Total":0}
Do We Use Relatively Bad (Algorithmic) Advice? The Effects of Performance Feedback and Advice Representation on Advice Usage
Algorithms are capable of advising human decision-makers in a growing number of management accounting tasks such as business forecasting. Because of the expected potential of these (intelligent) algorithms, a growing body of research explores how to boost the usage of algorithmic advice in forecasting tasks. However, algorithmic advice can also be erroneous, and the risk of using relatively bad advice is largely ignored in this research stream. We therefore conduct two online experiments to examine this risk in a forecasting task. In Experiment 1, we examine the influence of performance feedback (revealing the previous relative advice quality) and the source of advice on advice usage in business forecasts. The results indicate that providing performance feedback increases subsequent advice usage, but it also increases the subsequent usage of relatively bad advice. In Experiment 2, we investigate whether advice representation, that is, displaying forecast intervals instead of a point estimate, helps to calibrate advice usage towards relative advice quality. The results suggest that advice representation may serve as a countermeasure against the usage of relatively bad advice. However, the effect of this antidote weakens when forecast intervals become less informative.
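The abstract does not spell out how "advice usage" is measured; in judge-advisor studies it is commonly operationalized as the weight of advice (WOA), the fraction of the distance between the decision-maker's initial forecast and the advice that the final forecast covers. The sketch below illustrates that standard measure only; the function name, clipping convention, and example numbers are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the weight-of-advice (WOA) measure commonly used in
# judge-advisor research (an assumption here, not the paper's stated method).
def weight_of_advice(initial: float, advice: float, final: float):
    """Return WOA in [0, 1]; 0 = advice ignored, 1 = advice fully adopted."""
    if advice == initial:           # no room to move toward the advice; WOA undefined
        return None
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))  # clip to [0, 1], a common convention

# Hypothetical example: the forecaster moves halfway from 100 toward advice of 140.
print(weight_of_advice(initial=100.0, advice=140.0, final=120.0))  # 0.5
```

On this measure, "using relatively bad advice" corresponds to a high WOA on trials where the advice is less accurate than the decision-maker's own initial forecast.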
Journal introduction:
The Journal of Behavioral Decision Making is a multidisciplinary journal with a broad base of content and style. It publishes original empirical reports, critical review papers, theoretical analyses, and methodological contributions. The Journal also features reviews of books, software, and decision-aiding techniques, abstracts of important articles published elsewhere, and teaching suggestions. The objective of the Journal is to present and stimulate behavioral research on decision making and to provide a forum for the evaluation of complementary, contrasting, and conflicting perspectives, including psychology, management science, sociology, political science, and economics. Studies of behavioral decision making in naturalistic and applied settings are encouraged.