An empirical investigation of judgment feedback and computerized decision support in a prediction task

Vairam Arunachalam, Bonita A. Daly
{"title":"An empirical investigation of judgment feedback and computerized decision support in a prediction task","authors":"Vairam Arunachalam,&nbsp;Bonita A. Daly","doi":"10.1016/0959-8022(96)00020-3","DOIUrl":null,"url":null,"abstract":"<div><p>This study examines the effects on judgment accuracy of cognitive and outcome feedback provided using a computerized decision support tool. Five feedback conditions were examined in a two-stage experiment utilizing 294 participants: an outcome feedback condition, two cognitive feedback conditions (judgment policy feedback and model predictions feedback), and two joint feedback conditions (judgment policy plus outcome feedback, and model predictions plus outcome feedback). In the first stage, decision makers specified the judgment policies (i.e. cue weights and function forms) that they believed they would use in making their earnings predictions. They were then asked to forecast earnings per share for several companies based on average earnings for the last three years, current year gross margin percentage, quick ratio and eamings yield. Using appropriately modified end-user software, feedback was then provided to all participants, except those receiving outcome feedback only. Judgment policy feedback consisted of informing decision makers of the cue weights and function forms underlying their actual predictions, while model predictions feedback consisted of earnings predictions generated from the decision makers' stated judgment policies. In the second stage, decision makers revised or retained their original judgment policies and then made another set of earnings predictions. Outcome feedback, consisting of information about the actual earnings attained by the companies, was then provided to participants in the outcome feedback and joint feedback conditions. This process was then repeated for a new set of companies to determine how the various forms of feedback influenced judgment accuracy. Results indicated that providing decision makers with either type of cognitive feedback, relative to providing outcome feedback, contributed to improvements in judgment accuracy. There were no significant differences between the judgment accuracy of the cognitive feedback conditions and of the respective joint feedback conditions, indicating that adding outcome feedback did not enhance judgment accuracy. Results also suggested that model predictions feedback may be more effective than judgment policy feedback, which in turn is superior to outcome feedback. All cognitive feedback conditions, relative to outcome feedback only, also demonstrated convergence between stated model predictions and actual predictions. 
These results are discussed in terms of implications for the design of decision support systems for individual judgment tasks.</p></div>","PeriodicalId":100011,"journal":{"name":"Accounting, Management and Information Technologies","volume":"6 3","pages":"Pages 139-156"},"PeriodicalIF":0.0000,"publicationDate":"1996-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/0959-8022(96)00020-3","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounting, Management and Information Technologies","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/0959802296000203","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

This study examines the effects on judgment accuracy of cognitive and outcome feedback provided using a computerized decision support tool. Five feedback conditions were examined in a two-stage experiment utilizing 294 participants: an outcome feedback condition, two cognitive feedback conditions (judgment policy feedback and model predictions feedback), and two joint feedback conditions (judgment policy plus outcome feedback, and model predictions plus outcome feedback). In the first stage, decision makers specified the judgment policies (i.e., cue weights and function forms) that they believed they would use in making their earnings predictions. They were then asked to forecast earnings per share for several companies based on four cues: average earnings for the last three years, current-year gross margin percentage, quick ratio, and earnings yield. Using appropriately modified end-user software, feedback was then provided to all participants except those receiving outcome feedback only. Judgment policy feedback consisted of informing decision makers of the cue weights and function forms underlying their actual predictions, while model predictions feedback consisted of earnings predictions generated from the decision makers' stated judgment policies. In the second stage, decision makers revised or retained their original judgment policies and then made another set of earnings predictions. Outcome feedback, consisting of information about the actual earnings attained by the companies, was then provided to participants in the outcome feedback and joint feedback conditions. This process was then repeated for a new set of companies to determine how the various forms of feedback influenced judgment accuracy. Results indicated that providing decision makers with either type of cognitive feedback, relative to providing outcome feedback, contributed to improvements in judgment accuracy. There were no significant differences in judgment accuracy between the cognitive feedback conditions and the respective joint feedback conditions, indicating that adding outcome feedback did not enhance judgment accuracy. Results also suggested that model predictions feedback may be more effective than judgment policy feedback, which in turn is superior to outcome feedback. All cognitive feedback conditions, relative to outcome feedback only, also demonstrated convergence between stated model predictions and actual predictions. These results are discussed in terms of implications for the design of decision support systems for individual judgment tasks.
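To make the feedback mechanism concrete: the abstract does not give the paper's computational details, but a minimal sketch, assuming a simple linear judgment policy over the four stated cues, illustrates how model predictions feedback could be derived from a participant's stated cue weights and how judgment accuracy might be scored. All names, weights, and the accuracy metric below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's software): generate "model predictions
# feedback" from a stated judgment policy and score judgment accuracy.
from dataclasses import dataclass
from typing import Dict, List

# The four cues named in the abstract (identifiers are hypothetical).
CUES = ["avg_earnings_3yr", "gross_margin_pct", "quick_ratio", "earnings_yield"]


@dataclass
class JudgmentPolicy:
    """A decision maker's stated policy: one weight per cue, linear function form assumed."""
    weights: Dict[str, float]

    def predict_eps(self, company: Dict[str, float]) -> float:
        """Earnings-per-share prediction implied by the stated policy (weighted sum of cues)."""
        return sum(self.weights[cue] * company[cue] for cue in CUES)


def model_predictions_feedback(policy: JudgmentPolicy,
                               companies: List[Dict[str, float]]) -> List[float]:
    """Feedback shown to the participant: the EPS forecasts their stated policy would produce."""
    return [policy.predict_eps(company) for company in companies]


def judgment_accuracy(predictions: List[float], actual_eps: List[float]) -> float:
    """Illustrative accuracy score: mean absolute error against realized EPS (lower is better)."""
    return sum(abs(p - a) for p, a in zip(predictions, actual_eps)) / len(actual_eps)


# Example usage with made-up numbers.
policy = JudgmentPolicy(weights={"avg_earnings_3yr": 0.6, "gross_margin_pct": 0.02,
                                 "quick_ratio": 0.10, "earnings_yield": 0.30})
companies = [{"avg_earnings_3yr": 2.1, "gross_margin_pct": 35.0,
              "quick_ratio": 1.2, "earnings_yield": 0.08}]
print(model_predictions_feedback(policy, companies))
```

Under these assumptions, judgment policy feedback would correspond to reporting the fitted weights back to the participant, while outcome feedback would correspond to revealing the actual EPS values used by `judgment_accuracy`.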
