David R. Mandel, Daniel Irwin, Mandeep K. Dhami, David V. Budescu
Title: Meta-informational cue inconsistency and judgment of information accuracy: Spotlight on intelligence analysis
DOI: 10.1002/bdm.2307
Journal: Journal of Behavioral Decision Making (JCR Q3, Psychology, Applied; Impact Factor 1.8)
Publication date: 2022-11-08 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/bdm.2307
Article page: https://onlinelibrary.wiley.com/doi/10.1002/bdm.2307
Citations: 0
Abstract
Meta-information is information about information that can be used as cues to guide judgments and decisions. Three types of meta-information that are routinely used in intelligence analysis are source reliability, information credibility, and classification level. The first two cues are intended to speak to information quality (in particular, the probability that the information is accurate), and classification level is intended to describe the information's security sensitivity. Two experiments involving professional intelligence analysts (N = 25 and 27, respectively) manipulated meta-information in a 6 (source reliability) × 6 (information credibility) × 2 (classification) repeated-measures design. Ten additional items were retested to measure intra-individual reliability. Analysts judged the probability of information accuracy based on its meta-informational profile. In both experiments, the judged probability of information accuracy was sensitive to ordinal position on the scales and the directionality of linguistic terms used to anchor the levels of the two scales. Directionality led analysts to group the first three levels of each scale in a positive group and the fourth and fifth levels in a negative group, with the neutral term “cannot be judged” falling between these groups. Critically, as reliability and credibility cue inconsistency increased, there was a corresponding decrease in intra-analyst reliability, interanalyst agreement, and effective cue utilization. Neither experiment found a significant effect of classification on probability judgments.
About the journal:
The Journal of Behavioral Decision Making is a multidisciplinary journal with a broad base of content and style. It publishes original empirical reports, critical review papers, theoretical analyses and methodological contributions. The Journal also features book, software and decision aiding technique reviews, abstracts of important articles published elsewhere and teaching suggestions. The objective of the Journal is to present and stimulate behavioral research on decision making and to provide a forum for the evaluation of complementary, contrasting and conflicting perspectives. These perspectives include psychology, management science, sociology, political science and economics. Studies of behavioral decision making in naturalistic and applied settings are encouraged.