{"title":"评估o1推理大语言模型的认知偏差:一项小研究","authors":"Or Degany, Sahar Laros, Daphna Idan, Sharon Einav","doi":"10.1186/s13054-025-05591-5","DOIUrl":null,"url":null,"abstract":"Cognitive biases, systematic deviations from logical judgment, are well documented in clinical decision-making, particularly in clinical settings characterized by high decision load, limited time, and diagnostic uncertainty-such as critical care. Prior work demonstrated that large language models, particularly GPT-4, reproduce many of these biases, sometimes to a greater extent than human clinicians. We tested whether the o1 model (o1-2024–12-17), a newly released AI system with enhanced reasoning capabilities, is susceptible to cognitive biases that commonly affect medical decision-making. Following the methodology established by Wang and Redelmeier [15], we used ten pairs of clinical scenarios, each designed to test a specific cognitive bias known to influence clinicians. Each scenario had two versions, differed by subtle modifications designed to trigger the bias (such as presenting mortality rates versus survival rates). The o1 model generated 90 independent clinical recommendations for each scenario version, totalling 1,800 responses. We measured cognitive bias as systematic differences in recommendation rates between the paired scenarios, which should not occur with unbiased reasoning. The o1 model's performance was compared against previously published results from both the GPT-4 model and historical human clinician studies. The o1 model showed no measurable cognitive bias in seven of the ten vignettes. In two vignettes, the o1 model showed significant bias, but its absolute magnitude was lower than values previously reported for GPT-4 and human clinicians. In a single vignette, Occam’s razor, the o1 model exhibited consistent bias. Therefore, although overall bias appears less frequent overall with the reasoning model than with GPT-4, it was worse in one vignette. The model was more prone to bias in vignettes that included a gap-closing cue, seemingly resolving the clinical uncertainty. Across eight vignette versions, intra‑scenario agreement exceeded 94%, indicating lower decision variability than previously described with GPT‑4 and human clinicians. Reasoning models may reduce cognitive bias and random variation in judgment (i.e., “noise”). However, our findings caution that reasoning models are still not entirely immune to cognitive bias. These findings suggest that reasoning models may impart some benefits as decision-support tools in medicine, but they also imply a need to explore further the circumstances in which these tools may fail.","PeriodicalId":10811,"journal":{"name":"Critical Care","volume":"53 1","pages":""},"PeriodicalIF":9.3000,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evaluating the o1 reasoning large language model for cognitive bias: a vignette study\",\"authors\":\"Or Degany, Sahar Laros, Daphna Idan, Sharon Einav\",\"doi\":\"10.1186/s13054-025-05591-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cognitive biases, systematic deviations from logical judgment, are well documented in clinical decision-making, particularly in clinical settings characterized by high decision load, limited time, and diagnostic uncertainty-such as critical care. 
Prior work demonstrated that large language models, particularly GPT-4, reproduce many of these biases, sometimes to a greater extent than human clinicians. We tested whether the o1 model (o1-2024–12-17), a newly released AI system with enhanced reasoning capabilities, is susceptible to cognitive biases that commonly affect medical decision-making. Following the methodology established by Wang and Redelmeier [15], we used ten pairs of clinical scenarios, each designed to test a specific cognitive bias known to influence clinicians. Each scenario had two versions, differed by subtle modifications designed to trigger the bias (such as presenting mortality rates versus survival rates). The o1 model generated 90 independent clinical recommendations for each scenario version, totalling 1,800 responses. We measured cognitive bias as systematic differences in recommendation rates between the paired scenarios, which should not occur with unbiased reasoning. The o1 model's performance was compared against previously published results from both the GPT-4 model and historical human clinician studies. The o1 model showed no measurable cognitive bias in seven of the ten vignettes. In two vignettes, the o1 model showed significant bias, but its absolute magnitude was lower than values previously reported for GPT-4 and human clinicians. In a single vignette, Occam’s razor, the o1 model exhibited consistent bias. Therefore, although overall bias appears less frequent overall with the reasoning model than with GPT-4, it was worse in one vignette. The model was more prone to bias in vignettes that included a gap-closing cue, seemingly resolving the clinical uncertainty. Across eight vignette versions, intra‑scenario agreement exceeded 94%, indicating lower decision variability than previously described with GPT‑4 and human clinicians. Reasoning models may reduce cognitive bias and random variation in judgment (i.e., “noise”). However, our findings caution that reasoning models are still not entirely immune to cognitive bias. These findings suggest that reasoning models may impart some benefits as decision-support tools in medicine, but they also imply a need to explore further the circumstances in which these tools may fail.\",\"PeriodicalId\":10811,\"journal\":{\"name\":\"Critical Care\",\"volume\":\"53 1\",\"pages\":\"\"},\"PeriodicalIF\":9.3000,\"publicationDate\":\"2025-08-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Critical Care\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s13054-025-05591-5\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CRITICAL CARE MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Critical Care","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13054-025-05591-5","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CRITICAL CARE MEDICINE","Score":null,"Total":0}
Cognitive biases, systematic deviations from logical judgment, are well documented in clinical decision-making, particularly in settings characterized by high decision load, limited time, and diagnostic uncertainty, such as critical care. Prior work demonstrated that large language models, particularly GPT-4, reproduce many of these biases, sometimes to a greater extent than human clinicians. We tested whether the o1 model (o1-2024-12-17), a newly released AI system with enhanced reasoning capabilities, is susceptible to cognitive biases that commonly affect medical decision-making.

Following the methodology established by Wang and Redelmeier [15], we used ten pairs of clinical scenarios, each designed to test a specific cognitive bias known to influence clinicians. Each scenario had two versions that differed by subtle modifications designed to trigger the bias (such as presenting mortality rates versus survival rates). The o1 model generated 90 independent clinical recommendations for each scenario version, totalling 1,800 responses. We measured cognitive bias as systematic differences in recommendation rates between the paired scenario versions, differences that should not occur with unbiased reasoning. The o1 model's performance was compared against previously published results from both the GPT-4 model and historical studies of human clinicians.

The o1 model showed no measurable cognitive bias in seven of the ten vignettes. In two vignettes, the o1 model showed significant bias, but its absolute magnitude was lower than values previously reported for GPT-4 and human clinicians. In a single vignette, Occam’s razor, the o1 model exhibited consistent bias. Thus, although bias appeared less frequent overall with the reasoning model than with GPT-4, it was worse in one vignette. The model was more prone to bias in vignettes that included a gap-closing cue that seemingly resolved the clinical uncertainty. Across eight vignette versions, intra-scenario agreement exceeded 94%, indicating lower decision variability than previously described for GPT-4 and human clinicians.

Reasoning models may reduce cognitive bias and random variation in judgment (i.e., “noise”). However, our findings caution that reasoning models are still not entirely immune to cognitive bias. These findings suggest that reasoning models may offer some benefits as decision-support tools in medicine, but they also highlight the need to explore further the circumstances in which these tools may fail.
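To make the bias metric concrete: bias is quantified as a systematic difference in recommendation rates between the two versions of a paired vignette (90 responses per version). The sketch below shows one plausible way to test such a difference with a two-proportion z-test; the choice of test, the counts, and all names are illustrative assumptions, not the authors' actual analysis code.

```python
import math


def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Compare the recommendation rate between two versions of a paired vignette.

    x1, x2: responses recommending the target action in each version (hypothetical counts)
    n1, n2: total responses per version (90 per version in the study design)
    Returns the z statistic and a two-sided p-value (normal approximation).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0: equal rates
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail of the standard normal
    return z, p_value


# Hypothetical example: version A framed as survival, version B framed as mortality.
# The counts are invented for illustration; only the 90-per-version design is from the study.
z, p = two_proportion_z_test(x1=72, n1=90, x2=55, n2=90)
print(f"rate difference = {72 / 90 - 55 / 90:.2f}, z = {z:.2f}, p = {p:.4f}")
```

Under unbiased reasoning the two framings should yield similar recommendation rates, so a large rate difference with a small p-value would flag the kind of systematic deviation the study reports.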
About the journal:
Critical Care is an international medical journal with a rigorous peer-review process that maintains high quality standards. Its primary objective is to improve the healthcare provided to critically ill patients. To achieve this, the journal gathers, exchanges, disseminates, and endorses evidence-based information that is highly relevant to intensivists, providing a thorough and inclusive examination of the intensive care field.