Uwe Peters, Andrea Bertazzoli, Jasmine M DeJesus, Gisela J van der Velden, Benjamin Chin-Yee
Title: Generics in science communication: Misaligned interpretations across laypeople, scientists, and large language models
Journal: Public Understanding of Science (JCR Q1, Communication; Impact Factor 3.3)
DOI: 10.1177/09636625261425891
Published: 2026-04-20
Citations: 0
Abstract
Scientists often use generics, that is, unquantified statements about whole categories of people or phenomena, when communicating research findings (e.g. "statins reduce cardiovascular events"). Large language models, such as ChatGPT, frequently adopt the same style when summarizing scientific texts. However, generics can prompt overgeneralizations, especially when they are interpreted differently across audiences. In a study comparing laypeople, scientists, and two leading large language models (ChatGPT-5 and DeepSeek), we found systematic differences in interpretation of generics. Compared with most scientists, laypeople judged scientific generics as more generalizable and credible, while large language models rated them even higher. These mismatches highlight significant risks for science communication. Scientists may use generics and incorrectly assume laypeople share their interpretation, while large language models may systematically overgeneralize scientific findings when summarizing research. Our findings underscore the need for greater attention to language choices in both human and large language model-mediated science communication.
About the journal:
Public Understanding of Science is a fully peer-reviewed international journal covering all aspects of the inter-relationships between science (including technology and medicine) and the public. Topics covered include:
· surveys of public understanding of and attitudes towards science and technology
· perceptions of science
· popular representations of science
· scientific and para-scientific belief systems
· science in schools