Personalized uncertainty quantification in artificial intelligence

Tapabrata Chakraborti, Christopher R. S. Banerji, Ariane Marandon, Vicky Hellon, Robin Mitra, Brieuc Lehmann, Leandra Bräuninger, Sarah McGough, Cagatay Turkay, Alejandro F. Frangi, Ginestra Bianconi, Weizi Li, Owen Rackham, Deepak Parashar, Chris Harbron, Ben MacArthur

Nature Machine Intelligence | Published: 23 April 2025 | DOI: 10.1038/s42256-025-01024-8
Abstract
Artificial intelligence (AI) tools are increasingly being used to help make consequential decisions about individuals. While AI models may be accurate on average, they can simultaneously be highly uncertain about outcomes associated with specific individuals or groups of individuals. For high-stakes applications (such as healthcare and medicine, defence and security, banking and finance), AI decision-support systems must be able to make personalized assessments of uncertainty in a rigorous manner. However, the statistical frameworks needed to do so are currently incomplete. Here, we outline current approaches to personalized uncertainty quantification (PUQ) and define a set of grand challenges associated with the development and use of PUQ in a range of areas, including multimodal AI, explainable AI, generative AI and AI fairness.
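The article itself sits behind the DOI above; as a hedged illustration of one widely used route to per-individual uncertainty, the sketch below applies split conformal prediction to a toy regression problem. It is not taken from the paper: the data, model and miscoverage level are illustrative assumptions, and conformal prediction is only one of several approaches to personalized prediction intervals.

```python
# Minimal sketch of split conformal prediction for per-individual
# (personalized) prediction intervals. Illustrative only: the data,
# model and alpha below are assumptions, not from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Toy data: outcome depends linearly on one feature, plus noise.
X = rng.normal(size=(500, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=500)

# Split into a proper training set and a held-out calibration set.
X_train, y_train = X[:300], y[:300]
X_calib, y_calib = X[300:], y[300:]

model = LinearRegression().fit(X_train, y_train)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_calib - model.predict(X_calib))

# Conformal quantile for miscoverage alpha = 0.1 (90% target coverage).
alpha = 0.1
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# Personalized interval for a new individual x_new.
x_new = np.array([[1.2]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

By construction this interval carries a marginal (on average over individuals) coverage guarantee; making such guarantees hold conditionally for specific individuals or subgroups is one of the difficulties the abstract alludes to.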
About the journal
Nature Machine Intelligence publishes original research and reviews across machine learning, robotics and AI. Its scope extends beyond these fields to their impact on other scientific disciplines, as well as their societal and industrial implications. Machine intelligence can augment human capabilities and knowledge in domains such as scientific discovery, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation and agriculture, while the rapid pace of these advances also raises ethical, social and legal concerns.
To foster interdisciplinary discussion of these far-reaching implications, the journal provides a platform for dialogue through Comments, News Features, News & Views articles and Correspondence.
Like all Nature-branded journals, Nature Machine Intelligence is guided by a team of professional editors and is committed to fair and rigorous peer review, high standards of copy-editing and production, swift publication, and editorial independence.