{"title":"Communication of Uncertainty in AI Regulations","authors":"Aditya Sai Phutane","doi":"10.21061/cc.v4i2.a.50","DOIUrl":null,"url":null,"abstract":"Scholarship of uncertainty in artificial intelligence (AI) regulation has focused on theories, strategies, and practices to mitigate uncertainty. However, there is little understanding of how federal agencies communicate scientific uncertainties to all stakeholders including the public and regulated industries. This is important for three reasons: one, it highlights what aspects of the issue are quantifiable; two, it displays how agencies explain uncertainties about the issues that are not easily quantified; and three, it shows how knowledgeable agencies perceive the public audience in relation to the issue at hand and what they expect from such communication. By analyzing AI regulations across four categories of scientific uncertainties, this study found that uncertainty in areas of ownership, safety, and transparency are hard to quantify and hence agencies use personalized examples to explain uncertainties. In addition, agencies seek public input to gather additional data and derive consensus on issues that have moral implications. These findings are consistent with the literature on tackling uncertainty and regulatory decision-making. They can help advance our understanding of current practices of communicating science effectively to explain risks and uncertainties.","PeriodicalId":270428,"journal":{"name":"Community Change","volume":"293 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Community Change","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21061/cc.v4i2.a.50","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Scholarship on uncertainty in artificial intelligence (AI) regulation has focused on theories, strategies, and practices for mitigating uncertainty. However, little is understood about how federal agencies communicate scientific uncertainties to stakeholders, including the public and regulated industries. This matters for three reasons: first, it highlights which aspects of an issue are quantifiable; second, it shows how agencies explain uncertainties about issues that are not easily quantified; and third, it reveals how knowledgeable agencies perceive the public audience to be in relation to the issue at hand and what they expect from such communication. By analyzing AI regulations across four categories of scientific uncertainty, this study found that uncertainties in the areas of ownership, safety, and transparency are hard to quantify, and agencies therefore use personalized examples to explain them. In addition, agencies seek public input to gather additional data and to build consensus on issues with moral implications. These findings are consistent with the literature on tackling uncertainty and on regulatory decision-making, and they can advance our understanding of current practices for communicating science effectively to explain risks and uncertainties.