Frontier AI developers need an internal audit function
Jonas Schuett
Risk Analysis (Journal Article, JCR Q1, Mathematics, Interdisciplinary Applications)
Published: 2024-10-21
DOI: 10.1111/risa.17665
Citations: 0
Abstract
This article argues that frontier artificial intelligence (AI) developers need an internal audit function. First, it describes the role of internal audit in corporate governance: internal audit evaluates the adequacy and effectiveness of a company's risk management, control, and governance processes. It is organizationally independent from senior management and reports directly to the board of directors, typically its audit committee. In the Institute of Internal Auditors' Three Lines Model, internal audit serves as the third line and is responsible for providing assurance to the board, whereas the combined assurance framework highlights the need to coordinate the activities of internal and external assurance providers. Next, the article provides an overview of key governance challenges in frontier AI development: Dangerous capabilities can arise unpredictably and undetected; it is difficult to prevent a deployed model from causing harm; frontier models can proliferate rapidly; it is inherently difficult to assess frontier AI risks; and frontier AI developers do not seem to follow best practices in risk governance. Finally, the article discusses how an internal audit function could address some of these challenges: Internal audit could identify ineffective risk management practices; it could ensure that the board of directors has a more accurate understanding of the current level of risk and the adequacy of the developer's risk management practices; and it could serve as a contact point for whistleblowers. But frontier AI developers should also be aware of key limitations: Internal audit adds friction; it can be captured by senior management; and the benefits depend on the ability of individuals to identify ineffective practices. In light of rapid progress in AI research and development, frontier AI developers need to strengthen their risk governance. Instead of reinventing the wheel, they should follow existing best practices. Although this might not be sufficient, they should not skip this obvious first step.
About the journal:
Published on behalf of the Society for Risk Analysis, Risk Analysis is ranked among the top 10 journals in the ISI Journal Citation Reports under the social sciences, mathematical methods category, and provides a focal point for new developments in the field of risk analysis. This international peer-reviewed journal is committed to publishing critical empirical research and commentaries dealing with risk issues. The topics covered include:
• Human health and safety risks
• Microbial risks
• Engineering
• Mathematical modeling
• Risk characterization
• Risk communication
• Risk management and decision-making
• Risk perception, acceptability, and ethics
• Laws and regulatory policy
• Ecological risks