Title: Toward a responsible and ethical authorization to operate: A case study in AI consulting
Authors: Jason M. Pittman, Geoff Schaefer
DOI: 10.1016/j.jrt.2025.100130
Journal: Journal of responsible technology, Volume 23, Article 100130
Publication date: 2025-07-24
URL: https://www.sciencedirect.com/science/article/pii/S2666659625000265
Citations: 0
Abstract
The US federal government mandates that all technologies receive an Authorization to Operate (ATO). The ATO attests to a technology's security compliance, and the process underscores a fundamental belief: technologies must conform to established security norms. Yet this security-centric view does not encompass ethical and responsible AI. Unlike security parameters, ethical and responsible AI lacks a standardized evaluation framework, leaving a critical gap in AI governance. This paper presents our consulting experiences in addressing this gap and introduces a pioneering ATO assessment instrument that integrates ethical and responsible AI principles into assessment decision-making. We delve into the instrument's design, shedding light on its unique attributes and features. Furthermore, we discuss emergent best practices related to this ATO instrument, including potential decision pitfalls of interest to practitioners and policymakers alike. Looking ahead, we envision an evolved version of this ethical and responsible ATO that incorporates continuous monitoring capabilities and novel ethical measures. Finally, we offer insights to help the AI community evaluate its AI decision-making.
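The abstract does not describe the instrument's internals, but the general idea of folding ethical and responsible-AI criteria into an ATO-style go/no-go decision can be sketched. The criteria names, weights, and threshold below are illustrative assumptions for exposition only, not the authors' actual instrument:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One assessment dimension in a hypothetical ATO instrument."""
    name: str
    weight: float  # relative importance of this criterion
    score: float   # assessor's rating in [0, 1]

def ato_recommendation(criteria, threshold=0.7):
    """Aggregate criterion scores as a weighted average and recommend
    authorization only if the aggregate meets an (illustrative) threshold."""
    total_weight = sum(c.weight for c in criteria)
    aggregate = sum(c.weight * c.score for c in criteria) / total_weight
    return aggregate, aggregate >= threshold

# Hypothetical mix of traditional security and responsible-AI criteria.
criteria = [
    Criterion("security_compliance", 0.4, 0.9),
    Criterion("fairness_assessment", 0.2, 0.6),
    Criterion("transparency_documentation", 0.2, 0.8),
    Criterion("continuous_monitoring_plan", 0.2, 0.5),
]
score, authorized = ato_recommendation(criteria)
```

A real instrument would likely use richer evidence than scalar scores, but the sketch shows where ethical criteria could enter the same decision calculus as security compliance.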