{"title":"Achieving trust in Artificial General Intelligence : Secrets, precautions, and public scrutiny","authors":"G. Adamson","doi":"10.1109/ISTAS.2018.8638284","DOIUrl":null,"url":null,"abstract":"Research into the field generally referred to as Artificial Intelligence (AI) has been undertaken for at least 70 years. It now appears that the sheer weight of research effort will lead to a breakthrough in the achievement of Artificial General Intelligence (AGI) in the near or medium future. A challenge in addressing uncertainty surrounding such development is the assertion of commercial secrecy. While AGI has potentially significant implications for society, its development is generally a closely guarded secret. This paper proposes an approach based on concepts of ‘controls’ from the operational risk literature. It proposes an approach to monitoring AGI research that does not require the company to reveal its research secrets, by inviting public scrutiny of the precautions in place regarding the research. It argues that such scrutiny of precautions addresses the problem that companies undertaking research have limited knowledge of the technologies they are developing. This is argued by analogy with an early major technology development, the steam engine, where commercialization preceded scientific understanding by more than half a century. Reliance on precautions in the development of AGI has a further benefit. 
Where companies’ precautions fail, they would be expected to explain what went wrong and what new or additional precautions would be adopted in the future, making this a self-improving process.","PeriodicalId":122477,"journal":{"name":"2018 IEEE International Symposium on Technology and Society (ISTAS)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Symposium on Technology and Society (ISTAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISTAS.2018.8638284","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Research into the field generally referred to as Artificial Intelligence (AI) has been underway for at least 70 years. It now appears that the sheer weight of research effort will lead to a breakthrough in Artificial General Intelligence (AGI) in the near to medium term. A challenge in addressing the uncertainty surrounding such development is the assertion of commercial secrecy: while AGI has potentially significant implications for society, its development is generally a closely guarded secret. This paper proposes an approach based on the concept of ‘controls’ from the operational risk literature. It proposes a way of monitoring AGI research that does not require a company to reveal its research secrets: inviting public scrutiny of the precautions in place around that research. It argues that such scrutiny of precautions addresses the problem that companies undertaking the research have limited knowledge of the technologies they are developing. This is argued by analogy with an early major technology development, the steam engine, whose commercialization preceded scientific understanding by more than half a century. Reliance on precautions in the development of AGI has a further benefit: where a company’s precautions fail, it would be expected to explain what went wrong and what new or additional precautions will be adopted, making this a self-improving process.