{"title":"低水平狭义人工智能的风险","authors":"J. Page, Michael Bain, F. Mukhlish","doi":"10.1109/IISR.2018.8535903","DOIUrl":null,"url":null,"abstract":"There is a great deal of concern expressed by many in the artificial intelligence (AI) community about the existential risk of this rapidly developing technology. This paper provides a discussion on some issues that need to be addressed to handle potential future risks and provides some new perspectives. The development of artificial intelligence is moving from relatively limited standalone to large-scale, complex distributed systems. However, potential risks such as malfunction, malicious attacks and mismatch of objective can occur from hardware and software failures or design errors. Moreover, a system controlled by high level AI can become unpredictable in its behaviours and thus ethical risks can emerge when such systems have to make a decision related to operational issues. Given that all new, disruptive, technologies have risks associated with them, what we need to do, as practitioners and users, is to find ways of mitigating those risks. We discuss applications of agent-based simulation to illustrate some of the risks and, potentially, how to mitigate them.","PeriodicalId":201828,"journal":{"name":"2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"The Risks of Low Level Narrow Artificial Intelligence\",\"authors\":\"J. Page, Michael Bain, F. Mukhlish\",\"doi\":\"10.1109/IISR.2018.8535903\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is a great deal of concern expressed by many in the artificial intelligence (AI) community about the existential risk of this rapidly developing technology. This paper provides a discussion on some issues that need to be addressed to handle potential future risks and provides some new perspectives. The development of artificial intelligence is moving from relatively limited standalone to large-scale, complex distributed systems. However, potential risks such as malfunction, malicious attacks and mismatch of objective can occur from hardware and software failures or design errors. Moreover, a system controlled by high level AI can become unpredictable in its behaviours and thus ethical risks can emerge when such systems have to make a decision related to operational issues. Given that all new, disruptive, technologies have risks associated with them, what we need to do, as practitioners and users, is to find ways of mitigating those risks. 
We discuss applications of agent-based simulation to illustrate some of the risks and, potentially, how to mitigate them.\",\"PeriodicalId\":201828,\"journal\":{\"name\":\"2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IISR.2018.8535903\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE International Conference on Intelligence and Safety for Robotics (ISR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISR.2018.8535903","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The Risks of Low Level Narrow Artificial Intelligence
Many in the artificial intelligence (AI) community have expressed concern about the existential risk posed by this rapidly developing technology. This paper discusses some of the issues that must be addressed to handle potential future risks and offers some new perspectives. The development of artificial intelligence is moving from relatively limited standalone systems to large-scale, complex distributed systems. With this shift, potential risks such as malfunction, malicious attacks and objective mismatch can arise from hardware and software failures or from design errors. Moreover, a system controlled by high-level AI can become unpredictable in its behaviour, so ethical risks can emerge when such a system has to make decisions about operational issues. Given that all new, disruptive technologies carry risks, what we need to do, as practitioners and users, is find ways of mitigating those risks. We discuss applications of agent-based simulation to illustrate some of the risks and, potentially, how to mitigate them.
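The abstract does not include code; the sketch below is a hypothetical toy agent-based simulation, illustrating the objective-mismatch risk it mentions. Agents optimise a proxy objective (reaching a goal quickly) while the collisions the designer actually cares about are invisible to that objective. All names, parameters and dynamics here are assumptions for illustration, not the authors' model.

```python
import random

# Hypothetical toy model (not from the paper): each agent moves along a 1-D
# corridor toward a goal. Its coded objective rewards only speed; collisions
# with other agents are an un-modelled safety cost.

CORRIDOR_LENGTH = 20   # cells each agent must traverse to reach its goal
NUM_AGENTS = 10
STEPS = 100

def simulate(aggressiveness):
    """Run one episode and return (mean finish step, collision count).

    `aggressiveness` is the probability that an agent moves into an occupied
    cell anyway, i.e. how hard it optimises the speed-only objective while
    ignoring the safety cost the objective does not capture.
    """
    positions = [0] * NUM_AGENTS
    finish_step = [None] * NUM_AGENTS
    collisions = 0
    for step in range(1, STEPS + 1):
        for i in range(NUM_AGENTS):
            if finish_step[i] is not None:
                continue  # this agent has already reached its goal
            next_cell = positions[i] + 1
            occupied = any(positions[j] == next_cell
                           for j in range(NUM_AGENTS)
                           if j != i and finish_step[j] is None)
            if not occupied or random.random() < aggressiveness:
                if occupied:
                    collisions += 1  # safety cost invisible to the objective
                positions[i] = next_cell
                if positions[i] >= CORRIDOR_LENGTH:
                    finish_step[i] = step
    finished = [s for s in finish_step if s is not None]
    mean_finish = sum(finished) / len(finished) if finished else float("inf")
    return mean_finish, collisions

if __name__ == "__main__":
    random.seed(0)
    for aggressiveness in (0.0, 0.5, 1.0):
        mean_finish, collisions = simulate(aggressiveness)
        print(f"aggressiveness={aggressiveness:.1f}  "
              f"mean finish step={mean_finish:.1f}  collisions={collisions}")
```

In this toy setup, more aggressive optimisation of the coded objective lowers the mean finish step only slightly while the un-modelled collision count grows sharply: the pattern of a narrow system doing well on its stated objective while violating the designer's actual intent.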