{"title":"在人工智能中问“为什么”:智能系统的可解释性——观点和挑战","authors":"A. Preece","doi":"10.1002/ISAF.1422","DOIUrl":null,"url":null,"abstract":"Recent rapid progress in machine learning (ML), particularly so†called ‘deep learning’, has led to a resurgence in interest in explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML†based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.","PeriodicalId":153549,"journal":{"name":"Intell. Syst. Account. Finance Manag.","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"98","resultStr":"{\"title\":\"Asking 'Why' in AI: Explainability of intelligent systems - perspectives and challenges\",\"authors\":\"A. Preece\",\"doi\":\"10.1002/ISAF.1422\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent rapid progress in machine learning (ML), particularly so†called ‘deep learning’, has led to a resurgence in interest in explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML†based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.\",\"PeriodicalId\":153549,\"journal\":{\"name\":\"Intell. Syst. Account. Finance Manag.\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"98\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Intell. Syst. Account. Finance Manag.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/ISAF.1422\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intell. Syst. Account. Finance Manag.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/ISAF.1422","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Asking 'Why' in AI: Explainability of intelligent systems - perspectives and challenges
Recent rapid progress in machine learning (ML), particularly so-called 'deep learning', has led to a resurgence of interest in the explainability of artificial intelligence (AI) systems, reviving an area of research dating back to the 1970s. The aim of this article is to view current issues concerning ML-based AI systems from the perspective of classical AI, showing that the fundamental problems are far from new, and arguing that elements of that earlier work offer routes to making progress towards explainable AI today.