{"title":"On the evaluation of the symbolic knowledge extracted from black boxes","authors":"Federico Sabbatini, Roberta Calegari","doi":"10.1007/s43681-023-00406-1","DOIUrl":null,"url":null,"abstract":"<div><p>As opaque decision systems are being increasingly adopted in almost any application field, issues about their lack of transparency and human readability are a concrete concern for end-users. Amongst existing proposals to associate human-interpretable knowledge with accurate predictions provided by opaque models, there are rule extraction techniques, capable of extracting symbolic knowledge out of opaque models. The quantitative assessment of the extracted knowledge’s quality is still an open issue. For this reason, we provide here a first approach to measure the knowledge quality, encompassing several indicators and providing a compact score reflecting readability, completeness and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities behind our proposal, related to the readability assessment and evaluation, to push future research efforts towards a more robust score formulation.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"65 - 74"},"PeriodicalIF":0.0000,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-023-00406-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
As opaque decision systems are being increasingly adopted in almost any application field, issues about their lack of transparency and human readability are a concrete concern for end-users. Amongst existing proposals to associate human-interpretable knowledge with accurate predictions provided by opaque models, there are rule extraction techniques, capable of extracting symbolic knowledge out of opaque models. The quantitative assessment of the extracted knowledge’s quality is still an open issue. For this reason, we provide here a first approach to measure the knowledge quality, encompassing several indicators and providing a compact score reflecting readability, completeness and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities behind our proposal, related to the readability assessment and evaluation, to push future research efforts towards a more robust score formulation.
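The abstract describes collapsing several indicators (readability, completeness, predictive performance) into a single compact quality score for extracted symbolic knowledge, without giving the formula here. The sketch below is a hypothetical illustration of that idea, not the authors' actual score: the indicator definitions, the normalization to [0, 1], and the weighted-average aggregation are all assumptions made for the example.

```python
# Hypothetical sketch (NOT the authors' formulation): one way to combine
# readability, completeness and predictive-performance indicators of an
# extracted rule set into a single compact quality score.
from dataclasses import dataclass


@dataclass
class KnowledgeIndicators:
    readability: float   # assumed already scaled to [0, 1], e.g. from rule count/length
    completeness: float  # assumed fraction of the test set covered by the rules, in [0, 1]
    performance: float   # assumed predictive/fidelity score of the rules, in [0, 1]


def quality_score(ind: KnowledgeIndicators,
                  weights: tuple = (1.0, 1.0, 1.0)) -> float:
    """Weighted average of the three indicators; the equal weights are illustrative."""
    w_r, w_c, w_p = weights
    total = w_r + w_c + w_p
    return (w_r * ind.readability
            + w_c * ind.completeness
            + w_p * ind.performance) / total


# Example usage with made-up indicator values:
score = quality_score(KnowledgeIndicators(readability=0.6,
                                           completeness=0.9,
                                           performance=0.85))
print(f"compact quality score: {score:.2f}")
```

In such a scheme the weights would encode how much readability is traded off against completeness and predictive performance; the paper's discussion of the criticalities of readability assessment suggests that choosing and justifying such a trade-off is precisely where a more robust score formulation is needed.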