Bayesian Rule Ontologies for XAI Classification and Regression
A. K. Panda, B. Kosko
2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA), December 2022
DOI: 10.1109/ICMLA55696.2022.00031
Citations: 0
Abstract
A random foam defines a modular rule-based ontology after sampling from a neural or other input-output system. A random foam combines several rule-based systems and averages the systems. It gives a Bayesian posterior over the subsystems. It also gives separate Bayesian posteriors over the rules of each subsystem. The shape of the rules controls how well the random-foam ontology performs in classification and regression. We found that a heterogeneous ontology that mixes different rule shapes can perform better than a homogeneous ontology based on a single Gaussian or other rule type. Random foams are also universal function approximators. So they can train on a neural black box and act as its explainable proxy system. We prove this uniform approximation theorem for the important case of bump-function random foams with throughput combination. Random foams also measure their output's uncertainty through the conditional variance. Bump-function rules performed better than Cauchy rules at classification while Cauchy rules performed better at regression. Gaussian rules performed best in both classification and regression. A homogeneous Gaussian random foam that trained on a 96.7%-accurate neural classifier was itself 95.96% accurate on the MNIST data set. A heterogeneous random foam with two-thirds Gaussian rules and one-third Laplacian rules did better than the all-Gaussian foam ontology.
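The core mechanism the abstract describes — averaging rule-based subsystems, treating normalized rule firings as a Bayesian posterior over the rules, and reporting uncertainty through a conditional variance — can be illustrated with a minimal sketch. This is not the authors' code: the single Gaussian-rule system below, its toy target function, and all parameter values are illustrative assumptions, and the full random-foam construction (sampling rules from a trained neural black box and averaging several such subsystems) is omitted.

```python
import numpy as np

def gaussian_rule_system(x, centers, widths, centroids, spreads):
    """One additive rule-based system with Gaussian if-part rules.

    Returns the output F(x) as the convex combination of then-part
    centroids, the posterior p_j(x) over the rules, and the conditional
    variance that mixes then-part spread with centroid scatter.
    """
    a = np.exp(-0.5 * ((x - centers) / widths) ** 2)    # rule firings a_j(x)
    p = a / a.sum()                                     # posterior over rules
    y = np.dot(p, centroids)                            # F(x): weighted average
    var = np.dot(p, spreads ** 2 + (centroids - y) ** 2)  # conditional variance
    return y, p, var

# Toy example: 5 Gaussian rules tiling [0, 1], with then-part centroids
# sampled from an assumed target f(x) = sin(2*pi*x) standing in for a
# neural black box's input-output behavior.
centers = np.linspace(0.0, 1.0, 5)
widths = np.full(5, 0.15)
centroids = np.sin(2 * np.pi * centers)
spreads = np.full(5, 0.05)      # assumed then-part spreads

y, p, var = gaussian_rule_system(0.3, centers, widths, centroids, spreads)
```

The posterior `p` makes the proxy explainable: it states how much each rule contributed to this particular output, and the conditional variance flags inputs where the rules disagree.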