{"title":"ExplaineR: 用于解释机器学习模型的 R 软件包。","authors":"Ramtin Zargari Marandi","doi":"10.1093/bioadv/vbae049","DOIUrl":null,"url":null,"abstract":"<p><strong>Summary: </strong>SHapley Additive exPlanations (SHAP) is a widely used method for model interpretation. However, its full potential often remains untapped due to the absence of dedicated software tools. In response, <i>ExplaineR</i>, an R package to facilitate interpretation of binary classification and regression models based on clustering functionality for SHAP analysis is introduced here. It additionally offers user-interactive elements in visualizations for evaluating model performance, fairness analysis, decision-curve analysis, and a diverse range of SHAP plots. It facilitates in-depth post-prediction analysis of models, enabling users to pinpoint potentially significant patterns in SHAP plots and subsequently trace them back to instances through SHAP clustering. This functionality is particularly valuable for identifying patient subgroups in clinical cohorts, thus enhancing its role as a robust profiling tool. <i>ExplaineR</i> empowers users to generate comprehensive reports on machine learning outcomes, ensuring consistent and thorough documentation of model performance and interpretations.</p><p><strong>Availability and implementation: </strong><i>ExplaineR</i> 1.0.0 is available on GitHub (https://persimune.github.io/explainer/) and CRAN (https://cran.r-project.org/web/packages/explainer/index.html).</p>","PeriodicalId":72368,"journal":{"name":"Bioinformatics advances","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10994716/pdf/","citationCount":"0","resultStr":"{\"title\":\"ExplaineR: an R package to explain machine learning models.\",\"authors\":\"Ramtin Zargari Marandi\",\"doi\":\"10.1093/bioadv/vbae049\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Summary: </strong>SHapley Additive exPlanations (SHAP) is a widely used method for model interpretation. However, its full potential often remains untapped due to the absence of dedicated software tools. In response, <i>ExplaineR</i>, an R package to facilitate interpretation of binary classification and regression models based on clustering functionality for SHAP analysis is introduced here. It additionally offers user-interactive elements in visualizations for evaluating model performance, fairness analysis, decision-curve analysis, and a diverse range of SHAP plots. It facilitates in-depth post-prediction analysis of models, enabling users to pinpoint potentially significant patterns in SHAP plots and subsequently trace them back to instances through SHAP clustering. This functionality is particularly valuable for identifying patient subgroups in clinical cohorts, thus enhancing its role as a robust profiling tool. 
<i>ExplaineR</i> empowers users to generate comprehensive reports on machine learning outcomes, ensuring consistent and thorough documentation of model performance and interpretations.</p><p><strong>Availability and implementation: </strong><i>ExplaineR</i> 1.0.0 is available on GitHub (https://persimune.github.io/explainer/) and CRAN (https://cran.r-project.org/web/packages/explainer/index.html).</p>\",\"PeriodicalId\":72368,\"journal\":{\"name\":\"Bioinformatics advances\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-03-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10994716/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bioinformatics advances\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1093/bioadv/vbae049\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"MATHEMATICAL & COMPUTATIONAL BIOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bioinformatics advances","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1093/bioadv/vbae049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MATHEMATICAL & COMPUTATIONAL BIOLOGY","Score":null,"Total":0}
ExplaineR: an R package to explain machine learning models.
Summary: SHapley Additive exPlanations (SHAP) is a widely used method for model interpretation, but its full potential often remains untapped because dedicated software tools are lacking. In response, ExplaineR, an R package that facilitates interpretation of binary classification and regression models through clustering functionality for SHAP analysis, is introduced here. It additionally offers interactive elements in its visualizations for evaluating model performance, fairness analysis, decision-curve analysis, and a diverse range of SHAP plots. It supports in-depth post-prediction analysis of models, enabling users to pinpoint potentially significant patterns in SHAP plots and then trace them back to individual instances through SHAP clustering. This functionality is particularly valuable for identifying patient subgroups in clinical cohorts, making ExplaineR a robust profiling tool. ExplaineR also lets users generate comprehensive reports on machine learning outcomes, ensuring consistent and thorough documentation of model performance and interpretations.
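A minimal sketch of the workflow described above, assuming a binary classification model built with mlr3 (which the package is designed to work with). The explainer function names eSHAP_plot() and SHAPclust() and their arguments are taken from the package documentation as best recalled and may differ from the released API; treat the call signatures as illustrative rather than definitive.

```r
# Illustrative sketch only: argument names for eSHAP_plot()/SHAPclust() are
# assumed and may not match the released explainer API exactly.
library(explainer)      # SHAP visualization and clustering (this paper)
library(mlr3)           # task/learner framework the package builds on
library(mlr3learners)   # provides classif.ranger

data("BreastCancer", package = "mlbench")
df <- na.omit(BreastCancer[, -1])                       # drop Id column, remove rows with missing values
df[, 1:9] <- lapply(df[, 1:9],
                    function(x) as.numeric(as.character(x)))  # predictors numeric; Class stays a factor

set.seed(1)
task    <- TaskClassif$new(id = "bc", backend = df, target = "Class")
learner <- lrn("classif.ranger", predict_type = "prob")
splits  <- partition(task, ratio = 0.8)                 # train/test split
learner$train(task, splits$train)

# SHAP summary plot for the test set (assumed signature)
shap_out <- eSHAP_plot(task = task, trained_model = learner,
                       splits = splits, sample.size = 30, seed = 1)
shap_out$shap_plot

# Cluster instances by their SHAP profiles to surface subgroups (assumed signature)
clusters <- SHAPclust(task = task, trained_model = learner, splits = splits,
                      shap_Mean_wide = shap_out$shap_Mean_wide,
                      shap_Mean_long = shap_out$shap_Mean_long,
                      num_of_clusters = 3)
```

Clustering on SHAP values rather than raw features is what lets a pattern spotted in the summary plot be traced back to a concrete group of instances, e.g. a candidate patient subgroup.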
Availability and implementation: ExplaineR 1.0.0 is available on GitHub (https://persimune.github.io/explainer/) and CRAN (https://cran.r-project.org/web/packages/explainer/index.html).
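For completeness, an installation snippet: the install.packages() call targets the CRAN release cited above, while the GitHub repository slug PERSIMUNE/explainer is inferred from the project page URL and should be verified against the repository.

```r
# From CRAN (release cited in the availability statement)
install.packages("explainer")

# Or the development version from GitHub; the repository slug is inferred
# from https://persimune.github.io/explainer/ and may need adjusting.
# install.packages("remotes")
remotes::install_github("PERSIMUNE/explainer")

library(explainer)
```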