An Approach to Identify Captioning Keywords in an Image using LIME
Siddharth Sahay, Nikita Omare, K. K. Shukla
2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), published 2021-02-19
DOI: 10.1109/ICCCIS51004.2021.9397159
Citations: 8
Abstract
Machine learning models are increasingly deployed to tackle real-world problems in domains such as healthcare, crime, and education, among many others. However, most of these models are practically "black boxes": although they may produce accurate results, they cannot provide any conclusive reasoning for those results. For these decisions to be trusted, they must be explainable. Explainable AI (XAI) refers to methods and techniques in the application of AI whose results are understandable by human experts. This paper focuses on the task of image captioning and employs XAI techniques such as LIME (Local Interpretable Model-Agnostic Explanations) to explain the predictions of complex image captioning models. It visually depicts the part of the image corresponding to a particular word in the caption, thereby justifying why the model predicted that word.
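To make the mechanism behind the abstract concrete, the following is a minimal, self-contained sketch of the LIME idea applied to a caption word: superpixels of an image are randomly switched on and off, a black-box model is queried on each perturbed image, and a locally weighted linear surrogate is fitted whose largest coefficients mark the image regions that explain the predicted word. Everything here is a toy assumption — the 6-segment image, the hypothetical `black_box` scoring function (which favors segments 1 and 4 for the word "dog"), and the kernel width are all illustrative, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SEGMENTS = 6          # hypothetical number of superpixels in the image
N_SAMPLES = 500         # number of perturbed images to generate

# Toy black-box captioner: probability of emitting the word "dog"
# depends mostly on segments 1 and 4 being visible (pure assumption).
def black_box(masks):
    return 0.1 + 0.5 * masks[:, 1] + 0.35 * masks[:, 4]

# 1. Perturb: random binary on/off masks over the superpixels.
masks = rng.integers(0, 2, size=(N_SAMPLES, N_SEGMENTS))

# 2. Query the black-box model on each perturbed image.
preds = black_box(masks)

# 3. Weight samples by proximity to the original (all-segments-on) image,
#    using an exponential kernel over the fraction of segments removed.
distances = (N_SEGMENTS - masks.sum(axis=1)) / N_SEGMENTS
weights = np.exp(-(distances ** 2) / 0.25)

# 4. Fit a weighted linear surrogate via least squares (with intercept).
sw = np.sqrt(weights)[:, None]
X = np.hstack([masks, np.ones((N_SAMPLES, 1))])
coef, *_ = np.linalg.lstsq(sw * X, sw[:, 0] * preds, rcond=None)

# 5. The largest coefficients identify the segments that "justify" the word.
top = np.argsort(coef[:N_SEGMENTS])[::-1][:2]
print(sorted(top.tolist()))  # → [1, 4]
```

Because the toy model is itself linear in the masks, the surrogate recovers its coefficients almost exactly; with a real captioner the surrogate is only a local approximation, and the highlighted segments are then rendered back onto the image as the visual explanation the abstract describes.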