Advances in Machine Learning and Explainable Artificial Intelligence for Depression Prediction
H. Byeon
International Journal of Advanced Computer Science and Applications, 2023
DOI: 10.14569/ijacsa.2023.0140656
Citations: 0
Abstract
There is growing interest in applying AI in the field of mental health, particularly to complement the limitations of human analysis, judgment, and accessibility in mental health assessment and treatment. Current mental health services face a gap: individuals who need help are not receiving it because of negative perceptions of mental health treatment, a shortage of trained professionals, and limited physical accessibility. Overcoming these difficulties calls for a new approach, and AI technology is being explored as one potential solution. Explainable artificial intelligence (X-AI), which combines accuracy with interpretability, can improve the accuracy of expert decision-making, increase the accessibility of mental health services, and address the psychological problems of groups at high risk of depression. This review examines the current use of X-AI in mental health assessments for depression. Across the six reviewed studies that used X-AI to identify groups at high risk of depression, algorithms such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) were used to explain depression predictions. In psychiatric applications such as depression prediction, it is crucial that the justifications for AI predictions be clear and transparent; ensuring the interpretability of AI models will therefore be important in future research.
Keywords: Depression; LIME; Explainable artificial intelligence; Machine learning; SHAP
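To make the SHAP idea mentioned in the abstract concrete: SHAP attributes a model's prediction to individual features using Shapley values from cooperative game theory. The sketch below is not from the reviewed studies; it is a minimal, self-contained illustration that computes exact Shapley values by brute force for a hypothetical toy "depression risk score", where v(S) evaluates the model with the features in subset S taken from the instance and the rest held at a baseline.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f over len(x) features.
    v(S): model output with features in S taken from x, others from baseline.
    Exponential in the number of features, so only suitable for tiny models;
    SHAP approximates this efficiently for real ones."""
    n = len(x)

    def v(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):  # subsets of all sizes 0..n-1 not containing i
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Hypothetical linear "risk score" over three features; for a linear model
# the Shapley value of feature i is simply coeff_i * (x_i - baseline_i).
risk = lambda z: 0.5 * z[0] + 0.3 * z[1] + 0.2 * z[2]
phi = shapley_values(risk, x=[2.0, 1.0, 0.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # attributions sum to risk(x) - risk(baseline)
```

The key property this demonstrates is additivity: the per-feature attributions always sum to the difference between the prediction for the instance and the baseline prediction, which is what makes SHAP explanations auditable in clinical settings.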
Journal description:
IJACSA is a scholarly computer science journal representing the best in research. Its mission is to provide an outlet for quality research to be publicised and published to a global audience. The journal aims to publish papers selected through rigorous double-blind peer review to ensure originality, timeliness, relevance, and readability. In line with the Journal's vision "to be a respected publication that publishes peer reviewed research articles, as well as review and survey papers contributed by International community of Authors", we have drawn reviewers and editors from institutions and universities across the globe. A double-blind peer review process is conducted to ensure that we retain high standards. At IJACSA, we stand strong because we know that global challenges make way for new innovations, new ways, and new talent. International Journal of Advanced Computer Science and Applications publishes carefully refereed research, review, and survey papers which offer a significant contribution to the computer science literature and which are of interest to a wide audience. Coverage extends to all mainstream branches of computer science and related applications.