{"title":"How to Enlighten Novice Users on Behavior of Machine Learning Models?","authors":"Hiroto Mizutani, Masateru Tsunoda, K. Nakasai","doi":"10.1109/SNPD51163.2021.9704891","DOIUrl":null,"url":null,"abstract":"Background: Machine learning models are sometimes embedded in software to implement the required functions. As a result, non-experts in machine learning are becoming familiar with the models. However, the interpretability of the built models is often low in machine learning, such as deep learning, and the recognition process of such models is very different from that of humans. Therefore, it is not easy for novice users, such as end-users and beginners, to anticipate the behavior of models that they will use or build. Aim: We assist novice users to realize an aspect of the behavior of machine learning models relating to robustness intuitively. Method: We formalized and evaluated quiz-based analysis, which is often applied by practitioners to test the robustness of machine learning models arbitrarily. To generate test cases of the models, the analysis converts images towards the boundary of classification for both machine learning and humans. It can be regarded as a type of boundary value analysis of software development. Results: In the experiment, we evaluated whether the analysis quantitatively clarified the aspects of the models. The analysis clarified the robustness of the model for image conversion and misclassification quantitatively. Conclusion: The analysis is expected to enlighten novice users on the behavior of machine learning models. 
This may promote behavioral changes in the evaluation of models for novice users.","PeriodicalId":235370,"journal":{"name":"2021 IEEE/ACIS 22nd International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/ACIS 22nd International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SNPD51163.2021.9704891","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Background: Machine learning models are sometimes embedded in software to implement required functions. As a result, non-experts in machine learning increasingly encounter such models. However, the interpretability of the built models is often low in machine learning approaches such as deep learning, and the recognition process of these models differs greatly from that of humans. It is therefore not easy for novice users, such as end-users and beginners, to anticipate the behavior of the models they use or build. Aim: We help novice users intuitively grasp an aspect of the behavior of machine learning models related to robustness. Method: We formalized and evaluated quiz-based analysis, which practitioners often apply in an ad hoc manner to test the robustness of machine learning models. To generate test cases for the models, the analysis transforms images toward the classification boundary for both machine learning models and humans. It can be regarded as a form of boundary value analysis from software testing. Results: In the experiment, we evaluated whether the analysis quantitatively clarified these aspects of the models. The analysis quantitatively clarified the robustness of the model against image transformation and misclassification. Conclusion: The analysis is expected to enlighten novice users about the behavior of machine learning models. This may promote behavioral changes in how novice users evaluate such models.
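The boundary-value idea in the abstract — transforming an input step by step toward the classification boundary and observing where the model's prediction flips — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `probe_boundary`, the brightness-threshold "model," and the darkening transform are all hypothetical stand-ins chosen so the example runs with NumPy alone.

```python
import numpy as np

def probe_boundary(image, model, transform, steps=10):
    """Apply a transform at increasing intensity and report the first
    intensity level at which the model's prediction changes.

    This mimics a boundary-value probe: each level is a generated test
    case moving the input toward the classification boundary.
    """
    baseline = model(image)
    for i in range(1, steps + 1):
        level = i / steps
        pred = model(transform(image, level))
        if pred != baseline:
            return level, pred  # first level where the label flips
    return None, baseline       # prediction never changed

# Toy stand-ins (assumptions, not from the paper): a classifier that
# thresholds mean brightness, and a transform that darkens the image.
def toy_model(img):
    return "bright" if img.mean() > 0.5 else "dark"

def darken(img, level):
    return img * (1.0 - level)

img = np.full((8, 8), 0.9)  # uniformly bright 8x8 image
level, pred = probe_boundary(img, toy_model, darken)
# The label flips from "bright" to "dark" once darkening pushes the
# mean brightness below the 0.5 threshold.
```

A real instantiation would replace `toy_model` with the trained model under test and `darken` with a perceptually meaningful conversion (blur, noise, rotation), then compare where the model's label flips against where a human's answer would.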