{"title":"Practices for Engineering Trustworthy Machine Learning Applications","authors":"A. Serban, K. Blom, H. Hoos, Joost Visser","doi":"10.1109/WAIN52551.2021.00021","DOIUrl":"https://doi.org/10.1109/WAIN52551.2021.00021","url":null,"abstract":"Following the recent surge in adoption of machine learning (ML), the negative impact that improper use of ML can have on users and society is now also widely recognised. To address this issue, policy makers and other stakeholders, such as the European Commission or NIST, have proposed high-level guidelines aiming to promote trustworthy ML (i.e., lawful, ethical and robust). However, these guidelines do not specify actions to be taken by those involved in building ML systems. In this paper, we argue that guidelines related to the development of trustworthy ML can be translated to operational practices, and should become part of the ML development life cycle. Towards this goal, we ran a multi-vocal literature review, and mined operational practices from white and grey literature. Moreover, we launched a global survey to measure practice adoption and the effects of these practices. In total, we identified 14 new practices, and used them to complement an existing catalogue of ML engineering practices. Initial analysis of the survey results reveals that so far, practice adoption for trustworthy ML is relatively low. In particular, practices related to assuring security of ML components have very low adoption. Other practices enjoy slightly larger adoption, such as providing explanations to users. Our extended practice catalogue can be used by ML development teams to bridge the gap between high-level guidelines and actual development of trustworthy ML systems; it is open for review and contributions.","PeriodicalId":224912,"journal":{"name":"2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN)","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117143736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corner Case Data Description and Detection","authors":"Tinghui Ouyang, Vicent Sant Marco, Yoshinao Isobe, H. Asoh, Y. Oiwa, Yoshiki Seo","doi":"10.1109/WAIN52551.2021.00009","DOIUrl":"https://doi.org/10.1109/WAIN52551.2021.00009","url":null,"abstract":"As the major factors affecting the safety of deep learning models, corner cases and related detection are crucial in AI quality assurance for constructing safety- and security-critical systems. The generic corner case researches involve two interesting topics. One is to enhance DL models’ robustness to corner case data via the adjustment on parameters/structure. The other is to generate new corner cases for model retraining and improvement. However, the complex architecture and the huge amount of parameters make the robust adjustment of DL models not easy, meanwhile it is not possible to generate all real-world corner cases for DL training. Therefore, this paper proposes a simple and novel approach aiming at corner case data detection via a specific metric. This metric is developed on surprise adequacy (SA) which has advantages on capture data behaviors. Furthermore, targeting at characteristics of corner case data, three modifications on distanced-based SA are developed for classification applications in this paper. Consequently, through the experiment analysis on MNIST data and industrial data, the feasibility and usefulness of the proposed method on corner case data detection are verified.","PeriodicalId":224912,"journal":{"name":"2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132429543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Message from the WAIN 2021 Workshop Chairs","authors":"Jan Bosch, I. Crnkovic, H. Holmström, Lucy Ellen Lwakatare","doi":"10.1109/wain52551.2021.00005","DOIUrl":"https://doi.org/10.1109/wain52551.2021.00005","url":null,"abstract":"","PeriodicalId":224912,"journal":{"name":"2021 IEEE/ACM 1st Workshop on AI Engineering - Software Engineering for AI (WAIN)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115791145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}