An Interactive Approach to Bias Identification in a Machine Teaching Task
Tara Tressel, Claudel Rheault, Masha Krol, Chris Tyler
Proceedings of the 25th International Conference on Intelligent User Interfaces Companion, March 17, 2020. DOI: 10.1145/3379336.3381501
Supervised machine learning requires labelled data examples to train models, and those examples often come from humans who may not be experts in artificial intelligence (i.e., "AI"). Currently, many resources are devoted to these labelling tasks, a majority of which companies outsource to reduce costs, and oversight of such tasks can be cumbersome. Concurrently, biases in machine learning models and human cognition are a growing concern in applications of AI. In this paper, we present a machine teaching platform for non-AI experts that leverages interactive data exploration approaches to identify algorithmic and human (e.g., cognitive) biases. Our main objective is to understand how data exploration and explainability might impact the machine teacher (i.e., the data labeller) and their understanding of AI, subsequently improving model performance, all while reducing potential bias concerns.