Visual Knowledge Negotiation
A. Blackwell, Luke Church, M. Mahmoudi, Mariana Marasoiu
2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), October 2018
DOI: 10.1109/VLHCC.2018.8506553
We ask how users interact with ‘knowledge’ in the context of artificial intelligence systems. Four examples of visual interfaces demonstrate the need for such systems to allow room for negotiation between domain experts, automated statistical models, and the people who are involved in collecting and providing data.