Unusual Suspects
Janet Chan
Surveillance & Society, 2023-09-24
DOI: 10.24908/ss.v21i3.16102

Abstract
The use of artificial intelligence in facial recognition systems has been controversial. Among the issues of concern is the accuracy of such systems in recognising the faces of non-white people. This work turns the debate on its head by presenting six images of AI-generated faces produced from identical prompts containing the words “Asian woman” and “facial recognition biometrics person technology,” created via Text 2 Dream in Deep Dream Generator. Rather than investigating the accuracy of facial recognition systems, it demonstrates how a particular AI tool creates visual representations of “Asian women.” The experiment explores the interaction between text (the prompt) and a particular generative algorithm. It raises questions about the data on which the algorithm is trained, how images are labelled and interpreted in that training data, and the underlying power AI algorithms have to reproduce or change stereotypes. Not transparent to viewers is the role of the artist in selecting and framing the prompts and “starter” images.