Do AI Models “Like” Black Dogs? Towards Exploring Perceptions of Dogs with Vision-Language Models

Marcelo Feighelstein, Einat Kovalyo, Jennifer Abrams, Sarah-Elisabeth Byosiere, A. Zamansky

Proceedings of the Ninth International Conference on Animal-Computer Interaction, 5 December 2022. DOI: 10.1145/3565995.3566022
Abstract: Large-scale, pretrained vision-language models such as OpenAI’s CLIP are a game changer in Computer Vision due to their unprecedented ‘zero-shot’ image classification capabilities. As they are pretrained on huge amounts of unsupervised web-scraped data, they suffer from inherent biases reflecting human perceptions, norms and beliefs. This position paper aims to highlight the potential of studying models such as CLIP in the context of human-animal relationships, in particular for understanding human perceptions and preferences with respect to physical attributes of pets and their adoptability.