Sai Sathiesh Rajan, Ezekiel Soremekun, Yves Le Traon, Sudipta Chattopadhyay
Distribution-aware fairness test generation
Journal of Systems and Software (JCR Q1, Computer Science, Software Engineering; Impact Factor 3.7). Published 2024-05-10. DOI: 10.1016/j.jss.2024.112090. Available at https://www.sciencedirect.com/science/article/pii/S0164121224001353
Ensuring that all classes of objects are detected with equal accuracy is essential in AI systems. For instance, being unable to identify any one class of objects could have fatal consequences in autonomous driving systems. Hence, ensuring the reliability of image recognition systems is crucial. This work addresses how to validate group fairness in image recognition software. We propose a distribution-aware fairness testing approach (called DISTROFAIR) that systematically exposes class-level fairness violations in image classifiers via a synergistic combination of out-of-distribution (OOD) testing and semantic-preserving image mutation. DISTROFAIR automatically learns the distribution (e.g., number/orientation) of objects in a set of images. Then it systematically mutates objects in the images to become OOD using three semantic-preserving image mutations: object deletion, object insertion, and object rotation. We evaluate DISTROFAIR using two well-known datasets (CityScapes and MS-COCO) and three major commercial image recognition services (namely, Amazon Rekognition, Google Cloud Vision, and Azure Computer Vision). Results show that about 21% of images generated by DISTROFAIR reveal class-level fairness violations using either ground-truth or metamorphic oracles. DISTROFAIR is up to 2.3× more effective than two main baselines, i.e., (a) an approach that focuses on generating images only within the distribution (ID) and (b) fairness analysis using only the original image dataset. We further observed that DISTROFAIR is efficient: it generates 460 images per hour, on average. Finally, we evaluate the semantic validity of our approach via a user study with 81 participants, using 30 real images and 30 corresponding mutated images generated by DISTROFAIR. We found that images generated by DISTROFAIR are 80% as realistic as real-world images.
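The metamorphic oracle described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `classify` callable, the `mutated_away` parameter, and the toy label-set "images" are all assumptions made for illustration. The idea is that a semantic-preserving mutation (deletion, insertion, or rotation) should not cause the classifier to lose a class that is still present in the image; any such lost class signals a class-level fairness violation. A real deployment would replace the toy classifier with a call to a service such as Amazon Rekognition.

```python
def metamorphic_violation(classify, original, mutated, mutated_away=frozenset()):
    """Return the set of classes that a semantic-preserving mutation
    caused the classifier to lose (non-empty => fairness violation)."""
    before = classify(original)
    after = classify(mutated)
    # Classes deliberately removed by an object-deletion mutation may
    # legitimately disappear; any other lost class is a violation.
    return before - after - set(mutated_away)

# Toy stand-in classifier over symbolic "images" (sets of labels);
# a leading underscore marks an object the classifier fails to detect.
def toy_classify(image):
    return {label for label in image if not label.startswith("_")}

original = {"car", "person", "bicycle"}
# A rotation mutation keeps all objects, but the classifier now
# misses "bicycle" in the mutated image.
mutated = {"car", "person", "_bicycle"}
print(metamorphic_violation(toy_classify, original, mutated))
# -> {'bicycle'}
```

In DISTROFAIR's actual setting the mutations operate on pixels of real images and the oracle compares label sets returned by commercial APIs; the set-difference logic above captures only the oracle's decision rule.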
Journal introduction:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
• Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
• Agile, model-driven, service-oriented, open source and global software development
• Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
• Human factors and management concerns of software development
• Data management and big data issues of software systems
• Metrics and evaluation, data mining of software development resources
• Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.