Guanyu Hu;Dimitrios Kollias;Eleni Papadopoulou;Paraskevi Tzouveli;Jie Wei;Xinyu Yang
{"title":"重新思考影响分析:确保公平性和一致性的协议","authors":"Guanyu Hu;Dimitrios Kollias;Eleni Papadopoulou;Paraskevi Tzouveli;Jie Wei;Xinyu Yang","doi":"10.1109/TBIOM.2025.3550000","DOIUrl":null,"url":null,"abstract":"Evaluating affect analysis methods presents challenges due to inconsistencies in database partitioning and evaluation protocols, leading to unfair and biased results. Previous studies claim continuous performance improvements, but our findings challenge such assertions. Using these insights, we propose a unified protocol for database partitioning that ensures fairness and comparability. Specifically, our contributions include extending detailed demographic annotations (in terms of race, gender, and age) for six commonly used affective databases, providing fairness evaluation metrics, and establishing a common framework for expression recognition, action unit detection, and valence-arousal estimation. Additionally, we conduct extensive experiments using state-of-the-art and baseline methods under the new protocol, revealing previously unobserved fairness discrepancies and biases. We also rerun the methods with the new protocol and introduce new leaderboards to encourage future research in affect recognition with fairer comparisons. Our annotations, codes and pre-trained models are available here.","PeriodicalId":73307,"journal":{"name":"IEEE transactions on biometrics, behavior, and identity science","volume":"7 4","pages":"914-923"},"PeriodicalIF":5.0000,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Rethinking Affect Analysis: A Protocol for Ensuring Fairness and Consistency\",\"authors\":\"Guanyu Hu;Dimitrios Kollias;Eleni Papadopoulou;Paraskevi Tzouveli;Jie Wei;Xinyu Yang\",\"doi\":\"10.1109/TBIOM.2025.3550000\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Evaluating affect analysis methods presents challenges due to inconsistencies in database partitioning and evaluation protocols, leading to unfair and biased results. Previous studies claim continuous performance improvements, but our findings challenge such assertions. Using these insights, we propose a unified protocol for database partitioning that ensures fairness and comparability. Specifically, our contributions include extending detailed demographic annotations (in terms of race, gender, and age) for six commonly used affective databases, providing fairness evaluation metrics, and establishing a common framework for expression recognition, action unit detection, and valence-arousal estimation. Additionally, we conduct extensive experiments using state-of-the-art and baseline methods under the new protocol, revealing previously unobserved fairness discrepancies and biases. We also rerun the methods with the new protocol and introduce new leaderboards to encourage future research in affect recognition with fairer comparisons. 
Our annotations, codes and pre-trained models are available here.\",\"PeriodicalId\":73307,\"journal\":{\"name\":\"IEEE transactions on biometrics, behavior, and identity science\",\"volume\":\"7 4\",\"pages\":\"914-923\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2025-03-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on biometrics, behavior, and identity science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10919136/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on biometrics, behavior, and identity science","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10919136/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Rethinking Affect Analysis: A Protocol for Ensuring Fairness and Consistency
Evaluating affect analysis methods presents challenges due to inconsistencies in database partitioning and evaluation protocols, leading to unfair and biased results. Previous studies claim continuous performance improvements, but our findings challenge such assertions. Using these insights, we propose a unified protocol for database partitioning that ensures fairness and comparability. Specifically, our contributions include extending detailed demographic annotations (in terms of race, gender, and age) for six commonly used affective databases, providing fairness evaluation metrics, and establishing a common framework for expression recognition, action unit detection, and valence-arousal estimation. Additionally, we conduct extensive experiments using state-of-the-art and baseline methods under the new protocol, revealing previously unobserved fairness discrepancies and biases. We also rerun the methods with the new protocol and introduce new leaderboards to encourage future research in affect recognition with fairer comparisons. Our annotations, code, and pre-trained models are available here.
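The abstract does not specify how the fairness evaluation metrics are defined. As a minimal, hedged illustration of the kind of per-demographic-group evaluation the protocol describes, the sketch below scores predictions separately for each group and reports the best-minus-worst gap. The group labels, function names, and the gap definition here are assumptions made for illustration, not the paper's actual protocol.

```python
# Illustrative sketch only (not the paper's metric): evaluate predictions
# per demographic group and report the worst-case performance gap.
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} with predictions split by demographic group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }

def fairness_gap(scores):
    """Difference between the best- and worst-performing group (smaller is fairer)."""
    return max(scores.values()) - min(scores.values())

# Hypothetical toy data: expression labels tagged with a demographic group.
y_true = [0, 1, 1, 2, 0, 2, 1, 0]
y_pred = [0, 1, 0, 2, 0, 1, 1, 0]
groups = ["A", "A", "B", "B", "A", "B", "A", "B"]

scores = per_group_accuracy(y_true, y_pred, groups)
print(scores)               # per-group accuracy
print(fairness_gap(scores)) # gap between best and worst group
```

A protocol of this kind would typically report such per-group scores alongside the overall metric, so that an apparent improvement on the aggregate leaderboard cannot hide a widening disparity between demographic groups.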