Geolocation of Cultural Heritage using Multi-View Knowledge Graph Embedding
Heba Mohamed, Sebastiano Vascon, Feliks Hibraj, Stuart James, Diego Pilutti, A. D. Bue, M. Pelillo
ICPR Workshops. Pub Date: 2022-09-08. DOI: 10.48550/arXiv.2209.03638
Abstract: Knowledge Graphs (KGs) have proven to be a reliable way of structuring data. They can provide a rich source of contextual information about cultural heritage collections. However, cultural heritage KGs are far from complete; they are often missing important attributes such as geographical location, especially for sculptures and for mobile or indoor entities such as paintings. In this paper, we first present a framework for ingesting knowledge about tangible cultural heritage entities from various data sources, together with their connected multi-hop knowledge, into a geolocalized KG. Second, we propose a multi-view learning model for estimating the relative distance between a given pair of cultural heritage entities, based on both the geographical and the knowledge connections of the entities.
Statistical Methods for Assessing Differences in False Non-match Rates Across Demographic Groups
M. Schuckers, Sandip Purnapatra, K. Fatima, Daqing Hou, S. Schuckers
ICPR Workshops. Pub Date: 2022-08-21. DOI: 10.1007/978-3-031-37731-0_41
Deep Generative Views to Mitigate Gender Classification Bias Across Gender-Race Groups
Sreeraj Ramachandran, A. Rattani
ICPR Workshops. Pub Date: 2022-08-17. DOI: 10.48550/arXiv.2208.08382
Abstract: Published studies have suggested that automated face-based gender classification algorithms are biased across gender-race groups. Specifically, unequal accuracy rates were obtained for women and dark-skinned people. To mitigate the bias of gender classifiers, the vision community has developed several strategies. However, the efficacy of these mitigation strategies has been demonstrated for only a limited number of races, mostly Caucasian and African-American. Further, these strategies often entail a trade-off between bias and classification accuracy. To further advance the state of the art, we leverage the power of generative views, structured learning, and evidential learning towards mitigating gender classification bias. Through extensive experimental validation, we demonstrate the superiority of our bias mitigation strategy in improving classification accuracy and reducing bias across gender-racial groups, resulting in state-of-the-art performance in intra- and cross-dataset evaluations.
Automatic Detection of Noisy Electrocardiogram Signals without Explicit Noise Labels
Radhika Dua, Jiyoung Lee, J. Kwon, E. Choi
ICPR Workshops. Pub Date: 2022-08-08. DOI: 10.48550/arXiv.2208.08853
Abstract: Electrocardiogram (ECG) signals are beneficial in diagnosing cardiovascular diseases, which are among the leading causes of death. However, ECG signals are often contaminated by noise artifacts, which hamper both automatic and manual diagnosis: automatic deep learning-based examination of noisy ECG signals can lead to inaccurate diagnoses, and manual analysis requires clinicians to reject noisy ECG samples, which costs extra time. To address this limitation, we present a two-stage deep learning-based framework to automatically detect noisy ECG samples. Through extensive experiments and analysis on two different datasets, we observe that the framework effectively detects both slightly and highly noisy ECG samples. We also study the transfer of the model learned on one dataset to another and observe that the framework remains effective at detecting noisy ECG samples.
SERCNN: Stacked Embedding Recurrent Convolutional Neural Network in Detecting Depression on Twitter
Heng Ee Tay, M. Lim, Chun Yong Chong
ICPR Workshops. Pub Date: 2022-07-29. DOI: 10.1007/978-3-031-37660-3_43
GBDF: Gender Balanced DeepFake Dataset Towards Fair DeepFake Detection
Aakash Varma Nadimpalli, A. Rattani
ICPR Workshops. Pub Date: 2022-07-21. DOI: 10.48550/arXiv.2207.10246
Abstract: Facial forgery by deepfakes has raised severe societal concerns, and the vision community has proposed several solutions to combat misinformation on the internet via automated deepfake detection systems. However, recent studies have demonstrated that facial analysis-based deep learning models can discriminate based on protected attributes. For the commercial adoption and massive roll-out of deepfake detection technology, it is vital to evaluate and understand the fairness (the absence of any prejudice or favoritism) of deepfake detectors across demographic variations such as gender and race, as a performance differential between demographic subgroups would impact millions of people in the disadvantaged subgroup. This paper aims to evaluate the fairness of deepfake detectors across males and females. However, existing deepfake datasets are not annotated with demographic labels to facilitate fairness analysis. To this aim, we manually annotated existing popular deepfake datasets with gender labels and evaluated the performance differential of current deepfake detectors across gender. Our analysis of the gender-labeled versions of the datasets suggests that (a) current deepfake datasets have a skewed distribution across gender, and (b) commonly adopted deepfake detectors obtain unequal performance across gender, with males mostly outperforming females. Finally, we contribute GBDF, a gender-balanced and annotated deepfake dataset, to mitigate the performance differential and to promote research and development towards fairness-aware deepfake detectors. The GBDF dataset is publicly available at: https://github.com/aakash4305/GBDF
Comparing Feature Importance and Rule Extraction for Interpretability on Text Data
Gianluigi Lopardo, D. Garreau
ICPR Workshops. Pub Date: 2022-07-04. DOI: 10.48550/arXiv.2207.01420
Abstract: Complex machine learning algorithms are increasingly used in critical tasks involving text data, leading to the development of interpretability methods. Among local methods, two families have emerged: those computing importance scores for each feature and those extracting simple logical rules. In this paper, we show that different methods can produce unexpectedly different explanations, even when applied to simple models for which we would expect qualitative agreement. To quantify this effect, we propose a new approach for comparing the explanations produced by different methods.
Comparison of attention models and post-hoc explanation methods for embryo stage identification: a case study
T. Gomez, Thomas Fréour, H. Mouchère
ICPR Workshops. Pub Date: 2022-05-13. DOI: 10.48550/arXiv.2205.06546
Abstract: An important limitation to the development of AI-based solutions for In Vitro Fertilization (IVF) is the black-box nature of most state-of-the-art models, due to the complexity of deep learning architectures, which raises potential bias and fairness issues. The need for interpretable AI has risen not only in the IVF field but also in the deep learning community in general. This has started a trend in the literature where authors focus on designing objective metrics to evaluate generic explanation methods. In this paper, we study the behavior of recently proposed objective faithfulness metrics applied to the problem of embryo stage identification. We benchmark attention models and post-hoc methods using these metrics and further show empirically that (1) the metrics produce low overall agreement on the model ranking, and (2) depending on the metric approach, either post-hoc methods or attention models are favored. We conclude with general remarks about the difficulty of defining faithfulness and the necessity of understanding its relationship with the type of approach that is favored.
Evaluating Proposed Fairness Models for Face Recognition Algorithms
John J. Howard, Eli J. Laird, Yevgeniy B. Sirotin, Rebecca E. Rubin, Jerry L. Tipton, A. Vemury
ICPR Workshops. Pub Date: 2022-03-09. DOI: 10.48550/arXiv.2203.05051
Abstract: The development of face recognition algorithms by academic and commercial organizations is growing rapidly due to the onset of deep learning and the widespread availability of training data. Though tests of face recognition algorithm performance indicate yearly performance gains, error rates for many of these systems differ based on the demographic composition of the test set. These "demographic differentials" in algorithm performance can contribute to unequal or unfair outcomes for certain groups of people, raising concerns with the increased worldwide adoption of face recognition systems. Consequently, regulatory bodies in both the United States and Europe have proposed new rules requiring audits of biometric systems for "discriminatory impacts" (European Union Artificial Intelligence Act) and "fairness" (U.S. Federal Trade Commission). However, no standard for measuring fairness in biometric systems yet exists. This paper characterizes two proposed measures of face recognition algorithm fairness (fairness measures) from scientists in the U.S. and Europe. We find that both proposed methods are challenging to interpret when applied to disaggregated face recognition error rates as they are commonly experienced in practice. To address this, we propose a set of interpretability criteria, termed the Functional Fairness Measure Criteria (FFMC), that outlines the properties desirable in a face recognition algorithm fairness measure. We further develop a new fairness measure, the Gini Aggregation Rate for Biometric Equitability (GARBE), and show how, in conjunction with Pareto optimization, this measure can be used to select among alternative algorithms based on the accuracy/fairness trade-space. Finally, we have open-sourced our dataset of machine-readable, demographically disaggregated error rates. We believe this is currently the largest open-source dataset of its kind.
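The GARBE measure named in the abstract builds on the Gini coefficient applied to demographically disaggregated error rates. As a minimal illustration of that building block only (the paper's full aggregation differs; the error-rate values below are hypothetical), a Gini coefficient over per-group false non-match rates can be computed as:

```python
def gini(rates):
    """Gini coefficient of per-group error rates.

    0 means all groups share the same rate (perfectly equitable);
    values approaching 1 mean errors are concentrated in few groups.
    """
    n = len(rates)
    mean = sum(rates) / n
    # Mean absolute difference over all ordered pairs of groups
    mad = sum(abs(a - b) for a in rates for b in rates) / (n * n)
    return mad / (2 * mean)

# Hypothetical false non-match rates for four demographic groups
fnmr = [0.010, 0.012, 0.025, 0.040]
print(round(gini(fnmr), 3))
```

A value near 0.3 for the sample rates above reflects the fourfold spread between the best- and worst-served groups, whereas identical rates across groups yield exactly 0.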
The Effect of Model Compression on Fairness in Facial Expression Recognition
Samuil Stoychev, H. Gunes
ICPR Workshops. Pub Date: 2022-01-05. DOI: 10.1007/978-3-031-37745-7_9