{"title":"Assessing Post-hoc Explainability of the BKT Algorithm","authors":"Tongyu Zhou, Haoyu Sheng, I. Howley","doi":"10.1145/3375627.3375856","DOIUrl":"https://doi.org/10.1145/3375627.3375856","url":null,"abstract":"As machine intelligence is increasingly incorporated into educational technologies, it becomes imperative for instructors and students to understand the potential flaws of the algorithms on which their systems rely. This paper describes the design and implementation of an interactive post-hoc explanation of the Bayesian Knowledge Tracing algorithm which is implemented in learning analytics systems used across the United States. After a user-centered design process to smooth out interaction design difficulties, we ran a controlled experiment to evaluate whether the interactive or static version of the explainable led to increased learning. Our results reveal that learning about an algorithm through an explainable depends on users' educational background. For other contexts, designers of post-hoc explainables must consider their users' educational background to best determine how to empower more informed decision-making with AI-enhanced systems.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73557758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Just Approach Balancing Rawlsian Leximax Fairness and Utilitarianism","authors":"V. Chen, J. Hooker","doi":"10.1145/3375627.3375844","DOIUrl":"https://doi.org/10.1145/3375627.3375844","url":null,"abstract":"Numerous AI-assisted resource allocation decisions need to balance the conflicting goals of fairness and efficiency. Our paper studies the challenging task of defining and modeling a proper fairness-efficiency trade off. We define fairness with Rawlsian leximax fairness, which views the lexicographic maximum among all feasible outcomes as the most equitable; and define efficiency with Utilitarianism, which seeks to maximize the sum of utilities received by entities regardless of individual differences. Motivated by a justice-driven trade off principle: prioritize fairness to benefit the less advantaged unless too much efficiency is sacrificed, we propose a sequential optimization procedure to balance leximax fairness and utilitarianism in decision-making. Each iteration of our approach maximizes a social welfare function, and we provide a practical mixed integer/linear programming (MILP) formulation for each maximization problem. We illustrate our method on a budget allocation example. Compared with existing approaches of balancing equity and efficiency, our method is more interpretable in terms of parameter selection, and incorporates a strong equity criterion with a thoroughly balanced perspective.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82151691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Technocultural Pluralism: A \"Clash of Civilizations\" in Technology?","authors":"Osonde A. Osoba","doi":"10.1145/3375627.3375834","DOIUrl":"https://doi.org/10.1145/3375627.3375834","url":null,"abstract":"At the end of the Cold War, the renowned political scientist, Samuel Huntington, argued that future conflicts were more likely to stem from cultural frictions -- ideologies, social norms, and political systems -- rather than political or economic frictions. Huntington focused his concern on the future of geopolitics in a rapidly shrinking world. This paper argues that a similar dynamic is at play in the interaction of technology cultures. We emphasize the role of culture in the evolution of technology and identify the particular role that culture (esp. privacy culture) plays in the development of AI/ML technologies. Then we examine some implications that this perspective brings to the fore.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84574179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Auditing Algorithms: On Lessons Learned and the Risks of Data Minimization","authors":"G. G. Clavell, M. M. Zamorano, C. Castillo, Oliver Smith, A. Matic","doi":"10.1145/3375627.3375852","DOIUrl":"https://doi.org/10.1145/3375627.3375852","url":null,"abstract":"In this paper, we present the Algorithmic Audit (AA) of REM!X, a personalized well-being recommendation app developed by Telefónica Innovación Alpha. The main goal of the AA was to identify and mitigate algorithmic biases in the recommendation system that could lead to the discrimination of protected groups. The audit was conducted through a qualitative methodology that included five focus groups with developers and a digital ethnography relying on users comments reported in the Google Play Store. To minimize the collection of personal information, as required by best practice and the GDPR [1], the REM!X app did not collect gender, age, race, religion, or other protected attributes from its users. This limited the algorithmic assessment and the ability to control for different algorithmic biases. Indirect evidence was thus used as a partial mitigation for the lack of data on protected attributes, and allowed the AA to identify four domains where bias and discrimination were still possible, even without direct personal identifiers. Our analysis provides important insights into how general data ethics principles such as data minimization, fairness, non-discrimination and transparency can be operationalized via algorithmic auditing, their potential and limitations, and how the collaboration between developers and algorithmic auditors can lead to better technologies","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89118614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When Your Only Tool Is A Hammer: Ethical Limitations of Algorithmic Fairness Solutions in Healthcare Machine Learning","authors":"M. Mccradden, M. Mazwi, Shalmali Joshi, James A. Anderson","doi":"10.1145/3375627.3375824","DOIUrl":"https://doi.org/10.1145/3375627.3375824","url":null,"abstract":"It is no longer a hypothetical worry that artificial intelligence - more specifically, machine learning (ML) - can propagate the effects of pernicious bias in healthcare. To address these problems, some have proposed the development of 'algorithmic fairness' solutions. The primary goal of these solutions is to constrain the effect of pernicious bias with respect to a given outcome of interest as a function of one's protected identity (i.e., characteristics generally protected by civil or human rights legislation. The technical limitations of these solutions have been well-characterized. Ethically, the problematic implication - of developers, potentially, and end users - is that by virtue of algorithmic fairness solutions a model can be rendered 'objective' (i.e., free from the influence of pernicious bias). The ostensible neutrality of these solutions may unintentionally prompt new consequences for vulnerable groups by obscuring downstream problems due to the persistence of real-world bias. The main epistemic limitation of algorithmic fairness is that it assumes the relationship between the extent of bias's impact on a given health outcome and one's protected identity is mathematically quantifiable. The reality is that social and structural factors confluence in complex and unknown ways to produce health inequalities. Some of these are biologic in nature, and differences like these are directly relevant to predicting a health event and should be incorporated into the model's design. Others are reflective of prejudice, lack of access to healthcare, or implicit bias. Sometimes, there may be a combination. With respect to any specific task, it is difficult to untangle the complex relationships between potentially influential factors and which ones are 'fair' and which are not to inform their inclusion or mitigation in the model's design.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73280803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethics for AI Writing: The Importance of Rhetorical Context","authors":"H. McKee, J. E. Porter","doi":"10.1145/3375627.3375811","DOIUrl":"https://doi.org/10.1145/3375627.3375811","url":null,"abstract":"Implicit in any rhetorical interaction-between humans or between humans and machines-are ethical codes that shape the rhetorical context, the social situation in which communication happens and also the engine that drives communicative interaction. Such implicit codes are usually invisible to AI writing systems because the social factors shaping communication (the why and how of language, not the what) are not usually explicitly evident in databases the systems use to produce discourse. Can AI writing systems learn to learn rhetorical context, particularly the implicit codes for communication ethics? We see evidence that some systems do address issues of rhetorical context, at least in rudimentary ways. But we critique the information transfer communication model supporting many AI writing systems, arguing for a social context model that accounts for rhetorical context-what is, in a sense, \"not there\" in the data corpus but that is critical for the production of meaningful, significant, and ethical communication. We offer two ethical principles to guide design of AI writing systems: transparency about machine presence and critical data awareness, a methodological reflexivity about rhetorical context and omissions in the data that need to be provided by a human agent or accounted for in machine learning.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81487590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measuring Fairness in an Unfair World","authors":"J. Herington","doi":"10.1145/3375627.3375854","DOIUrl":"https://doi.org/10.1145/3375627.3375854","url":null,"abstract":"Computer scientists have made great strides in characterizing different measures of algorithmic fairness, and showing that certain measures of fairness cannot be jointly satisfied. In this paper, I argue that the three most popular families of measures - unconditional independence, target-conditional independence and classification-conditional independence - make assumptions that are unsustainable in the context of an unjust world. I begin by introducing the measures and the implicit idealizations they make about the underlying causal structure of the contexts in which they are deployed. I then discuss how these idealizations fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. In the final section, I suggest an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"24 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83294224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adoption Dynamics and Societal Impact of AI Systems in Complex Networks","authors":"Pedro M. Fernandes, F. C. Santos, Manuel Lopes","doi":"10.1145/3375627.3375847","DOIUrl":"https://doi.org/10.1145/3375627.3375847","url":null,"abstract":"We propose a game-theoretical model to simulate the dynamics of AI adoption in adaptive networks. This formalism allows us to understand the impact of the adoption of AI systems for society as a whole, addressing some of the concerns on the need for regulation. Using this model we study the adoption of AI systems, the distribution of the different types of AI (from selfish to utilitarian), the appearance of clusters of specific AI types, and the impact on the fitness of each individual. We suggest that the entangled evolution of individual strategy and network structure constitutes a key mechanism for the sustainability of utilitarian and human-conscious AI. Differently, in the absence of rewiring, a minority of the population can easily foster the adoption of selfish AI and gains a benefit at the expense of the remaining majority.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90083363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computerize the Race Problem?: Why We Must Plan for a Just AI Future","authors":"Charlton D. McIlwain","doi":"10.1145/3375627.3377140","DOIUrl":"https://doi.org/10.1145/3375627.3377140","url":null,"abstract":"1960s civil rights and racial justice activists tried to warn us about our technological ways, but we didn't hear them talk. The so-called wizards who stayed up late ignored or dismissed black voices, calling out from street corners to pulpits, union halls to the corridors of Congress. Instead, the men who took the first giant leaps towards conceiving and building our earliest \"thinking\" and \"learning\" machines aligned themselves with industry, government and their elite science and engineering institutions. Together, they conspired to make those fighting for racial justice the problem that their new computing machines would be designed to solve. And solve that problem they did, through color-coded, automated, and algorithmically-driven indignities and inumahities that thrive to this day. But what if yesterday's technological elite had listened to those Other voices? What if they had let them into their conversations, their classrooms, their labs, boardrooms and government task forces to help determine what new tools to build, how to build them and - most importantly - how to deploy them? What might our world look like today if the advocates for racial justice had been given the chance to frame the day's most preeminent technological question for the world and ask, \"Computerize the Race Problem?\" Better yet, what might our AI-driven future look like if we ask ourselves this question today?","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73210840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards Just, Fair and Interpretable Methods for Judicial Subset Selection","authors":"Lingxiao Huang, Julia Wei, Elisa Celis","doi":"10.1145/3375627.3375848","DOIUrl":"https://doi.org/10.1145/3375627.3375848","url":null,"abstract":"In many judicial systems -- including the United States courts of appeals, the European Court of Justice, the UK Supreme Court and the Supreme Court of Canada -- a subset of judges is selected from the entire judicial body for each case in order to hear the arguments and decide the judgment. Ideally, the subset selected is representative, i.e., the decision of the subset would match what the decision of the entire judicial body would have been had they all weighed in on the case. Further, the process should be fair in that all judges should have similar workloads, and the selection process should not allow for certain judge's opinions to be silenced or amplified via case assignments. Lastly, in order to be practical and trustworthy, the process should also be interpretable, easy to use, and (if algorithmic) computationally efficient. In this paper, we propose an algorithmic method for the judicial subset selection problem that satisfies all of the above criteria. The method satisfies fairness by design, and we prove that it has optimal representativeness asymptotically for a large range of parameters and under noisy information models about judge opinions -- something no existing methods can provably achieve. We then assess the benefits of our approach empirically by counterfactually comparing against the current practice and recent alternative algorithmic approaches using cases from the United States courts of appeals database.","PeriodicalId":93612,"journal":{"name":"Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82759489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}