{"title":"Better Anticipating Unintended Consequences","authors":"Clinton J. Andrews","doi":"10.1109/TTS.2024.3403412","DOIUrl":"https://doi.org/10.1109/TTS.2024.3403412","url":null,"abstract":"If people want the benefits of innovations, must they simply accept the unintended adverse consequences? Versions of this question haunt many who care about the social implications of technology. Technological design processes could include impact assessment steps, but not all do. Adoption in the marketplace may ignore spillover effects. Jurisprudence is often reactive and focused on remediating obvious wrongs. Public policy also often requires evidence of harm before legislators or administrators are willing to act. The failure to anticipate adverse consequences is sometimes framed as a moral lapse, but it could equally be about competence or incentives. This paper considers the relative merits of methodology (analogizing, interpolating, projecting,) and procedure (reflecting, reasoning, discourse) as systematic approaches to anticipating unintended consequences of innovation. It weighs the efficacy of such approaches against current reactive remedies, highlighting the importance of tailoring approach to context, and building in early learning opportunities (observing and testing). Several examples suggest that society is often playing catch-up and trying to avoid adverse consequences before the innovation is widely deployed rather than before it is initially introduced.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"205-216"},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10535391","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Human(e) Technology Design Studios: An Action-Oriented, Co-Creative Modality for Centering the Human in Critical Technology Discussions","authors":"Erica O’Neil;Elizabeth Grumbach;Gaymon Bennett;Elizabeth Langland","doi":"10.1109/TTS.2024.3378057","DOIUrl":"https://doi.org/10.1109/TTS.2024.3378057","url":null,"abstract":"The Human(e) Technology Design Studio is a discourse-driven, action-oriented modality developed by the Lincoln Center for Applied Ethics at Arizona State University to shape generative opportunities for critical technology discussions with user groups closest to the problem. We outline the rationale for the creation of this modality, with theoretical commitments rooted in the domains of participatory action research and co-creation, as well as the design aspirations informing the studios’ rhythms of insight identification, integration, and activation. We then present a detailed case study of this model that outlines the collective insights and actions generated by our first cohort of academics and technologists across six Design Studios, which culminated in the creation of a Humane Tech Oracle Deck. That two-year process allowed us to iterate the model in response to challenges, as we now move toward creating a public Design Studio toolkit.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"24-35"},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Framework for the Interpretable Modeling of Household Wealth in Rural Communities From Satellite Data","authors":"Emily J. Zuetell;Paulina Jaramillo","doi":"10.1109/TTS.2024.3377541","DOIUrl":"https://doi.org/10.1109/TTS.2024.3377541","url":null,"abstract":"Data-driven policy development and investment are necessary for aligning policies across administrative levels, targeting interventions, and meeting the 2030 Sustainable Development Goals. However, local-level economic well-being data at timely intervals, critical to informing policy development and ensuring equity of outcomes, are unavailable in many parts of the world. Yet, filling these data gaps with black-box predictive models like neural networks introduces risk and inequity to the decision- making process. In this work, we construct an alternative interpretable model to these black-box models to predict household wealth, a key socioeconomic well-being indicator, at 5-km scale from widely available satellite data. Our interpretable model promotes transparency, the identification of potential drivers of bias and harmful outcomes, and inclusive design for human-ML decision-making. We model household wealth as a low- order function of productive land use that can be interpreted and integrated with domain knowledge by decision-makers. We aggregate remotely sensed land cover change data from 2006–2019 to construct an interpretable linear regression model for household wealth and wealth change in Uganda at a 5-km scale with \u0000<inline-formula> <tex-math>$r^{2},,{=}$ </tex-math></inline-formula>\u0000 72%. Our results demonstrate that there is not a clear performance-interpretability tradeoff in modeling household wealth from satellite imagery at high spatial and temporal resolution. Finally, we recommend a tiered framework to model socioeconomic outcomes from remote sensing data.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"36-44"},"PeriodicalIF":0.0,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rating Sentiment Analysis Systems for Bias Through a Causal Lens","authors":"Kausik Lakkaraju;Biplav Srivastava;Marco Valtorta","doi":"10.1109/TTS.2024.3375519","DOIUrl":"https://doi.org/10.1109/TTS.2024.3375519","url":null,"abstract":"Sentiment Analysis Systems (SASs) are data-driven Artificial Intelligence (AI) systems that assign one or more numbers to convey the polarity and emotional intensity of a given piece of text. However, like other automatic machine learning systems, SASs can exhibit model uncertainty, resulting in drastic swings in output with even small changes in input. This issue becomes more problematic when inputs involve protected attributes like gender or race, as it can be perceived as bias or unfairness. To address this, we propose a novel method to assess and rate SASs. We perturb inputs in a controlled causal setting to test if the output sentiment is sensitive to protected attributes while keeping other components of the textual input, such as chosen emotion words, fixed. Based on the results, we assign labels (ratings) at both fine-grained and overall levels to indicate the robustness of the SAS to input changes. The ratings can help decision-makers improve online content by reducing hate speech, often fueled by biases related to protected attributes such as gender and race. These ratings provide a principled basis for comparing SASs and making informed choices based on their behavior. The ratings also benefit all users, especially developers who reuse off-the-shelf SASs to build larger AI systems but do not have access to their code or training data to compare.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"82-92"},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Data Valuation: Valuing Google’s Data Assets","authors":"Kean Birch;Sarah Marquis;Guilherme Cavalcante Silva","doi":"10.1109/TTS.2024.3398400","DOIUrl":"https://doi.org/10.1109/TTS.2024.3398400","url":null,"abstract":"Digital personal data are increasingly understood as a key asset in our digital economies. But how should we value such data? Numerous policymakers, regulators, and stakeholders are trying to work out how to manage the collection, use, and valuation of data in order to balance the advantages and disadvantages of its collection and use. The negative implications of data practices may include privacy loss, data breaches, or declining market competition, while social and economic benefits include improved service delivery, more efficient welfare systems, or better products. Increasingly, data are conceptualized as an asset. To understand the value of data as an asset means understanding how data are configured as an asset; data value does not reflect ownership and property rights per se, but rather diverse modes of access and use restrictions (usually delineated by opaque contractual agreements). Data are increasingly controlled by a few, large digital technology firms, especially so-called ‘Big Tech’ firms. In this paper, we use Google as a case study of how Big Tech firms configure and value digital data as an asset. We analyse how Google understands, frames, values, and monetizes the data they collect from users. We qualitatively analyse an extensive dataset of financial documentary materials produced by and about Google to identify the different modes of access and use restrictions that Google deploys to turn digital data into a valuable asset. We conclude that, despite being highly ambiguous, Google’s approach to data value focuses on monetizing users.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"183-190"},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10525235","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social and Ethical Norms in Annotation Task Design","authors":"Razvan Amironesei;Mark Díaz","doi":"10.1109/TTS.2024.3374639","DOIUrl":"https://doi.org/10.1109/TTS.2024.3374639","url":null,"abstract":"The development of many machine learning (ML) and artificial intelligence (AI) systems depends on human-labeled data. Human-provided labels act as tags or enriching information that enable algorithms to more easily learn patterns in data in order to train or evaluate a wide range of AI systems. These annotations ultimately shape the behavior of AI systems. Given the scale of ML datasets, which can contain thousands to billions of data points, cost and efficiency play a major role in how data annotations are collected. Yet, important challenges arise between the goals of meeting scale-related needs while also collecting data in a way that reflects real-world nuance and variation. Annotators are typically treated as interchangeable workers who provide a ‘view from nowhere’. We question assumptions of universal ground truth by focusing on the social and ethical aspects that shape annotation task design.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"45-47"},"PeriodicalIF":0.0,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Future Role of Clinical Artificial Intelligence: View of Chronic Patients","authors":"Bijun Wang;Onur Asan;Ting Liao;Mo Mansouri","doi":"10.1109/TTS.2024.3374647","DOIUrl":"https://doi.org/10.1109/TTS.2024.3374647","url":null,"abstract":"Artificial intelligence (AI) can transform various aspects of healthcare, including diagnosis, treatment, monitoring, and preventative care. Patients’ attitudes and views are considered critical factors for the development and success of AI-based technology in healthcare delivery. This study seeks to explore the chronic patients’ perceptions, including their knowledge, concerns regarding misuse and abuse, their attitude toward AI involvement, and their views on the future role of AI in healthcare delivery. Using the convenience sampling technique, 219 chronic-condition participants completed an online survey. This study leveraged Hayes Process Macro to develop a moderated mediation model to analyze the collected data. Our results showed that patients’ knowledge of AI did not directly influence their perceptions of the future of AI in healthcare. Nonetheless, the evidence from the mediational analysis revealed an indirect effect, where concerns about AI misuse and abuse and extensive AI involvement played a role in that. Additionally, the level of trustworthiness moderated the relationship between acceptance of extensive AI involvement and patients’ perception of AI’s future role. These findings highlight the importance of considering patients’ views and attitudes towards AI and addressing any concerns or fears they may have in order to build trust and confidence in clinical AI systems, which can ultimately lead to better health outcomes.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"71-81"},"PeriodicalIF":0.0,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When AI Fails, Who Do We Blame? Attributing Responsibility in Human–AI Interactions","authors":"Jordan Richard Schoenherr;Robert Thomson","doi":"10.1109/TTS.2024.3370095","DOIUrl":"https://doi.org/10.1109/TTS.2024.3370095","url":null,"abstract":"While previous studies of trust in artificial intelligence have focused on perceived user trust, the paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios about automation failures and assigned perceived responsibility, trustworthiness, and preferred explanation type. Participants’ cumulative responsibility ratings for three agents (operators, developers, and AI) exceeded 100%, implying that participants were not attributing trust in a wholly rational manner, and that trust in the AI might serve as a proxy for trust in the human software developer. Dissociation between responsibility and trustworthiness suggested that participants used different cues, with the kind of technology and perceived autonomy affecting judgments. Finally, we additionally found that the kind of explanation used to understand a situation differed based on whether the AI succeeded or failed.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"61-70"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Public Interest Technology for Innovation in Global Development: Recommendations for Launching PIT Projects","authors":"Roba Abbas;Katina Michael;Dinara Davlembayeva;Savvas Papagiannidis;Jeremy Pitt","doi":"10.1109/TTS.2024.3375431","DOIUrl":"https://doi.org/10.1109/TTS.2024.3375431","url":null,"abstract":"This paper serves as an Introduction to the Public Interest Technology (PIT) for Innovation in Global Development Special Issue, based on a workshop of the same title held in September 2023. The paper’s contribution is in proposing recommendations and practical guidance to aid in launching PIT projects. We begin by situating the Special Issue in evolving definitions of PIT in \u0000<xref>Section II</xref>\u0000, followed by an overview of the PIT ecosystem in \u0000<xref>Section III</xref>\u0000 to offer a succinct account of the current state of PIT scholarship. The corresponding links to the innovation in global development context are subsequently described in \u0000<xref>Section IV</xref>\u0000, in keeping with the theme of the workshop. These links relate to an overview of adjacent fields and concepts; an illustrative example in the information technology for development (ICT4D) field; the identification of gaps in current PIT scholarship; and the preliminary questions that require attention. Next, \u0000<xref>Section V</xref>\u0000 presents workshop outcomes, in the form of a general overview of the event; the identification of prevalent themes emerging from and / or are reinforced in the workshop; and a summary of Special Issue papers. The workshop is used as an interdisciplinary catalyst for the explication of more recent PIT developments. These developments are encapsulated in ten recommendations for launching PIT projects in \u0000<xref>Section VI</xref>\u0000, intended to direct PIT project managers or lead investigators prior to project launch or during the initial stages of a project.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"14-23"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539349","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Failures in the Loop: Human Leadership in AI-Based Decision-Making","authors":"Katina Michael;Jordan Richard Schoenherr;Kathleen M. Vogel","doi":"10.1109/TTS.2024.3378587","DOIUrl":"https://doi.org/10.1109/TTS.2024.3378587","url":null,"abstract":"The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” \u0000<xref>[1]</xref>\u0000. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” \u0000<xref>[2]</xref>\u0000. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an \u0000<italic>all-or-nothing</i>\u0000 approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features \u0000<xref>[3]</xref>\u0000. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ \u0000<xref>[4]</xref>\u0000, \u0000<xref>[5]</xref>\u0000. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes \u0000<xref>[6]</xref>\u0000. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"2-13"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539317","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}