{"title":"Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility","authors":"D. Widder, D. Nafus","doi":"10.1177/20539517231177620","DOIUrl":"https://doi.org/10.1177/20539517231177620","url":null,"abstract":"Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's “located accountability” to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. 
We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low status staff or believed to be the work of the next or previous person in the imagined “supply chain.” We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44303049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effectiveness of embedded values analysis modules in Computer Science education: An empirical study","authors":"Matthew Kopec, Meica Magnani, Vance Ricks, R. Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, R. Sandler, Christopher D. Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Ryan Baylon, Kevin Mills, Marcy Wells","doi":"10.1177/20539517231176230","DOIUrl":"https://doi.org/10.1177/20539517231176230","url":null,"abstract":"Embedding ethics modules within computer science courses has become a popular response to the growing recognition that computer science programs need to better equip their students to navigate the ethical dimensions of computing technologies such as artificial intelligence, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern University's program that embeds values analysis modules into computer science courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they will likely face in their eventual careers. 
Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":"10 1","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43917079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a political economy of technical systems: The case of Google","authors":"Bernhard Rieder","doi":"10.1177/20539517221135162","DOIUrl":"https://doi.org/10.1177/20539517221135162","url":null,"abstract":"This research commentary proposes a conceptual framework for studying big tech companies as “technical systems” that organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. Drawing on the study of Large Technical Systems (LTS), on the work of historian Bertrand Gille, and on the economics of General Purpose Technologies (GPTs), it outlines a way to study the “tech” in “big tech” more attentively, looking for compatibilities, synergies, and dependencies between the technologies created and deployed by these companies. Using Google as example, the paper shows how to interrogate software and hardware through the lens of transversal applicability, discusses software and hardware integration, and proposes the notion of “data amalgams” to contextualize and complicate the notion of data. The goal is to complement existing vectors of “big tech” critique with a perspective sensitive to the specific materialities of specific technologies and their possible consequences.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47079658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders","authors":"Ana Valdivia, Claudia Aradau, Tobias Blanke, S. Perret","doi":"10.1177/20539517221124586","DOIUrl":"https://doi.org/10.1177/20539517221124586","url":null,"abstract":"In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had been previously involved in the development of databases for border and migration management. While there has been a growing amount of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. 
In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41974968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science","authors":"Z. Tacheva","doi":"10.1177/20539517221112901","DOIUrl":"https://doi.org/10.1177/20539517221112901","url":null,"abstract":"Through a critical analysis of recent developments in the theory and practice of data science, including nascent feminist approaches to data collection and analysis, this commentary aims to signal the need for a transnational feminist orientation towards data science. I argue that while much needed in the context of persistent algorithmic oppression, a Western feminist lens limits the scope of problems, and thus—solutions, critical data scholars, and scientists can consider. A resolutely transnational feminist approach on the other hand, can provide data theorists and practitioners with the hermeneutic tools necessary to identify and disrupt instances of injustice in a more inclusive and comprehensive manner. A transnational feminist orientation to data science can pay particular attention to the communities rendered most vulnerable by algorithmic oppression, such as women of color and populations in non-Western countries. I present five ways in which transnational feminism can be leveraged as an intervention into the current data science canon.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42126394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning accountable governance: Challenges and perspectives for data-intensive health research networks","authors":"Sam H A Muller, M. Mostert, J. V. van Delden, Thomas Schillemans, G. V. van Thiel","doi":"10.1177/20539517221136078","DOIUrl":"https://doi.org/10.1177/20539517221136078","url":null,"abstract":"Current challenges to sustaining public support for health data research have directed attention to the governance of data-intensive health research networks. Accountability is hailed as an important element of trustworthy governance frameworks for data-intensive health research networks. Yet the extent to which adequate accountability regimes in data-intensive health research networks are currently realized is questionable. Current governance of data-intensive health research networks is dominated by the limitations of a drawing board approach. As a way forward, we propose a stronger focus on accountability as learning to achieve accountable governance. As an important step in that direction, we provide two pathways: (1) developing an integrated structure for decision-making and (2) establishing a dialogue in ongoing deliberative processes. Suitable places for learning accountability to thrive are dedicated governing bodies as well as specialized committees, panels or boards which bear and guide the development of governance in data-intensive health research networks. 
A continuous accountability process which comprises learning and interaction accommodates the diversity of expectations, responsibilities and tasks in data-intensive health research networks to achieve responsible and effective governance.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46307312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading","authors":"C. Borch, Bo Hee Min","doi":"10.1177/20539517221111361","DOIUrl":"https://doi.org/10.1177/20539517221111361","url":null,"abstract":"Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48201896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social data governance: Towards a definition and model","authors":"Jun Liu","doi":"10.1177/20539517221111352","DOIUrl":"https://doi.org/10.1177/20539517221111352","url":null,"abstract":"With the surge in the number of data and datafied governance initiatives, arrangements, and practices across the globe, understanding various types of such initiatives, arrangements, and their structural causes has become a daunting task for scholars, policy makers, and the public. This complexity additionally generates substantial difficulties in considering different data(fied) governances commensurable with each other. To advance the discussion, this study argues that existing scholarship is inclined to embrace an organization-centric perspective that primarily concerns factors and dynamics regarding data and datafication at the organizational level at the expense of macro-level social, political, and cultural factors of both data and governance. To explicate the macro, societal dimension of data governance, this study then suggests the term “social data governance” to bring forth the consideration that data governance not only reflects the society from which it emerges but also (re)produces the policies and practices of the society in question. Drawing on theories of political science and public management, a model of social data governance is proposed to elucidate the ideological and conceptual groundings of various modes of governance from a comparative perspective. This preliminary model, consisting of a two-dimensional continuum, state intervention and societal autonomy for the one, and national cultures for the other, accounts for variations in social data governance across societies as a complementary way of conceptualizing and categorizing data governance beyond the European standpoint. 
Finally, we conduct an extreme case study of governing digital contact-tracing techniques during the pandemic to exemplify the explanatory power of the proposed model of social data governance.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47421750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Privacy at risk? Understanding the perceived privacy protection of health code apps in China","authors":"Gejun Huang, A. Hu, Wenhong Chen","doi":"10.1177/20539517221135132","DOIUrl":"https://doi.org/10.1177/20539517221135132","url":null,"abstract":"As a key constituent of China's approach to fighting COVID-19, Health Code apps (HCAs) not only serve the pandemic control imperatives but also exercise the agency of digital surveillance. As such, HCAs pave a new avenue for ongoing discussions on contact tracing solutions and privacy amid the global pandemic. This article attends to the perceived privacy protection among HCA users via the lens of the contextual integrity theory. Drawing on an online survey of adult HCA users in Wuhan and Hangzhou (N = 1551), we find users’ perceived convenience, attention towards privacy policy, trust in government, and acceptance of government purposes regarding HCA data management are significant contributors to users’ perceived privacy protection in using the apps. By contrast, users’ frequency of mobile privacy protection behaviors has limited influence, and their degrees of perceived protection do not vary by sociodemographic status. These findings shed new light on China's distinctive approach to pandemic control with respect to the state's expansion of big data-driven surveillance capacity. Also, the findings foreground the heuristic value of contextual integrity theory to examine controversial digital surveillance in non-Western contexts. 
Put together, our findings contribute to the thriving scholarly conversations around digital privacy and surveillance in China, as well as contact tracing solutions and privacy amid the global pandemic.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43299140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital phenotyping – Editorial","authors":"Lukas Engelmann, G. Wackers","doi":"10.1177/20539517221113775","DOIUrl":"https://doi.org/10.1177/20539517221113775","url":null,"abstract":"There is an astonishing posthuman promise in digital phenotyping, as Beth Semel recently argued (Semel, 2022). The goal of digital phenotyping enthusiasts is no less than to bypass the human observer as a deeply flawed threshold of medical knowledge production. The second goal is then – ultimately – to rid the human body and mind of its frailty and to utilise technology for a ‘world without disease’ (Topol and Corr, 2019). This promissory rhetoric is not only geared towards the disruption of dated medical conventions but comes equipped with bold, revolutionary concepts. Objective knowledge, based on aggregated, automated, and sweeping data collection to deliver granular, minute, and personalised healthcare; digital phenotyping is a collection of ideas, technologies, and practices to realise a powerful and futuristic vision of a medicine far beyond human capacities. This posthuman promise might be naive and driven by an abundant positivism, but as a small movement, made up of medical researchers and digital disruptors alike, it has continuously gathered steam over the last decade. The purpose of this collection is foremost to take stock and to collect a range of critical questions for a first revision of what digital phenotyping might be and what it could potentially become. The meaning of digital phenotyping is not as well defined as the many publications in this growing body of scholarship might suggest. Some of that vagueness has been captured in the critical literature. Birk and Samuel, in their sociological analysis, have described the term recently in more general terms as an analytical concept that presumes simply that diseases and illness are by and large ‘measurable by digital devices’ (Birk and Samuel, 2020). 
This assumes that a person’s experience of any kind of suffering is always in one way or another expressed in the digital traces of their behaviour. The leg injury that might result in a different mobility pattern; measurable tremors in the thumb control of smartphones as a sign of Parkinson’s; sudden lack of social interaction as a sign of depression: digital phenotypes can in theory be defined for any illness and disease and captured by any of the sensors, devices, and technologies, through which humans leave digital traces. Loi, in his ethical and philosophical exploration of the digital phenotype, assumes it in more general terms to be ‘an assemblage of information in digital form, that humans produce intentionally or as a by-product of other activities, and which affects human behaviour’ (Loi, 2018). Many questions remain, not least why and how this concept seeks association with genetic terminology. What does the wholesale capturing of a human’s digital traces as phenotype imply? What does it mean to group a sheer endless range of symptoms within the paradigm of inheritable traits and how does this framing structure research on and with digital phenotypes? The phrase itself was coined by the physician Sachin Jain and colleague","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41616997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}