First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13637
Esther Mwema, Abeba Birhane
{"title":"Undersea cables in Africa: The new frontiers of digital colonialism","authors":"Esther Mwema, Abeba Birhane","doi":"10.5210/fm.v29i4.13637","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13637","url":null,"abstract":"The Internet has become the backbone of the social fabric. The United Nations Human Rights Council declared access to the Internet a fundamental human right over a decade ago. Yet, Africa remains the region with the widest Digital Divide, where most of the population is either sparsely connected or has no access to the Internet. This has in turn created a race amongst Western big tech corporations scrambling to “bridge the Digital Divide”. Although the Internet is often portrayed as something that resides in the “cloud”, it heavily depends on physical infrastructure, including undersea cables. In this paper, we examine how current undersea cable projects and Internet infrastructure, owned, controlled, and managed by private Western big tech corporations, often using the “bridging the Digital Divide” rhetoric, not only replicate colonial logic but also follow the same infrastructural path laid during the trans-Atlantic slave trade era. Despite their significant impact on the continent’s digital infrastructure, we find that publicly available information is scarce and that undersea cable projects are carried out with no oversight and little transparency. We review the historical evolution of the Internet, detail and track the development of undersea cables in Africa, and illustrate their tight connection with colonial legacies. We provide an in-depth analysis of two current major undersea cable undertakings across the continent: Google’s Equiano and Meta’s 2Africa. Using Google and Meta’s undersea cables as case studies, we illustrate how these projects follow colonial logic, create a new cost model that keeps African nations under perpetual debt, and serve as infrastructure for mass data harvesting while bringing little benefit to the Global South. 
We conclude with actionable recommendations for and demands from big tech corporations, regulatory bodies, and governments across the African continent.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"143 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140707050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13642
Margaret Young, Upol Ehsan, Ranjit Singh, Emnet Tafesse, Michele Gilman, Christina Harrington, Jacob Metcalf
{"title":"Participation versus scale: Tensions in the practical demands on participatory AI","authors":"Margaret Young, Upol Ehsan, Ranjit Singh, Emnet Tafesse, Michele Gilman, Christina Harrington, Jacob Metcalf","doi":"10.5210/fm.v29i4.13642","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13642","url":null,"abstract":"Ongoing calls from academic and civil society groups and regulatory demands for the central role of affected communities in development, evaluation, and deployment of artificial intelligence systems have created the conditions for an incipient “participatory turn” in AI. This turn encompasses a wide range of approaches — from legal requirements for consultation with civil society groups and community input in impact assessments, to methods for inclusive data labeling and co-design. However, more work remains in adapting the methods of participation to the scale of commercial AI. In this paper, we highlight the tensions between the localized engagement of community-based participatory methods, and the globalized operation of commercial AI systems. Namely, the scales of commercial AI and participatory methods tend to differ along the fault lines of (1) centralized to distributed development; (2) calculable to self-identified publics; and (3) instrumental to intrinsic perceptions of the value of public input. However, a close look at these differences in scale demonstrates that these tensions are not irresolvable but contingent. We note that beyond its reference to the size of any given system, scale serves as a measure of the infrastructural investments needed to extend a system across contexts. To scale for a more participatory AI, we argue that these same tensions become opportunities for intervention by offering case studies that illustrate how infrastructural investments have supported participation in AI design and governance. 
Just as scaling commercial AI has required significant investments, we argue that scaling participation accordingly will require the creation of infrastructure dedicated to the practical dimension of achieving the participatory tradition’s commitment to shifting power.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"206 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140704671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13636
Timnit Gebru, Émile P. Torres
{"title":"The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence","authors":"Timnit Gebru, Émile P. Torres","doi":"10.5210/fm.v29i4.13636","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13636","url":null,"abstract":"The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. 
We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"70 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140704910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13643
Jenna Burrell, Jacob Metcalf
{"title":"Introduction for the special issue of “Ideologies of AI and the consolidation of power”: Naming power","authors":"Jenna Burrell, Jacob Metcalf","doi":"10.5210/fm.v29i4.13643","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13643","url":null,"abstract":"This introductory essay for the special issue of First Monday, “Ideologies of AI and the consolidation of power,” considers how power operates in AI and machine learning research and publication. Drawing on themes from the seven contributions to this special issue, we argue that what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science. We argue that naming and grappling with this power, and the troubled history of core commitments behind the pursuit of general artificial intelligence, is necessary for the integrity of the field and the well-being of the people whose lives are impacted by AI.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"174 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140706597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13630
Jenna Burrell
{"title":"Automated decision-making as domination","authors":"Jenna Burrell","doi":"10.5210/fm.v29i4.13630","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13630","url":null,"abstract":"Machine learning ethics research is demonstrably skewed. Work that defines fairness as a matter of distribution or allocation and that proposes computationally tractable definitions of fairness has been overproduced and overpublished. This paper takes a sociological approach to explain how subtle processes of social reproduction within the field of computer science partially explain this outcome. Arguing that allocative fairness is inherently limited as a definition of justice, I point to how researchers in this area can make broader use of the intellectual insights from political philosophy, philosophy of knowledge, and feminist and critical race theories. I argue that a definition of injustice not as allocative unfairness but as domination, drawing primarily from the argument of philosopher Iris Marion Young, would better explain observations of algorithmic harm that are widely acknowledged in this research community. This alternate definition expands the solution space for algorithmic justice to include other forms of consequential action beyond code fixes, such as legislation, participatory assessments, forms of user repurposing and resistance, and activism that leads to bans on certain uses of technology.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"29 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140705621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13626
Shazeda Ahmed, Klaudia Jaźwińska, Archana Ahlawat, Amy Winecoff, Mona Wang
{"title":"Field-building and the epistemic culture of AI safety","authors":"Shazeda Ahmed, Klaudia Jaźwińska, Archana Ahlawat, Amy Winecoff, Mona Wang","doi":"10.5210/fm.v29i4.13626","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13626","url":null,"abstract":"The emerging field of “AI safety” has attracted public attention and large infusions of capital to support its implied promise: the ability to deploy advanced artificial intelligence (AI) while reducing its gravest risks. Ideas from effective altruism, longtermism, and the study of existential risk are foundational to this new field. In this paper, we contend that overlapping communities interested in these ideas have merged into what we refer to as the broader “AI safety epistemic community,” which is sustained through its mutually reinforcing community-building and knowledge production practices. We support this assertion through an analysis of four core sites in this community’s epistemic culture: 1) online community-building through Web forums and career advising; 2) AI forecasting; 3) AI safety research; and 4) prize competitions. The dispersal of this epistemic community’s members throughout the tech industry, academia, and policy organizations ensures their continued input into global discourse about AI. 
Understanding the epistemic culture that fuses their moral convictions and knowledge claims is crucial to evaluating these claims, which are gaining influence in critical, rapidly changing debates about the harms of AI and how to mitigate them.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"66 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140704780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13628
Abeba Birhane, J. V. Dijk, Frank Pasquale
{"title":"Debunking robot rights metaphysically, ethically, and legally","authors":"Abeba Birhane, J. V. Dijk, Frank Pasquale","doi":"10.5210/fm.v29i4.13628","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13628","url":null,"abstract":"In this work we challenge the argument for robot rights on metaphysical, ethical and legal grounds. Metaphysically, we argue that machines are not the kinds of things that may be denied or granted rights. Building on theories of phenomenology and post-Cartesian approaches to cognitive science, we ground our position in the lived reality of actual humans in an increasingly ubiquitously connected, controlled, digitized, and surveilled society. Ethically, we argue that, given machines’ current and potential harms to the most marginalized in society, limits on (rather than rights for) machines should be at the centre of the current AI ethics debate. From a legal perspective, the best analogy to robot rights is not human rights but corporate rights, a highly controversial concept whose most important effect has been the undermining of worker, consumer, and voter rights by advancing the power of capital to exercise outsized influence on politics and law. The idea of robot rights, we conclude, acts as a smoke screen, allowing theorists and futurists to fantasize about benevolently sentient machines with unalterable needs and desires protected by law. 
While such fantasies have motivated fascinating fiction and art, once they influence legal theory and practice articulating the scope of rights claims, they threaten to immunize from legal accountability the current AI and robotics that are fuelling surveillance capitalism, accelerating environmental destruction, and entrenching injustice and human suffering.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140705952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-04-14 — DOI: 10.5210/fm.v29i4.13620
Norah Abokhodair, Yarden Skop, Sarah Rüller, Konstantin Aal, Houda Elmimouni
{"title":"Opaque algorithms, transparent biases: Automated content moderation during the Sheikh Jarrah Crisis","authors":"Norah Abokhodair, Yarden Skop, Sarah Rüller, Konstantin Aal, Houda Elmimouni","doi":"10.5210/fm.v29i4.13620","DOIUrl":"https://doi.org/10.5210/fm.v29i4.13620","url":null,"abstract":"Social media platforms, while influential tools for human rights activism, free speech, and mobilization, also bear the influence of corporate ownership and commercial interests. This dual character can lead to clashing interests in the operations of these platforms. This study centers on the May 2021 Sheikh Jarrah events in East Jerusalem, a focal point in the Israeli-Palestinian conflict that garnered global attention. During this period, Palestinian activists and their allies observed and encountered a notable increase in automated content moderation actions, like shadow banning and content removal. We surveyed 201 users who faced content moderation and conducted 12 interviews with political influencers to assess the impact of these practices on activism. Our analysis centers on automated content moderation and transparency, investigating how users and activists perceive the content moderation systems employed by social media platforms, and their opacity. Findings reveal that pro-Palestinian activists perceived censorship through opaque and obfuscated technological mechanisms of content demotion, which complicate the substantiation of harm and leave users without mechanisms for redress. We view this difficulty as part of algorithmic harms, in the realm of automated content moderation. 
This dynamic has far-reaching implications for the future of activism and raises questions about the centralization of power in digital spaces.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"161 s1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140706638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-03-09 — DOI: 10.5210/fm.v29i3.13571
Sarah Young, Catherine Brooks, J. Pridmore
{"title":"Societal implications of quantum technologies through a technocriticism of quantum key distribution","authors":"Sarah Young, Catherine Brooks, J. Pridmore","doi":"10.5210/fm.v29i3.13571","DOIUrl":"https://doi.org/10.5210/fm.v29i3.13571","url":null,"abstract":"Advancement in quantum networking is becoming increasingly sophisticated, with some arguing that a working quantum network may be reached by 2030. Just how these networks can and will come to be is still a work in progress, including how communications within those networks will be secured. While debates about the development of quantum networking often focus on technical specifications, less is written about their social impacts and the myriad ways individuals can engage in conversations about quantum technologies, especially in non-technical ways. Spaces for legal, humanist or behavioral scholars to weigh in on the impacts of this emerging capability do exist, and using the example of criticism of the quantum protocol quantum key distribution (QKD), this paper illustrates five entry points for non-technical experts to help technical, practical, and scholarly communities prepare for the anticipated quantum revolution. QKD was selected as an area of critique due to its established position as an application of quantum properties that reaches beyond theoretical applications.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"224 S728","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First Monday — Pub Date: 2024-03-09 — DOI: 10.5210/fm.v29i3.12497
Mariya Kozharinova, Lev Manovich
{"title":"Instagram as a narrative platform","authors":"Mariya Kozharinova, Lev Manovich","doi":"10.5210/fm.v29i3.12497","DOIUrl":"https://doi.org/10.5210/fm.v29i3.12497","url":null,"abstract":"Even though Instagram has been the subject of numerous studies, none of them have systematically investigated its potential as a narrative medium. This article argues that Instagram’s narrative capabilities are comparable to those of literature and film. To support our claims, we analyze a number of prominent female Instagram creators and demonstrate how they employ the platform’s diverse features, functionalities, and interface to create multi-year biographical narratives. Furthermore, we discuss the applicability of theories developed in literary and film studies in analyzing Instagram’s narrative capabilities. By employing Bakhtin’s influential chronotope concept, we examine in depth how these narratives make specific use of space and time. Additionally, we compare time construction in film and Instagram narratives using cinema studies’ theories of narrative time in movies.","PeriodicalId":38833,"journal":{"name":"First Monday","volume":"270 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}