{"title":"Digital humanism as a bottom-up ethics","authors":"Gemma Serrano , Francesco Striano , Steven Umbrello","doi":"10.1016/j.jrt.2024.100082","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100082","url":null,"abstract":"<div><p>In this paper, we explore a new perspective on digital humanism, emphasizing the centrality of multi-stakeholder dialogues and a bottom-up approach to surfacing stakeholder values. This approach starkly contrasts with existing frameworks, such as the Vienna Manifesto's top-down digital humanism, which hinges on pre-established first principles. Our approach provides a more flexible, inclusive framework that captures a broader spectrum of ethical considerations, particularly those pertinent to the digital realm. We apply our model to two case studies, comparing the insights generated with those derived from a utilitarian perspective and the Vienna Manifesto's approach. The findings underscore the enhanced effectiveness of our approach in revealing additional, often overlooked stakeholder values, not typically encapsulated by traditional top-down methodologies. Furthermore, this paper positions our digital humanism approach as a powerful tool for framing ethics-by-design, by promoting a narrative that empowers and centralizes stakeholders. As a result, it paves the way for more nuanced, comprehensive ethical considerations in the design and implementation of digital technologies, thereby enriching the existing literature on digital ethics and setting a promising agenda for future research.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100082"},"PeriodicalIF":0.0,"publicationDate":"2024-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000088/pdfft?md5=a40431af04a93c455298d3e1eacfeb46&pid=1-s2.0-S2666659624000088-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140330847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do we really need a “Digital Humanism”? A critique based on post-human philosophy of technology and socio-legal techniques","authors":"Federica Buongiorno , Xenia Chiaramonte","doi":"10.1016/j.jrt.2024.100080","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100080","url":null,"abstract":"<div><p>Few concepts have been subjected to as intense scrutiny in contemporary discourse as that of “humanism.” While these critiques have acknowledged the importance of retaining certain key aspects of humanism, such as rights, freedom, and human dignity, the term has assumed ambivalence, especially in light of post-colonial and gender studies, that cannot be ignored. The “Vienna Manifesto on Digital Humanism,” as well as the recent volume (2022) titled <em>Perspectives on Digital Humanism</em>, bear a complex imprint of this ambivalence. In this contribution, we aim to bring to the forefront and decipher this underlying trace, by considering alternative (non-humanistic) ways to understand human-technologies relations, beyond the dominant neoliberal paradigm (paragraphs 1 and 2); we then analyse those relations within the specific context of legal studies (paragraphs 3 and 4), one in which the interdependency of humans and non-humans shows a specific and complex form of “fundamental ambivalence.”</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100080"},"PeriodicalIF":0.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000064/pdfft?md5=a83279cb48841b221775aa3aa2b0256f&pid=1-s2.0-S2666659624000064-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140187836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligence as a human life form","authors":"Maurizio Ferraris","doi":"10.1016/j.jrt.2024.100081","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100081","url":null,"abstract":"<div><p>This text aims to counter the anxieties generated by the recent emergence of AI and the criticisms leveled at it, demanding its moralization. It does so by demonstrating that AI is neither new nor is it true intelligence but rather a tool, akin to many others that have long been serving human intelligence and its objectives. In what follows, I offer a broader reflection on technology that aims to contextualize the novelty and singularity attributed to AI within the history of technological developments. My ultimate goal is to relativize the novelty of AI, seeking to alleviate the moral anxieties it currently elicits and encouraging a more normal, optimistic view of it. The first step in understanding AI is indeed to realize that its novelty is only relative, and that AI has many ancestors that, upon closer examination, turn out to be closely related.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"18 ","pages":"Article 100081"},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000076/pdfft?md5=1b728ab83e058b5709581507a0c2ecfb&pid=1-s2.0-S2666659624000076-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140187837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inherently privacy-preserving vision for trustworthy autonomous systems: Needs and solutions","authors":"Adam K. Taras , Niko Sünderhauf , Peter Corke , Donald G. Dansereau","doi":"10.1016/j.jrt.2024.100079","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100079","url":null,"abstract":"<div><p>Vision is an effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, identity, and activity of humans within that scene. This raises important questions about the reach, lifespan, and misuse of this information. This paper is a call to action to consider privacy in robotic vision. We propose a specific form of inherent privacy preservation in which no images are captured or could be reconstructed by an attacker, even with full remote access. We present a set of principles by which such systems could be designed, employing data-destroying operations and obfuscation in the optical and analogue domains. These cameras <em>never</em> see a full scene. Our localisation case study demonstrates in simulation four implementations that all fulfil this task. The design space of such systems is vast despite the constraints of optical-analogue processing. We hope to inspire future works that expand the range of applications open to sighted robotic systems.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100079"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000052/pdfft?md5=4bc01eda85dc3576e713b1aa99ec1739&pid=1-s2.0-S2666659624000052-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139999894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exit (digital) humanity: Critical notes on the anthropological foundations of “digital humanism”","authors":"Antonio Lucci , Andrea Osti","doi":"10.1016/j.jrt.2024.100077","DOIUrl":"10.1016/j.jrt.2024.100077","url":null,"abstract":"<div><p>This paper evaluates the historical-anthropological and ethical underpinnings of the concept of “digital humanism.” Our inquiry begins with a reconstructive analysis (§1), focusing on three pivotal works defining digital humanism. The objective is to expose shared characteristics shaping the notions of “human being” and “humanity.” Moving forward, our investigation employs anthropological-evolutionary (§2) and individual-cognitive (§3) perspectives to discern how cultural-historical contingencies shape the implicit understanding of the “human being” that forms the foundation for digital humanism. As an illustrative case study, we delve into Luddism (§4) to illuminate the potential and limitations of adopting a critical stance towards digital humanism. Through a thorough analysis, encompassing both efficacy and implicit anthropological elements, our goal is to extract ethical implications (§5) pertinent to our broader objective. This examination reveals the interplay between cultural-historical contingencies and anthropological constants in shaping assumptions about the “human being” within the context of digital humanism. In conclusion, our paper contributes to a nuanced understanding of the implicit assumptions permeating the digital humanism discourse. We advocate for a more critical and reflective engagement with the foundational concepts of digital humanism, urging scholars and practitioners to navigate the complexities of its historical-anthropological and ethical dimensions.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100077"},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000039/pdfft?md5=1f267005df7fc3992c4564d52def1b64&pid=1-s2.0-S2666659624000039-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139891840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are we done with (Wordy) manifestos? Towards an introverted digital humanism","authors":"Giacomo Pezzano","doi":"10.1016/j.jrt.2024.100078","DOIUrl":"https://doi.org/10.1016/j.jrt.2024.100078","url":null,"abstract":"<div><p>Beginning with a reconstruction of the anthropological paradigms underlying <em>The Vienna Manifesto</em> and <em>The Onlife Manifesto</em> (§ 1.1), this paper distinguishes between two possible approaches to digital humanism: an <em>extroverted</em> one, principally engaged in finding a way to humanize digital technologies, and an <em>introverted</em> one, pointing instead attention to how digital technologies can re-humanize us, particularly our “mindframe” (§ 1.2). On this basis, I stress that if we take seriously the consequences of the “mediatic turn”, according to which human reason is finally recognized as mediatically contingent (§ 2.1), then we should accept that just as the book created the poietic context for the development of traditional humanism and its “bookish” idea of private and public reason, so too digital psycho-technologies today provide the conditions for the rise of a new humanism (§ 2.2). I then discuss the possible humanizing potential of digital simulated worlds: I compare the symbolic-reconstructive mindset to the sensorimotor mindset (§ 3.1), and I highlight their respective mediological association with the book and the video game, advocating for the peculiar thinking and reasoning affordances now offered by the new digital psycho-technologies (§ 3.2).</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100078"},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000040/pdfft?md5=bba4bc77d24cfec45f135507bd575f96&pid=1-s2.0-S2666659624000040-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139737874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"C-XAI: A conceptual framework for designing XAI tools that support trust calibration","authors":"Mohammad Naiseh , Auste Simkute , Baraa Zieni , Nan Jiang , Raian Ali","doi":"10.1016/j.jrt.2024.100076","DOIUrl":"10.1016/j.jrt.2024.100076","url":null,"abstract":"<div><p>Recent advancements in machine learning have spurred an increased integration of AI in critical sectors such as healthcare and criminal justice. The ethical and legal concerns surrounding fully autonomous AI highlight the importance of combining human oversight with AI to elevate decision-making quality. However, trust calibration errors in human-AI collaboration, encompassing instances of over-trust or under-trust in AI recommendations, pose challenges to overall performance. Addressing trust calibration in the design process is essential, and eXplainable AI (XAI) emerges as a valuable tool by providing transparent AI explanations. This paper introduces Calibrated-XAI (C-XAI), a participatory design framework specifically crafted to tackle both technical and human factors in the creation of XAI interfaces geared towards trust calibration in Human-AI collaboration. The primary objective of the C-XAI framework is to assist designers of XAI interfaces in minimising trust calibration errors at the design level. This is achieved through the adoption of a participatory design approach, which includes providing templates, guidance, and involving diverse stakeholders in the design process. The efficacy of C-XAI is evaluated through a two-stage evaluation study, demonstrating its potential to aid designers in constructing user interfaces with trust calibration in mind. Through this work, we aspire to offer systematic guidance to practitioners, fostering a responsible approach to eXplainable AI at the user interface level.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100076"},"PeriodicalIF":0.0,"publicationDate":"2024-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000027/pdfft?md5=b4038e407ec2450c0ab8e0c8949eebfe&pid=1-s2.0-S2666659624000027-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139633231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Principles of digital humanism: A critical post-humanist view","authors":"Erich Prem","doi":"10.1016/j.jrt.2024.100075","DOIUrl":"10.1016/j.jrt.2024.100075","url":null,"abstract":"<div><p>Digital humanism emerges from serious concerns about the way in which digitisation develops, its impact on society and on humans. While its motivation is clear and broadly accepted, it is still an emerging field that does not yet have a universally accepted definition. Also, it is not always clear how to differentiate digital humanism from other similar endeavours. In this article, we critically investigate the notion of digital humanism and present its main principles as shared by its key proponents. These principles include the quest for human dignity and the ideal of a better society based on core values of the Enlightenment.</p><p>The paper concludes that digital humanism is to be treated as a technical endeavour to shape digital technologies and use them for digital innovation, a political endeavour investigating power shifts triggered by digital technology, and, at the same time, as a philosophical endeavour including the quest to delineate its scope and to draw boundaries for the digital.</p><p>Methodologically, digital humanism is an interdisciplinary effort to debate a broad range of digitisation shortfalls in their totality, from privacy infringements to power shifts, from human alienation to disownment. While it overlaps with a range of established fields and other movements, digital humanism reflects a new academic, engineering, and societal awareness of the challenges of digital technologies.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100075"},"PeriodicalIF":0.0,"publicationDate":"2024-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659624000015/pdfft?md5=7ab59149ce4444a4e0475dabeed868de&pid=1-s2.0-S2666659624000015-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139640078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring value dilemmas of brain monitoring technology through speculative design scenarios","authors":"Martha Risnes , Erik Thorstensen , Peyman Mirtaheri , Arild Berg","doi":"10.1016/j.jrt.2023.100074","DOIUrl":"https://doi.org/10.1016/j.jrt.2023.100074","url":null,"abstract":"<div><p>In the field of brain monitoring, the advancement of more user-friendly wearable and non-invasive devices is introducing new opportunities for application outside the lab and clinical use. Despite the growing importance of responsible innovation, there remains a knowledge gap in addressing the possible impacts of wearable non-invasive brain monitoring technology on mental health and well-being. Addressing this, our main aim was to study the use of speculative design scenarios as a method to describe potential value dilemmas associated with this new technology. Through a qualitative study, we invited participants to engage in discussions regarding three variations of wearable non-invasive brain monitoring technology presented in speculative video scenarios. The study's findings describe how the discussions contribute towards promoting heuristics that can help foster more responsible innovation by identifying norms and value dilemmas through inclusive speculative design practices. This qualitative case study contributes to the literature on responsible innovation by demonstrating how responsible innovation frameworks can benefit from incorporating anticipatory speculative design methods aimed at early identification of potential value dilemmas.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100074"},"PeriodicalIF":0.0,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659623000173/pdfft?md5=faf5b1f612b27f90600a2a1f937cefd7&pid=1-s2.0-S2666659623000173-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139379225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What is digital humanism? A conceptual analysis and an argument for a more critical and political digital (post)humanism","authors":"Mark Coeckelbergh","doi":"10.1016/j.jrt.2023.100073","DOIUrl":"10.1016/j.jrt.2023.100073","url":null,"abstract":"<div><p>The term digital humanism is gaining traction in academia, but what does it mean? This brief discussion paper offers a conceptual analysis and discussion of the term and vision, thereby arguing for a more critical, posthumanist, and political version of digital humanism.</p></div>","PeriodicalId":73937,"journal":{"name":"Journal of responsible technology","volume":"17 ","pages":"Article 100073"},"PeriodicalIF":0.0,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666659623000161/pdfft?md5=4cb858a75a3584b0010a2fc04ea97e14&pid=1-s2.0-S2666659623000161-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139014415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}