Sebastien Delecraz, Loukman Eltarr, Martin Becuwe, Henri Bouxin, Nicolas Boutin, Olivier Oullier
"Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes"
Journal of Responsible Technology, vol. 11, Article 100041, October 2022. DOI: 10.1016/j.jrt.2022.100041 (open access)

In this article, we address the broad issue of the responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development, illustrated by the introduction of a new machine-learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers and find the best match with existing job offers. We discuss why fairness should be a key focus of human resources management and highlight the main challenges and pitfalls that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model, as well as solutions for correcting some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment and, more broadly, toward a responsible use of artificial intelligence in Human Resources Management, thanks to "safeguard algorithms" tasked with controlling for biases and preventing discriminatory outcomes.
Florian Königstorfer, Stefan Thalmann
"AI Documentation: A path to accountability"
Journal of Responsible Technology, vol. 11, Article 100043, October 2022. DOI: 10.1016/j.jrt.2022.100043 (open access)

Artificial Intelligence (AI) promises huge potential for businesses but, due to its black-box character, also has substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not yet available. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of an AI use case has an impact on AI adoption and on the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
Tara Roberson, Stephen Bornstein, Rain Liivoja, Simon Ng, Jason Scholz, Kate Devitt
"A method for ethical AI in defence: A case study on developing trustworthy autonomous systems"
Journal of Responsible Technology, vol. 11, Article 100036, October 2022. DOI: 10.1016/j.jrt.2022.100036 (open access)

What does it mean to be responsible and responsive when developing and deploying trusted autonomous systems in Defence? In this short reflective article, we describe a case study of building a trusted autonomous system (Athena AI) within an industry-led, government-funded project with diverse collaborators and stakeholders. Using this case study, we draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches and principles throughout the development of technology at high translation-readiness levels.
Jacob A. Andrews, Mat Rawsthorne, Cosmin Manolescu, Matthew Burton McFaul, Blandine French, Elizabeth Rye, Rebecca McNaughton, Michael Baliousis, Sharron Smith, Sanchia Biswas, Erin Baker, Dean Repper, Yunfei Long, Tahseen Jilani, Jeremie Clos, Fred Higton, Nima Moghaddam, Sam Malins
"Involving psychological therapy stakeholders in responsible research to develop an automated feedback tool: Learnings from the ExTRAPPOLATE project"
Journal of Responsible Technology, vol. 11, Article 100044, October 2022. DOI: 10.1016/j.jrt.2022.100044 (open access)

Understanding stakeholders' views on novel autonomous systems in healthcare is essential to ensure these systems are not abandoned after substantial investment has been made. The ExTRAPPOLATE project applied the principles of Responsible Research and Innovation (RRI) in the development of an automated feedback system for psychological therapists, 'AutoCICS'. A Patient and Practitioner Reference Group (PPRG) was convened over three online workshops to inform the system's development. Iterative workshops allowed proposed changes to the system, based on stakeholder comments, to be scrutinized. The PPRG provided valuable insights, differentiated by role, including concerns and suggestions related to the applicability and acceptability of the system to different patients, as well as ethical considerations. The RRI approach enabled the anticipation of barriers to use, reflection on stakeholders' views, effective engagement with stakeholders, and action to revise the design and proposed use of the system prior to testing in planned feasibility and effectiveness studies. Many best practices and lessons can be taken from this application of RRI in the development of the AutoCICS system.
Jiahong Chen, Elena Nichele, Zack Ellerby, Christian Wagner
"Responsible research and innovation in practice: Driving both the 'How' and the 'What' to research"
Journal of Responsible Technology, vol. 11, Article 100042, October 2022. DOI: 10.1016/j.jrt.2022.100042 (open access)

There have been ongoing discussions in research communities, including the field of trustworthy autonomous systems (TAS), on how researchers may meaningfully engage with responsible research and innovation (RRI). By critically reflecting on the RRI aspects of an ongoing research project focused on the efficient capture of richer quantitative human response data (e.g., from consumer surveys), this paper offers a case study of how research development can be ethically driven. The role of RRI in the project is unpicked using a typology we developed of its possible interactions with researchers: as a research safeguard, a research subject, and a research driver. Going beyond the more common practice of using RRI simply to safeguard how research should be conducted, we demonstrate that it can also serve as a positive driving force in exploring what should be researched. Experiences and challenges are elaborated within the main stages of research development, and are potentially applicable to a wider range of future projects in the field.
Ivo Emanuilov, Katerina Yordanova
"Business and human rights in Industry 4.0: A blueprint for collaborative human rights due diligence in the Factories of the Future"
Journal of Responsible Technology, vol. 10, Article 100028, July 2022. DOI: 10.1016/j.jrt.2022.100028 (open access)

The digitalisation of production, driven by new paradigms such as Industry 4.0, factories of the future and smart manufacturing, creates new challenges as to how manufacturers and other supply chain actors discharge their corporate responsibility to respect human rights. These new paradigms enable novel approaches such as distributed and collaborative manufacturing. Manufacturers increasingly leverage digital technologies, such as 3D printing, cloud manufacturing and artificial intelligence, to provide customised products. Digital technologies also improve predictive and preventive maintenance on the shop floor and across the supply chain, increasing the overall resilience of manufacturing industries in times of crisis. This article proposes a blueprint for a collaborative, decentralised approach to human rights due diligence in digital supply chains. It argues that pooling human rights due diligence efforts in manufacturing industries could have network-wide effects, incentivising value chain actors to also collaborate on providing collective remedy.
Christoph Schneider, Stefanie Betz
"Transformation²: Making software engineering accountable for sustainability"
Journal of Responsible Technology, vol. 10, Article 100027, July 2022. DOI: 10.1016/j.jrt.2022.100027 (open access)

Software engineering, as a central practice of digitalization, needs to become accountable for sustainability. In light of the ecological crises and the tremendous impact of digital systems on reshaping economic and social arrangements, often with negative side effects, we need a sustainability transformation of the digital transformation. However, this is a complex and long-term task. In this article we combine an analysis of accountability arrangements in software engineering with a model of sustainability transformations to trace how certain dynamics are starting to make software engineering accountable for sustainability in the technological, cultural, economic and governance domains. The article discusses existing approaches to sustainable software engineering and software engineering for sustainability, traces emerging discourses that connect digitalization and sustainability, highlights new digital business models that may support sustainability, and shows governance efforts that frame "green and digital" policy problems. Yet we argue that these are so far niche dynamics and that a sustainability transformation requires a collective and long-lasting effort to engender systemic change. The goal should be to create varied accountability arrangements for sustainability in software engineering, a practice embedded in complex ways in society and the economy.
Madeleine Borthwick, Martin Tomitsch, Melinda Gaughwin
"From human-centred to life-centred design: Considering environmental and ethical concerns in the design of interactive products"
Journal of Responsible Technology, vol. 10, Article 100032, July 2022. DOI: 10.1016/j.jrt.2022.100032 (open access)

Over the past decades, the field of interaction design has shaped how people interact with digital technology, through both research and practice. Interaction designers adopted human-centred design to ensure that the interactive products they design meet the needs and desires of end consumers. However, there is mounting evidence that placing the end consumer at the centre of the design process creates unintended consequences, damaging global systems that are essential to human well-being. This article reviews emerging paradigms that provide a more holistic perspective, such as value-sensitive design, more-than-human participation and life-centred design. Based on this review, the article introduces a practical framework for life-centred design consisting of principles, actionable methods and a model for responsible innovation. The article discusses how interaction designers can use the framework to balance human-centred considerations with environmental and ethical concerns when designing interactive products.
Timo Jakobi, Maximilian von Grafenstein, Patrick Smieskol, Gunnar Stevens
"A Taxonomy of user-perceived privacy risks to foster accountability of data-based services"
Journal of Responsible Technology, vol. 10, Article 100029, July 2022. DOI: 10.1016/j.jrt.2022.100029 (open access)

Data protection risks play a major role in data protection laws and have been shown to be a suitable means of accountability in designing for usable privacy. Especially in the legal realm, risks are typically collected heuristically or deductively, e.g., by referring to violations of fundamental rights. Following a user-centered design credo, research on usable privacy has shown that a user perspective on privacy risks can enhance system intelligibility and accountability. However, research on mapping the landscape of user-perceived privacy risks is still in its infancy. To extend the corpus of privacy risks as users perceive them in their daily use of technology, we conducted nine workshops collecting 91 risks in the fields of web browsing, voice assistants and connected mobility. The body of risks was then categorized by 11 experts from the legal and HCI domains. We find that, while existing taxonomies generally fit well, a societal dimension of risks is not yet represented. Discussing our empirically backed taxonomy, including the full list of 91 risks, we demonstrate ways to use user-perceived risks as a mechanism to foster accountability for usable privacy in connected devices.