Journal of Responsible Technology: Latest Articles

Digital sovereignty and smart wearables: four moral calculi for the distribution of legitimate control over the digital
Journal of Responsible Technology, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100053
N. Conradie, S. Nagel
Citations: 2
¿Human-like Computers? Velden, Manfred (2022). Human-like Computers: A Lesson in Absurdity. Berlin: Schwabe Verlag.
Journal of Responsible Technology, Volume 11, Article 100037, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100037
Carlos Andrés Salazar Martínez
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2666659622000142/pdfft?md5=0f467b3c25ff4be3ac3bf4e00407bcf3&pid=1-s2.0-S2666659622000142-main.pdf
Citations: 0
Should the colonisation of space be based on reproduction? Critical considerations on the choice of having a child in space
Journal of Responsible Technology, Volume 11, Article 100040, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100040
Maurizio Balistreri, Steven Umbrello
Abstract: This paper argues for the thesis that it is not a priori morally justified that the first phase of space colonisation be based on sexual reproduction. We ground this position on the argument that, at least in the first colonisation settlements, those born in space may not have a good chance of having a good life. This problem does not depend on the fact that life on another planet would have to deal with issues such as solar radiation or the decrease or entire absence of the force of gravity. These issues could plausibly be addressed, given that the planets or settlements we will feasibly colonise could be completely transformed through geoengineering processes. Likewise, the ability of humans to live in space could be enhanced through genetic modification interventions. Even if the problems concerning survival in space were solved, however, we think that, at least in the first period of colonisation of space or other planets, giving birth to children in space could be a morally irresponsible choice because, we argue, the life we could give them might not be good enough. We contend that this is the case because, when we decide to have a baby, it is not morally right to be content that our children have a minimally sufficient life worth living; before we give birth to children in space, we should make sure we can give them a reasonable chance of having a good life. This principle applies both on Earth - at least where you can choose - and for space travel.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2666659622000178/pdfft?md5=e944bf978b1233e58ceb542e40645d21&pid=1-s2.0-S2666659622000178-main.pdf
Citations: 5
Erratum regarding missing Declaration of Competing Interest statements in previously published article.
Journal of Responsible Technology, Volume 11, Article 100033, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100033
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9421412/pdf/
Citations: 0
Nigeria’s Digital Identification (ID) Management Program: Ethical, Legal and Socio-Cultural concerns
Journal of Responsible Technology, Volume 11, Article 100039, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100039
Damian Eke, Ridwan Oloyede, Paschal Ochang, Favour Borokini, Mercy Adeyeye, Lebura Sorbarikor, Bamidele Wale-Oshinowo, Simisola Akintoye
Abstract: National digital identity management systems have gained traction as a critical tool for the inclusion of citizens in increasingly digitised public services. With the help of the World Bank, countries around the world are committing to building and promoting digital identification systems to improve development outcomes as part of the Identity for Development initiative (ID4D). One of those countries is Nigeria, which is building a national ID management database for its over 100 million residents. However, there are privacy, security, human rights, ethical and socio-cultural implications associated with the design and scaling of such a system at a national level. Through a mixed-methods approach, this paper identifies some of these concerns and categorises which ones Nigerians are most worried about. It provides an empirically sound perspective on a centralised national electronic identity (eID) management system, public trust and responsible data governance, and offers recommendations on enhancing the privacy, security and trustworthiness of the digital infrastructure for identity management in Nigeria.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2666659622000166/pdfft?md5=14d456c9bcd0a32b20f209e06035c96b&pid=1-s2.0-S2666659622000166-main.pdf
Citations: 3
Responsible innovation; responsible data. A case study in autonomous driving
Journal of Responsible Technology, Volume 11, Article 100038, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100038
C. Ten Holter, L. Kunze, J-A. Pattinson, P. Salvini, M. Jirotka
Abstract: Autonomous Vehicles (AVs) collect a vast amount of data during their operation (MBs/sec). What data is recorded, who has access to it, and how it is analysed and used can have major technical, ethical, social, and legal implications. By embedding Responsible Innovation (RI) methods within the AV lifecycle, negative consequences resulting from inadequate data logging can be foreseen and prevented. An RI approach demands that questions of societal benefit, anticipatory governance, and stakeholder inclusion are placed at the forefront of research considerations. Considered as foundational principles, these concepts create a contextual mindset for research that will by definition have an RI underpinning as well as application. Such an RI mindset both inspired and governed the genesis and operation of a research project on autonomous vehicles. The impact this had on research outlines and workplans, and the challenges encountered along the way, are detailed, with conclusions and recommendations for RI in practice.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2666659622000154/pdfft?md5=83cd9d06b2115ee4c793d9b4e7219e99&pid=1-s2.0-S2666659622000154-main.pdf
Citations: 1
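Note: the abstract above stresses that what an AV logs, how long it is retained, and who may access it carry ethical and legal weight, but the paper does not specify a logging format. Purely as an illustration, a log record that captures both the event and its access policy could be sketched as follows; all field names, values, and roles here are assumptions, not taken from the article.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AVLogEntry:
    """Hypothetical autonomous-vehicle log record; illustrative only."""
    timestamp: str
    event: str              # e.g. "emergency_brake", "lane_change"
    sensor_summary: dict    # reduced summary, not raw sensor streams
    retention_days: int     # how long the record may be kept
    access_roles: list      # who may read it, e.g. ["safety_auditor"]

entry = AVLogEntry(
    timestamp=datetime.now(timezone.utc).isoformat(),
    event="emergency_brake",
    sensor_summary={"speed_kmh": 42.0, "obstacle_detected": True},
    retention_days=90,
    access_roles=["safety_auditor", "incident_investigator"],
)

# Serialise the record for archiving alongside the vehicle's data store.
print(json.dumps(asdict(entry), indent=2))
```

Encoding retention and access rules next to the data itself is one way the anticipatory-governance questions raised in the abstract could be made auditable in practice.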
Responsible Artificial Intelligence in Human Resources Technology: An innovative inclusive and fair by design matching algorithm for job recruitment purposes
Journal of Responsible Technology, Volume 11, Article 100041, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100041
Sebastien Delecraz, Loukman Eltarr, Martin Becuwe, Henri Bouxin, Nicolas Boutin, Olivier Oullier
Abstract: In this article, we address the broad issue of the responsible use of Artificial Intelligence in Human Resources Management through the lens of a fair-by-design approach to algorithm development, illustrated by the introduction of a new machine-learning-based approach to job matching. The goal of our algorithmic solution is to improve and automate the recruitment of temporary workers to find the best match with existing job offers. We discuss how fairness should be a key focus of human resources management and highlight the main challenges and flaws that arise when developing algorithmic solutions to match candidates with job offers. After an in-depth analysis of the distribution and biases of our proprietary data set, we describe the methodology used to evaluate the effectiveness and fairness of our machine learning model, as well as solutions to correct some biases. The model we introduce constitutes the first step in our effort to control for unfairness in the outcomes of machine learning algorithms in job recruitment, and more broadly a responsible use of artificial intelligence in Human Resources Management, thanks to “safeguard algorithms” tasked with controlling for biases and preventing discriminatory outcomes.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S266665962200018X/pdfft?md5=1067842485c764fe87523992da73aaec&pid=1-s2.0-S266665962200018X-main.pdf
Citations: 6
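Note: the abstract above mentions "safeguard algorithms" that watch the matching model's outputs for bias, without publishing their implementation. A minimal, hypothetical sketch of one such check, a demographic parity gap over match decisions, is shown below; the function name, the sample data, and the 0.2 threshold are assumptions for illustration, not values from the paper.

```python
from collections import defaultdict

def demographic_parity_gap(matches, groups):
    """Return the largest gap in positive-match rates across groups.

    matches : list of 0/1 flags, 1 = candidate matched to a job offer
    groups  : list of group labels (e.g. gender or age bracket), same length
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for m, g in zip(matches, groups):
        totals[g] += 1
        positives[g] += m
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative usage: flag a model run if the gap exceeds a chosen threshold.
matches = [1, 0, 1, 1, 0, 0, 0, 0]
groups  = ["A", "A", "A", "B", "B", "B", "B", "A"]
gap, rates = demographic_parity_gap(matches, groups)
if gap > 0.2:  # threshold is an assumption, not from the article
    print(f"Potential disparity detected: gap={gap:.2f}, rates={rates}")
```

A safeguard of this kind only catches group-level rate disparities; other fairness criteria (equalised odds, calibration) would need separate checks.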
AI Documentation: A path to accountability
Journal of Responsible Technology, Volume 11, Article 100043, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100043
Florian Königstorfer, Stefan Thalmann
Abstract: Artificial Intelligence (AI) promises huge potential for businesses but, due to its black-box character, also has substantial drawbacks. This is a particular challenge in regulated use cases, where software needs to be certified or validated before deployment. Traditional software documentation is not sufficient to provide the required evidence to auditors, and AI-specific guidelines are not yet available. Thus, AI faces significant adoption barriers in regulated use cases, since the accountability of AI cannot be ensured to a sufficient extent. This interview study aims to determine the current state of documenting AI in regulated use cases. We found that the risk level of AI use cases has an impact on AI adoption and on the scope of AI documentation. Further, we discuss how AI is currently documented and which challenges practitioners face when documenting AI.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2666659622000208/pdfft?md5=bb63316f230d774001f337edc4c0fa62&pid=1-s2.0-S2666659622000208-main.pdf
Citations: 9
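Note: the interview study above examines what evidence AI documentation should give auditors; it does not prescribe a concrete schema. As an illustration only, such documentation could be kept as a structured, machine-readable record in the spirit of a model card; every field name and value below is an assumption introduced for the sketch, not drawn from the article.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDocumentation:
    """Hypothetical documentation record for an AI system; illustrative only."""
    model_name: str
    intended_use: str
    risk_level: str                     # e.g. "low", "medium", "high"
    training_data_description: str
    evaluation_metrics: dict = field(default_factory=dict)
    known_limitations: list = field(default_factory=list)

doc = ModelDocumentation(
    model_name="credit-scoring-v1",
    intended_use="Pre-screening of loan applications for human review",
    risk_level="high",
    training_data_description="Anonymised applications, 2018-2021",
    evaluation_metrics={"auc": 0.81, "false_positive_rate": 0.07},
    known_limitations=["Not validated for applicants under 21"],
)

# Serialise for archiving alongside the model artefact, so an auditor
# can check the record for completeness against the stated risk level.
print(json.dumps(asdict(doc), indent=2))
```

Keeping the record structured rather than free-form makes it straightforward to scale documentation requirements with the risk level, which is the relationship the study highlights.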
A method for ethical AI in defence: A case study on developing trustworthy autonomous systems
Journal of Responsible Technology, Volume 11, Article 100036, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100036
Tara Roberson, Stephen Bornstein, Rain Liivoja, Simon Ng, Jason Scholz, Kate Devitt
Abstract: What does it mean to be responsible and responsive when developing and deploying trusted autonomous systems in Defence? In this short reflective article, we describe a case study of building a trusted autonomous system – Athena AI – within an industry-led, government-funded project with diverse collaborators and stakeholders. Using this case study, we draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches and principles throughout the development of technology at high translation readiness levels.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S2666659622000130/pdfft?md5=881df316ceef04dfdd86d777884f9837&pid=1-s2.0-S2666659622000130-main.pdf
Citations: 13
Involving psychological therapy stakeholders in responsible research to develop an automated feedback tool: Learnings from the ExTRAPPOLATE project
Journal of Responsible Technology, Volume 11, Article 100044, Pub Date: 2022-10-01, DOI: 10.1016/j.jrt.2022.100044
Jacob A Andrews, Mat Rawsthorne, Cosmin Manolescu, Matthew Burton McFaul, Blandine French, Elizabeth Rye, Rebecca McNaughton, Michael Baliousis, Sharron Smith, Sanchia Biswas, Erin Baker, Dean Repper, Yunfei Long, Tahseen Jilani, Jeremie Clos, Fred Higton, Nima Moghaddam, Sam Malins
Abstract: Understanding stakeholders’ views on novel autonomous systems in healthcare is essential to ensure these are not abandoned after substantial investment has been made. The ExTRAPPOLATE project applied the principles of Responsible Research and Innovation (RRI) in the development of an automated feedback system for psychological therapists, ‘AutoCICS’. A Patient and Practitioner Reference Group (PPRG) was convened over three online workshops to inform the system’s development. Iterative workshops allowed proposed changes to the system (based on stakeholder comments) to be scrutinized. The PPRG provided valuable insights, differentiated by role, including concerns and suggestions related to the applicability and acceptability of the system to different patients, as well as ethical considerations. The RRI approach enabled the anticipation of barriers to use, reflection on stakeholders’ views, effective engagement with stakeholders, and action to revise the design and proposed use of the system prior to testing in future planned feasibility and effectiveness studies. Many best practices and learnings can be taken from the application of RRI in the development of the AutoCICS system.
Open access PDF: https://www.sciencedirect.com/science/article/pii/S266665962200021X/pdfft?md5=aaad5f2bbda984671acf004f7fb61ea1&pid=1-s2.0-S266665962200021X-main.pdf
Citations: 1