{"title":"Voluntary measures, participation and fundamental rights in the governance of research and innovation","authors":"Gorgoni Guido","doi":"10.29297/orbit.v1i4.72","DOIUrl":"https://doi.org/10.29297/orbit.v1i4.72","url":null,"abstract":"<div><p>Responsible Research and Innovation (RRI) aims at being a new governance paradigm aiming at steering the innovation process in a participative manner by constructing responsibility as a shared process between innovators and societal stakeholders, rather than a remedy to its failures. In order to achieve those goals, RRI implements a collaborative and inclusive process between innovators and societal stakeholders, widely based on the idea of granting a wider participation of societal actors to the innovation process. The purpose of steering the research and innovation processes through participation of societal actors is one of the distinguishing characteristics of RRI approach, which this way aims at taking into account the increasing political implications of scientific innovation. In order to do so, RRI model promotes governance strategies focusing on actors’ <em>responsibilisation</em>, which make appeal to actors’ capacity of reciprocal commitment towards some common goals not mandated by the law. Whilst voluntary non-binding regulatory approaches seem to be the ‘natural’ way to implement RRI in practice, nevertheless some concern remains about the scope and the limits of the contextual agreements reached each time, in particular their capacity to grant respect to some fundamental values, which are part of the European political and legal culture, and which are at risk to become freely re-negotiable within the RRI context if we base it only on the idea of autonomy, participation and consent. 
On the contrary, the paper argues that, if it wants to be coherent with its premises, RRI governance model needs to be complemented with a reference to fundamental rights, in order to give normative anchor-points to the confrontations between divergent views and values accompanying the development of technological innovation.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 4","pages":"Pages 1-21"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v1i4.72","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92037345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tackling global health challenges with telemedicine","authors":"Bolz Karsten","doi":"10.29297/orbit.v1i3.65","DOIUrl":"10.29297/orbit.v1i3.65","url":null,"abstract":"<div><p>This case study showcases the development of the telemedical diabetes monitoring system GlucoTel™ as a learning case for RRI in ICT. It therefore links the activities along the development process with RRI aspects, such as stakeholder engagement or open access. By taking a business perspective, the study shows that the integration of RRI principles into company processes not only benefits patients, caregivers and other users but can also have benefits for the company, such as competitive advantage. In addition, the case study takes a global perspective and outlines the potential of telemedicine for coping with the global societal challenge of diabetes and secondary diseases which are a major health care problem worldwide.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 3","pages":"Pages 1-15"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v1i3.65","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"93994987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Ethical Analysis of Personal Health Monitoring in the UK","authors":"Mittelstadt Brent","doi":"10.29297/orbit.v1i3.69","DOIUrl":"10.29297/orbit.v1i3.69","url":null,"abstract":"<div><p>Recent years have seen an influx of medical technologies capable of remotely monitoring the health and behaviours of individuals to detect, manage and prevent health problems. Known collectively as ‘Personal Health Monitoring’ (PHM), these systems are intended to supplement medical care with health monitoring outside traditional care environments such as hospitals. In the face of ageing demographics across the EU, such technologies are seen as a promising way to close the predicted gap between healthcare demand and resources. Medical care and monitoring currently provided by humans may be supplemented by technological monitoring, creating new ways of delivering healthcare to the elderly, homebound, chronically ill and healthy alike. However, the implications of introducing technological monitoring into healthcare need to be considered in greater detail before the technologies are widely used. PHM allows for greater collection of personal health data about users, which may raise ethical concerns. As an emerging technology with the potential for widespread usage across Europe and beyond, the opportunity remains for PHM to be developed and deployed responsibly by adhering to the principles of Responsible Research & Innovation (RRI). To contribute to this process an interview study with potential users and healthcare professionals was carried out in the UK. Twenty-one stakeholders were interviewed from patient groups and healthcare professionals representing medical conditions targeted by PHM: diabetes mellitus, hypertension and dementia. A series of recommendations on how to address the ethical implications and concerns of stakeholders are provided for members of industry responsible for developing PHM devices and services. 
Nine recommendations were identified:</p><ul><li><span>•</span><span><p>Offer devices and services with user feedback and recommendations for better health</p></span></li><li><span>•</span><span><p>Limit user access to raw monitoring data</p></span></li><li><span>•</span><span><p>Offer multiple levels of summarised feedback to users</p></span></li><li><span>•</span><span><p>Create open channels of communication with users</p></span></li><li><span>•</span><span><p>Do not view monitors as a replacement for staff</p></span></li><li><span>•</span><span><p>Give users control over their devices</p></span></li><li><span>•</span><span><p>Capture contextual information to support monitoring data</p></span></li><li><span>•</span><span><p>Take a minimal approach to contextual data</p></span></li><li><span>•</span><span><p>Discuss the extent and implications of monitoring with users</p></span></li></ul><p>The work described here is a first broad step in the RRI process which can contribute to the development and deployment of any PHM devices and services. The study can be understood as a broad ethical foresight study achieved through engagement with PHM stakeholders, including patients, doctors and healthcare organisations. Each ","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 3","pages":"Pages 1-16"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v1i3.69","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"100179672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RRI in Industry","authors":"Stahl Bernd","doi":"10.29297/orbit.v1i3.64","DOIUrl":"10.29297/orbit.v1i3.64","url":null,"abstract":"<div><p>The concept of responsible research and innovation has its origin in publicly funded research. Much of the research activities and even more innovation activities which bring products and services happen in private companies. This editorial therefore aims to outline what RRI can mean in industrial complexes and describes the role that ORBIT can play in them. It draws on the work undertaken in the European project ‘Responsible-Industry’ and highlights the question of how the RRI discourse can be translated into a vocabulary familiar to companies.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 3","pages":"Pages 1-11"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2515856220300304/pdfft?md5=87893a166e2b86820fe0e36f20ea06ca&pid=1-s2.0-S2515856220300304-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"110164644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Issues of Implied Trust in Ethical Hacking","authors":"Thomas Georg, Burmeister Oliver, Low Gregory","doi":"10.29297/orbit.v2i1.77","DOIUrl":"10.29297/orbit.v2i1.77","url":null,"abstract":"<div><p>This paper discusses the issues of implied trust in ethical hacking. Unlike many other long-established professions, such as lawyers, doctors, and accountants; ethical hacking is a relatively new profession. As a result, this profession does not currently have a uniformed or mandated code, nor does it require any form of licensing. Because ethical hackers could gain access to highly sensitive and confidential information and there is potential for misuse of such information, the need to ensure professionalism is maintained through ensuring competence and ethical behavior is critical.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"2 1","pages":"Pages 1-19"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v2i1.77","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114267930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practicing responsible research and innovation in a crowdsourcing project in Norway","authors":"Fossum Selamawit Molla, Barkved Line, Throne-Holst Harald","doi":"10.29297/orbit.v2i1.82","DOIUrl":"10.29297/orbit.v2i1.82","url":null,"abstract":"<div><p>The paper discusses the operationalization of responsible research and innovation (RRI), drawing on empirical materials from a research project that explored ICT-enabled tools and methods for crowdsourcing in urban environmental research and decision- making. An integrated model for RRI is developed from prior studies, in which socially responsible crowdsourcing is described as an iterative and recursive process of inclusion, anticipation, reflexivity and responsiveness on the purpose, process, product and people components of the crowdsourcing project. The paper outlines four important aspects that influence the practice of RRI: time, interdisciplinary skills and capacities, design flexibility of ICT tools and strategic alliances between researchers and public officials. Theoretically, the paper contributes with an integrated conceptual model that further extends the already existing RRI framework.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"2 1","pages":"Pages 1-28"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2515856220300225/pdf?md5=daf89b934f16b3b94a74e721d63421c6&pid=1-s2.0-S2515856220300225-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"95117002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is professional practice at risk following the Volkswagen and Tesla motors revelations?","authors":"Simon Rogerson","doi":"10.29297/orbit.v1i1.34","DOIUrl":"10.29297/orbit.v1i1.34","url":null,"abstract":"<div><p>With the world in economic crisis the headlong drive for efficiency and effectiveness together with resulting profit is the watchword. Such pressure might have resulted in real gains but has also led to unscrupulous or reckless actions. The tempering of such drive with ethical consideration is often neglected until there is a detrimental event causing public outcry which in turn places pressure on the actors to account for the reasons the event had occurred. This cause and effect map is commonplace. The Volkswagen emissions scandal and Tesla Motors public beta testing of the Autopilot software in their cars illustrate the drive for efficiency and effectiveness without proper ethical consideration. This paper focuses on the role of software engineers in such situations. An ethical analysis of the two cases is presented using the Software Engineering Code of Ethics and Professional Practice. The findings, together with previously published analyses, are used to draw general conclusions and recommendations about the efficacy of the software engineering profession.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 1","pages":"Pages 1-15"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v1i1.34","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49552851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Difficult Task of Teaching Computer Ethics to Engineers","authors":"Patrignani Norberto, Kavathatzopoulos Iordanis","doi":"10.29297/orbit.v1i1.20","DOIUrl":"10.29297/orbit.v1i1.20","url":null,"abstract":"<div><p>This paper addresses the challenges of teaching computer ethics to engineers. The computer professionals are identified as the starting point of the stakeholder's network of ICT and as a consequence it is underlined the importance of computer ethics courses for engineers. To this purpose, a simple four-steps methodology is proposed for teaching computer ethics. The importance of applying the paradigm of complex systems is then described and the three dimensions of Slow Tech (good, clean, and fair ICT) are proposed as a compass for designing complex socio-technical systems. Finally some preliminary results coming from the feedback of about some hundreds students in several years are illustrated.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 1","pages":"Pages 1-10"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v1i1.20","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42793831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Editorial responsibilities arising from personalization algorithms","authors":"Koene Ansgar, Vallejos Elvira Perez, Webb Helena, Patel Menisha, Ceppi Sofia, Jirotka Marina, McAuley Derek","doi":"10.29297/orbit.v1i1.26","DOIUrl":"10.29297/orbit.v1i1.26","url":null,"abstract":"<div><p>Social media platforms routinely apply personalization algorithms to ensure the content presented to the user is relevant and engaging. These algorithms are designed to prioritize and make some pieces of information more visible than others. However, there is typically no transparency in the criteria used for ranking the information, and more importantly, the consequences that the resulting content could have on users. Social media platforms argue that because they do not alter content, just reshape the way it is presented to the user, they are merely technological companies (not media companies). We highlight the value of a Responsible Research and innovation (RRI) approach to the design, implementation and use of personalization algorithms. 
Based on this and in combination with reasoned analysis and the use of case studies, we suggest that social media platforms should take editorial responsibility and adopt a code of ethics to promote corporate social responsibility.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 1","pages":"Pages 1-12"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2515856220300687/pdf?md5=2d6c01a38ac8fa394d62af2cf6b04110&pid=1-s2.0-S2515856220300687-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69883173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Why We Should Have Seen That Coming","authors":"M.J. Wolf, K.W. Miller, F.S. Grodzinsky","doi":"10.29297/orbit.v1i2.49","DOIUrl":"10.29297/orbit.v1i2.49","url":null,"abstract":"<div><p>In this paper we examine the case of Tay, the Microsoft AI chatbot that was launched in March, 2016. After less than 24 hours, Microsoft shut down the experiment because the chatbot was generating tweets that were judged to be inappropriate since they included racist, sexist, and anti-Semitic language. We contend that the case of Tay illustrates a problem with the very nature of learning software (LS is a term that describes any software that changes its program in response to its interactions) that interacts directly with the public, and the developer’s role and responsibility associated with it. We make the case that when LS interacts directly with people or indirectly via social media, the developer has additional ethical responsibilities beyond those of standard software. There is an additional burden of care.</p></div>","PeriodicalId":101247,"journal":{"name":"The ORBIT Journal","volume":"1 2","pages":"Pages 1-12"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.29297/orbit.v1i2.49","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45248044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}