{"title":"Investigating accuracy disparities for gender classification using convolutional neural networks","authors":"Lia Chin-Purcell, America Chambers","doi":"10.1109/istas52410.2021.9629153","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629153","url":null,"abstract":"Automatic gender recognition (AGR) is a subfield of facial recognition that has recently been scrutinized for bias in the form of misgendering and erasure of various identity groups in our society. Recent studies have found that several commercial AGR classifiers (from Microsoft, IBM, Face++) are biased against women and darker-skinned people, as well as gender non-binary people [8, 11]. In this work, we investigate and quantify AGR classifier bias against transgender people by developing and evaluating three different convolutional neural networks (CNNs): one trained on images of cisgender individuals, one trained on images of transgender individuals, and one trained on images of both cisgender and transgender individuals. We find that the cisgender-trained classifier is 91.7% accurate when evaluated on cisgender people, but only 68.9% accurate when evaluated on transgender people, with the worst performance being 38.6% precision for transgender men. We investigate this low precision further by performing additional experiments in which various parts of the face are obscured. We end with recommendations for commercial classifiers based upon our findings.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127280670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From value-lists to value-based engineering with IEEE 7000™","authors":"S. Spiekermann","doi":"10.1109/istas52410.2021.9629134","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629134","url":null,"abstract":"Digital ethics is currently being discussed worldwide as a necessity to create more reliable IT systems. This discussion, fueled by the fear of uncontrollable general artificial intelligence (AI) and by ethical dilemmas of existing systems, has moved many institutions and scientists to demand value principles that should guide the development of future IT systems. These usually include the demand for privacy, security, transparency, fairness, etc. This article shows why working through lists of values is insufficient for good or ethically aligned design. It will be shown what a truly ethical ‘Value-based Engineering’ (VbE) would have to look like instead, so that technical product innovation as a whole is put on better (more ethical) feet. VbE is a process-driven, holistic approach to system engineering which initially drew from the ideas of Value Sensitive Design and Ethical Computing. From 2016-2021 VbE was further fleshed out in the IEEE 7000™ standardization project.* *This article presents, inter alia, guidance for ethical engineering given in the forthcoming IEEE 7000™ standard. However, this article solely represents the views of the author and does not necessarily represent a position of either the IEEE P7000 Working Group, IEEE or the IEEE Standards Association. The official link to the IEEE P7000 is: https://sagroups.ieee.org/7000/.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131500107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the disruption of COVID-19 on AI innovation using patent filings","authors":"M. Alexopoulos, Kelly A. Lyons, Kaushar Mahetaji, Keli Chiu","doi":"10.1109/istas52410.2021.9629125","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629125","url":null,"abstract":"Economists have long recognized that technological innovation is a key contributor to economic growth due to its impact on productivity. In this paper, we explore the impact of COVID-19 on innovation in artificial intelligence (AI) to better understand future effects on economic growth and productivity. Using patents as a measure of innovation and knowledge production, we analyze monthly patent application filing data from January 2015 to June 2021 to compare and assess trends. Past research has shown that growth in patents in the field of AI has accelerated since 2012, with 6.5 times more annual filings occurring from 2006 to 2017. Here, we focus specifically on determining whether the pandemic has had an impact on this acceleration in AI-related innovation. To accomplish this task we must confront the challenge of using up-to-date patent data for this kind of analysis, because there are considerable time lags between patent filing dates and their ultimate publication dates. In real-time situations such as COVID-19, it is therefore difficult to ascertain impact using the publicly available patenting data directly. In this paper, we propose a novel approach for examining existing and up-to-date publicly available patent filing data and use that method to gain new insights into the pandemic’s effects on AI-related innovation. Our findings suggest that the pandemic has had a slowing impact on the rate of innovation in these areas but that the downturn may be reversing.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132412650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ethics of AI as practical ethics","authors":"Florian Richter","doi":"10.1109/istas52410.2021.9629163","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629163","url":null,"abstract":"Although we generally acknowledge certain values as crucial for the ethical debate in AI, e.g., fairness, transparency, and accountability, these values can often stand in conflict with each other. For example, more transparency can lead to less privacy. Introducing higher principles to balance the values faces two problematic issues: (1.) Principles can also stand in conflict with each other and defer the discussion into a purely theoretical realm. (2.) If a higher-level principle is introduced and this also stands in conflict with another principle, then a higher-higher-level principle is needed, and we get into an infinite regress. Although ethics of AI is part of the so-called field of applied ethics, and it therefore seems to be about applying principles and values and finding the right balance with respect to certain ethical theories, e.g., Kantian ethics or utilitarianism ([1][2][3]), the traditional approaches in the field of applied ethics do not offer sufficient conceptual means to deal with practical problems. Thus, the problems that arise from the implementation of intelligent systems also cannot be handled adequately, because the same issue as above arises: how can the values be balanced with respect to the ethical theories? If higher-level principles are not a viable approach to resolving conflicts of values, the criteria under which values can be implemented should be taken into consideration. Therefore, it is proposed that specific criteria for the implementation of the values need to be made explicit, which will result at least in a clarification for the public debate about certain technological advancements in the field of AI. Furthermore, it is proposed [4] that, to resolve these conflicts, the implementation must be evaluated as to whether it enables human intervention in the future instead of making further actions and interventions impossible.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114704019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of data governance challenges in smart farming and potential solutions","authors":"Adesola Anidu, Rozita Dara","doi":"10.1109/istas52410.2021.9629169","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629169","url":null,"abstract":"Expectations of the agricultural system are constantly growing: it must become more productive with less labor, less water, and less arable land. To achieve this goal, the use of digital technologies is being promoted. This has resulted in the growing use of wireless sensors, IoT devices, cloud computing, and other technologies on farms, which has fueled an explosion of data. Data collected at farms ranges from business operation data (farm management data), transport and farm storage data, land data (water, soil, GPS), machine data, agronomic data, and livestock data to climate and weather data. This large amount of data needs to be managed to ensure confidentiality and other governance requirements and to enhance technical capacity and performance, such as data integration and processing. This paper reviews the data governance challenges that arise in smart farms and provides recommendations on how those challenges can be addressed.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134396246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From historical thinking to critical thinking about technology","authors":"S. Campbell","doi":"10.1109/istas52410.2021.9629144","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629144","url":null,"abstract":"This paper, a work-in-progress, presents historical thinking, a framework for teaching history, as a tool that can be integrated into undergraduate courses that relate to technology, society, and ethics to improve students’ critical thinking about technology. Historical thinking teaches students to distinguish past from present in a potentially humanizing process. The framework also raises analytical tensions and questions about the relationship between technology and society that can produce insightful critical analysis and, potentially, answers.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134013754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fairness in AI applications","authors":"C. Shelley","doi":"10.1109/istas52410.2021.9629140","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629140","url":null,"abstract":"Applications of Artificial Intelligence (AI) that have broad, social impact for many people have recently increased greatly in number. They will continue to increase in ubiquity and impact for some time to come. In conjunction with this increase, many scholars have studied the nature of these impacts, including problems of fairness. Here, fairness refers to conflicts of interest between social groups that result from the configuration of these AI systems. One focus of research has been to define these fairness problems and to quantify them in a way that lends itself to calculation of fair outcomes. The purpose of this presentation is to show that this issue of fairness in AI is consistent with fairness problems posed by technological design in general and that addressing these problems goes beyond what can be readily quantified and calculated. For example, many such problems may be best resolved by forms of public consultation. This point is clarified by presenting an analytical tool, the Fairness Impact Assessment, and examples from AI and elsewhere.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129884988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"See something, say something? Coordinating the disclosure of security vulnerabilities in Canada’s infrastructure","authors":"Yuan Stevens, S. Tran, Ryan Atkinson","doi":"10.1109/istas52410.2021.9629214","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629214","url":null,"abstract":"Ill-intentioned actors are rapidly developing the means to exploit vulnerabilities in the software and infrastructure of governments around the world. Numerous jurisdictions now facilitate coordinated vulnerability disclosure for such public systems, providing good faith security researchers a predictable and cooperative process to disclose security vulnerabilities for patching before they are exploited. This study identifies that Canada may be falling behind its global peers by failing to implement such reporting procedures. It indicates the need for a straightforward vulnerability disclosure and remediation path involving federal systems, linked to improved legal frameworks and government policies for security vulnerability discovery and disclosure in Canada and beyond.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128179721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI can turn the clock back before we know it","authors":"A. Gerdes","doi":"10.1109/istas52410.2021.9629161","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629161","url":null,"abstract":"This paper outlines intertwined challenges related to three areas: the hype surrounding AI, the consequences of corporate influence on the AI research agenda, and the public sector’s uncritical embrace of AI technologies. We argue that AI can backfire if overconfident predictions influence decisions to introduce AI in high-risk domains. Moreover, the corporate colonization of the AI research agenda may cause a decline in societal trust in science, which is highly problematic considering that AI will increasingly power important domains in society.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131009604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cybersecurity issues in citizen science","authors":"D. Schaeffer, P. Olson","doi":"10.1109/istas52410.2021.9629198","DOIUrl":"https://doi.org/10.1109/istas52410.2021.9629198","url":null,"abstract":"As Citizen Science projects become more widespread and global, we must pay attention to cybersecurity issues that can emerge. Sharing data is fundamental to Citizen Science projects, and that requires attention to protecting privacy. This issue is exacerbated if participants use common technologies, e.g., cell phones, that are used for other purposes and may be subject to cybersecurity vulnerabilities. Cybersecurity issues can also arise during recruitment to Citizen Science projects, for which social media is a prevalent tool, in that bad actors can solicit participation in malicious and false projects. Furthermore, the results of Citizen Science projects may provide foundations for policy and legislation; thus, integrity must be maintained, and bad actors prevented from manipulating data and/or results. Governance of Citizen Science projects must include attention to cybersecurity issues, regardless of their scope and scale. The exploitation of cybersecurity vulnerabilities is often the result of ethical lapses. Just as scientists respect science research ethics, so must citizens who undertake scientific research. In this paper, we share a taxonomy that identifies how research ethics are represented and brought to life in 21st century Citizen Science projects.","PeriodicalId":314239,"journal":{"name":"2021 IEEE International Symposium on Technology and Society (ISTAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131066556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}