Ethical, political and epistemic implications of machine learning (mis)information classification: insights from an interdisciplinary collaboration between social and data scientists
Andrés Domínguez Hernández, Richard Owen, Dan Saattrup Nielsen, Ryan McConville
{"title":"Ethical, political and epistemic implications of machine learning (mis)information classification: insights from an interdisciplinary collaboration between social and data scientists","authors":"Andrés Domínguez Hernández, Richard Owen, Dan Saattrup Nielsen, Ryan McConville","doi":"10.1080/23299460.2023.2222514","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) enabled classification models are becoming increasingly popular for tackling the sheer volume and speed of online misinformation and other content that could be identified as harmful. In building these models, data scientists need to take a stance on the legitimacy, authoritativeness and objectivity of the sources of “truth” used for model training and testing. This has political, ethical and epistemic implications which are rarely addressed in technical papers. Despite (and due to) their reported high accuracy and performance, ML-driven moderation systems have the potential to shape online public debate and create downstream negative impacts such as undue censorship and the reinforcing of false beliefs. Using collaborative ethnography and theoretical insights from social studies of science and expertise, we offer a critical analysis of the process of building ML models for (mis)information classification: we identify a series of algorithmic contingencies—key moments during model development that could lead to different future outcomes, uncertainty and harmful effects as these tools are deployed by social media platforms. We conclude by offering a tentative path toward reflexive and responsible development of ML tools for moderating misinformation and other harmful content online.","PeriodicalId":46727,"journal":{"name":"Journal of Responsible Innovation","volume":"51 1","pages":""},"PeriodicalIF":3.9000,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Responsible Innovation","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.1080/23299460.2023.2222514","RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 1
Abstract
Machine learning (ML)-enabled classification models are becoming increasingly popular for tackling the sheer volume and speed of online misinformation and other content that could be identified as harmful. In building these models, data scientists need to take a stance on the legitimacy, authoritativeness and objectivity of the sources of “truth” used for model training and testing. This has political, ethical and epistemic implications which are rarely addressed in technical papers. Despite (and due to) their reported high accuracy and performance, ML-driven moderation systems have the potential to shape online public debate and create downstream negative impacts such as undue censorship and the reinforcement of false beliefs. Using collaborative ethnography and theoretical insights from social studies of science and expertise, we offer a critical analysis of the process of building ML models for (mis)information classification: we identify a series of algorithmic contingencies—key moments during model development that could lead to different future outcomes, uncertainty and harmful effects as these tools are deployed by social media platforms. We conclude by offering a tentative path toward reflexive and responsible development of ML tools for moderating misinformation and other harmful content online.
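To make the abstract's central point concrete, it helps to see where the "truth" labels enter a typical training pipeline. The sketch below is a hypothetical illustration, not the authors' system: the claims, the labels (treated here as verdicts from a single, assumed fact-checking source), and the TF-IDF plus logistic regression model are all invented for demonstration.

```python
# Minimal sketch of a misinformation classifier. All data and design
# choices below are hypothetical; the point is where the epistemic
# stance enters: the labels are one fact-checker's verdicts, adopted
# as ground truth before any modelling begins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical claims paired with verdicts from a single (assumed)
# fact-checking source: 1 = "misinformation", 0 = "not misinformation".
claims = [
    "Drinking bleach cures the virus",
    "The vaccine trial enrolled roughly 40,000 participants",
    "5G towers spread the disease",
    "Masks reduce droplet transmission",
    "The election was decided by millions of fake ballots",
    "Turnout in the election was the highest in a century",
]
labels = [1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    claims, labels, test_size=0.5, random_state=0, stratify=labels
)

# A standard text-classification pipeline: TF-IDF features fed into
# logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# Held-out "accuracy" here quantifies agreement with the chosen label
# source, not with any source-independent notion of truth.
print(classification_report(y_test, model.predict(X_test)))
```

Every contingency the paper highlights (which source supplies the labels, how disagreements between sources are resolved, what counts as the positive class) sits upstream of this code and is invisible in the reported metrics.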
About the journal
The Journal of Responsible Innovation (JRI) provides a forum for discussions of the normative assessment and governance of knowledge-based innovation. JRI offers humanists, social scientists, policy analysts and legal scholars, and natural scientists and engineers an opportunity to articulate, strengthen, and critique the relations among approaches to responsible innovation, thus giving further shape to a newly emerging community of research and practice. These approaches include ethics, technology assessment, governance, sustainability, socio-technical integration, and others. JRI understands responsible innovation to encompass related notions such as responsible development and sustainable development, and the journal invites comparisons and contrasts among these concepts. While issues of risk and environmental health and safety are relevant, JRI especially encourages attention to the assessment of the broader and more subtle human and social dimensions of innovation—including moral, cultural, political, and religious dimensions, social risk, and sustainability addressed in a systemic fashion.