Reflections on AI in Humanities

Raad Khair Allah
Journal: Exchanges: The Warwick Research Journal
Publication date: 2023-10-26
DOI: 10.31273/eirj.v11i1.1453 (https://doi.org/10.31273/eirj.v11i1.1453)
Citations: 0

Abstract

As artificial intelligence (AI) continues to advance rapidly, there is growing potential for its application in the humanities to uncover new insights and perspectives from historical archives. However, it is also important to consider how AI tools themselves may unintentionally perpetuate existing biases if not developed conscientiously. This critical reflection examines the opportunities and challenges of utilising AI to amplify marginalised voices that have traditionally been excluded or underrepresented in mainstream historical narratives, with a focus on women. Through natural language processing and computer vision techniques, AI shows promise in automating the analysis of large volumes of text, image, and multimedia sources to surface female narratives previously overlooked due to the limitations of manual research methods. However, issues such as training data bias, problematic stereotypes learned from legacy sources, and a lack of diversity among AI researchers threaten, if not addressed proactively, to replicate the very inequities these tools seek to overcome. Collaborative frameworks and design principles centred on representation, accountability, and community oversight are needed. By critically examining AI's social responsibilities and impacts, this reflection argues that, when guided appropriately, AI holds great potential in the service of feminist and intersectional scholarship. It calls for continued multidisciplinary dialogue to help ensure that these technologies amplify marginalised voices rather than risk their further marginalisation.