AI and ethics: Latest Articles

Partnering with AI to derive and embed principles for ethically guided AI behavior
AI and ethics Pub Date: 2025-02-17 DOI: 10.1007/s43681-025-00656-1
Michael Anderson
Abstract: As artificial intelligence (AI) systems, particularly large language models (LLMs), become increasingly embedded in sensitive and impactful domains, ethical failures threaten public trust and the broader acceptance of these technologies. Current approaches to AI ethics rely on reactive measures—such as keyword filters, disclaimers, and content moderation—that address immediate concerns but fail to provide the depth and flexibility required for principled decision-making. This paper introduces AI-aided reflective equilibrium (AIRE), a novel framework for embedding ethical reasoning into AI systems. Building on the philosophical tradition of deriving principles from specific cases, AIRE leverages the capabilities of AI to dynamically generate and analyze such cases and abstract and refine ethical principles from them. Through illustrative scenarios, including a self-driving car dilemma and a vulnerable individual interacting with an AI, we demonstrate how AIRE navigates complex ethical decisions by prioritizing principles like minimizing harm and protecting the vulnerable. We address critiques of scalability, complexity, and the question of "whose ethics," highlighting AIRE's potential to democratize ethical reasoning while maintaining rigor and transparency. Beyond its technical contributions, this paper underscores the transformative potential of AI as a collaborative partner in ethical deliberation, paving the way for trustworthy, principled systems that can adapt to diverse real-world challenges.
AI and ethics 5(3): 1893–1910.
Citations: 0
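AIRE's core move, abstracting a candidate principle from judged cases and checking it for consistency against those cases, can be caricatured in a few lines. The cases, features, and judgements below are invented illustrations; the paper's actual framework uses AI-generated cases and far richer representations.

```python
# Toy illustration of abstracting an ethical principle from judged cases,
# in the spirit of AIRE. All cases, features, and judgements here are
# invented for illustration, not taken from the paper.

cases = [
    {"features": {"harm", "vulnerable_user"}, "judgement": "intervene"},
    {"features": {"harm"},                    "judgement": "intervene"},
    {"features": {"benign_request"},          "judgement": "allow"},
]

def abstract_principle(cases):
    """A candidate principle: the features shared by every case judged
    'intervene' (here, a simple set intersection)."""
    positives = [c["features"] for c in cases if c["judgement"] == "intervene"]
    return set.intersection(*positives) if positives else set()

def consistent(principle, case):
    """Check one case against the principle: predict 'intervene' iff all
    of the principle's features are present in the case."""
    predicted = "intervene" if principle <= case["features"] else "allow"
    return predicted == case["judgement"]

principle = abstract_principle(cases)
assert all(consistent(principle, c) for c in cases)
print(principle)  # -> {'harm'}
```

In the paper's framework this abstraction-and-checking loop is driven dynamically, with new cases generated to stress-test and refine the principle; the sketch above only shows a single pass over a fixed case base.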
Exploring the mutations of society in the era of generative AI
AI and ethics Pub Date: 2025-01-13 DOI: 10.1007/s43681-024-00632-1
Hubert Etienne, Brent Mittelstadt, Rob Reich, John Basl, Jeff Behrends, Dominique Lestel, Chloé Bakalar, Geoff Keeling, Giada Pistilli, Marta Cantero Gamito
AI and ethics 5(1): 1. No abstract available.
Citations: 0
Revolutionizing education and therapy for students with autism spectrum disorder: a scoping review of AI-driven tools, technologies, and ethical implications
AI and ethics Pub Date: 2025-01-10 DOI: 10.1007/s43681-024-00608-1
Fatemeh Habibi, Sadaf Sedaghatshoar, Tahereh Attar, Marzieh Shokoohi, Arash Kiani, Ali Naderi Malek
Abstract: This scoping review aims to update the understanding of AI-driven educational tools for students with autism spectrum disorder (ASD). Following Arksey and O'Malley's (Int J Soc Res Methodol 8(1):19–32, 2005) five-stage framework, we defined research questions, conducted a comprehensive literature review, selected relevant studies, and qualitatively analyzed the findings. An electronic search in multiple databases using terms related to AI in education and ASD yielded 128 articles. After rigorous screening, 13 studies were selected for data extraction and narrative synthesis. The review highlights the transformative potential of AI in enhancing educational and therapeutic outcomes for students with ASD. AI-driven tools, such as "LIFEisGAME" and "Empower Me," utilize advanced technologies like facial recognition and augmented reality to improve social and emotional skills. These tools provide real-time feedback, creating interactive and engaging learning environments tailored to individual needs. Additionally, AI applications in speech-generating devices and educational robots like Kaspar and Kiwi have shown promise in developing communication skills and enhancing social interactions. The narrative synthesis revealed key patterns and insights into the effectiveness of AI applications in supporting students with ASD. AI's ability to analyze behavioral and emotional data provides a holistic understanding of each student, allowing for personalized learning pathways and real-time adaptation of instructional strategies. However, the review also notes significant ethical challenges, including the need for extensive training for educators, data privacy concerns, and potential algorithmic biases. Ensuring the ethical deployment of AI technologies involves addressing these challenges by implementing robust data protection measures, fostering transparency in AI algorithms, and actively mitigating bias. In conclusion, AI has the potential to revolutionize the education and therapy of students with ASD by offering personalized, adaptive, and effective interventions. The implications of this review suggest that to fully harness the potential of AI, future efforts must focus on long-term studies validating AI effectiveness in diverse settings, developing standardized frameworks for ethical AI deployment, and fostering interdisciplinary collaboration. These steps are essential to ensure sustainable, equitable, and impactful integration of AI-driven technologies in educational and therapeutic contexts for students with ASD.
AI and ethics 5(3): 2055–2070.
Citations: 0
The need for an empirical research program regarding human–AI relational norms
AI and ethics Pub Date: 2025-01-09 DOI: 10.1007/s43681-024-00631-2
Madeline G. Reinecke, Andreas Kappes, Sebastian Porsdam Mann, Julian Savulescu, Brian D. Earp
Abstract: As artificial intelligence (AI) systems begin to take on social roles traditionally filled by humans, it will be crucial to understand how this affects people's cooperative expectations. In the case of human–human dyads, different relationships are governed by different norms: for example, how two strangers—versus two friends or colleagues—should interact when faced with a similar coordination problem often differs. How will the rise of 'social' artificial intelligence (and ultimately, superintelligent AI) complicate people's expectations about the cooperative norms that should govern different types of relationships, whether human–human or human–AI? Do people expect AI to adhere to the same cooperative dynamics as humans when in a given social role? Conversely, will they begin to expect humans in certain types of relationships to act more like AI? Here, we consider how people's cooperative expectations may pull apart between human–human and human–AI relationships, detailing an empirical proposal for mapping these distinctions across relationship types. We see the data resulting from our proposal as relevant for understanding people's relationship-specific cooperative expectations in an age of social AI, which may also forecast potential resistance towards AI systems occupying certain social roles. Finally, these data can form the basis for ethical evaluations: what relationship-specific cooperative norms we should adopt for human–AI interactions, or reinforce through responsible AI design, depends partly on empirical facts about what norms people find intuitive for such interactions (along with the costs and benefits of maintaining these). Toward the end of the paper, we discuss how these relational norms may change over time and consider the implications of this for the proposed research program.
AI and ethics 5(1): 71–80. Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00631-2.pdf
Citations: 0
AI to renew public employment services? Explanation and trust of domain experts
AI and ethics Pub Date: 2025-01-09 DOI: 10.1007/s43681-024-00629-w
Thomas Souverain
Abstract: It is often assumed in the explainable AI (XAI) literature that explaining AI predictions will enhance users' trust. To test this assumption empirically, we explored trust in XAI in the context of public policy. The French Employment Agency has deployed neural networks since 2021 to help job counsellors reject illegal employment offers. Digging into that case, we adopted a philosophical lens on trust in AI that is also compatible with measurement of both demonstrated and perceived trust. We performed a three-month experimental study joining sociological and psychological methods. Qualitative (S1): relying on sociological fieldwork methods, we conducted one-hour semi-structured interviews with job counsellors; across 5 regional agencies, we asked 18 counsellors to describe their work practices with AI warnings. Quantitative (S2): having gathered agents' perceptions, we quantified the reasons to trust AI, administering a questionnaire to three homogeneous cohorts of 100 counsellors each, given different information about the AI. We tested the impact of two local XAI methods: a general rule and a counterfactual rewording. Our survey provided empirical evidence for the link between XAI and trust, but it also stressed that the two XAI methods appeal to rationality differently: the rule helps advisors confirm that the criteria motivating AI predictions comply with the law, whereas the counterfactual raises doubts about the offer's quality. Although XAI enhanced both demonstrated and perceived trust, our study also revealed limits to full adoption, depending on the profiles of experts: XAI could more efficiently trigger trust, but only when addressing personal beliefs, or when rearranging work conditions to give experts the time to understand AI.
AI and ethics 5(1): 55–70.
Citations: 0
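The two local XAI styles compared in this study, a general rule and a counterfactual rewording, can be sketched for a toy offer-screening classifier. The criteria, thresholds, and wording below are hypothetical illustrations; they are not the French Employment Agency's actual model or legal rules.

```python
# Sketch of two local explanation styles for a toy offer-flagging rule.
# The criteria (missing salary, asking applicants for payment) are
# invented illustrations, not the agency's real screening logic.

def flag_offer(offer: dict) -> bool:
    """Toy classifier: flag an offer as suspect if it lacks a salary
    or asks the applicant for payment."""
    return offer.get("salary") is None or offer.get("asks_payment", False)

def rule_explanation(offer: dict) -> str:
    """General rule: state which criterion the offer violates."""
    if offer.get("asks_payment", False):
        return "Flagged: offers may not require payment from applicants."
    if offer.get("salary") is None:
        return "Flagged: offers must state a salary."
    return "Not flagged."

def counterfactual_explanation(offer: dict) -> str:
    """Counterfactual rewording: describe the minimal change that
    would flip the prediction."""
    if offer.get("asks_payment", False):
        return "If the offer did not ask applicants for payment, it would not be flagged."
    if offer.get("salary") is None:
        return "If the offer stated a salary, it would not be flagged."
    return "Not flagged."

offer = {"title": "Remote data entry", "salary": None}
print(rule_explanation(offer))
print(counterfactual_explanation(offer))
```

The contrast mirrors the study's finding: the rule points the counsellor at a legal criterion to verify, while the counterfactual draws attention to what is doubtful about the offer itself.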
Learning about AI ethics from cases: a scoping review of AI incident repositories and cases
AI and ethics Pub Date: 2025-01-09 DOI: 10.1007/s43681-024-00639-8
Simon Knight, Cormac McGrath, Olga Viberg, Teresa Cerratto Pargman
Abstract: Cases provide a practical resource for learning about the uses and challenges of AI applications. Cases give insight into how principles and values are implicated in real contexts, the trade-offs and different perspectives held regarding these contexts, and the—sometimes hidden—relationships between cases, relationships that may support analogical reasoning across contexts. We aim to (1) provide an approach for structuring ethics cases and (2) investigate existing case repository structures. We motivate a scoping review through a conceptual analysis of the desirable features of ethics cases. The review sought to retrieve repositories (sometimes known as observatories, catalogues, galleries, or incident databases) and their cases, for analysis of their expression of ethics concepts. We identify n = 14 repositories, extracting the case schema used in each to identify how this metadata can express ethical concepts. We find that most repositories focus on harm indicators, with some indicating positive impacts, but with little explicit reference to ethical concepts; a subset (n = 4) includes no structural elements addressing ethical concepts or impacts. From the total cases (n = 2000) across repositories, we extract a subset addressing education (n = 100). These are grouped by topic, with a structured content analysis of the ethical implications of one sub-theme, offering qualitative insights into the ethical coverage. Our conceptual analysis and empirical review exemplify a model for ethics cases (shorthanded as Ethics-case-CPR), while highlighting gaps both in existing case repositories and in specific examples of cases.
AI and ethics 5(3): 2037–2053. Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00639-8.pdf
Citations: 0
Waging warfare against states: the deployment of artificial intelligence in cyber espionage
AI and ethics Pub Date: 2025-01-08 DOI: 10.1007/s43681-024-00628-x
Wan Rosalili Wan Rosli
Abstract: Cyber espionage has long been viewed as a significant risk to nation-states, especially in the area of security and the protection of Critical National Infrastructures. The race toward digitisation has also raised concerns about how emerging technologies are defining the way cyber activities are linked to the waging of warfare between states. Real-world crimes have since found a place in cyberspace and, with high connectivity, have exposed various actors to risks and vulnerabilities, including cyber espionage. Cyber espionage has always been a national security issue, as it does not only target states but also affects public–private networks, corporations, and individuals. The challenge of crimes committed within the cyber realm is how the nature of cybercrime distorts the dichotomy of state responsibility in responding to cyber threats and vulnerabilities. Furthermore, the veil of anonymity and emerging technologies such as artificial intelligence have provided opportunities for such crimes to have a larger-scale impact on the state. The imminent threat of cyber espionage is affecting economic and political interactions between nation-states and changing the nature of modern conflict. Given these implications, this paper discusses the current legal landscape governing cyber espionage and the impact of the use of artificial intelligence in the commission of such crimes.
AI and ethics 5(1): 47–53. Open access PDF: https://link.springer.com/content/pdf/10.1007/s43681-024-00628-x.pdf
Citations: 0
Legal and ethical implications of AI-based crowd analysis: the AI Act and beyond
AI and ethics Pub Date: 2025-01-01 Epub Date: 2025-01-07 DOI: 10.1007/s43681-024-00644-x
Emmeke Veltmeijer, Charlotte Gerritsen
Abstract: The increasing global population and the consequent rise in crowded environments have amplified the risks of accidents and tragedies. This underscores the need for effective crowd management strategies, with Artificial Intelligence (AI) holding potential to complement traditional methods. While AI offers promise in analysing crowd dynamics and predicting escalations, its deployment raises significant ethical concerns regarding privacy, bias, accuracy, and accountability. This paper investigates the legal and ethical implications of AI in automated crowd analysis, with a focus on the European perspective. We examine the effect of the GDPR and the recently adopted AI Act on the field. The study then delves into remaining concerns post-legislation and proposes recommendations for ethical deployment. Key findings highlight challenges in notifying individuals of data usage, protecting vulnerable groups, balancing privacy with safety, and mitigating biased outcomes. Recommendations advocate for non-invasive data collection methods, refraining from predictive and decision-making AI systems, contextual considerations, and individual responsibility. The recommendations offer a foundational framework for ethical AI deployment, with universal applicability to benefit citizens globally.
AI and ethics 5(3): 3173–3183. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12103326/pdf/
Citations: 0
Tempered enthusiasm by interviewed experts for synthetic data and ELSI checklists for AI in medicine
AI and ethics Pub Date: 2025-01-01 Epub Date: 2025-01-10 DOI: 10.1007/s43681-024-00652-x
Laura Y Cabrera, Jennifer Wagner, Sara Gerke, Daniel Susser
Abstract: Synthetic data are increasingly being used in data-driven fields. While synthetic data is a promising tool in medicine, it raises new ethical, legal, and social implications (ELSI) challenges. There is a recognized need for well-designed approaches and standards for documenting and communicating relevant information about artificial intelligence (AI) research datasets and models, including consideration of the many ELSI challenges. This study investigates the ethical dimensions of synthetic data and explores the utility and challenges of ELSI-focused computational checklists for biomedical AI via semi-structured interviews with subject matter experts. Our results suggest that AI experts have tempered views about the promises and challenges of both synthetic data and ELSI-focused computational checklists. Experts discussed a number of ELSI issues covered by previous literature on the topic, such as bias and privacy, yet other less-discussed ELSI issues, such as social justice implications and issues of trust, were also raised. When discussing ELSI-focused computational checklists, our participants highlighted the challenges connected to developing and implementing them.
Supplementary information: The online version contains supplementary material available at 10.1007/s43681-024-00652-x.
AI and ethics 5(3): 3241–3254. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12103352/pdf/
Citations: 0
An approach to sociotechnical transparency of social media algorithms using agent-based modelling
AI and ethics Pub Date: 2025-01-01 Epub Date: 2024-07-29 DOI: 10.1007/s43681-024-00527-1
Anna Gausen, Ce Guo, Wayne Luk
Abstract: The recommendation algorithms on social media platforms are hugely impactful: they shape information flow and human connection on an unprecedented scale. Despite growing criticism of the social impact of these algorithms, they remain opaque, and transparency is an ongoing challenge. This paper makes three contributions. (1) We introduce the concept of sociotechnical transparency, defined as transparency approaches that consider both the technical system and how it interacts with users and the environment in which it is deployed. We propose that sociotechnical approaches will improve the understanding of social media algorithms for policy-makers and the public. (2) We present an approach to sociotechnical transparency using agent-based modelling, which overcomes a number of challenges with existing approaches. This is a novel application of agent-based modelling to provide transparency into how the recommendation algorithm prioritises different curation signals for a topic. (3) This agent-based model has a novel implementation of a multi-objective recommendation algorithm that is calibrated and empirically validated with data collected from X, previously Twitter. We show that agent-based modelling can provide useful insights into how the recommendation algorithm prioritises different curation signals, and we can begin to explore whether the priorities of the recommendation algorithm align with what platforms say it is doing and with what the public wants.
AI and ethics 5(2): 1827–1845. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058895/pdf/
Citations: 0
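The kind of transparency probe this paper describes can be illustrated with a minimal agent-based sketch: agents see a feed ranked by weighted curation signals, and their engagement feeds back into the ranking. The signals, weights, and engagement model below are invented for illustration; the paper's model is calibrated and validated against data from X.

```python
import random

# Minimal agent-based sketch of a multi-objective recommendation feed.
# Signal names, weights, and the engagement model are hypothetical
# illustrations, not the paper's calibrated model.

SIGNAL_WEIGHTS = {"recency": 0.2, "engagement": 0.5, "follows_author": 0.3}

def score(post: dict, user: dict) -> float:
    """Combine curation signals into a single ranking score."""
    return (SIGNAL_WEIGHTS["recency"] * post["recency"]
            + SIGNAL_WEIGHTS["engagement"] * post["engagement"]
            + SIGNAL_WEIGHTS["follows_author"]
              * (1.0 if post["author"] in user["follows"] else 0.0))

def simulate_step(users, posts):
    """One step: each agent sees its top-ranked post and may engage,
    which feeds back into that post's engagement signal."""
    random.seed(0)  # reproducible toy run
    for user in users:
        top = max(posts, key=lambda p: score(p, user))
        if random.random() < 0.9:     # toy engagement probability
            top["engagement"] += 0.1  # feedback loop into the ranking

users = [{"follows": {"a"}}, {"follows": {"b"}}]
posts = [{"author": "a", "recency": 0.9, "engagement": 0.1},
         {"author": "b", "recency": 0.2, "engagement": 0.8}]
simulate_step(users, posts)
```

By varying SIGNAL_WEIGHTS and observing which posts come to dominate agents' feeds over repeated steps, one can probe, in the spirit of the paper, how an algorithm's signal priorities shape information flow.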