AI & Society: Latest Publications

As you sow, so shall you reap: rethinking humanity in the age of artificial intelligence
IF 2.9
AI & Society Pub Date: 2024-06-10 DOI: 10.1007/s00146-024-01983-3
Monalisa Bhattacherjee, Sweta Sinha
{"title":"As you sow, so shall you reap: rethinking humanity in the age of artificial intelligence","authors":"Monalisa Bhattacherjee, Sweta Sinha","doi":"10.1007/s00146-024-01983-3","DOIUrl":"10.1007/s00146-024-01983-3","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1541 - 1542"},"PeriodicalIF":2.9,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141361722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance
IF 2.9
AI & Society Pub Date: 2024-06-08 DOI: 10.1007/s00146-024-01987-z
Pascal D. Koenig
{"title":"Attitudes toward artificial intelligence: combining three theoretical perspectives on technology acceptance","authors":"Pascal D. Koenig","doi":"10.1007/s00146-024-01987-z","DOIUrl":"10.1007/s00146-024-01987-z","url":null,"abstract":"<div><p>Evidence on AI acceptance comes from a diverse field comprising public opinion research and largely experimental studies from various disciplines. Differing theoretical approaches in this research, however, imply heterogeneous ways of studying AI acceptance. The present paper provides a framework for systematizing different uses. It identifies three families of theoretical perspectives informing research on AI acceptance—user acceptance, delegation acceptance, and societal adoption acceptance. These models differ in scope, each has elements specific to them, and the connotation of technology acceptance thus changes when shifting perspective. The discussion points to a need for combining the three perspectives as they have all become relevant for AI. A combined approach serves to systematically relate findings from different studies. And as AI systems affect people in different constellations and no single perspective can accommodate them all, building blocks from several perspectives are needed to comprehensively study how AI is perceived in society.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1333 - 1345"},"PeriodicalIF":2.9,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01987-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141370709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Reconfiguring the alterity relation: the role of communication in interactions with social robots and chatbots
IF 2.9
AI & Society Pub Date: 2024-06-06 DOI: 10.1007/s00146-024-01953-9
Dakota Root
{"title":"Reconfiguring the alterity relation: the role of communication in interactions with social robots and chatbots","authors":"Dakota Root","doi":"10.1007/s00146-024-01953-9","DOIUrl":"10.1007/s00146-024-01953-9","url":null,"abstract":"<div><p>Don Ihde’s <i>alterity relation</i> focuses on the quasi-otherness of dynamic technologies that interact with humans. The alterity relation is one means to study relations between humans and artificial intelligence (AI) systems . However, research on alterity relations has not defined the difference between playing with a toy, using a computer, and interacting with a social robot or chatbot. We suggest that Ihde’s quasi-other concept fails to account for the interactivity, autonomy, and adaptability of social robots and chatbots, which more closely approach human alterity. In this article, we will examine experiences with a chatbot, Replika, and a humanoid robot, a RealDoll, to show how some users experience AI systems as companions<i>.</i> First, we show that the perception of social robots and chatbots as intimate companions is grounded in communication. Advances in natural language processing (NLP) and natural language generation (NLG) allow a relationship to form between some users and social robots and chatbots. In this relationship, some users experience social robots and chatbots as more than quasi-others. We will use Kanemitsu’s another-other concept to analyze cases where social robots and chatbots should be distinguished from quasi-others.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1321 - 1332"},"PeriodicalIF":2.9,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01953-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141378055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AI in situated action: a scoping review of ethnomethodological and conversation analytic studies
IF 2.9
AI & Society Pub Date: 2024-06-04 DOI: 10.1007/s00146-024-01919-x
Jakub Mlynář, Lynn de Rijk, Andreas Liesenfeld, Wyke Stommel, Saul Albert
{"title":"AI in situated action: a scoping review of ethnomethodological and conversation analytic studies","authors":"Jakub Mlynář,&nbsp;Lynn de Rijk,&nbsp;Andreas Liesenfeld,&nbsp;Wyke Stommel,&nbsp;Saul Albert","doi":"10.1007/s00146-024-01919-x","DOIUrl":"10.1007/s00146-024-01919-x","url":null,"abstract":"<div><p>Despite its elusiveness as a concept, ‘artificial intelligence’ (AI) is becoming part of everyday life, and a range of empirical and methodological approaches to social studies of AI now span many disciplines. This article reviews the scope of ethnomethodological and conversation analytic (EM/CA) approaches that treat AI as a phenomenon emerging in and through the situated organization of social interaction. Although this approach has been very influential in the field of computational technology since the 1980s, AI has only recently emerged as such a pervasive part of daily life to warrant a sustained empirical focus in EM/CA. Reviewing over 50 peer-reviewed publications, we find that the studies focus on various social and group activities such as task-oriented situations, semi-experimental setups, play, and everyday interactions. They also involve a range of participant categories including children, older participants, and people with disabilities. Most of the reviewed studies apply CA’s conceptual apparatus, its approach to data analysis, and core topics such as turn-taking and repair. We find that across this corpus, studies center on three key themes: openings and closing the interaction, miscommunication, and non-verbal aspects of interaction. In the discussion, we reflect on EM studies that differ from those in our corpus by focusing on praxeological respecifications of AI-related phenomena. Concurrently, we offer a critical reflection on the work of literature reviewing, and explore the tortuous relationship between EM and CA in the area of research on AI.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1497 - 1527"},"PeriodicalIF":2.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01919-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141267170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Hunters not beggars
IF 2.9
AI & Society Pub Date: 2024-06-04 DOI: 10.1007/s00146-024-01978-0
Mark Ressler
{"title":"Hunters not beggars","authors":"Mark Ressler","doi":"10.1007/s00146-024-01978-0","DOIUrl":"10.1007/s00146-024-01978-0","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1539 - 1540"},"PeriodicalIF":2.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141267971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
We’re only human after all: a critique of human-centred AI
IF 2.9
AI & Society Pub Date: 2024-06-04 DOI: 10.1007/s00146-024-01976-2
Mark Ryan
{"title":"We’re only human after all: a critique of human-centred AI","authors":"Mark Ryan","doi":"10.1007/s00146-024-01976-2","DOIUrl":"10.1007/s00146-024-01976-2","url":null,"abstract":"<div><p>The use of a ‘human-centred’ artificial intelligence approach (HCAI) has substantially increased over the past few years in academic texts (1600 +); institutions (27 Universities have HCAI labs, such as Stanford, Sydney, Berkeley, and Chicago); in tech companies (e.g., Microsoft, IBM, and Google); in politics (e.g., G7, G20, UN, EU, and EC); and major institutional bodies (e.g., World Bank, World Economic Forum, UNESCO, and OECD). Intuitively, it sounds very appealing: placing human concerns at the centre of AI development and use. However, this paper will use insights from the works of Michel Foucault (mostly <i>The Order of Things</i>) to argue that the HCAI approach is deeply problematic in its assumptions. In particular, this paper will criticise four main assumptions commonly found within HCAI: human–AI hybridisation is desirable and unproblematic; humans are not currently at the centre of the AI universe; we should use humans as a way to guide AI development; AI is the next step in a continuous path of human progress; and increasing human control over AI will reduce harmful bias. This paper will contribute to the field of philosophy of technology by using Foucault's analysis to examine assumptions found in HCAI [it provides a Foucauldian conceptual analysis of a current approach (human-centredness) that aims to influence the design and development of a transformative technology (AI)], it will contribute to AI ethics debates by offering a critique of human-centredness in AI (by choosing Foucault, it provides a bridge between older ideas with contemporary issues), and it will also contribute to Foucault studies (by using his work to engage in contemporary debates, such as AI).</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1303 - 1319"},"PeriodicalIF":2.9,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01976-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141266672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
If AI is our co-pilot, who is the captain?
IF 2.9
AI & Society Pub Date: 2024-05-30 DOI: 10.1007/s00146-024-01965-5
K. Woods
{"title":"If AI is our co-pilot, who is the captain?","authors":"K. Woods","doi":"10.1007/s00146-024-01965-5","DOIUrl":"10.1007/s00146-024-01965-5","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1537 - 1538"},"PeriodicalIF":2.9,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Ethics and administration of the ‘Res publica’: dynamics of democracy
IF 2.9
AI & Society Pub Date: 2024-05-30 DOI: 10.1007/s00146-024-01963-7
Satinder P. Gill
{"title":"Ethics and administration of the ‘Res publica’: dynamics of democracy","authors":"Satinder P. Gill","doi":"10.1007/s00146-024-01963-7","DOIUrl":"10.1007/s00146-024-01963-7","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 3","pages":"825 - 827"},"PeriodicalIF":2.9,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142415160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Morality first?
IF 2.9
AI & Society Pub Date: 2024-05-28 DOI: 10.1007/s00146-024-01926-y
Nathaniel Sharadin
{"title":"Morality first?","authors":"Nathaniel Sharadin","doi":"10.1007/s00146-024-01926-y","DOIUrl":"10.1007/s00146-024-01926-y","url":null,"abstract":"<div><p>The Morality First strategy for developing AI systems that can represent and respond to human values aims to <i>first</i> develop systems that can represent and respond to <i>moral</i> values. I argue that Morality First and other X-First views are unmotivated. Moreover, if one particular philosophical view about value is true, these strategies are positively distorting. The natural alternative according to which no domain of value comes “first” introduces a new set of challenges and highlights an important but otherwise obscured problem for e-AI developers.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1289 - 1301"},"PeriodicalIF":2.9,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01926-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143818307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Perceived responsibility in AI-supported medicine
IF 2.9
AI & Society Pub Date: 2024-05-22 DOI: 10.1007/s00146-024-01972-6
S. Krügel, J. Ammeling, M. Aubreville, A. Fritz, A. Kießig, Matthias Uhl
{"title":"Perceived responsibility in AI-supported medicine","authors":"S. Krügel,&nbsp;J. Ammeling,&nbsp;M. Aubreville,&nbsp;A. Fritz,&nbsp;A. Kießig,&nbsp;Matthias Uhl","doi":"10.1007/s00146-024-01972-6","DOIUrl":"10.1007/s00146-024-01972-6","url":null,"abstract":"<div><p>In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople’s attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compare people’s judgments in a setting in which physicians are supported by an AI-based recommender system to a setting in which they are supported by a human colleague. It turns out that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger when people believe that AI may become conscious at some point. In consequence, less responsibility is attributed to human agents in settings with hybrid diagnostic teams than in settings with human-only diagnostic teams. Our findings may have implications for behavior exhibited in contexts of collaborative medical decision making with AI-based as opposed to human recommenders because less responsibility is attributed to agents who have the mental capacity to care about outcomes.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"40 3","pages":"1485 - 1495"},"PeriodicalIF":2.9,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-024-01972-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141112934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0