AI & Society. Pub Date: 2023-07-21. DOI: 10.1007/s00146-023-01725-x
Jacqueline Harding, William D’Alessandro, N. G. Laskowski, Robert Long
"AI language models cannot replace human research participants." AI & Society 39(5): 2603–2605. Journal Article.
AI & Society. Pub Date: 2023-07-21. DOI: 10.1007/s00146-023-01720-2
Juan Ignacio del Valle, Francisco Lara
"AI-powered recommender systems and the preservation of personal autonomy." AI & Society 39(5): 2479–2491. Journal Article. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01720-2.pdf
Abstract: Recommender Systems (RecSys) have been around since the early days of the Internet, helping users navigate the vast ocean of information and the ever-growing range of options available to them. The range of tasks for which a RecSys can be used keeps expanding as technical capabilities grow, with the disruption of Machine Learning marking a tipping point in this domain, as in many others. However, the increase in the technical capabilities of AI-powered RecSys has not been accompanied by a thorough consideration of their ethical implications and, despite RecSys being a well-established technical domain, their potential impacts on users remain under-assessed. This paper aims to fill this gap with respect to one of the main impacts of RecSys: personal autonomy. We first describe how technology can affect human values and present a suitable methodology for identifying these effects and mitigating potential harms: Value Sensitive Design (VSD). We use VSD to carry out a conceptual investigation of personal autonomy in the context of a generic RecSys and draw on a nuanced account of procedural autonomy to focus on two of its components: competence and authenticity. We present the results of our inquiry as a value hierarchy and, as an example, apply it to the design of a speculative RecSys.
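The value hierarchy the authors mention is a standard VSD structure (a value, its components, and the norms or design requirements that operationalize them). A minimal sketch of what such a hierarchy could look like for this case follows; only "personal autonomy", "competence", and "authenticity" come from the abstract, and every norm below is a hypothetical illustration, not the paper's actual result:

```python
# Illustrative VSD-style value hierarchy for a recommender system.
# Only "personal autonomy", "competence", and "authenticity" are taken from
# the abstract; each norm string below is a hypothetical example.
value_hierarchy = {
    "personal autonomy": {
        "competence": [
            "users can understand why an item was recommended",
            "users can correct the system's model of their preferences",
        ],
        "authenticity": [
            "recommendations track preferences the user reflectively endorses",
            "no covert optimisation for engagement over the user's own goals",
        ],
    }
}

def design_requirements(hierarchy):
    """Flatten the hierarchy into a checklist of design requirements."""
    return [req
            for components in hierarchy.values()
            for reqs in components.values()
            for req in reqs]

print(len(design_requirements(value_hierarchy)))  # 4
```

The point of the flattening step is that a value hierarchy is actionable only once its leaves can be read off as concrete requirements a designer can check.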
AI & Society. Pub Date: 2023-07-20. DOI: 10.1007/s00146-023-01722-0
Benedetta Giovanola, Simona Tiribelli
"Correction: Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms." AI & Society 39(5): 2637. Journal Article.
AI & Society. Pub Date: 2023-07-12. DOI: 10.1007/s00146-023-01724-y
Jacob Browning
"Personhood and AI: Why large language models don’t understand us." AI & Society 39(5): 2499–2506. Journal Article.
Abstract: Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this with a different account of personhood, one where an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue that current LLMs, given the way they are designed and trained, cannot be persons, either social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.
AI & Society. Pub Date: 2023-07-12. DOI: 10.1007/s00146-023-01718-w
Jitsama Tanlamai, Warut Khern-am-nuai, Yossiri Adulyasak
"Identifying arbitrage opportunities in retail markets with artificial intelligence." AI & Society 39(5): 2615–2630. Journal Article. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01718-w.pdf
Abstract: This study uses an artificial intelligence (AI) model to identify arbitrage opportunities in the retail marketplace. Specifically, we develop an AI model that predicts the optimal purchasing point based on the price movement of products in the market. The model is trained on a large dataset collected from an online marketplace in the United States and is enhanced by incorporating user-generated content (UGC), which we show empirically to be significantly informative. Overall, the AI model attains a precision rate above 90% and a recall rate above 80% in an out-of-sample test. In addition, we conduct a field experiment to verify the external validity of the AI model in a real-life setting. The model identifies 293 arbitrage opportunities during the one-year field experiment and generates a profit of $7.06 per arbitrage opportunity. These results demonstrate that AI performs exceptionally well in identifying arbitrage opportunities with tangible economic value in retail markets. Our results also yield important implications regarding the role of AI in society, from both the consumer and the firm perspective.
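For orientation, the headline figures above can be sanity-checked with a short sketch. The confusion-matrix counts below are hypothetical (chosen so that the 293 flagged opportunities match the abstract); the paper itself reports only the aggregate rates and the per-opportunity profit:

```python
def precision(tp, fp):
    """Fraction of flagged opportunities that were genuine."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of genuine opportunities that were flagged."""
    return tp / (tp + fn)

# Hypothetical counts consistent with the reported rates: 293 flagged
# opportunities in total, precision above 90%, recall above 80%.
tp, fp, fn = 270, 23, 57
print(round(precision(tp, fp), 3))  # 0.922
print(round(recall(tp, fn), 3))     # 0.826

# Aggregate profit implied by the field experiment as reported:
# 293 opportunities at $7.06 each.
total_profit = 293 * 7.06
print(round(total_profit, 2))  # 2068.58
```

The sketch makes the trade-off concrete: precision governs how often an executed arbitrage actually pays off, while recall governs how much of the available profit the model captures.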
AI & Society. Pub Date: 2023-07-12. DOI: 10.1007/s00146-023-01723-z
Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, Luciano Floridi
"Taking AI risks seriously: a new assessment model for the AI Act." AI & Society 39(5): 2493–2497. Journal Article. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01723-z.pdf
Abstract: The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.
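The three-level structure described above (risk types, each assessed through determinants, each determinant scored through its drivers) can be sketched computationally. Everything below beyond that structure is a hypothetical illustration: the determinant and driver names, the scores, and the product-interaction aggregation rule are assumptions echoing the IPCC-style framing, not the paper's actual model:

```python
# Illustrative, scenario-based risk magnitude estimate. The three-level
# structure (risk types -> determinants -> drivers) follows the abstract;
# all names, scores, and the aggregation rule are hypothetical.
def determinant_score(drivers):
    """A determinant's score as the mean of its driver scores (each in 0..1)."""
    return sum(drivers.values()) / len(drivers)

def scenario_magnitude(determinants):
    """Product interaction of determinant scores, in the IPCC spirit of
    risk arising from the interaction of its determinants."""
    magnitude = 1.0
    for drivers in determinants.values():
        magnitude *= determinant_score(drivers)
    return magnitude

# Hypothetical scenario: an LLM offering medical triage advice, assessed
# against two risk types.
risk_types = {
    "safety": {
        "hazard":        {"error_rate": 0.6, "harm_severity": 0.9},
        "exposure":      {"user_base": 0.8},
        "vulnerability": {"user_expertise": 0.7},
    },
    "privacy": {
        "hazard":        {"data_sensitivity": 0.5, "leak_likelihood": 0.3},
        "exposure":      {"user_base": 0.8},
        "vulnerability": {"consent_quality": 0.4},
    },
}

# The dominant risk type for the scenario drives the category assignment.
overall = max((scenario_magnitude(d), name) for name, d in risk_types.items())
print(overall[1], round(overall[0], 2))  # safety 0.42
```

Scoring the scenario rather than the field of application is what allows the same GPAI system to land in different AIA risk categories depending on how and where it is deployed.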
AI & Society. Pub Date: 2023-07-10. DOI: 10.1007/s00146-023-01716-y
Pertti Saariluoma, Antero Karvonen
"Theory languages in designing artificial intelligence." AI & Society 39(5): 2249–2258. Journal Article. Open access PDF: https://link.springer.com/content/pdf/10.1007/s00146-023-01716-y.pdf
Abstract: The foundations of AI design discourse are worth analyzing. Here, attention is paid to the nature of theory languages used in designing new AI technologies, because the limits of these languages can clarify some fundamental questions in the development of AI. We discuss three types of theory language used in designing AI products: formal, computational, and natural. Formal languages, such as mathematics, logic, and programming languages, have fixed meanings and no actual-world semantics. They are context- and practically content-free. Computational languages use terms referring to the actual world, i.e., to entities, events, and thoughts. Thus, computational languages have actual-world references and semantics; they are no longer context- or content-free. However, computational languages still have fixed meanings and, for this reason, limited domains of reference. Finally, unlike formal and computational languages, natural languages are creative, dynamic, and productive. Consequently, they can refer to an unlimited number of objects and their attributes in an unlimited number of domains. The differences between the three theory languages enable us to reflect on the traditional problems of strong and weak AI.