{"title":"重塑人工智能:简介","authors":"Dominic Smith, Natasha Lushetich, Tina Röck, Edzia Carvalho, Kenny Lewis, Gabriele Schweikert","doi":"10.1080/20539320.2023.2213008","DOIUrl":null,"url":null,"abstract":"The expression “AI” has become as commonplace as “computer.” While many people have a relatively clear idea of what an AI system or a computer does or can do, fewer have an idea of how precisely the “intelligence” in AI or “computing” in “computer” operates in practice. As machine learning scientist Joe Davidson humorously put it: “[w]hen you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.” But does this mean that AI is a form of glorified statistics and that, consequently, there is no intelligence in “artificial intelligence”? Or, in a more quotidian register, that machine learning scientists have an ironic sense of humour? Regardless of whether we see this through the “object of discussion” or the “subject discussing” lens, it has by now become established that “artificial intelligence” is not an algorithmic variation on human intelligence, as authors such as Niels Nilsson have suggested (2010). Rather, it’s the system’s “ability to make generalizations” based on “limited data” and “iterative sequences.” How these iterative sequences are organised is clearly key. We are all familiar with systems that derive reductive conclusions from oversimplified procedures. This kind of “intelligence” is probably best labelled “obtuseness” since it shares many features with human obtuseness. Unlike human obtuseness, however, the artificial variant creates what could be called “future from structure” by turning possibilities into probabilities, and probabilities into mathematical-logical “necessities” many of which have lasting consequences. Artificial obtuseness can also be seen as a form of machinic enunciation, a variation on authoritarianism, minus the human agent: “you will either use the grossly reductive online form or you will not be able to submit your health insurance claim.” Or: “despite the fact that the decision to reject your health insurance claim was arrived at using oversimplified, and therefore inaccurate parameters, the decision is irrevocable.” Another category that differs from (what we might call) “artificial intelligence proper” is artificial militelligence (a mashup of “military” and “intelligence”). This is essentially a target machine concerned with understanding the enemy’s whereabouts, movements, and practices. Its purpose is to translate information into actionable knowledge and strategic advantage. 
The expression “AI” has become as commonplace as “computer.” While many people have a relatively clear idea of what an AI system or a computer does or can do, fewer have an idea of precisely how the “intelligence” in AI or the “computing” in “computer” operates in practice. As machine learning scientist Joe Davidson humorously put it: “[w]hen you’re fundraising, it’s Artificial Intelligence. When you’re hiring, it’s Machine Learning. When you’re implementing, it’s logistic regression.” But does this mean that AI is a form of glorified statistics and that, consequently, there is no intelligence in “artificial intelligence”? Or, in a more quotidian register, that machine learning scientists have an ironic sense of humour? Regardless of whether we see this through the “object of discussion” or the “subject discussing” lens, it has by now become established that “artificial intelligence” is not an algorithmic variation on human intelligence, as authors such as Nils Nilsson have suggested (2010). Rather, it is the system’s “ability to make generalizations” based on “limited data” and “iterative sequences.” How these iterative sequences are organised is clearly key. We are all familiar with systems that derive reductive conclusions from oversimplified procedures. This kind of “intelligence” is probably best labelled “obtuseness,” since it shares many features with human obtuseness. Unlike human obtuseness, however, the artificial variant creates what could be called “future from structure” by turning possibilities into probabilities, and probabilities into mathematical-logical “necessities,” many of which have lasting consequences. Artificial obtuseness can also be seen as a form of machinic enunciation, a variation on authoritarianism, minus the human agent: “you will either use the grossly reductive online form or you will not be able to submit your health insurance claim.” Or: “despite the fact that the decision to reject your health insurance claim was arrived at using oversimplified, and therefore inaccurate, parameters, the decision is irrevocable.” Another category that differs from (what we might call) “artificial intelligence proper” is artificial militelligence (a mashup of “military” and “intelligence”). This is essentially a target machine concerned with understanding the enemy’s whereabouts, movements, and practices. Its purpose is to translate information into actionable knowledge and strategic advantage. In order to achieve this, artificial militelligence forces order on disorder, often by violent means.
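To make the “glorified statistics” remark and the idea of “generalizations” from “limited data” and “iterative sequences” concrete, the following minimal sketch (not part of the article; the dataset, variable names, and numbers are entirely hypothetical) fits a logistic regression by an iterative sequence of small corrections and ends by turning a possibility into a probability for an unseen case.

```python
# A minimal, illustrative sketch (not from the article): logistic regression
# fit by iterative gradient descent on a tiny, hypothetical dataset, showing
# how "limited data" plus an "iterative sequence" of updates yields a
# probability for a case the system has never seen.
import math

# Hypothetical "limited data": a single feature x and a binary outcome y.
xs = [1.0, 2.0, 3.0, 5.0, 6.0, 8.0]
ys = [0,   0,   0,   1,   1,   1]

w, b, lr = 0.0, 0.0, 0.1  # weight, bias, learning rate

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# The "iterative sequence": repeated small corrections to w and b.
for _ in range(5000):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)   # the model's current probability for this case
        grad_w += (p - y) * x
        grad_b += (p - y)
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

# "Generalization" from limited data: a probability for an unseen input.
print(f"P(y=1 | x=4.0) = {sigmoid(w * 4.0 + b):.2f}")
```

Whether such a probability then hardens into one of the “mathematical-logical ‘necessities’” the authors describe is, of course, a matter of how the system’s output is deployed, not of the statistics themselves.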