{"title":"超越 \"人工智能助推器","authors":"Karen Yeung","doi":"10.1111/newe.12400","DOIUrl":null,"url":null,"abstract":"<p>‘AI boosterism’ has characterised British industrial policy for digital and data-enabled technologies under successive Conservative administrations, intended to ‘turbocharge’ artificial intelligence (AI) sector growth. Although former prime minister, Rishi Sunak, believed that public trust in AI was essential, evident in his initiatives championing AI safety (such as the AI Safety Summit in Bletchley Park in November 2023), Sunak retained an unwavering belief that existing laws, complemented by voluntary cooperation between industry and government, would address AI's threats and harms via technical fixes.</p><p>Such ‘techno-solutionist’ fantasies have hitherto dominated digital sector policy, in which AI is viewed as the key to solving society's most intractable ills. It is rooted in a pernicious fallacy that ‘regulation stifles innovation’ and must be strenuously avoided if the British economy is to thrive. AI boosterism accepts at face value the bold marketing claims of software vendors, naively believing that if an AI system can perform a given function, it will necessarily deliver its promised benefits in real-world settings once implemented.1 It also ignores the already-evident adverse impacts of AI systems, including ever-growing instances of ‘algorithmic injustice’ involving the use of automated systems which have resulted in human rights violations, particularly when used by public authorities to (a) inform (or automate) decisions about whether individuals are entitled to benefits and services or (b) subject them to unwanted investigation or detention on the basis that they have been computationally evaluated as ‘risky’.2 Likewise, it conveniently ignores the systemic adverse impacts of algorithmic systems, including their ecological toll, the deepening concentration of economic power, and the erosion of democracy, as ever-more powerful tools are harnessed to propagate misinformation, exploitation and pervasive surveillance.3 AI sector growth cannot be justified at all costs, and whether bigger implies ‘better’, demands consideration of ‘better for whom?’ and ‘with respect to what norms, goals and collective values?’.</p><p>To deliver on its stated desire to ‘make AI work for everyone’,4 the new Labour government must change tack. It needs to abandon these false narratives and magical thinking and establish a regulatory governance framework that serves the public interest. In this article, I explain what this framework should consist of, beginning by clarifying what regulation is for and why it matters.</p><p>In constructing legal guardrails, the new government must focus on how and why digital systems can produce adverse impacts. Algorithmic systems can have capabilities far beyond those imaginable when most of our legal rules and frameworks were established. Legislators must now grapple with their unique risks, whether algorithms take a simple, rule-based form or rely on deep learning techniques, particularly when deployed in ways that have safety-critical or rights-critical consequences: in other words, if they have ‘high-stakes’ implications. 
This is precisely the focus of the collaborative analysis undertaken by an informal group of UK sector regulators, the Digital Regulation Cooperation Forum (DRCF).6 Academic research demonstrates that the adverse impacts of algorithmic systems arise outside the remit of existing sectoral regulators, are often unintended and many are opaque, particularly those resulting in violations to human rights and/or the corrosion of our social, political and cultural fabric.7 Yet, contrary to the UK's 2023 white paper on AI, expecting the DRCF to provide comprehensive and effective oversight simply by publishing a set of high-level, non-binding ‘motherhood and apple-pie’ principles concerning transparency and fairness, without additional legislative measures, information-gathering powers or additional resources, defies common sense.8</p><p>Establishing a trustworthy basis for algorithmic systems requires a comprehensive oversight regime that supports cross-sectoral coordination to provide a clear, stable framework of legally binding rules, monitored and enforced by an independent, skilled and properly resourced regulator, accountable to parliament and equipped with information gathering and investigative powers to apply those rules in a fair, transparent and effective manner without fear or favour.</p><p>The portrayal of legal regulation as the enemy of innovation, peddled by those in thrall to techno-solutionism, fails to acknowledge that contemporary pharmaceutical regulation, despite many shortcomings, enabled the development and rollout of safe, effective and affordable Covid-19 vaccines at unprecedented speed. It is a powerful, recent demonstration that effective regulatory oversight is legally, institutionally and politically possible in the service of the public good. In earlier decades, snake-oil salesmen were commonplace in the absence of clear legal frameworks and effective independent oversight and enforcement to ensure that new drugs were both safe and efficacious. It was not until the devastation of Thalidomide that the legal and institutional reforms needed to ensure the efficacy and safety of medicines were put in place and taken seriously.</p><p>History indicates that if we wish to facilitate the development of socially beneficial yet powerful new technologies, we must establish a legitimate and effective regulatory framework to protect people and communities from harms and wrongs. A clear-eyed vision of an AI-enabled Britain would begin by recognising that we are only at the beginning of the AI ‘revolution’. Its promise to make our lives ‘better’ remains marketing rhetoric, without robust evidence demonstrating how, and how much, AI and other forms of digital automation actually serve the needs of people in specific contexts and circumstances, and at what cost.</p><p>Hence, our challenge is to <i>learn</i> how to build and collaborate effectively with machines in ways that enhance human flourishing, while taking account of their costs: not merely the direct cost of adoption, but also those less visible but perhaps more serious adverse impacts. 
Instead of AI boosterism and a misguided belief in magic bullets,20 the UK needs a clear and effective regulatory governance framework that establishes and nourishes a trustworthy ecosystem, thereby fostering the development and sensitive implementation of automated systems that deliver real-world benefits to all its people.</p>","PeriodicalId":37420,"journal":{"name":"IPPR Progressive Review","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/newe.12400","citationCount":"0","resultStr":"{\"title\":\"Beyond ‘AI boosterism’\",\"authors\":\"Karen Yeung\",\"doi\":\"10.1111/newe.12400\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>‘AI boosterism’ has characterised British industrial policy for digital and data-enabled technologies under successive Conservative administrations, intended to ‘turbocharge’ artificial intelligence (AI) sector growth. Although former prime minister, Rishi Sunak, believed that public trust in AI was essential, evident in his initiatives championing AI safety (such as the AI Safety Summit in Bletchley Park in November 2023), Sunak retained an unwavering belief that existing laws, complemented by voluntary cooperation between industry and government, would address AI's threats and harms via technical fixes.</p><p>Such ‘techno-solutionist’ fantasies have hitherto dominated digital sector policy, in which AI is viewed as the key to solving society's most intractable ills. It is rooted in a pernicious fallacy that ‘regulation stifles innovation’ and must be strenuously avoided if the British economy is to thrive. AI boosterism accepts at face value the bold marketing claims of software vendors, naively believing that if an AI system can perform a given function, it will necessarily deliver its promised benefits in real-world settings once implemented.1 It also ignores the already-evident adverse impacts of AI systems, including ever-growing instances of ‘algorithmic injustice’ involving the use of automated systems which have resulted in human rights violations, particularly when used by public authorities to (a) inform (or automate) decisions about whether individuals are entitled to benefits and services or (b) subject them to unwanted investigation or detention on the basis that they have been computationally evaluated as ‘risky’.2 Likewise, it conveniently ignores the systemic adverse impacts of algorithmic systems, including their ecological toll, the deepening concentration of economic power, and the erosion of democracy, as ever-more powerful tools are harnessed to propagate misinformation, exploitation and pervasive surveillance.3 AI sector growth cannot be justified at all costs, and whether bigger implies ‘better’, demands consideration of ‘better for whom?’ and ‘with respect to what norms, goals and collective values?’.</p><p>To deliver on its stated desire to ‘make AI work for everyone’,4 the new Labour government must change tack. It needs to abandon these false narratives and magical thinking and establish a regulatory governance framework that serves the public interest. In this article, I explain what this framework should consist of, beginning by clarifying what regulation is for and why it matters.</p><p>In constructing legal guardrails, the new government must focus on how and why digital systems can produce adverse impacts. 
Algorithmic systems can have capabilities far beyond those imaginable when most of our legal rules and frameworks were established. Legislators must now grapple with their unique risks, whether algorithms take a simple, rule-based form or rely on deep learning techniques, particularly when deployed in ways that have safety-critical or rights-critical consequences: in other words, if they have ‘high-stakes’ implications. This is precisely the focus of the collaborative analysis undertaken by an informal group of UK sector regulators, the Digital Regulation Cooperation Forum (DRCF).6 Academic research demonstrates that the adverse impacts of algorithmic systems arise outside the remit of existing sectoral regulators, are often unintended and many are opaque, particularly those resulting in violations to human rights and/or the corrosion of our social, political and cultural fabric.7 Yet, contrary to the UK's 2023 white paper on AI, expecting the DRCF to provide comprehensive and effective oversight simply by publishing a set of high-level, non-binding ‘motherhood and apple-pie’ principles concerning transparency and fairness, without additional legislative measures, information-gathering powers or additional resources, defies common sense.8</p><p>Establishing a trustworthy basis for algorithmic systems requires a comprehensive oversight regime that supports cross-sectoral coordination to provide a clear, stable framework of legally binding rules, monitored and enforced by an independent, skilled and properly resourced regulator, accountable to parliament and equipped with information gathering and investigative powers to apply those rules in a fair, transparent and effective manner without fear or favour.</p><p>The portrayal of legal regulation as the enemy of innovation, peddled by those in thrall to techno-solutionism, fails to acknowledge that contemporary pharmaceutical regulation, despite many shortcomings, enabled the development and rollout of safe, effective and affordable Covid-19 vaccines at unprecedented speed. It is a powerful, recent demonstration that effective regulatory oversight is legally, institutionally and politically possible in the service of the public good. In earlier decades, snake-oil salesmen were commonplace in the absence of clear legal frameworks and effective independent oversight and enforcement to ensure that new drugs were both safe and efficacious. It was not until the devastation of Thalidomide that the legal and institutional reforms needed to ensure the efficacy and safety of medicines were put in place and taken seriously.</p><p>History indicates that if we wish to facilitate the development of socially beneficial yet powerful new technologies, we must establish a legitimate and effective regulatory framework to protect people and communities from harms and wrongs. A clear-eyed vision of an AI-enabled Britain would begin by recognising that we are only at the beginning of the AI ‘revolution’. Its promise to make our lives ‘better’ remains marketing rhetoric, without robust evidence demonstrating how, and how much, AI and other forms of digital automation actually serve the needs of people in specific contexts and circumstances, and at what cost.</p><p>Hence, our challenge is to <i>learn</i> how to build and collaborate effectively with machines in ways that enhance human flourishing, while taking account of their costs: not merely the direct cost of adoption, but also those less visible but perhaps more serious adverse impacts. 
Instead of AI boosterism and a misguided belief in magic bullets,20 the UK needs a clear and effective regulatory governance framework that establishes and nourishes a trustworthy ecosystem, thereby fostering the development and sensitive implementation of automated systems that deliver real-world benefits to all its people.</p>\",\"PeriodicalId\":37420,\"journal\":{\"name\":\"IPPR Progressive Review\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/newe.12400\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IPPR Progressive Review\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/newe.12400\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IPPR Progressive Review","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/newe.12400","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Social Sciences","Score":null,"Total":0}
‘AI boosterism’ has characterised British industrial policy for digital and data-enabled technologies under successive Conservative administrations, with the aim of ‘turbocharging’ artificial intelligence (AI) sector growth. Although former prime minister Rishi Sunak believed that public trust in AI was essential, as his initiatives championing AI safety attest (such as the AI Safety Summit at Bletchley Park in November 2023), he retained an unwavering belief that existing laws, complemented by voluntary cooperation between industry and government, would address AI's threats and harms via technical fixes.
Such ‘techno-solutionist’ fantasies have hitherto dominated digital sector policy, in which AI is viewed as the key to solving society's most intractable ills. This mindset is rooted in a pernicious fallacy: that ‘regulation stifles innovation’ and must be strenuously avoided if the British economy is to thrive. AI boosterism accepts at face value the bold marketing claims of software vendors, naively believing that if an AI system can perform a given function, it will necessarily deliver its promised benefits in real-world settings once implemented.1 It also ignores the already-evident adverse impacts of AI systems, including ever-growing instances of ‘algorithmic injustice’, in which the use of automated systems has resulted in human rights violations, particularly when used by public authorities to (a) inform (or automate) decisions about whether individuals are entitled to benefits and services or (b) subject them to unwanted investigation or detention on the basis that they have been computationally evaluated as ‘risky’.2 Likewise, it conveniently ignores the systemic adverse impacts of algorithmic systems, including their ecological toll, the deepening concentration of economic power, and the erosion of democracy, as ever-more powerful tools are harnessed to propagate misinformation, exploitation and pervasive surveillance.3 AI sector growth cannot be justified at all costs; whether bigger implies ‘better’ demands consideration of ‘better for whom?’ and ‘with respect to what norms, goals and collective values?’.
To deliver on its stated desire to ‘make AI work for everyone’,4 the new Labour government must change tack. It needs to abandon these false narratives and magical thinking and establish a regulatory governance framework that serves the public interest. In this article, I explain what this framework should consist of, beginning by clarifying what regulation is for and why it matters.
In constructing legal guardrails, the new government must focus on how and why digital systems can produce adverse impacts. Algorithmic systems can have capabilities far beyond those imaginable when most of our legal rules and frameworks were established. Legislators must now grapple with their unique risks, whether algorithms take a simple, rule-based form or rely on deep learning techniques, particularly when deployed in ways that have safety-critical or rights-critical consequences: in other words, when they have ‘high-stakes’ implications. This is precisely the focus of the collaborative analysis undertaken by an informal group of UK sector regulators, the Digital Regulation Cooperation Forum (DRCF).6 Academic research demonstrates that the adverse impacts of algorithmic systems arise outside the remit of existing sectoral regulators, are often unintended, and are in many cases opaque, particularly those resulting in violations of human rights and/or the corrosion of our social, political and cultural fabric.7 Yet, contrary to the assumptions of the UK's 2023 white paper on AI, expecting the DRCF to provide comprehensive and effective oversight simply by publishing a set of high-level, non-binding ‘motherhood and apple-pie’ principles concerning transparency and fairness, without additional legislative measures, information-gathering powers or resources, defies common sense.8
Establishing a trustworthy basis for algorithmic systems requires a comprehensive oversight regime that supports cross-sectoral coordination and provides a clear, stable framework of legally binding rules. Those rules must be monitored and enforced by an independent, skilled and properly resourced regulator, accountable to parliament and equipped with information-gathering and investigative powers, able to apply them in a fair, transparent and effective manner without fear or favour.
The portrayal of legal regulation as the enemy of innovation, peddled by those in thrall to techno-solutionism, fails to acknowledge that contemporary pharmaceutical regulation, despite its many shortcomings, enabled the development and rollout of safe, effective and affordable Covid-19 vaccines at unprecedented speed. That achievement is a powerful, recent demonstration that effective regulatory oversight is legally, institutionally and politically possible in the service of the public good. In earlier decades, snake-oil salesmen were commonplace, in the absence of clear legal frameworks and effective independent oversight and enforcement to ensure that new drugs were both safe and efficacious. It was not until the devastation wrought by thalidomide that the legal and institutional reforms needed to ensure the efficacy and safety of medicines were put in place and taken seriously.
History indicates that if we wish to facilitate the development of socially beneficial yet powerful new technologies, we must establish a legitimate and effective regulatory framework to protect people and communities from harms and wrongs. A clear-eyed vision of an AI-enabled Britain would begin by recognising that we are only at the beginning of the AI ‘revolution’. Its promise to make our lives ‘better’ remains marketing rhetoric, without robust evidence demonstrating how, and how much, AI and other forms of digital automation actually serve the needs of people in specific contexts and circumstances, and at what cost.
Hence, our challenge is to learn how to build and collaborate effectively with machines in ways that enhance human flourishing, while taking account of their costs: not merely the direct cost of adoption, but also those less visible but perhaps more serious adverse impacts. Instead of AI boosterism and a misguided belief in magic bullets,20 the UK needs a clear and effective regulatory governance framework that establishes and nourishes a trustworthy ecosystem, thereby fostering the development and sensitive implementation of automated systems that deliver real-world benefits to all its people.
Journal introduction:
The permafrost of no alternatives has cracked; the horizon of political possibilities is expanding. IPPR Progressive Review is a pluralistic space to debate where next for progressives, examine the opportunities and challenges confronting us and ask the big questions facing our politics: transforming a failed economic model, renewing a frayed social contract, building a new relationship with Europe. Publishing the best writing in economics, politics and culture, IPPR Progressive Review explores how we can best build a more equal, humane and prosperous society.