An agenda for responsible technology policy in Canada

Canadian Public Administration / Administration publique du Canada, 66(3), 439–446. https://doi.org/10.1111/capa.12535
Sam Andrey, Joe Masoodi, Tiffany Kwok, André Côté
{"title":"加拿大负责任的技术政策议程","authors":"Sam Andrey,&nbsp;Joe Masoodi,&nbsp;Tiffany Kwok,&nbsp;André Côté","doi":"10.1111/capa.12535","DOIUrl":null,"url":null,"abstract":"<p>The launch of ChatGPT and the explosion of interest and concern about generative artificial intelligence (AI) has again reinforced that rapid advancements in technology outpace policy and regulatory responses. There is a growing need for scholarship, engagement, and capacity-building around technology governance in Canada to unpack and demystify emerging digital innovations and equip decision-makers in government, industry, academia and civil society to advance responsible, accountable, and trustworthy technology. To help bridge this gap, this note contextualizes today's challenges of governing the design, deployment and use of digital technologies, and describes a new set of secure and responsible technology policy movements and initiatives that can inform and support effective, public interest-oriented technology policymaking in Canada. We conclude by outlining a potential research agenda, with multi-sector mobilization opportunities, to accelerate this critical work.</p><p>The arrival of the internet in 1991 brought hopes for a freer society. Early narratives surrounding the expansion of internet technologies were largely positive, driven by a libertarian spirit highlighting the beneficial features of information technology for democracy, social and economic progress (Korvela, <span>2021</span>). By the mid-2010s, this optimism had dissipated due to several international developments, including the Snowden revelations of widespread state surveillance of citizens, and the Cambridge Analytica scandal resulting from the unwitting exposure of vast troves of personal data collected by social media platforms to influence electoral outcomes.</p><p>Against this backdrop, there have been calls for more policy and regulation in Canada and elsewhere, and several new pieces of legislation aimed at regulating digital technologies and platforms have been introduced.1 Policymakers are also recognizing the need to grow their “tech policy literacy.” For example, two Canadian federal legislators recently launched the Parliamentary Caucus on Emerging Technology, which aims to serve as a cross-party forum linking parliamentarians with experts to fill the knowledge gaps on new and emerging technology (Rempel Garner, <span>2023</span>). Yet, accelerating digital innovation continues to race ahead of the capacity and timeliness of policymakers and public institutions in governing technology. What has emerged is a narrative about government and policymakers lacking understanding and expertise about emerging technology, and therefore incapable of effectively regulating it.</p><p>Former Google CEO Eric Schmidt recently argued against government regulation of AI technologies, noting “there's no way a nonindustry person can understand what's possible” (Hetzner, <span>2023</span>). This comment can be seen as self-serving, reflecting the tech giants' continuing preference for self-regulation. It also reflects a broader view that digital technology is solely the domain of technologists and industry experts—particularly from the science, technology, engineering and math (STEM) disciplines—who are the guardians of society's, or even humanity's, best interests (Allison, <span>2023</span>). 
Not only is this problematic when viewed through the lens of liberal-democratic representation, it also ignores historical precedents.</p><p>Long before the internet and consolidated online platforms, scholars studied the impacts of technology on society (see for e.g., Adorno &amp; Horkheimer, <span>1972</span>; Arendt, <span>1958</span>; Ellul, <span>1964</span>; Heidegger, <span>1977</span>; Marx, <span>1976</span>). Such works paved the way to scholarship viewing technology as a social process influenced by interests and public processes, with human biases embodied in its design (Bijker et al., <span>1989</span>; Caro, <span>1974</span>; Winner, <span>1986</span>). This scholarship has acted as a springboard for contemporary studies questioning the effects of today's data-driven society on rights, freedoms, race, power, equity and democracy. For instance, research has shed light on the black-boxes of today's digital technologies, including by revealing the inherent biases embedded by human developers in AI large language models (LLMs), which produce discriminatory and racist outcomes, harming marginalized and vulnerable groups of populations (Benjamin, <span>2019</span>; Buolamwini &amp; Gebru, <span>2018</span>; Noble, <span>2018</span>). Such works offer new ways of conceptualizing and theorizing current tech policy issues with direct applications to policymaking, such as calls for moratoriums on police use of facial recognition technologies or regulating online platforms (McPhail, <span>2022</span>; Owen, <span>2019</span>).</p><p>The past five years have also seen the emergence and acceleration of secure and responsible technology movements and initiatives within and across Western jurisdictions. Applying various labels—from “tech for good” and “privacy/security by design” to “ethical AI” and “tech stewardship”—they typically share a common aim: to better align the development and deployment of digital technology with values and principles of an open, inclusive, equitable, and democratic society. While offering significant promise for advancing principles-based tech policymaking in Canada and internationally, these efforts remain nascent, uncoordinated and relatively limited in Canada.</p><p>A lengthy list of secure and responsible technology (SRT) initiatives has recently emerged in Canada and internationally. Generally, such initiatives are focused at the intersection of technological development and culture with human values (MIT Technology Review Insights, <span>2023</span>), seeking to ensure the design and deployment of digital technologies align with democratic values and human rights. A recent G7 Communique, for instance, outlined these as including fairness, accountability, transparency, safety, protection from online harassment, and respect for privacy and the protection of personal data (White House, <span>2023</span>). While a powerful statement, it offers little guidance about what forms of tech policy these values should be applied to. We introduce a basic approach for considering these SRT initiatives that aim to influence tech policy, developed for a new Secure and Responsible Tech Policy professional education program offered at Toronto Metropolitan University.</p><p>The concept of “tech policy” itself is not commonly defined or understood. 
As policymaking for today's digital technologies clearly extends well beyond the scope of “public policy” formulated and advanced primarily by governments and public institutions, tech policy can be defined as the public, industry, and civil society policies and initiatives that set the conditions, rules and oversight for the development, use and impact of digital technology in Canada and globally. To help differentiate and organize the various types of tech policies, we developed a framework outlining these on a spectrum (see Figure 1, below). It progresses from ideas and advocacy initiatives meant to influence tech policy, to voluntary commitments and discretionary organizational actions, and finally legally-enforceable requirements.</p><p>An initial scan across this landscape reveals several types of initiatives. <b>Thought-leadership and activism</b> includes the wide-ranging work of nonprofits, academia and civil society seeking to inform, influence or create public accountability for a shift towards SRT in policy and practice. In Canada, there are research and policy institutes like the Centre for Media, Technology and Democracy (McGill University) and the Schwartz Reisman Institute for Technology and Society (University of Toronto), which apply SRT-aligned approaches to research and policy convening activities. Others, such as US-based non-profit All Tech is Human, focus on responsible tech community-building that bridges disciplines, from the technologists in engineering and data science to fields including law, economics and anthropology. Of course, corporate lobbying also plays a significant role in contributing to the development of technology policy, which can raise concerns about undue power and influence exerted on legislators (Beretta, <span>2019</span>).</p><p>The second category of <b>technologist frameworks, toolkits and training</b> aims to raise awareness of secure and responsible tech principles and equips and trains technologists and companies to apply these principles and practices in the development, evaluation, and monitoring of technology. The Tech Stewardship Practice Program, for instance, trains undergraduate students at Canadian universities in engineering and technology-related programs to think critically about the social, ethical, and environmental impacts of their work. Some Canadian universities have introduced programs to teach and train students on responsible tech including, for instance, the Master of Public Policy in Digital Society degree (McMaster University) and the Responsible AI program (Toronto Metropolitan University) as well as the Responsible AI course offered at Concordia University as part of its larger certificate program in AI Proficiency.</p><p><b>Multilateral and cross-sector declarations</b> are formal announcements or statements of shared commitment to a set of principles or actions, developed through a multi-party process that can include governments, businesses or industry groups, civil society organizations, and others. Participation or signing typically represent a moral, corporate or political commitment in the public sphere, but not a legally binding commitment. The Montreal Declaration for a Responsible Development of AI, launched in 2017 through the Université de Montreal, is a Canadian example. 
Another is the Canada Declaration on Electoral Integrity Online—a voluntary code developed by the federal government with online platforms including Facebook, Twitter, Google, TikTok, and LinkedIn—to promote responsible governance of platforms during elections (Government of Canada, <span>2021</span>). The Government of Canada and Canadian organizations are signatories to many international initiatives, such as the recent Paris Call for Trust and Security in Cyberspace, through which major states and companies have signaled a commitment to a set of nine principles.</p><p>Companies and industry sectors can play an important role in the self-governance of tech through the design of their products and policies to self-regulate, though these can also give rise to conflicts between business and societal interests. <b>Corporate policies and product design</b> seek to monitor their own adherence to legal, ethical, or safety standards, rather than being subject to an outside, independent entity or governmental regulator to monitor and enforce those standards. Meta's content policies and moderation activities for the Facebook and Instagram platforms are an obvious and controversial example (Medzini, <span>2022</span>). Apple's App Tracking Transparency policy, allowing users to limit third party tracking of their online activities, is another.</p><p>Typically voluntary processes, <b>industry standards are developed for products, practices, or operations, while certifications</b> are developed for individuals, organizations or products who choose to abide by a set of occupational, industry or other technical requirements, established by a reputable organization through expert-led processes. While there are many initiatives by tech skills certification and standards-setting organizations, the Digital Governance Council, a national forum bringing together the country's CIOs and executive technology leaders, is leading an important initiative to set technical industry standards for digital tech in Canada in 14 areas such as data governance, cyber security, AI and digital credentials. Certifications like the Responsible AI Institute (RAII) Certification qualify AI systems and support practitioners as they navigate the development of responsible AI.</p><p>Progressing to more legally-enforceable categories, <b>public directives</b> provide guidance or set rules for public, private or civil society actors that are not created by a legislative body or enshrined in law. For example, the Government of Canada has introduced a Directive on Automated Decision Making to guide use of AI in government institutions. These incorporate principles like transparency and accountability and outline policy, ethical, and legal considerations (Bitar et al., <span>2022</span>). As with the investigation of OpenAI by Canada's information and privacy commissioners, quasi-judicial public agencies or independent officers of Parliament also play a growing role as <b>oversight bodies</b> in the digital economy in areas like digital privacy (Office of the Privacy Commissioner of Canada, <span>2023</span>). Some of these bodies, however, lack powers to hold organizations to account including legal authority to proactively audit, investigate, and make orders (e.g., Information and Privacy Commissioner of Ontario, <span>2021</span>).</p><p>The final category is <b>government laws and regulations</b>. 
This includes legislative initiatives directly aimed at enhanced technology governance, like Canada's proposed Bill C-27, which includes the <i>AI and Data Act</i> (AIDA) (Parliament of Canada, <span>2022</span>), or its proposed bill for addressing online harms on social platforms (Government of Canada, <span>2023</span>). Emerging examples of policies being informed and influenced by SRT movements focus on user protection (Green, <span>2021</span>; Standards Council of Canada, <span>2021</span>). Arnaldi et al. (<span>2015</span>) note that the influence of responsible tech has grown, increasingly reflected in government policy and strategic documents. For example, the direct contribution of the Ethics Guidelines for Trustworthy AI process led to the formation of the EU's Artificial Intelligence Act (Stix, <span>2021</span>).</p><p>Some question the effectiveness of these SRT movements and initiatives in producing real, substantive change in policies or corporate product design and practice. The movement's inclusion of a broad range of stakeholders, including governments and global technology firms, is seen as a positive outcome (World Economic Forum, <span>2019</span>), but others criticize their commitments as performative and not reflecting genuine commitment to SRT principles (or “ethics washing”) (Green, <span>2021</span>). Others have pointed to the lack of clarity in some guidelines leading to different interpretations. Although many responsible tech documents reflect similar principles like transparency, justice, and fairness, it can be unclear how organizations operationalize such principles (Mittelstadt, <span>2019</span>; Stix, <span>2021</span>). There are also ongoing debates on definitions, like “ethical AI,” which further contributes to uncertainty in application.</p><p>Closing the gap between digital innovation and technology governance with a responsible technology ethos urgently requires equipping actors in government, industry and civil society with the knowledge and skills to effectively shape technology policies in their various forms. This calls for a scholarship agenda focused on developing further research insights about SRT and how such movements can effectively influence changes in tech policy and practice. It also requires efforts to mobilize actors across key tech policy communities in Canada—governments and regulators, tech firms and industry, academic researchers and civil society, and citizens at-large—to grow knowledge, capacity and common agendas for secure and responsible technology governance.</p><p>Although some important efforts are belatedly underway, by Canadian governments, industry actors and civil society organizations, to establish guardrails for disruptive technologies like social media platforms or cryptocurrencies, there remain significant challenges in the Canadian technology policy landscape. Chief among these concerns is how Canadian businesses can on the one hand contribute to equitable and sustainable economic growth, and on the other, be organized around principles of responsible tech. 
It will only be through concerted efforts to grow tech policymaking capacity in Canada, grounded in evidence-based research and guided by shared democratic values, that we will effectively govern our increasingly digital society.</p>","PeriodicalId":46145,"journal":{"name":"Canadian Public Administration-Administration Publique Du Canada","volume":"66 3","pages":"439-446"},"PeriodicalIF":1.1000,"publicationDate":"2023-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/capa.12535","citationCount":"0","resultStr":"{\"title\":\"An agenda for responsible technology policy in Canada\",\"authors\":\"Sam Andrey,&nbsp;Joe Masoodi,&nbsp;Tiffany Kwok,&nbsp;André Côté\",\"doi\":\"10.1111/capa.12535\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The launch of ChatGPT and the explosion of interest and concern about generative artificial intelligence (AI) has again reinforced that rapid advancements in technology outpace policy and regulatory responses. There is a growing need for scholarship, engagement, and capacity-building around technology governance in Canada to unpack and demystify emerging digital innovations and equip decision-makers in government, industry, academia and civil society to advance responsible, accountable, and trustworthy technology. To help bridge this gap, this note contextualizes today's challenges of governing the design, deployment and use of digital technologies, and describes a new set of secure and responsible technology policy movements and initiatives that can inform and support effective, public interest-oriented technology policymaking in Canada. We conclude by outlining a potential research agenda, with multi-sector mobilization opportunities, to accelerate this critical work.</p><p>The arrival of the internet in 1991 brought hopes for a freer society. Early narratives surrounding the expansion of internet technologies were largely positive, driven by a libertarian spirit highlighting the beneficial features of information technology for democracy, social and economic progress (Korvela, <span>2021</span>). By the mid-2010s, this optimism had dissipated due to several international developments, including the Snowden revelations of widespread state surveillance of citizens, and the Cambridge Analytica scandal resulting from the unwitting exposure of vast troves of personal data collected by social media platforms to influence electoral outcomes.</p><p>Against this backdrop, there have been calls for more policy and regulation in Canada and elsewhere, and several new pieces of legislation aimed at regulating digital technologies and platforms have been introduced.1 Policymakers are also recognizing the need to grow their “tech policy literacy.” For example, two Canadian federal legislators recently launched the Parliamentary Caucus on Emerging Technology, which aims to serve as a cross-party forum linking parliamentarians with experts to fill the knowledge gaps on new and emerging technology (Rempel Garner, <span>2023</span>). Yet, accelerating digital innovation continues to race ahead of the capacity and timeliness of policymakers and public institutions in governing technology. 
What has emerged is a narrative about government and policymakers lacking understanding and expertise about emerging technology, and therefore incapable of effectively regulating it.</p><p>Former Google CEO Eric Schmidt recently argued against government regulation of AI technologies, noting “there's no way a nonindustry person can understand what's possible” (Hetzner, <span>2023</span>). This comment can be seen as self-serving, reflecting the tech giants' continuing preference for self-regulation. It also reflects a broader view that digital technology is solely the domain of technologists and industry experts—particularly from the science, technology, engineering and math (STEM) disciplines—who are the guardians of society's, or even humanity's, best interests (Allison, <span>2023</span>). Not only is this problematic when viewed through the lens of liberal-democratic representation, it also ignores historical precedents.</p><p>Long before the internet and consolidated online platforms, scholars studied the impacts of technology on society (see for e.g., Adorno &amp; Horkheimer, <span>1972</span>; Arendt, <span>1958</span>; Ellul, <span>1964</span>; Heidegger, <span>1977</span>; Marx, <span>1976</span>). Such works paved the way to scholarship viewing technology as a social process influenced by interests and public processes, with human biases embodied in its design (Bijker et al., <span>1989</span>; Caro, <span>1974</span>; Winner, <span>1986</span>). This scholarship has acted as a springboard for contemporary studies questioning the effects of today's data-driven society on rights, freedoms, race, power, equity and democracy. For instance, research has shed light on the black-boxes of today's digital technologies, including by revealing the inherent biases embedded by human developers in AI large language models (LLMs), which produce discriminatory and racist outcomes, harming marginalized and vulnerable groups of populations (Benjamin, <span>2019</span>; Buolamwini &amp; Gebru, <span>2018</span>; Noble, <span>2018</span>). Such works offer new ways of conceptualizing and theorizing current tech policy issues with direct applications to policymaking, such as calls for moratoriums on police use of facial recognition technologies or regulating online platforms (McPhail, <span>2022</span>; Owen, <span>2019</span>).</p><p>The past five years have also seen the emergence and acceleration of secure and responsible technology movements and initiatives within and across Western jurisdictions. Applying various labels—from “tech for good” and “privacy/security by design” to “ethical AI” and “tech stewardship”—they typically share a common aim: to better align the development and deployment of digital technology with values and principles of an open, inclusive, equitable, and democratic society. While offering significant promise for advancing principles-based tech policymaking in Canada and internationally, these efforts remain nascent, uncoordinated and relatively limited in Canada.</p><p>A lengthy list of secure and responsible technology (SRT) initiatives has recently emerged in Canada and internationally. Generally, such initiatives are focused at the intersection of technological development and culture with human values (MIT Technology Review Insights, <span>2023</span>), seeking to ensure the design and deployment of digital technologies align with democratic values and human rights. 
A recent G7 Communique, for instance, outlined these as including fairness, accountability, transparency, safety, protection from online harassment, and respect for privacy and the protection of personal data (White House, <span>2023</span>). While a powerful statement, it offers little guidance about what forms of tech policy these values should be applied to. We introduce a basic approach for considering these SRT initiatives that aim to influence tech policy, developed for a new Secure and Responsible Tech Policy professional education program offered at Toronto Metropolitan University.</p><p>The concept of “tech policy” itself is not commonly defined or understood. As policymaking for today's digital technologies clearly extends well beyond the scope of “public policy” formulated and advanced primarily by governments and public institutions, tech policy can be defined as the public, industry, and civil society policies and initiatives that set the conditions, rules and oversight for the development, use and impact of digital technology in Canada and globally. To help differentiate and organize the various types of tech policies, we developed a framework outlining these on a spectrum (see Figure 1, below). It progresses from ideas and advocacy initiatives meant to influence tech policy, to voluntary commitments and discretionary organizational actions, and finally legally-enforceable requirements.</p><p>An initial scan across this landscape reveals several types of initiatives. <b>Thought-leadership and activism</b> includes the wide-ranging work of nonprofits, academia and civil society seeking to inform, influence or create public accountability for a shift towards SRT in policy and practice. In Canada, there are research and policy institutes like the Centre for Media, Technology and Democracy (McGill University) and the Schwartz Reisman Institute for Technology and Society (University of Toronto), which apply SRT-aligned approaches to research and policy convening activities. Others, such as US-based non-profit All Tech is Human, focus on responsible tech community-building that bridges disciplines, from the technologists in engineering and data science to fields including law, economics and anthropology. Of course, corporate lobbying also plays a significant role in contributing to the development of technology policy, which can raise concerns about undue power and influence exerted on legislators (Beretta, <span>2019</span>).</p><p>The second category of <b>technologist frameworks, toolkits and training</b> aims to raise awareness of secure and responsible tech principles and equips and trains technologists and companies to apply these principles and practices in the development, evaluation, and monitoring of technology. The Tech Stewardship Practice Program, for instance, trains undergraduate students at Canadian universities in engineering and technology-related programs to think critically about the social, ethical, and environmental impacts of their work. 
Some Canadian universities have introduced programs to teach and train students on responsible tech including, for instance, the Master of Public Policy in Digital Society degree (McMaster University) and the Responsible AI program (Toronto Metropolitan University) as well as the Responsible AI course offered at Concordia University as part of its larger certificate program in AI Proficiency.</p><p><b>Multilateral and cross-sector declarations</b> are formal announcements or statements of shared commitment to a set of principles or actions, developed through a multi-party process that can include governments, businesses or industry groups, civil society organizations, and others. Participation or signing typically represent a moral, corporate or political commitment in the public sphere, but not a legally binding commitment. The Montreal Declaration for a Responsible Development of AI, launched in 2017 through the Université de Montreal, is a Canadian example. Another is the Canada Declaration on Electoral Integrity Online—a voluntary code developed by the federal government with online platforms including Facebook, Twitter, Google, TikTok, and LinkedIn—to promote responsible governance of platforms during elections (Government of Canada, <span>2021</span>). The Government of Canada and Canadian organizations are signatories to many international initiatives, such as the recent Paris Call for Trust and Security in Cyberspace, through which major states and companies have signaled a commitment to a set of nine principles.</p><p>Companies and industry sectors can play an important role in the self-governance of tech through the design of their products and policies to self-regulate, though these can also give rise to conflicts between business and societal interests. <b>Corporate policies and product design</b> seek to monitor their own adherence to legal, ethical, or safety standards, rather than being subject to an outside, independent entity or governmental regulator to monitor and enforce those standards. Meta's content policies and moderation activities for the Facebook and Instagram platforms are an obvious and controversial example (Medzini, <span>2022</span>). Apple's App Tracking Transparency policy, allowing users to limit third party tracking of their online activities, is another.</p><p>Typically voluntary processes, <b>industry standards are developed for products, practices, or operations, while certifications</b> are developed for individuals, organizations or products who choose to abide by a set of occupational, industry or other technical requirements, established by a reputable organization through expert-led processes. While there are many initiatives by tech skills certification and standards-setting organizations, the Digital Governance Council, a national forum bringing together the country's CIOs and executive technology leaders, is leading an important initiative to set technical industry standards for digital tech in Canada in 14 areas such as data governance, cyber security, AI and digital credentials. Certifications like the Responsible AI Institute (RAII) Certification qualify AI systems and support practitioners as they navigate the development of responsible AI.</p><p>Progressing to more legally-enforceable categories, <b>public directives</b> provide guidance or set rules for public, private or civil society actors that are not created by a legislative body or enshrined in law. 
For example, the Government of Canada has introduced a Directive on Automated Decision Making to guide use of AI in government institutions. These incorporate principles like transparency and accountability and outline policy, ethical, and legal considerations (Bitar et al., <span>2022</span>). As with the investigation of OpenAI by Canada's information and privacy commissioners, quasi-judicial public agencies or independent officers of Parliament also play a growing role as <b>oversight bodies</b> in the digital economy in areas like digital privacy (Office of the Privacy Commissioner of Canada, <span>2023</span>). Some of these bodies, however, lack powers to hold organizations to account including legal authority to proactively audit, investigate, and make orders (e.g., Information and Privacy Commissioner of Ontario, <span>2021</span>).</p><p>The final category is <b>government laws and regulations</b>. This includes legislative initiatives directly aimed at enhanced technology governance, like Canada's proposed Bill C-27, which includes the <i>AI and Data Act</i> (AIDA) (Parliament of Canada, <span>2022</span>), or its proposed bill for addressing online harms on social platforms (Government of Canada, <span>2023</span>). Emerging examples of policies being informed and influenced by SRT movements focus on user protection (Green, <span>2021</span>; Standards Council of Canada, <span>2021</span>). Arnaldi et al. (<span>2015</span>) note that the influence of responsible tech has grown, increasingly reflected in government policy and strategic documents. For example, the direct contribution of the Ethics Guidelines for Trustworthy AI process led to the formation of the EU's Artificial Intelligence Act (Stix, <span>2021</span>).</p><p>Some question the effectiveness of these SRT movements and initiatives in producing real, substantive change in policies or corporate product design and practice. The movement's inclusion of a broad range of stakeholders, including governments and global technology firms, is seen as a positive outcome (World Economic Forum, <span>2019</span>), but others criticize their commitments as performative and not reflecting genuine commitment to SRT principles (or “ethics washing”) (Green, <span>2021</span>). Others have pointed to the lack of clarity in some guidelines leading to different interpretations. Although many responsible tech documents reflect similar principles like transparency, justice, and fairness, it can be unclear how organizations operationalize such principles (Mittelstadt, <span>2019</span>; Stix, <span>2021</span>). There are also ongoing debates on definitions, like “ethical AI,” which further contributes to uncertainty in application.</p><p>Closing the gap between digital innovation and technology governance with a responsible technology ethos urgently requires equipping actors in government, industry and civil society with the knowledge and skills to effectively shape technology policies in their various forms. This calls for a scholarship agenda focused on developing further research insights about SRT and how such movements can effectively influence changes in tech policy and practice. 
It also requires efforts to mobilize actors across key tech policy communities in Canada—governments and regulators, tech firms and industry, academic researchers and civil society, and citizens at-large—to grow knowledge, capacity and common agendas for secure and responsible technology governance.</p><p>Although some important efforts are belatedly underway, by Canadian governments, industry actors and civil society organizations, to establish guardrails for disruptive technologies like social media platforms or cryptocurrencies, there remain significant challenges in the Canadian technology policy landscape. Chief among these concerns is how Canadian businesses can on the one hand contribute to equitable and sustainable economic growth, and on the other, be organized around principles of responsible tech. It will only be through concerted efforts to grow tech policymaking capacity in Canada, grounded in evidence-based research and guided by shared democratic values, that we will effectively govern our increasingly digital society.</p>\",\"PeriodicalId\":46145,\"journal\":{\"name\":\"Canadian Public Administration-Administration Publique Du Canada\",\"volume\":\"66 3\",\"pages\":\"439-446\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2023-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/capa.12535\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Canadian Public Administration-Administration Publique Du Canada\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/capa.12535\",\"RegionNum\":4,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"PUBLIC ADMINISTRATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Canadian Public Administration-Administration Publique Du Canada","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/capa.12535","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PUBLIC ADMINISTRATION","Score":null,"Total":0}

Abstract


The launch of ChatGPT and the explosion of interest and concern about generative artificial intelligence (AI) have again reinforced that rapid advancements in technology outpace policy and regulatory responses. There is a growing need for scholarship, engagement, and capacity-building around technology governance in Canada to unpack and demystify emerging digital innovations and equip decision-makers in government, industry, academia and civil society to advance responsible, accountable, and trustworthy technology. To help bridge this gap, this note contextualizes today's challenges of governing the design, deployment and use of digital technologies, and describes a new set of secure and responsible technology policy movements and initiatives that can inform and support effective, public interest-oriented technology policymaking in Canada. We conclude by outlining a potential research agenda, with multi-sector mobilization opportunities, to accelerate this critical work.

The arrival of the internet in 1991 brought hopes for a freer society. Early narratives surrounding the expansion of internet technologies were largely positive, driven by a libertarian spirit highlighting the beneficial features of information technology for democracy and for social and economic progress (Korvela, 2021). By the mid-2010s, this optimism had dissipated due to several international developments, including the Snowden revelations of widespread state surveillance of citizens, and the Cambridge Analytica scandal, in which vast troves of personal data collected by social media platforms from unwitting users were exposed and used to influence electoral outcomes.

Against this backdrop, there have been calls for more policy and regulation in Canada and elsewhere, and several new pieces of legislation aimed at regulating digital technologies and platforms have been introduced. Policymakers are also recognizing the need to grow their "tech policy literacy." For example, two Canadian federal legislators recently launched the Parliamentary Caucus on Emerging Technology, which aims to serve as a cross-party forum linking parliamentarians with experts to fill the knowledge gaps on new and emerging technology (Rempel Garner, 2023). Yet, accelerating digital innovation continues to race ahead of the capacity and timeliness of policymakers and public institutions in governing technology. What has emerged is a narrative that government and policymakers lack understanding of and expertise in emerging technology, and are therefore incapable of effectively regulating it.

Former Google CEO Eric Schmidt recently argued against government regulation of AI technologies, noting “there's no way a nonindustry person can understand what's possible” (Hetzner, 2023). This comment can be seen as self-serving, reflecting the tech giants' continuing preference for self-regulation. It also reflects a broader view that digital technology is solely the domain of technologists and industry experts—particularly from the science, technology, engineering and math (STEM) disciplines—who are the guardians of society's, or even humanity's, best interests (Allison, 2023). Not only is this problematic when viewed through the lens of liberal-democratic representation, it also ignores historical precedents.

Long before the internet and consolidated online platforms, scholars studied the impacts of technology on society (see, e.g., Adorno & Horkheimer, 1972; Arendt, 1958; Ellul, 1964; Heidegger, 1977; Marx, 1976). Such works paved the way for scholarship viewing technology as a social process influenced by interests and public processes, with human biases embodied in its design (Bijker et al., 1989; Caro, 1974; Winner, 1986). This scholarship has acted as a springboard for contemporary studies questioning the effects of today's data-driven society on rights, freedoms, race, power, equity and democracy. For instance, research has shed light on the black boxes of today's digital technologies, including by revealing the inherent biases embedded by human developers in AI large language models (LLMs), which produce discriminatory and racist outcomes that harm marginalized and vulnerable populations (Benjamin, 2019; Buolamwini & Gebru, 2018; Noble, 2018). Such works offer new ways of conceptualizing and theorizing current tech policy issues, with direct applications to policymaking such as calls for moratoriums on police use of facial recognition technologies or for regulating online platforms (McPhail, 2022; Owen, 2019).

The past five years have also seen the emergence and acceleration of secure and responsible technology movements and initiatives within and across Western jurisdictions. Applying various labels—from “tech for good” and “privacy/security by design” to “ethical AI” and “tech stewardship”—they typically share a common aim: to better align the development and deployment of digital technology with values and principles of an open, inclusive, equitable, and democratic society. While offering significant promise for advancing principles-based tech policymaking in Canada and internationally, these efforts remain nascent, uncoordinated and relatively limited in Canada.

A lengthy list of secure and responsible technology (SRT) initiatives has recently emerged in Canada and internationally. Generally, such initiatives focus on the intersection of technological development and culture with human values (MIT Technology Review Insights, 2023), seeking to ensure the design and deployment of digital technologies align with democratic values and human rights. A recent G7 Communiqué, for instance, outlined these values as including fairness, accountability, transparency, safety, protection from online harassment, and respect for privacy and the protection of personal data (White House, 2023). While powerful, the statement offers little guidance about the forms of tech policy to which these values should be applied. We introduce a basic approach for considering these SRT initiatives that aim to influence tech policy, developed for a new Secure and Responsible Tech Policy professional education program offered at Toronto Metropolitan University.

The concept of "tech policy" itself is not commonly defined or understood. As policymaking for today's digital technologies clearly extends well beyond the scope of "public policy" formulated and advanced primarily by governments and public institutions, tech policy can be defined as the public, industry, and civil society policies and initiatives that set the conditions, rules and oversight for the development, use and impact of digital technology in Canada and globally. To help differentiate and organize the various types of tech policies, we developed a framework outlining these on a spectrum (see Figure 1, below). It progresses from ideas and advocacy initiatives meant to influence tech policy, to voluntary commitments and discretionary organizational actions, and finally to legally enforceable requirements.
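The spectrum can also be read as a simple ordered taxonomy of policy instruments, ranked by legal enforceability. The sketch below is purely illustrative and not part of the original framework: the enum name, labels and comments are our own shorthand for the categories discussed in the sections that follow.

```python
from enum import IntEnum

class TechPolicyInstrument(IntEnum):
    """Illustrative encoding of the Figure 1 spectrum; higher values
    indicate greater legal enforceability (labels are our shorthand)."""
    THOUGHT_LEADERSHIP_AND_ACTIVISM = 1        # ideas and advocacy
    TECHNOLOGIST_FRAMEWORKS_AND_TRAINING = 2   # toolkits and education
    MULTILATERAL_DECLARATIONS = 3              # voluntary, non-binding commitments
    CORPORATE_POLICIES_AND_PRODUCT_DESIGN = 4  # self-regulation
    INDUSTRY_STANDARDS_AND_CERTIFICATIONS = 5  # voluntary, expert-led requirements
    PUBLIC_DIRECTIVES_AND_OVERSIGHT = 6        # rules not enshrined in law
    GOVERNMENT_LAWS_AND_REGULATIONS = 7        # legally enforceable requirements

# IntEnum comparisons mirror the spectrum's progression:
assert (TechPolicyInstrument.MULTILATERAL_DECLARATIONS
        < TechPolicyInstrument.GOVERNMENT_LAWS_AND_REGULATIONS)
```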

An initial scan across this landscape reveals several types of initiatives. Thought-leadership and activism includes the wide-ranging work of nonprofits, academia and civil society seeking to inform, influence or create public accountability for a shift towards SRT in policy and practice. In Canada, there are research and policy institutes like the Centre for Media, Technology and Democracy (McGill University) and the Schwartz Reisman Institute for Technology and Society (University of Toronto), which apply SRT-aligned approaches to research and policy convening activities. Others, such as US-based non-profit All Tech is Human, focus on responsible tech community-building that bridges disciplines, from the technologists in engineering and data science to fields including law, economics and anthropology. Of course, corporate lobbying also plays a significant role in contributing to the development of technology policy, which can raise concerns about undue power and influence exerted on legislators (Beretta, 2019).

The second category, technologist frameworks, toolkits and training, aims to raise awareness of secure and responsible tech principles and to equip and train technologists and companies to apply these principles and practices in the development, evaluation, and monitoring of technology. The Tech Stewardship Practice Program, for instance, trains undergraduate students at Canadian universities in engineering and technology-related programs to think critically about the social, ethical, and environmental impacts of their work. Some Canadian universities have introduced programs to teach and train students on responsible tech, including the Master of Public Policy in Digital Society degree (McMaster University), the Responsible AI program (Toronto Metropolitan University), and the Responsible AI course offered at Concordia University as part of its larger certificate program in AI Proficiency.

Multilateral and cross-sector declarations are formal announcements or statements of shared commitment to a set of principles or actions, developed through a multi-party process that can include governments, businesses or industry groups, civil society organizations, and others. Participation or signing typically represents a moral, corporate or political commitment in the public sphere, but not a legally binding one. The Montreal Declaration for a Responsible Development of AI, launched in 2017 through the Université de Montréal, is a Canadian example. Another is the Canada Declaration on Electoral Integrity Online—a voluntary code developed by the federal government with online platforms including Facebook, Twitter, Google, TikTok, and LinkedIn—to promote responsible governance of platforms during elections (Government of Canada, 2021). The Government of Canada and Canadian organizations are signatories to many international initiatives, such as the recent Paris Call for Trust and Security in Cyberspace, through which major states and companies have signaled a commitment to a set of nine principles.

Companies and industry sectors can play an important role in the self-governance of tech through the design of their products and policies to self-regulate, though these can also give rise to conflicts between business and societal interests. Through corporate policies and product design, companies monitor their own adherence to legal, ethical, or safety standards, rather than being subject to an outside, independent entity or governmental regulator that monitors and enforces those standards. Meta's content policies and moderation activities for the Facebook and Instagram platforms are an obvious and controversial example (Medzini, 2022). Apple's App Tracking Transparency policy, which allows users to limit third-party tracking of their online activities, is another.

Both are typically voluntary processes: industry standards are developed for products, practices, or operations, while certifications are developed for individuals, organizations or products that choose to abide by a set of occupational, industry or other technical requirements established by a reputable organization through expert-led processes. While there are many initiatives by tech skills certification and standards-setting organizations, the Digital Governance Council, a national forum bringing together the country's CIOs and executive technology leaders, is leading an important initiative to set technical industry standards for digital tech in Canada in 14 areas such as data governance, cyber security, AI and digital credentials. Certifications like the Responsible AI Institute (RAII) Certification qualify AI systems and support practitioners as they navigate the development of responsible AI.

Progressing to more legally enforceable categories, public directives, which are not created by a legislative body or enshrined in law, provide guidance or set rules for public, private or civil society actors. For example, the Government of Canada has introduced a Directive on Automated Decision-Making to guide the use of AI in government institutions. The directive incorporates principles like transparency and accountability and outlines policy, ethical, and legal considerations (Bitar et al., 2022). As with the investigation of OpenAI by Canada's information and privacy commissioners, quasi-judicial public agencies or independent officers of Parliament also play a growing role as oversight bodies in the digital economy in areas like digital privacy (Office of the Privacy Commissioner of Canada, 2023). Some of these bodies, however, lack the powers to hold organizations to account, including the legal authority to proactively audit, investigate, and make orders (e.g., Information and Privacy Commissioner of Ontario, 2021).

The final category is government laws and regulations. This includes legislative initiatives directly aimed at enhanced technology governance, like Canada's proposed Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA) (Parliament of Canada, 2022), or its proposed bill for addressing online harms on social platforms (Government of Canada, 2023). Emerging examples of policies informed and influenced by SRT movements focus on user protection (Green, 2021; Standards Council of Canada, 2021). Arnaldi et al. (2015) note that the influence of responsible tech has grown and is increasingly reflected in government policy and strategic documents. For example, the Ethics Guidelines for Trustworthy AI process contributed directly to the formation of the EU's Artificial Intelligence Act (Stix, 2021).

Some question the effectiveness of these SRT movements and initiatives in producing real, substantive change in policies or in corporate product design and practice. The movement's inclusion of a broad range of stakeholders, including governments and global technology firms, is seen as a positive outcome (World Economic Forum, 2019), but others criticize these commitments as performative "ethics washing" that does not reflect genuine commitment to SRT principles (Green, 2021). Others have pointed to a lack of clarity in some guidelines, which leads to divergent interpretations. Although many responsible tech documents reflect similar principles like transparency, justice, and fairness, it can be unclear how organizations operationalize such principles (Mittelstadt, 2019; Stix, 2021). There are also ongoing debates over definitions, like "ethical AI," which further contribute to uncertainty in application.

Closing the gap between digital innovation and technology governance with a responsible technology ethos urgently requires equipping actors in government, industry and civil society with the knowledge and skills to effectively shape technology policies in their various forms. This calls for a scholarship agenda focused on developing further research insights about SRT and how such movements can effectively influence changes in tech policy and practice. It also requires efforts to mobilize actors across key tech policy communities in Canada—governments and regulators, tech firms and industry, academic researchers and civil society, and citizens at-large—to grow knowledge, capacity and common agendas for secure and responsible technology governance.

Although some important efforts by Canadian governments, industry actors and civil society organizations are belatedly underway to establish guardrails for disruptive technologies like social media platforms and cryptocurrencies, significant challenges remain in the Canadian technology policy landscape. Chief among these is how Canadian businesses can, on the one hand, contribute to equitable and sustainable economic growth and, on the other, be organized around principles of responsible tech. It will only be through concerted efforts to grow tech policymaking capacity in Canada, grounded in evidence-based research and guided by shared democratic values, that we will effectively govern our increasingly digital society.
