{"title":"生成式人工智能的隐私和个人数据风险治理:中国视角","authors":"Xiongbiao Ye , Yuhong Yan , Jia Li , Bo Jiang","doi":"10.1016/j.telpol.2024.102851","DOIUrl":null,"url":null,"abstract":"<div><div>The rapid development of generative artificial intelligence (AI) has attracted global attention and posed challenges to existing data governance frameworks. The increased technical complexity and expanded scale of data usage not only make it more difficult to regulate AI but also present challenges for the current legal system. This article, which takes ChatGPT's training data and working principles as a starting point, examines specific privacy risks, data leakage risks, and personal data risks posed by generative AI. It also analyzes the latest practices in privacy and personal data protection in China. This article finds that while China's governance on privacy and personal data protection takes a macro-micro integration approach and a private-and-public law integration approach, there are shortcomings in the legal system. Given that the current personal data protection system centered on individual control is unsuitable for the modes of data processing by generative AI, and that private law is insufficient in safeguarding data privacy, urgent institutional innovation is needed to achieve the objective of “trustworthy AI.”</div></div>","PeriodicalId":22290,"journal":{"name":"Telecommunications Policy","volume":"48 10","pages":"Article 102851"},"PeriodicalIF":5.9000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Privacy and personal data risk governance for generative artificial intelligence: A Chinese perspective\",\"authors\":\"Xiongbiao Ye , Yuhong Yan , Jia Li , Bo Jiang\",\"doi\":\"10.1016/j.telpol.2024.102851\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The rapid development of generative artificial intelligence (AI) has attracted global attention and posed challenges to existing data governance frameworks. The increased technical complexity and expanded scale of data usage not only make it more difficult to regulate AI but also present challenges for the current legal system. This article, which takes ChatGPT's training data and working principles as a starting point, examines specific privacy risks, data leakage risks, and personal data risks posed by generative AI. It also analyzes the latest practices in privacy and personal data protection in China. This article finds that while China's governance on privacy and personal data protection takes a macro-micro integration approach and a private-and-public law integration approach, there are shortcomings in the legal system. 
Given that the current personal data protection system centered on individual control is unsuitable for the modes of data processing by generative AI, and that private law is insufficient in safeguarding data privacy, urgent institutional innovation is needed to achieve the objective of “trustworthy AI.”</div></div>\",\"PeriodicalId\":22290,\"journal\":{\"name\":\"Telecommunications Policy\",\"volume\":\"48 10\",\"pages\":\"Article 102851\"},\"PeriodicalIF\":5.9000,\"publicationDate\":\"2024-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Telecommunications Policy\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0308596124001484\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Telecommunications Policy","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0308596124001484","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Privacy and personal data risk governance for generative artificial intelligence: A Chinese perspective
The rapid development of generative artificial intelligence (AI) has attracted global attention and posed challenges to existing data governance frameworks. The increased technical complexity and expanded scale of data usage not only make it more difficult to regulate AI but also challenge the current legal system. Taking ChatGPT's training data and working principles as a starting point, this article examines the specific privacy risks, data leakage risks, and personal data risks posed by generative AI. It also analyzes the latest practices in privacy and personal data protection in China. The article finds that although China's governance of privacy and personal data protection integrates macro and micro perspectives as well as private and public law, shortcomings remain in the legal system. Given that the current personal data protection system, centered on individual control, is unsuitable for the modes of data processing used by generative AI, and that private law is insufficient to safeguard data privacy, urgent institutional innovation is needed to achieve the objective of "trustworthy AI."
Journal description:
Telecommunications Policy is concerned with the impact of digitalization on the economy and society. The journal is multidisciplinary, encompassing conceptual, theoretical, and empirical studies, both quantitative and qualitative. Its scope includes policy, regulation, and governance; big data, artificial intelligence, and data science; new and traditional sectors, including new media and the platform economy; and management, entrepreneurship, innovation, and use. Contributions may explore these topics at national, regional, and international levels, including issues confronting both developed and developing countries. Papers accepted by the journal meet high standards of analytical rigor and policy relevance.