{"title":"如何规范大型语言模型,实现负责任的人工智能","authors":"J. Berengueres","doi":"10.1109/TTS.2024.3403681","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include: (i) a lack of content or source attribution, and (ii) a lack of transparency in what was used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize a more responsible AI in LLMs. The key finding is that applying safeguards before the training occurs aligns with established engineering practices of addressing issues at the source. However, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"191-197"},"PeriodicalIF":0.0000,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10536000","citationCount":"0","resultStr":"{\"title\":\"How to Regulate Large Language Models for Responsible AI\",\"authors\":\"J. Berengueres\",\"doi\":\"10.1109/TTS.2024.3403681\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include: (i) a lack of content or source attribution, and (ii) a lack of transparency in what was used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize a more responsible AI in LLMs. The key finding is that applying safeguards before the training occurs aligns with established engineering practices of addressing issues at the source. However, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. 
automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.\",\"PeriodicalId\":73324,\"journal\":{\"name\":\"IEEE transactions on technology and society\",\"volume\":\"5 2\",\"pages\":\"191-197\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-03-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10536000\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on technology and society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10536000/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on technology and society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10536000/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include (i) a lack of content or source attribution and (ii) a lack of transparency about the data used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize more responsible AI in LLMs. The key finding is that applying safeguards before training occurs aligns with the established engineering practice of addressing issues at the source. However, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.
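To make the "safeguards at the source" idea concrete, the sketch below shows one way a pre-training safeguard might look in practice: corpus records without a verifiable source or an acceptable license are rejected before they ever reach training. This is a minimal illustrative sketch, not the paper's method; the record fields ("text", "source_url", "license"), the allowed-license set, and the schema are all assumptions introduced here.

```python
# Minimal sketch of a pre-training safeguard: filter out corpus records
# that lack source attribution or license metadata before training begins.
# The record schema and the license policy below are hypothetical.

ALLOWED_LICENSES = {"CC-BY", "CC-BY-SA", "CC0", "public-domain"}  # assumed policy


def passes_source_safeguard(record: dict) -> bool:
    """Return True only if a record carries attribution and a usable license."""
    has_source = bool(record.get("source_url"))
    has_license = record.get("license") in ALLOWED_LICENSES
    return has_source and has_license


def build_training_corpus(records: list[dict]) -> list[dict]:
    """Apply the safeguard at the source: reject records before training."""
    return [r for r in records if passes_source_safeguard(r)]


if __name__ == "__main__":
    corpus = [
        {"text": "...", "source_url": "https://example.org/a", "license": "CC-BY"},
        {"text": "...", "source_url": "", "license": "unknown"},  # rejected
    ]
    print(len(build_training_corpus(corpus)))  # -> 1
```

The design point this illustrates is the one the abstract makes: a rejected record never influences the model, whereas a post-training safeguard (e.g., output filtering) must compensate for data the model has already absorbed.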