Zhengjie Gao, Xuanzi Liu, Yuanshuai Lan, Zheng Yang
{"title":"A Brief Survey on Safety of Large Language Models","authors":"Zhengjie Gao, Xuanzi Liu, Yuanshuai Lan, Zheng Yang","doi":"10.20532/cit.2024.1005778","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in various applications such as machine translation, chatbots, text summarization, and so on. However, the use of LLMs has raised concerns about their potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop more safe and ethical applications of LLMs.","PeriodicalId":38688,"journal":{"name":"Journal of Computing and Information Technology","volume":"23 6","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Computing and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20532/cit.2024.1005778","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Computer Science","Score":null,"Total":0}
引用次数: 0
Abstract
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and have been widely adopted in various applications such as machine translation, chatbots, text summarization, and so on. However, the use of LLMs has raised concerns about their potential safety and security risks. In this survey, we explore the safety implications of LLMs, including ethical considerations, hallucination, and prompt injection. We also discuss current research efforts to mitigate these risks and identify areas for future research. Our survey provides a comprehensive overview of the safety concerns related to LLMs, which can help researchers and practitioners in the NLP community develop more safe and ethical applications of LLMs.
期刊介绍:
CIT. Journal of Computing and Information Technology is an international peer-reviewed journal covering the area of computing and information technology, i.e. computer science, computer engineering, software engineering, information systems, and information technology. CIT endeavors to publish stimulating accounts of original scientific work, primarily including research papers on both theoretical and practical issues, as well as case studies describing the application and critical evaluation of theory. Surveys and state-of-the-art reports will be considered only exceptionally; proposals for such submissions should be sent to the Editorial Board for scrutiny. Specific areas of interest comprise, but are not restricted to, the following topics: theory of computing, design and analysis of algorithms, numerical and symbolic computing, scientific computing, artificial intelligence, image processing, pattern recognition, computer vision, embedded and real-time systems, operating systems, computer networking, Web technologies, distributed systems, human-computer interaction, technology enhanced learning, multimedia, database systems, data mining, machine learning, knowledge engineering, soft computing systems and network security, computational statistics, computational linguistics, and natural language processing. Special attention is paid to educational, social, legal and managerial aspects of computing and information technology. In this respect CIT fosters the exchange of ideas, experience and knowledge between regions with different technological and cultural background, and in particular developed and developing ones.