Maren März, Monika Himmelbauer, Kevin Boldt, Alexander Oksche
{"title":"考试和论文中生成人工智能和大型语言模型的法律问题。","authors":"Maren März, Monika Himmelbauer, Kevin Boldt, Alexander Oksche","doi":"10.3205/zma001702","DOIUrl":null,"url":null,"abstract":"<p><p>The high performance of generative artificial intelligence (AI) and large language models (LLM) in examination contexts has triggered an intense debate about their applications, effects and risks. What legal aspects need to be considered when using LLM in teaching and assessment? What possibilities do language models offer? Statutes and laws are used to assess the use of LLM: - University statutes, state higher education laws, licensing regulations for doctors - Copyright Act (UrhG) - General Data Protection Regulation (DGPR) - AI Regulation (EU AI Act) LLM and AI offer opportunities but require clear university frameworks. These should define legitimate uses and areas where use is prohibited. Cheating and plagiarism violate good scientific practice and copyright laws. Cheating is difficult to detect. Plagiarism by AI is possible. Users of the products are responsible. LLM are effective tools for generating exam questions. Nevertheless, careful review is necessary as even apparently high-quality products may contain errors. However, the risk of copyright infringement with AI-generated exam questions is low, as copyright law allows up to 15% of protected works to be used for teaching and exams. The grading of exam content is subject to higher education laws and regulations and the GDPR. Exclusively computer-based assessment without human review is not permitted. For high-risk applications in education, the EU's AI Regulation will apply in the future. When dealing with LLM in assessments, evaluation criteria for existing assessments can be adapted, as can assessment programmes, e.g. to reduce the motivation to cheat. LLM can also become the subject of the examination themselves. Teachers should undergo further training in AI and consider LLM as an addition.</p>","PeriodicalId":45850,"journal":{"name":"GMS Journal for Medical Education","volume":"41 4","pages":"Doc47"},"PeriodicalIF":1.5000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11474642/pdf/","citationCount":"0","resultStr":"{\"title\":\"Legal aspects of generative artificial intelligence and large language models in examinations and theses.\",\"authors\":\"Maren März, Monika Himmelbauer, Kevin Boldt, Alexander Oksche\",\"doi\":\"10.3205/zma001702\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>The high performance of generative artificial intelligence (AI) and large language models (LLM) in examination contexts has triggered an intense debate about their applications, effects and risks. What legal aspects need to be considered when using LLM in teaching and assessment? What possibilities do language models offer? Statutes and laws are used to assess the use of LLM: - University statutes, state higher education laws, licensing regulations for doctors - Copyright Act (UrhG) - General Data Protection Regulation (DGPR) - AI Regulation (EU AI Act) LLM and AI offer opportunities but require clear university frameworks. These should define legitimate uses and areas where use is prohibited. Cheating and plagiarism violate good scientific practice and copyright laws. Cheating is difficult to detect. Plagiarism by AI is possible. Users of the products are responsible. LLM are effective tools for generating exam questions. 
Nevertheless, careful review is necessary as even apparently high-quality products may contain errors. However, the risk of copyright infringement with AI-generated exam questions is low, as copyright law allows up to 15% of protected works to be used for teaching and exams. The grading of exam content is subject to higher education laws and regulations and the GDPR. Exclusively computer-based assessment without human review is not permitted. For high-risk applications in education, the EU's AI Regulation will apply in the future. When dealing with LLM in assessments, evaluation criteria for existing assessments can be adapted, as can assessment programmes, e.g. to reduce the motivation to cheat. LLM can also become the subject of the examination themselves. Teachers should undergo further training in AI and consider LLM as an addition.</p>\",\"PeriodicalId\":45850,\"journal\":{\"name\":\"GMS Journal for Medical Education\",\"volume\":\"41 4\",\"pages\":\"Doc47\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11474642/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"GMS Journal for Medical Education\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3205/zma001702\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION, SCIENTIFIC DISCIPLINES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"GMS Journal for Medical Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3205/zma001702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Legal aspects of generative artificial intelligence and large language models in examinations and theses.
The high performance of generative artificial intelligence (AI) and large language models (LLMs) in examination contexts has triggered an intense debate about their applications, effects and risks. What legal aspects need to be considered when using LLMs in teaching and assessment? What possibilities do language models offer? The use of LLMs is assessed against the following statutes and regulations:
- University statutes, state higher education laws, licensing regulations for doctors
- Copyright Act (UrhG)
- General Data Protection Regulation (GDPR)
- AI Regulation (EU AI Act)
LLMs and AI offer opportunities but require clear university frameworks. These should define legitimate uses and areas where use is prohibited. Cheating and plagiarism violate good scientific practice and copyright law. Cheating is difficult to detect, and plagiarism by AI is possible; users of the tools remain responsible. LLMs are effective instruments for generating exam questions. Nevertheless, careful review is necessary, as even apparently high-quality output may contain errors. The risk of copyright infringement with AI-generated exam questions is low, however, as copyright law allows up to 15% of a protected work to be used for teaching and examinations. The grading of exam content is subject to higher education laws and regulations and to the GDPR; exclusively computer-based assessment without human review is not permitted. For high-risk applications in education, the EU's AI Regulation will apply in the future. When dealing with LLMs in assessments, evaluation criteria for existing assessments can be adapted, as can assessment programmes, e.g. to reduce the motivation to cheat. LLMs can also themselves become the subject of the examination. Teachers should undergo further training in AI and consider LLMs as a supplement.
Journal description:
GMS Journal for Medical Education (GMS J Med Educ) – formerly GMS Zeitschrift für Medizinische Ausbildung – publishes scientific articles on all aspects of undergraduate and graduate education in medicine, dentistry, veterinary medicine, pharmacy and other health professions. Research and review articles, project reports, short communications as well as discussion papers and comments may be submitted. There is a special focus on empirical studies which are methodologically sound and lead to results that are relevant beyond the respective institution, profession or country. Qualitative as well as quantitative studies are welcome, and submissions by students are especially encouraged.

It is the mission of GMS Journal for Medical Education to contribute to furthering scientific knowledge in the German-speaking countries as well as internationally, and thus to foster the improvement of teaching and learning and to build an evidence base for undergraduate and graduate education. To this end, the journal has set up an editorial board with international experts. All manuscripts submitted are subjected to a clearly structured peer review process. All articles are published bilingually in English and German and are available with unrestricted open access, making the journal accessible to a broad international readership.

GMS Journal for Medical Education is published as an unrestricted open access journal with at least four issues per year. In addition, special issues on current topics in medical education research are published. Until 2015 the journal appeared under its German name, GMS Zeitschrift für Medizinische Ausbildung; the change of name to GMS Journal for Medical Education underlines its international mission.