Findings from a survey looking at attitudes towards AI and its use in teaching, learning and research

Edward Palmer, Daniel Lee, Matthew Arnold, Dimitra Lekkas, Katrina Plastow, Florian Ploeckl, Amit Srivastav, Peter Strelan
{"title":"关于对人工智能及其在教学、学习和研究中的应用的态度的调查结果","authors":"Edward Palmer, Daniel Lee, Matthew Arnold, Dimitra Lekkas, Katrina Plastow, Florian Ploeckl, Amit Srivastav, Peter Strelan","doi":"10.14742/apubs.2023.537","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence (AI) is having an advancing dramatic impact on Technology Enhanced Learning (TEL) in Higher Education. (Popenici & Kerr, 2017) observed an emergence of the use of AI in HE (Higher Education) and pinpointed challenges for institutions and students including issues of academic integrity, privacy and “the possibility of a dystopian future” (p. 11). Potential benefits of AI in HE includes creating learning communities through chatbots (Studente & Ellis, 2020), automated grading, individualized learning strategies and improved plagiarism detection (Owoc et al., 2019). It is unclear how often, and in what manner, students are engaging with AI during their learning and in creating submissions for assessments tasks and if this engagement is creating unrealistic outcomes. It is also unclear how educators are engaging with AI during their teaching and curriculum/assessment design and how this may be impacting the learning outcomes of their cohorts. This research study was conducted to investigate the perceived immediate and long-term implications of engaging with AI of both staff and students on learning and teaching within the University of Adelaide. The design of the research study is underpinned by a blended approach combining Situational Ethics and Planned Behavior Theory to understand the ethical considerations and behavioral activities and future intentions of staff and students regarding the use of AI. Situational Ethics provides a framework for examining the contextual nature of ethical decision-making regarding AI (Boddington, 2017; Memarian & Doleck, 2023). Planned Behavior Theory provides understanding of individuals' motivation and rationalization to engage with AI (Wang et al., 2022). By employing a mixed qualitative and quantitative design, collecting data via online surveys, the study's findings shed light on the ethical challenges and attitudes associated with AI implementation in higher education and provided insights into the factors that influence staff and students’ individual intentions to engage with AI technologies in Learning and Teaching.  Participants from all faculties across a wide diversity of student cohorts and staff responded to the surveys. Initial findings reveal educators are suspecting a greater student use of AI than the data demonstrates. The most frequent use of AI by students is for checking grammar and this is more prominent in the international student cohort. Students trust their human educators more than AI for course content and feedback on assessments. Educators are comfortable using AI but feel also they need greater support and training. The majority of students (70%, n=126) are not concerned about the implications of using Generative AI in higher education, regarding issues related to privacy, bias, ethics, or discrimination. However, demonstrating an active concern in this field, the most common use of AI by university staff is to test its capabilities to complete assignments. 
These and other findings from the study can provide guidance to staff and students by describing current practices and making recommendations regarding assessment, curriculum design, and Learning and Teaching (L&T) activities.","PeriodicalId":236417,"journal":{"name":"ASCILITE Publications","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Findings from a survey looking at attitudes towards AI and its use in teaching, learning and research\",\"authors\":\"Edward Palmer, Daniel Lee, Matthew Arnold, Dimitra Lekkas, Katrina Plastow, Florian Ploeckl, Amit Srivastav, Peter Strelan\",\"doi\":\"10.14742/apubs.2023.537\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial Intelligence (AI) is having an advancing dramatic impact on Technology Enhanced Learning (TEL) in Higher Education. (Popenici & Kerr, 2017) observed an emergence of the use of AI in HE (Higher Education) and pinpointed challenges for institutions and students including issues of academic integrity, privacy and “the possibility of a dystopian future” (p. 11). Potential benefits of AI in HE includes creating learning communities through chatbots (Studente & Ellis, 2020), automated grading, individualized learning strategies and improved plagiarism detection (Owoc et al., 2019). It is unclear how often, and in what manner, students are engaging with AI during their learning and in creating submissions for assessments tasks and if this engagement is creating unrealistic outcomes. It is also unclear how educators are engaging with AI during their teaching and curriculum/assessment design and how this may be impacting the learning outcomes of their cohorts. This research study was conducted to investigate the perceived immediate and long-term implications of engaging with AI of both staff and students on learning and teaching within the University of Adelaide. The design of the research study is underpinned by a blended approach combining Situational Ethics and Planned Behavior Theory to understand the ethical considerations and behavioral activities and future intentions of staff and students regarding the use of AI. Situational Ethics provides a framework for examining the contextual nature of ethical decision-making regarding AI (Boddington, 2017; Memarian & Doleck, 2023). Planned Behavior Theory provides understanding of individuals' motivation and rationalization to engage with AI (Wang et al., 2022). By employing a mixed qualitative and quantitative design, collecting data via online surveys, the study's findings shed light on the ethical challenges and attitudes associated with AI implementation in higher education and provided insights into the factors that influence staff and students’ individual intentions to engage with AI technologies in Learning and Teaching.  Participants from all faculties across a wide diversity of student cohorts and staff responded to the surveys. Initial findings reveal educators are suspecting a greater student use of AI than the data demonstrates. The most frequent use of AI by students is for checking grammar and this is more prominent in the international student cohort. Students trust their human educators more than AI for course content and feedback on assessments. Educators are comfortable using AI but feel also they need greater support and training. 
The majority of students (70%, n=126) are not concerned about the implications of using Generative AI in higher education, regarding issues related to privacy, bias, ethics, or discrimination. However, demonstrating an active concern in this field, the most common use of AI by university staff is to test its capabilities to complete assignments. These and other findings from the study can provide guidance to staff and students by describing current practices and making recommendations regarding assessment, curriculum design, and Learning and Teaching (L&T) activities.\",\"PeriodicalId\":236417,\"journal\":{\"name\":\"ASCILITE Publications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-11-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ASCILITE Publications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.14742/apubs.2023.537\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ASCILITE Publications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.14742/apubs.2023.537","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Artificial Intelligence (AI) is having an increasingly dramatic impact on Technology Enhanced Learning (TEL) in Higher Education. Popenici and Kerr (2017) observed the emergence of AI use in Higher Education (HE) and pinpointed challenges for institutions and students, including issues of academic integrity, privacy and “the possibility of a dystopian future” (p. 11). Potential benefits of AI in HE include creating learning communities through chatbots (Studente & Ellis, 2020), automated grading, individualized learning strategies and improved plagiarism detection (Owoc et al., 2019). It is unclear how often, and in what manner, students are engaging with AI during their learning and in creating submissions for assessment tasks, and whether this engagement is creating unrealistic outcomes. It is also unclear how educators are engaging with AI during their teaching and curriculum/assessment design, and how this may be impacting the learning outcomes of their cohorts. This study was conducted to investigate the perceived immediate and long-term implications of staff and student engagement with AI for learning and teaching at the University of Adelaide.

The study design is underpinned by a blended approach combining Situational Ethics and Planned Behavior Theory to understand the ethical considerations, behaviors and future intentions of staff and students regarding the use of AI. Situational Ethics provides a framework for examining the contextual nature of ethical decision-making regarding AI (Boddington, 2017; Memarian & Doleck, 2023), while Planned Behavior Theory helps explain individuals' motivations and rationalizations for engaging with AI (Wang et al., 2022). Employing a mixed qualitative and quantitative design, with data collected via online surveys, the study sheds light on the ethical challenges and attitudes associated with AI implementation in higher education and provides insights into the factors that influence staff and students' individual intentions to engage with AI technologies in Learning and Teaching.

Staff and students from a wide diversity of cohorts across all faculties responded to the surveys. Initial findings reveal that educators suspect greater student use of AI than the data demonstrate. The most frequent student use of AI is checking grammar, and this is more prominent in the international student cohort. Students trust their human educators more than AI for course content and feedback on assessments. Educators are comfortable using AI but also feel they need greater support and training. The majority of students (70%, n = 126) are not concerned about the implications of using Generative AI in higher education with regard to privacy, bias, ethics or discrimination. However, the most common use of AI by university staff is to test its capability to complete assignments, demonstrating an active concern in this area. These and other findings from the study can provide guidance to staff and students by describing current practices and making recommendations regarding assessment, curriculum design, and Learning and Teaching (L&T) activities.