{"title":"Factors associated with online examination cheating","authors":"Michael Henderson, Jennifer Chung, R. Awdry, Matthew Mundy, Mike Bryant, Cliff Ashford, K. Ryan","doi":"10.1080/02602938.2022.2144802","DOIUrl":"https://doi.org/10.1080/02602938.2022.2144802","url":null,"abstract":"Abstract Online examinations are a common experience in higher education. Their security is a key concern for education communities and has resulted in a variety of cheating countermeasures. There is broad consensus in the literature that no single measure, including proctoring, eradicates cheating behaviours. As a result, this study is exploratory, seeking to add to our understanding of the range of factors that may interact with the frequency of cheating behaviour in online examinations. This large-scale study (N = 7839) is based in one Australian university, which pivoted to online examinations during the 2021 Covid-19 lockdowns. Students who reported cheating (n = 216) revealed a wide range of factors that may have influenced their behaviours. A key observation is that cheating, although less frequent than reported elsewhere, occurred regardless of security measures, assessment design and examination conditions, and across the spectrum of student demographic variables. However, there were statistically significant differences in the frequency of cheating according to certain demographics, examination conditions, motivations, attitudes and perceptions. Although some forms of proctoring were associated with reduced frequencies of self-reported cheating, they are demonstrably incomplete solutions due to the complexity of other variables.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82909071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bifactor modelling of the psychological constructs of learner feedback literacy: conceptions of feedback, feedback trust and self-efficacy","authors":"B. Song","doi":"10.1080/02602938.2022.2042187","DOIUrl":"https://doi.org/10.1080/02602938.2022.2042187","url":null,"abstract":"Abstract In contemporary feedback research, effective feedback depends not merely on the characteristics of the feedback itself but also on learners’ ability to understand, manage and use the information. This ability, known as feedback literacy, refers to learners’ social cognitive capacity, affective capacity and disposition prior to substantial engagement with feedback. In addition, feedback literacy is said to develop in social learning settings. Despite the importance of the learner feedback literacy construct, little is known about its quantitative conceptualization or the specific psychological variables underpinning it. The purpose of this paper is to propose and validate a bifactor model of learner feedback literacy consisting of (1) conceptions of feedback, (2) feedback trust and (3) self-efficacy. Drawing on data from 923 learners at a polytechnic in Singapore that practises a social constructivist learning approach, results from Rasch and bifactor modelling analyses revealed that the learner feedback literacy model is psychometrically sound and robust, with the potential to be developed further. Limitations and future research directions in the use of this model are discussed in the paper.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49365570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Talking during a test?! Embracing mobile instant messaging during assessment","authors":"Cecile Janse van Rensburg, S. Coetzee, Astrid Schmulian","doi":"10.1080/02602938.2022.2041545","DOIUrl":"https://doi.org/10.1080/02602938.2022.2041545","url":null,"abstract":"Abstract This study reports on the incorporation of mobile instant messaging (MIM) into assessments as a collaborative learning tool, enabling students to socially construct knowledge and develop their collaborative problem-solving competence while being assessed individually. In particular, this study explores: what is the extent and timing of students’ use of MIM to communicate with other students while being assessed individually? What communicative activities are evident in the content of students’ MIM communications while being assessed individually? How do students experience being able to use MIM while being assessed individually? The results of this study’s analysis of the messages sent during various assessments suggest that, when incorporating MIM into assessments, instructors should consider the objective of those assessments together with their nature (e.g. essay style) and stakes, as these appear to influence the extent, timing and content of students’ instant messaging communications during the assessment. A survey of the students suggested that their experiences of being able to use MIM during assessments were largely positive, owing to the learning opportunities, collaboration and teamwork, authenticity and equity that the introduction of instant messaging during assessment enabled.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42131707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing assessment literacy among trainee translators: scaffolding self and peer assessment as an intervention","authors":"Guangjiao Chen, Xiangling Wang, Lyu Wang","doi":"10.1080/02602938.2022.2142515","DOIUrl":"https://doi.org/10.1080/02602938.2022.2142515","url":null,"abstract":"Abstract Recent growth in assessment research has highlighted the importance of university students’ assessment literacy, yet few studies have investigated how self and peer assessment combined with scaffolding can develop it. To address this gap, we developed scaffolding self and peer assessment (SSPA) as an intervention to build student assessment literacy. A quasi-experimental design was implemented to test its usefulness: one class (N = 21) received the intervention and the other (N = 23) did not. The findings indicated that SSPA had a positive effect on the trainee translators’ assessment literacy levels. Consistently across the time points, the intervention group demonstrated significant improvements in self and peer feedback provision. In contrast, no such trend was observed in the non-intervention group. The intervention group also significantly outperformed the non-intervention group in translation performance. Students were generally positive about SSPA, demonstrating its feasibility in translation instruction. This study highlights the role that teacher scaffolding can play in helping trainee translators develop the assessment literacy needed to meet language service industry requirements.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42909214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A practical approach to overcoming biases in comparing student performance in higher education","authors":"Kai Pastor, Thorsten Schank, O. Troitschanskaia, K. Wälde","doi":"10.1080/02602938.2022.2134841","DOIUrl":"https://doi.org/10.1080/02602938.2022.2134841","url":null,"abstract":"Abstract In times of rankings and performance benchmarks, data on the average marks of higher education students are very common internationally and are often used as quality indicators in practice. We discuss the principles behind the distribution of students’ average marks. These principles need to be taken into account when calculating the percentile of (the average mark of) a student. An informative percentile is obtained only if the average mark is compared to a distribution of averages calculated on the same number of credit points as the student has obtained. We provide an empirical example from a university in Germany, which shows that percentile information can differ considerably when based on different samples. Our findings indicate that the approach proposed in this study may not only be the most efficient approach to ranking students for implementation in university practice, but may also contribute to a much more objective and credible grade reporting system for student performance.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47686766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review of the changes in higher education assessment and grading policy during covid-19","authors":"C. Chan","doi":"10.1080/02602938.2022.2140780","DOIUrl":"https://doi.org/10.1080/02602938.2022.2140780","url":null,"abstract":"Abstract The Covid-19 pandemic has brought about not only sociocultural, psychological and economic challenges, but also educational problems that have entailed radical policy changes to allow universities and colleges to fulfil their role. Before the pandemic, the majority of university courses were designed for face-to-face settings; under the social distancing policies since Covid-19, institutions have had to react quickly, revising their teaching and assessment policies to ensure that education continues for all students and to accommodate the new remote reality of education. This paper provides a review of the changes in educational assessment and grading policies of universities around the world in response to the Covid-19 pandemic. Five key themes were identified: Covid-19 binary grading systems, revised regulations governing the final degree award, relaxed academic progression policies, revised extension, deferral and exceptional circumstances policies, and mark adjustments. This review will serve as a useful summary for institutions to reflect upon and will inform planning strategies for future unexpected events.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44534424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the relationship between personalized feedback models, learning design and assessment outcomes","authors":"Fatemeh Salehian Kia, Abelardo Pardo, S. Dawson, Heather O’Brien","doi":"10.1080/02602938.2022.2139351","DOIUrl":"https://doi.org/10.1080/02602938.2022.2139351","url":null,"abstract":"Abstract The increasing use of technology in education has brought new opportunities for the systematic collection of student data. Analyzing technology-mediated trace data, for example, has enabled researchers to bring new insights into student learning processes and the factors that support learning and teaching. However, many of these learning analytics studies have drawn conclusions from limited data sets derived from a single course or program of study. This limits the generalizability of the reported outcomes and calls for research on larger institutional data sets. The institutional adoption and analysis of learning technology can provide deeper insights into a wide range of learning contexts in practice. This study examined how instructors used the learning tool OnTask to provide personalized feedback to students in large classes. We collected usage data from 99 courses and 19,385 students to examine how instructors customized feedback for different groups of students. The findings reveal a significant association between the topics of feedback and students’ performance levels. The results also demonstrated that instructors most frequently provided feedback related to student assessment. The study emphasizes the importance of teacher and student feedback literacy for creating effective feedback loops.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47365731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mark distribution is affected by the type of assignment but not by features of the marking scheme in a biomedical sciences department of a UK university","authors":"M. Daw","doi":"10.1080/02602938.2022.2134552","DOIUrl":"https://doi.org/10.1080/02602938.2022.2134552","url":null,"abstract":"Abstract Marking schemes are a tool to ensure fairness in the assessment of student work. Key features of fairness are that different markers would award the same mark to the same work and that the resulting marks effectively discriminate between different levels of student attainment. This study focuses on the ability of assessment to discriminate by analysing the mark distributions resulting from the use of different types of marking scheme in a real-world setting in a research-intensive UK university. This analysis shows that, in qualitative assessment, the mark distribution is unaffected by features of the marking scheme used. Instead, it shows that the type of assignment used has a significant effect on the mark distribution and that these effects are sometimes counterintuitive. Marking schemes are unlikely to be an effective tool in shaping mark distributions. To determine the effectiveness of approaches to assessment, we need to interrogate data rather than make assumptions.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42298797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program-level assessment planning in Australia: the considerations and practices of university academics","authors":"N. Charlton, R. Newsham-West","doi":"10.1080/02602938.2022.2134553","DOIUrl":"https://doi.org/10.1080/02602938.2022.2134553","url":null,"abstract":"Abstract Quality assessment in higher education needs to drive learning so as to produce graduates with disciplinary knowledge and employable skills. Performance-based funding requires universities to focus on graduate outcomes and future employment status. This research investigated how program-level assessment planning occurs within Australian universities, to determine what aspects academics consider and which assessment practices they identify as essential at the program-planning level. Constructivist grounded theory underpins the methodological approach, and 18 academics were interviewed. Participants included Associate Deans and Program Directors (or equivalent roles) from seven Australian universities, in the accredited disciplines of dietetics and physiotherapy and in non-accredited degrees in biomedical science and science. The results indicate that academics consider a myriad of factors, including policies, curriculum, skills and accreditation. Assessment practices identified as essential incorporate constructive alignment, a range of assessment tasks, and authentic and continuous assessment. The goal of program-level assessment planning is to holistically sequence assessment tasks to enhance students’ employability, so that they become successful graduates and secure employment, which in turn enables universities to acquire future funding.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43703318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessing engineering students’ perspectives of entrepreneurship education within higher education: a comparative study in Hong Kong","authors":"Hannah Y. H. Wong, Cecilia K. Y. Chan","doi":"10.1080/02602938.2022.2137103","DOIUrl":"https://doi.org/10.1080/02602938.2022.2137103","url":null,"abstract":"Abstract With the increasing interest in entrepreneurship education within engineering education, questions remain about what engineering entrepreneurship education should include. As engineering entrepreneurship education aims to foster entrepreneurial individuals who will contribute to knowledge-based societies and economic growth, student perspectives are crucial. This study assessed first- and final-year engineering students’ perceptions of entrepreneurship education at a university in Hong Kong, identifying important competencies as learning outcomes as well as motivating and deterring factors for students pursuing entrepreneurship as a career. The findings offer implications for curriculum design and educational practice, particularly for formally offering entrepreneurship education in the engineering discipline, embedding competency development in educational practices and developing opportunities that address students’ motivating and deterring factors.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2022-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47226312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Education","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}