{"title":"Missing the forest for the trees: investigating factors influencing student evaluations of teaching","authors":"Richard O’Donovan","doi":"10.1080/02602938.2023.2266862","DOIUrl":"https://doi.org/10.1080/02602938.2023.2266862","url":null,"abstract":"Student evaluations of teaching (SETs) feature prominently in higher education and can impact an academic’s career. As a result, they have attracted considerable research attention in order to identify evidence of bias and the influence of factors beyond an educator’s control. This study investigates the influence of seven factors on a large dataset of student evaluations (N = 376,805) of academics teaching at an Australian university. Students were invited to rate their experience at the end of each teaching period using an online survey instrument. The following factors are analysed comparing means between relevant groups to verify if: i) SET is dominated by students with strong feelings; ii) revenge reviews are given by angry students; iii) larger units are rated lower than smaller units; iv) different expectations/ratings are given by students of different gender and backgrounds; v) reticence of international students lowers overall ratings; vi) bigoted students skew results for some staff; and, vii) SET surveys during examinations disadvantaging academics teaching units with examinations. Overall, while statistically significant differences were found, they represented only small or trivial effects, with medium effects in only two limited cases. 
The results highlight the importance of explicitly reporting the effect sizes of statistically significant results, and the benefits of representing differences visually in ways that avoid over-emphasising them.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135855081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement of higher education students’ and teachers’ experiences in learning management systems: a scoping review","authors":"Patricia Simon, Juming Jiang, Luke K. Fryer","doi":"10.1080/02602938.2023.2266154","DOIUrl":"https://doi.org/10.1080/02602938.2023.2266154","url":null,"abstract":"AbstractLearning management systems (LMSs) have facilitated access to courses beyond conventional classroom environments via distance and asynchronous education. Although numerous studies have examined LMS usage in higher education institutions, review of scales measuring the LMS experience of both students and teachers remains scarce. This scoping review aimed to identify scales assessing student and teacher experiences with LMS, along with the attributes of studies employing these scales. The systematic search encompassed five databases, ultimately incorporating 79 of 5536 peer-reviewed articles in the final review. Findings revealed that included studies predominantly focused on student samples, with fewer examining teacher samples and even fewer involving both stakeholders. The majority of included studies created their own measurement, and over half of the newly created measurements were combined with constructs that were extracted from multiple theories. The System Usability Scale is the only measurement that has been used in multiple studies. The Technology Acceptance Model (TAM) and DeLone and McLean’s Information Success (IS) model emerged as the most frequently employed frameworks for investigating factors influencing LMS utilization. Moodle ranked as the most commonly assessed LMS within the reviewed studies. Based on this data, recommendations for future LMS research are discussed.Keywords: Distance educationonline learningpost-secondary educationevaluation methodologies AcknowledgementThe authors would like to thank the help and support from Miss. Yijin Li and Miss. Ying Su. 
This work could not have been accomplished without their tireless work. Disclosure statement: There is no conflict of interest to declare. Notes on contributors: Patricia D. Simon is a postdoctoral fellow at the University of Hong Kong. Her research interests include the promotion of students’ mental health, well-being and engagement in both physical and virtual classrooms. She is also interested in applying psychological principles to the improvement of educational technologies and to the promotion of environmental sustainability, health, and well-being. Juming Jiang is a postdoctoral fellow at the University of Hong Kong, Hong Kong. His research programme focuses on how to support students’ learning motivation and interest in offline and online learning environments, with extended reality (i.e. virtual/augmented/mixed reality) and artificial intelligence technologies. Luke K. Fryer is an Associate Professor at the University of Hong Kong, Hong Kong. His research programme addresses motivations to learn, learning strategies and teaching-learning on/off-line.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135198819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shifting feedback agency to students by having them write their own feedback comments","authors":"David Nicol, Lovleen Kushwah","doi":"10.1080/02602938.2023.2265080","DOIUrl":"https://doi.org/10.1080/02602938.2023.2265080","url":null,"abstract":"In higher education, there is a tension between teachers providing comments to students about their work and students developing agency in producing that work. Most proposals to address this tension assume a dialogic conception of feedback where students take more agency in eliciting and responding to others’ advice, recently framed as developing their feedback literacy. This conception does not however acknowledge the feedback agency students exercise implicitly during learning, through interactions with resources (e.g. textbooks, videos). This study therefore adopted a different framing - that all feedback is internally generated by students through comparing their work against different sources of reference information, human and material; and that agency is increased when these comparisons are made explicit. Students produced a literature review, compared it against information in two published reviews, and wrote their own self-feedback comments. The small sample size enabled detailed analysis of these comments and of students’ experiences in producing them. Results show students can generate significant self-feedback by making resource comparisons, that this feedback can replace or complement teacher feedback, be activated when required and help students fine-tune feedback requests to teachers. 
This widely applicable methodology strengthens students’ natural capacity for agency and makes dialogic feedback more effective.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135251573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaboration, collusion, and barter-cheating: an analysis of academic help-seeking behaviors","authors":"Alexander Amigud, Samira Hosseini","doi":"10.1080/02602938.2023.2259631","DOIUrl":"https://doi.org/10.1080/02602938.2023.2259631","url":null,"abstract":"AbstractThis study explores the social nature of learning and discusses its implications for student assessment. To this end, we analyzed a sample of unique first-hand accounts of students seeking help with academic work, relying on the grounded theory approach for the identification of incentives for academic support (n = 807), and used time-series analysis (n = 5,637) to identify temporal trends. Our findings demonstrate an overlap in collaboration, collusion, and contact cheating practices and highlight a trade element in peer-relationships. In contrast to outsourcing of academic work to commercial providers, whereby academic support is exchanged for money, students’ tend to trade what they have available. The incentives offered in exchange for academic support included food, personal attention, money, alcohol, personal items, and sexual opportunities. The top subjects students sought help with were mathematics, history, and English. When examined on a timeline (2018–2023), the help-seeking behaviors persisted throughout the pandemic-related lockdowns; however, there was a shift toward monetary transactions. We argue that peer community can be considered an economy. Transacting with peers is more accessible, more affordable, and less risky than transacting with commercial providers. 
Furthermore, when students are partially involved in the production of academic work, such misconduct becomes harder to detect. Keywords: student assessment; peer support; contract cheating; collusion; help-seeking; social networks. Disclosure statement: No potential conflict of interest was reported by the author(s).","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135591261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flexible assessment: some benefits and costs for students and instructors","authors":"Mairi Cowan","doi":"10.1080/02602938.2023.2263668","DOIUrl":"https://doi.org/10.1080/02602938.2023.2263668","url":null,"abstract":"AbstractResearch on flexible assessment suggests that providing students with choice in assignments can increase motivation and deepen investment in learning. Although instructors are often advised to adopt flexible assessment, they are also warned about potential detriments such as perceived lack of rigour among colleagues, the stress that decision-making can bring to students, and increased workload for themselves. This paper draws upon student responses to a survey, a class discussion, and instructor observations to identify benefits and costs of flexible assessment in a fourth-year history course. Among the benefits are that students can pursue their interests more freely in both content and form, while the instructor can enjoy creative and original student work. The costs include anxiety among students who may be unsure how best to choose their assessments, and additional work for the instructor who must manage a multiplicity of assignments within the confines of an institutional grading system. The implementation of flexible assessment is recommended provided that the flexibility is compatible with the course’s learning outcomes, the students’ level of independence, and the instructor’s capacity to take on an unpredictable amount of extra work. Suggestions are offered for how to implement flexible assessment without creating too much of a burden for either students or instructors.Keywords: Flexible assessmentchoicemotivationworkloadhistory AcknowledgmentsThe author would like to thank colleagues and students at the University of Toronto Mississauga. 
In particular, the author is grateful for the encouragement and guidance of professors Sanja Hinić-Frlog, Nicole Laliberté, and Fiona Rawle, who helped develop this version of flexible assessment, and the students in HIS409, who remained open and generous in sharing their thoughts throughout the experiment. Disclosure statement: No potential conflict of interest was reported by the author.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135895860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are assessment accommodations cheating? A critical policy analysis","authors":"Juuso Henrik Nieminen, Sarah Elaine Eaton","doi":"10.1080/02602938.2023.2259632","DOIUrl":"https://doi.org/10.1080/02602938.2023.2259632","url":null,"abstract":"Assessment accommodations are used globally in higher education systems to ensure that students with disabilities can participate fairly in assessment. Even though assessment accommodations are supposed to promote access, not success, they are commonly portrayed as potentially being cheating in that they provide certain students with unfair advantages. This may lead students to avoid applying for accommodations for fear of being labelled ‘cheaters’. Various security practices are often implemented within assessment accommodation processes to detect and prevent cheating and malingering. However, there remains a lack of theoretical understanding of the discursive interconnections between assessment accommodations and assessment security. In this study, we conduct a critical policy analysis to unpack how Canadian assessment accommodation policies have problematised assessment accommodations as a potential site for cheating. We show that Canadian universities use considerable resources to prevent cheating as accommodations are administered. In doing so, they portray students with disabilities as potential cheaters. We situate these policies in the wider societal context of the ‘fear of the disability con’, which perpetuates discrimination towards people with disabilities. 
We argue that assessment accommodation policies belong to the realm of assessment security rather than integrity and may thus fail to promote equity and inclusion.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136135734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ten years of editing Assessment and Evaluation in Higher Education","authors":"Malcolm Tight","doi":"10.1080/02602938.2023.2181601","DOIUrl":"https://doi.org/10.1080/02602938.2023.2181601","url":null,"abstract":"","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43859974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantifying halo effects in students’ evaluation of teaching: a response to Michela","authors":"Edmund Cannon, Giam Pietro Cipriani","doi":"10.1080/02602938.2023.2180484","DOIUrl":"https://doi.org/10.1080/02602938.2023.2180484","url":null,"abstract":"In Cannon and Cipriani (Citation2022) we contributed to the literature on halo effects in student evaluations of teaching (SETs) by proposing and implementing a method to separate the effect of halo effects in student responses from an external measure of the item being assessed. Our paper has been criticised by Michela (Citation2022). Many of his comments about problems with SETs are not directly relevant as they discuss issues other than halo. We re-visit our data and confirm that our conclusion that halo does not necessarily make SETs uninformative is correct. However, we do find heterogeneity in the importance of halo between SETs from two different campuses.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136389491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Consensus moderation: the voices of expert academics","authors":"Jaci Mason, L. Roberts","doi":"10.1080/02602938.2022.2161999","DOIUrl":"https://doi.org/10.1080/02602938.2022.2161999","url":null,"abstract":"Abstract Consensus moderation, where collaboration and discussion take place to reach an agreement on mark allocation, is a frequently used approach to quality assurance in higher education. This study explored expert academics’ perceptions of consensus moderation through 12 semi-structured open-ended interviews. Data were analysed using thematic analysis and resulted in six themes: accept that marking is subjective; consensus moderation is a learning process; use calibration to develop and maintain standards; moderation is core academic work; resources are needed to enable consensus moderation; and different moderation practices are needed for different moderation purposes. Consensus moderation is a complex activity with many challenges, and the findings from this study contribute to our current understanding of consensus moderation. The findings have implications for policy and practice, and have identified ways in which we can enhance consensus moderation practice.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86660420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving student participation in SET: effects of increased transparency on the use of student feedback in practice","authors":"Marloes L. Nederhand, Judith Auer, B. Giesbers, Ad W. A. Scheepers, Elise van der Gaag","doi":"10.1080/02602938.2022.2052800","DOIUrl":"https://doi.org/10.1080/02602938.2022.2052800","url":null,"abstract":"Abstract Student evaluations of teaching (SET) are an influential – and often sole – tool in higher education to determine course and teacher effectiveness. It is therefore problematic that SET results are disturbed by low response rates and response quality. An important factor discussed in prior research to increase SET effectiveness and students’ motivation to participate is transparency about how their feedback is being applied in practice. The current study is the first to empirically test effects of transparency in a quasi-experimental field setting. After students filled in the SET, the intervention group was given a summary of the students’ comments and how the teacher will use these to improve the course. We examined student participation on subsequent course evaluations. In contrast to our expectations, there was no significant improvement in response rates nor response quality between the intervention and control group. Furthermore, perceptions of meaningfulness did not significantly differ between the control and intervention group. This study indicates that more empirical research is needed to define the conditions under which transparency influences student participation. 
Further implications and recommendations for future research are discussed.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":null,"pages":null},"PeriodicalIF":4.4,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47553870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}