{"title":"Setting out SET: a situational mapping of student evaluation of teaching in Australian higher education","authors":"Margaret Lloyd, Freya Wright-Brough","doi":"10.1080/02602938.2022.2130169","DOIUrl":"https://doi.org/10.1080/02602938.2022.2130169","url":null,"abstract":"Abstract The student evaluation of teaching (SET) in higher education has become an increasingly complex and subjectively contested area. From a singular purpose in seeking information to improve teaching in the 1920s, evaluation has now expanded to encompass administrative and regulatory purposes. Currently, evaluation impacts on personal and institutional reputation and is frequently used as a benchmark in determining and shaping individual academic careers. The value and purpose of evaluation is open to ongoing debate, as is the notion of transparency regarding who should have access to evaluation data (quantitative scores and/or free text comments). This paper presents the outcome of a situational mapping we conducted to better understand student evaluations of teaching in Australian higher education. We identified the component actors, actants and elements of the setting and derived a list of the discursive constructions which drive the relationships between them. To test the efficacy of our mapping in terms of isolating situations within the broader setting, we describe three hypothetical case studies: making student evaluations public, closing the loop and academic surveillance.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"790 - 805"},"PeriodicalIF":4.4,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41528645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"‘I don’t know if people realize the impact of their words’: how does feedback during internship impact nursing student learning?","authors":"Matthieu Hausman, J. Dancot, B. Petre, M. Guillaume, P. Detroz","doi":"10.1080/02602938.2022.2130168","DOIUrl":"https://doi.org/10.1080/02602938.2022.2130168","url":null,"abstract":"Abstract Previous studies on the factors that can affect self-esteem and clinical skills during training among bachelor’s-level nursing students in Belgium have shown that internships – and evaluation and feedback moments, more specifically – were key points in that process. We did a study to better understand how students experience those moments and which specific aspects of feedback are involved. This article focuses on how feedback is experienced and on the consequences of that in terms of learning. Here we identify the aspects of feedback that can result in positive or negative experiences, with different implications for learning. Our findings highlight the key role that – along with valence – the focus and tone of feedback plays. In addition, students’ lived experience can heighten or dampen their motivation to act on feedback and affect how they regulate their learning behavior when feedback is experienced either positively or negatively. Generally speaking, students show resistance or rejection when feedback is experienced negatively. While these results are consistent with other studies, further research is needed to explore the emotional process at work in feedback processing.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"777 - 789"},"PeriodicalIF":4.4,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48298156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are men and women really different? The effects of gender and training on peer scoring and perceptions of peer assessment","authors":"J. C. Ocampo, E. Panadero, Fernando Díez","doi":"10.1080/02602938.2022.2130167","DOIUrl":"https://doi.org/10.1080/02602938.2022.2130167","url":null,"abstract":"Abstract A number of studies have expressed that gender might be a source of difference and bias in peer assessment activities. However, evidence supporting this remains mixed and scant. The present study examined gender difference and accuracy bias between men and women assessors’ peer scoring of same-sex or opposite-sex writing samples using a quasi-experimental approach in which we implemented peer assessment training to explore if it could minimise gender difference and bias. Additionally, we also explored the effects on participants’ perceptions of trust and comfort in giving peer scores. A total of 145 (men = 25) psychology students enrolled in four separate courses participated in this study. Two of the classes received peer assessment training, while the other two only received task instructions. Participants were divided into eight scoring subgroups where they peer scored three writing samples of varying quality (poor, average and excellent) using a scoring rubric in Eduflow. We found that, regardless of their training condition, men and women assessors did not differ in their peer scores of men and women peers. Only untrained men assessors showed less trust in their abilities and discomfort when peer scoring women assessees’ writing samples.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"760 - 776"},"PeriodicalIF":4.4,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59373329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"We cannot agree to disagree: ensuring consistency, transparency and fairness across bachelor thesis writing, supervision and evaluation","authors":"Riina Koris, Rauno Pello","doi":"10.1080/02602938.2022.2125931","DOIUrl":"https://doi.org/10.1080/02602938.2022.2125931","url":null,"abstract":"Abstract Writing, supervision and evaluation of students’ dissertations has received a fair share of attention in academic literature, with a focus on problems of marking, information processing mechanisms, worldviews and more. In most texts, problems of inconsistency, transparency and fairness are identified, leading to frustration among supervisors, assessors and students. This article shares positive experience on the creation and application of an instrument which would benefit educators and contribute practical solutions on how the issues of inconsistency, transparency and fairness within the bachelor thesis process could be tackled. Using design science research, which utilizes gained knowledge to solve problems, create change or improve existing solutions, we developed the bachelor thesis writing and assessing instrument. Following three iterations of the instrument among writers, supervisors and assessors of bachelor theses at the Estonian Business School, we conclude that the use of the instrument has greatly improved consistency, transparency and fairness of the thesis-process, thus benefitting all three parties in particular and the university in general.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"736 - 747"},"PeriodicalIF":4.4,"publicationDate":"2022-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41781929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Playing the SET game: how teachers view the impact of student evaluation on the experience of teaching and learning","authors":"R. Lakeman, R. Coutts, M. Hutchinson, D. Massey, Dima Nasrawi, Jann Fielden, Megan Lee","doi":"10.1080/02602938.2022.2126430","DOIUrl":"https://doi.org/10.1080/02602938.2022.2126430","url":null,"abstract":"Abstract Student evaluation of teaching (SET) has become a ubiquitous feature of higher education. The attainment and maintenance of positive SET is essential for most teaching staff to obtain and maintain tenure. It is not uncommon for teachers to receive offensive and non-constructive commentary unrelated to teaching quality. Regular exposure to SET contributes to stress and adversely impacts mental health and well-being. We surveyed Australian teaching academics in 2021, and in this paper, we explore the perceived impacts of SET on the teaching and learning experience, academic standards and quality. Many respondents perceived that SET contributes to an erosion of standards and inflation of grades. A thematic analysis of open-ended questions revealed potential mechanisms for these impacts. These include enabling a culture of incivility, elevating stress and anxiety in teaching staff, and pressure to change approaches to teaching and assessment to achieve the highest scores. Playing the SET game involves balancing a commitment to quality and standards with concessions to ensure optimal student satisfaction. Anonymous SET is overvalued, erodes standards and contributes to incivility. The process of SET needs urgent reform.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"749 - 759"},"PeriodicalIF":4.4,"publicationDate":"2022-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41672336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How does grade inflation affect student evaluation of teaching?","authors":"Byung-Ryun Park, Joonmo Cho","doi":"10.1080/02602938.2022.2126429","DOIUrl":"https://doi.org/10.1080/02602938.2022.2126429","url":null,"abstract":"Abstract Student evaluation of teaching (SET) is important for assessing university instructors’ performance. However, this system seems biased as students’ grade expectations result in rewards or penalties in SET. As a fair evaluation of grades became difficult during the COVID-19 pandemic, universities implemented a relaxed grade policy that expanded the distribution of high grades. This grade inflation altered students’ expected grade. Through empirical analysis, this study examined the change in the relationship between bias and SET due to grade inflation. A top-ranking South Korean university provided 125,003 cases of SET data in 2019 and 2020 for the analysis. Grade inflation diminished the biasing effect on SET, mainly in terms of reward. Furthermore, the group with the lowest grade point average (GPA) showed the highest decrease in rewards, and the group with the highest GPA showed maximum decrease in punishment. This finding implies that a change in expected grades due to factors other than lectures may alter students’ attitudes toward SET, and grade expectations may play a key role in reducing bias in SET.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"723 - 735"},"PeriodicalIF":4.4,"publicationDate":"2022-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59373321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessors’ perspectives of an oral assessment within a teaching performance assessment","authors":"Julia Mascadri, Nerida Spina, Rebecca Spooner-Lane, Elizabeth Briant","doi":"10.1080/02602938.2022.2122930","DOIUrl":"https://doi.org/10.1080/02602938.2022.2122930","url":null,"abstract":"Abstract Australia has recently implemented Teaching Performance Assessments (TPAs) as a national accreditation requirement to assess final year preservice teachers’ classroom readiness. In 2019, an Australian university developed a TPA to meet this requirement, comprising three written components and one oral component. This exploratory study investigated 18 TPA assessors’ perceptions of the oral component. Focus group data revealed that both explicit and latent assessment criteria influenced assessors’ professional judgments of the oral component. A discourse competence framework was used to analyse the data, illustrating how preservice teachers’ personal experience and their professional and institutional discourse competence are evident in their orals. Thematic analysis revealed that benefits and issues of fairness and equity contributed to assessors’ perspectives about the value of the oral component.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"613 - 626"},"PeriodicalIF":4.4,"publicationDate":"2022-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45960829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The end of the line? A quantitative analysis of two decades of competitive assessment and research funding in UK higher education","authors":"A. Kelly","doi":"10.1080/02602938.2022.2120961","DOIUrl":"https://doi.org/10.1080/02602938.2022.2120961","url":null,"abstract":"Abstract The Research Excellence Framework is a high-stakes exercise used by the UK government to allocate billions of pounds of quality-related research (QR) funding and used by the media to rank universities and their departments in national league tables. The 2008, 2014 and 2021 assessments were zero-sum games in terms of league table position because the outcomes were captured as Grade Point Averages (GPA) on a ratio scale, unlike the 1996 and 2001 iterations when departments were ranked on a simple seven-point ordinal scale. Although league tables were never part of the assessment itself, they were inevitable in 2008, 2014 and 2021 given the nature of the scoring, and subsequent league table position had a significant effect on investment and disinvestment within universities. This paper uses data from the 2008, 2014 and 2021 assessments to look at the changing competitiveness of different subjects, the size of submissions, and how these are related to QR funding. It finds that competition in the UK research sector is exceptionally tough, but that competitiveness and QR funding are so closely related to submission size that it calls into question the benefit of carrying out any more assessment exercises in their current format.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"701 - 722"},"PeriodicalIF":4.4,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47381200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What do college students think of feedback literacy? An ecological interpretation of Hong Kong students’ perspectives","authors":"Y. Zhan","doi":"10.1080/02602938.2022.2121380","DOIUrl":"https://doi.org/10.1080/02602938.2022.2121380","url":null,"abstract":"Abstract Recent discussions on student feedback literacy have been primarily conceptual and framed from the perspectives of scholars and educators. Few empirical studies have explored what and how college students conceive of student feedback literacy. To address this research gap, we explored Hong Kong college students’ conceptions of student feedback literacy. Fifteen Bachelor of Education students were individually interviewed to elaborate on the mind maps they had drawn about student feedback literacy. The data analysis revealed that the participants depicted several feedback competencies required for students to elicit and process feedback but paid scant attention to the competencies needed to enact feedback. Meanwhile, they believed that a feedback-literate student should appreciate the values of feedback and be active, modest and committed in the feedback process. The participants’ conceptions of student feedback literacy were ecologically influenced by Chinese cultural values, the university learning setting, their prior feedback experiences, and course learning.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"686 - 700"},"PeriodicalIF":4.4,"publicationDate":"2022-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"59373314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Psychometric properties of the academic procrastination scale in Spanish university students","authors":"L. Martín-Antón, Leandro S. Almedia, M. Sáiz-Manzanares, Marta Álvarez‐Cañizo, M. Carbonero","doi":"10.1080/02602938.2022.2117791","DOIUrl":"https://doi.org/10.1080/02602938.2022.2117791","url":null,"abstract":"Abstract Procrastination in academic activities is common amongst university students, and has negative consequences for their personal as well as academic development. As a result, there is a need for valid –yet at the same time brief and clear-cut– measurement tools that enable the specific procrastinating behaviour of university students to be measured. This work explores in depth the psychometric properties of the Spanish version of the Academic Procrastination Scale, a widely used brief tool in secondary and higher education in the Spanish speaking world. The scale was applied to a total of 1734 university students, together with the Procrastination Assessment Scale-Students (PASS), the Unintentional Procrastination Scale (UPS) and the Active Procrastination Scale (APS). Factor analyses indicate the best fit is a structure involving four interrelated factors (task aversion, poor time management, low emotional and motivational self-control, and risk assumption) compared to other proposed models. The model presents factorial invariance between men and women, and adequate convergent validity. We discuss the implications of using this scale in higher education, since differentiating the four factors might help to identify different support measures depending on university student needs.","PeriodicalId":48267,"journal":{"name":"Assessment & Evaluation in Higher Education","volume":"48 1","pages":"642 - 656"},"PeriodicalIF":4.4,"publicationDate":"2022-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48229762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}