Assessing interactional competence through group discussion: A mixed methods validation

John Syquia, Paul Leeming
{"title":"Assessing interactional competence through group discussion: A mixed methods validation","authors":"John Syquia ,&nbsp;Paul Leeming","doi":"10.1016/j.rmal.2024.100144","DOIUrl":null,"url":null,"abstract":"<div><p>The purpose of this mixed methods study was to assess the validity and functionality of an analytic rating scale for the assessment of interactional competence (IC). The participants were 79 low- to high-proficiency Japanese university students who completed 10-minute small-group discussions. Video recordings of the discussions were assessed by raters using the rating scale. The rater scores were then analyzed using many-facet Rasch measurement (MFRM) which indicated a very good fit to the model. The data were subsequently analyzed using generalizability theory in the form of a G-study and d-study. Those studies showed that the rating scale could be used with fewer raters, therefore increasing practicality without a substantial decrease in reliability. In addition to quantitative data, qualitative data were also collected in the form of interviews with raters and comments they made during assessment. Several raters noted unexpected participant behaviors which were difficult to evaluate using the rating scale, as well as ambiguous language in some category descriptors. The qualitative data provided an invaluable supplement to the quantitative analyses which did not indicate major issues with the rubric. Both forms of data were used to revise the original rating scale and those changes are discussed. This study adds to the limited but growing number of mixed methods studies on IC assessment.</p></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"3 3","pages":"Article 100144"},"PeriodicalIF":0.0000,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research Methods in Applied Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772766124000508","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The purpose of this mixed methods study was to assess the validity and functionality of an analytic rating scale for the assessment of interactional competence (IC). The participants were 79 low- to high-proficiency Japanese university students who completed 10-minute small-group discussions. Video recordings of the discussions were assessed by raters using the rating scale. The rater scores were then analyzed using many-facet Rasch measurement (MFRM), which indicated a very good fit to the model. The data were subsequently analyzed using generalizability theory in the form of a G-study and D-study. These analyses showed that the rating scale could be used with fewer raters, thereby increasing practicality without a substantial decrease in reliability. In addition to quantitative data, qualitative data were also collected in the form of interviews with raters and comments they made during assessment. Several raters noted unexpected participant behaviors that were difficult to evaluate using the rating scale, as well as ambiguous language in some category descriptors. The qualitative data provided an invaluable supplement to the quantitative analyses, which did not indicate major issues with the rubric. Both forms of data were used to revise the original rating scale, and those changes are discussed. This study adds to the limited but growing number of mixed methods studies on IC assessment.
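As a brief illustration of the D-study logic referred to above, the sketch below projects how the generalizability (Eρ²) and dependability (Φ) coefficients change as the number of raters is reduced in a simple fully crossed persons-by-raters design. The variance components in the script are hypothetical placeholders, not values reported in the study, and the study's actual design may well include additional facets (e.g., rating categories); the code only shows the general form of such a projection.

```python
# Illustrative D-study projection for a fully crossed persons-by-raters (p x r) design.
# The variance components below are HYPOTHETICAL placeholders, not values reported
# in the study; they only show how Ep^2 and Phi change as raters are reduced.

def d_study(var_p, var_r, var_pr_e, n_raters):
    """Project reliability coefficients for a given number of raters."""
    rel_error = var_pr_e / n_raters                       # relative error variance
    abs_error = var_r / n_raters + var_pr_e / n_raters    # absolute error variance
    g_coef = var_p / (var_p + rel_error)                  # Ep^2 (norm-referenced)
    phi = var_p / (var_p + abs_error)                     # Phi (criterion-referenced)
    return g_coef, phi

# Hypothetical variance components: person, rater, person-by-rater/residual.
var_p, var_r, var_pr_e = 0.60, 0.05, 0.35

for n in (4, 3, 2, 1):
    g, phi = d_study(var_p, var_r, var_pr_e, n)
    print(f"raters = {n}: Ep^2 = {g:.2f}, Phi = {phi:.2f}")
```

Because the error terms are divided by the number of raters, the projected coefficients decline gradually rather than abruptly as raters are removed, which is the pattern underlying the practicality argument summarized in the abstract.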
