{"title":"Diversity, equity and inclusion in language research: Addressing English-centricity","authors":"Marc Brysbaert","doi":"10.1016/j.rmal.2025.100286","DOIUrl":"10.1016/j.rmal.2025.100286","url":null,"abstract":"<div><div>This article investigates the pervasive and often subtle dominance of English in academic scholarship. Drawing on existing literature and illustrative case studies, the analysis demonstrates and examines the deep-rooted prevalence of this phenomenon, particularly its compulsory nature for language researchers operating outside English-speaking contexts. The article concludes by proposing actionable solutions designed to foster greater inclusion of participants from diverse linguistic backgrounds.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100286"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145594504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aligning input and assessment: Pictorial measures in audiovisual vocabulary research","authors":"Kadir Kaderoğlu","doi":"10.1016/j.rmal.2025.100288","DOIUrl":"10.1016/j.rmal.2025.100288","url":null,"abstract":"<div><div>A common methodological issue in audiovisual vocabulary research is input-test incongruence. This is particularly evident in studies comparing captioned and non-captioned viewing, where the reliance on written vocabulary tests essentially biases results in favor of captioned groups. Grounded in the complementary frameworks of Transfer-Appropriate Processing and Dual Coding Theory, this brief report argues that the reliance on unimodal (written or aural) tests overlooks the visual channel, a defining element of the audiovisual input. Consequently, a portion of vocabulary knowledge encoded via imagery risks going unmeasured, threatening construct validity. A methodological case is made for integrating pictorial measures to achieve greater authenticity and alignment with the multimodal learning condition. The report concludes by outlining a proposed study to empirically test whether pictorial assessments can capture unique learning gains, thereby providing a more valid and complete account of vocabulary learning from audiovisual input.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100288"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial intelligence-based automatic evaluation of human translation and interpreting: A systematic review of assessment and validation practices","authors":"Chao Han","doi":"10.1016/j.rmal.2026.100300","DOIUrl":"10.1016/j.rmal.2026.100300","url":null,"abstract":"<div><div>Human-generated translation and interpreting (T&I) are routinely evaluated in domains such as language education and professional certification. While artificial intelligence (AI) is increasingly used for automatic assessment, little research has examined its application to human T&I. Drawing on rigorous database search and screening, this systematic review attempts to close this gap. Based on a curated corpus of 69 studies, we identify important trends in assessment design, model architecture, and validation practice. The data analysis shows a marked increase in research since 2020, with a dominant focus on English-Chinese T&I, primarily within educational contexts. Most studies employed feature-based machine learning models or repurposed machine translation metrics for scoring, while only a minority explored end-to-end large language models. Benchmark construction was found to be inconsistently reported, with many studies omitting key information about rater qualification, training, reliability, and scoring criteria. Validation practices primarily relied on correlations with human benchmark scores, with limited evidence of convergent validity or cross-condition generalizability. Notably, post-hoc explainability, a crucial step for ensuring transparency in opaque AI systems, was rarely implemented. Overall, this review highlights both progress and persistent challenges in AI-based T&I assessment. While AI holds promise for enhancing assessment efficiency and scalability, methodological limitations and transparency gaps currently constrain its responsible use. 
We recommend improved reporting standards, multi-pronged validation strategies, development of large annotated benchmark datasets, and greater attention to model interpretability and explainability. These steps are essential for building robust, trustworthy AI systems for automatic T&I assessment.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100300"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
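The dominant validation practice described in the review above is correlating automatic scores with human benchmark scores. A minimal sketch of that check, using entirely hypothetical data (the `human` and `auto` arrays and their sizes are invented for illustration, not drawn from any reviewed study):

```python
# Sketch of the correlation-with-human-benchmark validation practice:
# how closely do automatic scores track human ratings of T&I quality?
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
human = rng.uniform(1, 9, 50)          # hypothetical human benchmark scores (1-9 scale)
auto = human + rng.normal(0, 1.0, 50)  # hypothetical noisy automatic metric scores

r, p = pearsonr(auto, human)           # linear agreement
rho, _ = spearmanr(auto, human)        # rank-order agreement
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```

As the review notes, such correlations alone leave convergent validity and cross-condition generalizability untested; they are a starting point, not a full validation argument.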
{"title":"Self-citation attitudes and practices in applied linguistics: A mixed-methods study","authors":"Ekaterina Sudina , Mina Bikmohammadi , Luke Plonsky","doi":"10.1016/j.rmal.2026.100307","DOIUrl":"10.1016/j.rmal.2026.100307","url":null,"abstract":"<div><div>Excessive self-citation is a questionable research practice that has been found to be positively correlated with several researcher background characteristics. This study focuses on the closely related practice of gratuitous self-citation, which has been defined as an unnecessary (although not necessarily excessive) reference to one’s own work that does not contribute to the scholarly value of a new piece. Toward this end, we conducted an anonymous online survey to gauge US-based applied linguists’ perceptions of and attitudes toward self-citation. The results, among other findings, suggest that the maximum number of self-citations per publication should be around five or six; however, 47% of respondents indicated that journals should not impose a limit. Additionally, 54% of participants noted that employing gratuitous self-citation is unacceptable. Nonetheless, 77% of scholars admitted that Google Scholar metrics are important for hiring and promotion. Correlational analyses revealed that several participant background variables were associated with the percentage of self-citation in general but not gratuitous self-citation. Multiple regression analyses indicated that gender was the only meaningful predictor of gratuitous self-citation, whereas age was the only meaningful predictor of self-citation in general. 
Finally, thematic analysis of qualitative comments provided additional perspectives on how applied linguists define gratuitous self-citation and identify reasons for engaging in self-citation as well as what harms gratuitous self-citation can cause.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100307"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A systemic functional linguistic coding scheme for context dependency and complexity in academic writing","authors":"Leah Tompkins , Anne McCabe","doi":"10.1016/j.rmal.2025.100284","DOIUrl":"10.1016/j.rmal.2025.100284","url":null,"abstract":"<div><div>In writing academic texts, secondary school students are expected to move between more abstract and complex concepts, on the one hand, and more context-dependent and simple ones, on the other. As they progress through years of schooling, they should gain greater control of these shifts. This ability can be particularly challenging to master in contexts where students learn subjects in a language other than their first. This cross-sectional study of biology writing in Content and Language Integrated Learning (CLIL) classrooms uses Systemic Functional Linguistics (SFL) to create a scheme for coding language resources related to context dependency and complexity. Within SFL, the meanings that create abstraction, or context dependency, are referred to as <em>presence</em>, and the meanings that create complexity, or condensation of meaning, are referred to as <em>mass</em>. These linguistic complexes have been proposed fairly recently and have not yet been operationalized for empirical research. Within the student texts, we focus specifically on responses to a question that elicits <em>explore</em>, a key Cognitive Discourse Function for hypothesizing. Our analysis shows that, as students progress through secondary school, their writing displays evidence of weaker <em>presence</em> (greater abstraction) and stronger <em>mass</em> (greater condensation of meaning), with year 10 students demonstrating greater control over the linguistic resources for hypothesizing in biology. These findings align with the typical developmental trajectory of secondary school students, suggesting that the coding scheme is valid. 
The findings also suggest a need for CLIL pedagogy that explicitly develops students’ control of <em>presence</em> and <em>mass</em> in disciplinary discourse.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100284"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Language teacher beliefs and teacher education programs: A 25-year methodological synthesis (2000-2024)","authors":"Farahnaz Faez , Michael Karas , Ata Ghaderi","doi":"10.1016/j.rmal.2026.100299","DOIUrl":"10.1016/j.rmal.2026.100299","url":null,"abstract":"<div><div>Language teacher beliefs are one of the main strands of teacher education research, and numerous studies explore how teacher education programs affect the development of such beliefs through the enacted program. There is a paucity of research, however, on the methodological design of these studies and what characterizes their initiatives. The aim of this research synthesis was to review and map out the methodological arrangements of the studies and illustrate how they implement their desired program. A comprehensive search was done in the three databases of Web of Science, Scopus, and Google Scholar with keywords related to language teacher beliefs and cognitions. A total number of 104 studies were identified and coded in 10 sections, which include such factors as (a) methodology, (b) theoretical framework, (c) data collection instruments, (d) number of participants, and (e) participants’ career stage. The results indicate an overall lack of clarity with the ontological framework of the studies, which is especially pronounced in light of the prevalence of qualitative designs within the corpus. Further, there is often a misalignment between the studies’ ontological paradigm and the methodological choices made. The findings call for greater ontological transparency, a higher degree of alignment between the theoretical framework and the methodological blueprint of research studies, and a broader and more versatile toolkit in identifying, examining, and transforming language teacher beliefs. 
The synthesis provides recommendations for advancing research on teacher beliefs through the methodological apparatus in this strand.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100299"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145976503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tutorial on unsupervised Gaussian mixture model for performance clustering in second language research","authors":"Huiying Cai , Yan Tang , Xun Yan","doi":"10.1016/j.rmal.2026.100296","DOIUrl":"10.1016/j.rmal.2026.100296","url":null,"abstract":"<div><div>This tutorial introduces the application of unsupervised Gaussian Mixture Model (GMM) clustering to identify second language (L2) performance profiles. GMM employs a probabilistic clustering technique that accommodates overlapping profile membership and provides a flexible method for analyzing high-dimensional performance data commonly encountered in L2 research. Using L2 writing assessment data from a local English placement test, we present a step-by-step analytical pipeline, covering data preparation, dimensionality reduction, model selection, visualization, and interpretation. This approach is adaptable to other performance modalities (e.g, speaking) and can be enriched with additional performance features to support a more comprehensive understanding of L2 performance and underlying language ability.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100296"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring implicit, automatized, and explicit knowledge: Testing competing CFA models","authors":"Sepideh Mehraein , Hamideh Marefat, Hossein Karami","doi":"10.1016/j.rmal.2026.100303","DOIUrl":"10.1016/j.rmal.2026.100303","url":null,"abstract":"<div><div>The construct validity of implicit and explicit measures remains debated. Grammaticality judgment tests (GJTs) and elicited imitation (EI), assumed to tap into implicit knowledge, have been criticized for involving explicit processes; additionally, the notion of automatized explicit knowledge has further blurred construct boundaries. This study examined the validity of word monitoring task (WMT), self-paced task (SPT), EI, timed and untimed GJTs, and metalinguistic knowledge test (MKT) in advanced EFL learners (N=48) using confirmatory factor-analytic models. The two-factor model, distinguishing psycholinguistic tasks as implicit measures and form-focused tasks as explicit measures, provided the best fit. The second-order model also fit well, suggesting that EI and timed GJT reflect automatized explicit knowledge distinct from metalinguistic knowledge (untimed GJT, MKT), though both align along a single explicit knowledge continuum. However, the three-factor model failed to establish automatized knowledge as an independent construct due to near-perfect correlations with explicit measures. 
Overall, the findings challenge the classification of EI and timed GJT as implicit measures, highlight the value of second-order modeling, and underscore the importance of real-time psycholinguistic tasks for capturing implicit knowledge.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100303"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emergent possibilities: A sociotechnical approach to generative AI in discourse analysis","authors":"Jay M. Woodhams","doi":"10.1016/j.rmal.2026.100306","DOIUrl":"10.1016/j.rmal.2026.100306","url":null,"abstract":"<div><div>Generative AI has fundamentally reshaped many research practices within academic institutions. One area experiencing substantial change is qualitative discourse analysis within the sociolinguistic paradigm. Adopting a critical realist-sociotechnical perspective, this study positions large language models (LLMs) not as replacements for human analysts but as components of emergent analytical systems where human-AI collaboration yields novel capabilities. The study examines two discourse analysis excerpts from the author’s pre-GenAI research, subjecting them to zero-shot prompting using GPT-4o and Gemini 2.5 Flash. By comparing the models’ outputs with the original human analyses, the research identifies opportunities for genuine synergy between analysts and AI. The findings show that LLMs, particularly ChatGPT, produce sophisticated interactional sociolinguistic analyses and excel at maintaining theoretical consistency, though tend to lack the contextual depth and cultural specificity that human analysts can provide. The paper advocates for a synergistic approach where LLMs augment rather than replace discourse analytic practice, proposing task-appropriate divisions of labour for researchers who want to integrate GenAI into their workflows. 
As qualitative research grapples with AI integration, this study suggests that the human-AI system represents a site of genuine analytical emergence which can transcend the capabilities of either component alone.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100306"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using human-Al collaboration to explore meanings of semiotic resources in L2 multimodal writing","authors":"Duygu Candarli","doi":"10.1016/j.rmal.2026.100305","DOIUrl":"10.1016/j.rmal.2026.100305","url":null,"abstract":"<div><div>This paper proposes a methodological framework for annotating semiotic resources and analysing the variations of their meanings in a corpus of second language (L2) disciplinary student writing. Most previous studies on semiotic resources or their meanings in student writing have relied on qualitative coding, which may limit the number of texts that could be analysed. This study proposes a quantitatively driven framework for analysing semantic domains of semiotic resources and uses a balanced corpus of 100 successful multimodal texts written by L2 student writers at the postgraduate level in UK higher education. Through collaboration between a locally run open-source vision language model and a human annotator, semiotic resources were annotated in the corpus, and then the meanings of these features were examined using semantic tagging and principal component analysis. The findings of the principal component analysis revealed three patterns of variation in the meanings of semiotic resources. One of the key findings is that the meanings of semiotic resources varied along a continuum, suggesting multifaceted rather than discrete meanings. This study’s annotation framework has implications for corpus construction and annotation in corpus studies of student writing that have largely overlooked multimodal features to date. 
Importantly, the study has methodological implications to examine the meanings and discourse functions of semiotic resources in future studies of L2 multimodal writing by using a mixed methods approach that moves from quantitative to qualitative analysis in a principled manner.</div></div>","PeriodicalId":101075,"journal":{"name":"Research Methods in Applied Linguistics","volume":"5 1","pages":"Article 100305"},"PeriodicalIF":0.0,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}