"Unleashing the Creative Potential of Research Tensions: Toward a Paradox Approach to Methods"
Stephanie Schrage, Constantine Andriopoulos, Marianne W. Lewis, Wendy K. Smith
Organizational Research Methods, published online 2025-07-08. https://doi.org/10.1177/10944281251346804
Abstract: Research is a paradoxical process. Scholars confront conflicting yet interwoven pressures, considering methodologies that engage complexity and simplicity, induction and deduction, novelty and continuity, and more. Paradox theory offers insights that embrace such tensions, providing empirical examples that harness creative friction to foster more novel and useful, rigorous, and relevant research. Leveraging this lens, we open a conversation on research tensions, developing the foundations of a Paradox Approach to Methods applicable to organization studies more broadly. To do so, we first identify tensions raised at six methodological decision points: research scope, construct definition, underlying assumptions, data collection, data analysis, and interpretation. Second, we build on paradox theory to identify navigating practices: accepting, differentiating, integrating, and knotting. By doing so, we contribute to organizational research broadly by embracing methods of tensions to advance scholarly insight.
"The Journey of Forced Choice Measurement Over 80 Years: Past, Present, and Future"
Philseok Lee, Mina Son, Steven Zhou, Sean Joo, Zihao Jia, Virginia Cheng
Organizational Research Methods, published online 2025-07-07. https://doi.org/10.1177/10944281251350687
Abstract: Over the past two decades, forced-choice (FC) measures have received considerable attention from researchers and practitioners in industrial and organizational psychology. Despite the growing body of research on FC measures, there has not yet been a comprehensive review synthesizing the diverse lines of research. This article bridges this gap by presenting a systematic review of post-2000 literature on FC measures, addressing ten critical questions: (1) validity evidence, (2) faking resistance, (3) FC IRT models, (4) FC test design, (5) FC measure development, (6) test-taker reactions and response processes, (7) measurement and predictive bias, (8) reliability, (9) computerized adaptive testing, and (10) random responding. The review adopts a historical perspective, tracing the development of FC measures and highlighting key empirical findings, methodological advances, current trends, and future directions. By synthesizing a substantial body of evidence across multiple research streams, this article serves as a valuable resource, providing insights into the psychometric properties, theoretical underpinnings, and practical applications of FC measures in organizational contexts such as personnel selection, development, and assessment.
"Using Markov Chains to Detect Careless Responding in Survey Research"
Torsten Biemann, Irmela F. Koch-Bayram, Madleen Meier-Barthold, Herman Aguinis
Organizational Research Methods, published online 2025-06-24. https://doi.org/10.1177/10944281251334778
Abstract: Careless responses by survey participants threaten data quality and lead to misleading substantive conclusions that result in theory and practice derailments. Prior research developed valuable precautionary and post-hoc approaches to detect certain types of careless responding. However, existing approaches fail to detect certain repeated response patterns, such as diagonal-lining and alternating responses. Moreover, some existing approaches risk falsely flagging careful response patterns. To address these challenges, we developed a methodological advancement based on first-order Markov chains called Lazy Respondents (Laz.R) that relies on predicting careless responses based on prior responses. We analyzed two large datasets and conducted an experimental study to compare careless responding indices to Laz.R and provide evidence that its use improves validity. To facilitate the use of Laz.R, we describe a procedure for establishing sample-specific cutoff values for careless respondents using the "kneedle algorithm" and make an R Shiny application available to produce all calculations. We expect that using Laz.R in combination with other approaches will help mitigate the threat of careless responses and improve the accuracy of substantive conclusions in future research.
"Reliability Evidence for AI-Based Scores in Organizational Contexts: Applying Lessons Learned From Psychometrics"
Andrew B. Speer, Frederick L. Oswald, Dan J. Putka
Organizational Research Methods, published online 2025-06-24. https://doi.org/10.1177/10944281251346404
Abstract: Machine learning and artificial intelligence (AI) are increasingly used within organizational research and practice to generate scores representing constructs (e.g., social effectiveness) or behaviors/events (e.g., turnover probability). Ensuring the reliability of AI scores is critical in these contexts, and yet reliability estimates are reported in inconsistent ways, if at all. The current article critically examines reliability estimation for AI scores. We describe different uses of AI scores and how this informs the data and model needed for estimating reliability. Additionally, we distinguish between reliability and validity evidence within this context. We also highlight how the parallel test assumption is required when relying on correlations between AI scores and established measures as an index of reliability, and yet this assumption is frequently violated. We then provide methods that are appropriate for reliability estimation for AI scores that are sensitive to the generalizations one aims to make. In conclusion, we assert that AI reliability estimation is a challenging task that requires a thorough understanding of the issues presented, but a task that is essential to responsible AI work in organizational contexts.
"A Machine Learning Toolkit for Selecting Studies and Topics in Systematic Literature Reviews"
Andrea Simonetti, Michele Tumminello, Pasquale Massimo Picone, Anna Minà
Organizational Research Methods, published online 2025-05-26. https://doi.org/10.1177/10944281251341571
Abstract: Scholars conduct systematic literature reviews to summarize knowledge and identify gaps in understanding. Machine learning can assist researchers in carrying out these studies. This paper introduces a machine learning toolkit that employs Network Analysis and Natural Language Processing methods to extract textual features and categorize academic papers. The toolkit comprises two algorithms that enable researchers to: (a) select relevant studies for a given theme; and (b) identify the main topics within that theme. We demonstrate the effectiveness of our toolkit by analyzing three streams of literature: cobranding, coopetition, and the psychological resilience of entrepreneurs. By comparing the results obtained through our toolkit with previously published literature reviews, we highlight its advantages in enhancing transparency, coherence, and comprehensiveness in literature reviews. We also provide quantitative evidence about the toolkit's efficacy in addressing the challenges inherent in conducting a literature review, as compared with state-of-the-art Natural Language Processing methods. Finally, we discuss the critical role of researchers in implementing and overseeing a literature review aided by our toolkit.
{"title":"Using Coreference Resolution to Mitigate Measurement Error in Text Analysis","authors":"Farhan Iqbal, Michael D. Pfarrer","doi":"10.1177/10944281251334777","DOIUrl":"https://doi.org/10.1177/10944281251334777","url":null,"abstract":"Content analysis has enabled organizational scholars to study constructs and relationships that were previously unattainable at scale. One particular area of focus has been on sentiment analysis, which scholars have implemented to examine myriad relationships pertinent to organizational research. This article addresses certain limitations in sentiment analysis. More specifically, we bring attention to the challenge of accurately attributing sentiment in text that mentions multiple firms. Whereas traditional methods often result in measurement error due to misattributing text to firms, we offer coreference resolution—a natural language processing technique that identifies and links expressions referring to the same entity—as a solution to this problem. Across two studies, we demonstrate the potential of this approach to reduce measurement error and enhance the veracity of text analyses. We conclude by offering avenues for theoretical and empirical advances in organizational research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"45 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144113624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Enhancing Theorization Using Artificial Intelligence: Leveraging Large Language Models for Qualitative Analysis of Online Data"
Diana Garcia Quevedo, Anna Glaser, Caroline Verzat
Organizational Research Methods, published online 2025-05-21. https://doi.org/10.1177/10944281251339144
Abstract: Online data are constantly growing, providing a wide range of opportunities to explore social phenomena. Large Language Models (LLMs) capture the inherent structure, contextual meaning, and nuance of human language and are the base for state-of-the-art Natural Language Processing (NLP) algorithms. In this article, we describe a method to assist qualitative researchers in the theorization process by efficiently exploring and selecting the most relevant information from a large online dataset. Using LLM-based NLP algorithms, qualitative researchers can efficiently analyze large amounts of online data while still maintaining deep contact with the data and preserving the richness of qualitative analysis. We illustrate the usefulness of our method by examining 5,516 social media posts from 18 entrepreneurs pursuing an environmental mission (ecopreneurs) to analyze their impression management tactics. By helping researchers to explore and select online data efficiently, our method enhances their analytical capabilities, leads to new insights, and ensures precision in counting and classification, thus strengthening the theorization process. We argue that LLMs push researchers to rethink research methods as the distinction between qualitative and quantitative approaches becomes blurred.
{"title":"Efficient Processing of Long Sequence Text Data in Transformer: An Examination of Five Different Approaches","authors":"Zihao Jia, Philseok Lee","doi":"10.1177/10944281251326062","DOIUrl":"https://doi.org/10.1177/10944281251326062","url":null,"abstract":"The advent of machine learning and artificial intelligence has profoundly transformed organizational research, especially with the growing application of natural language processing (NLP). Despite these advances, managing long-sequence text input data remains a persistent and significant challenge in NLP analysis within organizational studies. This study introduces five different approaches for handling long sequence text data: term frequency-inverse document frequency with a random forest algorithm (TF-IDF-RF), Longformer, GPT-4o, truncation with averaged scores and our proposed construct-relevant text-selection approach. We also present analytical strategies for each approach and evaluate their effectiveness by comparing the psychometric properties of the predicted scores. Among them, GPT-4o, the truncation with averaged scores, and the proposed text-selection approach generally demonstrate slightly superior psychometric properties compared to TF-IDF-RF and Longformer. However, no single approach consistently outperforms the others across all psychometric criteria. The discussion explores the practical considerations, limitations, and potential directions for future research on these methods, enriching the dialogue on effective long-sequence text management in NLP-driven organizational research.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"22 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143653938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What Are Mechanisms? Ways of Conceptualizing and Studying Causal Mechanisms","authors":"Joep P. Cornelissen, Mirjam Werner","doi":"10.1177/10944281251318727","DOIUrl":"https://doi.org/10.1177/10944281251318727","url":null,"abstract":"Over the last two decades, much of management research has converged on the belief that one of its major aims is to identify the causal mechanisms that produce the phenomena that researchers seek to explain. In this paper, we review and synthesize the literature that has amassed around causal mechanisms. We do so by detailing the different methodological perspectives that are featured in management research, which we label as the contextual, constitutive, and interventionist perspectives. For each of these perspectives, we examine what it theoretically presupposes a mechanism to be, how this connects to methodological choices, and how this shapes the kind of mechanism-based explanations that each perspective offers. We also explore the main inferential challenges for each of these perspectives and offer specific methodological guidance in response. In this way, we aim to offer a common plank for theorizing and research on causal mechanisms in ways that recognize and harness the productive differences across different epistemologies and methodological traditions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"10 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143627436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Shedding Light on the Black Box: Integrating Prediction Models and Explainability Using Explainable Machine Learning"
Yucheng Zhang, Yuyan Zheng, Dan Wang, Xiaowei Gu, Michael J. Zyphur, Lin Xiao, Shudi Liao, Yangyang Deng
Organizational Research Methods, published online 2025-03-13. https://doi.org/10.1177/10944281251323248
Abstract: In contemporary organizational research, when dealing with large heterogeneous datasets and complex relationships, statistical modeling focused on developing substantive explanations typically results in low predictive accuracy. In contrast, machine learning (ML) exhibits remarkable strength for prediction, but suffers from an unexplainable analytical process and output—thus ML is often known as a "black box" approach. The recent development of explainable machine learning (XML) integrates high predictive accuracy with explainability, which combines the advantages inherent in both statistical modeling and ML paradigms. This paper compares XML with statistical modeling and the traditional ML approaches, focusing on an advanced application of XML known as evolving fuzzy system (EFS), which enhances model transparency by clarifying the unique contribution of each modeled predictor. In an illustrative study, we demonstrate two EFS-based XML models and conduct comparative analyses among XML, ML, and statistical models with a commonly-used database in organizational research. Our study offers a thorough description of analysis procedures for implementing XML in organizational research, along with best-practice recommendations for each step as well as Python code to aid future research using XML. Finally, we discuss the benefits of XML for organizational research and its potential development.