{"title":"Textual Similarity in Organizational Research: Review of Applications, Consistency of Methods, and Best Practice Recommendations","authors":"Siyi Liu, Louis Hickman, Linus Dahlander, Henning Piezunka","doi":"10.1177/10944281261432629","DOIUrl":"https://doi.org/10.1177/10944281261432629","url":null,"abstract":"Organizational research increasingly uses natural language processing (NLP) to measure textual similarity. Despite common usage, the meaning and consistency of similarity measures (e.g., cosine similarity and Euclidean distance) across common NLP methods (e.g., <jats:italic toggle=\"yes\">n</jats:italic> -grams and document embeddings) is unclear. This risks misalignment between theoretical constructs and textual measures, undermining the comparability of findings across studies. To address this gap, we review studies using textual similarity in organizational and psychological research, finding a jingle-jangle fallacy: identical labels are used for similarity estimates from different NLP methods, and different labels are used for the same method. Additionally, we examine the consistency of similarity measures across and within NLP methods. Different transformer-based embeddings’ similarity results are interchangeable. However, <jats:italic toggle=\"yes\">n</jats:italic> -grams yield distinct, inconsistent results and are less appropriate for estimating similarity with distance measures. When applied to multi-word inputs, dictionaries and word embeddings return similar results reflecting linguistic style. 
We provide best practice recommendations and example code for operationalizing textual similarity, including clarifying which NLP methods correspond to content similarity, linguistic style similarity, and semantic similarity at the word, sentence, and document-levels of analysis.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"151 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2026-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147751554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
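The abstract above centers on cosine similarity computed over different text representations. As a minimal self-contained sketch (toy documents and whitespace tokenization of my own invention, not the authors' code or data), cosine similarity between n-gram count vectors can be computed like this:

```python
from collections import Counter
import math

def ngram_counts(text, n=1):
    """Token n-gram counts for a lowercased, whitespace-tokenized text."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors (dicts of counts)."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical example documents.
doc1 = "the firm acquired a rival firm"
doc2 = "the firm merged with a rival"
sim = cosine_similarity(ngram_counts(doc1), ngram_counts(doc2))
```

Swapping `ngram_counts` for dense vectors from an embedding model changes the representation but not the similarity measure — which is exactly the kind of method substitution whose consequences the paper examines.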
{"title":"“How Many Interviews Do I Need?” An Examination of Interview Numbers and Sampling Moves in Qualitative Research","authors":"Kasper Trolle Elmholdt, Michael Gill, Jeppe Agger Nielsen","doi":"10.1177/10944281261424516","DOIUrl":"https://doi.org/10.1177/10944281261424516","url":null,"abstract":"Qualitative researchers face an enduring question: How many interviews do I need? While a variety of guidelines exist, there is limited consensus over which specific factors should determine the number of interviews required. We examined the determination of interview sample sizes in 562 qualitative studies across six high-impact management and organizational journals over a decade. Our findings reveal considerable variance in interview numbers, yet limited information is often provided on the criteria used to determine them. To promote clearer alignment between sample sizes and methodology, we examined studies with detailed descriptions of their interview sampling. We identified specific “sampling moves” used to determine the number of interviews, categorized into three types—opening, focusing, and closing sampling moves—that researchers use to establish confidence in the sample and support theoretical insights. By implication, our study refutes the notion of a “magic” interview number. 
Instead, sampling moves are heuristic tools that qualitative researchers can thoughtfully adapt to their analytical aims when determining appropriate sample sizes.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"28 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2026-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147635728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling Unfolding Response Data Within the Structural Equation Modeling Framework","authors":"Ringo Moon-ho Ho, Jie Xin Lim, Olexander Chernyshenko","doi":"10.1177/10944281261421529","DOIUrl":"https://doi.org/10.1177/10944281261421529","url":null,"abstract":"Dominance and unfolding response processes describe two ways in which individuals may respond to rating scale items. The dominance process assumes a monotonic relationship between a latent trait and the probability of endorsement and is typically modeled using a linear factor model within structural equation modeling (SEM). In contrast, the unfolding process assumes single-peaked response functions, with endorsement most likely when item and person locations are close on the latent continuum. Fitting unfolding models usually requires specialized software, which limits their integration with SEM. In this article, we proposed the ordered categorical response unfolding model (OCRUM), which can be estimated in Mplus. We illustrated its use with two empirical datasets and found that item and person locations were comparable to those obtained from the generalized graded unfolding model (GGUM). We also conducted Monte Carlo simulations to examine parameter recovery under varying sample sizes, test lengths, and response formats. 
Finally, we demonstrated that OCRUM can serve as the measurement component of a general structural equation model, enabling dominance and unfolding response processes to be represented within a single SEM framework.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"64 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2026-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147635727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
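The dominance/unfolding contrast in the abstract above can be made concrete with two toy response functions — a monotonic logistic curve versus a single-peaked distance kernel. These are simplified illustrations of the two process shapes, not the OCRUM or GGUM likelihoods themselves:

```python
import math

def dominance_prob(theta, b, a=1.0):
    """Dominance process: endorsement probability rises monotonically in the
    latent trait theta; a simple 2PL-style logistic with item location b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def unfolding_prob(theta, delta, a=1.0):
    """Unfolding process: endorsement is most likely when the person location
    theta is close to the item location delta, falling off on both sides.
    A squared-distance kernel chosen only to show the single-peaked shape."""
    return math.exp(-a * (theta - delta) ** 2)

# Dominance: higher theta always means higher endorsement probability.
# Unfolding: moving past the item location *reduces* endorsement probability.
```

Fitting such single-peaked functions is what usually requires specialized software, which is the gap OCRUM addresses by recasting the unfolding model in SEM terms.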
{"title":"Applying Machine Learning and Natural Language Processing Methods to Support Taxonomy Development and Maintenance","authors":"Nathaniel M. Voss, Jiayi Liu, Saron Demeke, Martin C. Yu, Harrison J. Kell, Brian Prost, Dan J. Putka","doi":"10.1177/10944281261434975","DOIUrl":"https://doi.org/10.1177/10944281261434975","url":null,"abstract":"Taxonomies provide a systematic way to organize phenomena and have various practical and theoretical benefits for organizational researchers and practitioners. While taxonomy development and maintenance is often a burdensome process (e.g., time-consuming, costly, and prone to judgmental error), advances in natural language processing (NLP) have the potential to streamline this process. In this study, we employed various evaluation metrics (e.g., cosine similarity) to investigate how machine learning (ML) methods and large language models (LLMs) can automate taxonomy development and maintenance. We examined two embedding models, six clustering algorithms, and three generative LLMs (for creating cluster labels) to construct taxonomies and compared their alignment with four established taxonomies (CABIN, IPIP-NEO-120, ATAF, and O*NET). The confirmatory taxonomic method we examined resulted in effective clustering (i.e., similar text statements were consistently grouped), frequently yielded structures similar to the original taxonomies for ATAF, IPIP-NEO-120, and CABIN (with O*NET being more variable), and resulted in extremely efficient taxonomy title generation. 
These findings can provide researchers with a foundation for how to approach NLP-based taxonomy development and maintenance activities for their own contexts.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"15 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2026-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147518883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
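The pipeline sketched in the abstract above — embed text statements, cluster them, then label the clusters — can be illustrated in miniature. The sketch below uses bag-of-words vectors and a greedy similarity-threshold grouping as stand-ins for the embedding models and clustering algorithms the study actually compares; the statements and threshold are invented for illustration:

```python
from collections import Counter
import math

def vectorize(text):
    """Toy stand-in for an embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def greedy_cluster(statements, threshold=0.5):
    """Toy stand-in for a clustering algorithm: assign each statement to the
    first cluster whose seed it resembles, else start a new cluster."""
    clusters = []  # list of (seed_vector, member_statements)
    for s in statements:
        v = vectorize(s)
        for seed, members in clusters:
            if cosine(v, seed) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((v, [s]))
    return [members for _, members in clusters]

# Hypothetical taxonomy items.
items = [
    "enjoys solving math problems",
    "likes solving hard math problems",
    "prefers working alone quietly",
]
groups = greedy_cluster(items, threshold=0.4)
```

In the study's actual pipeline, a generative LLM would then propose a label for each resulting group (the "taxonomy title generation" step).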
{"title":"Unpacking the Validity of Open-Ended Personality Assessments Using Fine-Tuned Large Language Models","authors":"Andrew B. Speer, Angie Y. Delacruz, Takudzwa A. Chawota, James Perrotta, Cort W. Rudolph","doi":"10.1177/10944281251413746","DOIUrl":"https://doi.org/10.1177/10944281251413746","url":null,"abstract":"Alternative approaches to personality measurement, such as open-ended narrative-based assessments, have potential advantages for organizational research and practice. In this research, we investigate factors that affect valid application of natural language processing (NLP) for scoring open-ended personality assessments and when, how, and why such assessments capture personality-related variance. Using a large sample of responses to open-ended assessments, convergence between NLP scores and self-report target scores increased as the degree of customization and the sophistication of the underlying model increased, with the worst psychometric performance occurring for zero-shot large language model (LLM) scores and the best for fine-tuned LLM scores. However, all scoring methods exhibited evidence of validity. Additionally, when trained to predict direct evaluations of the narrative responses, correlations with target scores were large ( <jats:italic toggle=\"yes\">M</jats:italic> = .83). NLP scores also exhibited discriminant and criterion-related validity evidence. However, validity was contingent upon the methodological rigor employed in developing writing prompts. 
Prompts designed to elicit trait-relevant information outperformed generic prompts, and this occurred because trait-specific prompts increased the amount of trait-relevant information (i.e., narrative units), which was associated with enhanced convergence with target scores.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"15 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2026-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147393412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I Gotta Feeling: Advancing Sentiment Analysis in Organizational Science","authors":"Imran Kadolkar, Divya V Doshi, Scott Tonidandel, Jose M Cortina","doi":"10.1177/10944281251408073","DOIUrl":"https://doi.org/10.1177/10944281251408073","url":null,"abstract":"Sentiment analysis (SA) has grown considerably in organizational science research over the past two decades, particularly in the last few years. While enthusiasm for integrating advanced natural language processing algorithms is encouraging, authors are not reaping the benefits of such tools fully. Our systematic review of SA application in the organizational sciences suggests that authors struggle to appreciate all of the decisions that are inherent to SA, the choices that are available at each decision point, and the consequences of each choice. To address this gap, we use a working example to illustrate four critical decision points authors confront when conducting SA, and the subsequent impact different choices can have on one's conclusion. Decision points include selecting the SA method, computing a sentiment score, preprocessing the data, and using an appropriate level of analysis. 
We conclude with a framework outlining five dimensions (e.g., accuracy, interpretability, computational cost) to guide the selection of an SA approach based on study goals and needs, along with seven recommendations to authors wishing to apply SA.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"250 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2026-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147358803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
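Two of the decision points named in the abstract above — how to compute a sentiment score and at what level of analysis to aggregate — can be shown with a toy dictionary-based scorer. The lexicon, the sum-of-hits scoring rule, and the example review are all invented for illustration, not the authors' working example:

```python
import re

# Toy sentiment lexicon -- illustrative only, not a validated dictionary.
LEXICON = {"great": 1, "good": 1, "happy": 1, "bad": -1, "awful": -1, "slow": -1}

def sentence_scores(text):
    """Sentence-level scores: sum of lexicon hits per sentence. Summing (vs.
    averaging, or taking a positive/negative ratio) is itself one of the
    computation choices authors must justify."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [sum(LEXICON.get(w, 0) for w in s.lower().split()) for s in sentences]

def document_score(text):
    """Document-level aggregate: mean of sentence scores."""
    scores = sentence_scores(text)
    return sum(scores) / len(scores) if scores else 0.0

review = "The onboarding was great. The pay is good. The commute is awful."
```

Note how the level of analysis changes the conclusion: sentence-level scores show mixed sentiment, while the document-level mean reports a single mildly positive number.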
{"title":"Application of Prototype Analysis to Organizational Research: A Critical Methodological Review","authors":"Sandra Kiffin-Petersen, Sharon Purchase, Doina Olaru","doi":"10.1177/10944281251399210","DOIUrl":"https://doi.org/10.1177/10944281251399210","url":null,"abstract":"Prototypes—internalized knowledge structures of the most typical or characteristic features of a concept—are important because they influence cognitive processing. Yet prototype analysis, the method used to examine prototypes, appears relatively underutilized in organizational research. To introduce prototype analysis to a wider audience of organizational scholars, we conducted a critical methodological literature review following a six-step procedure. Seventy-three prototype analyses published in 35 journals were categorized and their content analyzed. A prototype analysis typically includes a sequence of independent studies conducted over two stages, recently referred to as the standard procedure. Our review makes several contributions, including development of a taxonomy of prototype analysis applications, clarification of the standard procedure of a prototype analysis and possible variations, and suggestions for organizational research. Benefits of undertaking a prototype analysis include improved understanding of abstract workplace concepts that are difficult to measure directly, the ability to compare cross-cultural prototypes, and an approach for investigating the issue of construct redundancy. 
We conclude with best-practice recommendations, implications for organizational scholarship, methodological limitations, and future research suggestions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"22 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145812772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding Relative Differences with Magnitude-Based Hypotheses: A Methodological Conceptualization and Data Illustration","authors":"Dane P. Blevins, David J. Skandera, Roberto Ragozzino","doi":"10.1177/10944281251377139","DOIUrl":"https://doi.org/10.1177/10944281251377139","url":null,"abstract":"Our paper provides a conceptualization of magnitude-based hypotheses (MBHs). We define an MBH as a specific type of hypothesis that tests for relative differences in the independent impact (i.e., effect size difference) of at least two explanatory variables on a given outcome. We reviewed 1,715 articles across eight leading management journals and found that nearly 10% (165) of articles feature an MBH, employing 41 distinct methodological approaches to test them. However, approximately 40% of these papers show missteps in the post-estimation process required to evaluate MBHs. To address this issue, we offer a conceptual framework, an empirical illustration using Bayesian analysis and frequentist statistics, and a decision-tree guideline that outlines key steps for evaluating MBHs. Overall, we contribute a framework for applying MBHs, demonstrating how they can shift theoretical inquiry from binary questions of whether an effect exists, to more comparative questions about how much a construct matters,compared to what, and under which conditions.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"50 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145241869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative Artificial Intelligence in Qualitative Data Analysis: Analyzing—Or Just Chatting?","authors":"Duc Cuong Nguyen, Catherine Welch","doi":"10.1177/10944281251377154","DOIUrl":"https://doi.org/10.1177/10944281251377154","url":null,"abstract":"Researchers, engineers, and entrepreneurs are enthusiastically exploring and promoting ways to apply generative artificial intelligence (GenAI) tools to qualitative data analysis. From promises of automated coding and thematic analysis to functioning as a virtual research assistant that supports researchers in diverse interpretive and analytical tasks, the potential applications of GenAI in qualitative research appear vast. In this paper, we take a step back and ask what sort of technological artifact is GenAI and evaluate whether it is appropriate for qualitative data analysis. We provide an accessible, technologically informed analysis of GenAI, specifically large language models (LLMs), and put to the test the claimed transformative potential of using GenAI in qualitative data analysis. Our evaluation illustrates significant shortcomings that, if the technology is adopted uncritically by management researchers, will introduce unacceptable epistemic risks. We explore these epistemic risks and emphasize that the essence of qualitative data analysis lies in the interpretation of meaning, an inherently human capability.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"25 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unleashing the Creative Potential of Research Tensions: Toward a Paradox Approach to Methods","authors":"Stephanie Schrage, Constantine Andriopoulos, Marianne W. Lewis, Wendy K. Smith","doi":"10.1177/10944281251346804","DOIUrl":"https://doi.org/10.1177/10944281251346804","url":null,"abstract":"Research is a paradoxical process. Scholars confront conflicting yet interwoven pressures, considering methodologies that engage complexity and simplicity, induction and deduction, novelty and continuity, and more. Paradox theory offers insights that embrace such tensions, providing empirical examples that harness creative friction to foster more novel and useful, rigorous, and relevant research. Leveraging this lens, we open a conversation on research tensions, developing the foundations of a Paradox Approach to Methods applicable to organization studies more broadly. To do so, we first identify tensions raised at six methodological decision points: research scope, construct definition, underlying assumptions, data collection, data analysis, and interpretation. Second, we build on paradox theory to identify navigating practices: accepting, differentiating, integrating, and knotting. By doing so, we contribute to organizational research broadly by embracing methods of tensions to advance scholarly insight.","PeriodicalId":19689,"journal":{"name":"Organizational Research Methods","volume":"21 1","pages":""},"PeriodicalIF":9.5,"publicationDate":"2025-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144578317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}