{"title":"Building a Bigger Toolbox: The Construct Validity of Existing and Proposed Measures of Careless Responding to Cognitive Ability Tests","authors":"Mark C. Ramsey, N. Bowling","doi":"10.1177/10944281231223127","DOIUrl":"https://doi.org/10.1177/10944281231223127","url":null,"abstract":"Employers commonly use cognitive ability tests in the personnel selection process. Although ability tests are excellent predictors of job performance, their validity may be compromised when test takers engage in careless responding. It is thus important for researchers to have access to effective careless responding measures, which allow researchers to screen for careless responding and to evaluate efforts to prevent careless responding. Previous research has primarily used two types of measures to assess careless responding to ability tests—response time and self-reported carelessness. In the current paper, we expand the careless responding assessment toolbox by examining the construct validity of four additional measures: (1) infrequency, (2) instructed-response, (3) long-string, and (4) intra-individual response variability (IRV) indices. Expanding the available set of careless responding indices is important because the strengths of new indices may offset the weaknesses of existing indices and would allow researchers to better assess heterogeneous careless response behaviors. Across three datasets ( N = 1,193), we found strong support for the validity of the response-time and infrequency indices, and moderate support for the validity of the instructed-response and IRV indices.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"161 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139839392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed-Keying or Desirability-Matching in the Construction of Forced-Choice Measures? An Empirical Investigation and Practical Recommendations","authors":"Mengtong Li, Bo Zhang, Lingyue Li, Tianjun Sun, Anna Brown","doi":"10.1177/10944281241229784","DOIUrl":"https://doi.org/10.1177/10944281241229784","url":null,"abstract":"Forced-choice (FC) measures are becoming increasingly popular as an alternative to single-statement (SS) measures. However, to ensure the practical usefulness of an FC measure, it is crucial to address the tension between psychometric properties and faking resistance by balancing mixed keying and social desirability matching. It is currently unknown from an empirical perspective whether the two design criteria can be reconciled, and how they impact respondent reactions. By conducting a two-wave experimental design, we constructed four FC measures with varying degrees of mixed-keying and social desirability matching from the same statement pool and investigated their differences in terms of psychometric properties, faking resistance, and respondent reactions. Results showed that all FC measures demonstrated comparable reliability and induced similar respondent reactions. Forced-choice measures with stricter social desirability matching were more faking resistant, while FC measures with more mixed keyed blocks had higher convergent validity with the SS measure and displayed similar discriminant and criterion-related validity profiles as the SS benchmark. More importantly, we found that it is possible to strike a balance between social desirability matching and mixed keying, such that FC measures can have adequate psychometric properties and faking resistance. A seven-step recommendation and a tutorial based on the autoFC R package were provided to help readers construct their own FC measures.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"43 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139779085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Confounding Effects of Insufficient Effort Responding Across Survey Sources: The Case of Personality Predicting Performance","authors":"Jason L. Huang, N. Bowling, Benjamin D. McLarty, Donald H. Kluemper, Zhonghao Wang","doi":"10.1177/10944281231212570","DOIUrl":"https://doi.org/10.1177/10944281231212570","url":null,"abstract":"Insufficient effort responding (IER) to surveys, which occurs when participants provide responses in a haphazard, careless, or random fashion, has been identified as a threat to data quality in survey research because it can inflate observed relationships between self-reported measures. Building on this discovery, we propose two mechanisms that lead to IER exerting an unexpected confounding effect between self-reported and informant-rated measures. First, IER can contaminate self-report measures when the means of attentive and inattentive responses differ. Second, IER may share variance with some informant-rated measures, particularly supervisor ratings of participants’ job performance. These two mechanisms operating in tandem would suggest that IER can act as a “third variable” that inflates observed relationships between self-reported predictor scores and informant-rated criteria. We tested this possibility using a multisource dataset ( N = 398) that included incumbent self-reports of five-factor model personality traits and supervisor-ratings of three job performance dimensions—task performance, organizational citizenship behavior (OCB), and counterproductive work behavior (CWB). We observed that the strength of the relationships between self-reported personality traits and supervisor-rated performance significantly decreased after IER was controlled: Across the five personality traits, the average reduction of magnitude from the zero-order to partial correlations was |.08| for task performance, |.07| for OCB, and |.14| for CWB. Because organizational practices are often driven by research linking incumbent-reported predictors to supervisor-rated criteria (e.g., validation of predictors used in various organizational contexts), our findings have important implications for research and practice.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"119 30","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139605438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Nuisance of Control Variables in Causal Regression Analysis","authors":"Paul Hünermund, Beyers Louw","doi":"10.1177/10944281231219274","DOIUrl":"https://doi.org/10.1177/10944281231219274","url":null,"abstract":"Control variables are included in regression analyses to estimate the causal effect of a treatment on an outcome. In this article, we argue that the estimated effect sizes of controls are unlikely to have a causal interpretation themselves, though. This is because even valid controls are possibly endogenous and represent a combination of several different causal mechanisms operating jointly on the outcome, which is hard to interpret theoretically. Therefore, we recommend refraining from interpreting the marginal effects of controls and focusing on the main variables of interest, for which a plausible identification argument can be established. To prevent erroneous managerial or policy implications, coefficients of control variables should be clearly marked as not having a causal interpretation or omitted from regression tables altogether. Moreover, we advise against using control variable estimates for subsequent theory building and meta-analyses.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"33 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139157393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To Omit or to Include? Integrating the Frugal and Prolific Perspectives on Control Variable Use","authors":"Fabian Mändli, Mikko Rönkkö","doi":"10.1177/10944281231221703","DOIUrl":"https://doi.org/10.1177/10944281231221703","url":null,"abstract":"Over the recent years, two perspectives on control variable use have emerged in management research: the first originates largely from within the management discipline and argues to remain frugal, to use control variables as sparsely as possible. The second is rooted in econometrics textbooks and argues to be prolific, to be generous in control variable inclusion to not risk omitted variable bias, and because including irrelevant exogenous variables has little consequences for regression results. We present two reviews showing that the frugal perspective is becoming increasingly popular in research practice, while the prolific perspective has received little explicit attention. We summarize both perspectives’ key arguments and test their specific recommendations in three Monte Carlo simulations. Our results challenge the two recommendations of the frugal perspective of “omitting impotent controls” and “avoiding proxies” but show the detrimental effects of including endogenous controls (bad controls). We recommend considering the control variable selection problem from the perspective of endogeneity and selecting controls based on theory using causal graphs instead of focusing on the many or few questions.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"24 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139166153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resisting the Objectification of Qualitative Research: The Unsilencing of Context, Researchers, and Noninterview Data","authors":"Hans Hansen, S. Elias, Anna Stevenson, Anne D. Smith, B. Alexander, Marcos Barros","doi":"10.1177/10944281231215119","DOIUrl":"https://doi.org/10.1177/10944281231215119","url":null,"abstract":"Based on an analysis of qualitative research papers published between 2019 and 2021 in four top-tier management journals, we outline three interrelated silences that play a role in the objectification of qualitative research: silencing of noninterview data, silencing the researcher, and silencing context. Our analysis unpacks six silencing moves: creating a hierarchy of data, marginalizing noninterview data, downplaying researcher subjectivity, weakening the value of researcher interpretation, thin description, and backgrounding context. We suggest how researchers might resist the objectification of qualitative research and regain its original promise in developing more impactful and interesting theories: noninterview data can be unsilenced by democratizing data sources and utilizing nonverbal data, the researcher can be unsilenced by leveraging engagement and crafting interpretations, and finally, context can be unsilenced by foregrounding context as an interpretative lens and contextualizing the researcher, the researched, and the research project. Overall, we contribute to current understandings of the objectification of qualitative research by both unpacking particular moves that play a role in it and delineating specific practices that help researchers embrace subjectivity and engage in inspired theorizing.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"18 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139257260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real Research with Fake Data: A Tutorial on Conducting Computer Simulation for Research and Teaching","authors":"Michael C. Sturman","doi":"10.1177/10944281231215024","DOIUrl":"https://doi.org/10.1177/10944281231215024","url":null,"abstract":"Although many have recognized the value of computer simulations as a research tool, instruction on building computer simulations is absent from most doctoral education and research methods texts. This paper provides an introductory tutorial on computer simulations for research and teaching. It shows the techniques needed to create data based on desired relationships among the variables or based on a specified model. The paper also introduces techniques to make data more “interesting,” including adding skew or kurtosis, creating multi-item measures with unreliability, making data multilevel, and incorporating mediated, moderated, and nonlinear relationships. The methods described in the paper are illustrated using Excel, Mplus, and R; furthermore, the functionality of using ChatGPT to create code in R is explored and compared to the paper's illustrative examples. Supplemental files are provided that illustrate each example used in the paper as well as several more advanced techniques mentioned in the paper. The goal of this paper is not to help inform experts on simulation; rather, it is to open up to all readers the powerful potential of this research and teaching tool.","PeriodicalId":507528,"journal":{"name":"Organizational Research Methods","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139255541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}