Evaluating Gender Differences in Problematic Smartphone Use
Laura Salerno, Analyn Alquitran, Noor Alibrahim, G. Lo Coco, M. Di Blasi, C. Giordano
European Journal of Psychological Assessment (published online 2022-09-13). https://doi.org/10.1027/1015-5759/a000730
Abstract: The Smartphone Addiction Inventory (SPAI) is widely used to measure problematic smartphone use (PSU). Although the SPAI has been translated and validated in different countries, its measurement invariance across gender has received little research attention. This study examined whether men and women interpret the Italian version of the SPAI (SPAI-I) similarly and, consequently, whether the gender differences in SPAI scores reported in previous studies reflect true differences rather than differences in measurement. Six hundred nineteen Italian young adults (Mage = 22.02 ± 2.63; 55.7% women) took part in the study and completed the SPAI-I. Multigroup CFA was applied to test measurement invariance across gender, and item parameter invariance was investigated with an item response theory (IRT) differential item functioning (DIF) method for multidimensional models. Evidence of measurement invariance across gender was found. Only one of the 24 SPAI-I items (item 14, "The idea of using smartphone comes as the first thought on mind when waking up each morning") showed DIF with a large effect size. Gender-related differences found with the SPAI-I therefore reflect true differences in smartphone overuse rather than characteristics of the measure.

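The core DIF idea — an item behaving differently across groups at equal trait levels — can be made concrete with a simple regression-based screen. This is a hedged sketch, not the multidimensional IRT DIF method the study used; all data and effect sizes below are invented for illustration.

```python
import numpy as np

def uniform_dif_effect(item, total, group):
    """Gain in R^2 from adding a group indicator when predicting an item
    score from the matching variable (trait/total score). A linear-regression
    stand-in for DIF screening, shown only to illustrate the logic."""
    def r2(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1.0 - (y - X @ beta).var() / y.var()
    ones = np.ones_like(total)
    X0 = np.column_stack([ones, total])            # trait only
    X1 = np.column_stack([ones, total, group])     # trait + group
    return r2(X1, item) - r2(X0, item)

# Simulated example (all numbers invented): one item answered the same way
# by both groups, one item shifted for group 1 at equal trait levels.
rng = np.random.default_rng(1)
n = 500
trait = rng.normal(size=n)
group = rng.integers(0, 2, size=n).astype(float)
fair_item = trait + rng.normal(scale=0.5, size=n)
dif_item = trait + 0.8 * group + rng.normal(scale=0.5, size=n)
```

An item flagged this way predicts different scores for men and women with the same underlying trait level — exactly what item 14 showed in the study.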
Meta-Analysis of Factor Analyses of the General Health Questionnaire – Short Forms GHQ-28 and GHQ-30
Alan B. Shafer
European Journal of Psychological Assessment (published online 2022-09-13). https://doi.org/10.1027/1015-5759/a000727
Abstract: Two meta-analyses of exploratory factor analyses of the General Health Questionnaire short forms, the GHQ-28 (N = 26,848, k = 40) and the GHQ-30 (N = 43,151, k = 25), were conducted to determine the consistent factors found in each test and any common factors across them. Five databases (PsycINFO, PubMed, BASE, Semantic Scholar, and Google Scholar) were searched in 2021. Reproduced correlations, derived from the original studies' factor matrices and aggregated across studies, were factor analyzed for the meta-analyses. For the GHQ-28, the standard four subscales of somatic, anxiety, social dysfunction, and depression were clearly identified and strongly supported by a four-factor structure. For the GHQ-30, a four-factor solution identified factors of anxiety, depression, social dysfunction, and social satisfaction; the first three shared a number of items with the same scales found in the GHQ-28. These shared factors appear similar across tests and should help bridge research using the GHQ-30 and the GHQ-28. Confirmatory factor analyses supported the four-factor models in both tests. The four standard subscales of the GHQ-28 were strongly supported and can be recommended. The three similar factors in the GHQ-30, as well as the social satisfaction factor, appear reasonable to use.

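The "reproduced correlations" in this abstract come from the standard common-factor identity: a loading matrix L (and factor correlation matrix Phi) imply an item correlation matrix R = L Phi L', with unit diagonal. A minimal sketch, with made-up loadings:

```python
import numpy as np

def reproduced_correlations(loadings, factor_corr=None):
    """Model-implied ("reproduced") item correlation matrix from a factor
    loading matrix: R = L Phi L' with the diagonal reset to 1. Standard
    common-factor identity; orthogonal factors assumed if factor_corr is None."""
    L = np.asarray(loadings, dtype=float)
    phi = np.eye(L.shape[1]) if factor_corr is None else np.asarray(factor_corr, dtype=float)
    R = L @ phi @ L.T
    np.fill_diagonal(R, 1.0)
    return R

# Two orthogonal factors, four items (loadings invented for illustration):
R = reproduced_correlations([[0.8, 0.0],
                             [0.7, 0.0],
                             [0.0, 0.6],
                             [0.0, 0.5]])
```

Aggregating such reproduced matrices across studies, then factor analyzing the aggregate, is the mechanism the meta-analyses describe.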
Can Psychological Assessment Contribute to a Better World?
D. Gallardo-Pujol, M. Ziegler, D. Iliescu
European Journal of Psychological Assessment (published online 2022-09-01). https://doi.org/10.1027/1015-5759/a000739
(No abstract available.)

How Fair Is My Test?
Denis G. Dumas, Yixiao Dong, Daniel M. McNeish
European Journal of Psychological Assessment (published online 2022-08-09). https://doi.org/10.1027/1015-5759/a000724
Abstract: The degree to which test scores can support justified and fair decisions about demographically diverse participants has been an important aspect of educational and psychological testing for millennia. In the last 30 years, this aspect of measurement has come to be known as consequential validity, and it has sparked scholarly debate as to how responsible psychometricians should be for the fairness of the tests they create and how the field might quantify that fairness and communicate it to applied researchers and other stakeholders of testing programs. Here, we formulate a relatively simple-to-calculate ratio coefficient meant to capture how well the scores from a given test can predict a criterion free from the undue influence of student demographics. We posit three example calculations of this Consequential Validity Ratio (CVR): one where the CVR is quite strong, another where it is more moderate, and a third where it is weak. We provide preliminary suggestions for interpreting the CVR and discuss its utility in instances where new tests are being developed, tests are being adapted to a new population, or the fairness of an established test has become an empirical question.

Workplace Stress in Real Time
L. Menghini, M. Pastore, C. Balducci
European Journal of Psychological Assessment (published online 2022-08-03). https://doi.org/10.1027/1015-5759/a000725
Abstract: Experience sampling methods are increasingly used in workplace stress assessment, yet rarely developed and validated following the available best practices. Here, we developed and evaluated parsimonious measures of momentary stressors (Task Demand and Task Control) and the Italian adaptation of the Multidimensional Mood Questionnaire as an indicator of momentary strain (Negative Valence, Tense Arousal, and Fatigue). Data from 139 full-time office workers who received seven experience sampling questionnaires per day over 3 workdays suggested satisfactory validity (including weak-invariance cross-level isomorphism), level-specific reliability, and sensitivity to change. The scales also showed substantial correlations with retrospective measures of the corresponding or similar constructs and a degree of sensitivity to work sampling categories (type and mean of job task, people involved). Opportunities and recommendations for the investigation and routine assessment of workplace stress are discussed.

How Vivid Is Your Mental Imagery?
D. Jankowska, M. Karwowski
European Journal of Psychological Assessment (published online 2022-07-27). https://doi.org/10.1027/1015-5759/a000721
Abstract: Across five studies (total N > 3,600), we report the psychometric properties of the Polish version of the Vividness of Visual Imagery Questionnaire (VVIQ-2PL). Confirmatory factor analysis confirmed a unidimensional structure of this instrument; measurement invariance concerning participants' gender was established as well. The VVIQ-2PL showed excellent test-retest reliability, high internal consistency, and adequate construct validity. As predicted, art students scored significantly higher in visual mental imagery than the non-artist group. We discuss these findings alongside future research directions and possible modifications of the VVIQ-2PL.

Investigating Measurement Invariance of the Psychological Entitlement Scale – Grandiose-Based and Vulnerable-Based
W. Hart, Joshua T. Lambert, Charlotte Kinrade
European Journal of Psychological Assessment (published online 2022-07-27). https://doi.org/10.1027/1015-5759/a000726
Abstract: Entitlement has attracted interest across various social science disciplines due to its broad connection to selfish decision-making outcomes and mental health. Although unidimensional entitlement scales have been widely used, these scales conflate vulnerable- and grandiose-based entitlement forms. The Psychological Entitlement Scale – Grandiose-Based and Vulnerable-Based (PES-G/V) was recently devised to measure these entitlement forms. Prior work has supported the structure and construct validity of the PES-G/V, but no research has addressed its measurement invariance (MI). Hence, we examined MI in relation to gender, two popular sampling frames in psychology studies (US MTurk participants and US college participants), and age. Results supported scalar MI across levels of each grouping variable. In sum, the structural properties of the PES-G/V appear robust to these group distinctions.

Semantic Spaces Are Not Created Equal – How Should We Weigh Them in the Sequel?
Boris Forthmann, R. Beaty, D. Johnson
European Journal of Psychological Assessment (published online 2022-07-27). https://doi.org/10.1027/1015-5759/a000723
Abstract: Semantic distance scoring provides an attractive alternative to other scoring approaches for responses in creative thinking tasks, and evidence in its support has increased over the last few years. One recent approach proposes combining multiple semantic spaces to better balance the idiosyncratic influences of each space, so that final semantic distance scores for each response are represented by a composite or factor score. However, semantic spaces are not necessarily equally weighted in mean scores, and the use of factor scores requires high levels of factor determinacy (i.e., the correlation between estimates and true factor scores). Hence, in this work, we examined the weighting underlying mean scores, mean scores of standardized variables, factor loadings, weights that maximize reliability, and equally effective weights on common verbal creative thinking tasks. Both empirical and simulated factor determinacy, as well as Gilmer-Feldt's composite reliability, were mostly good to excellent (i.e., > .80) across two task types (Alternate Uses and Creative Word Association), eight samples of data, and all weighting approaches. Person-level validity findings were also highly comparable across weighting approaches. Observed nuances and challenges of the different weightings, and the question of using composites vs. factor scores, are discussed in detail.

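The "mean scores of standardized variables" weighting the abstract compares can be sketched directly: z-standardize each space's distance scores, then take a weighted mean per response. A minimal illustration; the scores and the three "spaces" below are invented, and real applications would use actual semantic-distance outputs.

```python
import numpy as np

def weighted_composite(scores, weights=None):
    """Weighted mean of column-standardized scores: one way to combine
    semantic-distance scores from several semantic spaces into a single
    composite per response. Equal weights reproduce the plain
    mean-of-standardized-variables case."""
    X = np.asarray(scores, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each space
    if weights is None:
        weights = np.ones(X.shape[1])
    w = np.asarray(weights, dtype=float)
    return Z @ (w / w.sum())

# Five responses scored in three hypothetical semantic spaces (invented numbers):
scores = [[0.62, 0.55, 0.71],
          [0.48, 0.43, 0.52],
          [0.75, 0.70, 0.80],
          [0.33, 0.38, 0.41],
          [0.57, 0.52, 0.63]]
composite = weighted_composite(scores)
```

Swapping in loadings or reliability-maximizing weights for `weights` is exactly the design choice the paper interrogates.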
How a Few Inconsistent Respondents Can Confound the Structure of Personality Survey Data
V. Arias, Fernando P. Ponce, A. Martínez-Molina
European Journal of Psychological Assessment (published online 2022-07-21). https://doi.org/10.1027/1015-5759/a000719
Abstract: In survey data, inconsistent responses due to careless/insufficient effort (C/IE) can lead to problems of replicability and validity. However, data cleaning prior to the main analyses is not yet a standard practice. We investigated the effect of C/IE responses on the structure of personality survey data. For this purpose, we analyzed the structure of the Core Self-Evaluations scale (CSE-S), including the detection of aberrant responses in the study design. While the original theoretical model of the CSE-S assumes that the construct is unidimensional (Judge et al., 2003), recent studies have argued for a multidimensional solution (positive CSE and negative CSE). We hypothesized that this multidimensionality is not substantive but a result of the tendency of C/IE data to generate spurious dimensions. We estimated the confirmatory models before and after removing highly inconsistent response vectors in two independent samples (6% and 4.7%). The analysis of the raw samples clearly favored retaining the two-dimensional model. In contrast, the analysis of the clean datasets suggested the retention of a single factor. A mere 6% C/IE response rate showed enough power to confound the results of the factor analysis. This result suggests that the factor structure of positive and negative CSE factors is spurious, resulting from uncontrolled wording variance produced by a limited proportion of highly inconsistent response vectors. We encourage researchers to include screening for inconsistent responses in their research designs.

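The abstract does not specify the screening index used, but one standard C/IE screen is the "longstring" index: the longest run of identical consecutive answers in a response vector, with long runs flagging straight-lining respondents. A minimal sketch of that one option:

```python
def longstring(responses):
    """Longest run of identical consecutive answers in one response vector --
    the classic "longstring" careless/insufficient-effort (C/IE) index.
    Shown only to make the screening idea concrete; the study's own
    detection method is not specified in the abstract."""
    if not responses:
        return 0
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1   # extend or reset the run
        best = max(best, run)
    return best
```

Respondents whose longstring exceeds a cutoff (e.g., chosen from the sample distribution) would be removed before fitting the confirmatory models, mirroring the raw-vs-clean comparison in the study.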
Psychometric Properties and Measurement Invariance of a Short Form of the Unified Multidimensional Calling Scale (UMCS)
Sophie Gerdel, Anna Dalla Rosa, M. Vianello
European Journal of Psychological Assessment (published online 2022-07-15). https://doi.org/10.1027/1015-5759/a000722
Abstract: This paper reports on the development of a unidimensional short scale for measuring career calling (UMCS-7). The scale was developed drawing from the theoretical model behind the Unified Multidimensional Calling Scale (UMCS; Vianello et al., 2018), according to which calling is composed of Passion, Prosociality, Purpose, Pervasiveness, Sacrifice, Transcendent Summons, and Identity. The UMCS-7 integrates classical and modern conceptualizations of career calling and can be used when time constraints prevent using the full UMCS. The UMCS-7 was validated in a sample of Italian workers (N = 1,246) using exploratory and confirmatory factor analysis. A sample of US employees (N = 165) was used to estimate measurement invariance across languages, establishing the equivalence of factor loadings, all but two intercepts, and all error variances. The UMCS-7 demonstrated nearly perfect convergent validity with the UMCS (r = .97), excellent internal consistency (α = .86 in the Italian sample, .87 in the US sample), and satisfactory concurrent validity with job satisfaction, life satisfaction, and turnover intentions.
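The internal-consistency figures reported here (α = .86/.87) are Cronbach's alpha, which is straightforward to compute from an item-score matrix. A minimal sketch; the data passed in would be real UMCS-7 responses, and the parallel-items matrix in the test is invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons x n_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)          # variance of the sum score
    return k / (k - 1) * (1.0 - item_var_sum / total_var)
```

With perfectly parallel items alpha reaches 1.0; values around .86, as for the UMCS-7, indicate high but not redundant inter-item consistency.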