Who Is Satisfied With Effort?
Georgia Clay, C. Dumitrescu, Janina Habenicht, Isabel Kmiecik, Marzia Musetti, I. Domachowska
European Journal of Psychological Assessment, November 1, 2022. DOI: https://doi.org/10.1027/1015-5759/a000742
Abstract. The effort required to obtain a reward may influence how satisfying that reward feels. Because people differ in their beliefs about the availability of the willpower resources needed for effortful action, we investigated how willpower beliefs affect the perception of effort and satisfaction with a reward. We hypothesized that people with limited willpower beliefs (i.e., who believe that exerting effort depletes their inner resources) would perceive cognitive tasks as more effortful and would be less satisfied with the subsequent reward than those with non-limited beliefs (i.e., who believe that exerting effort is invigorating rather than depleting). We tested this hypothesis by manipulating effort through different difficulty levels of the N-back task and measuring participants' perceived effort expenditure and subjective satisfaction with a reward as a function of their willpower beliefs. In accordance with the predictions, those with limited willpower beliefs perceived the task as more effortful than those with non-limited willpower beliefs. Furthermore, when asked to rate their satisfaction with the reward gained for the task, limited believers reported lower satisfaction than non-limited believers. These findings suggest that people factor their willpower capacities into effort-satisfaction calculations. Results are discussed within the context of other models of effort, and practical implications of the findings are suggested.
Measuring Growth Mindset
Beatrice Rammstedt, D. Grüning, Clemens M. Lechner
European Journal of Psychological Assessment, September 29, 2022. DOI: https://doi.org/10.1027/1015-5759/a000735
Abstract. A growth mindset is the belief that personal characteristics, specifically intellectual ability, are malleable and can be developed by investing time and effort. Numerous studies have investigated the associations between a growth mindset and academic achievement, and large intervention programs have been established to train adolescents to develop a stronger growth mindset. However, methodological research on the adequacy of the measures used to assess a growth mindset is scarce. In our study, we conducted one of the first comprehensive assessments of the psychometric properties of Dweck's widely used three-item Growth Mindset Scale in two samples (adolescents aged 14–19 years and adults aged 20–64 years) and tested the comparability (i.e., measurement invariance) of the scale across these age groups. Furthermore, using the same two samples, we identified and validated a single-item measure for assessing growth mindset in settings with severe time constraints. Results reveal that both the three-item and the single-item scales have acceptable psychometric properties regarding reliability, comparability, and validity. However, the results did not support some of the central tenets of mindset theory, such as a positive link between a growth mindset and goal regulation and achievement, calling for future research on the criterion validity of a growth mindset.
{"title":"Validation of the Dutch Version of the Plymouth Sensory Imagery Questionnaire","authors":"Mandy Woelk, M. Hagenaars, J. Krans","doi":"10.1027/1015-5759/a000729","DOIUrl":"https://doi.org/10.1027/1015-5759/a000729","url":null,"abstract":"Abstract. Mental imagery plays an important role in the onset and maintenance of psychological disorders as well as their treatment. Therefore, a reliable and valid measure of mental imagery is essential. Andrade and colleagues (2014) developed the Plymouth Sensory Imagery Questionnaire (PsiQ), which contains 35 items (long version) or 21 items (shortened version) measuring the vividness of mental imagery in seven different modalities: vision, sound, smell, taste, touch, bodily sensation, and emotion. Andrade et al. reported a seven-factor structure corresponding to the different modalities for both versions rather than a one-factor model measuring general mental imagery. The current paper reports on the translation and validation of the Dutch version of the PsiQ (PsiQ-NL-35 and PsiQ-NL-21). In two independent samples (student and mixed), the PsiQ-NL-35 showed excellent internal consistency, adequate model fit for the seven-factor model, and a poor fit for the one-factor model. Test-retest reliability (Study 1, student sample) was good. Construct validity (Study 2, mixed sample) was adequate. The PsiQ-NL-21 also showed excellent internal consistency, good test-retest reliability, adequate seven-factor model fit, and adequate construct validity. Measurement invariance between the Dutch and the English version was found, implying that both versions measure the same construct.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42846769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Impact of Different Methods to Correct for Response Styles on the External Validity of Self-Reports","authors":"A. Scharl, Timo Gnambs","doi":"10.1027/1015-5759/a000731","DOIUrl":"https://doi.org/10.1027/1015-5759/a000731","url":null,"abstract":"Abstract. Response styles (RSs) such as acquiescence represent systematic respondent behaviors in self-report questionnaires beyond the actual item content. They distort trait estimates and contribute to measurement bias in questionnaire-based research. Although various approaches were proposed to correct the influence of RSs, little is known about their relative performance. Because different correction methods formalize the latent traits differently, it is unclear how model choice affects the external validity of the corrected measures. Therefore, the present study on N = 1,000 Dutch respondents investigated the impact of correcting responses to measures of self-esteem and the need for cognition using structural equation models with structured residuals, multidimensional generalized partial credit models, and multinomial processing trees. The study considered three RSs: extreme, midpoint, and acquiescence RS. The results showed homogeneous correlation patterns among the modeled latent and external variables, especially if they were not themselves subject to RSs. In that case, the IRT-based models, including an uncorrected model, still yielded consistent results. Nevertheless, the strength of the effect sizes showed variation.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41438191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Receptivity to Instructional Feedback","authors":"A. Lipnevich, Carolina Lopera-Oquendo","doi":"10.1027/1015-5759/a000733","DOIUrl":"https://doi.org/10.1027/1015-5759/a000733","url":null,"abstract":"Abstract. The purpose of this study was to report validity evidence for the instrument intended to measure receptivity to instructional feedback in a sample of secondary school students from Singapore ( N = 314). We tested a nested hierarchy of hypotheses for addressing the cross-group (i.e., gender) invariance and compared means on the receptivity to feedback subscales between gender groups. We also examined whether receptivity to feedback predicted student grades. The four-factor hypothesized model comprising experiential attitudes, instrumental attitudes, cognitive engagement, and behavioral engagement with feedback had a good model fit. Multi-group confirmatory factor analysis supported configural, metric, partial scalar, partial strict as well as variance and covariance invariance across gender groups. After controlling for gender, cognitive engagement, and experiential attitudes predicted increments in grades, suggesting evidence for discriminant validity among the receptivity factors as well as their relevance for the prediction of meaningful educational outcomes.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46036333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Emotional Intelligence as a Personality State","authors":"Leonidas A. Zampetakis, E. M. Mitropoulou","doi":"10.1027/1015-5759/a000734","DOIUrl":"https://doi.org/10.1027/1015-5759/a000734","url":null,"abstract":"Abstract. Contemporary research has begun to explore the notion that emotional intelligence (EI) has an important state component in addition to the trait component, as represented in the whole trait theory. This implies that state EI (or enacted EI) has similar cognitive, affective, and motivational contents as its corresponding trait. The question, however, of whether a trait EI construct means the same across the individual (trait) and state levels of analysis has not been empirically investigated. To address this gap, the present study examines the assessment of enacted EI, using the full version of the Wong and Law Emotional Intelligence Scale (WLEIS) on both between-person and within-person levels of analysis. Participants were 493 Greek employees who completed the WLEIS for 5 consecutive workdays. Multilevel confirmatory factor analyses confirmed that the original four-factor multilevel model appeared to best fit the data. Multilevel measurement invariance analysis supported the equivalence of the measure across different levels of analysis. In conclusion, the WLEIS is a configural cluster construct, believed to be a valuable and reliable tool for assessing enacted EI within the workplace. Implications for future research on enacted EI are discussed.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41485099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Delaying Academic Tasks and Feeling Bad About It
Julia Bobe, Theresa Schnettler, Anne Scheunemann, Stefan Fries, L. Bäulke, Daniel O. Thies, M. Dresel, D. Leutner, Joachim Wirth, Katrin B. Klingsieck, C. Grunschel
European Journal of Psychological Assessment, September 13, 2022. DOI: https://doi.org/10.1027/1015-5759/a000728
Abstract. Procrastination is the irrational delay of an intended task and is common among students. A delay qualifies as procrastination only when it is voluntary, when the action was intended but not implemented, and when the delay is accompanied by subjective discomfort. Established procrastination scales cover mainly the behavioral aspect and have neglected the emotional aspect; this gap in construct coverage may foster misconceptions of procrastination. Accordingly, we developed and validated the Behavioral and Emotional Academic Procrastination Scale (BEPS), which covers all aspects of the definition of procrastination. The 6-item self-report scale of academic procrastination was tested in three studies. Study 1 (N = 239) evaluated the psychometric qualities of the BEPS, indicating good item characteristics and internal consistency. Study 2 (N = 1,441) used confirmatory factor analysis and revealed two correlated factors: one covering the behavioral aspect and the other reflecting the emotional aspect. Measurement invariance was supported by longitudinal and multigroup confirmatory factor analyses. Study 3 (N = 234) provided evidence for the scale's convergent validity through correlations with established procrastination scales, self-efficacy, and neuroticism. The BEPS thus economically operationalizes all defining characteristics of academic procrastination and appears to be a reliable and valid self-report measure.
Evaluating Gender Differences in Problematic Smartphone Use
Laura Salerno, Analyn Alquitran, Noor Alibrahim, G. Lo Coco, M. Di Blasi, C. Giordano
European Journal of Psychological Assessment, September 13, 2022. DOI: https://doi.org/10.1027/1015-5759/a000730
Abstract. The Smartphone Addiction Inventory (SPAI) is widely used to measure problematic smartphone use (PSU). Although the SPAI has been translated and validated in different countries, its measurement invariance across gender has received little research attention. This study examined whether men and women interpret the Italian version of the SPAI (SPAI-I) similarly and, consequently, whether the gender differences in SPAI scores reported in previous studies reflect true differences rather than differences in measurement. Six hundred nineteen Italian young adults (Mage = 22.02 ± 2.63; 55.7% women) took part in the study and completed the SPAI-I. Multigroup CFA was applied to test measurement invariance across gender, and item parameter invariance was investigated with the item response theory (IRT) differential item functioning (DIF) method for multidimensional models. Evidence of measurement invariance across gender was found. Only one of the 24 SPAI-I items (item 14, "The idea of using smartphone comes as the first thought on mind when waking up each morning") showed DIF with a large effect size. Gender-related differences found with the SPAI-I therefore reflect true differences in smartphone overuse rather than specific characteristics of the measure.
{"title":"Meta-Analysis of Factor Analyses of the General Health Questionnaire – Short Forms GHQ-28 and GHQ-30","authors":"Alan B. Shafer","doi":"10.1027/1015-5759/a000727","DOIUrl":"https://doi.org/10.1027/1015-5759/a000727","url":null,"abstract":"Abstract. Two meta-analyses of exploratory factor analyses of the General Health Questionnaire short forms, GHQ-28 ( N = 26,848, k = 40) and GHQ-30 ( N = 43,151 k = 25), were conducted to determine the consistent factors found in each test and any common factors across them. Five databases (PsycINFO, PubMed, BASE, Semantic, and Google Scholar) were searched in 2021. Reproduced correlations derived from the original studies’ factor matrices and aggregated across studies were factor analyzed for the meta-analyses. For the GHQ-28, the standard four subscales of somatic, anxiety, social dysfunction, and depression were clearly identified and strongly supported by a four-factor structure. For the GHQ-30, a four-factor solution identified factors of anxiety, depression, social dysfunction, and social satisfaction, the first three factors shared a number of items with the same scales found in the GHQ-28. These shared factors appear similar across tests and should help bridge research using the GHQ-30 and the GHQ-28. Confirmatory factor analyses supported the four-factor models in both tests. The four standard subscales of GHQ-28 were strongly supported and can be recommended. The three similar factors in the GHQ-30, as well as the social satisfaction factor, appear reasonable to use.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46321258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can Psychological Assessment Contribute to a Better World?","authors":"D. Gallardo-Pujol, M. Ziegler, D. Iliescu","doi":"10.1027/1015-5759/a000739","DOIUrl":"https://doi.org/10.1027/1015-5759/a000739","url":null,"abstract":"","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46432901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}