{"title":"Measuring Gelotophobia, Gelotophilia, and Katagelasticism in Italy and Canada Using PhoPhiKat-30","authors":"C. Lau, F. Chiesi, A. Fermani, M. Muzi, Gonzalo del Moral Arroyo, Francesco Bruno, W. Ruch, L. Quilty, D. Saklofske, Carla Canestrari","doi":"10.1027/1015-5759/a000787","DOIUrl":"https://doi.org/10.1027/1015-5759/a000787","url":null,"abstract":"Abstract: The PhoPhiKat-30 is a self-report instrument for describing personality related to laughter and ridicule including gelotophobia, gelotophilia, and katagelasticism. The present study assessed the measurement properties of the newly translated Italian PhoPhiKat-30 across participants in Italy and Canada using multidimensional item response theory. Italian ( N = 326) and Canadian ( N = 1,467) participants completed the Italian and English PhoPhiKat-30, respectively. The parallel analysis supported the three-factor model in Italy. Conditional reliability estimates showed strong precision (> 0.80) of gelotophobia and gelotophilia along the latent continuum (−1.15 < θ < 3.08 and −1.69 < θ < 3.09, respectively). Katagelasticism showed a limited range (0.98 < θ < 2.85) for the latent attribute precisely measured, suggesting that new items that address the low to moderate difficulty of katagelasticism should be added in future studies. Item discrimination parameters varied across Reckase’s multidimensional normal-ogive model (MDISC mean = 0.79). Five items had uniform differential item functioning (DIF; McFadden’s pseudo R2 > .035 or β > .10) when comparing the Italian and English PhoPhiKat-30, with English items showing more agreement at the same level of the latent trait. The Italian PhoPhiKat-30 has good item discrimination across the latent continuum and showed cross-cultural equivalence for most items.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45945618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Response Formats on Response Style Strength","authors":"Mirka Henninger, Hansjörg Plieninger, T. Meiser","doi":"10.1027/1015-5759/a000779","DOIUrl":"https://doi.org/10.1027/1015-5759/a000779","url":null,"abstract":"Abstract: Many researchers use self-report data to examine abilities, personalities, or attitudes. At the same time, there is a widespread concern that response styles, such as the tendency to give extreme, midscale, or acquiescent responses, may threaten data quality. As an alternative to post hoc control of response styles using psychometric models, a priori control using specific response formats may be a means to reduce biasing response style effects in self-report data in day-to-day research practice. Previous research has suggested that response styles were less influential in a Drag-and-Drop (DnD) format compared to the traditional Likert-type format. In this article, we further examine the advantage of the DnD format, test its generalizability, and investigate its underlying mechanisms. In two between-participants experiments, we tested different versions of the DnD format against the Likert format. We found no evidence for reduced response style influence in any of the DnD conditions, nor did we find any difference between the conditions in terms of the validity of the measures to external criteria. We conclude that adaptations of response formats, such as the DnD format, may be promising, but require more thorough examination before recommending them as a means to reduce response style influence in psychological measurement.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":"1 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41548272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding the Clance Impostor Phenomenon Scale Through the Lens of a Bifactor Model","authors":"Kay Brauer, R. Proyer","doi":"10.1027/1015-5759/a000786","DOIUrl":"https://doi.org/10.1027/1015-5759/a000786","url":null,"abstract":"Abstract: The Clance Impostor Phenomenon Scale (CIPS) is the most frequently used self-report instrument for the assessment of the Impostor Phenomenon (IP). The literature provided mixed findings on the factorial structure of the CIPS. We extend previous work on the German-language CIPS by testing a bifactor exploratory factor model in two large and independently collected samples ( Ntotal = 1,794). Our analyses show that the bifactor model comprising a general IP factor and three group factors (labeled Luck, Fear of Failure, and Discount) fits the data well and 7 of the 20 items could be clearly assigned to the factors. The general factor (ω ≥ .90) and facets (α ≥ .67) show satisfying internal consistencies and differential correlations to attributional styles and the broader Big Five and HEXACO personality traits. Our findings support the use of the CIPS total score and expand the understanding of the CIPS’ multidimensional measurement model. Taking limitations into account, the identification and use of fine-grained facets contribute to understanding the correlates and consequences of the IP. We discuss potential improvements to the CIPS.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49045812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Response Tendencies to Positively and Negatively Worded Items of the Rosenberg Self-Esteem Scale With Eye-Tracking Methodology","authors":"Chrystalla C. Koutsogiorgi, M. Michaelides","doi":"10.1027/1015-5759/a000772","DOIUrl":"https://doi.org/10.1027/1015-5759/a000772","url":null,"abstract":"Abstract: The Rosenberg Self-Esteem Scale (RSES) was developed as a unitary scale to assess attitudes toward the self. Previous studies have shown differences in responses and psychometric indices between the positively and negatively worded items, suggesting differential processing of responses. The current study examined differences in response behaviors toward two positively and two negatively worded items of the RSES with eye-tracking methodology and explored whether those differences were more pronounced among individuals with higher neuroticism, controlling for verbal abilities and mood. Eighty-seven university students completed a computerized version of the scale, while their responses, response time, and eye movements were recorded through the Gazepoint GP3 HD eye-tracker. In linear mixed-effects models, two negatively worded items elicited higher scores (elicited stronger disagreement) in self-esteem, and different response processes, for example, longer viewing times, than two positively worded items. Neuroticism predicted lower responses and more revisits to item statements. Eye-tracking can enhance the examination of response tendencies and the role of wording and its interaction with individual characteristics at different stages of the response process.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42847801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Theory Matters","authors":"Carolin Hahnel, Alexander J. Jung, Frank Goldhammer","doi":"10.1027/1015-5759/a000776","DOIUrl":"https://doi.org/10.1027/1015-5759/a000776","url":null,"abstract":"Abstract: Following an extended perspective of evidence-centered design, this study provides a methodological exemplar of the theory-based construction of process indicators from log data. We investigated decision-making processes in web search as the target construct, assuming that individuals follow a heuristic search (focusing on search results vs. websites as a primary information source) and stopping rule (following a satisficing vs. sampling strategy). Drawing on these assumptions, we describe our reasoning for identifying the empirical evidence needed and selecting an assessment to obtain this evidence to derive process indicators that represent groups differentiated by search and stopping rule combinations. To evaluate our approach, we reanalyzed the processing behavior of 150 university students who were requested in four tasks to select a specific website from a list of five search results. We determined the process indicators per item and conducted multiple cluster analyses to investigate group recovery. For each item, we found three clusters, two of which matched our assumptions. Additionally, we explored the consistency of students’ cluster membership across items and investigated their relationship with students’ skills in evaluating online information. Based on the results, we discuss the tradeoff between construct breadth and process elaboration for deriving meaningful process indicators.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42031973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Process Data in Computer-Based Assessment","authors":"M. A. Lindner, Samuel Greiff","doi":"10.1027/1015-5759/a000790","DOIUrl":"https://doi.org/10.1027/1015-5759/a000790","url":null,"abstract":"Abstract: This editorial provides a comprehensive framework and overview of potential uses and next steps in research on process data in computer-based assessments, expanding toward broad perspectives on the field and an exploration of future directions and emerging trends. After briefly reflecting on the evolution of process data use in research and assessment practice, we discuss three key challenges, namely (1) the theoretical grounding and validation of process data indicators, (2) assessment design for process data, and (3) ethical standards. By considering best practice approaches in all three areas and current discussions in the literature, we conclude that a focus is needed on the following three areas: (1) strong, holistic theoretical frameworks for validating process data, (2) reliable, standardized data collections, preferably with a top-down approach to developing test items and preregistered hypotheses, and (3) ethical norms for data collection, data use, and guidelines for responsible inference, including restraints in decisions based on process data.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45220889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing a New Tool for International Youth Programs","authors":"C. Omoeva, Nina Menezes Cunha, P. Kyllonen, Sarah Gates, Andres Martinez, H. Burke","doi":"10.1027/1015-5759/a000770","DOIUrl":"https://doi.org/10.1027/1015-5759/a000770","url":null,"abstract":"Abstract: We developed and evaluated the YouthPower Action Youth Soft Skills Assessment (YAYSSA), a self-report soft skills measure. The YAYSSA targets 15- to 19-year-old youth in lower resource environments. In Study 1, we identified 16 key constructs based on a review of those associated with positive youth outcomes in sexual and reproductive health, violence prevention, and workforce success. We adapted promising items measuring those constructs from existing and openly available tools. We conducted cognitive interviews with 50 youth from six schools in Uganda, for wording and response formats, leading to a first draft tool. In Study 2 we administered that tool to N = 1,098 youth in 59 schools in Uganda. Confirmatory factor analyses did not support the hypothesized 16-factor structure, but exploratory factor analyses suggested a four-factor solution (Positive self-concept, Higher-order thinking skills, Social and Communication skills, and Negative self-concept). In Study 3, a revised tool was administered to Uganda youth ( N = 1,010, 59 sites). After cognitive testing with 45 youth in Guatemala, the tool was administered to youth ( N = 794; 59 sites) in Guatemala once, then 5 months later, with a mixture of retested and new participants ( N = 784; 67 sites). Factor analytic results supported the four-factor structure with 48 retained items and indicated that the instrument was reliable by internal consistency and test-retest correlations. The instrument correlated with demographic variables and outcomes in expected directions. We found evidence for measurement invariance across country, country and gender, country and socioeconomic status, and time. We discuss implications for scale validation and use in future research.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47522567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face, Construct and Criterion Validity, and Test-Retest Reliability, of the Adult Rejection Sensitivity Questionnaire","authors":"Mandira Mishra, Mark S. Allen","doi":"10.1027/1015-5759/a000782","DOIUrl":"https://doi.org/10.1027/1015-5759/a000782","url":null,"abstract":"Abstract: This research sought to test the face, construct and criterion validity, and test-retest reliability of the Adult Rejection Sensitivity Questionnaire (ARSQ). In Study 1, participants ( n = 45) completed the ARSQ and questions assessing scale item relevancy, clarity, difficulty, and sensitivity. In Study 2, participants ( n = 513) completed the ARSQ and demographic questions. In Study 3, participants ( n = 244) completed the ARSQ and returned 2 weeks later to complete the ARSQ and measures of depression, anxiety, and self-silencing behavior. Study 1 provided strong support for face validity with all items deemed relevant, clear, easy to answer, and neither distressing nor judgmental. Study 2 provided adequate support for the factor structure of the ARSQ (single-factor model and two-factor model) but suggested modifications could be made to improve scale validity. Study 3 provided further support for an adequate (but not good) factor structure, and evidence for criterion validity established through medium-large effect size correlations with depression, anxiety, and self-silencing behavior. However, the 2-week scale stability appeared poor ( r = .45) in a subsample of participants. Overall, the ARSQ showed sufficient validity to recommend its continued use, but we recommend further tests of scale reliability and potential modifications to increase construct validity.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49228571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement Properties and Cross-Cultural Adaptation of the De Jong Gierveld Loneliness Scale in Adults","authors":"S. Alsubheen, Ana Oliveira, Razanne Habash, R. Goldstein, D. Brooks","doi":"10.1027/1015-5759/a000784","DOIUrl":"https://doi.org/10.1027/1015-5759/a000784","url":null,"abstract":"Abstract: This systematic review evaluated the measurement properties of the De Jong Gierveld Loneliness Scale (DJGLS) in adults. A systematic search of four electronic databases (PubMed, EMBASE, Scopus, and PsycINFO) was conducted from inception until December 2022. The COSMIN (Consensus-Based Standards for the Selection of Health Measurement Instruments) guidelines were used to assess the methodological quality and evidence synthesis of the included studies. Forty-six studies assessed the validity and reliability of the DJGLS-11 and its short version, the DJGLS-6. Very-low-quality evidence supported the content validity, moderate to high-quality evidence confirmed the structural validity and internal consistency, and low-quality evidence supported the construct validity of the two versions. Test-retest reliability was examined for the DJGLS-6 with low-quality evidence supporting excellent interclass coefficient values of 0.73–1.00. Both scales were cross-culturally adapted and translated into 18 languages across 12 countries. Although the structural validity and internal consistency of the DJGLS were supported by high-quality evidence, very-low to low-quality evidence was available for its other measurement properties. Future studies are needed to perform a more comprehensive assessment of the measurement properties of the DJGLS before fully recommending the scale to assess loneliness in adults.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46013295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Work Resilience Scale","authors":"Doudou Liu, Xue Meng, Chaoping Li, Songke Xie","doi":"10.1027/1015-5759/a000780","DOIUrl":"https://doi.org/10.1027/1015-5759/a000780","url":null,"abstract":"Abstract: This study aims to investigate the psychometric properties of the WRS-C (Work Resilience Scale – Chinese version) through two studies. In Study 1 (Sample 1: N = 463), we conducted an exploratory factor analysis (EFA) and identified the two-factor solution representing work resilience after translating the Work Resilience Scale (WRS) into Chinese. In Study 2, the psychometric properties of the WRS-C were investigated in two samples (Sample 2: N = 477; Sample 3: N = 374). The confirmatory factor analysis (CFA) confirms a two-factor structure. Furthermore, the WRS-C shows measurement invariance across groups by gender, age, job tenure, and education level. Furthermore, the results provide evidence for the convergent and concurrent validity of the WRS-C. In addition, its predictive validity has been demonstrated through its associations with mental health outcomes, performance outcomes, and attitude outcomes. Overall, the WRS-C is a reliable and valid instrument in the Chinese context, which can be utilized in future research and practice.","PeriodicalId":48018,"journal":{"name":"European Journal of Psychological Assessment","volume":" ","pages":""},"PeriodicalIF":2.5,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44476236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}