{"title":"Systematic comparison of resilience scales using retrospective reports: A practical case study using South African data","authors":"C. V. Van Wijk","doi":"10.4102/ajopa.v6i0.150","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.150","url":null,"abstract":"The availability of different scales measuring similar constructs challenges scientists and practitioners when it comes to choosing the most appropriate instrument to use. As a result, systematic comparison frameworks have been developed to guide such decisions. The Consensus-based Standard for the Selection of Health Measurement Instruments (COSMIN) is one example of such a framework to examine the quality of psychometric studies. This article aimed, firstly, to explore the psychometric characteristics of resilience measures used in the South African Navy (SAN), in that context. Secondly, it aimed to illustrate the application of the COSMIN guide for comparing psychometric scales and employing data from the aforementioned resilience measures, as a practical case study. The study drew on both published and unpublished data from seven SAN samples, using eight psychometric scales associated with resilience. It assessed structural validity, construct validity, internal reliability and predictive ability. The outcomes were tabulated, and the COSMIN criteria were applied to each data point. All eight scales provided some degree of evidence of validity. However, it was at times difficult to differentiate between the scales when using the COSMIN guidelines. In such cases, more nuanced criteria were necessary to demonstrate more clearly the differences between the psychometric characteristics of the scales and ease in subsequent decision-making.Contribution: This article illustrated the application of COSMIN guidelines to systematically compare the quality of psychometric study outcomes on local South African data. It further offered evidence of validity for a range of resilience-related measures in a South African context.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":" 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141828426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phonemic verbal fluency in non-WEIRD populations: Demographic differences in performance in the Controlled Oral Word Association Test-FAS","authors":"Aline Ferreira-Correia, Hillary Banjo, Nicole Israel","doi":"10.4102/ajopa.v6i0.152","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.152","url":null,"abstract":"This study aimed to investigate whether age, level of education, gender, number of spoken languages, and the self-reported position of language within this multilingual experience predicted performance on the Controlled Oral Word Association Test (COWAT-FAS). Using a cross-sectional research design, the phonemic verbal fluency of a sample (n = 156) of healthy adults (ages 18–60 years) with different linguistic and educational backgrounds from a non-WEIRD (western, educated, industrialised, rich and democratic) context was assessed using the COWAT-FAS (including the F, A, S, total correct, repetition, incorrect, and total errors). Pearson’s correlations showed significant negative associations between age and most of the COWAT scores, including the total (r = –0.47; p 0.01) and significant positive associations between years of education and all of the COWAT scores, including the total (r = 0.49; p 0.01). The number of languages spoken was not significantly correlated with any of the COWAT scores, but multilinguals who identified English as a first language performed significantly better than those who identified English as a secondary language for several COWAT scores, including the total (t154 = 3.85; p 0.001; d = 0.79). Age (B = –0.32; p 0.001), years of education (B = 0.35; p 0.001), and language position (B = –0.20; p 0.01) also significantly predicted the COWAT total score (r2 = 0.38; F = 18.34; p 0.001; f2 = 0.61). The implications of these findings for use of the COWAT-FAS in multilingual and non-WEIRD contexts are discussed.Contribution: This article supports the importance of understanding the role demographic variables play in cognitive performance and how they represent a source of bias in cognitive testing, particularly in the COWAT-FAS. It highlights how age, level of education, and the correspondence, or lack thereof, between first language and language of assessment, impacts phonemic fluency tasks. This knowledge may help to manage biases when conducting verbal fluency assessments with multilingual individuals and in non-WEIRD contexts.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"10 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141099778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experiences of experts in intelligence measurement of South African school learners","authors":"Ilze Van der Merwe, P. Erasmus, Werner De Klerk","doi":"10.4102/ajopa.v6i0.148","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.148","url":null,"abstract":"This qualitative research study emerged from the essential need for reliable and valid intelligence test instruments for South African school learners, who are characterised as a diverse population with their variety in culture, ethnicity, and language, as well as having unequal socio-economic and educational backgrounds. The aim of this research study was to use a qualitative interpretive description research design to explore and describe the experiences of both experts in intelligence test development and/or adaptation as well as psychologists and psychometrists who have administered intelligence tests to South African school learners in various contexts. Twelve psychologists and/or psychometrists were interviewed, of which six were also experts in test development and/or adaptation, which yielded four themes after thematic analysis, namely, utilised intelligence measurements in the current South African school learner context are less relevant; the South African education system is a major issue specifically within lower socio-economic status (SES) contexts; it does not seem feasible to design or adapt suitable intelligence measures that are valid and reliable in the current South African school learner context; and key informants’ recommendations from their experiences.Contribution: This research study contributes to the understanding of the measurement of intelligence of South African school learners in diverse contexts. Findings of this research study can guide the strategic process to design an intelligence instrument suitable for a South African population of school learners, informing fair assessment practices for multiethnic equalisation.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"34 17","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Measurement invariance of cognitive and affective job insecurity: A cross-national study","authors":"Gina Görgens-Ekermans, V. Ghezzi, T. Probst, Claudio Barbaranelli, Laura Petitta, Lixin Jiang, Sanman Hu","doi":"10.4102/ajopa.v6i0.147","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.147","url":null,"abstract":"Empirical evidence of established measurement invariance of job insecurity measures may enhance the practical utility of job insecurity as a valid predictor when utilised over different cross-national samples. This study investigated the measurement invariance of the nine-item versions of the Job Security Index (a measure of cognitive job insecurity) and the Job Security Satisfaction Scale (a measure of affective job insecurity), across four countries (i.e. the United States, N = 486; China, N = 629; Italy, N = 482 and South Africa, N = 345). Based on a novel bifactor-(S-1) model approach we found evidence for partial metric, partial scalar and partial strict invariance of our substantive bifactor-(S-1) structure. The results extend measurement invariance research on job insecurity with obvious pragmatic implications (e.g. scaling units, measurement bias over cross-national interpretations).Contribution: This research provides evidence to support the applied use of cross-national comparisons of job insecurity scores across the nationalities included in this study. Theoretically, this research advances the debate about the nature of the relationship between cognitive and affective job insecurity, suggesting that in this cross-national dataset, a model where cognitive job insecurity is specified as the reference domain outperforms a model where affective job insecurity is assigned this status. Practically, it demonstrates that it is sensible and necessary to differentiate between cognitive and affective job insecurity and include measures of both constructs in future research on the construct.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"15 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140658618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Criterion validity of the 10 personality aspects for performance in South Africa","authors":"Xander van Lill, Cobi Hayes","doi":"10.4102/ajopa.v6i0.129","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.129","url":null,"abstract":"","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":" 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140210035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invariance and item bias of the Mental Health Continuum Short-Form for South African university first-year students","authors":"Karina Mostert, L. D. de Beer, Ronalda De Beer","doi":"10.4102/ajopa.v6i0.143","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.143","url":null,"abstract":"Over the last decade, higher education institutions (HEIs) have become increasingly interested in student well-being. However, since the student population is very diverse in South Africa, questionnaires measuring the well-being of students must be psychometrically sound for different cultural and demographic groups. This study aimed to determine the psychometric properties of the Mental Health Continuum Short-Form (MHC-SF), including factorial validity, measurement invariance, item bias and internal consistency. The sample consisted of 1285 first-year university students. The three-factor structure of the MHC-SF was confirmed, indicating that emotional, social and psychological well-being are three independent factors. Invariance results showed that the MHC-SF produced similar results across campuses and gender sub-groups, although partial invariance was present among language groups. Item bias was present for different sub-groups, but the practical impact was negligible. Reliability scores indicated that all three dimensions are reliable in this sample. This study’s findings could help higher education institutions with preliminary results on the validity and reliability of a widely used well-being measure to assess university students’ subjective well-being and could aid in investigating and measuring first-year students’ overall well-being during their transition to tertiary education.Contribution: This study contributes to creating knowledge about fair and unbiased measurement of student well-being across different sub-groups in South Africa.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"35 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140378392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Montreal Cognitive Assessment test: Psychometric analysis of a South African workplace sample","authors":"C. V. Van Wijk, W. A. Meintjes, Chris J.B. Muller","doi":"10.4102/ajopa.v6i0.151","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.151","url":null,"abstract":"The Montreal Cognitive Assessment (MoCA) test is a widely used tool to screen for mild neurocognitive impairment. However, its structural validity has not been fully described in South Africa. The study aimed to replicate and extend earlier work with South African samples, to provide an expanded description of the psychometric properties of the MoCA. The study examined the MoCA in a sample of neurocognitively healthy working adults (N = 402) and individuals diagnosed with mild neurocognitive disorders (N = 42); both groups reported good English proficiency. Analysis included general scale descriptions, and structural and discriminant validity. Age and language, but not gender, influenced MoCA scores, with mean total scores of healthy individuals falling below the universal cut-off. Structural analysis showed that a multidimensional model with a higher-order general factor fit the data well, and measurement invariance for gender and language was confirmed. Discriminant validity was supported, and receiver operating characteristics curve analysis illustrated the potential for grey-zone lower and upper thresholds to identify risk.Contribution: This study replicated previous findings on the effects of age, language and gender, and challenged the universal application of ≤ 26 as cut-off for cognitive impairment indiscriminately across groups or contexts. It emphasised the need for context-specific adaptation in cognitive assessments, especially for non-English first language speakers, to enhance practical utility. Novel to this study, it extended knowledge on the structural validity of the test and introduced grey-zone scores as a potential guide to the identification of risk in resource-restricted settings.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"198 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139840123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Montreal Cognitive Assessment test: Psychometric analysis of a South African workplace sample","authors":"C. V. Van Wijk, W. A. Meintjes, Chris J.B. Muller","doi":"10.4102/ajopa.v6i0.151","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.151","url":null,"abstract":"The Montreal Cognitive Assessment (MoCA) test is a widely used tool to screen for mild neurocognitive impairment. However, its structural validity has not been fully described in South Africa. The study aimed to replicate and extend earlier work with South African samples, to provide an expanded description of the psychometric properties of the MoCA. The study examined the MoCA in a sample of neurocognitively healthy working adults (N = 402) and individuals diagnosed with mild neurocognitive disorders (N = 42); both groups reported good English proficiency. Analysis included general scale descriptions, and structural and discriminant validity. Age and language, but not gender, influenced MoCA scores, with mean total scores of healthy individuals falling below the universal cut-off. Structural analysis showed that a multidimensional model with a higher-order general factor fit the data well, and measurement invariance for gender and language was confirmed. Discriminant validity was supported, and receiver operating characteristics curve analysis illustrated the potential for grey-zone lower and upper thresholds to identify risk.Contribution: This study replicated previous findings on the effects of age, language and gender, and challenged the universal application of ≤ 26 as cut-off for cognitive impairment indiscriminately across groups or contexts. It emphasised the need for context-specific adaptation in cognitive assessments, especially for non-English first language speakers, to enhance practical utility. Novel to this study, it extended knowledge on the structural validity of the test and introduced grey-zone scores as a potential guide to the identification of risk in resource-restricted settings.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"41 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139780201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can a general factor be derived from employees’ responses to items on the Individual Work Performance Review?","authors":"Xander van Lill, Leoni van der Vaart","doi":"10.4102/ajopa.v6i0.133","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.133","url":null,"abstract":"This study aimed to investigate whether permissible inferences can be derived from employees’ standing on a general performance factor from their responses to the Individual Work Performance Review (IWPR) items. The performance of 448 employees was rated (by their managers) using the IWPR. Latent variable modelling was performed through a bifactor exploratory structural equation model with the robust version of the maximum likelihood estimator. The general factor’s score was also used to inspect correlations with two work performance correlates: tenure and job level. In line with international findings, the results suggested that a general factor could explain 65% of the common variance in the 80 items of the IWPR. Job level, but not tenure, correlated with general job performance. The results support calculating an overall score for performance, which might be a suitable criterion to differentiate top performers, conduct criterion validity studies, and calculate the return on investment of selection procedures or training programmes.Contribution: The present study provides initial evidence for a general factor influencing employees’ responses to items on a generic performance measure in South Africa. In addition, the study showcases the application of advanced statistical methods in factor analyses, demonstrating their efficacy in evaluating the psychometric properties of hierarchical factor models derived from data provided on performance measures.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"67 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139606526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Satisfied with teaching? Psychometric properties of the Teaching Satisfaction Scale","authors":"T. Pretorius, A. Padmanabhanunni, Kyle M. Jackson, Brendon D. Faroa","doi":"10.4102/ajopa.v5i0.140","DOIUrl":"https://doi.org/10.4102/ajopa.v5i0.140","url":null,"abstract":"Job satisfaction among teachers is a central feature of educational research owing to its benefits for both teachers and students. Compared with their counterparts, teachers satisfied with their roles and responsibilities in the work context demonstrate greater commitment to their organisation, are less likely to leave the profession, and contribute more positively to the educational attainment of their students. Theoretical advances in the study of job satisfaction have emphasised the importance of using stable and robust quantitative measurement tools to facilitate cross-cultural comparisons. This study aims to broaden research on the Teaching Satisfaction Scale (TSS) by examining its psychometric properties through classical test theory (CTT) and Rasch and Mokken analyses. Overall, the three approaches confirmed that the TSS is a unidimensional scale with sound validity and internal consistency. The TSS was also found to be a valuable resource for researchers in different cultural contexts, as it can be used without overburdening teachers and it provides valuable information to support interventions aimed at enhancing job satisfaction.Contribution: These approaches confirmed that the scale is unidimensional with satisfactory reliability and validity and that the TSS is a valuable resource as it can be used without overburdening teachers and can inform interventions aimed at enhancing job satisfaction.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"65 25","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138594765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}