{"title":"Medical practitioner compassion: Development and validation of a compassion competency questionnaire.","authors":"Michelle Jäckel-Visser, Carl C Theron, Robert J Mash","doi":"10.4102/ajopa.v7i0.170","DOIUrl":"10.4102/ajopa.v7i0.170","url":null,"abstract":"<p><p>There is a need for a psychometrically robust questionnaire measuring medical practitioner compassion in healthcare. Without such a measure, competence on this construct cannot be assessed, nor can the effectiveness of interventions designed to enhance compassion be determined. The aim of this study was to develop and validate a self-assessment measure of medical practitioner compassion competence (the Medical Practitioner Compassion Competency Questionnaire [MPCCQ]). The MPCCQ was administered to a sample of 234 medical practitioners in South Africa. They represented three healthcare system levels, namely, the primary level (healthcare centres and clinics), the secondary level (district and regional hospitals) and the tertiary level (central, specialised and sub-specialist hospitals). The quantitative data were analysed with statistical packages, namely, Statistical Package for the Social Sciences (SPSS) version 25 and LISREL 8.8, and structural equation modelling was used to fit the MPCCQ measurement model and structural model. Dimensionality and item analyses returned generally positive results. Fit statistics and criteria used to judge the fit of the models included Chi-square (χ<sup>2</sup>), goodness of fit index (GFI), adjusted goodness of fit index (AGFI), root mean square error of approximation (RMSEA), root mean square residual (RMR) and the standardised root mean square residual (SRMR). The results provided an excellent model fit for both the measurement and comprehensive LISREL models. 
The MPCCQ measurement and structural model parameter estimates supported the position that the design intention underpinning the MPCCQ succeeded.</p><p><strong>Contribution: </strong>The statistical evidence generated thus far failed to refute the position that the MPCCQ shows construct validity, thus paving the way for the cautious utilisation of the instrument in healthcare and medical education institutions.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"7 ","pages":"170"},"PeriodicalIF":0.0,"publicationDate":"2025-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12224002/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144561276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
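Several of the fit indices listed in the record above have simple closed forms; RMSEA, for instance, derives from the model chi-square, its degrees of freedom and the sample size. A minimal sketch with hypothetical values (the abstract does not report the actual statistics, so the chi-square and df below are purely illustrative):

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """Root mean square error of approximation from a chi-square fit statistic."""
    # Non-centrality per degree of freedom and observation; floored at zero
    # so a chi-square below its df yields RMSEA = 0 (close approximate fit).
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Hypothetical values: chi-square = 100 on 80 df, with the study's n = 234.
print(round(rmsea(100.0, 80, 234), 3))  # → 0.033; values below ~0.06 read as close fit
```

The max(..., 0) floor matters in practice: a well-fitting model can produce a chi-square smaller than its degrees of freedom, and RMSEA is then reported as exactly zero rather than as an imaginary number.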
{"title":"Psychometric evaluation of the self-undermining scale in South Africa using the Rasch model.","authors":"Sergio L Peral","doi":"10.4102/ajopa.v7i0.163","DOIUrl":"10.4102/ajopa.v7i0.163","url":null,"abstract":"<p><p>The self-undermining scale is used to assess employee behaviours that undermine job performance, including making mistakes, creating conflict, creating confusion and creating a backlog in work tasks. To date, its psychometric properties have not been thoroughly investigated using item-response theory applications, especially in South Africa. Applying the Rasch Rating Scale Model, this study aimed to investigate the reliability and internal validity of the self-undermining scale, including item fit and rating scale functionality. Data were collected from 318 South African employees using a non-experimental, cross-sectional survey design. The instrument demonstrated unidimensional scaling with Cronbach's alpha and person separation values of 0.77 and 1.57, respectively. Item and category fit statistics showed satisfactory fit to the Rasch model, with only one item warranting further attention. Some refinements regarding item wording and rating scale optimisation are provided.</p><p><strong>Contribution: </strong>This study is the first to investigate the reliability and validity of the self-undermining scale through the Rasch Measurement Model. It also offers cautionary insights into the applicability of the scale to measure self-undermining among South African employees because of the lack of discriminatory power. 
Recommendations for further validation studies are provided.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"7 ","pages":"163"},"PeriodicalIF":0.0,"publicationDate":"2025-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12135889/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
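The person separation of 1.57 reported above implies a person reliability through the standard Rasch identity R = G² / (1 + G²); a small sketch of the two-way mapping (the 0.77 quoted in the abstract is Cronbach's alpha, a different reliability estimate):

```python
def separation_to_reliability(g: float) -> float:
    """Rasch person reliability implied by a person separation index G."""
    return g * g / (1.0 + g * g)

def reliability_to_separation(r: float) -> float:
    """Inverse mapping: separation implied by a reliability coefficient."""
    return (r / (1.0 - r)) ** 0.5

# The study's person separation of 1.57 implies a person reliability of about
# 0.71, below the conventional 0.80 threshold, which is consistent with the
# abstract's caution about the scale's limited discriminatory power.
print(round(separation_to_reliability(1.57), 2))  # → 0.71
```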
{"title":"Emotional social screening tool for school readiness (E3SR-R): Adaptation into Afrikaans.","authors":"Erica Munnik, Nuraan Adams, Mario R Smith","doi":"10.4102/ajopa.v7i0.174","DOIUrl":"10.4102/ajopa.v7i0.174","url":null,"abstract":"<p><p>The Emotional Social Screening Tool for School Readiness - Revised (E3SR-R) is a contextually sensitive and psychometrically sound measure developed to screen emotional-social competence in preschool learners in South Africa, a multilingual country. The original measure was constructed in English. This article reports on the translation of the E3SR-R into Afrikaans. A three-phase design was adopted. Phase 1: Independent reviewers evaluated the E3SR-R for conceptual validity. The Conceptual Construct Validity Appraisal Checklist was used to assess whether the E3SR-R was theoretically sound prior to adaptation. Phase 2 entailed translation of the E3SR-R. Reviewers used the Quality of Translation and Linguistic Equivalence Checklist to assess compliance with International Test Commission (ITC) guidelines. Phase 3 established content validity of the translation using a Delphi panel of 9 experts. The panel concluded within one round. Ethics clearance was granted by the University of the Western Cape. All applicable ethics principles were upheld. In Phase 1, a high level of inter-rater agreement confirmed that the E3SR had conceptual construct validity that supported adaptation. Phase 2 produced an Afrikaans translation. Raters had a high level of agreement that the adaptation complied with ITC guidelines. The Delphi panel concluded that the Afrikaans version demonstrated content validity. The Afrikaans translation of the E3SR-R was linguistically equivalent.</p><p><strong>Contribution: </strong>The study employed a rigorous methodology that underscored the importance of establishing conceptual construct validity, evaluating the translation process and establishing content validity in translation studies. 
Access to screening tools for emotional-social competence in pre-schoolers was expanded.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"7 ","pages":"174"},"PeriodicalIF":0.0,"publicationDate":"2025-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12135888/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
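The record above reports high inter-rater agreement at two phases; agreement of that kind is commonly quantified with Cohen's kappa, which corrects raw agreement for chance. A minimal two-rater sketch (the ratings below are toy labels, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements on the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["keep", "keep", "revise", "keep", "revise", "keep"]
b = ["keep", "keep", "revise", "revise", "revise", "keep"]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

Raw agreement here is 5/6 (0.83), but kappa discounts the agreement two raters with these marginals would reach by chance (0.5), yielding 0.67.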
{"title":"Impact of unanswered questions on examinees' latent traits: An item response theory perspective.","authors":"Joseph T Akinboboye, Musa A Ayanwale, David A Adewuni, Yohanna I Vincent","doi":"10.4102/ajopa.v6i0.161","DOIUrl":"10.4102/ajopa.v6i0.161","url":null,"abstract":"<p><p>In grading examinees' responses to test items, it is not uncommon to find that some examinees omit responses to specific items. The number of omitted responses must be considered in the psychometric analysis of test data. Omitted responses cannot be ignored, as mishandling them can jeopardise the validity of the test. This study investigates the impact of omitted responses on examinee characteristics in a Mathematics Achievement Test (MAT), using item response theory (IRT), in Osun State, Nigeria. A descriptive survey research design was employed, with a sample of 600 senior secondary school 3 (SSS 3) students from eight randomly selected schools. The instrument used was a 40-item multiple-choice MAT, adapted from the West African Examinations Council's items, with a reliability coefficient of 0.88. The instrument was content-validated by experts in Mathematics using the Lawshe content validity ratio, giving a 0.82 content validity index. The results indicate significant differences in estimated ability levels among groups, with varying probabilities of examinees producing omitted responses. The study recommends the consideration of omitted responses in IRT-based ability estimation and emphasises the importance of comparable ability groupings. 
This research contributes to the understanding of the complexities of educational measurement and highlights the need for careful handling of omitted responses to ensure the validity of test inferences.</p><p><strong>Contribution: </strong>This study contributes by highlighting the importance of considering omitted responses in MAT, emphasising their impact on estimated ability levels and the validity of test inferences, thus informing fairer assessment practices and enhancing the reliability of educational measurements.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"6 ","pages":"161"},"PeriodicalIF":0.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12082205/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144128834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
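To illustrate why the handling of omits matters for IRT ability estimation, here is a hedged sketch (not the authors' procedure): a grid-search maximum-likelihood ability estimate under a 2PL model, with omitted items either dropped from the likelihood or scored as wrong. The item parameters are invented for the example.

```python
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_theta(responses, a, b, grid=None):
    """Grid-search ML ability estimate; responses: 1 correct, 0 wrong, None omitted."""
    grid = grid or [i / 100 for i in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for x, ai, bi in zip(responses, a, b):
            if x is None:  # treat omit as missing: the item drops out entirely
                continue
            p = p_2pl(theta, ai, bi)
            ll += math.log(p if x == 1 else 1.0 - p)
        return ll
    return max(grid, key=loglik)

a = [1.0] * 5
b = [-1.0, -0.5, 0.0, 0.5, 1.0]
answered = [1, 1, 1, None, None]   # two omits ignored as missing
scored_wrong = [1, 1, 1, 0, 0]     # the same omits scored as incorrect
print(ml_theta(answered, a, b), ml_theta(scored_wrong, a, b))
```

Dropping the omits leaves only correct answers, so the estimate runs to the top of the grid, while scoring them as wrong pulls it down to a finite value; this is the kind of group-level divergence in estimated ability that the study reports.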
{"title":"Examining the unidimensionality of the PHQ-9 with first responders: Evidence from different psychometric paradigms.","authors":"Tyrone B Pretorius, Anita Padmanabhanunni","doi":"10.4102/ajopa.v6i0.165","DOIUrl":"10.4102/ajopa.v6i0.165","url":null,"abstract":"<p><p>The Patient Health Questionnaire-9 (PHQ-9) is an effective tool for identifying depressive disorders in diverse populations, making it a valuable resource in both clinical practice and research. However, the factor structure and dimensionality of the instrument have been contested. Studies have raised questions about whether the PHQ-9 adequately captures a single underlying construct or reflects multiple distinct dimensions of depression. This study examines the factor structure of the PHQ-9 among South African first responders using exploratory factor analysis (EFA), confirmatory factor analysis (CFA) with ancillary bifactor indices, parallel analysis and Mokken analysis. A cross-sectional study design was used with data collected from police officers (<i>n</i> = 309) and paramedics (<i>n</i> = 120). Although the EFA identified a two-factor structure, this was not supported by the other analyses. While the one-factor, correlated two-factor and bifactor models of the PHQ-9 had comparable fit indices, the one-factor model appeared to be marginally superior in the CFA. Ancillary bifactor and parallel analysis also did not support the interpretation of the PHQ-9 as multidimensional. Lastly, Mokken scale analysis confirmed that the PHQ-9 is a strong and reliable unidimensional scale of depression. These findings suggest that the PHQ-9 predominantly measures a single construct of depression, consistent with the unidimensional view of the disorder.</p><p><strong>Contribution: </strong>The present study provides evidence from different measurement perspectives that the commonly used PHQ-9 measures a single construct of depression and not two separate components as some studies suggested. 
In practice, this simplifies the interpretation of scores, allowing clinicians to assess overall depression severity without needing to differentiate between symptom types.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"6 ","pages":"165"},"PeriodicalIF":0.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12082222/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144128830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
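Parallel analysis, one of the methods listed in the record above, retains only those factors whose eigenvalues exceed the eigenvalues obtained from random data of the same shape. A minimal sketch on synthetic unidimensional data (toy responses, not the PHQ-9 data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 300 respondents, 9 items all driven by a single latent factor.
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(size=(300, 9))

def eigenvalues(x):
    """Descending eigenvalues of the item correlation matrix."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

real = eigenvalues(items)
# Reference: mean eigenvalues across correlation matrices of pure-noise data
# with the same number of respondents and items.
random_means = np.mean(
    [eigenvalues(rng.normal(size=items.shape)) for _ in range(100)], axis=0
)
n_factors = int(np.sum(real > random_means))
print(n_factors)  # → 1: only the first eigenvalue beats its random counterpart
```

With a genuinely unidimensional generating process, only the first real eigenvalue clears the random benchmark, mirroring the abstract's conclusion that parallel analysis did not support a multidimensional reading.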
{"title":"Assessment futures: Reflections on the next decade of psychological assessment in South Africa.","authors":"Sumaya Laher","doi":"10.4102/ajopa.v6i0.166","DOIUrl":"10.4102/ajopa.v6i0.166","url":null,"abstract":"","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"6 ","pages":"166"},"PeriodicalIF":0.0,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12082265/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144128778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliability, validity and dimensionality of the 12-Item General Health Questionnaire among South African healthcare workers.","authors":"Clement N Kufe, Colleen Bernstein, Kerry Wilson","doi":"10.4102/ajopa.v6i0.144","DOIUrl":"10.4102/ajopa.v6i0.144","url":null,"abstract":"<p><p>Healthcare workers (HCWs) were among the high-risk groups for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and suffered a high burden of mental health challenges, including depression, anxiety, traumatic stress, avoidance and burnout. The 12-Item General Health Questionnaire (GHQ-12) has shown the best fit in both a one-factor structure and a multidimensional structure for the screening of common mental disorders and psychiatric well-being. The aim was to test for reliability and validity and ascertain the factor structure of the GHQ-12 in a South African HCW population. Data were collected from 832 public hospital and clinic staff during the coronavirus disease 2019 (COVID-19) pandemic in Gauteng, South Africa. The factor structure of the GHQ-12 in this professional population was examined by exploratory factor analysis (EFA) to identify factors, confirmatory factor analysis (CFA) for construct validity and structural equation modelling (SEM) to establish model fit. The GHQ-12 median score was higher (Mdn = 25) in women than in men (Mdn = 24), <i>p</i> = 0.044. The assumptions for inferential statistics were tested: the determinant for the correlation matrix was 0.034, Bartlett's test of sphericity was <i>p</i> < 0.001, Chi-square = 2262.171 and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy was 0.877. The four factors identified were labelled as social dysfunction (37.8%), anxiety depression (35.4%), capable (24.9%) and self-efficacy (22.7%). The sample had Cronbach's alpha and McDonald's Omega coefficients of 0.85.</p><p><strong>Contribution: </strong>The study highlighted the gaps in the use of the GHQ-12. 
The findings affirm the validity and reliability of the GHQ-12 in this group of professionals and the multidimensional structure for screening for psychological distress.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"6 ","pages":"144"},"PeriodicalIF":0.0,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12082264/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144128839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
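Bartlett's test of sphericity, reported above alongside the correlation-matrix determinant and the KMO, has a standard closed form: chi-square = -(n - 1 - (2p + 5) / 6) ln|R|. A sketch using the study's n = 832 respondents and p = 12 items but illustrative determinants (this is not an attempt to reproduce the reported statistic):

```python
import math

def bartlett_sphericity(det_r: float, n: int, p: int) -> float:
    """Bartlett's test statistic from the determinant of a p x p correlation matrix."""
    return -(n - 1 - (2 * p + 5) / 6) * math.log(det_r)

# An identity correlation matrix (no shared variance) has determinant 1, so the
# statistic is 0 and factoring the items would be unwarranted; as the
# determinant shrinks toward 0, the statistic grows and sphericity is rejected.
print(bartlett_sphericity(0.5, 832, 12))  # ≈ 572.7 on p(p-1)/2 = 66 df
```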
{"title":"Systematic comparison of resilience scales using retrospective reports: A practical case study using South African data","authors":"C. V. Van Wijk","doi":"10.4102/ajopa.v6i0.150","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.150","url":null,"abstract":"The availability of different scales measuring similar constructs challenges scientists and practitioners when it comes to choosing the most appropriate instrument to use. As a result, systematic comparison frameworks have been developed to guide such decisions. The Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) is one example of such a framework to examine the quality of psychometric studies. This article aimed, firstly, to explore the psychometric characteristics of resilience measures used in the South African Navy (SAN) context. Secondly, it aimed to illustrate the application of the COSMIN guide for comparing psychometric scales, employing data from the aforementioned resilience measures as a practical case study. The study drew on both published and unpublished data from seven SAN samples, using eight psychometric scales associated with resilience. It assessed structural validity, construct validity, internal reliability and predictive ability. The outcomes were tabulated, and the COSMIN criteria were applied to each data point. All eight scales provided some degree of evidence of validity. However, it was at times difficult to differentiate between the scales when using the COSMIN guidelines. In such cases, more nuanced criteria were necessary to demonstrate more clearly the differences between the psychometric characteristics of the scales and to ease subsequent decision-making. Contribution: This article illustrated the application of COSMIN guidelines to systematically compare the quality of psychometric study outcomes on local South African data. 
It further offered evidence of validity for a range of resilience-related measures in a South African context.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":" 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141828426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Story-linked item design in tablet-based assessment for preschool children: Insights from testing.","authors":"Rivca Marais, Louise Stroud, Cheryl Foxcroft, Johan Cronje, Jennifer Jansen","doi":"10.4102/ajopa.v6i0.154","DOIUrl":"10.4102/ajopa.v6i0.154","url":null,"abstract":"<p><p>This article provides a rationale for exploring the use of tablet-based assessment of children between the ages of 3 years and 5 years. The purpose of the study was to gain insights from young children's digital test-taking performances and experiences to inform the digitalisation of developmental tests. A mixed-methods design was followed to collect both qualitative and quantitative data. Animated tablet-based items following a storyline were field-tested on a sample of 60 South African children. Results support the viability of a story-linked, tablet-based gamification approach for assessing children 5 years and under, and emphasise the need for documented strategies and item examples to guide innovative developmental assessment. Digital items showed a degree of responsiveness to various factors, suggesting a potential influence on test-taking performance, which contributes to the necessity of re-imagining item and test development in the digital age.</p><p><strong>Contribution: </strong>This study departed from the predictable and conservative approach to test development taken so far, which merely adapts existing measures to a digital format. 
By empirically assessing the efficacy of newly developed items designed specifically for a digital format, this article addressed the intersection of technology and psychological assessment of the preschool child in a South African context.</p>","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"6 ","pages":"154"},"PeriodicalIF":0.0,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12082257/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144128843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Phonemic verbal fluency in non-WEIRD populations: Demographic differences in performance in the Controlled Oral Word Association Test-FAS","authors":"Aline Ferreira-Correia, Hillary Banjo, Nicole Israel","doi":"10.4102/ajopa.v6i0.152","DOIUrl":"https://doi.org/10.4102/ajopa.v6i0.152","url":null,"abstract":"This study aimed to investigate whether age, level of education, gender, number of spoken languages, and the self-reported position of language within this multilingual experience predicted performance on the Controlled Oral Word Association Test (COWAT-FAS). Using a cross-sectional research design, the phonemic verbal fluency of a sample (n = 156) of healthy adults (ages 18–60 years) with different linguistic and educational backgrounds from a non-WEIRD (western, educated, industrialised, rich and democratic) context was assessed using the COWAT-FAS (including the F, A, S, total correct, repetition, incorrect, and total errors). Pearson’s correlations showed significant negative associations between age and most of the COWAT scores, including the total (r = –0.47; p < 0.01) and significant positive associations between years of education and all of the COWAT scores, including the total (r = 0.49; p < 0.01). The number of languages spoken was not significantly correlated with any of the COWAT scores, but multilinguals who identified English as a first language performed significantly better than those who identified English as a secondary language for several COWAT scores, including the total (t(154) = 3.85; p < 0.001; d = 0.79). Age (B = –0.32; p < 0.001), years of education (B = 0.35; p < 0.001), and language position (B = –0.20; p < 0.01) also significantly predicted the COWAT total score (r² = 0.38; F = 18.34; p < 0.001; f² = 0.61). 
The implications of these findings for use of the COWAT-FAS in multilingual and non-WEIRD contexts are discussed. Contribution: This article supports the importance of understanding the role demographic variables play in cognitive performance and how they represent a source of bias in cognitive testing, particularly in the COWAT-FAS. It highlights how age, level of education, and the correspondence, or lack thereof, between first language and language of assessment impact phonemic fluency tasks. This knowledge may help to manage biases when conducting verbal fluency assessments with multilingual individuals and in non-WEIRD contexts.","PeriodicalId":34043,"journal":{"name":"African Journal of Psychological Assessment","volume":"10 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141099778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
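The regression effect sizes in the record above are linked by Cohen's identity f² = R² / (1 - R²); the abstract's R² of 0.38 reproduces its reported f² of 0.61 exactly:

```python
def cohens_f2(r_squared: float) -> float:
    """Cohen's f^2 effect size implied by a regression model's R^2."""
    return r_squared / (1.0 - r_squared)

# The abstract's R^2 of 0.38 yields its reported f^2 (0.61 exceeds Cohen's
# conventional 0.35 cutoff for a large effect).
print(round(cohens_f2(0.38), 2))  # → 0.61
```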