{"title":"Comparison of machine learning algorithms for predicting cognitive impairment using neuropsychological tests.","authors":"Chanda Simfukwe, Seong Soo A An, Young Chul Youn","doi":"10.1080/23279095.2024.2392282","DOIUrl":"10.1080/23279095.2024.2392282","url":null,"abstract":"<p><strong>Objectives: </strong>Neuropsychological tests (NPTs) are standard tools for assessing cognitive function. These tools can evaluate the cognitive status of a subject, which can be time-consuming and expensive for interpretation. Therefore, this paper aimed to optimize the systematic NPTs by machine learning and develop new classification models for differentiating healthy controls (HC), mild cognitive impairment, and Alzheimer's disease dementia (ADD) among groups of subjects.</p><p><strong>Patients and methods: </strong>A total dataset of 14,926 subjects was obtained from the formal 46 NPTs based on the Seoul Neuropsychological Screening Battery (SNSB). The statistical values of the dataset included an age of 70.18 ± 7.13 with an education level of 8.18 ± 5.50 and a diagnosis group of three; HC, MCI, and ADD. The dataset was preprocessed and classified in two- and three-way machine-learning classification from scikit-learn (www.scikit-learn.org) to differentiate between HC versus MCI, HC versus ADD, HC versus Cognitive Impairment (CI) (MCI + ADD), and HC versus MCI versus ADD. We compared the performance of seven machine learning algorithms, including Naïve Bayes (NB), random forest (RF), decision tree (DT), k-nearest neighbors (KNN), support vector machine (SVM), AdaBoost, and linear discriminant analysis (LDA). The accuracy, sensitivity, specificity, positive predicted value (PPV), negative predictive value (NPV), area under the curve (AUC), confusion matrixes, and receiver operating characteristic (ROC) were obtained from each model based on the test dataset.</p><p><strong>Results: </strong>The trained models based on 29 best-selected NPT features were evaluated, the model with the RF algorithm yielded the best accuracy, sensitivity, specificity, PPV, NPV, and AUC in all four models: HC versus MCI was 98%, 98%, 97%, 98%, 97%, and 99%; HC versus ADD was 98%, 99%, 96%, 97%, 98%, and 99%; HC versus CI was 97%, 99%, 92%, 97%, 97%, and 99% and HC versus MCI versus ADD was 97%, 96%, 98%, 97%, 98%, and 99%, respectively, in predicting of cognitive impairment among subjects.</p><p><strong>Conclusion: </strong>According to the results, the RF algorithm was the best classification model for both two- and three-way classification among the seven algorithms trained on an imbalanced NPTs SNSB dataset. The trained models proved useful for diagnosing MCI and ADD in patients with normal NPTs. These models can optimize cognitive evaluation, enhance diagnostic accuracy, and reduce missed diagnoses.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sentence comprehension deficits in aphasia disorders: A systematic review of mapping therapy.","authors":"Vahid Valinejad, Maedeh Salehi Darjani, Ehsan Shekari","doi":"10.1080/23279095.2024.2394091","DOIUrl":"https://doi.org/10.1080/23279095.2024.2394091","url":null,"abstract":"<p><p>Patients with aphasia (PWA), particularly those with agrammatic aphasia, experience problems in sentence comprehension. Studies have found that Mapping Therapy (MT) can improve sentence processing in PWA. This paper aims to review the literature on therapeutic studies using MT for the treatment of sentence processing in PWA. All studies on the treatment of sentence comprehension using MT were found by searching Cochrane Library, ISI Web of Knowledge, Google Scholar, Pubmed, and Scopus from 1986 until December 2023, with the combination of these search keywords: 'aphasia, sentence, comprehension, mapping therapy, treatment, rehabilitation'. All studies (single-subject or group design) on the treatment of sentence comprehension using MT in PWA were reviewed. An adaptation of the Cochrane Collaboration's risk of bias (RoB) tool was used to assess the risk of bias (RoB) in the reviewed studies. A total of 14 studies on 81 participants were selected and reviewed. All studies (13 studies) had employed a single-subject design, except for one study that had used a group design. Twelve studies (86%) showed that MT is effective in the remediation of sentence comprehension in PWA. Generalization to untrained sentences similar to the trained structure was also observed in 12 studies (86%). Generalization to untrained structures (usually passive sentences) was limited. In addition, cross-modal improvement in sentence production was observed in 8 studies (57%). This review highlights the need for a more detailed investigation of the effect of MT on cross-modal generalization using elicited production of the sentence types trained during comprehension treatment.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic and phonemic verbal fluency tests: Normative data for the Turkish population.","authors":"Tuğçe Şentürk, Derya Durusu Emek-Savaş","doi":"10.1080/23279095.2024.2391525","DOIUrl":"https://doi.org/10.1080/23279095.2024.2391525","url":null,"abstract":"<p><p>Semantic and phonemic verbal fluency tests are widely used neuropsychological assessments of executive functions and language skills and are easy to administer. The aim of this study was to determine the impact of age, education, and gender on semantic and phonemic verbal fluency tests and to establish normative data for Turkish adults aged between 18 and 86 years. The results revealed significant main effects of age and education on all subscores of verbal fluency tests. Furthermore, an interaction effect between age and education was observed on semantic fluency and letter K fluency scores. While no significant differences were found among the 18-29, 30-39, and 40-49 age groups in any of the subscores, performance on the tests decreased with increasing age. Significant differences were observed among all education groups in all subscores. No main or interaction effects of gender were found on any subscore. These normative data could prove useful in clinical and research settings for the assessment of cognitive impairment.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptation of the Cognitive Screening Test (Triagem Cognitiva - TRIACOG) for computer-mediated assessments: TRIACOG-Online.","authors":"Luana Comito Muner, Jaqueline de Carvalho Rodrigues, Natália Becker","doi":"10.1080/23279095.2024.2398118","DOIUrl":"https://doi.org/10.1080/23279095.2024.2398118","url":null,"abstract":"<p><p>This study aims to present the adaptation, evidence of content validity and results of a pilot study of the Cognitive Screening Test - Online (TRIACOG-Online) in a clinical sample of patients after stroke. The process comprised four stages: 1) Adaptation of the instructions, stimulus and responses; 2) Seven experts analyzed the equivalence between the previous printed version and the online version; 3) A pilot study was carried out with seven adults who had experienced a stroke in order to check the comprehension and feasibility of the items; and 4) The development of the final version of TRIACOG-Online. Expert validity testing of the questionnaire yielded a content validity index (CVI) of 100% for correspondence and construct in 13 items, and a CVI of 87.71% in four items. In the pilot study, problems related to the internet led to the decision to use a single section form. No difficulties were observed in carrying out the tasks and understanding the instructions. Participants reported being able to adequately visualize the stimuli and remain motivated to complete the tasks presented. It was shown that TRIACOG-Online evaluated the same constructs as the pencil-and-paper version, can be used in remote neuropsychological assessments and face-to-face settings.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CVLT-II short form forced choice recognition in a clinical dementia sample: Cautions for performance validity assessment.","authors":"Karl S Grewal, Michaella Trites, Andrew Kirk, Stuart W S MacDonald, Debra Morgan, Rory Gowda-Sookochoff, Megan E O'Connell","doi":"10.1080/23279095.2022.2079088","DOIUrl":"10.1080/23279095.2022.2079088","url":null,"abstract":"<p><p>Performance validity tests are susceptible to false positives from genuine cognitive impairment (e.g., dementia); this has not been explored with the short form of the California Verbal Learning Test II (CVLT-II-SF). In a memory clinic sample, we examined whether CVLT-II-SF Forced Choice Recognition (FCR) scores differed across diagnostic groups, and how the severity of impairment [Clinical Dementia Rating Sum of Boxes (CDR-SOB) or Mini-Mental State Examination (MMSE)] modulated test performance. Three diagnostic groups were identified: subjective cognitive impairment (SCI; <i>n</i> = 85), amnestic mild cognitive impairment (a-MCI; <i>n</i> = 17), and dementia due to Alzheimer's Disease (AD; <i>n</i> = 50). Significant group differences in FCR were observed using one-way ANOVA; <i>post-hoc</i> analysis indicated the AD group performed significantly worse than the other groups. Using multiple regression, FCR performance was modeled as a function of the diagnostic group, severity (MMSE or CDR-SOB), and their interaction. Results yielded significant main effects for MMSE and diagnostic group, with a significant interaction. CDR-SOB analyses were non-significant. Increases in impairment disproportionately impacted FCR performance for persons with AD, adding caution to research-based cutoffs for performance validity in dementia. Caution is warranted when assessing performance validity in dementia populations. Future research should examine whether CVLT-II-SF-FCR is appropriately specific for best-practice testing batteries for dementia.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42746377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From word list learning to successful shopping: The neuropsychological assessment continuum from cognitive tests to cognition in everyday life.","authors":"Anne-Fleur Domensino, Jonathan Evans, Caroline van Heugten","doi":"10.1080/23279095.2022.2079087","DOIUrl":"10.1080/23279095.2022.2079087","url":null,"abstract":"<p><p>Cognitive deficits are common after brain injury and can be measured in various ways. Many neuropsychological tests are designed to measure specific cognitive deficits, and self-report questionnaires capture cognitive complaints. Measuring cognition in daily life is important in rehabilitating the abilities required to undertake daily life activities and participate in society. However, assessment of cognition in daily life is often performed in a non-standardized manner. In this opinion paper we discuss the various types of assessment of cognitive functioning and their associated instruments. Drawing on existing literature and evidence from experts in the field, we propose a framework that includes seven dimensions of cognition measurement, reflecting a continuum ranging from controlled test situations through to measurement of cognition in daily life environments. We recommend multidimensional measurement of cognitive functioning in different categories of the continuum for the purpose of diagnostics, evaluation of cognitive rehabilitation treatment, and assessing capacity after brain injury.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47914212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The trail less traveled: Analytical approach for creating shortened versions for virtual reality-based color trails test.","authors":"Meytal Wilf, Noa Ben Yair, W Geoffrey Wright, Meir Plotnik","doi":"10.1080/23279095.2022.2065204","DOIUrl":"10.1080/23279095.2022.2065204","url":null,"abstract":"<p><p>The Color Trails Test (\"CTT\") is among the most popular neuropsychological assessment tests of executive function, targeting sustained visual attention (Trails A), and divided attention (Trails B). During the pen-and-paper (P&P) test, the participant traces 25 consecutive numbered targets marked on a page, and the completion time is recorded. In many cases, multiple assessments are performed on the same individual, either under varying experimental conditions or at several timepoints. However, repeated testing often results in learning and fatigue effects, which confound test outcomes. To mitigate these effects, we set the grounds for developing shorter versions of the CTT (<25 targets), using virtual reality (VR) based CTT (VR-CTT). Our aim was to discover the minimal set of targets that is sufficient for maintaining concurrent validity with the CTT including differentiation between age groups, and the difference between Trails A and B. To this aim, healthy participants in three age groups (total <i>N</i> = 165; young, middle-aged, or older adults) performed both the P&P CTT, and one type of VR-CTT (immersive head-mounted-device VR, large-scale 3D VR, or tablet). A subset of 13 targets was highly correlated with overall task completion times in all age groups and platforms (<i>r</i> > 0.8). We tested construct validity and found that the shortened-CTT preserved differences between Trails A and B (<i>p</i> < 0.001), showed concurrent validity relative to the P&P scores (<i>r</i> > 0.5; <i>p</i> < 0.05), and differentiated between age groups (<i>p</i> < 0.05). These findings open the possibility for shortened \"CTT-versions\", to be used in repeated-measures experiments or longitudinal studies, with potential implications for shortening neurocognitive assessment protocols.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45832581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neuropsychological test using machine learning for cognitive impairment screening.","authors":"Chanda Simfukwe, SangYun Kim, Seong Soo An, Young Chul Youn","doi":"10.1080/23279095.2022.2078210","DOIUrl":"10.1080/23279095.2022.2078210","url":null,"abstract":"<p><strong>Objectives: </strong>Neuropsychological tests (NPTs) are widely used tools to evaluate cognitive functioning. The interpretation of these tests can be time-consuming and requires a specialized clinician. For this reason, we trained machine learning models that detect normal controls (NC), cognitive impairment (CI), and dementia among subjects.</p><p><strong>Patients and methods: </strong>A total number of 14,927 subject datasets were collected from the formal neuropsychological assessments Seoul Neuropsychological Screening Battery (SNSB) by well-qualified neuropsychologists. The dataset included 44 NPTs of SNSB, age, education level, and diagnosis of each participant. The dataset was preprocessed and classified according to three different classes NC, CI, and dementia. We trained machine-learning with a supervised machine learning classifier algorithm support vector machine (SVM) 30 times with classification from scikit-learn (https://scikit-learn.org/stable/) to distinguish the prediction accuracy, sensitivity, and specificity of the models; NC <i>vs.</i> CI, NC <i>vs.</i> dementia, and NC <i>vs.</i> CI <i>vs.</i> dementia. Confusion matrixes were plotted using the testing dataset for each model.</p><p><strong>Results: </strong>The trained model's 30 times mean accuracies for predicting cognitive states were as follows; NC <i>vs.</i> CI model was 88.61 ± 1.44%, NC <i>vs.</i> dementia model was 97.74 ± 5.78%, and NC <i>vs.</i> CI <i>vs.</i> dementia model was 83.85 ± 4.33%. NC <i>vs.</i> dementia showed the highest accuracy, sensitivity, and specificity of 97.74 ± 5.78, 97.99 ± 5.78, and 96.08 ± 4.33% in predicting dementia among subjects, respectively.</p><p><strong>Conclusion: </strong>Based on the results, the SVM algorithm is more appropriate in training models on an imbalanced dataset for a good prediction accuracy compared to natural network and logistic regression algorithms. The NC <i>vs.</i> dementia machine-learning trained model with SVM based on NPTs SNSB dataset could assist neuropsychologists in classifying the cognitive function of subjects.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41306657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are correlations among behavioral decision making tasks moderated by simulated cognitive impairment?","authors":"Melissa T Buelow, Wesley R Barnhart, Thomas Crook, Julie A Suhr","doi":"10.1080/23279095.2022.2088289","DOIUrl":"10.1080/23279095.2022.2088289","url":null,"abstract":"<p><p>Behavioral decision making tasks are common in research settings, with only the Iowa Gambling Task available for clinical assessments. However, correlations among these tasks are low, indicating each may assess a distinct component of decision making. In addition, it is unclear whether these tasks are sensitive to invalid performance or even simulated impairment. The present study examined relationships among decision making tasks and whether simulated impairment moderates the relationships among them. Across two studies (Study 1: <i>n</i> = 166, Study 2: <i>n</i> = 130), undergraduate student participants were asked to try their best or to simulate a specific diagnosis (Attention-Deficit/Hyperactivity Disorder; Study 1), decision making impairment (Study 2), or general cognitive impairment (Study 2). They then completed a battery of tests including embedded and standalone performance validity tests (PVTs) and three behavioral decision making tasks. Across studies, participants simulating impairment were not distinguishable from controls on any of the behavioral tasks. Few significant correlations emerged among tasks across studies and the pattern of relationships between tasks did not differ on the basis of simulator or PVT failure status. Collectively, our findings suggest that these tasks may not be vulnerable to simulated cognitive impairment, and that the tasks measure largely non-overlapping aspects of decision making.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40269827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The relationship of the clinician-rated Functional Status Interview with executive functioning.","authors":"Timothy J Arentsen, Whitney J Stubbs, Suzanne H Lease, Marcy C Adler, Elin Ovrebo, Jennifer L Jacobson","doi":"10.1080/23279095.2022.2084619","DOIUrl":"10.1080/23279095.2022.2084619","url":null,"abstract":"<p><p>Self/informant-report and performance-based instruments are typically used to measure activities of daily living (ADLs) and instrumental activities of daily living (IADLs). Minimal attention has focused on clinician-rated measures. Executive functioning (EF) contributes significantly to functional independence, and the validity of functional status measures has been examined through its relationship to EF scores. The current study used a clinical sample of older U.S. Veterans who completed a neurocognitive evaluation (<i>n</i> = 266). The psychometric properties of a novel, clinician-rated Functional Status Interview (FSI) and its relationship to EF measures, including the Frontal Assessment Battery (FAB) and Trail Making Test (TMT-A and TMT-B), were explored. Two FSI factors (IADL and ADL) emerged with all items loading strongly onto the subscales as predicted. EF correlated strongly with IADL but had small to medium correlations with ADL. In regression models that controlled for sociodemographic variables, all EF measures uniquely contributed to the IADL model, but only FAB and TMT-A contributed to the model for ADL. Notably, results may have been limited by prominent floor effects on TMT-B. Overall, the FSI is a promising measure with demonstrated content validity. Thus, there is preliminary support for clinicians to incorporate multiple sources of information to rate functional status using the FSI.</p>","PeriodicalId":51308,"journal":{"name":"Applied Neuropsychology-Adult","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45298926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}