{"title":"Training and implementation of handheld ultrasound technology at Georgetown Public Hospital Corporation in Guyana: a virtual learning cohort study","authors":"Michelle Bui, Adrian Fernandez, Budheshwar Ramsukh, Onika Noel, Chris Prashad, David Bayne","doi":"10.3352/jeehp.2023.20.11","DOIUrl":"10.3352/jeehp.2023.20.11","url":null,"abstract":"<p><p>A virtual point-of-care ultrasound (POCUS) education program was initiated to introduce handheld ultrasound technology to Georgetown Public Hospital Corporation in Guyana, a low-resource setting. We studied ultrasound competency and participant satisfaction in a cohort of 20 physicians-in-training through the urology clinic. The program consisted of a training phase, where they learned how to use the Butterfly iQ ultrasound, and a mentored implementation phase, where they applied their skills in the clinic. The assessment was through written exams and an objective structured clinical exam (OSCE). Fourteen students completed the program. The written exam scores were 3.36/5 in the training phase and 3.57/5 in the mentored implementation phase, and all students earned 100% on the OSCE. Students expressed satisfaction with the program. Our POCUS education program demonstrates the potential to teach clinical skills in low-resource settings and the value of virtual global health partnerships in advancing POCUS and minimally invasive diagnostics.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"11"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11009011/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9383080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of nursing students’ performance of cardiopulmonary resuscitation between 1 semester and 3 semesters of manikin simulations in the Czech Republic: a non-randomized controlled study","authors":"Vera Spatenkova, Iveta Zvercova, Zdenek Jindrisek, Ivana Veverkova, Eduard Kuriscak","doi":"10.3352/jeehp.2023.20.9","DOIUrl":"10.3352/jeehp.2023.20.9","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to assess the effect of simulation teaching in critical care courses in a nursing study program on the quality of chest compressions of cardiopulmonary resuscitation (CPR).</p><p><strong>Methods: </strong>An observational cross-sectional study was conducted at the Faculty of Health Studies at the Technical University of Liberec. The success rate of CPR was tested in exams comparing 2 groups of students, totaling 66 different individuals, who completed half a year (group 1: intermediate exam with model simulation) or 1.5 years (group 2: final theoretical critical care exam with model simulation) of undergraduate nursing critical care education taught completely with a Laerdal SimMan 3G simulator. The quality of CPR was evaluated according to 4 components: compression depth, compression rate, time of correct frequency, and time of correct chest release.</p><p><strong>Results: </strong>Compression depth was significantly higher in group 2 than in group 1 (P=0.016). There were no significant differences in the compression rate (P=0.210), time of correct frequency (P=0.586), or time of correct chest release (P=0.514).</p><p><strong>Conclusion: </strong>Nursing students who completed the final critical care exam showed an improvement in compression depth during CPR after 2 additional semesters of critical care teaching compared to those who completed the intermediate exam. The above results indicate that regularly scheduled CPR training is necessary during critical care education for nursing students.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"9"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10129870/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9413183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can an artificial intelligence chatbot be the author of a scholarly article?","authors":"Ju Yoen Lee","doi":"10.3352/jeehp.2023.20.6","DOIUrl":"10.3352/jeehp.2023.20.6","url":null,"abstract":"<p><p>At the end of 2022, the appearance of ChatGPT, an artificial intelligence (AI) chatbot with amazing writing ability, caused a great sensation in academia. The chatbot turned out to be very capable, but also capable of deception, and the news broke that several researchers had listed the chatbot (including its earlier version) as co-authors of their academic papers. In response, Nature and Science expressed their position that this chatbot cannot be listed as an author in the papers they publish. Since an AI chatbot is not a human being, in the current legal system, the text automatically generated by an AI chatbot cannot be a copyrighted work; thus, an AI chatbot cannot be an author of a copyrighted work. Current AI chatbots such as ChatGPT are much more advanced than search engines in that they produce original text, but they still remain at the level of a search engine in that they cannot take responsibility for their writing. For this reason, they also cannot be authors from the perspective of research ethics.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"6"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10033224/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9157593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Factors influencing the learning transfer of nursing students in a non-face-to-face educational environment during the COVID-19 pandemic in Korea: a cross-sectional study using structural equation modeling.","authors":"Geun Myun Kim, Yunsoo Kim, Seong Kwang Kim","doi":"10.3352/jeehp.2023.20.14","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.14","url":null,"abstract":"<p><strong>Purpose: </strong>The aim of this study was to identify factors influencing the learning transfer of nursing students in a non-face-to-face educational environment through structural equation modeling and suggest ways to improve the transfer of learning.</p><p><strong>Methods: </strong>In this cross-sectional study, data were collected via online surveys from February 9 to March 1, 2022, from 218 nursing students in Korea. Learning transfer, learning immersion, learning satisfaction, learning efficacy, self-directed learning ability, and information technology utilization ability were analyzed using IBM SPSS for Windows ver. 22.0 and AMOS ver. 22.0.</p><p><strong>Results: </strong>The assessment of structural equation modeling showed adequate model fit, with normed χ2=1.74 (P<0.024), goodness-of-fit index=0.97, adjusted goodness-of-fit index=0.93, comparative fit index=0.98, root mean square residual=0.02, Tucker-Lewis index=0.97, normed fit index=0.96, and root mean square error of approximation=0.06. In a hypothetical model analysis, 9 out of 11 pathways of the hypothetical structural model for learning transfer in nursing students were statistically significant. Learning self-efficacy and learning immersion of nursing students directly affected learning transfer, and subjective information technology utilization ability, self-directed learning ability, and learning satisfaction were variables with indirect effects. The explanatory power of immersion, satisfaction, and self-efficacy for learning transfer was 44.4%.</p><p><strong>Conclusion: </strong>The assessment of structural equation modeling indicated an acceptable fit. It is necessary to improve the transfer of learning through the development of a self-directed program for learning ability improvement, including the use of information technology in nursing students’ learning environment in non-face-to-face conditions.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"14"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10244801/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9870300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review","authors":"Tae Won Kim","doi":"10.3352/jeehp.2023.20.38","DOIUrl":"10.3352/jeehp.2023.20.38","url":null,"abstract":"<p><p>This study aims to explore ChatGPT’s (GPT-3.5 version) functionalities, including reinforcement learning, diverse applications, and limitations. ChatGPT is an artificial intelligence (AI) chatbot powered by OpenAI’s Generative Pre-trained Transformer (GPT) model. The chatbot’s applications span education, programming, content generation, and more, demonstrating its versatility. ChatGPT can improve education by creating assignments and offering personalized feedback, as shown by its notable performance in medical exams and the United States Medical Licensing Exam. However, concerns include plagiarism, reliability, and educational disparities. It aids in various research tasks, from design to writing, and has shown proficiency in summarizing and suggesting titles. Its use in scientific writing and language translation is promising, but professional oversight is needed for accuracy and originality. It assists in programming tasks like writing code, debugging, and guiding installation and updates. It offers diverse applications, from cheering up individuals to generating creative content like essays, news articles, and business plans. Unlike search engines, ChatGPT provides interactive, generative responses and understands context, making it more akin to human conversation, in contrast to conventional search engines’ keyword-based, non-interactive nature. ChatGPT has limitations, such as potential bias, dependence on outdated data, and revenue generation challenges. Nonetheless, ChatGPT is considered to be a transformative AI tool poised to redefine the future of generative technology. In conclusion, advancements in AI, such as ChatGPT, are altering how knowledge is acquired and applied, marking a shift from search engines to creativity engines. This transformation highlights the increasing importance of AI literacy and the ability to effectively utilize AI in various domains of life.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"38"},"PeriodicalIF":9.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11893184/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139040714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development and validation of the student ratings in clinical teaching scale in Australia: a methodological study","authors":"Pin-Hsiang Huang, Anthony John O'Sullivan, Boaz Shulruf","doi":"10.3352/jeehp.2023.20.26","DOIUrl":"10.3352/jeehp.2023.20.26","url":null,"abstract":"<p><strong>Purpose: </strong>This study aimed to devise a valid measurement for assessing clinical students' perceptions of teaching practices.</p><p><strong>Methods: </strong>A new tool was developed based on a meta-analysis encompassing effective clinical teaching-learning factors. Seventy-nine items were generated using a frequency (never to always) scale. The tool was applied to the University of New South Wales year 2, 3, and 6 medical students. Exploratory and confirmatory factor analysis (exploratory factor analysis [EFA] and confirmatory factor analysis [CFA], respectively) were conducted to establish the tool’s construct validity and goodness of fit, and Cronbach’s α was used for reliability.</p><p><strong>Results: </strong>In total, 352 students (44.2%) completed the questionnaire. The EFA identified student-centered learning, problem-solving learning, self-directed learning, and visual technology (reliability, 0.77 to 0.89). CFA showed acceptable goodness of fit (chi-square P<0.01, comparative fit index=0.930 and Tucker-Lewis index=0.917, root mean square error of approximation=0.069, standardized root mean square residual=0.06).</p><p><strong>Conclusion: </strong>The established tool—Student Ratings in Clinical Teaching (STRICT)—is a valid and reliable tool that demonstrates how students perceive clinical teaching efficacy. STRICT measures the frequency of teaching practices to mitigate the biases of acquiescence and social desirability. Clinical teachers may use the tool to adapt their teaching practices with more active learning activities and to utilize visual technology to facilitate clinical learning efficacy. Clinical educators may apply STRICT to assess how these teaching practices are implemented in current clinical settings.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"26"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10562831/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10158663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Presidential address: improving item validity and adopting computer-based testing, clinical skills assessments, artificial intelligence, and virtual reality in health professions licensing examinations in Korea.","authors":"Hyunjoo Pai","doi":"10.3352/jeehp.2023.20.8","DOIUrl":"https://doi.org/10.3352/jeehp.2023.20.8","url":null,"abstract":"","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"8"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10129871/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9355467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of learner-driven, formative, ad-hoc, prospective assessment of competence in physical therapist clinical education in the United States: a prospective cohort study","authors":"Carey Holleran, Jeffrey Konrad, Barbara Norton, Tamara Burlis, Steven Ambler","doi":"10.3352/jeehp.2023.20.36","DOIUrl":"10.3352/jeehp.2023.20.36","url":null,"abstract":"<p><strong>Purpose: </strong>The purpose of this project was to implement a process for learner-driven, formative, prospective, ad-hoc, entrustment assessment in Doctor of Physical Therapy clinical education. Our goals were to develop an innovative entrustment assessment tool, and then explore whether the tool detected (1) differences between learners at different stages of development and (2) differences within learners across the course of a clinical education experience. We also investigated whether there was a relationship between the number of assessments and change in performance.</p><p><strong>Methods: </strong>A prospective, observational, cohort of clinical instructors (CIs) was recruited to perform learner-driven, formative, ad-hoc, prospective, entrustment assessments. Two entrustable professional activities (EPAs) were used: (1) gather a history and perform an examination and (2) implement and modify the plan of care, as needed. CIs provided a rating on the entrustment scale and provided narrative support for their rating.</p><p><strong>Results: </strong>Forty-nine learners participated across 4 clinical experiences (CEs), resulting in 453 EPA learner-driven assessments. For both EPAs, statistically significant changes were detected both between learners at different stages of development and within learners across the course of a CE. Improvement within each CE was significantly related to the number of feedback opportunities.</p><p><strong>Conclusion: </strong>The results of this pilot study provide preliminary support for the use of learner-driven, formative, ad-hoc assessments of competence based on EPAs with a novel entrustment scale. The number of formative assessments requested correlated with change on the EPA scale, suggesting that formative feedback may augment performance improvement.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"36"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10823263/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138811993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improvement of the clinical skills of nurse anesthesia students using mini-clinical evaluation exercises in Iran: a randomized controlled study.","authors":"Ali Khalafi, Yasamin Sharbatdar, Nasrin Khajeali, Mohammad Hosein Haghighizadeh, Mahshid Vaziri","doi":"10.3352/jeehp.2023.20.12","DOIUrl":"10.3352/jeehp.2023.20.12","url":null,"abstract":"<p><strong>Purpose: </strong>The present study aimed to investigate the effect of a mini-clinical evaluation exercise (CEX) assessment on improving the clinical skills of nurse anesthesia students at Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.</p><p><strong>Methods: </strong>This study started on November 1, 2022, and ended on December 1, 2022. It was conducted among 50 nurse anesthesia students divided into intervention and control groups. The intervention group’s clinical skills were evaluated 4 times using the mini-CEX method. In contrast, the same skills were evaluated in the control group based on the conventional method—that is, general supervision by the instructor during the internship and a summative evaluation based on a checklist at the end of the course. The intervention group students also filled out a questionnaire to measure their satisfaction with the mini-CEX method.</p><p><strong>Results: </strong>The mean score of the students in both the control and intervention groups increased significantly on the post-test (P<0.0001), but the improvement in the scores of the intervention group was significantly greater compared with the control group (P<0.0001). The overall mean score for satisfaction in the intervention group was 76.3 out of a maximum of 95.</p><p><strong>Conclusion: </strong>The findings of this study showed that using mini-CEX as a formative evaluation method to evaluate clinical skills had a significant effect on the improvement of nurse anesthesia students’ clinical skills, and they had a very favorable opinion about this evaluation method.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"12"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10209614/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9524415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of ChatGPT, Bard, Claude, and Bing on the Peruvian National Licensing Medical Examination: a cross-sectional study.","authors":"Betzy Clariza Torres-Zegarra, Wagner Rios-Garcia, Alvaro Micael Ñaña-Cordova, Karen Fatima Arteaga-Cisneros, Xiomara Cristina Benavente Chalco, Marina Atena Bustamante Ordoñez, Carlos Jesus Gutierrez Rios, Carlos Alberto Ramos Godoy, Kristell Luisa Teresa Panta Quezada, Jesus Daniel Gutierrez-Arratia, Javier Alejandro Flores-Cohaila","doi":"10.3352/jeehp.2023.20.30","DOIUrl":"10.3352/jeehp.2023.20.30","url":null,"abstract":"<p><strong>Purpose: </strong>We aimed to describe the performance and evaluate the educational value of justifications provided by artificial intelligence chatbots, including GPT-3.5, GPT-4, Bard, Claude, and Bing, on the Peruvian National Medical Licensing Examination (P-NLME).</p><p><strong>Methods: </strong>This was a cross-sectional analytical study. On July 25, 2023, each multiple-choice question (MCQ) from the P-NLME was entered into each chatbot (GPT-3.5, GPT-4, Bing, Bard, and Claude) 3 times. Then, 4 medical educators categorized the MCQs in terms of medical area, item type, and whether the MCQ required Peru-specific knowledge. They assessed the educational value of the justifications from the 2 top performers (GPT-4 and Bing).</p><p><strong>Results: </strong>GPT-4 scored 86.7% and Bing scored 82.2%, followed by Bard and Claude, and the historical performance of Peruvian examinees was 55%. Among the factors associated with correct answers, only MCQs that required Peru-specific knowledge had lower odds (odds ratio, 0.23; 95% confidence interval, 0.09-0.61), whereas the remaining factors showed no associations. In assessing the educational value of justifications provided by GPT-4 and Bing, neither showed any significant differences in certainty, usefulness, or potential use in the classroom.</p><p><strong>Conclusion: </strong>Among chatbots, GPT-4 and Bing were the top performers, with Bing performing better at Peru-specific MCQs. Moreover, the educational value of justifications provided by GPT-4 and Bing could be deemed appropriate. However, it is essential to start addressing the educational value of these chatbots, rather than merely their performance on examinations.</p>","PeriodicalId":46098,"journal":{"name":"Journal of Educational Evaluation for Health Professions","volume":"20 ","pages":"30"},"PeriodicalIF":4.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11009012/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138048169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}