{"title":"A systematic review of conversational AI tools in ELT: Publication trends, tools, research methods, learning outcomes, and antecedents","authors":"Wan Yee Winsy Lai, Ju Seong Lee","doi":"10.1016/j.caeai.2024.100291","DOIUrl":"10.1016/j.caeai.2024.100291","url":null,"abstract":"<div><p>This review analyzed the trends in conversational AI tools in ELT from January 2013 to November 2023. The study examined 32 papers, focusing on publication trends, tool types, research methods, learning outcomes, and factors influencing their use. Findings revealed a gradual increase in publications, with 4 (12%) from 2013 to 2021, 13 (41%) in 2022, and 15 (47%) in 2023. All studies (100%) were conducted in Asian EFL contexts. Among the AI chatbots, <em>Google Assistant</em> (25%) was the most widely used. Quasi-experimental (45%) and cross-section (41%) research designs were commonly employed. Mixed-method (50%) approaches were prevalent for data collection and analysis. Conversational AI yielded positive outcomes in affective (43%) and cognitive skills (41%). The main factors influencing user perceptions or behaviors were individual (47%) and microsystem layers (31%). Future studies should (a) include diverse contexts beyond Asia, (b) consider the use of up-to-date tools (e.g., <em>ChatGPT</em>), (c) employ rigorous experimental designs, (d) explore behavioral learning outcomes, and (e) investigate broader environmental factors. 
This systematic review enhances current knowledge of recent research trends, identifies environmental factors influencing the use of conversational AI tools in ELT, and provides insights for future research and practice in this rapidly evolving field.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100291"},"PeriodicalIF":0.0,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000948/pdfft?md5=68635526e04d61d702e1a64da49f7651&pid=1-s2.0-S2666920X24000948-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142157818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI chatbots in programming education: Students’ use in a scientific computing course and consequences for learning","authors":"Suzanne Groothuijsen , Antoine van den Beemt , Joris C. Remmers , Ludo W. van Meeuwen","doi":"10.1016/j.caeai.2024.100290","DOIUrl":"10.1016/j.caeai.2024.100290","url":null,"abstract":"<div><p>Teaching and learning in higher education require adaptation following students' inevitable use of AI chatbots. This study contributes to the empirical literature on students' use of AI chatbots and how they influence learning. The aim of this study is to identify how to adapt programming education in higher engineering education. A mixed-methods case study was conducted of a scientific computing course in a Mechanical Engineering Master's program at a Eindhoven University of Technology in the Netherlands. Data consisted of 29 student questionnaires, a semi-structured group interview with three students, a semi-structured interview with the teacher, and 29 students' grades. Results show that students used ChatGPT for error checking and debugging of code, increasing conceptual understanding, generating, and optimizing solution code, explaining code, and solving mathematical problems. While students reported advantages of using ChatGPT, the teacher expressed concerns over declining code quality and student learning. Furthermore, both students and teacher perceived a negative influence from ChatGPT usage on pair programming, and consequently on student collaboration. The findings suggest that learning objectives should be formulated in more detail, to highlight essential programming skills, and be expanded to include the use of AI tools. 
Complex programming assignments remain appropriate in programming education, but pair programming as a didactic approach should be reconsidered in light of the growing use of AI chatbots.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100290"},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000936/pdfft?md5=5dd3b00f57974dcb226523ac3d26b418&pid=1-s2.0-S2666920X24000936-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142150716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing and validating measures for AI literacy tests: From self-reported to objective measures","authors":"Thomas K.F. Chiu , Yifan Chen , King Woon Yau , Ching-sing Chai , Helen Meng , Irwin King , Savio Wong , Yeung Yam","doi":"10.1016/j.caeai.2024.100282","DOIUrl":"10.1016/j.caeai.2024.100282","url":null,"abstract":"<div><p>The majority of AI literacy studies have designed and developed self-reported questionnaires to assess AI learning and understanding. These studies assessed students' perceived AI capability rather than AI literacy because self-perceptions are seldom an accurate account of true measures. International assessment programs that use objective measures to assess science, mathematical, digital, and computational literacy back up this argument. Furthermore, because AI education research is still in its infancy, the current definition of AI literacy in the literature may not meet the needs of young students. Therefore, this study aims to develop and validate an AI literacy test for school students within the interdisciplinary project known as AI4future. Engineering and education researchers created and selected 25 multiple-choice questions to accomplish this goal, and school teachers validated them while developing an AI curriculum for middle schools. 2390 students in grades 7 to 9 took the test. We used a Rasch model to investigate the discrimination, reliability, and validity of the items. The results showed that the model met the unidimensionality assumption and demonstrated a set of reliable and valid items. They indicate the quality of the test items. 
The test enables AI education researchers and practitioners to appropriately evaluate their AI-related education interventions.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100282"},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000857/pdfft?md5=0fca2149c7cdd2f757af3d4d0dfabea4&pid=1-s2.0-S2666920X24000857-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142171619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The effect of student acceptance on learning outcomes: AI-generated short videos versus paper materials","authors":"Yidi Zhang , Margarida Lucas , Pedro Bem-haja , Luís Pedro","doi":"10.1016/j.caeai.2024.100286","DOIUrl":"10.1016/j.caeai.2024.100286","url":null,"abstract":"<div><p>The use of video and paper-based materials is commonly widespread in foreign language learning (FLL). It is well established that the level of acceptance of these materials influences learning outcomes, but there is lack of evidence regarding the use and related impact of videos generated by artificial intelligence (AI) on these aspects. This paper used linear mixed models and path analysis to investigate the influence of student acceptance of AI-generated short videos on learning outcomes compared to paper-based materials. Student acceptance was assessed based on perceived ease of use (PEU), perceived usefulness (PU), attitude (A), intentions (I), and concentration (C). The results indicate that both AI-generated short videos and paper-based materials can significantly enhance learning outcomes. AI-generated short videos are more likely to be accepted by students with lower pre-test scores and may lead to more significant learning outcomes when PEU, PU, A, I and C are at higher levels. On the other hand, paper-based materials are more likely to be accepted by students with higher pre-test scores and may lead to more significant learning outcomes when PEU, PU, A, I and C are at lower levels. 
These findings offer empirical evidence supporting the use of AI-generated short videos in FLL and provide suggestions for selecting appropriate learning materials in different FLL contexts.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100286"},"PeriodicalIF":0.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000894/pdfft?md5=a9ad201209bad5172c392fac6bbb6f8e&pid=1-s2.0-S2666920X24000894-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142096399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Teachers' and students' perceptions of AI-generated concept explanations: Implications for integrating generative AI in computer science education","authors":"Soohwan Lee, Ki-Sang Song","doi":"10.1016/j.caeai.2024.100283","DOIUrl":"10.1016/j.caeai.2024.100283","url":null,"abstract":"<div><p>The educational application of Generative AI (GAI) has garnered significant interest, sparking discussions about the pedagogical value of GAI-generated content. This study investigates the perceived effectiveness of concept explanations produced by GAI compared to those created by human teachers, focusing on the programming concepts of sequence, selection, and iteration. The research also explores teachers' and students' ability to discern the source of these explanations. Participants included 11 teachers and 70 sixth-grade students who were presented with concept explanations created by teachers or generated by ChatGPT. They were asked to evaluate the helpfulness of the explanations and identify their source. Results indicated that teachers found GAI-generated explanations more helpful for sequence and selection concepts, while preferring teacher-created explanations for iteration (χ2(2, N = 11) = 10.062, p = .007, ω = .595). In contrast, students showed varying abilities to distinguish between AI-generated and teacher-created explanations across concepts, with significant differences observed (χ2(2, N = 70) = 22.127, p < .001, ω = .399). Notably, students demonstrated difficulty in identifying the source of explanations for the iteration concept (χ2(1, N = 70) = 8.45, p = .004, φ = .348). Qualitative analysis of open-ended responses revealed that teachers and students employed similar criteria for evaluating explanations but differed in their ability to discern the source. Teachers focused on pedagogical effectiveness, while students prioritized relatability and clarity. 
The findings highlight the importance of considering both teachers' and students' perspectives when integrating GAI into computer science education. The study proposes strategies for designing GAI-based explanations that cater to learners' needs and emphasizes the necessity of explicit AI literacy instruction. Limitations and future research directions are discussed, underlining the need for larger-scale studies and experimental designs that assess the impact of GAI on actual learning outcomes.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100283"},"PeriodicalIF":0.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000869/pdfft?md5=da3079ff70e673eb6248b52e3987a082&pid=1-s2.0-S2666920X24000869-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142150720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring social and computer science students’ perceptions of AI integration in (foreign) language instruction","authors":"Kosta Dolenc , Mihaela Brumen","doi":"10.1016/j.caeai.2024.100285","DOIUrl":"10.1016/j.caeai.2024.100285","url":null,"abstract":"<div><p>Artificial intelligence (AI) has gained acceptance in the field of education. Nevertheless, existing research on AI in education, particularly in foreign language (FL) learning and teaching, is notably limited in scope and depth. In the present study, we addressed this research gap by investigating social and computer science students' perceptions of the integration and use of AI-based technologies in education, focusing specifically on foreign language teaching. Using an online questionnaire, we analysed factors such as students' field of study, gender differences, and the type of AI used. The questionnaire included statements categorised into thematic clusters, with responses measured on a five-point Likert scale. Statistical analysis, including chi-square tests and Cohen's d, revealed that individuals studying computer science, males, and supporters of generative AI are more likely to use AI tools for educational purposes. They perceive fewer barriers to the integration of AI into FL education. Social science students and women are less likely to use AI tools in FL education and express scepticism about their potential to improve academic outcomes. They tend to be more critical or cautious regarding the role of AI in FL education. They view AI as a valuable tool that enhances the learning experience but, at the same time, recognise the irreplaceable role of human teachers. The study highlights the need for targeted educational initiatives to address gender and disciplinary gaps in AI adoption, promote informed discussions on AI in education, and develop balanced AI integration strategies to improve FL learning. 
These findings suggest educators and policymakers should implement comprehensive AI training programs and ethical guidelines for responsible AI use in (FL) education.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100285"},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000882/pdfft?md5=d0d6da316006cb052c80cb05f0e2d50c&pid=1-s2.0-S2666920X24000882-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142089115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI in essay-based assessment: Student adoption, usage, and performance","authors":"David Smerdon","doi":"10.1016/j.caeai.2024.100288","DOIUrl":"10.1016/j.caeai.2024.100288","url":null,"abstract":"<div><p>The rise of generative artificial intelligence (AI) has sparked debate in education about whether to ban AI tools for assessments. This study explores the adoption and impact of AI tools on an undergraduate research proposal assignment using a mixed-methods approach. From a sample of 187 students, 69 completed a survey, with 46 (67%) reporting the use of AI tools. AI-using students were significantly more likely to be higher-performing, with a pre-semester average GPA of 5.46 compared to 4.92 for non-users (7-point scale, <em>p</em> = .025). Most students used AI assistance for the highest-weighted components of the task, such as the research topic and methods section, using AI primarily for generating research ideas and gathering feedback. Regression analysis suggests that there was no statistically significant effect of AI use on student performance in the task, with the preferred regression specification estimating an effect size of less than 1 mark out of 100. The qualitative analysis identified six main themes of AI usage: idea generation, writing assistance, literature search, grammar checking, statistical analysis, and overall learning impact. 
These findings indicate that while AI tools are widely adopted, their impact on academic performance is neutral, suggesting a potential for integration into educational practices without compromising academic integrity.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100288"},"PeriodicalIF":0.0,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000912/pdfft?md5=9fcea80886fdddc7bc070209d4d8039a&pid=1-s2.0-S2666920X24000912-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142117545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating technological and instructional factors influencing the acceptance of AIGC-assisted design courses","authors":"Qianling Jiang , Yuzhuo Zhang , Wei Wei , Chao Gu","doi":"10.1016/j.caeai.2024.100287","DOIUrl":"10.1016/j.caeai.2024.100287","url":null,"abstract":"<div><h3>Purpose</h3><p>This study aims to explore the key factors influencing design students' acceptance of AIGC-assisted design courses, providing specific strategies for course design to help students better learn this new technology and enhance their competitiveness in the design industry. The research focuses on evaluating technological and course-level factors, providing actionable insights for course developers.</p></div><div><h3>Design/methodology/approach</h3><p>The research establishes and validates evaluation dimensions and indicators affecting acceptance using structured questionnaires to collect data and employs factor analysis and weight analysis to determine the importance of each factor.</p></div><div><h3>Findings</h3><p>The results of the study reveal that the main dimensions influencing student acceptance include technology application and innovation, teaching content and methods, and extracurricular learning support and resources. Regarding indicators, data privacy, timeliness of extracurricular learning support, and availability of extracurricular learning resources are identified as the most critical factors.</p></div><div><h3>Originality</h3><p>The uniqueness of this study lies in providing specific course design strategies for AIGC-assisted design courses based on the weight analysis results for different dimensions and indicators. These strategies aim to help students better adapt to these courses and enhance their acceptance. 
Furthermore, the conclusions and recommendations of this study offer valuable insights for educational institutions and instructors, promoting further optimization and development of AIGC-assisted design courses.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100287"},"PeriodicalIF":0.0,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000900/pdfft?md5=c9607b5a429406e445f75d5e9d896936&pid=1-s2.0-S2666920X24000900-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142076787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the psychometric properties of ChatGPT-generated questions","authors":"Shreya Bhandari , Yunting Liu , Yerin Kwak , Zachary A. Pardos","doi":"10.1016/j.caeai.2024.100284","DOIUrl":"10.1016/j.caeai.2024.100284","url":null,"abstract":"<div><p>Not much is known about how LLM-generated questions compare to gold-standard, traditional formative assessments concerning their difficulty and discrimination parameters, which are valued properties in the psychometric measurement field. We follow a rigorous measurement methodology to compare a set of ChatGPT-generated questions, produced from one lesson summary in a textbook, to existing questions from a published Creative Commons textbook. To do this, we collected and analyzed responses from 207 test respondents who answered questions from both item pools and used a linking methodology to compare IRT properties between the two pools. We find that neither the difficulty nor discrimination parameters of the 15 items in each pool differ statistically significantly, with some evidence that the ChatGPT items were marginally better at differentiating different respondent abilities. The response time also does not differ significantly between the two sources of items. The ChatGPT-generated items showed evidence of unidimensionality and did not affect the unidimensionality of the original set of items when tested together. Finally, through a fine-grained learning objective labeling analysis, we found greater similarity in the learning objective distribution of ChatGPT-generated items and the items from the target OpenStax lesson (0.9666) than between ChatGPT-generated items and adjacent OpenStax lessons (0.6859 for the previous lesson and 0.6153 for the subsequent lesson). 
These results corroborate our conclusion that generative AI can produce algebra items of similar quality to existing textbook questions measuring the same construct or constructs.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100284"},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000870/pdfft?md5=91d7e8564077ef80c2ba5f18fa4e22fb&pid=1-s2.0-S2666920X24000870-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Understanding the perception of design students towards ChatGPT","authors":"Vigneshkumar Chellappa, Yan Luximon","doi":"10.1016/j.caeai.2024.100281","DOIUrl":"10.1016/j.caeai.2024.100281","url":null,"abstract":"<div><p>The benefits of artificial intelligence (AI)-enabled language models, such as ChatGPT, have contributed to their growing popularity in education. However, there is currently a lack of evidence regarding the perception of ChatGPT, specifically among design students. This study aimed to understand the product design (PD) and user experience design (UXD) students' views on ChatGPT and focused on an Indian university. The study employed a survey research design, utilizing questionnaires as the primary data collection method. The collected data (n = 149) was analyzed using descriptive statistics (i.e., frequency, percentage, average, and standard deviation (SD). Inferential statistics (i.e., one-way ANOVA) was used to understand the significant differences between the programs of study, gender, and academic level. The findings indicate that the students expressed admiration for the capabilities of ChatGPT and found it to be an interesting and helpful tool for their studies. In addition, the students' motivation towards using ChatGPT was moderate. Furthermore, the study observed significant differences between PD and UXD students and differences based on gender and academic level on certain variables. Notably, UXD students reported that ChatGPT does not understand their questions well, and formulating effective prompts for the tool was more challenging than for PD students. Based on the findings, the study recommends how educators should consider integrating ChatGPT into design education curricula and pedagogical practices. 
The insights aim to contribute to refining the use of ChatGPT in educational settings and exploring avenues for improving its effectiveness, ultimately advancing the field of AI in design education.</p></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"7 ","pages":"Article 100281"},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666920X24000845/pdfft?md5=7f886da5dc4c10e4786939013f2aae97&pid=1-s2.0-S2666920X24000845-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142058527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}