Expert panel as reference standard procedure in diagnostic accuracy studies: a systematic scoping review and methodological guidance
Bas E Kellerhuis, Kevin Jenniskens, Mike P T Kusters, Ewoud Schuit, Lotty Hooft, Karel G M Moons, Johannes B Reitsma
Diagnostic and prognostic research 9(1):12, published 2025-05-13. DOI: 10.1186/s41512-025-00195-7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12070646/pdf/

Background: In diagnostic accuracy studies, when no reference standard test is available, a group of experts combined in an expert panel is often used to assess the presence of the target condition from multiple relevant pieces of patient information. Based on the expert panel's judgment, the accuracy of a test or model can be determined. Methodological choices in the design and analysis of expert panel procedures, as well as the quality of reporting, have been shown to vary considerably between studies. This review maps the current landscape of expert panels used as reference standard in diagnostic accuracy or model studies.

Methods: PubMed was systematically searched for eligible studies published between June 1, 2012, and October 1, 2022. Data extraction was performed by one author and, in cases of doubt, checked by another. Study characteristics, expert panel characteristics, and expert panel methodology were extracted. Articles were included if the diagnostic accuracy of an index test or diagnostic model was assessed using an expert panel as reference standard and the study was reported in English, Dutch, or German.

Results: After initial identification of 4078 studies, 318 were included for data extraction. Expert panels were used across numerous medical domains, of which oncology was the most common (20%). The number of experts judging the presence of the target condition in each patient was 2 or fewer in 29% of the 318 studies, 3 or 4 in 55%, and 5 or more in 16%. Panel types were: an independent panel (each expert returns a judgement without conferring with the other experts) in 33% of studies; a consensus method (each case is discussed by the panel) in 27%; a staged approach (each expert judges independently and discordant cases are discussed in a consensus meeting) in 11%; and a tiebreaker (each expert judges independently and discordant cases are assessed by another expert) in 8%. The exact decision approach was unclear or not reported in 21% of studies. In 5% of studies, information about experts' remaining uncertainty regarding target condition presence or absence was collected for each participant.

Conclusions: There is large heterogeneity in the composition of expert panels and in the way they are used as reference standard in diagnostic research. Key methodological characteristics of expert panels are frequently not reported, making it difficult to replicate or reproduce results and potentially masking biasing factors. There is a clear need for more guidance on how to perform an expert panel procedure, and for specific extensions of the STARD and TRIPOD reporting guidelines when an expert panel is used.

A scoping review of machine learning models to predict risk of falls in elders, without using sensor data
Angelo Capodici, Claudio Fanconi, Catherine Curtin, Alessandro Shapiro, Francesca Noci, Alberto Giannoni, Tina Hernandez-Boussard
Diagnostic and prognostic research 9(1):11, published 2025-05-06. DOI: 10.1186/s41512-025-00190-y. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12054167/pdf/

Objectives: This scoping review assesses machine learning (ML) tools that predict falls from information in health records, without using any sensor data. The aim was to assess the available evidence on innovative techniques to improve fall prevention management.

Methods: Studies were included if they focused on predicting fall risk with machine learning in elderly populations and were written in English. Thirteen variables were extracted, including population characteristics (community dwelling, inpatients, age range, main pathology, ethnicity/race) and the number and type of variables used in the final models.

Results: A total of 6331 studies were retrieved, and 19 articles met the criteria for data extraction. Reported performance was commonly high in terms of accuracy (e.g., greater than 0.70). The most represented features included cardiovascular status and mobility assessments. Common gaps were a lack of transparent reporting and insufficient fairness assessment.

Conclusions: This review provides evidence that falls can be predicted with ML without sensor data if the amount and quality of data are adequate. However, further studies are needed to validate these models in diverse groups and populations.

{"title":"Can we develop real-world prognostic models using observational healthcare data? Large-scale experiment to investigate model sensitivity to database and phenotypes.","authors":"Jenna M Reps, Peter R Rijnbeek, Patrick B Ryan","doi":"10.1186/s41512-025-00191-x","DOIUrl":"https://doi.org/10.1186/s41512-025-00191-x","url":null,"abstract":"<p><strong>Background: </strong>Large observational healthcare databases are frequently used to develop models to be implemented in real-world clinical practice populations. For example, these databases were used to develop COVID severity models that guided interventions such as who to prioritize vaccinating during the pandemic. However, the clinical setting and observational databases often differ in the types of patients (case mix), and it is a nontrivial process to identify patients with medical conditions (phenotyping) in these databases. In this study, we investigate how sensitive a model's performance is to the choice of development database, population, and outcome phenotype.</p><p><strong>Methods: </strong>We developed > 450 different logistic regression models for nine prediction tasks across seven databases with a range of suitable population and outcome phenotypes. Performance stability within tasks was calculated by applying each model to data created by permuting the database, population, or outcome phenotype. We investigate performance (AUROC, scaled Brier, and calibration-in-the-large) stability and individual risk estimate stability.</p><p><strong>Results: </strong>In general, changing the outcome definitions or population phenotype made little impact on the model validation discrimination. However, validation discrimination was unstable when the database changed. Calibration and Brier performance were unstable when the population, outcome definition, or database changed. This may be problematic if a model developed using observational data is implemented in a real-world setting.</p><p><strong>Conclusions: </strong>These results highlight the importance of validating a model developed using observational data in the clinical setting prior to using it for decision-making. Calibration and Brier score should be evaluated to prevent miscalibrated risk estimates being used to aid clinical decisions.</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"9 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12004590/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clinical prognostic models for sarcomas: a systematic review and critical appraisal of development and validation studies
Philip Heesen, Sebastian M Christ, Olga Ciobanu-Caraus, Abdullah Kahraman, Georg Schelling, Gabriela Studer, Beata Bode-Lesniewska, Bruno Fuchs
Diagnostic and prognostic research 9(1):7, published 2025-04-07. DOI: 10.1186/s41512-025-00186-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974052/pdf/

Background: Current clinical guidelines recommend the use of clinical prognostic models (CPMs) for therapeutic decision-making in sarcoma patients. However, the number and quality of developed and externally validated CPMs are unknown. We therefore aimed to describe and critically assess CPMs for sarcomas.

Methods: We performed a systematic review including all studies describing the development and/or external validation of a CPM for sarcomas. We searched MEDLINE, EMBASE, Cochrane Central, and Scopus from inception until June 7, 2022. Risk of bias was assessed using the prediction model risk of bias assessment tool (PROBAST).

Results: 7656 records were screened, of which 145 studies were eventually included; together they developed 182 CPMs and externally validated 59. The most frequently modeled type of sarcoma was osteosarcoma (43/182; 23.6%), and the most frequently predicted outcome was overall survival (81/182; 44.5%). The most used predictors were patient age (133/182; 73.1%) and tumor size (116/182; 63.7%). Univariable screening was used in 137 (75.3%) CPMs, and only 7 (3.9%) were developed using pre-specified predictors based on clinical knowledge or literature. The median c-statistic on the development dataset was 0.74 (interquartile range [IQR] 0.71-0.78); calibration was reported for 142 of 182 CPMs (78.0%). The median c-statistic in external validations was 0.72 (IQR 0.68-0.75); calibration was reported for 46 of 59 externally validated CPMs (78.0%). We found 169 of 241 CPMs (70.1%) to be at high risk of bias, mostly due to high risk of bias in the analysis domain.

Discussion: While various CPMs for sarcomas have been developed, the clinical utility of most is hindered by a high risk of bias and limited external validation. Future research should prioritise validating and updating existing well-developed CPMs rather than developing new ones, to ensure reliable prognostic tools.

Trial registration: PROSPERO CRD42022335222.

Correction: Understanding overfitting in random forest for probability estimation: a visualization and simulation study
Lasai Barreñada, Paula Dhiman, Dirk Timmerman, Anne-Laure Boulesteix, Ben Van Calster
Diagnostic and prognostic research 9(1):9, published 2025-04-02. DOI: 10.1186/s41512-025-00189-5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11967119/pdf/
(Correction notice; no abstract.)

Correction: Decision curve analysis: confidence intervals and hypothesis testing for net benefit
Andrew J Vickers, Ben Van Calster, Laure Wynants, Ewout W Steyerberg
Diagnostic and prognostic research 9(1):8, published 2025-03-30. DOI: 10.1186/s41512-025-00188-6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956174/pdf/
(Correction notice; no abstract.)

Guide to evaluating performance of prediction models for recurrent clinical events
Laura J Bonnett, Thomas Spain, Alexandra Hunt, Jane L Hutton, Victoria Watson, Anthony G Marson, John Blakey
Diagnostic and prognostic research 9(1):6, published 2025-03-17. DOI: 10.1186/s41512-025-00187-7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11912649/pdf/

Background: Many chronic conditions, such as epilepsy and asthma, are typified by recurrent events: repeated acute deterioration events of a similar type. Statistical models for these conditions often focus on the time to the first event and therefore do not use the data available on all events. Statistical models for recurrent events exist, but it is not clear how best to evaluate their performance. We compare the relative performance of statistical models for analysing recurrent events in epilepsy and asthma.

Methods: We studied two clinical exemplars of common and infrequent events: asthma exacerbations, using the Optimum Patient Clinical Research Database, and epileptic seizures, using data from the Standard versus New Antiepileptic Drug Study. In both cases, count-based models (negative binomial and zero-inflated negative binomial) and variants of the Cox model (Andersen-Gill and Prentice-Williams-Peterson) were used to assess the risk of recurrence (of exacerbations or seizures, respectively). Model performance was evaluated via numerical (root mean square prediction error, mean absolute prediction error, and prediction bias) and graphical (calibration plots and Bland-Altman plots) approaches.

Results: The performance of the prediction models for recurrent asthma and epilepsy events could be evaluated via the selected numerical and graphical measures. For both exemplars, the Prentice-Williams-Peterson model showed the closest agreement between predicted and observed outcomes.

Conclusion: Inappropriate models can lead to incorrect conclusions that disadvantage patients. Prediction models for outcomes associated with chronic conditions should therefore include all repeated events. Such models can be evaluated via the numerical and graphical approaches promoted here, alongside modified calibration measures.

Development and internal validation of a new life expectancy estimator for multimorbid older adults
Viktoria Gastens, Arnaud Chiolero, Martin Feller, Douglas C Bauer, Nicolas Rodondi, Cinzia Del Giovane
Diagnostic and prognostic research 9(1):5, published 2025-03-04. DOI: 10.1186/s41512-025-00185-9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11877760/pdf/

Background: As populations age, the number of older patients with multiple chronic diseases demanding complex care increases. Although clinical guidelines recommend personalizing care according to life expectancy, there are no tools to estimate life expectancy in multimorbid patients. Our objective was therefore to develop and internally validate a life expectancy estimator specifically for older multimorbid adults.

Methods: We analyzed data from the OPERAM (OPtimising thERapy to prevent avoidable hospital admissions in multimorbid older people) study in Bern, Switzerland. Participants aged 70 years or older with multimorbidity (3 or more chronic medical conditions) and polypharmacy (use of 5 or more drugs for more than 30 days) were included. All-cause mortality was assessed during 3 years of follow-up. We built a 3-year mortality prognostic index and transformed it into a life expectancy estimator using the Gompertz survival function. Candidate predictors of mortality risk included demographic variables (age, sex), clinical characteristics (metastatic cancer, number of drugs, body mass index, weight loss), smoking, functional status variables (Barthel Index, falls, nursing home residence), and hospitalization. We internally validated and optimism-corrected the model using bootstrapping techniques.

Results: 805 participants were included in the analysis; during 3 years of follow-up, 292 (36%) died. Age, metastatic cancer, number of drugs, lower body mass index, weight loss, number of hospitalizations, and lower Barthel Index (functional impairment) were selected as predictors in the final multivariable model. The model showed moderate discrimination, with an optimism-corrected C statistic of 0.70; the optimism-corrected calibration slope was 0.96. The Gompertz-predicted mean life expectancy in our sample was 5.4 years (standard deviation 3.5 years). Categorization into three life expectancy groups gave visually good separation of the Kaplan-Meier curves. We also developed a web application that calculates an individual's estimated life expectancy.

Conclusion: A life expectancy estimator for multimorbid older adults, based on an internally validated 3-year mortality risk index, was developed. Further validation of the score in various populations of multimorbid patients is needed before implementation in practice.

Trial registration: ClinicalTrials.gov NCT02986425. First submitted 21/10/2016; first posted 08/12/2016.

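The final modelling step reported above, turning the mortality index into a life expectancy via the Gompertz survival function, amounts to integrating the survival curve. A minimal Python sketch of that step follows; the Gompertz parameters and linear predictor values are illustrative assumptions, not the fitted OPERAM values.

```python
# Minimal sketch: remaining life expectancy as the integral of a Gompertz
# survival curve, S(t) = exp(-(b / c) * (exp(c * t) - 1) * exp(lp)).
# Parameter values are illustrative assumptions, not the study's estimates.
import numpy as np
from scipy.integrate import quad

b, c = 0.05, 0.15  # assumed Gompertz rate and shape (per year)

def survival(t, lp):
    """Gompertz survival probability at t years for linear predictor lp."""
    return np.exp(-(b / c) * (np.exp(c * t) - 1.0) * np.exp(lp))

def life_expectancy(lp):
    """Life expectancy = area under the survival curve."""
    return quad(lambda t: survival(t, lp), 0, np.inf)[0]

for lp in (-0.5, 0.0, 0.5):  # a higher mortality index shortens survival
    print(f"linear predictor {lp:+.1f}: {life_expectancy(lp):.1f} years")
```
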
Against reflexive recalibration: towards a causal framework for addressing miscalibration
Akshay Swaminathan, Ujwal Srivastava, Lucia Tu, Ivan Lopez, Nigam H Shah, Andrew J Vickers
Diagnostic and prognostic research 9(1):4, published 2025-02-11. DOI: 10.1186/s41512-024-00184-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11812191/pdf/
(No abstract.)

{"title":"Models for predicting risk of endometrial cancer: a systematic review.","authors":"Bea Harris Forder, Anastasia Ardasheva, Karyna Atha, Hannah Nentwich, Roxanna Abhari, Christiana Kartsonaki","doi":"10.1186/s41512-024-00178-0","DOIUrl":"10.1186/s41512-024-00178-0","url":null,"abstract":"<p><strong>Background: </strong>Endometrial cancer (EC) is the most prevalent gynaecological cancer in the UK with a rising incidence. Various models exist to predict the risk of developing EC, for different settings and prediction timeframes. This systematic review aims to provide a summary of models and assess their characteristics and performance.</p><p><strong>Methods: </strong>A systematic search of the MEDLINE and Embase (OVID) databases was used to identify risk prediction models related to EC and studies validating these models. Papers relating to predicting the risk of a future diagnosis of EC were selected for inclusion. Study characteristics, variables included in the model, methods used, and model performance, were extracted. The Prediction model Risk-of-Bias Assessment Tool was used to assess model quality.</p><p><strong>Results: </strong>Twenty studies describing 19 models were included. Ten were designed for the general population and nine for high-risk populations. Three models were developed for premenopausal women and two for postmenopausal women. Logistic regression was the most used development method. Three models, all in the general population, had a low risk of bias and all models had high applicability. Most models had moderate (area under the receiver operating characteristic curve (AUC) 0.60-0.80) or high predictive ability (AUC > 0.80) with AUCs ranging from 0.56 to 0.92. Calibration was assessed for five models. Two of these, the Hippisley-Cox and Coupland QCancer models, had high predictive ability and were well calibrated; these models also received a low risk of bias rating.</p><p><strong>Conclusions: </strong>Several models of moderate-high predictive ability exist for predicting the risk of EC, but study quality varies, with most models at high risk of bias. External validation of well-performing models in large, diverse cohorts is needed to assess their utility.</p><p><strong>Registration: </strong>The protocol for this review is available on PROSPERO (CRD42022303085).</p>","PeriodicalId":72800,"journal":{"name":"Diagnostic and prognostic research","volume":"9 1","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11792366/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143124016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}