Bias propagation in network meta-analysis models
Hua Li, Ming-Chieh Shih, Cheng-Jie Song, Yu-Kang Tu
Research Synthesis Methods, 14(2), 247–265. Published 2022-12-12. https://doi.org/10.1002/jrsm.1614

Abstract: Network meta-analysis combines direct and indirect evidence to compare multiple treatments. As direct evidence for one treatment contrast may be indirect evidence for other treatment contrasts, biases in the direct evidence for one treatment contrast may affect not only the estimate for this particular treatment contrast but also estimates of other treatment contrasts. Because network structure determines how direct and indirect evidence are combined and weighted, the impact of biased evidence will be determined by the network geometry. Thus, this study's aim was to investigate how the impact of biased evidence spreads across the whole network and how the propagation of bias is influenced by the network structure. In addition to the popular Lu & Ades model, we also investigate bias propagation in the baseline model and the arm-based model to compare the effects of bias in the different models. We undertook extensive simulations under different scenarios to explore how the impact of bias may be affected by the location of the bias, the network geometry, and the statistical model. Our results showed that the structure of a network has an important impact on how the bias spreads across the network, and this is especially true for the Lu & Ades model. The impact of bias is more likely to be diluted by other unbiased evidence in a well-connected network. We also used a real network meta-analysis to demonstrate how to use the new knowledge about bias propagation to explain questionable results from the original analysis.
{"title":"Area under the curve-optimized synthesis of prediction models from a meta-analytical perspective","authors":"Daisuke Yoneoka, Katsuhiro Omae, Masayuki Henmi, Shinto Eguchi","doi":"10.1002/jrsm.1612","DOIUrl":"https://doi.org/10.1002/jrsm.1612","url":null,"abstract":"<p>The number of clinical prediction models sharing the same prediction task has increased in the medical literature. However, evidence synthesis methodologies that use the results of these prediction models have not been sufficiently studied, particularly in the context of meta-analysis settings where only summary statistics are available. In particular, we consider the following situation: we want to predict an outcome <i>Y</i>, that is not included in our current data, while the covariate data are fully available. In addition, the summary statistics from prior studies, which share the same prediction task (i.e., the prediction of <i>Y</i>), are available. This study introduces a new method for synthesizing the summary results of binary prediction models reported in the prior studies using a linear predictor under a distributional assumption between the current and prior studies. The method provides an integrated predictor combining all predictors reported in the prior studies with weights. The vector of the weights is designed to achieve the hypothetical improvement of area under the receiver operating characteristic curve (AUC) on the current available data under a practical situation where there are different sets of covariates in the prior studies. We observe a counterintuitive aspect in typical situations where a part of weight components in the proposed method becomes negative. It implies that flipping the sign of the prediction results reported in each individual study would improve the overall prediction performance. Finally, numerical and real-world data analysis were conducted and showed that our method outperformed conventional methods in terms of AUC.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"14 2","pages":"234-246"},"PeriodicalIF":9.8,"publicationDate":"2022-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"5808053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

ccaR: A package for assessing primary study overlap across systematic reviews in overviews
Konstantinos I. Bougioukas, Theodoros Diakonidis, Anna C. Mavromanoli, Anna-Bettina Haidich
Research Synthesis Methods, 14(3), 443–454. Published 2022-11-12. https://doi.org/10.1002/jrsm.1610

Abstract: An overview of reviews aims to collect, assess, and synthesize evidence from multiple systematic reviews (SRs) on a specific topic using rigorous and reproducible methods. An important methodological challenge in conducting an overview of reviews is the management of overlapping data that arises when the same primary studies are included in several SRs. We present a free, open-source R package called ccaR (https://github.com/thdiakon/ccaR) that provides easy-to-use functions for assessing the degree of primary study overlap in an overview of reviews using the corrected covered area (CCA) index. A worked example, with and without consideration of chronological structural missingness, illustrates the steps involved in calculating the CCA index and creating a publication-ready heatmap. We expect ccaR to be useful for overview authors, methodologists, and reviewers who are familiar with the basics of R, and to contribute to the discussion of different methodological approaches for implementing the CCA index. Future research and applications could further investigate the functionality and potential limitations of our package, as well as other potential uses.

Commercial funding and estimated intervention effects in randomized clinical trials: Systematic review of meta-epidemiological studies
Camilla Hansen Nejstgaard, David Ruben Teindl Laursen, Andreas Lundh, Asbjørn Hróbjartsson
Research Synthesis Methods, 14(2), 144–155. Published 2022-11-10. https://doi.org/10.1002/jrsm.1611

Abstract: We investigated to what degree commercial funding is associated with estimated intervention effects in randomized trials. We included meta-epidemiological studies with published data on the association between commercial funding and the results or conclusions of randomized trials. We searched five databases and other sources. We selected one result per meta-epidemiological study, preferably the unadjusted ratio of odds ratios (ROR), for example, odds ratio (commercial funding) / odds ratio (noncommercial funding). We pooled RORs in random-effects meta-analyses (ROR < 1 indicated exaggerated intervention effects in commercially funded trials), subgrouped (as preplanned) by study aim: commercial funding per se versus risk of commercial funder influence. We included eight meta-epidemiological studies (264 meta-analyses, 2725 trials). The summary ROR was 0.95 (95% confidence interval 0.85–1.06). Subgroup analysis revealed a difference (p = 0.02) between studies of commercial funding per se, ROR 1.06 (0.95–1.17), and studies of risk of commercial funder influence, ROR 0.88 (0.79–0.97). In conclusion, we found no statistically significant association between commercial funding and estimated intervention effects when combining studies of commercial funding per se and studies of risk of commercial funder influence. A preplanned subgroup analysis indicated that trials with a high risk of commercial funder influence exaggerated intervention effects by 12% (3%–21%) on average. Our results differ from previous theoretical considerations and findings from methodological studies and therefore call for confirmation. We suggest it is prudent to interpret results from commercially funded trials with caution, especially when there is a risk that the funder had direct influence on trial design, conduct, analysis, or reporting.
{"title":"Critical appraisal in ecology: What tools are available, and what is being used in systematic reviews?","authors":"Jessica Stanhope, Philip Weinstein","doi":"10.1002/jrsm.1609","DOIUrl":"https://doi.org/10.1002/jrsm.1609","url":null,"abstract":"<p>Many reviews referred to as ‘systematic reviews’ in ecology are not consistent with best practice in that they generally lack appropriate critical appraisal of included studies. This limitation is particularly important in applied ecology, where there have been increasing calls for more systematic reviews to guide decision making. To identify the available critical appraisal tools (CATs) and hierarchies of evidence available for ecology studies, we systematically searched for: studies that described the development and/or examination of tools to assess the potential methodological bias in studies of ecology; and the tools used to assess potential methodological bias of included studies in ecological systematic reviews. We identified 680 reviews labelled as ‘systematic reviews’ in ecology, however only 4.0% performed critical appraisal of the included studies. Three hierarchies of evidence and 23 CATs were identified, and assessed as lacking independent development, validity and reliability testing, and/or completeness. The authors of the reviews that included critical appraisal have appropriately identified the need to move reviews in ecology in the direction of this higher level of evidence, and have taken applied ecology further in the direction of evidence-based practice. However, we identified shortcomings in these approaches when compared with best practice, and conclude that new tools are needed that reflect a range of questions posed in ecology. Through increasing the availability of such tools, the strength of evidence provided by systematic reviews in ecology would improve.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"14 3","pages":"342-356"},"PeriodicalIF":9.8,"publicationDate":"2022-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jrsm.1609","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"5854515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Network meta-interpolation: Effect modification adjustment in network meta-analysis using subgroup analyses
Ofir Harari, Mohsen Soltanifar, Joseph C. Cappelleri, Andre Verhoek, Mario Ouwens, Caitlin Daly, Bart Heeg
Research Synthesis Methods, 14(2), 211–233. Published 2022-10-25. https://doi.org/10.1002/jrsm.1608

Abstract: Effect modification (EM) may cause bias in network meta-analysis (NMA). Existing population-adjustment NMA methods use individual patient data to adjust for EM but disregard the subgroup information available from aggregated data in the evidence network. Additionally, these methods often rely on the shared effect modification (SEM) assumption. In this paper, we propose network meta-interpolation (NMI), a method that uses subgroup analyses to adjust for EM and does not assume SEM. NMI balances effect modifiers across studies by converting subgroup- and study-level treatment effect (TE) estimates into TE estimates and standard errors at EM values common to all studies. In an extensive simulation study, we simulate two evidence networks of four treatments and assess the impact of departure from the SEM assumption, variable EM correlation across trials, trial sample size, and network size. NMI was compared with standard NMA, network meta-regression (NMR), and multilevel NMR (ML-NMR) in terms of estimation accuracy and credible interval (CrI) coverage. In the base-case non-SEM dataset, NMI achieved the highest estimation accuracy, with a root mean squared error (RMSE) of 0.228, followed by standard NMA (0.241), ML-NMR (0.447), and NMR (0.541). In the SEM dataset, NMI was again the most accurate method, with an RMSE of 0.222, followed by ML-NMR (0.255). CrI coverage followed a similar pattern. NMI's advantage in estimation accuracy and CrI coverage appeared to be consistent across all scenarios. NMI represents an effective option for NMA in the presence of study imbalance and available subgroup data.
{"title":"Paperfetcher: A tool to automate handsearching and citation searching for systematic reviews","authors":"Akash Pallath, Qiyang Zhang","doi":"10.1002/jrsm.1604","DOIUrl":"https://doi.org/10.1002/jrsm.1604","url":null,"abstract":"<p>Systematic reviews are vital instruments for researchers to understand broad trends in a field and synthesize evidence on the effectiveness of interventions in addressing specific issues. The quality of a systematic review depends critically on having comprehensively surveyed all relevant literature on the review topic. In addition to database searching, handsearching is an important supplementary technique that helps increase the likelihood of identifying all relevant studies in a literature search. Traditional handsearching requires reviewers to manually browse through a curated list of field-specific journals and conference proceedings to find articles relevant to the review topic. This manual process is not only time-consuming, laborious, costly, and error-prone due to human fatigue, but it also lacks replicability due to its cumbersome manual nature. To address these issues, this paper presents a free and open-source Python package and an accompanying web-app, <i>Paperfetcher</i>, to automate the retrieval of article metadata for handsearching. With <i>Paperfetcher</i>'s assistance, researchers can retrieve article metadata from designated journals within a specified time frame in just a few clicks. In addition to handsearching, it also incorporates a beta version of citation searching in both forward and backward directions. <i>Paperfetcher</i> has an easy-to-use interface, which allows researchers to download the metadata of retrieved studies as a list of DOIs or as an RIS file to facilitate seamless import into systematic review screening software. To the best of our knowledge, <i>Paperfetcher</i> is the first tool to automate handsearching with high usability and a multi-disciplinary focus.</p>","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"14 2","pages":"323-335"},"PeriodicalIF":9.8,"publicationDate":"2022-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"6226523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Response to Kim et al. "When conducting a systematic review, can one trade search efficiency for potential publication bias?"
Qiong Guo, Xianlin Gu, Kun Feng, Jin Huang, Liang Du
Research Synthesis Methods, 13(6), 664–666. Published 2022-10-19. https://doi.org/10.1002/jrsm.1607

Dear Editor,

We sincerely thank Kim et al. for their interest in our study entitled "A search of only four key databases would identify most randomized controlled trials of acupuncture: A meta-epidemiological study." Our study found that combined retrieval from two Chinese databases (CNKI and WanFang) and two English databases (PubMed and CENTRAL) identified most randomized controlled trials (RCTs) of acupuncture, and it highlighted the importance of searching for both Chinese and English RCTs when performing a systematic review (SR) on acupuncture. Our findings were based on acupuncture SRs in Chinese and English owing to language limitations, as Kim et al. indicate. Nevertheless, the SRs in Chinese and English were based on searches that were not restricted to Chinese and English databases but included databases in other languages, so RCTs originally published in non-Chinese and non-English languages may have been included in our analyses. It would have been difficult and unnecessary for the study to include acupuncture SRs in all languages. The quality of research published in English was higher, and a sensitivity analysis including only acupuncture SRs in English was performed. All 1840 RCTs were extracted from 119 acupuncture SRs in English, of which 34 (1.8%) RCTs were not recalled by searching the four key databases (CNKI, WanFang, PubMed, and CENTRAL). The 34 unrecalled RCTs came from 25 SRs: 17 SRs each had 1 unrecalled RCT, 7 had 2, and 1 had 3. The unrecalled rate of RCTs per SR ranged from 2.5% to 33.3%. A search of the four key databases produced at least 90% recall of included RCTs per SR in 93.3% (95% confidence interval [CI] 88.8%–97.8%) of SRs (Figure 1), meaning that the combined retrieval of the four key databases might achieve 90% recall of the RCTs included in an SR. The limited number of acupuncture SRs published in English led to the wide 95% CI for the proportion of 90%-recall SRs, which means that a firm conclusion cannot yet be drawn; more acupuncture SRs published in English are needed to validate our findings. The selection of target databases was initially based on prior experience and on the search frequency of the databases used in acupuncture SRs. We assumed that these target databases would retrieve the majority of acupuncture RCTs. This assumption was later confirmed, since 99.3% of acupuncture RCTs were recalled by searching the six target databases: CNKI, WanFang, VIP, PubMed, CENTRAL, and EMbase. It was not the intention of our study to assess the impact of non-Chinese and non-English RCTs in an acupuncture SR. Kim et al. considered that searching for acupuncture RCTs in Korean and Japanese is important when performing an SR on acupuncture, and we sought to validate their idea by analyzing our existing data.

We found that 92 out of 1227 SRs searched Korean or Japanese databases, but only 24 SRs searched the four key databases in addition to Korean or Japanese databases …
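
The interval reported above is consistent with a simple normal-approximation (Wald) interval for a proportion; whether the authors used exactly this method is not stated, so the following is only a plausibility check:

```latex
% Wald 95% CI for the proportion of SRs with >= 90% recall, with p-hat = 0.933 and n = 119:
\[
\hat p \pm 1.96\sqrt{\frac{\hat p\,(1-\hat p)}{n}}
= 0.933 \pm 1.96\sqrt{\frac{0.933 \times 0.067}{119}}
= 0.933 \pm 0.045
\;\Rightarrow\; (88.8\%,\; 97.8\%).
\]
```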

Evaluation of statistical methods used to meta-analyse results from interrupted time series studies: a simulation study
Elizabeth Korevaar, S. Turner, Andrew B. Forbes, A. Karahalios, M. Taljaard, Joanne E. McKenzie
Preprint, 2022-10-19. https://doi.org/10.1101/2022.10.17.22281160

Background: Interrupted time series (ITS) studies are often meta-analysed to inform public health and policy decisions, but examination of the statistical methods for ITS analysis and meta-analysis in this context is limited.

Methods: We simulated meta-analyses of ITS studies with continuous outcome data, analysed the studies using segmented linear regression with two estimation methods [ordinary least squares (OLS) and restricted maximum likelihood (REML)], and meta-analysed the immediate level-change and slope-change effect estimates using fixed-effect and (multiple) random-effects meta-analysis methods. Simulation design parameters included series length; magnitude of lag-1 autocorrelation; magnitude of level and slope changes; number of included studies; and effect-size heterogeneity.

Results: All meta-analysis methods yielded unbiased estimates of the interruption effects. All random-effects meta-analysis methods yielded coverage close to the nominal level, irrespective of the ITS analysis method used and the other design parameters. However, heterogeneity was frequently overestimated in scenarios where the ITS study standard errors were underestimated, which occurred for short series or when the ITS analysis method did not appropriately account for autocorrelation.

Conclusions: The performance of meta-analysis methods depends on the design and analysis of the included ITS studies. Although all random-effects methods performed well in terms of coverage, irrespective of the ITS analysis method, we recommend using effect estimates calculated from ITS methods that adjust for autocorrelation when possible. Doing so is likely to lead to more accurate estimates of the heterogeneity variance.
{"title":"Response to Hemilä and Chalker's “Pitfalls in choosing data examples for methodological work: Bayesian approaches to a fixed effects meta-analysis of zinc lozenges for the common cold”","authors":"Clara P. Domínguez Islas, Kenneth M. Rice","doi":"10.1002/jrsm.1600","DOIUrl":"https://doi.org/10.1002/jrsm.1600","url":null,"abstract":"We thank Drs Harri Hemilä and Elizabeth Chalker for their post-publication review of our work. Using a metaanalysis from a now-withdrawn systematic review on zinc for the common cold was indeed an unfortunate oversight. In addition, we should have better corroborated the data from each of the studies in the meta-analysis, to avoid further propagation of numerical errors. We completely agree that this meta-analysis should not have been cited nor used as an applied example, and we apologize to researchers who were confused by our mistake. We take issue, however, with other points raised by Hemilä and Chalker. First, we disagree that the “validity” of Figures 3–5 is in question. The objective of our manuscript is to propose and evaluate a Bayesian approach to fixed effects meta-analysis and to compare the performance of such approach to other more traditional ones. The point made in Figures 3–5 is that, compared with existing defaults, the proposed method can provide considerable stability and robustness to choices of prior. This same point could have been made using a different example, or even fictional or simulated data. And indeed, the manuscript includes a second example in the Appendix section, from which similar conclusions about the approach can be drawn. Our manuscript is about statistical methods for meta-analysis, not a systematic review on zinc lozenges, and we stand by our method's validity. Second, Hemilä and Chalker object to our use of mean differences. We chose this scale deliberately, to simplify the exposition and enable readers to focus on novel aspects of our method. We are not convinced by their “strong” arguments for blanket use of one scale over another: interpreted with care, either absolute or relative differences can be usefully analyzed. Finally, Hemilä and Chalker complain that we insufficiently increase the understanding of zinc lozenges' effect on the common cold. This complaint simply misses the point of the manuscript which, as the title and abstract make clear, is to discuss novel statistical methods. Were our focus the understanding of zinc lozenges, we would have published a different paper in a different journal.","PeriodicalId":226,"journal":{"name":"Research Synthesis Methods","volume":"14 1","pages":"2"},"PeriodicalIF":9.8,"publicationDate":"2022-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"6152375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"生物学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}