On the variability of dynamic functional connectivity assessment methods
Mohammad Torabi, Georgios D Mitsis, Jean-Baptiste Poline
GigaScience (2024). DOI: 10.1093/gigascience/giae009
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11000510/pdf/
Citations: 0
Abstract
Background: Dynamic functional connectivity (dFC) has become an important measure for understanding brain function and a potential biomarker. However, various methodologies have been developed for assessing dFC, and it is unclear how the choice of method affects the results. In this work, we aimed to study the variability in the results of commonly used dFC methods.
Methods: We implemented 7 dFC assessment methods in Python and used them to analyze the functional magnetic resonance imaging data of 395 subjects from the Human Connectome Project. We measured the similarity of dFC results yielded by different methods using several metrics to quantify overall, temporal, spatial, and intersubject similarity.
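For orientation, sliding-window correlation is one of the most widely used dFC estimators; the minimal sketch below (not the implementation used in the study, with window length and step chosen arbitrarily) shows the kind of time-resolved connectivity matrices such methods produce.

```python
import numpy as np

def sliding_window_dfc(ts, window_len=30, step=1):
    """Dynamic FC as a sequence of windowed Pearson correlation matrices.

    ts: (n_timepoints, n_regions) parcellated BOLD time series.
    Returns: (n_windows, n_regions, n_regions) connectivity matrices.
    """
    n_t, n_r = ts.shape
    starts = list(range(0, n_t - window_len + 1, step))
    dfc = np.empty((len(starts), n_r, n_r))
    for i, s in enumerate(starts):
        dfc[i] = np.corrcoef(ts[s:s + window_len].T)
    return dfc

# Toy example: 300 time points, 10 regions of simulated signal.
rng = np.random.default_rng(0)
bold = rng.standard_normal((300, 10))
dfc = sliding_window_dfc(bold, window_len=44, step=1)
print(dfc.shape)  # (257, 10, 10)
```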
Results: Our results showed a range of weak to strong similarity between the results of different methods, indicating considerable overall variability. Somewhat surprisingly, the observed variability in dFC estimates was found to be comparable to the expected functional connectivity variation over time, emphasizing the impact of methodological choices on the final results. Our findings revealed 3 distinct groups of methods with significant intergroup variability, each exhibiting distinct assumptions and advantages.
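To make the similarity comparison concrete, below is a minimal sketch of one plausible overall-similarity measure: the lower-triangular connectivity values of two methods' dFC estimates are concatenated over windows and rank-correlated. The Spearman metric and the `overall_similarity` helper are illustrative assumptions, not the exact metrics used in the study.

```python
import numpy as np
from scipy.stats import spearmanr

def overall_similarity(dfc_a, dfc_b):
    """Spearman correlation between two dFC estimates, each of shape
    (n_windows, n_regions, n_regions), computed over the concatenated
    lower-triangular connectivity values."""
    n_regions = dfc_a.shape[1]
    low = np.tril_indices(n_regions, k=-1)   # lower triangle, diagonal excluded
    vec_a = np.concatenate([m[low] for m in dfc_a])
    vec_b = np.concatenate([m[low] for m in dfc_b])
    return spearmanr(vec_a, vec_b)[0]

# Toy comparison of two "methods": two noisy versions of the same
# random dFC sequence (10 windows, 10 regions).
rng = np.random.default_rng(1)
dfc_method_1 = rng.uniform(-1, 1, size=(10, 10, 10))
dfc_method_2 = dfc_method_1 + 0.1 * rng.standard_normal((10, 10, 10))
print(overall_similarity(dfc_method_1, dfc_method_2))
```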
Conclusions: Overall, our findings shed light on the impact of analytical flexibility in dFC assessment and highlight the need for multianalysis approaches and careful method selection to capture the full range of dFC variation. They also emphasize the importance of distinguishing neural-driven dFC variations from physiological confounds and of developing validation frameworks under a known ground truth. To facilitate such investigations, we provide an open-source Python toolbox, PydFC, which supports multianalysis dFC assessment, with the goal of enhancing the reliability and interpretability of dFC studies.
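As a rough sketch of what a multianalysis workflow involves, the example below runs several stand-in dFC estimators on the same toy data and assembles a method-by-method similarity matrix that could then be clustered into groups of similar methods. It deliberately avoids PydFC's actual interface; all function names, estimator choices, and parameters are hypothetical placeholders.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def window_corr(ts, window_len):
    """Windowed Pearson-correlation dFC estimator (stand-in 'method')."""
    starts = range(0, ts.shape[0] - window_len + 1)
    return np.stack([np.corrcoef(ts[s:s + window_len].T) for s in starts])

# Stand-in "methods": one estimator family with different window lengths.
methods = {
    "win30": lambda ts: window_corr(ts, 30),
    "win45": lambda ts: window_corr(ts, 45),
    "win60": lambda ts: window_corr(ts, 60),
}

rng = np.random.default_rng(2)
bold = rng.standard_normal((200, 12))          # toy (time, regions) data

# Run every method, align on the shortest estimate, vectorize each result.
dfc_all = {name: fn(bold) for name, fn in methods.items()}
n_win = min(d.shape[0] for d in dfc_all.values())
low = np.tril_indices(12, k=-1)
vecs = {name: np.concatenate([m[low] for m in d[:n_win]])
        for name, d in dfc_all.items()}

# Method-by-method similarity matrix, ready for clustering into groups.
names = list(vecs)
sim = np.eye(len(names))
for (i, a), (j, b) in combinations(enumerate(names), 2):
    sim[i, j] = sim[j, i] = spearmanr(vecs[a], vecs[b])[0]
print(names)
print(np.round(sim, 2))
```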
About the journal:
GigaScience seeks to transform data dissemination and utilization in the life and biomedical sciences. As an online open-access open-data journal, it specializes in publishing "big-data" studies encompassing various fields. Its scope includes not only "omic" type data and the fields of high-throughput biology currently serviced by large public repositories, but also the growing range of more difficult-to-access data, such as imaging, neuroscience, ecology, cohort data, systems biology and other new types of large-scale shareable data.