Latest articles in Imaging neuroscience (Cambridge, Mass.)

The contribution of the vascular architecture and cerebrovascular reactivity to the BOLD signal formation across cortical depth.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-06-28 eCollection Date: 2024-06-01 DOI: 10.1162/imag_a_00203
Emiel C A Roefs, Wouter Schellekens, Mario G Báez-Yáñez, Alex A Bhogal, Iris I A Groen, Matthias J P van Osch, Jeroen C W Siero, Natalia Petridou
{"title":"The contribution of the vascular architecture and cerebrovascular reactivity to the BOLD signal formation across cortical depth.","authors":"Emiel C A Roefs, Wouter Schellekens, Mario G Báez-Yáñez, Alex A Bhogal, Iris I A Groen, Matthias J P van Osch, Jeroen C W Siero, Natalia Petridou","doi":"10.1162/imag_a_00203","DOIUrl":"https://doi.org/10.1162/imag_a_00203","url":null,"abstract":"<p><p>Assessment of neuronal activity using blood oxygenation level-dependent (BOLD) is confounded by how the cerebrovascular architecture modulates hemodynamic responses. To understand brain function at the laminar level, it is crucial to distinguish neuronal signal contributions from those determined by the cortical vascular organization. Therefore, our aim was to investigate the purely vascular contribution in the BOLD signal by using vasoactive stimuli and compare that with neuronal-induced BOLD responses from a visual task. To do so, we estimated the hemodynamic response function (HRF) across cortical depth following brief visual stimulations under different conditions using ultrahigh-field (7 Tesla) functional (f)MRI. We acquired gradient-echo (GE)-echo-planar-imaging (EPI) BOLD, containing contributions from all vessel sizes, and spin-echo (SE)-EPI BOLD for which signal changes predominately originate from microvessels, to distinguish signal weighting from different vascular compartments. Non-neuronal hemodynamic changes were induced by hypercapnia and hyperoxia to estimate cerebrovascular reactivity and venous cerebral blood volume ( <math><mrow><mi>C</mi> <mi>B</mi> <mi>V</mi> <msub><mi>v</mi> <mrow><msub><mi>O</mi> <mn>2</mn></msub> </mrow> </msub> </mrow> </math> ). Results show that increases in GE HRF amplitude from deeper to superficial layers coincided with increased macrovascular <math><mrow><mi>C</mi> <mi>B</mi> <mi>V</mi> <msub><mi>v</mi> <mrow><msub><mi>O</mi> <mn>2</mn></msub> </mrow> </msub> </mrow> </math> . <math><mrow><mi>C</mi> <mi>B</mi> <mi>V</mi> <msub><mi>v</mi> <mrow><msub><mi>O</mi> <mn>2</mn></msub> </mrow> </msub> </mrow> </math> -normalized GE-HRF amplitudes yielded similar cortical depth profiles as SE, thereby possibly improving specificity to neuronal activation. For GE BOLD, faster onset time and shorter time-to-peak were observed toward the deeper layers. Hypercapnia reduced the amplitude of visual stimulus-induced signal responses as denoted by lower GE-HRF amplitudes and longer time-to-peak. In contrast, the SE-HRF amplitude was unaffected by hypercapnia, suggesting that these responses reflect predominantly neurovascular processes that are less contaminated by macrovascular signal contributions.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-19"},"PeriodicalIF":0.0,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472217/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Anatomy-aware and acquisition-agnostic joint registration with SynthMorph.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-06-25 DOI: 10.1162/imag_a_00197
Malte Hoffmann, Andrew Hoopes, Douglas N Greve, Bruce Fischl, Adrian V Dalca
{"title":"Anatomy-aware and acquisition-agnostic joint registration with SynthMorph.","authors":"Malte Hoffmann, Andrew Hoopes, Douglas N Greve, Bruce Fischl, Adrian V Dalca","doi":"10.1162/imag_a_00197","DOIUrl":"10.1162/imag_a_00197","url":null,"abstract":"<p><p>Affine image registration is a cornerstone of medical-image analysis. While classical algorithms can achieve excellent accuracy, they solve a time-consuming optimization for every image pair. Deep-learning (DL) methods learn a function that maps an image pair to an output transform. Evaluating the function is fast, but capturing large transforms can be challenging, and networks tend to struggle if a test-image characteristic shifts from the training domain, such as the resolution. Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image. We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image without preprocessing. First, we leverage a strategy that trains networks with widely varying images synthesized from label maps, yielding robust performance across acquisition specifics unseen at training. Second, we optimize the spatial overlap of select anatomical labels. This enables networks to distinguish anatomy of interest from irrelevant structures, removing the need for preprocessing that excludes content which would impinge on anatomy-specific registration. Third, we combine the affine model with a deformable hypernetwork that lets users choose the optimal deformation-field regularity for their specific data, at registration time, in a fraction of the time required by classical methods. This framework is applicable to learning anatomy-aware, acquisition-agnostic registration of any anatomy with any architecture, as long as label maps are available for training. We analyze how competing architectures learn affine transforms and compare state-of-the-art registration tools across an extremely diverse set of neuroimaging data, aiming to truly capture the behavior of methods in the real world. SynthMorph demonstrates high accuracy and is available at https://w3id.org/synthmorph, as a single complete end-to-end solution for registration of brain magnetic resonance imaging (MRI) data.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-33"},"PeriodicalIF":0.0,"publicationDate":"2024-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11247402/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141629463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VINNA for neonates: Orientation independence through latent augmentations.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-05-30 eCollection Date: 2024-05-01 DOI: 10.1162/imag_a_00180
Leonie Henschel, David Kügler, Lilla Zöllei, Martin Reuter
{"title":"VINNA for neonates: Orientation independence through latent augmentations.","authors":"Leonie Henschel, David Kügler, Lilla Zöllei, Martin Reuter","doi":"10.1162/imag_a_00180","DOIUrl":"10.1162/imag_a_00180","url":null,"abstract":"<p><p>A robust, fast, and accurate segmentation of neonatal brain images is highly desired to better understand and detect changes during development and disease, specifically considering the rise in imaging studies for this cohort. Yet, the limited availability of ground truth datasets, lack of standardized acquisition protocols, and wide variations of head positioning in the scanner pose challenges for method development. A few automated image analysis pipelines exist for newborn brain Magnetic Resonance Image (MRI) segmentation, but they often rely on time-consuming non-linear spatial registration procedures and require resampling to a common resolution, subject to loss of information due to interpolation and down-sampling. Without registration and image resampling, variations with respect to head positions and voxel resolutions have to be addressed differently. In deep learning, external augmentations such as rotation, translation, and scaling are traditionally used to artificially expand the representation of spatial variability, which subsequently increases both the training dataset size and robustness. However, these transformations in the image space still require resampling, reducing accuracy specifically in the context of label interpolation. We recently introduced the concept of resolution-independence with the Voxel-size Independent Neural Network framework, VINN. Here, we extend this concept by additionally shifting all rigid-transforms into the network architecture with a four degree of freedom (4-DOF) transform module, enabling resolution-aware internal augmentations (VINNA) for deep learning. In this work, we show that VINNA (i) significantly outperforms state-of-the-art external augmentation approaches, (ii) effectively addresses the head variations present specifically in newborn datasets, and (iii) retains high segmentation accuracy across a range of resolutions (0.5-1.0 mm). Furthermore, the 4-DOF transform module together with internal augmentations is a powerful, general approach to implement spatial augmentation without requiring image or label interpolation. The specific network application to newborns will be made publicly available as VINNA4neonates.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-26"},"PeriodicalIF":0.0,"publicationDate":"2024-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11576933/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142689479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated deep learning segmentation of high-resolution 7 Tesla postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-05-08 eCollection Date: 2024-05-01 DOI: 10.1162/imag_a_00171
Pulkit Khandelwal, Michael Tran Duong, Shokufeh Sadaghiani, Sydney Lim, Amanda E Denning, Eunice Chung, Sadhana Ravikumar, Sanaz Arezoumandan, Claire Peterson, Madigan Bedard, Noah Capp, Ranjit Ittyerah, Elyse Migdal, Grace Choi, Emily Kopp, Bridget Loja, Eusha Hasan, Jiacheng Li, Alejandra Bahena, Karthik Prabhakaran, Gabor Mizsei, Marianna Gabrielyan, Theresa Schuck, Winifred Trotman, John Robinson, Daniel T Ohm, Edward B Lee, John Q Trojanowski, Corey McMillan, Murray Grossman, David J Irwin, John A Detre, M Dylan Tisdall, Sandhitsu R Das, Laura E M Wisse, David A Wolk, Paul A Yushkevich
{"title":"Automated deep learning segmentation of high-resolution 7 Tesla postmortem MRI for quantitative analysis of structure-pathology correlations in neurodegenerative diseases.","authors":"Pulkit Khandelwal, Michael Tran Duong, Shokufeh Sadaghiani, Sydney Lim, Amanda E Denning, Eunice Chung, Sadhana Ravikumar, Sanaz Arezoumandan, Claire Peterson, Madigan Bedard, Noah Capp, Ranjit Ittyerah, Elyse Migdal, Grace Choi, Emily Kopp, Bridget Loja, Eusha Hasan, Jiacheng Li, Alejandra Bahena, Karthik Prabhakaran, Gabor Mizsei, Marianna Gabrielyan, Theresa Schuck, Winifred Trotman, John Robinson, Daniel T Ohm, Edward B Lee, John Q Trojanowski, Corey McMillan, Murray Grossman, David J Irwin, John A Detre, M Dylan Tisdall, Sandhitsu R Das, Laura E M Wisse, David A Wolk, Paul A Yushkevich","doi":"10.1162/imag_a_00171","DOIUrl":"10.1162/imag_a_00171","url":null,"abstract":"<p><p><i><b>Postmortem</b></i> MRI allows brain anatomy to be examined at high resolution and to link pathology measures with morphometric measurements. However, automated segmentation methods for brain mapping in postmortem MRI are not well developed, primarily due to limited availability of labeled datasets, and heterogeneity in scanner hardware and acquisition protocols. In this work, we present a high-resolution dataset of 135 postmortem human brain tissue specimens imaged at 0.3 mm<sup>3</sup> isotropic using a T2w sequence on a 7T whole-body MRI scanner. We developed a deep learning pipeline to segment the cortical mantle by benchmarking the performance of nine deep neural architectures, followed by post-hoc topological correction. We evaluate the reliability of this pipeline via overlap metrics with manual segmentation in 6 specimens, and intra-class correlation between cortical thickness measures extracted from the automatic segmentation and expert-generated reference measures in 36 specimens. We also segment four subcortical structures (caudate, putamen, globus pallidus, and thalamus), white matter hyperintensities, and the normal appearing white matter, providing a limited evaluation of accuracy. We show generalizing capabilities across whole-brain hemispheres in different specimens, and also on unseen images acquired at 0.28 mm<sup>3</sup> and 0.16 mm<sup>3</sup> isotropic T2*w fast low angle shot (FLASH) sequence at 7T. We report associations between localized cortical thickness and volumetric measurements across key regions, and semi-quantitative neuropathological ratings in a subset of 82 individuals with Alzheimer's disease (AD) continuum diagnoses. Our code, Jupyter notebooks, and the containerized executables are publicly available at the <b>project webpage</b> (https://pulkit-khandelwal.github.io/exvivo-brain-upenn/).</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-30"},"PeriodicalIF":0.0,"publicationDate":"2024-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11409836/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The past, present, and future of the brain imaging data structure (BIDS).
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-03-08 eCollection Date: 2024-03-01 DOI: 10.1162/imag_a_00103
Russell A Poldrack, Christopher J Markiewicz, Stefan Appelhoff, Yoni K Ashar, Tibor Auer, Sylvain Baillet, Shashank Bansal, Leandro Beltrachini, Christian G Benar, Giacomo Bertazzoli, Suyash Bhogawar, Ross W Blair, Marta Bortoletto, Mathieu Boudreau, Teon L Brooks, Vince D Calhoun, Filippo Maria Castelli, Patricia Clement, Alexander L Cohen, Julien Cohen-Adad, Sasha D'Ambrosio, Gilles de Hollander, María de la Iglesia-Vayá, Alejandro de la Vega, Arnaud Delorme, Orrin Devinsky, Dejan Draschkow, Eugene Paul Duff, Elizabeth DuPre, Eric Earl, Oscar Esteban, Franklin W Feingold, Guillaume Flandin, Anthony Galassi, Giuseppe Gallitto, Melanie Ganz, Rémi Gau, James Gholam, Satrajit S Ghosh, Alessio Giacomel, Ashley G Gillman, Padraig Gleeson, Alexandre Gramfort, Samuel Guay, Giacomo Guidali, Yaroslav O Halchenko, Daniel A Handwerker, Nell Hardcastle, Peer Herholz, Dora Hermes, Christopher J Honey, Robert B Innis, Horea-Ioan Ioanas, Andrew Jahn, Agah Karakuzu, David B Keator, Gregory Kiar, Balint Kincses, Angela R Laird, Jonathan C Lau, Alberto Lazari, Jon Haitz Legarreta, Adam Li, Xiangrui Li, Bradley C Love, Hanzhang Lu, Eleonora Marcantoni, Camille Maumet, Giacomo Mazzamuto, Steven L Meisler, Mark Mikkelsen, Henk Mutsaerts, Thomas E Nichols, Aki Nikolaidis, Gustav Nilsonne, Guiomar Niso, Martin Norgaard, Thomas W Okell, Robert Oostenveld, Eduard Ort, Patrick J Park, Mateusz Pawlik, Cyril R Pernet, Franco Pestilli, Jan Petr, Christophe Phillips, Jean-Baptiste Poline, Luca Pollonini, Pradeep Reddy Raamana, Petra Ritter, Gaia Rizzo, Kay A Robbins, Alexander P Rockhill, Christine Rogers, Ariel Rokem, Chris Rorden, Alexandre Routier, Jose Manuel Saborit-Torres, Taylor Salo, Michael Schirner, Robert E Smith, Tamas Spisak, Julia Sprenger, Nicole C Swann, Martin Szinte, Sylvain Takerkart, Bertrand Thirion, Adam G Thomas, Sajjad Torabian, Gael Varoquaux, Bradley Voytek, Julius Welzel, Martin Wilson, Tal Yarkoni, Krzysztof J Gorgolewski
{"title":"The past, present, and future of the brain imaging data structure (BIDS).","authors":"Russell A Poldrack, Christopher J Markiewicz, Stefan Appelhoff, Yoni K Ashar, Tibor Auer, Sylvain Baillet, Shashank Bansal, Leandro Beltrachini, Christian G Benar, Giacomo Bertazzoli, Suyash Bhogawar, Ross W Blair, Marta Bortoletto, Mathieu Boudreau, Teon L Brooks, Vince D Calhoun, Filippo Maria Castelli, Patricia Clement, Alexander L Cohen, Julien Cohen-Adad, Sasha D'Ambrosio, Gilles de Hollander, María de la Iglesia-Vayá, Alejandro de la Vega, Arnaud Delorme, Orrin Devinsky, Dejan Draschkow, Eugene Paul Duff, Elizabeth DuPre, Eric Earl, Oscar Esteban, Franklin W Feingold, Guillaume Flandin, Anthony Galassi, Giuseppe Gallitto, Melanie Ganz, Rémi Gau, James Gholam, Satrajit S Ghosh, Alessio Giacomel, Ashley G Gillman, Padraig Gleeson, Alexandre Gramfort, Samuel Guay, Giacomo Guidali, Yaroslav O Halchenko, Daniel A Handwerker, Nell Hardcastle, Peer Herholz, Dora Hermes, Christopher J Honey, Robert B Innis, Horea-Ioan Ioanas, Andrew Jahn, Agah Karakuzu, David B Keator, Gregory Kiar, Balint Kincses, Angela R Laird, Jonathan C Lau, Alberto Lazari, Jon Haitz Legarreta, Adam Li, Xiangrui Li, Bradley C Love, Hanzhang Lu, Eleonora Marcantoni, Camille Maumet, Giacomo Mazzamuto, Steven L Meisler, Mark Mikkelsen, Henk Mutsaerts, Thomas E Nichols, Aki Nikolaidis, Gustav Nilsonne, Guiomar Niso, Martin Norgaard, Thomas W Okell, Robert Oostenveld, Eduard Ort, Patrick J Park, Mateusz Pawlik, Cyril R Pernet, Franco Pestilli, Jan Petr, Christophe Phillips, Jean-Baptiste Poline, Luca Pollonini, Pradeep Reddy Raamana, Petra Ritter, Gaia Rizzo, Kay A Robbins, Alexander P Rockhill, Christine Rogers, Ariel Rokem, Chris Rorden, Alexandre Routier, Jose Manuel Saborit-Torres, Taylor Salo, Michael Schirner, Robert E Smith, Tamas Spisak, Julia Sprenger, Nicole C Swann, Martin Szinte, Sylvain Takerkart, Bertrand Thirion, Adam G Thomas, Sajjad Torabian, Gael Varoquaux, Bradley Voytek, Julius Welzel, Martin Wilson, Tal Yarkoni, Krzysztof J Gorgolewski","doi":"10.1162/imag_a_00103","DOIUrl":"10.1162/imag_a_00103","url":null,"abstract":"<p><p>The Brain Imaging Data Structure (BIDS) is a community-driven standard for the organization of data and metadata from a growing range of neuroscience modalities. This paper is meant as a history of how the standard has developed and grown over time. We outline the principles behind the project, the mechanisms by which it has been extended, and some of the challenges being addressed as it evolves. We also discuss the lessons learned through the project, with the aim of enabling researchers in other domains to learn from the success of BIDS.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-19"},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11415029/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MMORF: FSL's MultiMOdal Registration Framework.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-03-01 DOI: 10.1162/imag_a_00100
Frederik J Lange, Christoph Arthofer, Andreas Bartsch, Gwenaëlle Douaud, Paul McCarthy, Stephen M Smith, Jesper L R Andersson
{"title":"MMORF-FSL's MultiMOdal Registration Framework.","authors":"Frederik J Lange, Christoph Arthofer, Andreas Bartsch, Gwenaëlle Douaud, Paul McCarthy, Stephen M Smith, Jesper L R Andersson","doi":"10.1162/imag_a_00100","DOIUrl":"10.1162/imag_a_00100","url":null,"abstract":"<p><p>We present MMORF-FSL's MultiMOdal Registration Framework-a newly released nonlinear image registration tool designed primarily for application to magnetic resonance imaging (MRI) images of the brain. MMORF is capable of simultaneously optimising both displacement and rotational transformations within a single registration framework by leveraging rich information from multiple scalar and tensor modalities. The regularisation employed in MMORF promotes local rigidity in the deformation, and we have previously demonstrated how this effectively controls both shape and size distortion, leading to more biologically plausible warps. The performance of MMORF is benchmarked against three established nonlinear registration methods-FNIRT, ANTs, and DR-TAMAS-across four domains: FreeSurfer label overlap, diffusion tensor imaging (DTI) similarity, task-fMRI cluster mass, and distortion. The evaluation is based on 100 unrelated subjects from the Human Connectome Project (HCP) dataset registered to the Oxford-MultiModal-1 (OMM-1) multimodal template via either the T1w contrast alone or in combination with a DTI/DTI-derived contrast. Results show that MMORF is the most consistently high-performing method across all domains-both in terms of accuracy and levels of distortion. MMORF is available as part of FSL, and its inputs and outputs are fully compatible with existing workflows. We believe that MMORF will be a valuable tool for the neuroimaging community, regardless of the domain of any downstream analysis, providing state-of-the-art registration performance that integrates into the rich and widely adopted suite of analysis tools in FSL.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-30"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7617249/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142878766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
EMG-projected MEG high-resolution source imaging of human motor execution: Brain-muscle coupling above movement frequencies.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-01-09 eCollection Date: 2024-01-01 DOI: 10.1162/imag_a_00056
Ming-Xiong Huang, Deborah L Harrington, Annemarie Angeles-Quinto, Zhengwei Ji, Ashley Robb-Swan, Charles W Huang, Qian Shen, Hayden Hansen, Jared Baumgartner, Jaqueline Hernandez-Lucas, Sharon Nichols, Joanna Jacobus, Tao Song, Imanuel Lerman, Maksim Bazhenov, Giri P Krishnan, Dewleen G Baker, Ramesh Rao, Roland R Lee
{"title":"EMG-projected MEG high-resolution source imaging of human motor execution: Brain-muscle coupling above movement frequencies.","authors":"Ming-Xiong Huang, Deborah L Harrington, Annemarie Angeles-Quinto, Zhengwei Ji, Ashley Robb-Swan, Charles W Huang, Qian Shen, Hayden Hansen, Jared Baumgartner, Jaqueline Hernandez-Lucas, Sharon Nichols, Joanna Jacobus, Tao Song, Imanuel Lerman, Maksim Bazhenov, Giri P Krishnan, Dewleen G Baker, Ramesh Rao, Roland R Lee","doi":"10.1162/imag_a_00056","DOIUrl":"10.1162/imag_a_00056","url":null,"abstract":"<p><p>Magnetoencephalography (MEG) is a non-invasive functional imaging technique for pre-surgical mapping. However, movement-related MEG functional mapping of primary motor cortex (M1) has been challenging in presurgical patients with brain lesions and sensorimotor dysfunction due to the large numbers of trials needed to obtain adequate signal to noise. Moreover, it is not fully understood how effective the brain communication is with the muscles at frequencies above the movement frequency and its harmonics. We developed a novel Electromyography (EMG)-projected MEG source imaging technique for localizing early-stage (-100 to 0 ms) M1 activity during ~l min recordings of left and right self-paced finger movements (~1 Hz). High-resolution MEG source images were obtained by projecting M1 activity towards the skin EMG signal without trial averaging. We studied delta (1-4 Hz), theta (4-7 Hz), alpha (8-12 Hz), beta (15-30 Hz), gamma (30-90 Hz), and upper-gamma (60-90 Hz) bands in 13 healthy participants (26 datasets) and three presurgical patients with sensorimotor dysfunction. In healthy participants, EMG-projected MEG accurately localized M1 with high accuracy in delta (100.0%), theta (100.0%), and beta (76.9%) bands, but not alpha (34.6%) or gamma/upper-gamma (0.0%) bands. Except for delta, all other frequency bands were above the movement frequency and its harmonics. In three presurgical patients, M1 activity in the affected hemisphere was also accurately localized, despite highly irregular EMG movement patterns in one patient. Altogether, our EMG-projected MEG imaging approach is highly accurate and feasible for M1 mapping in presurgical patients. The results also provide insight into movement-related brain-muscle coupling above the movement frequency and its harmonics.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":"1-20"},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11403128/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142302597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Denoising task-correlated head motion from motor-task fMRI data with multi-echo ICA.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-01-01 Epub Date: 2024-01-05 DOI: 10.1162/imag_a_00057
Neha A Reddy, Kristina M Zvolanek, Stefano Moia, César Caballero-Gaudes, Molly G Bright
{"title":"Denoising task-correlated head motion from motor-task fMRI data with multi-echo ICA.","authors":"Neha A Reddy, Kristina M Zvolanek, Stefano Moia, César Caballero-Gaudes, Molly G Bright","doi":"10.1162/imag_a_00057","DOIUrl":"https://doi.org/10.1162/imag_a_00057","url":null,"abstract":"<p><p>Motor-task functional magnetic resonance imaging (fMRI) is crucial in the study of several clinical conditions, including stroke and Parkinson's disease. However, motor-task fMRI is complicated by task-correlated head motion, which can be magnified in clinical populations and confounds motor activation results. One method that may mitigate this issue is multi-echo independent component analysis (ME-ICA), which has been shown to separate the effects of head motion from the desired blood oxygenation level dependent (BOLD) signal but has not been tested in motor-task datasets with high amounts of motion. In this study, we collected an fMRI dataset from a healthy population who performed a hand grasp task with and without task-correlated amplified head motion to simulate a motor-impaired population. We analyzed these data using three models: single-echo (SE), multi-echo optimally combined (ME-OC), and ME-ICA. We compared the models' performance in mitigating the effects of head motion on the subject level and group level. On the subject level, ME-ICA better dissociated the effects of head motion from the BOLD signal and reduced noise. Both ME models led to increased t-statistics in brain motor regions. In scans with high levels of motion, ME-ICA additionally mitigated artifacts and increased stability of beta coefficient estimates, compared to SE. On the group level, all three models produced activation clusters in expected motor areas in scans with both low and high motion, indicating that group-level averaging may also sufficiently resolve motion artifacts that vary by subject. These findings demonstrate that ME-ICA is a useful tool for subject-level analysis of motor-task data with high levels of task-correlated head motion. The improvements afforded by ME-ICA are critical to improve reliability of subject-level activation maps for clinical populations in which group-level analysis may not be feasible or appropriate, for example, in a chronic stroke cohort with varying stroke location and degree of tissue damage.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11426116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142333836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Arithmetic in two languages: Localizing simple multiplication processing in the adult bilingual brain.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-01-01 Epub Date: 2024-06-24 DOI: 10.1162/imag_a_00199
Vanessa R Cerda, Macarena Suárez-Pellicioni, James R Booth, Nicole Y Wicha
{"title":"Arithmetic in two languages: Localizing simple multiplication processing in the adult bilingual brain.","authors":"Vanessa R Cerda, Macarena Suárez-Pellicioni, James R Booth, Nicole Y Wicha","doi":"10.1162/imag_a_00199","DOIUrl":"https://doi.org/10.1162/imag_a_00199","url":null,"abstract":"<p><p>Verbally memorized multiplication tables are thought to create language-specific memories. Supporting this idea, bilinguals are typically faster and more accurate in the language in which they learned math (LA+) than in their other language (LA- ) . No study has yet revealed the underlying neurocognitive mechanisms explaining this effect, or the role of problem size in explaining the recruitment of different brain regions in LA+ and LA- . To fill this gap in the literature, 29 Spanish-English early bilingual adults, proficient in both languages, verified simple multiplication problems in each language while functional magnetic resonance imaging (fMRI) was acquired. More specifically, this study aimed to answer two questions: 1) Does LA+ recruit left superior and middle temporal gyri (STG/MTG) to a greater extent than LA- , reflecting more robust verbal representations of multiplication facts in LA+? In contrast, does LA- recruit the inferior frontal gyrus (IFG), reflecting more effortful retrieval, or the intraparietal sulcus (IPS), reflecting reliance on quantity processes? 2) Is there an interaction between language and problem size, where language differences are more pronounced for less practiced, large multiplication problems (e.g., 8 × 9) in comparison to more familiar, small problems (e.g., 2 × 3). Functional localizer tasks were used to identify hypothesis-driven regions of interest in verbal areas associated with verbal representations of arithmetic facts (left STG/MTG) and with the effortful retrieval of these facts (left IFG) and quantity areas engaged when calculation-based strategies are used (bilateral IPS). In planned analyses, no cluster reached significance for the direct comparison of languages (question 1) or for the interaction between language and problem size (question 2). An exploratory analysis found a main effect of problem size, where small problems recruited left STG/MTG and left IFG to a greater extent than large problems, suggesting greater verbal involvement for these problems in both languages. Additionally, large problems recruited right IPS to a greater extent than small problems, suggesting reliance on quantity processes. Our results suggest that proficient early bilingual adults engage similar brain regions in both languages, even for more difficult, large problems.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11426113/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142333835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ketosis regulates K+ ion channels, strengthening brain-wide signaling disrupted by age.
Imaging neuroscience (Cambridge, Mass.) Pub Date : 2024-01-01 Epub Date: 2024-05-08 DOI: 10.1162/imag_a_00163
Helena van Nieuwenhuizen, Anthony G Chesebro, Claire Polizu, Kieran Clarke, Helmut H Strey, Corey Weistuch, Lilianne R Mujica-Parodi
{"title":"Ketosis regulates K<sup>+</sup> ion channels, strengthening brain-wide signaling disrupted by age.","authors":"Helena van Nieuwenhuizen, Anthony G Chesebro, Claire Polizu, Kieran Clarke, Helmut H Strey, Corey Weistuch, Lilianne R Mujica-Parodi","doi":"10.1162/imag_a_00163","DOIUrl":"10.1162/imag_a_00163","url":null,"abstract":"<p><p>Aging is associated with impaired signaling between brain regions when measured using resting-state fMRI. This age-related destabilization and desynchronization of brain networks reverses itself when the brain switches from metabolizing glucose to ketones. Here, we probe the mechanistic basis for these effects. First, we confirmed their robustness across measurement modalities using two datasets acquired from resting-state EEG (<i>Lifespan</i>: standard diet, 20-80 years, N = 201; <i>Metabolic</i>: individually weight-dosed and calorically-matched glucose and ketone ester challenge, <math> <msub><mrow><mi>μ</mi></mrow> <mrow><mi>a</mi> <mi>g</mi> <mi>e</mi></mrow> </msub> <mo>=</mo> <mn>26.9</mn> <mo>±</mo> <mn>11.2</mn> <mspace></mspace> <mtext>years</mtext></math> , N = 36). Then, using a multiscale conductance-based neural mass model, we identified the unique set of mechanistic parameters consistent with our clinical data. Together, our results implicate potassium (K<sup>+</sup>) gradient dysregulation as a mechanism for age-related neural desynchronization and its reversal with ketosis, the latter finding of which is consistent with direct measurement of ion channels. As such, the approach facilitates the connection between macroscopic brain activity and cellular-level mechanisms.</p>","PeriodicalId":73341,"journal":{"name":"Imaging neuroscience (Cambridge, Mass.)","volume":"2 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633768/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0