Journal of Neuroscience Methods: Latest Articles

Validating a novel paradigm for simultaneously assessing mismatch response and frequency-following response to speech sounds
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 412, Article 110277 | Pub Date: 2024-09-06 | DOI: 10.1016/j.jneumeth.2024.110277
Tzu-Han Zoe Cheng, Tian Christina Zhao

Background: Speech sounds are processed in the human brain through intricate, interconnected cortical and subcortical structures. Two neural signatures, one of largely cortical origin (the mismatch response, MMR) and one of largely subcortical origin (the frequency-following response, FFR), are critical for assessing speech processing, as both are sensitive to high-level linguistic information. However, the recording prerequisites for the MMR and the FFR differ, making them difficult to acquire simultaneously.

New method: Using a new paradigm, our study aims to capture both signals concurrently and to test them against two criteria: (1) replicating the effect that the MMR to a native speech contrast differs significantly from the MMR to a nonnative speech contrast, and (2) demonstrating that FFRs to three speech sounds can be reliably differentiated.

Results: Using EEG from 18 adults, we observed a decoding accuracy of 72.2% between the MMRs to native vs. nonnative speech contrasts, with a significantly larger native MMR in the expected time window. Similarly, a significant decoding accuracy of 79.6% was found for the FFR. A high stimulus-to-response cross-correlation at a 9 ms lag suggested that the FFR closely tracks the speech sounds (see the sketch below).

Comparison with existing methods: These findings demonstrate that our paradigm reliably captures both MMR and FFR concurrently, replicating and extending past research with far fewer trials (MMR: 50 trials; FFR: 200 trials) and a shorter experiment time (12 minutes).

Conclusions: This study paves the way to understanding cortical-subcortical interactions in speech and language processing, with the ultimate goal of developing an assessment tool specific to early development.
Citations: 0
A novel method for sparse dynamic functional connectivity analysis from resting-state fMRI
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110275 | Pub Date: 2024-09-04 | DOI: 10.1016/j.jneumeth.2024.110275
Houxiang Wang, Jiaqing Chen, Zihao Yuan, Yangxin Huang, Fuchun Lin

Background: There is growing interest in understanding the dynamic functional connectivity (DFC) between distributed brain regions. However, reliably estimating the temporal dynamics from resting-state functional magnetic resonance imaging (rs-fMRI) remains challenging due to the limitations of current methods.

New methods: We propose a new model, HDP-HSMM-BPCA, for sparse DFC analysis of high-dimensional rs-fMRI data. It is a temporal extension of probabilistic principal component analysis built on a Bayesian nonparametric hidden semi-Markov model (HSMM). Specifically, we place a hierarchical Dirichlet process (HDP) prior on the model to remove the parametric assumptions of the standard HMM framework. A key advantage is the model's ability to automatically infer the state-specific latent space dimensionality within the Bayesian formulation.

Results: Experiments on synthetic data show that our model outperforms competing models with relatively higher estimation accuracy. In addition, the proposed framework is applied to real rs-fMRI data to explore sparse DFC patterns. The findings indicate a time-varying underlying structure and sparse DFC patterns in high-dimensional rs-fMRI data.

Comparison with existing methods: Compared with existing HMM-based DFC approaches, our method overcomes the limitations of the standard HMM. The observation model of HDP-HSMM-BPCA can discover the underlying temporal structure of rs-fMRI data, and the associated sparse DFC construction algorithm provides a scheme for estimating sparse DFC.

Conclusion: We describe a new computational framework for sparse DFC analysis that discovers the underlying temporal structure of rs-fMRI data, which will facilitate the study of brain functional connectivity.
Citations: 0
Cross-subject emotion recognition in brain-computer interface based on frequency band attention graph convolutional adversarial neural networks
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110276 | Pub Date: 2024-09-03 | DOI: 10.1016/j.jneumeth.2024.110276
Shinan Chen, Yuchen Wang, Xuefen Lin, Xiaoyong Sun, Weihua Li, Weifeng Ma

Background: Emotion is an important area in neuroscience. Cross-subject emotion recognition based on electroencephalogram (EEG) data is challenging because of physiological differences between subjects. The domain gap, i.e., the different distributions of EEG data across subjects, has attracted considerable attention in cross-subject emotion recognition.

Comparison with existing methods: This study focuses on narrowing the domain gap between subjects by exploiting emotional frequency bands and the relationship information between EEG channels. Emotional frequency band features represent the energy distribution of EEG data in different frequency ranges, while relationship information between EEG channels captures the spatial distribution of the EEG data.

New method: To achieve this, we propose the Frequency Band Attention Graph convolutional Adversarial neural Network (FBAGAN). The model comprises three components: a feature extractor, a classifier, and a discriminator. The feature extractor consists of a frequency band attention layer and a graph convolutional neural network: the attention mechanism extracts frequency band information by assigning weights, and the graph convolutional network extracts relationship information between EEG channels by modeling the graph structure. The discriminator then helps minimize the gap in frequency and relationship information between the source and target domains, improving the model's ability to generalize.

Results: FBAGAN is extensively tested on the SEED, SEED-IV, and DEAP datasets. The accuracy and standard deviation are 88.17% and 4.88 on the SEED dataset, and 77.35% and 3.72 on the SEED-IV dataset. On the DEAP dataset, the model achieves 69.64% for arousal and 65.18% for valence. These results outperform most existing models.

Conclusions: The experiments indicate that FBAGAN effectively addresses the challenges of transferring across the EEG channel domain and the frequency band domain, leading to improved performance.
Citations: 0
Reconstruction of natural images from human fMRI using a three-stage multi-level deep fusion model
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110269 | Pub Date: 2024-08-31 | DOI: 10.1016/j.jneumeth.2024.110269
Lu Meng, Zhenxuan Tang, Yangqian Liu

Background: Image reconstruction is a critical task in brain decoding research, primarily using functional magnetic resonance imaging (fMRI) data. However, owing to challenges such as the limited number of samples in fMRI datasets, reconstruction quality often remains poor.

New method: We propose a three-stage multi-level deep fusion model (TS-ML-DFM). The model employs a three-stage training process encompassing image encoders, generators, discriminators, and fMRI encoders. The method incorporates distinct supplementary features derived separately from depth images and original images, and integrates several components, including a random shift module, a dual attention module, and a multi-level feature fusion module.

Results: In both qualitative and quantitative comparisons on the Horikawa17 and VanGerven10 datasets, our method exhibited excellent performance.

Comparison with existing methods: On the primary Horikawa17 dataset, our method was compared with other leading methods on the following metrics: average hash value, histogram similarity, mutual information, structural similarity accuracy, AlexNet(2), AlexNet(5), and pairwise human perceptual similarity accuracy. Relative to the second-ranked result on each metric, the proposed method achieved improvements of 0.99%, 3.62%, 3.73%, 2.45%, 3.51%, 0.62%, and 1.03%, respectively. On the SwAV top-level semantic metric, it achieved a substantial improvement of 10.53% over the second-ranked pixel-level reconstruction method.

Conclusions: When applied to decoding brain visual patterns from fMRI data, the proposed TS-ML-DFM outperforms previous algorithms, facilitating further advances in this field.

Open access PDF: https://www.sciencedirect.com/science/article/pii/S0165027024002140/pdfft?md5=cf2903d860a78e3a684efb8d1cc769d2&pid=1-s2.0-S0165027024002140-main.pdf
Citations: 0
Novel comprehensive analysis of skilled reaching and grasping behavior in adult rats
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110271 | Pub Date: 2024-08-31 | DOI: 10.1016/j.jneumeth.2024.110271
Pawan Sharma, Yixuan Du, Kripi Singapuri, Debbi Moalemi Delafraz, Prithvi K. Shah

Background: Reaching and grasping (R&G) in rats is commonly used as an outcome measure for investigating the effectiveness of rehabilitation or treatment strategies to recover forelimb function after spinal cord injury. Kinematic analysis has so far been limited to wrist and digit movements; kinematic profiles of the more proximal body segments, which play an equally crucial role in successfully executing the task, remain unexplored. In addition, understanding of the activity of different forelimb muscles, their interactions, and their correlation with R&G kinematics is scarce.

New method: In this work, we develop and discuss novel methodologies to comprehensively assess and quantify the 3D kinematics of the proximal and distal forelimb joints, along with the associated muscle activity, during R&G movements in adult rats.

Results: Our data show that the R&G phases identified with the novel kinematic- and EMG-based approach correlate with the well-established descriptors of R&G stages derived from the Whishaw scoring system. The developed methodology also describes the temporal activity of individual muscles and the associated mechanical and physiological properties during different phases of the motor task.

Comparison with existing methods: R&G phases and their sub-components are identified and quantified using the developed kinematic- and EMG-based approach. Importantly, the identified phases closely match the well-established qualitative descriptors of the R&G task proposed by Whishaw and colleagues.

Conclusions: The present work provides an in-depth, objective analysis of the kinematics and EMG activity of R&G behavior, paving the way to a standardized approach for assessing this critical rodent motor function in future studies.
Citations: 0
High quality, high throughput, and low-cost simultaneous video recording of 60 animals in operant chambers using PiRATeMC
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110270 | Pub Date: 2024-08-31 | DOI: 10.1016/j.jneumeth.2024.110270
Jarryd Ramborger, Sumay Kalra, Joseph Mosquera, Alexander C.W. Smith, Olivier George

Background: Raspberry Pi-based recording devices for video analysis of drug self-administration studies have shown promise in affordability, customizability, and capacity to extract in-depth behavioral patterns. Yet most video recording systems are limited to a few cameras, making them incompatible with large-scale studies.

New method: We expanded the PiRATeMC (Pi-based Remote Acquisition Technology for Motion Capture) recording system by increasing its scale, modifying its code, and adding equipment to accommodate large-scale video acquisition, and we report data on throughput capability, video fidelity, device synchronicity, and comparisons between Raspberry Pi 3B+ and 4B models.

Results: Using PiRATeMC's default recording parameters resulted in minimal storage (~350 MB/h per camera), high throughput (< ~120 seconds per Pi), high video fidelity, and synchronicity within ~0.02 seconds, affording the ability to simultaneously record 60 animals in individual self-administration chambers over various session lengths at a fraction of commercial costs. No consequential differences were found between Raspberry Pi models.

Comparison with existing methods: This system acquires an order of magnitude more simultaneous video data than other recording systems, with lower storage needs and lower costs. We also report in-depth quantitative assessments of throughput, fidelity, and synchronicity, demonstrating real-time system capabilities.

Conclusions: The system can be fully installed by a single technician in a month's time and provides a scalable, low-cost, quality-assured procedure with a high degree of customization and synchronicity between recording devices, capable of recording a large number of subjects and timeframes with high turnover in a variety of species and settings.

Open access PDF: https://www.sciencedirect.com/science/article/pii/S0165027024002152/pdfft?md5=06fea00dd45e5f4ed19740e351731e89&pid=1-s2.0-S0165027024002152-main.pdf
Citations: 0
Pushing the boundaries of brain-computer interfacing (BCI) and neuron-electronics
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110274 | Pub Date: 2024-08-30 | DOI: 10.1016/j.jneumeth.2024.110274
Mohammed Seghir Guellil, Fatima Kies, Emad Kamil Hussein, Mohammad Shabaz, Robert E. Hampson

No abstract available.
Citations: 0
Small animal brain surgery with neither a brain atlas nor a stereotaxic frame
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110272 | Pub Date: 2024-08-28 | DOI: 10.1016/j.jneumeth.2024.110272
Shaked Ron, Hadar Beeri, Ori Shinover, Noam M. Tur, Jonathan Brokman, Ben Engelhard, Yoram Gutfreund

Background: Stereotaxic surgery is a cornerstone of brain research for the precise positioning of electrodes and probes, but its application is limited to species with available brain atlases and tailored stereotaxic frames. Addressing this limitation, we introduce an alternative technique for small animal brain surgery that requires neither an aligned brain atlas nor a stereotaxic frame.

New method: The new method requires an ex-vivo high-contrast MRI brain scan of one specimen and access to a micro-CT scanner. Miniature markers are attached to the skull, followed by CT scanning of the head. The MRI and CT images are then co-registered using standard image processing software, and the targets for brain recordings are marked in the MRI image. During surgery, the animal's head is stabilized in any convenient orientation, and the probe's 3D position and angle are tracked with a multi-camera system. We developed software that uses the on-skull markers as fiducial points to align the CT/MRI 3D model with the surgical positioning system (the underlying rigid-body fit is sketched below) and, in turn, instructs the surgeon how to move the probe to reach the targets within the brain.

Results: Our technique allows the execution of insertion tracks connecting two points in the brain. We successfully applied this method to Neuropixels probe positioning in owls, quails, and mice, demonstrating its versatility.

Comparison with existing methods: We present an alternative to traditional stereotaxic brain surgery that does not require established stereotaxic tools. This method is therefore especially advantageous for research in non-standard and novel animal models.
Citations: 0
NeuroQuantify – An image analysis software for detection and quantification of neuron cells and neurite lengths using deep learning
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110273 | Pub Date: 2024-08-27 | DOI: 10.1016/j.jneumeth.2024.110273
Ka My Dang, Yi Jia Zhang, Tianchen Zhang, Chao Wang, Anton Sinner, Piero Coronica, Joyce K.S. Poon

Background: The segmentation of cells and neurites in microscopy images of neuronal networks provides valuable quantitative information about neuron growth and neuronal differentiation, including the number of cells and neurites, neurite length, and neurite orientation. This information is essential for assessing the development of neuronal networks in response to extracellular stimuli, which is useful for studying neuronal structures, for example in research on neurodegenerative diseases and pharmaceuticals.

New method: We have developed NeuroQuantify, an open-source software package that uses deep learning to quickly and efficiently segment cells and neurites in phase contrast microscopy images.

Results: NeuroQuantify offers several key features: (i) automatic detection of cells and neurites; (ii) post-processing of the images for quantitative neurite length measurement based on segmentation of phase contrast microscopy images; and (iii) identification of neurite orientations.

Comparison with existing methods: NeuroQuantify overcomes some limitations of existing methods in the automatic and accurate analysis of neuronal structures. It has been developed for phase contrast images rather than fluorescence images. In addition to typical cell-counting functionality, NeuroQuantify also detects and counts neurites, measures neurite lengths, and produces the neurite orientation distribution.

Conclusions: We offer a valuable tool to assess network development rapidly and effectively. The user-friendly NeuroQuantify software can be installed and freely downloaded from GitHub at https://github.com/StanleyZ0528/neural-image-segmentation.

Open access PDF: https://www.sciencedirect.com/science/article/pii/S0165027024002188/pdfft?md5=0023c239982bf823c04acc4b3908a6ee&pid=1-s2.0-S0165027024002188-main.pdf
Citations: 0
Direct dorsal root ganglia (DRG) injection in mice for analysis of adeno-associated viral (AAV) gene transfer to peripheral somatosensory neurons
IF 2.7 | CAS Region 4 | Medicine
Journal of Neuroscience Methods, Vol. 411, Article 110268 | Pub Date: 2024-08-25 | DOI: 10.1016/j.jneumeth.2024.110268
Michael O'Donnell, Arjun Fontaine, John Caldwell, Richard Weir

Background: Delivering optogenetic genes to the peripheral sensory nervous system provides an efficient approach to studying and treating neurological disorders, and offers the potential to reintroduce sensory feedback for prosthesis users and those who have incurred other neuropathies. Adeno-associated viral (AAV) vectors are a common gene delivery method owing to their efficiency of gene transfer and minimal toxicity. AAVs can be designed to target specific tissues, with transduction efficacy determined by the combination of serotype and genetic promoter selection, as well as the location of vector administration. The dorsal root ganglia (DRGs) are collections of cell bodies of sensory neurons that project from the periphery to the central nervous system (CNS). The anatomy of DRGs makes them an ideal injection site for targeting somatosensory neurons in the peripheral nervous system (PNS).

Comparison with existing methods: Previous studies have detailed methods for direct DRG injection in rats and dorsal horn injection in mice. However, because of the size and anatomical differences between rats and strains of mice, only one other published method exists for AAV injection into murine DRGs for transduction of peripheral sensory neurons, and it uses a different methodology.

New method/Results: Here, we detail the materials and methods required to inject AAVs into the L3 and L4 DRGs of mice, as well as how to harvest the sciatic nerve and the L3/L4 DRGs for analysis. This methodology results in optogenetic expression in both the L3/L4 DRGs and the sciatic nerve, and can be adapted to inject any DRG.
Citations: 0