{"title":"Enhancing transient motion-onset visual evoked potentials via stochastic resonance: Unimodal and cross-modal noise effects","authors":"Huanqing Zhang , Jun Xie , Hongwei Yu , Fangzhao Du , Zhiwei Jin , Yujie Chen","doi":"10.1016/j.jneumeth.2025.110589","DOIUrl":"10.1016/j.jneumeth.2025.110589","url":null,"abstract":"<div><h3>Background</h3><div>Motion-onset visual evoked potentials (mVEP) are transient brain responses triggered by sudden motion stimuli and are widely used in brain-computer interface (BCI) systems. However, the inherently weak nature of mVEP signals poses a significant challenge to achieving reliable and accurate BCI performance. Enhancing the signal quality of mVEP responses is therefore critical for improving system robustness and usability.</div></div><div><h3>New method</h3><div>This study introduces a novel approach based on stochastic resonance (SR) theory, where appropriate levels of noise can enhance the performance of nonlinear systems such as the brain. By applying auditory and visual noise of varying intensities alongside mVEP stimuli, both unimodal SR and cross-modal SR effects were investigated. The method examines the effects of these noise conditions on brain activation and classification performance in mVEP-BCI.</div></div><div><h3>Results</h3><div>The results show that moderate levels of auditory or visual noise significantly enhance the P2 component amplitude of mVEP and improve classification accuracy in BCI tasks. In contrast, excessive noise leads to suppression of neural responses, forming an inverted U-shaped relationship between noise intensity and mVEP amplitude.</div></div><div><h3>Comparison with existing methods</h3><div>Conventional mVEP enhancement techniques typically rely on signal processing methods such as spatial filtering or feature extraction. 
In comparison, the proposed noise modulation strategy directly enhances neural responses, offering a biologically inspired and computationally simple alternative that complements existing approaches.</div></div><div><h3>Conclusions</h3><div>Both unimodal and cross-modal SR effectively enhance mVEP responses and BCI performance. This strategy provides new insights into SR mechanisms and supports the development of more robust mVEP-BCI systems.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110589"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145155537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
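The inverted U-shaped noise–response relationship described above is the signature of stochastic resonance. It can be illustrated with a toy threshold detector (a generic SR demonstration under assumed parameters, not the paper's mVEP paradigm): a subthreshold periodic signal is recovered best at an intermediate noise level.

```python
import numpy as np

def detection_score(noise_sd, threshold=1.0, amp=0.6, n=20000, seed=0):
    """Correlation between a subthreshold periodic signal and the
    thresholded (detected) output at a given noise intensity."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signal = amp * np.sin(2 * np.pi * t / 200)   # peak stays below threshold
    noisy = signal + rng.normal(0.0, noise_sd, n)
    detected = (noisy > threshold).astype(float)  # simple threshold detector
    if detected.std() == 0:                       # no crossings at all
        return 0.0
    return float(np.corrcoef(signal, detected)[0, 1])

# Low, moderate, and high noise: moderate noise should win, tracing
# the inverted-U characteristic of stochastic resonance.
scores = {sd: detection_score(sd) for sd in (0.05, 0.4, 3.0)}
```

With near-zero noise the signal never crosses threshold; with excessive noise the output is dominated by random crossings; moderate noise lets the signal's peaks cross preferentially.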
{"title":"Automated EEG signal processing: A comprehensive investigation into preprocessing techniques and sub-band extraction for enhanced brain-computer interface applications","authors":"Venkata Phanikrishna Balam","doi":"10.1016/j.jneumeth.2025.110561","DOIUrl":"10.1016/j.jneumeth.2025.110561","url":null,"abstract":"<div><div>The Electroencephalogram (EEG) is a vital physiological signal for monitoring brain activity and understanding neurological capacities, disabilities, and cognitive processes. Analyzing and classifying EEG signals are key to assessing an individual’s reactions to various stimuli. Manual EEG analysis is time-consuming and labor-intensive, necessitating automated tools for efficiency. Machine learning techniques often rely on preprocessing and segmentation methods to integrate automated classification into EEG signal processing, with EEG sub-band components (<em>δ</em>,<em>θ</em>,<em>α</em>,<em>β</em> and <em>γ</em>) playing a crucial role. This paper presents a comprehensive exploration of EEG preprocessing methods, with a specific focus on sub-band extraction techniques used in Brain-Computer Interface (BCI) applications. Various methods—including Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Finite Impulse Response (FIR) and Infinite Impulse Response (IIR) filters, and wavelet transforms (DWT, WPT)—are evaluated through qualitative and quantitative parametric analysis, along with a review of their practical applicability. 
The study also includes an application-based evaluation using an open-access EEG dataset for drowsiness detection.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110561"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145008387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
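The sub-band decomposition the review surveys can be sketched with a minimal FFT-masking approach (an idealized band-pass; the paper also evaluates FIR/IIR and wavelet filters, and the band edges below are conventional values rather than the paper's):

```python
import numpy as np

# Canonical EEG sub-band edges in Hz; exact boundaries vary across studies.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def fft_subbands(eeg, fs):
    """Split a 1-D EEG trace into sub-band signals by zeroing FFT bins
    outside each band (ideal band-pass via the real FFT)."""
    spec = np.fft.rfft(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
    out = {}
    for name, (lo, hi) in BANDS.items():
        masked = np.where((freqs >= lo) & (freqs < hi), spec, 0)
        out[name] = np.fft.irfft(masked, n=len(eeg))
    return out

# Example: a pure 10 Hz sine should land almost entirely in the alpha band.
fs = 256
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)
bands = fft_subbands(x, fs)
```

In practice FIR/IIR filters are usually preferred for streaming data, since the FFT mask requires the whole segment up front.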
{"title":"REVS: A new open-source platform for high-resolution analysis of rodent wheel running behavior","authors":"James L. Bonanno , Ciara F. O’Brien , William B.J. Cafferty","doi":"10.1016/j.jneumeth.2025.110581","DOIUrl":"10.1016/j.jneumeth.2025.110581","url":null,"abstract":"<div><h3>Background</h3><div>Rodent wheel running is widely used in neuroscience and preclinical research to assess locomotor function, recovery post-trauma or disease, circadian rhythms, and exercise physiology. However, most existing wheel-running systems offer limited metrics, lack flexibility in hardware, or require costly proprietary software, reducing their usefulness for detailed behavioral phenotyping—especially in models of injury or rehabilitation.</div></div><div><h3>New method</h3><div>We developed REVS (Revolution Evaluation and Visualization Software), a low-cost, open-source hardware and software platform for analyzing and visualizing rodent wheel running behavior. REVS captures wheel revolutions using Hall effect sensors and computes 13 day-level behavioral metrics along with detailed bout-level data. Users can interactively explore high-resolution temporal features and export data in Open Data Commons (ODC)-compatible formats. REVS supports customizable wheel types, facilitating use in animals with motor and/or sensory impairments.</div></div><div><h3>Results</h3><div>We validated REVS using a mouse model of partial spinal cord injury, where fine motor control is compromised. REVS detected impairments in 10 of 13 behavioral metrics post-injury, with varied recovery trajectories across measures. 
Principal component analysis revealed that recovery was closely linked to bout quality and intensity, rather than timing.</div></div><div><h3>Comparison with existing methods</h3><div>Unlike commercial and open-source systems, REVS offers more detailed metrics, customizable wheel compatibility, seamless integration with common vivarium hardware, integrated data visualizations, and ODC-compatible data export. It also supports flexible analysis across individuals and groups.</div></div><div><h3>Conclusions</h3><div>REVS provides a powerful, scalable tool for granular behavioral phenotyping in rodent studies, enhancing reproducibility and revealing insights into subtle locomotor changes associated with injury, recovery, and intervention.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110581"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145064877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
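Bout-level segmentation of the kind REVS reports can be sketched from raw revolution timestamps; the helper and its `gap` threshold below are illustrative assumptions, not REVS code:

```python
# Hypothetical helper: group Hall-effect-sensor revolution timestamps
# (in seconds) into running bouts separated by >= `gap` seconds of rest.
def segment_bouts(rev_times, gap=60.0):
    bouts = []
    for t in sorted(rev_times):
        if bouts and t - bouts[-1][-1] < gap:
            bouts[-1].append(t)      # continue the current bout
        else:
            bouts.append([t])        # rest gap exceeded: start a new bout
    # Summarize each bout: start time, duration, revolution count
    return [{"start": b[0], "duration": b[-1] - b[0], "revs": len(b)}
            for b in bouts]

# Seven revolutions with a ~5 min pause yield two bouts (4 revs, then 3).
bouts = segment_bouts([0, 1, 2, 3, 300, 301, 302])
```

Day-level metrics (total revolutions, bout count, mean bout intensity) then fall out as aggregates over such bout records.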
{"title":"Assessing attentiveness and cognitive engagement across tasks using video-based action understanding in non-human primates","authors":"Sin-Man Cheung , Adam Neumann , Thilo Womelsdorf","doi":"10.1016/j.jneumeth.2025.110597","DOIUrl":"10.1016/j.jneumeth.2025.110597","url":null,"abstract":"<div><h3>Background</h3><div>Distractibility and attentiveness are cognitive states that are expressed through observable behavior, but how behavioral features can be used to quantify these cognitive states has remained poorly understood. Video-based analysis promises to be a versatile tool to quantify the behavioral features that reflect subject-specific distractibility and attentiveness and are diagnostic of cognitive states.</div></div><div><h3>New method</h3><div>We describe an analysis pipeline that classifies cognitive states using a 2-camera set-up for video-based estimation of attentiveness and screen engagement in nonhuman primates performing cognitive tasks. The procedure reconstructs 3D poses from 2D labeled DeepLabCut videos, reconstructs the head/yaw orientation relative to a task screen, and arm/hand/wrist engagements with task objects, to segment behavior into an attentiveness and engagement score.</div></div><div><h3>Results</h3><div>Performance of different cognitive tasks was robustly classified from video within a few frames, reaching > 90 % decoding accuracy with ≤ 3 min long time segments. The analysis procedure allows adjusting thresholds for segmenting subject-specific movements for a time-resolved scoring of attentiveness and screen engagement.</div></div><div><h3>Comparison with existing methods</h3><div>Current methods also extract poses and segment action units; however, they haven't been combined into a framework that enables subject-adjusted thresholding for specific task contexts. 
This integration is needed for inferring cognitive state variables and differentiating performance across various tasks.</div></div><div><h3>Conclusion</h3><div>The proposed method integrates video segmentation, scoring of attentiveness and screen engagement, and classification of task performance at high temporal resolution. This integrated framework provides a tool for assessing attention functions from video.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110597"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145267859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
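The head/yaw-relative-to-screen computation described above reduces to the angle between a head-direction vector (derived from 3D keypoints) and the screen normal. A minimal sketch with hypothetical keypoint names, not the paper's labels:

```python
import numpy as np

def yaw_to_screen(nose, head_center, screen_normal=(0.0, 0.0, 1.0)):
    """Angle in degrees between the head direction (head center -> nose)
    and the screen normal; small angles suggest screen-directed orientation.
    Keypoint names and the screen normal are illustrative assumptions."""
    v = np.asarray(nose, float) - np.asarray(head_center, float)
    n = np.asarray(screen_normal, float)
    cosang = v @ n / (np.linalg.norm(v) * np.linalg.norm(n))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Head pointing straight at the screen vs. 90 degrees away:
facing = yaw_to_screen((0, 0, 2), (0, 0, 1))    # -> 0.0
averted = yaw_to_screen((1, 0, 1), (0, 0, 1))   # -> 90.0
```

A subject-specific attentiveness score can then threshold this angle per frame, which is where the adjustable, subject-adjusted thresholding the abstract mentions would enter.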
{"title":"Optimized protocols for the simultaneous isolation of primary brain microvascular endothelial cells and primary neurons with high purity and functional maturation from individual newborn mice","authors":"Fating Zhou , Rui Huang , Jia Xie , Junyu Jiang , Xuemei Jiang , Yunfei Xiang , Guoxiang Zhang , Hao Li , Shunjie Zhang , Shanmu Ai , Yu Ma","doi":"10.1016/j.jneumeth.2025.110568","DOIUrl":"10.1016/j.jneumeth.2025.110568","url":null,"abstract":"<div><h3>Background</h3><div>Current neurovascular unit isolation requires processing brain microvascular endothelial cells (BMECs) and neurons from separate animals, preventing concurrent analysis of neurovascular crosstalk within identical genetic/physiological contexts.</div></div><div><h3>New methods</h3><div>We developed an enzymatic digestion/bovine serum albumin density gradient technique that enabled the simultaneous isolation of neural tissue and microvascular segments from individual mice. The neural tissue was filtered and centrifuged for primary cortical neuron culture on poly-L-lysine-coated plates. Microvascular segments were subjected to collagenase/dispase digestion and Percoll gradient centrifugation for BMEC culture on fibronectin-coated plates. Cellular purity was quantified via immunofluorescence, and BMEC functionality was assessed by tight junction expression, transendothelial electrical resistance (TEER), tubulogenesis, and secretory function. Neuronal characteristics were evaluated using morphometric analysis, detection of neurotransmitter secretion, and sensitivity to oxygen-glucose deprivation (OGD).</div></div><div><h3>Results</h3><div>High-purity BMECs and primary cortical neurons were successfully isolated by enzymatic digestion combined with density-gradient centrifugation. Primary BMECs exhibited fibronectin-dependent adhesion during initial plating, with a significantly enhanced adhesive capacity observed in passages 2 and 3. 
Tubulogenesis assays demonstrated superior tube-forming capacity of primary BMECs compared with b.<em>E</em>nd3 cells. TEER and nitric oxide (NO) secretion decreased by 38.31 % and 26.1 %, respectively, following OGD. Primary cortical neurons displayed a characteristic somatic morphology with extensive neurite arborization and heightened sensitivity to OGD. The GABA level in the OGD group was 2.01 times higher than that in the control group and decreased by 52.5 % after reoxygenation.</div></div><div><h3>Comparison with existing methods</h3><div>Unlike conventional multi-animal protocols that introduce inter-individual variability, our single-mouse approach eliminates genetic confounders while reducing processing time by 40–60 % and yielding higher purity. Furthermore, primary BMECs and neurons maintained their original characteristics, including morphology, angiogenic capacity, and secretory function.</div></div><div><h3>Conclusion</h3><div>This novel platform reliably co-isolated functional primary BMECs and cortical neurons from individual mice, providing unprecedented fidelity for modeling neurovascular interactions in disease contexts.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110568"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145008315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “Reconstructing time-domain data from discontinuous Percept™ PC and RC output using external data acquisition and linear filtering” [J. Neurosci. Methods 424 (2025) 110566]","authors":"Jinxin Chen , Mandy M. Koop , Kenneth B. Baker , Jay L. Alberts , James Y. Liao","doi":"10.1016/j.jneumeth.2025.110596","DOIUrl":"10.1016/j.jneumeth.2025.110596","url":null,"abstract":"","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110596"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145212936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cortico-cortical evoked potentials: Automated localization and classification of early and late responses","authors":"Sahaj A. Patel , Helen Brinyark , Caila Coyne , Noshin Tasnia , Rebekah Chatfield , Erin C. Conrad , Benjamin Cox , Arie Nakhmani , Rachel J. Smith","doi":"10.1016/j.jneumeth.2025.110571","DOIUrl":"10.1016/j.jneumeth.2025.110571","url":null,"abstract":"<div><h3>Background</h3><div>Cortico-cortical evoked potentials (CCEPs), elicited via single-pulse electrical stimulation, are used to map brain networks. These responses comprise early (N1) and late (N2) components, which reflect direct and indirect cortical connectivity. Reliable identification of these components remains difficult due to substantial variability in amplitude, phase, and timing. Traditional statistical methods often struggle to localize N1 and N2 peaks under such conditions.</div></div><div><h3>New Method</h3><div>A deep learning framework based on You Only Look Once (YOLO v10) was developed. Each CCEP epoch was converted into a two-dimensional image using Matplotlib and subsequently analyzed by the YOLO model to localize and classify N1 and N2 components. Detected image coordinates were mapped back to corresponding time-series indices for clinical interpretation.</div></div><div><h3>Results</h3><div>The framework was trained and validated on intracranial EEG data from 9 patients with drug-resistant epilepsy (DRE) at the University of Alabama at Birmingham (UAB), achieving a mean average precision (mAP) of 0.928 at an Intersection over Union (IoU) threshold of 0.5 on the test dataset. Generalizability was assessed on more than 4000 unannotated epochs obtained from 5 additional UAB patients and 10 patients at the Hospital of the University of Pennsylvania.</div></div><div><h3>Comparison with existing methods</h3><div>To our knowledge, no existing deep learning methods localize and classify both N1 and N2 components, limiting comparison. 
Current approaches rely on manual identification within fixed windows, introducing inter-rater variability and often missing inter-individual differences.</div></div><div><h3>Conclusion</h3><div>The proposed framework accurately detects and classifies CCEP components, offering a robust, automated alternative to manual analysis.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110571"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145015672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
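Mapping detected image coordinates back to time-series indices, as the pipeline above does, is a linear rescaling. A minimal sketch under the assumption that the plotted epoch spans the full image width (names and parameters are illustrative, not the paper's implementation):

```python
def pixel_to_sample(x_px, img_width, n_samples, t0=0.0, fs=1.0):
    """Map a detected bounding-box x-coordinate back to the epoch's
    time-series index and latency in seconds, assuming the epoch was
    plotted edge-to-edge across the image width."""
    idx = round(x_px / img_width * (n_samples - 1))
    return idx, t0 + idx / fs

# The image midpoint maps to sample 500, i.e. 0.5 s at 1 kHz.
idx, t = pixel_to_sample(x_px=320, img_width=640, n_samples=1001, fs=1000)
```

In a real pipeline the plot margins matter: if the axes do not span the full image, the left/right padding in pixels must be subtracted before rescaling.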
{"title":"High-quality decoding of RGB images from the neuronal signals of the pigeon optic tectum","authors":"Zhen Dong, Yingjie Xiang, Songwei Wang","doi":"10.1016/j.jneumeth.2025.110595","DOIUrl":"10.1016/j.jneumeth.2025.110595","url":null,"abstract":"<div><h3>Background</h3><div>Decoding neural activity to reverse-engineer sensory inputs advances understanding of neural encoding and boosts brain-computer interface and visual prosthesis technology. A major challenge is high-quality RGB image reconstruction from natural scenes, which this study tackles using pigeon optic tectum neurons.</div></div><div><h3>New method</h3><div>We built a neural response dataset via microelectrode arrays capturing tectal neurons' ON-OFF responses to RGB images. A modular decoding algorithm, integrating a convolutional encoding network, linear decoder, and image enhancement network, enabled inverse RGB image reconstruction from neural signals.</div></div><div><h3>Results</h3><div>Experimental results confirmed high-quality RGB image reconstruction by the proposed algorithm. For all test set reconstructions, average metrics were: correlation coefficient (R) of 0.853, structural similarity index (SSIM) of 0.618, peak signal-to-noise ratio (PSNR) of 19.94 dB, and feature similarity index (FSIMc) of 0.801. 
These results confirm accurate recapitulation of both color and contour details of the original images.</div></div><div><h3>Comparison with existing methods</h3><div>In terms of key quantitative metrics, the proposed algorithm achieves a significant improvement over traditional linear reconstruction methods, with the correlation coefficient (R) increased by 12.65 %, the structural similarity index (SSIM) increased by 38.92 %, the peak signal-to-noise ratio (PSNR) increased by 12.65 %, and the feature similarity index (FSIMc) increased by 9.28 %.</div></div><div><h3>Conclusions</h3><div>This research provides a novel technical pathway for high-quality visual neural decoding, with robust experimental metrics validating its effectiveness. It also offers experimental evidence to support investigations into the information processing mechanisms of the avian visual pathway.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110595"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145206660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
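Two of the reported reconstruction metrics, PSNR and the correlation coefficient R, can be computed directly with NumPy. A generic sketch on synthetic images, not the paper's evaluation code:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    its reconstruction (pixel values assumed scaled to [0, peak])."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(rec, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def corr(ref, rec):
    """Pixel-wise Pearson correlation coefficient (the R metric)."""
    return float(np.corrcoef(np.ravel(ref), np.ravel(rec))[0, 1])

# Synthetic reference image and a lightly corrupted "reconstruction":
rng = np.random.default_rng(1)
ref = rng.random((32, 32, 3))
rec = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
```

SSIM and FSIMc are structural metrics that compare local luminance, contrast, and feature maps rather than raw pixel error, so they require dedicated implementations.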
{"title":"Wireless high-density electroencephalography in the perioperative setting","authors":"Nathan Runstadler , Selena Martinez , UnCheol Lee , Duan Li , Kourosh Maboudi , George A. Mashour , Phillip E. Vlisides","doi":"10.1016/j.jneumeth.2025.110584","DOIUrl":"10.1016/j.jneumeth.2025.110584","url":null,"abstract":"<div><h3>Background</h3><div>Electroencephalographic (EEG) systems used in the operating room are constrained to frontal channels, providing limited neuroanatomical insights into altered perioperative brain states. Our objective is to present pragmatic strategies for placing whole-scalp, high-density EEG systems perioperatively that enable more comprehensive analysis.</div></div><div><h3>New method</h3><div>We present the successful implementation of wireless high-density (72-channel) EEG in the perioperative setting for the ongoing Caffeine, Postoperative Delirium, and Change in Outcomes after Surgery (CAPACHINOS-2) clinical trial (NCT05574400). Placement time was calculated, impedance and data quality were assessed, and data acquisition and analysis pipelines were established. Lastly, proof-of-principle analyses using source localization were conducted.</div></div><div><h3>Results</h3><div>High-density wireless EEG data have been successfully acquired for n = 45 participants, with median (interquartile range) placement time of 34 (25 – 52) minutes. Data acquisition was supported by an established workflow, and a subsequent data processing pipeline was used to evaluate channel quality, remove artifacts, and generate proof-of-principle high-density analyses.</div></div><div><h3>Comparison with existing methods</h3><div>Compared to a low-density system used for a similar, previous clinical trial (n = 54 participants), preoperative median impedance values (kΩ) were lower with the high-density system (13 [11–16] vs. 39 [28–47] kΩ; p < 0.001). 
Additionally, proof-of-principle analysis demonstrates a more complex connectivity matrix and broader distribution of cortical alpha rhythms after induction of general anesthesia with the high-density system, highlighting an expanded capacity for neurophysiologic analysis.</div></div><div><h3>Conclusions</h3><div>Wireless high-density EEG serves as a feasible, promising tool to advance understanding of altered perioperative brain states by providing high spatiotemporal resolution of cortical oscillations.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110584"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145102947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
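The median (interquartile range) impedance summaries reported above can be reproduced with a small helper; the values below are illustrative, not trial data:

```python
import numpy as np

def median_iqr(values):
    """Median and interquartile range (Q1, Q3), the summary statistic
    used for channel impedances and placement times."""
    med = np.median(values)
    q1, q3 = np.percentile(values, [25, 75])
    return med, (q1, q3)

# Illustrative per-channel impedances in kOhm for two hypothetical caps:
hd = [11, 12, 13, 14, 16]
ld = [28, 35, 39, 44, 47]
```

Reporting the IQR alongside the median, as the trial does, conveys spread without assuming normally distributed impedances.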
{"title":"High efficiency labeling of nerve fibers in cleared tissue for light-sheet microscopy","authors":"Marta Rojas-Rodríguez , Elisa Imbimbo , Claudia Capitini , Irene Costantini , Giacomo Mazzamuto , Francesco Saverio Pavone , Ludovico Silvestri , Martino Calamai","doi":"10.1016/j.jneumeth.2025.110567","DOIUrl":"10.1016/j.jneumeth.2025.110567","url":null,"abstract":"<div><h3>Background</h3><div>Tissue clearing techniques combined with light-sheet fluorescence microscopy (LSFM) enable high-resolution 3D imaging of biological structures without physical sectioning. While widely used in neuroscience to determine brain architecture and connectomics, their application for spinal cord mapping remains more limited, posing challenges for studying demyelinating diseases like multiple sclerosis. Myelin visualization in cleared tissues is particularly difficult due to the lipid-removal nature of most clearing protocols, and alternative immunolabeling approaches have failed to achieve satisfactory results.</div></div><div><h3>New method</h3><div>To overcome these limitations, we developed a novel protocol named HELF (High Efficiency Labeling of Fibers), which takes advantage of a fluorescently labeled aminosterol, trodusquemine, which displays a strong affinity for cholesterol-rich membranes, and a supplementary round of fixation with glutaraldehyde.</div></div><div><h3>Results and comparison with existing methods</h3><div>The labeling with trodusquemine was tested in combination with various established tissue clearing techniques and compared with HELF, which proved to be the best approach for providing high-brightness myelin staining in mouse spinal cord and brain, and in human brain samples. 
Finally, we demonstrated that HELF can be used to stain and image with LSFM a whole cleared mouse spinal cord.</div></div><div><h3>Conclusions</h3><div>Our data support the potential use of HELF coupled to LSFM as a practical tool for the evaluation of novel therapeutics for remyelination in preclinical models of CNS diseases.</div></div>","PeriodicalId":16415,"journal":{"name":"Journal of Neuroscience Methods","volume":"424 ","pages":"Article 110567"},"PeriodicalIF":2.3,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145008354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}