{"title":"Real-time, inline quantitative MRI enabled by scanner-integrated machine learning: a proof of principle with NODDI.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Purpose: </strong>The clinical feasibility and translation of many advanced quantitative MRI (qMRI) techniques are inhibited by their restriction to 'research mode', due to resource-intensive, offline parameter estimation. This work aimed to achieve 'clinical mode' qMRI, by real-time, inline parameter estimation with a trained neural network (NN) fully integrated into a vendor's image reconstruction environment, therefore facilitating and encouraging clinical adoption of advanced qMRI techniques.</p><p><strong>Methods: </strong>The Siemens Image Calculation Environment (ICE) pipeline was customised to deploy trained NNs for advanced diffusion MRI parameter estimation with Open Neural Network Exchange (ONNX) Runtime. Two fully-connected NNs were trained offline with data synthesised with the neurite orientation dispersion and density imaging (NODDI) model, using either conventionally estimated (NNMLE) or ground truth (NNGT) parameters as training labels. The strategy was demonstrated online with an in vivo acquisition and evaluated offline with synthetic test data.</p><p><strong>Results: </strong>NNs were successfully integrated and deployed natively in ICE, performing inline, whole-brain, in vivo NODDI parameter estimation in <10 seconds. DICOM parametric maps were exported from the scanner for further analysis, generally finding that NNMLE estimates were more consistent than NNGT with conventional estimates. Offline evaluation confirms that NNMLE has comparable accuracy and slightly better noise robustness than conventional fitting, whereas NNGT trades compromised accuracy for higher noise robustness.</p><p><strong>Conclusion: </strong>Real-time, inline parameter estimation with the proposed generalisable framework resolves a key practical barrier to clinical uptake of advanced qMRI methods and enables their efficient integration into clinical workflows.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288656/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving Drug Identification in Overdose Death Surveillance using Large Language Models.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The rising rate of drug-related deaths in the United States, largely driven by fentanyl, requires timely and accurate surveillance. However, critical overdose data are often buried in free-text coroner reports, leading to delays and information loss when coded into ICD (International Classification of Diseases)-10 classifications. Natural language processing (NLP) models may automate and enhance overdose surveillance, but prior applications have been limited. A dataset of 35,433 death records from multiple U.S. jurisdictions in 2020 was used for model training and internal testing. External validation was conducted using a novel separate dataset of 3,335 records from 2023-2024. Multiple NLP approaches were evaluated for classifying specific drug involvement from unstructured death certificate text. These included traditional single- and multi-label classifiers, as well as fine-tuned encoder-only language models such as Bidirectional Encoder Representations from Transformers (BERT) and BioClinicalBERT, and contemporary decoder-only large language models such as Qwen 3 and Llama 3. Model performance was assessed using macro-averaged F1 scores, and 95% confidence intervals were calculated to quantify uncertainty. Fine-tuned BioClinicalBERT models achieved near-perfect performance, with macro F1 scores >=0.998 on the internal test set. External validation confirmed robustness (macro F1=0.966), outperforming conventional machine learning, general-domain BERT models, and various decoder-only large language models. NLP models, particularly fine-tuned clinical variants like BioClinicalBERT, offer a highly accurate and scalable solution for overdose death classification from free-text reports. These methods can significantly accelerate surveillance workflows, overcoming the limitations of manual ICD-10 coding and supporting near real-time detection of emerging substance use trends.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288657/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-objective CFD optimization of an intermediate diffuser stage for PediaFlow pediatric ventricular assist device.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Background: </strong>Computational fluid dynamics (CFD) has become an essential design tool for ventricular assist devices (VADs), where the goal of maximizing performance often conflicts with biocompatibility. This tradeoff becomes even more pronounced in pediatric applications due to the stringent size constraints imposed by the smaller patient population. This study presents an automated CFD-driven shape optimization of a new intermediate diffuser stage for the PediaFlow pediatric VAD, positioned immediately downstream of the impeller to improve pressure recovery.</p><p><strong>Methods: </strong>We adopted a multi-objective optimization approach to maximize pressure recovery while minimizing hemolysis. The proposed diffuser stage was isolated from the rest of the flow domain, enabling efficient evaluation of over 450 design variants using a Sobol sequence, which yielded a Pareto front of non-dominated solutions. The selected best candidate was further refined using a local T-search algorithm. We then incorporated the optimized front diffuser into the full pump for CFD verification and in vitro validation.</p><p><strong>Results: </strong>We identified critical dependencies where longer blades increased pressure recovery but also hemolysis, while the wrap angle showed a strong parabolic relationship with pressure recovery but a monotonic relationship with hemolysis. Counterintuitively, configurations with fewer blades (2-3) consistently outperformed those with more blades (4-5) in both metrics. The optimized two-blade design enabled operation at lower pump speeds (14,000 vs 16,000 RPM), improving hydraulic efficiency from 26.3% to 32.5% and reducing hemolysis by 31%.</p><p><strong>Conclusion: </strong>This approach demonstrates that multi-objective CFD optimization can systematically explore complex design spaces while balancing competing priorities of performance and hemocompatibility for pediatric VADs.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288659/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging Swin Transformer for enhanced diagnosis of Alzheimer's disease using multi-shell diffusion MRI.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Objective: </strong>This study aims to support early diagnosis of Alzheimer's disease and detection of amyloid accumulation by leveraging the microstructural information available in multi-shell diffusion MRI (dMRI) data, using a vision transformer-based deep learning framework.</p><p><strong>Methods: </strong>We present a classification pipeline that employs the Swin Transformer, a hierarchical vision transformer model, on multi-shell dMRI data for the classification of Alzheimer's disease and amyloid presence. Key metrics from DTI and NODDI were extracted and projected onto 2D planes to enable transfer learning with ImageNet-pretrained models. To efficiently adapt the transformer to limited labeled neuroimaging data, we integrated Low-Rank Adaptation. We assessed the framework on diagnostic group prediction (cognitively normal, mild cognitive impairment, Alzheimer's disease dementia) and amyloid status classification.</p><p><strong>Results: </strong>The framework achieved competitive classification results within the scope of multi-shell dMRI-based features, with the best balanced accuracy of 95.2% for distinguishing cognitively normal individuals from those with Alzheimer's disease dementia using NODDI metrics. For amyloid detection, it reached 77.2% balanced accuracy in distinguishing amyloid-positive mild cognitive impairment/Alzheimer's disease dementia subjects from amyloid-negative cognitively normal subjects, and 67.9% for identifying amyloid-positive individuals among cognitively normal subjects. Grad-CAM-based explainability analysis identified clinically relevant brain regions, including the parahippocampal gyrus and hippocampus, as key contributors to model predictions.</p><p><strong>Conclusion: </strong>This study demonstrates the promise of diffusion MRI and transformer-based architectures for early detection of Alzheimer's disease and amyloid pathology, supporting biomarker-driven diagnostics in data-limited biomedical settings.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288649/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic Modeling of Antibody Kinetics Post Infection and Vaccination: A Markov Chain Approach.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Understanding the dynamics of antibody levels is crucial for characterizing the time-dependent response to immune events: either infections or vaccinations. The sequence and timing of these events significantly influence antibody level changes. Despite extensive interest in the topic in recent years and many experimental studies, the effect of immune event sequences on antibody levels is not well understood. Moreover, disease or vaccination prevalence in the population is time-dependent. This, alongside the complexities of personal antibody kinetics, makes it difficult to analyze a sample immune measurement from a population. As a solution, we design a rigorous mathematical characterization in terms of a time-inhomogeneous Markov chain model for event-to-event transitions coupled with a probabilistic framework for the post-event antibody kinetics of multiple immune events. We demonstrate that this is an ideal model for immune event sequences, referred to as personal trajectories. This novel modeling framework surpasses the susceptible-infected-recovered (SIR) characterizations by rigorously tracking the probability distribution of population antibody response across time. To illustrate our ideas, we apply our mathematical framework to longitudinal severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) data from individuals with multiple documented infection and vaccination events. Our work is an important step towards a comprehensive understanding of antibody kinetics that could lead to an effective way to analyze the protective power of natural immunity or vaccination, predict missed immune events at an individual level, and inform booster timing recommendations.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288654/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Parameter Inference and Uncertainty Quantification for a Computational Pulmonary Hemodynamics Model Using Gaussian Processes.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Subject-specific modeling is a powerful tool in cardiovascular research, providing insights beyond the reach of current clinical diagnostics. Limitations in available clinical data require the incorporation of uncertainty into models to improve guidance for personalized treatments. However, for clinical relevance, such modeling must be computationally efficient. In this study, we used a one-dimensional (1D) fluid dynamics model informed by experimental data from a dog model of chronic thromboembolic pulmonary hypertension (CTEPH), incorporating measurements from multiple subjects under both baseline and CTEPH conditions. Surgical intervention can alleviate CTEPH, yet patients with microvascular disease (e.g., remodeling and narrowing of small vessels) often exhibit persistent pulmonary hypertension, highlighting the importance of assessing microvascular disease severity. Thus, each lung was modeled separately to account for the heterogeneous nature of CTEPH, allowing us to explore lung-specific microvascular narrowing and resistance. We compared inferred parameters between baseline and CTEPH and examined their correlation with clinical markers of disease severity. To accelerate model calibration, we employed Gaussian process (GP) emulators, enabling the estimation of microvascular parameters and their uncertainties within a clinically feasible timeframe. Our results demonstrated that CTEPH leads to heterogeneous microvascular adaptation, reflected in distinct parameter shifts. Notably, the changes in model parameters strongly correlated with disease severity, especially in the lung previously reported to have more advanced disease. This framework provides a rapid, uncertainty-aware method for evaluating microvascular dysfunction in CTEPH and may support more targeted treatment strategies within a timeframe suitable for clinical application.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11875295/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143544213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Simple Approximate Bayesian Inference Neural Surrogate for Stochastic Petri Net Models.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Stochastic Petri Nets (SPNs) are an increasingly popular tool of choice for modeling discrete-event dynamics in areas such as epidemiology and systems biology, yet their parameter estimation remains challenging in general, and in particular when transition rates depend on external covariates and explicit likelihoods are unavailable. We introduce a neural-surrogate (a neural-network-based approximation of the posterior distribution) framework that predicts the coefficients of known covariate-dependent rate functions directly from noisy, partially observed token trajectories. Our model employs a lightweight 1D Convolutional Residual Network trained end-to-end on Gillespie-simulated SPN realizations, learning to invert system dynamics under realistic conditions of event dropout. During inference, Monte Carlo dropout provides calibrated uncertainty bounds together with point estimates. On synthetic SPNs with 20% missing events, our surrogate recovers rate-function coefficients with an RMSE of 0.108 and runs substantially faster than traditional Bayesian approaches. These results demonstrate that data-driven, likelihood-free surrogates can enable accurate, robust, and real-time parameter recovery in complex, partially observed discrete-event systems.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288651/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Summary statistics of learning link changing neural representations to behavior.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical physics offer a potential answer: for a given task, there are often a small set of summary statistics that are sufficient to predict performance as the network learns. Here, we review recent advances in how summary statistics can be used to build theoretical understanding of neural network learning. We then argue for how this perspective can inform the analysis of neural data, enabling better understanding of learning in biological and artificial neural networks.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12045385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144025934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fold-switching Proteins.","authors":"","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Globular proteins are expected to assume folds with fixed secondary structures, alpha-helices and beta-sheets. Fold-switching proteins challenge this expectation by remodeling their secondary and/or tertiary structures in response to cellular stimuli. Though these shapeshifting proteins were once thought to be haphazard evolutionary byproducts with little intrinsic biological relevance, recent work has shown that evolution has selected for their dual-folding behavior, which plays critical roles in biological processes across all kingdoms of life. The widening scope of fold switching draws attention to the ways it challenges conventional wisdom, raising fundamental unanswered questions about protein structure, biophysics, and evolution. Here we discuss the progress being made to answer these questions and suggest future directions for the field.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12288660/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144710261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How Easy Is It to Learn Motion Models from Widefield Fluorescence Single Particle Tracks?","authors":"Zachary H Hendrix, Lance W Q Xu, Steve Pressé","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Motion models are often deduced from fluorescence widefield tracking experiments by analyzing single-particle trajectories post-processed from the data. This analysis immediately raises the following question: To what degree is our ability to learn motion models impacted by analyzing post-processed trajectories versus the raw measurements? To answer this question, we mathematically formulate a data likelihood for diffraction-limited fluorescence widefield tracking experiments. In particular, we make explicit the likelihood's dependence on the motion model versus the emission model (or measurement model). The emission model describes how photons emitted by fluorescently labeled particles are distributed in space according to the optical point spread function, with intensities subsequently integrated over a pixel and convolved with camera noise. Logic dictates that if the data likelihood is primarily informed by the motion model, then it should be straightforward to learn the motion model from the trajectory post-processed from the data. On the other hand, if the majority of the likelihood is numerically dominated by the emission model, then the post-processed trajectory inferred from data is primarily informed by the emission model, and very little information on the motion model permeates into the post-processed trajectories analyzed downstream to learn motion models. We find that for typical diffraction-limited fluorescence experiments, the emission model often robustly contributes approximately 99% to the likelihood, leaving motion models to explain approximately 1% of the data. This result immediately casts doubt on our ability to reliably learn motion models from post-processed data, raising further questions on the significance of motion models learned thus far from post-processed single-particle trajectories from single-molecule widefield fluorescence tracking experiments.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12265587/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144651432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}