{"title":"MDF2Former: Multi-Scale Dual-Domain Feature Fusion Transformer for Hyperspectral Image Classification of Bacteria in Murine Wounds.","authors":"Decheng Wu, Wendan Liu, Rui Li, Xudong Fu, Lin Tao, Yinli Tian, Anqiang Zhang, Zhen Wang, Hao Tang","doi":"10.3390/jimaging12020090","DOIUrl":"10.3390/jimaging12020090","url":null,"abstract":"<p><p>Bacterial wound infection poses a major challenge in trauma care and can lead to severe complications such as sepsis and organ failure. Therefore, rapid and accurate identification of the pathogen, along with targeted intervention, is of vital importance for improving treatment outcomes and reducing risks. However, current detection methods are still constrained by procedural complexity and long processing times. In this study, a hyperspectral imaging (HSI) acquisition system for bacterial analysis and a multi-scale dual-domain feature fusion transformer (MDF2Former) were developed for classifying wound bacteria. MDF2Former integrates three modules: a multi-scale feature enhancement and fusion module that generates tokens with multi-scale discriminative representations, a spatial-spectral dual-branch attention module that strengthens joint feature modeling, and a frequency and spatial-spectral domain encoding module that captures global and local interactions among tokens through a hierarchical stacking structure, thereby enabling more efficient feature learning. Extensive experiments on our self-constructed HSI dataset of typical wound bacteria demonstrate that MDF2Former achieved outstanding performance across five metrics: Accuracy (91.94%), Precision (92.26%), Recall (91.94%), F1-score (92.01%), and Kappa coefficient (90.73%), surpassing all comparative models. These results have verified the effectiveness of combining HSI with deep learning for bacterial identification, and have highlighted its potential in assisting in the identification of bacterial species and making personalized treatment decisions for wound infections.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12942589/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Compactness Quantitative Metrics for Wrist Bone on Conventional Radiography in Rheumatoid Arthritis: A Clinical Evaluation Study.","authors":"Jiajing Zhou, Junmu Peng, Haolin Wang, Hiroshi Kataoka, Masaya Mukai, Tunlada Wiriyanukhroh, Tamotsu Kamishima","doi":"10.3390/jimaging12020087","DOIUrl":"10.3390/jimaging12020087","url":null,"abstract":"<p><p>Rheumatoid arthritis (RA) frequently affects the joints of the hands, with joint space narrowing (JSN) representing an important early marker of structural damage. The semi-quantitative Sharp/van der Heijde (SvdH) scoring system is widely used in clinical practice but is inherently subjective and susceptible to observer variability. Moreover, the complex anatomy of the wrist and substantial overlap of carpal bones pose challenges for automated quantitative assessment of wrist JSN on routine radiographs. This study aimed to introduce a novel quantitative assessment perspective and to clinically validate an automated, compactness-related quantification framework for evaluating wrist JSN in RA. This study initially enrolled 51 patients with RA. After excluding one case with severe carpal fusion that precluded anatomical differentiation, 50 patients (44 females and 6 males) were included in the final analysis. The cohort had a mean age of 61 years (range: 21-82), a median symptom duration of 9 years (IQR: 1-32), and a median follow-up interval for bilateral hand radiographs of 1.06 years (IQR: 0.82-1.30). To quantify global wrist JSN, 10 compactness-related metrics were computed based on the spatial distribution of bone centroids extracted from carpal segmentation masks. These metrics were validated against the wrist JSN subscore of the SvdH score (SvdH-JSN_wrist) and the total Sharp score (TSS) as gold standards. Several distance-based metrics among the compactness-related metrics showed significant negative correlations with the wrist joint space narrowing subscore of the Sharp/van der Heijde score (SvdH-JSN_wrist). Specifically, mean-pairwise-distance (MPD), root-mean-square-radius (RMSR), and median-radius (R50) showed moderate to strong correlations (r = -0.52 to -0.63, all p≤0.0001) that were consistent at BL and FU. Correlations with TSS were weaker overall, with only R50 and its normalized form showing stable negative correlations (r = -0.40 to -0.43, <i>p</i> < 0.01). Longitudinal analyses showed limited correlations between metric changes and clinical score changes. The proposed automated compactness quantification framework enables objective and reliable assessment of wrist JSN on standard radiographs and complements conventional scoring systems by supporting automated and standardized evaluation of RA-related wrist structural changes.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of Biological Images and Quantitative Monitoring Using Deep Learning and Computer Vision.","authors":"Aaron Gálvez-Salido, Francisca Robles, Rodrigo J Gonçalves, Roberto de la Herrán, Carmelo Ruiz Rejón, Rafael Navajas-Pérez","doi":"10.3390/jimaging12020088","DOIUrl":"10.3390/jimaging12020088","url":null,"abstract":"<p><p>Automated biological counting is essential for scaling wildlife monitoring and biodiversity assessments, as manual processing currently limits analytical effort and scalability. This review evaluates the integration of deep learning and computer vision across diverse acquisition platforms, including camera traps, unmanned aerial vehicles (UAVs), and remote sensing. Methodological paradigms ranging from Convolutional Neural Networks (CNNs) and one-stage detectors like You Only Look Once (YOLO) to recent transformer-based architectures and hybrid models are examined. The literature shows that these methods consistently achieve high accuracy-often exceeding 95%-across various taxa, including insect pests, aquatic organisms, terrestrial vegetation, and forest ecosystems. However, persistent challenges such as object occlusion, cryptic species differentiation, and the scarcity of high-quality, labeled datasets continue to hinder fully automated workflows. We conclude that while automated counting has fundamentally increased data throughput, future advancements must focus on enhancing model generalization through self-supervised learning and improved data augmentation techniques. These developments are critical for transitioning from experimental models to robust, operational tools for global ecological monitoring and conservation efforts.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941886/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Print Quality Assessment of QR Code Elements Achieved by the Digital Thermal Transfer Process.","authors":"Igor Majnarić, Marija Jelkić, Marko Morić, Krunoslav Hajdek","doi":"10.3390/jimaging12020086","DOIUrl":"10.3390/jimaging12020086","url":null,"abstract":"<p><p>The new European Regulation (EU) 2025/40 includes provisions on modern packaging and packaging waste. It defines the use of image QR codes on packaging (items 71 and 161) and in personal documents, making line barcodes a thing of the past. The definition of a QR code is precisely specified in ISO/IEC 18004:2024. However, their implementation in printing systems is not specified and remains an important factor for their future application. Digital foil printing is a completely new hybrid printing process for applying information to highly precise applications such as QR codes, security printing, and packaging printing. The technique is characterized by a combination of two printing techniques: drop-on-demand UV inkjet followed by thermal transfer of black foil. Using a matte-coated printing substrate (Garda Matt, 300 g/m<sup>2</sup>), Konica Minolta KM1024 LHE Inkjet head settings, and a transfer temperature of 100 °C, the size of the square printing elements in QR codes plays a decisive role in the quality of the decoded information. The aim of this work is to investigate the possibility of realizing the basic elements of the QR code image (the profile of square elements and the success of realizing a precisely defined surface) with a variation in the thickness of the UV varnish coating (7, 14 and 21 µm), realized using the MGI JETvarnish 3DS digital machine. The most commonly used rectangular elements with a surface area of 0.01 cm<sup>2</sup> were tested: 0.06 cm<sup>2</sup>, 0.25 cm<sup>2</sup>, 1 cm<sup>2</sup>, 4 cm<sup>2</sup>, and 16 cm<sup>2</sup>. The results showed that the imprint quality is uneven for the smallest elements (square elements with base lengths of 0.1 cm and 0.25 cm). The effect is especially visible with a minimum UV varnish application of 7 μm (1 drop). By increasing the amount of UV varnish and the application thickness to 14 μm (2 drops) and 21 μm (3 drops), respectively, a significantly more stable, even reproduction of the achromatic image is achieved. The highest technical precision was achieved with a UV varnish thickness of 21 μm.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12942380/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SREF: Semantics-Refined Feature Extraction for Long-Term Visual Localization.","authors":"Danfeng Wu, Kaifeng Zhu, Heng Shi, Fenfen Zhou, Minchi Kuang","doi":"10.3390/jimaging12020085","DOIUrl":"10.3390/jimaging12020085","url":null,"abstract":"<p><p>Accurate and robust visual localization under changing environments remains a fundamental challenge in autonomous driving and mobile robotics. Traditional handcrafted features often degrade under long-term illumination and viewpoint variations, while recent CNN-based methods, although more robust, typically rely on coarse semantic cues and remain vulnerable to dynamic objects. In this paper, we propose a fine-grained semantics-guided feature extraction framework that adaptively selects stable keypoints while suppressing dynamic disturbances. A fine-grained semantic refinement module subdivides coarse semantic categories into stability-homogeneous sub-classes, and a dual-attention mechanism enhances local repeatability and semantic consistency. By integrating physical priors with self-supervised clustering, the proposed framework learns discriminative and reliable feature representations. Extensive experiments on the Aachen and RobotCar-Seasons benchmarks demonstrate that the proposed approach achieves state-of-the-art accuracy and robustness while maintaining real-time efficiency, effectively bridging coarse semantic guidance with fine-grained stability estimation. Quantitatively, our method achieves strong localization performance on Aachen (up to 88.1% at night under the (0.2°,0.25 m) threshold) and on RobotCar-Seasons (up to 57.2%/28.4% under the same threshold for day/night), demonstrating improved robustness to seasonal and illumination changes.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941875/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"LEGS: Visual Localization Enhanced by 3D Gaussian Splatting.","authors":"Daewoon Kim, I-Gil Kim","doi":"10.3390/jimaging12020084","DOIUrl":"10.3390/jimaging12020084","url":null,"abstract":"<p><p>Accurate six-degree-of-freedom (6-DoF) visual localization is a fundamental component for modern mapping and navigation. While recent data-centric approaches have leveraged Novel View Synthesis (NVS) to augment training datasets, these methods typically rely on uniform grid-based sampling of virtual cameras. Such naive placement often yields redundant or weakly informative views, failing to effectively bridge the gap between sparse, unordered captures and dense scene geometry. To address these challenges, we present LEGS (Visual <b>L</b>ocalization <b>E</b>nhanced by 3D <b>G</b>aussian <b>S</b>platting), a trajectory-agnostic synthetic-view augmentation framework. LEGS constructs a joint set of 6-DoF camera pose proposals by integrating a coarse 3D lattice with the Structure-from-Motion (SfM) camera graph, followed by a visibility-aware, coverage-driven selection strategy. By utilizing 3D Gaussian Splatting (3DGS), our framework enables high-throughput, scene-specific synthesis within practical computational budgets. Experiments on standard benchmarks and an in-house dataset demonstrate that LEGS consistently improves pose accuracy and robustness, particularly in scenarios characterized by sparse sampling and co-located viewpoints.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research Progress on the Application of Radiomics and Deep Learning in Liver Fibrosis.","authors":"Yi Dang, Wenjing Li, Zhao Liu, Junqiang Lei","doi":"10.3390/jimaging12020082","DOIUrl":"10.3390/jimaging12020082","url":null,"abstract":"<p><p>Liver fibrosis (LF) represents a crucial intermediate stage in the pathological progression from chronic liver disease to cirrhosis and hepatocellular carcinoma. Early and accurate diagnosis is of vital importance for the intervention treatment of diseases and the improvement of prognosis. Traditional liver biopsy, long regarded as the diagnostic gold standard, remains associated with several notable limitations such as invasiveness, sampling errors and inter-observer variability. Lately, as artificial intelligence (AI) technology progresses swiftly, radiomics and deep learning (DL) have risen to prominence as non-invasive diagnostic instruments, showing significant potential in the LF diagnostic evaluation. This review summarizes the latest advancements in radiomics and DL for LF diagnosis, staging, prognosis prediction and etiological differentiation. It also analyzes the application value of multimodal imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT) and ultrasound in this field. Despite ongoing challenges in model generalization and standardization, improved model interpretability, technological integration and multimodal fusion, the continuous advancement of radiomics and DL technologies holds promise for AI-driven imaging analysis strategies. These approaches aim to integrate multiple clinical monitoring methods, overcome obstacles in the early LF diagnosis and treatment and provide new perspectives for precision medicine of this disease.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941878/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D Road Defect Mapping via Differentiable Neural Rendering and Multi-Frame Semantic Fusion in Bird's-Eye-View Space.","authors":"Hongjia Xing, Feng Yang","doi":"10.3390/jimaging12020083","DOIUrl":"10.3390/jimaging12020083","url":null,"abstract":"<p><p>Road defect detection is essential for traffic safety and infrastructure maintenance. Excising automated methods based on 2D image analysis lack spatial context and cannot provide accurate 3D localization required for maintenance planning. We propose a novel framework for road defect mapping from monocular video sequences by integrating differentiable Bird's-Eye-View (BEV) mesh representation, semantic filtering, and multi-frame temporal fusion. Our differentiable mesh-based BEV representation enables efficient scene reconstruction from sparse observations through MLP-based optimization. The semantic filtering strategy leverages road surface segmentation to eliminate off-road false positives, reducing detection errors by 33.7%. Multi-frame fusion with ray-casting projection and exponential moving average update accumulates defect observations across frames while maintaining 3D geometric consistency. Experimental results demonstrate that our framework produces geometrically consistent BEV defect maps with superior accuracy compared to single-frame 2D methods, effectively handling occlusions, motion blur, and varying illumination conditions.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941438/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Childhood Pneumonia Diagnosis Based on Multi-Model Feature Fusion Using Chi-Square Feature Selection.","authors":"Amira Ouerhani, Tareq Hadidi, Hanene Sahli, Halima Mahjoubi","doi":"10.3390/jimaging12020081","DOIUrl":"10.3390/jimaging12020081","url":null,"abstract":"<p><p>Pneumonia is one of the main reasons for child mortality, with chest radiography (CXR) being essential for its diagnosis. However, the low radiation exposure in pediatric analysis complicates the accurate detection of pneumonia, making traditional examination ineffective. Progress in medical imaging with convolutional neural networks (CNN) has considerably improved performance, gaining widespread recognition for its effectiveness. This paper proposes an accurate pneumonia detection method based on different deep CNN architectures that combine optimal feature fusion. Enhanced VGG-19, ResNet-50, and MobileNet-V2 are trained on the most widely used pneumonia dataset, applying appropriate transfer learning and fine-tuning strategies. To create an effective feature input, the Chi-Square technique removes inappropriate features from every enhanced CNN. The resulting subsets are subsequently fused horizontally, to generate more diverse and robust feature representation for binary classification. By combining 1000 best features from VGG-19 and MobileNet-V2 models, the suggested approach records the best accuracy (97.59%), Recall (98.33%), and F1-score (98.19%) on the test set based on the supervised support vector machines (SVM) classifier. The achieved results demonstrated that our approach provides a significant enhancement in performance compared to previous studies using various ensemble fusion techniques while ensuring computational efficiency. We project this fused-feature system to significantly aid timely detection of childhood pneumonia, especially within constrained healthcare systems.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12942337/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Confidence-Guided Adaptive Diffusion Network for Medical Image Classification.","authors":"Yang Yan, Zhuo Xie, Wenbo Huang","doi":"10.3390/jimaging12020080","DOIUrl":"10.3390/jimaging12020080","url":null,"abstract":"<p><p>Medical image classification is a fundamental task in medical image analysis and underpins a wide range of clinical applications, including dermatological screening, retinal disease assessment, and malignant tissue detection. In recent years, diffusion models have demonstrated promising potential for medical image classification owing to their strong representation learning capability. However, existing diffusion-based classification methods often rely on oversimplified prior modeling strategies, which fail to adequately capture the intrinsic multi-scale semantic information and contextual dependencies inherent in medical images. As a result, the discriminative power and stability of feature representations are constrained in complex scenarios. In addition, fixed noise injection strategies neglect variations in sample-level prediction confidence, leading to uniform perturbations being imposed on samples with different levels of semantic reliability during the diffusion process, which in turn limits the model's discriminative performance and generalization ability. To address these challenges, this paper proposes a Confidence-Guided Adaptive Diffusion Network (CGAD-Net) for medical image classification. Specifically, a hybrid prior modeling framework is introduced, consisting of a Hierarchical Pyramid Context Modeling (HPCM) module and an Intra-Scale Dilated Convolution Refinement (IDCR) module. These two components jointly enable the diffusion-based feature modeling process to effectively capture fine-grained structural details and global contextual semantic information. Furthermore, a Confidence-Guided Adaptive Noise Injection (CG-ANI) strategy is designed to dynamically regulate noise intensity during the diffusion process according to sample-level prediction confidence. Without altering the underlying discriminative objective, CG-ANI stabilizes model training and enhances robust representation learning for semantically ambiguous samples.Experimental results on multiple public medical image classification benchmarks, including HAM10000, APTOS2019, and Chaoyang, demonstrate that CGAD-Net achieves competitive performance in terms of classification accuracy, robustness, and training stability. These results validate the effectiveness and application potential of confidence-guided diffusion modeling for two-dimensional medical image classification tasks, and provide valuable insights for further research on diffusion models in the field of medical image analysis.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"12 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2026-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12941618/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147291313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}