{"title":"Shaping the Optimal Timing for Treatment of Isolated Asymptomatic Severe Aortic Stenosis with Preserved Left Ventricular Ejection Fraction: The Role of Non-Invasive Diagnostics Focused on Strain Echocardiography and Future Perspectives.","authors":"Luca Dell'Angela, Gian Luigi Nicolosi","doi":"10.3390/jimaging11020048","DOIUrl":"10.3390/jimaging11020048","url":null,"abstract":"<p><p>The optimal timing for treatment of patients with isolated asymptomatic severe aortic stenosis and preserved left ventricular ejection fraction is still controversial and research is ongoing. Once a diagnosis has been performed and other cardiac comorbidities (e.g., concomitant significant valvulopathies or infiltrative cardiomyopathies) have reasonably been excluded, a hot topic is adequate myocardial characterization, which aims to prevent both myocardial dysfunction and subsequent adverse myocardial remodeling, and can potentially compromise the post-treatment outcomes. Another crucial subject of debate is the assessment of the real \"preserved\" left ventricular ejection fraction cut-off value in the presence of isolated asymptomatic severe aortic stenosis, in order to optimize the timing of aortic valve replacement as well. The aim of the present critical narrative review is highlighting the current role of non-invasive diagnostics in such a setting, focusing on strain echocardiography, and citing the main complementary cardiac imaging techniques, as well as suggesting potential implementation strategies in routine clinical practice in view of future developments.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856064/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Machine Learning and Generative Intelligence in Book Cover Development.","authors":"Nonna Kulishova, Daiva Sajek","doi":"10.3390/jimaging11020046","DOIUrl":"10.3390/jimaging11020046","url":null,"abstract":"<p><p>The rapid development of machine learning and artificial intelligence approaches is finding ever wider application in various areas of life. This paper considers the problem of improving editorial and publishing processes, namely self-publishing, when designing book covers using machine learning and generative artificial intelligence (GAI) methods. When choosing a book, readers often have certain expectations regarding the design of the publication, including the color of the cover. These expectations can be called color preferences, and they can depend on the genre of the book, its target audience, and even personal associations. Cultural context can also influence color choice, as certain colors can symbolize different emotions or moods in different cultures. Cluster analysis of book cover images of the same genre allows us to identify color preferences inherent in the genre, which is proposed to be used when designing new covers. The capabilities of generative services for creating and improving cover designs are also investigated. An improved flow chart for using GAI in creating book covers in the process of self-publishing is proposed, which includes new stages, namely exploring, conditioning, and evolving. At these stages, the designer creates prompts for GAI and examines how they and GAI's issuances correspond to the task. Conditioning allows for even more precise adjustment of prompts to features of each book, and the evolving stage also includes post-processing of results already received from GAI. Post-processing, in turn, can be performed both in generative services and by a designer. The experiment allowed us to use the machine-learning method to determine which colors are most often found in book cover layouts of one of the genres and to check whether these colors correspond to harmonious color palettes. In accordance with the proposed scheme of the design process using generative artificial intelligence, versions of book cover layouts of a given genre were obtained.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856767/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"We Need to Talk About Lung Ultrasound Score: Prediction of Intensive Care Unit Admission with Machine Learning.","authors":"Duarte Oliveira-Saraiva, João Leote, Filipe André Gonzalez, Nuno Cruz Garcia, Hugo Alexandre Ferreira","doi":"10.3390/jimaging11020045","DOIUrl":"10.3390/jimaging11020045","url":null,"abstract":"<p><p>The admission of COVID-19 patients to the Intensive Care Unit (ICU) is largely dependent on illness severity, yet no standard criteria exist for this decision. Here, lung ultrasound (LU) data, blood gas analysis (BGA), and clinical parameters from venous blood tests (VBTs) were used, along with machine-learning (ML) models to predict the need for ICU admission. Data from fifty-one COVID-19 patients, including ICU admission status, were collected. The information from LU was gathered through the identification of LU findings (LUFs): B-lines, irregular pleura, subpleural, and lobar consolidations. LU scores (LUSs) were computed by summing predefined weights assigned to each LUF, as reported in previous studies. In addition, individual LUFs were analyzed without calculating a total LUS. Support vector machine models were built, combining the available clinical data to predict ICU admissions. The application of ML models to individual LUFs outperformed standard LUS approaches reported in previous studies. Moreover, combining LU data with results from other medical exams improved the area under the receiver operating characteristic curve (AUC). The model with the best overall performance used variables from all three exams (BGA, LU, VBT), achieving an AUC of 95.5%. Overall, the results demonstrate the significant role of ML models in improving the prediction of ICU admission. Additionally, applying ML specifically to LUFs provided better results compared to traditional approaches that rely on traditional LUSs. The results of this paper are deployed on a web app.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856945/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robot-Based Procedure for 3D Reconstruction of Abdominal Organs Using the Iterative Closest Point and Pose Graph Algorithms.","authors":"Birthe Göbel, Jonas Huurdeman, Alexander Reiterer, Knut Möller","doi":"10.3390/jimaging11020044","DOIUrl":"10.3390/jimaging11020044","url":null,"abstract":"<p><p>Image-based 3D reconstruction enables robot-assisted interventions and image-guided navigation, which are emerging technologies in laparoscopy. When a robotic arm guides a laparoscope for image acquisition, hand-eye calibration is required to know the transformation between the camera and the robot flange. The calibration procedure is complex and must be conducted after each intervention (when the laparoscope is dismounted for cleaning). In the field, the surgeons and their assistants cannot be expected to do so. Thus, our approach is a procedure for a robot-based multi-view 3D reconstruction without hand-eye calibration, but with pose optimization algorithms instead. In this work, a robotic arm and a stereo laparoscope build the experimental setup. The procedure includes the stereo matching algorithm Semi Global Matching from OpenCV for depth measurement and the multiscale color iterative closest point algorithm from Open3D (v0.19), along with the multiway registration algorithm using a pose graph from Open3D (v0.19) for pose optimization. The procedure is evaluated quantitatively and qualitatively on ex vivo organs. The results are a low root mean squared error (1.1-3.37 mm) and dense point clouds. The proposed procedure leads to a plausible 3D model, and there is no need for complex hand-eye calibration, as this step can be compensated for by pose optimization algorithms.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856341/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Model Synergy for Fingerprint Spoof Detection Using VGG16 and ResNet50.","authors":"Mohamed Cheniti, Zahid Akhtar, Praveen Kumar Chandaliya","doi":"10.3390/jimaging11020042","DOIUrl":"10.3390/jimaging11020042","url":null,"abstract":"<p><p>In this paper, we address the challenge of fingerprint liveness detection by proposing a dual pre-trained model approach that combines VGG16 and ResNet50 architectures. While existing methods often rely on a single feature extraction model, they may struggle with generalization across diverse spoofing materials and sensor types. To overcome this limitation, our approach leverages the high-resolution feature extraction of VGG16 and the deep layer architecture of ResNet50 to capture a more comprehensive range of features for improved spoof detection. The proposed approach integrates these two models by concatenating their extracted features, which are then used to classify the captured fingerprint as live or spoofed. Evaluated on the Livedet2013 and Livedet2015 datasets, our method achieves state-of-the-art performance, with an accuracy of 99.72% on Livedet2013, surpassing existing methods like the Gram model (98.95%) and Pre-trained CNN (98.45%). On Livedet2015, our method achieves an average accuracy of 96.32%, outperforming several state-of-the-art models, including CNN (95.27%) and LivDet 2015 (95.39%). Error rate analysis reveals consistently low Bonafide Presentation Classification Error Rate (BPCER) scores with 0.28% on LivDet 2013 and 1.45% on LivDet 2015. Similarly, the Attack Presentation Classification Error Rate (APCER) remains low at 0.35% on LivDet 2013 and 3.68% on LivDet 2015. However, higher APCER values are observed for unknown spoof materials, particularly in the Crossmatch subset of Livedet2015, where the APCER rises to 8.12%. These findings highlight the robustness and adaptability of our simple dual-model framework while identifying areas for further optimization in handling unseen spoof materials.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856235/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imaging and Image Processing Techniques for High-Resolution Visualization of Connective Tissue with MRI: Application to Fascia, Aponeurosis, and Tendon.","authors":"Meeghage Randika Perera, Graeme M Bydder, Samantha J Holdsworth, Geoffrey G Handsfield","doi":"10.3390/jimaging11020043","DOIUrl":"10.3390/jimaging11020043","url":null,"abstract":"<p><p>Recent interest in musculoskeletal connective tissues like tendons, aponeurosis, and deep fascia has led to a greater focus on in vivo medical imaging, particularly MRI. Given the rapid T<sub>2</sub>* decay of collagenous tissues, advanced ultra-short echo time (UTE) MRI sequences have proven useful in generating high-signal images of these tissues. To further these advances, we discuss the integration of UTE with Diffusion Tensor Imaging (DTI) and explore image processing techniques to enhance the localization, labeling, and modeling of connective tissues. These techniques are especially valuable for extracting features from thin tissues that may be difficult to distinguish. We present data from lower leg scans of 30 healthy subjects using a non-Cartesian MRI sequence to acquire axial 2D images to segment skeletal muscle and connective tissue. DTI helped differentiate aponeurosis from deep fascia by analyzing muscle fiber orientations. The dual echo imaging methods yielded high-resolution images of deep fascia, where in-plane spatial resolutions were between 0.3 × 0.3 mm to 0.5 × 0.5 mm with a slice thickness of 3-5 mm. Techniques such as K-Means clustering, FFT edge detection, and region-specific scaling were most effective in enhancing images of deep fascia, aponeurosis, and tendon to enable high-fidelity modeling of these tissues.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856697/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pano-GAN: A Deep Generative Model for Panoramic Dental Radiographs.","authors":"Søren Pedersen, Sanyam Jain, Mikkel Chavez, Viktor Ladehoff, Bruna Neves de Freitas, Ruben Pauwels","doi":"10.3390/jimaging11020041","DOIUrl":"10.3390/jimaging11020041","url":null,"abstract":"<p><p>This paper presents the development of a generative adversarial network (GAN) for the generation of synthetic dental panoramic radiographs. While this is an exploratory study, the ultimate aim is to address the scarcity of data in dental research and education. A deep convolutional GAN (DCGAN) with the Wasserstein loss and a gradient penalty (WGAN-GP) was trained on a dataset of 2322 radiographs of varying quality. The focus of this study was on the dentoalveolar part of the radiographs; other structures were cropped out. Significant data cleaning and preprocessing were conducted to standardize the input formats while maintaining anatomical variability. Four candidate models were identified by varying the critic iterations, number of features and the use of denoising prior to training. To assess the quality of the generated images, a clinical expert evaluated a set of generated synthetic radiographs using a ranking system based on visibility and realism, with scores ranging from 1 (very poor) to 5 (excellent). It was found that most generated radiographs showed moderate depictions of dentoalveolar anatomical structures, although they were considerably impaired by artifacts. The mean evaluation scores showed a trade-off between the model trained on non-denoised data, which showed the highest subjective quality for finer structures, such as the <i>mandibular canal</i> and <i>trabecular bone</i>, and one of the models trained on denoised data, which offered better overall image quality, especially in terms of <i>clarity and sharpness</i> and <i>overall realism</i>. These outcomes serve as a foundation for further research into GAN architectures for dental imaging applications.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856485/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validation of Novel Image Processing Method for Objective Quantification of Intra-Articular Bleeding During Arthroscopic Procedures.","authors":"Olgar Birsel, Umut Zengin, Ilker Eren, Ali Ersen, Beren Semiz, Mehmet Demirhan","doi":"10.3390/jimaging11020040","DOIUrl":"10.3390/jimaging11020040","url":null,"abstract":"<p><p>Visual clarity is crucial for shoulder arthroscopy, directly influencing surgical precision and outcomes. Despite advances in imaging technology, intraoperative bleeding remains a significant obstacle to optimal visibility, with subjective evaluation methods lacking consistency and standardization. This study proposes a novel image processing system to objectively quantify bleeding and assess surgical effectiveness. The system uses color recognition algorithms to calculate a bleeding score based on pixel ratios by incorporating multiple color spaces to enhance accuracy and minimize errors. Moreover, 200 three-second video clips from prior arthroscopic rotator cuff repairs were evaluated by three senior surgeons trained on the system's color metrics and scoring process. Assessments were repeated two weeks later to test intraobserver reliability. The system's scores were compared to the average score given by the surgeons. The average surgeon-assigned score was 5.10 (range: 1-9.66), while the system scored videos from 1 to 9.46, with an average of 5.08. The mean absolute error between system and surgeon scores was 0.56, with a standard deviation of 0.50, achieving agreement ranging from [0.96,0.98] with 96.7% confidence (ICC = 0.967). This system provides a standardized method to evaluate intraoperative bleeding, enabling the precise detection of blood variations and supporting advanced technologies like autonomous arthropumps to enhance arthroscopy and surgical outcomes.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856628/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dimensional Accuracy Assessment of Medical Anatomical Models Produced by Hospital-Based Fused Deposition Modeling 3D Printer.","authors":"Kevin Wendo, Catherine Behets, Olivier Barbier, Benoit Herman, Thomas Schubert, Benoit Raucent, Raphael Olszewski","doi":"10.3390/jimaging11020039","DOIUrl":"10.3390/jimaging11020039","url":null,"abstract":"<p><p>As 3D printing technology expands rapidly in medical disciplines, the accuracy evaluation of 3D-printed medical models is required. However, no established guidelines to assess the dimensional error of anatomical models exist. This study aims to evaluate the dimensional accuracy of medical models 3D-printed using a hospital-based Fused Deposition Modeling (FDM) 3D printer. Two dissected cadaveric right hands were marked with Titanium Kirshner wires to identify landmarks on the heads and bases of all metacarpals and proximal and middle phalanges. Both hands were scanned using a Cone Beam Computed Tomography scanner. Image post-processing and segmentation were performed on 3D Slicer software. Hand models were 3D-printed using a professional hospital-based FDM 3D printer. Manual measurements of all landmarks marked on both pairs of cadaveric and 3D-printed hands were taken by two independent observers using a digital caliper. The Mean Absolute Difference (MAD) and Mean Dimensional Error (MDE) were calculated. Our results showed an acceptable level of dimensional accuracy. The overall study's MAD was 0.32 mm (±0.34), and its MDE was 1.03% (±0.83). These values fall within the recommended range of errors. A high level of dimensional accuracy of the 3D-printed anatomical models was achieved, suggesting their reliability and suitability for medical applications.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic-Guided Transformer Network for Crop Classification in Hyperspectral Images.","authors":"Weiqiang Pi, Tao Zhang, Rongyang Wang, Guowei Ma, Yong Wang, Jianmin Du","doi":"10.3390/jimaging11020037","DOIUrl":"10.3390/jimaging11020037","url":null,"abstract":"<p><p>The hyperspectral remote sensing images of agricultural crops contain rich spectral information, which can provide important details about crop growth status, diseases, and pests. However, existing crop classification methods face several key limitations when processing hyperspectral remote sensing images, primarily in the following aspects. First, the complex background in the images. Various elements in the background may have similar spectral characteristics to the crops, and this spectral similarity makes the classification model susceptible to background interference, thus reducing classification accuracy. Second, the differences in crop scales increase the difficulty of feature extraction. In different image regions, the scale of crops can vary significantly, and traditional classification methods often struggle to effectively capture this information. Additionally, due to the limitations of spectral information, especially under multi-scale variation backgrounds, the extraction of crop information becomes even more challenging, leading to instability in the classification results. To address these issues, a semantic-guided transformer network (SGTN) is proposed, which aims to effectively overcome the limitations of these deep learning methods and improve crop classification accuracy and robustness. First, a multi-scale spatial-spectral information extraction (MSIE) module is designed that effectively handle the variations of crops at different scales in the image, thereby extracting richer and more accurate features, and reducing the impact of scale changes. Second, a semantic-guided attention (SGA) module is proposed, which enhances the model's sensitivity to crop semantic information, further reducing background interference and improving the accuracy of crop area recognition. By combining the MSIE and SGA modules, the SGTN can focus on the semantic features of crops at multiple scales, thus generating more accurate classification results. Finally, a two-stage feature extraction structure is employed to further optimize the extraction of crop semantic features and enhance classification accuracy. The results show that on the Indian Pines, Pavia University, and Salinas benchmark datasets, the overall accuracies of the proposed model are 98.24%, 98.34%, and 97.89%, respectively. Compared with other methods, the model achieves better classification accuracy and generalization performance. 
In the future, the SGTN is expected to be applied to more agricultural remote sensing tasks, such as crop disease detection and yield prediction, providing more reliable technical support for precision agriculture and agricultural monitoring.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856770/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}