Arbitrary Optics for Gaussian Splatting Using Space Warping
Jakob Nazarenus, Simin Kou, Fang-Lue Zhang, Reinhard Koch
Journal of Imaging, 10(12). Published 2024-12-22. DOI: 10.3390/jimaging10120330. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11678575/pdf/

Abstract: Due to recent advances in 3D reconstruction from RGB images, it is now possible to create photorealistic representations of real-world scenes that only require minutes to be reconstructed and can be rendered in real time. In particular, 3D Gaussian splatting shows promising results, outperforming preceding reconstruction methods while simultaneously reducing the overall computational requirements. The main success of 3D Gaussian splatting relies on the efficient use of a differentiable rasterizer to render the Gaussian scene representation. One major drawback of this method is its underlying pinhole camera model. In this paper, we propose an extension of the existing method that removes this constraint and enables scene reconstructions using arbitrary camera optics such as highly distorting fisheye lenses. Our method achieves this by applying a differentiable warping function to the Gaussian scene representation. Additionally, we reduce overfitting in outdoor scenes by utilizing a learnable skybox, reducing the presence of floating artifacts within the reconstructed scene. Based on synthetic and real-world image datasets, we show that our method is capable of creating an accurate scene reconstruction from highly distorted images and rendering photorealistic images from such reconstructions.
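The pinhole constraint this paper removes can be illustrated with a toy comparison. The sketch below (not the paper's differentiable warping function; the equidistant fisheye model and parameters are illustrative assumptions) contrasts pinhole projection with an equidistant fisheye projection, which agree near the optical axis and diverge toward the periphery:

```python
import numpy as np

def pinhole_project(points, f):
    """Standard pinhole projection: x = f*X/Z, y = f*Y/Z."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([f * X / Z, f * Y / Z], axis=1)

def fisheye_equidistant_project(points, f):
    """Equidistant fisheye model: image radius is f*theta, where
    theta is the ray's angle from the optical axis."""
    X, Y, Z = points[:, 0], points[:, 1], points[:, 2]
    r_xy = np.sqrt(X**2 + Y**2)
    theta = np.arctan2(r_xy, Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)        # azimuth in the image plane
    r_img = f * theta
    return np.stack([r_img * np.cos(phi), r_img * np.sin(phi)], axis=1)
```

For rays close to the axis the two models nearly coincide; a differentiable warp between such models is what lets gradients flow through arbitrary optics.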
Explainable AI-Based Skin Cancer Detection Using CNN, Particle Swarm Optimization and Machine Learning
Syed Adil Hussain Shah, Syed Taimoor Hussain Shah, Roa'a Khaled, Andrea Buccoliero, Syed Baqir Hussain Shah, Angelo Di Terlizzi, Giacomo Di Benedetto, Marco Agostino Deriu
Journal of Imaging, 10(12). Published 2024-12-22. DOI: 10.3390/jimaging10120332. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727770/pdf/

Abstract: Skin cancer is among the most prevalent cancers globally, emphasizing the need for early detection and accurate diagnosis to improve outcomes. Traditional diagnostic methods, based on visual examination, are subjective, time-intensive, and require specialized expertise. Current artificial intelligence (AI) approaches for skin cancer detection face challenges such as computational inefficiency, lack of interpretability, and reliance on standalone CNN architectures. To address these limitations, this study proposes a comprehensive pipeline combining transfer learning, feature selection, and machine-learning algorithms to improve detection accuracy. Multiple pretrained CNN models were evaluated, with Xception emerging as the optimal choice for its balance of computational efficiency and performance. An ablation study further validated the effectiveness of freezing task-specific layers within the Xception architecture. Feature dimensionality was optimized using Particle Swarm Optimization, reducing dimensions from 1024 to 508, significantly enhancing computational efficiency. Machine-learning classifiers, including Subspace KNN and Medium Gaussian SVM, further improved classification accuracy. Evaluated on the ISIC 2018 and HAM10000 datasets, the proposed pipeline achieved impressive accuracies of 98.5% and 86.1%, respectively. Moreover, Explainable-AI (XAI) techniques, such as Grad-CAM, LIME, and Occlusion Sensitivity, enhanced interpretability. This approach provides a robust, efficient, and interpretable solution for automated skin cancer diagnosis in clinical applications.
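Binary PSO for feature selection, as used above to shrink the CNN feature vector, can be sketched in miniature. This is not the paper's implementation: the fitness function below (class-mean separation with a small size penalty) and all hyperparameters are illustrative assumptions; a real pipeline would score a classifier under cross-validation instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Toy fitness: separation of the two class means over the selected
    features, lightly penalized by the number of features kept."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask]
    mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    return np.abs(mu0 - mu1).mean() - 0.01 * mask.sum()

def binary_pso(X, y, n_particles=10, n_iters=30):
    """Binary PSO: real-valued velocities are squashed through a sigmoid
    and sampled into 0/1 feature masks (a common 'transfer function' trick)."""
    n_feat = X.shape[1]
    pos = (rng.random((n_particles, n_feat)) > 0.5).astype(float)
    vel = rng.normal(0, 0.1, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p.astype(bool), X, y) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        # inertia + cognitive pull toward pbest + social pull toward gbest
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = (rng.random(vel.shape) < 1 / (1 + np.exp(-vel))).astype(float)
        fits = np.array([fitness(p.astype(bool), X, y) for p in pos])
        better = fits > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fits[better]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)
```

The returned boolean mask plays the role of the 1024-to-508 reduction described in the abstract: downstream classifiers only see the selected columns.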
The Use of Hybrid CNN-RNN Deep Learning Models to Discriminate Tumor Tissue in Dynamic Breast Thermography
Andrés Munguía-Siu, Irene Vergara, Juan Horacio Espinoza-Rodríguez
Journal of Imaging, 10(12). Published 2024-12-21. DOI: 10.3390/jimaging10120329. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11728322/pdf/

Abstract: Breast cancer is one of the leading causes of death for women worldwide, and early detection can help reduce the death rate. Infrared thermography has gained popularity as a non-invasive and rapid method for detecting this pathology and can be further enhanced by applying neural networks to extract spatial and even temporal data derived from breast thermographic images if they are acquired sequentially. In this study, we evaluated hybrid convolutional-recurrent neural network (CNN-RNN) models based on five state-of-the-art pre-trained CNN architectures coupled with three RNNs to discern tumor abnormalities in dynamic breast thermographic images. The hybrid architecture that achieved the best performance for detecting breast cancer was VGG16-LSTM, which showed accuracy (ACC), sensitivity (SENS), and specificity (SPEC) of 95.72%, 92.76%, and 98.68%, respectively, with a CPU runtime of 3.9 s. However, the hybrid architecture that showed the fastest CPU runtime was AlexNet-RNN with 0.61 s, although with lower performance (ACC: 80.59%, SENS: 68.52%, SPEC: 92.76%), but still superior to AlexNet (ACC: 69.41%, SENS: 52.63%, SPEC: 86.18%) with 0.44 s. Our findings show that hybrid CNN-RNN models outperform stand-alone CNN models, indicating that temporal data recovery from dynamic breast thermographs is possible without significantly compromising classifier runtime.
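The CNN-RNN pairing evaluated here follows a generic pattern: a convolutional extractor summarizes each frame, and a recurrent network integrates the per-frame features over the acquisition sequence. A deliberately tiny NumPy sketch of that pattern (not VGG16-LSTM; the kernel counts, state sizes, and read-out are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_features(frame, kernels):
    """Toy per-frame 'CNN': valid 2D cross-correlations with each kernel,
    global-average-pooled to one scalar per kernel, then ReLU."""
    kh, kw = kernels.shape[1:]
    H, W = frame.shape
    feats = []
    for k in kernels:
        acc = 0.0
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                acc += np.sum(frame[i:i + kh, j:j + kw] * k)
        feats.append(acc / ((H - kh + 1) * (W - kw + 1)))
    return np.maximum(np.array(feats), 0.0)

def rnn_classify(sequence, Wxh, Whh, Who):
    """Vanilla RNN over the per-frame feature vectors; sigmoid read-out
    from the final hidden state gives a tumor/no-tumor probability."""
    h = np.zeros(Whh.shape[0])
    for x in sequence:
        h = np.tanh(Wxh @ x + Whh @ h)
    logit = Who @ h
    return 1 / (1 + np.exp(-logit))
```

In the paper's setting the extractor is a pre-trained backbone and the recurrence is an LSTM, but the data flow (frames → feature sequence → recurrent classifier) is the same.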
X-Ray Image-Based Real-Time COVID-19 Diagnosis Using Deep Neural Networks (CXR-DNNs)
Ali Yousuf Khan, Miguel-Angel Luque-Nieto, Muhammad Imran Saleem, Enrique Nava-Baro
Journal of Imaging, 10(12). Published 2024-12-19. DOI: 10.3390/jimaging10120328. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11728291/pdf/

Abstract: On 11 February 2020, the prevalent outbreak of COVID-19, a coronavirus illness, was declared a global pandemic. Since then, nearly seven million people have died and over 765 million confirmed cases of COVID-19 have been reported. The goal of this study is to develop a diagnostic tool for detecting COVID-19 infections more efficiently. Currently, the most widely used method is Reverse Transcription Polymerase Chain Reaction (RT-PCR), a clinical technique for infection identification. However, RT-PCR is expensive, has limited sensitivity, and requires specialized medical expertise. One of the major challenges in the rapid diagnosis of COVID-19 is the need for reliable imaging, particularly X-ray imaging. This work takes advantage of artificial intelligence (AI) techniques to enhance diagnostic accuracy by automating the detection of COVID-19 infections from chest X-ray (CXR) images. We obtained and analyzed CXR images from the Kaggle public database (4035 images in total), including cases of COVID-19, viral pneumonia, pulmonary opacity, and healthy controls. By integrating advanced techniques with transfer learning from pre-trained convolutional neural networks (CNNs), specifically InceptionV3, ResNet50, and Xception, we achieved an accuracy of 95%, significantly higher than the 85.5% achieved with ResNet50 alone. Additionally, our proposed method, CXR-DNNs, can accurately distinguish between three different types of chest X-ray images for the first time. This computer-assisted diagnostic tool has the potential to significantly enhance the speed and accuracy of COVID-19 diagnoses.
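The abstract does not specify how the three backbones' outputs are fused, so as one common possibility, a soft-voting sketch: average the per-class probabilities from each network and take the argmax. The function name and shapes are illustrative assumptions, not the paper's API.

```python
import numpy as np

def soft_vote(prob_maps):
    """Average class probabilities from several models, then pick the
    highest-probability class per sample (soft voting)."""
    stacked = np.stack(prob_maps)       # (n_models, n_samples, n_classes)
    mean_probs = stacked.mean(axis=0)   # (n_samples, n_classes)
    return mean_probs.argmax(axis=1)    # (n_samples,)
```

With three models, two confident votes for one class typically outweigh a single dissenting vote, which is how an ensemble can beat its best individual member.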
Optimization of Cocoa Pods Maturity Classification Using Stacking and Voting with Ensemble Learning Methods in RGB and LAB Spaces
Kacoutchy Jean Ayikpa, Abou Bakary Ballo, Diarra Mamadou, Pierre Gouton
Journal of Imaging, 10(12). Published 2024-12-18. DOI: 10.3390/jimaging10120327. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11727684/pdf/

Abstract: Determining the maturity of cocoa pods early is not just about guaranteeing harvest quality and optimizing yield. It is also about efficient resource management. Rapid identification of the stage of maturity helps avoid losses linked to a premature or late harvest, improving productivity. Early determination of cocoa pod maturity ensures both the quality and quantity of the harvest, as immature or overripe pods cannot produce premium cocoa beans. Our innovative research harnesses artificial intelligence and computer vision technologies to revolutionize the cocoa industry, offering precise and advanced tools for accurately assessing cocoa pod maturity. Providing an objective and rapid assessment enables farmers to make informed decisions about the optimal time to harvest, helping to maximize the yield of their plantations. Furthermore, by automating this process, these technologies reduce the margins for human error and improve the management of agricultural resources. With this in mind, our study proposes to exploit a computer vision method based on the GLCM (gray level co-occurrence matrix) algorithm to extract the characteristics of images in the RGB (red, green, blue) and LAB (luminance, axis between red and green, axis between yellow and blue) color spaces. This approach allows for in-depth image analysis, which is essential for capturing the nuances of cocoa pod maturity. Next, we apply classification algorithms to identify the best performers. These algorithms are then combined via stacking and voting techniques, allowing our model to be optimized by taking advantage of the strengths of each method, thus guaranteeing more robust and precise results. The results demonstrated that the combination of algorithms produced superior performance, especially in the LAB color space, where voting scored 98.49% and stacking 98.71%. In comparison, in the RGB color space, voting scored 96.59% and stacking 97.06%. These results surpass those generally reported in the literature, showing the increased effectiveness of combined approaches in improving the accuracy of classification models. This highlights the importance of exploring ensemble techniques to maximize performance in complex contexts such as cocoa pod maturity classification.
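The GLCM features at the heart of this pipeline are easy to compute by hand. A minimal NumPy sketch (the offset, level count, and choice of descriptors are illustrative assumptions; production code would typically use `skimage.feature.graycomatrix`):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized into a joint probability table. `image` holds integer
    gray levels in [0, levels)."""
    M = np.zeros((levels, levels))
    H, W = image.shape
    for i in range(H - dy):
        for j in range(W - dx):
            M[image[i, j], image[i + dy, j + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    """Classic Haralick-style texture descriptors from a normalized GLCM:
    contrast, homogeneity, and energy."""
    idx = np.arange(P.shape[0])
    i, j = np.meshgrid(idx, idx, indexing="ij")
    contrast = np.sum(P * (i - j) ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    energy = np.sum(P ** 2)
    return np.array([contrast, homogeneity, energy])
```

Such descriptors, computed per channel in RGB and LAB, would form the feature vectors fed to the stacked and voted classifiers.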
Computer Vision-Based Gait Recognition on the Edge: A Survey on Feature Representations, Models, and Architectures
Edwin Salcedo
Journal of Imaging, 10(12). Published 2024-12-18. DOI: 10.3390/jimaging10120326. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11728419/pdf/

Abstract: Computer vision-based gait recognition (CVGR) is a technology that has gained considerable attention in recent years due to its non-invasive, unobtrusive, and difficult-to-conceal nature. Beyond its applications in biometrics, CVGR holds significant potential for healthcare and human-computer interaction. Current CVGR systems often transmit collected data to a cloud server for machine learning-based gait pattern recognition. While effective, this cloud-centric approach can result in increased system response times. Alternatively, the emerging paradigm of edge computing, which involves moving computational processes to local devices, offers the potential to reduce latency, enable real-time surveillance, and eliminate reliance on internet connectivity. Furthermore, recent advancements in low-cost, compact microcomputers capable of handling complex inference tasks (e.g., Jetson Nano Orin, Jetson Xavier NX, and Khadas VIM4) have created exciting opportunities for deploying CVGR systems at the edge. This paper reports the state of the art in gait data acquisition modalities, feature representations, models, and architectures for CVGR systems suitable for edge computing. Additionally, this paper addresses the general limitations and highlights new avenues for future research in the promising intersection of CVGR and edge computing.
Point-Cloud Instance Segmentation for Spinning Laser Sensors
Alvaro Casado-Coscolla, Carlos Sanchez-Belenguer, Erik Wolfart, Vitor Sequeira
Journal of Imaging, 10(12). Published 2024-12-17. DOI: 10.3390/jimaging10120325. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11728245/pdf/

Abstract: In this paper, we face the point-cloud segmentation problem for spinning laser sensors from a deep-learning (DL) perspective. Since the sensors natively provide their measurements in a 2D grid, we directly use state-of-the-art models designed for visual information for the segmentation task and then exploit the range information to ensure 3D accuracy. This allows us to effectively address the main challenges of applying DL techniques to point clouds, i.e., lack of structure and increased dimensionality. To the best of our knowledge, this is the first work that faces the 3D segmentation problem from a 2D perspective without explicitly re-projecting 3D point clouds. Moreover, our approach exploits multiple channels available in modern sensors, i.e., range, reflectivity, and ambient illumination. We also introduce a novel data-mining pipeline that enables the annotation of 3D scans without human intervention. Together with this paper, we present a new public dataset with all the data collected for training and evaluating our approach, where point clouds preserve their native sensor structure and where every single measurement contains range, reflectivity, and ambient information, together with its associated 3D point. As experimental results show, our approach achieves state-of-the-art results both in terms of performance and inference time. Additionally, we provide a novel ablation test that analyses the individual and combined contributions of the different channels provided by modern laser sensors.
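The "native 2D grid" the paper exploits is the range image a spinning lidar produces: rows correspond to elevation rings, columns to azimuth steps. Recovering 3D points from it is a fixed spherical back-projection, sketched below (the field-of-view bounds and uniform angular spacing are illustrative assumptions; real sensors publish their exact beam angles):

```python
import numpy as np

def range_image_to_points(rng_img, fov_up_deg=15.0, fov_down_deg=-15.0):
    """Back-project a spinning-lidar range image (rows = elevation rings,
    cols = azimuth steps) into 3D points while keeping the 2D layout."""
    H, W = rng_img.shape
    elev = np.deg2rad(np.linspace(fov_up_deg, fov_down_deg, H))[:, None]
    azim = np.linspace(-np.pi, np.pi, W, endpoint=False)[None, :]
    x = rng_img * np.cos(elev) * np.cos(azim)
    y = rng_img * np.cos(elev) * np.sin(azim)
    z = rng_img * np.sin(elev)
    return np.stack([x, y, z], axis=-1)   # shape (H, W, 3)
```

Because the mapping preserves the (row, col) structure, a 2D segmentation mask predicted on the range image indexes the 3D points directly, with no re-projection step.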
Current Role of CT Pulmonary Angiography in Pulmonary Embolism: A State-of-the-Art Review
Ignacio Diaz-Lorenzo, Alberto Alonso-Burgos, Alfonsa Friera Reyes, Ruben Eduardo Pacios Blanco, Maria Del Carmen de Benavides Bernaldo de Quiros, Guillermo Gallardo Madueño
Journal of Imaging, 10(12). Published 2024-12-15. DOI: 10.3390/jimaging10120323. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11678867/pdf/

Abstract: The purpose of this study is to conduct a literature review on the current role of computed tomography pulmonary angiography (CTPA) in the diagnosis and prognosis of pulmonary embolism (PE). It addresses key topics such as the quantification of the thrombotic burden, its role as a predictor of mortality, new diagnostic techniques that are available, the possibility of analyzing the thrombus composition to differentiate its evolutionary stage, and the applicability of artificial intelligence (AI) in PE through CTPA. The only finding from CTPA that has been validated as a prognostic factor so far is the right ventricle/left ventricle (RV/LV) diameter ratio being >1, which is associated with a 2.5-fold higher risk of all-cause mortality or adverse events, and a 5-fold higher risk of PE-related mortality. The increasing use of techniques such as dual-energy computed tomography allows for the more accurate diagnosis of perfusion defects, which may go undetected in conventional computed tomography, identifying up to 92% of these defects compared to 78% being detected by CTPA. Additionally, it is essential to explore the latest advances in the application of AI to CTPA, which are currently expanding and have demonstrated a 23% improvement in the detection of subsegmental emboli compared to manual interpretation. With deep image analysis, up to a 95% accuracy has been achieved in predicting PE severity based on the thrombus volume and perfusion deficits. These advancements over the past 10 years significantly contribute to early intervention strategies and, therefore, to the improvement of morbidity and mortality outcomes for these patients.
Exploiting 2D Neural Network Frameworks for 3D Segmentation Through Depth Map Analytics of Harvested Wild Blueberries (Vaccinium angustifolium Ait.)
Connor C Mullins, Travis J Esau, Qamar U Zaman, Ahmad A Al-Mallahi, Aitazaz A Farooque
Journal of Imaging, 10(12). Published 2024-12-15. DOI: 10.3390/jimaging10120324. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676057/pdf/

Abstract: This study introduced a novel approach to 3D image segmentation utilizing a neural network framework applied to 2D depth map imagery, with Z-axis values visualized through color gradation. This research involved comprehensive data collection from mechanically harvested wild blueberries to populate 3D and red-green-blue (RGB) images of filled totes through time-of-flight and RGB cameras, respectively. Advanced neural network models from the YOLOv8 and Detectron2 frameworks were assessed for their segmentation capabilities. Notably, the YOLOv8 models, particularly YOLOv8n-seg, demonstrated superior processing efficiency, with an average time of 18.10 ms, significantly faster than the Detectron2 models, which exceeded 57 ms, while maintaining high performance with a mean intersection over union (IoU) of 0.944 and a Matthews correlation coefficient (MCC) of 0.957. A qualitative comparison of segmentation masks indicated that the YOLO models produced smoother and more accurate object boundaries, whereas Detectron2 showed jagged edges and under-segmentation. Statistical analyses, including ANOVA and Tukey's HSD test (α = 0.05), confirmed the superior segmentation performance of models on depth maps over RGB images (p < 0.001). This study concludes by recommending the YOLOv8n-seg model for real-time 3D segmentation in precision agriculture, providing insights that can enhance volume estimation, yield prediction, and resource management practices.
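The key trick in this study, encoding Z-axis values as a color gradation so an ordinary 2D segmentation network can consume depth, can be sketched as a simple linear colormap. The near-blue/far-red mapping below is an illustrative assumption, not the authors' exact encoding:

```python
import numpy as np

def depth_to_rgb(depth, d_min=None, d_max=None):
    """Encode a depth map as an RGB gradient so 2D segmentation networks
    can ingest Z values: near = blue, far = red, linear in between."""
    d_min = depth.min() if d_min is None else d_min
    d_max = depth.max() if d_max is None else d_max
    scale = d_max - d_min
    scale = scale if scale > 0 else 1.0      # guard a flat depth map
    t = np.clip((depth - d_min) / scale, 0.0, 1.0)
    rgb = np.zeros(depth.shape + (3,))
    rgb[..., 0] = t          # red channel grows with distance
    rgb[..., 2] = 1.0 - t    # blue channel fades with distance
    return (rgb * 255).astype(np.uint8)
```

The resulting three-channel image drops straight into a YOLOv8-seg or Detectron2 training pipeline in place of an RGB photograph.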
DAT: Deep Learning-Based Acceleration-Aware Trajectory Forecasting
Ali Asghar Sharifi, Ali Zoljodi, Masoud Daneshtalab
Journal of Imaging, 10(12). Published 2024-12-13. DOI: 10.3390/jimaging10120321. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11677107/pdf/

Abstract: As the demand for autonomous driving (AD) systems has increased, the enhancement of their safety has become critically important. A fundamental capability of AD systems is object detection and trajectory forecasting of vehicles and pedestrians around the ego-vehicle, which is essential for preventing potential collisions. This study introduces the Deep learning-based Acceleration-aware Trajectory forecasting (DAT) model, a deep learning-based approach for object detection and trajectory forecasting, utilizing raw sensor measurements. DAT is an end-to-end model that processes sequential sensor data to detect objects and forecasts their future trajectories at each time step. The core innovation of DAT lies in its novel forecasting module, which leverages acceleration data to enhance trajectory forecasting, leading to the consideration of a variety of agent motion models. We propose a robust and innovative method for estimating ground-truth acceleration for objects, along with an object detector that predicts acceleration attributes for each detected object and a novel method for trajectory forecasting. DAT is trained and evaluated on the NuScenes dataset, demonstrating its empirical effectiveness through extensive experiments. The results indicate that DAT significantly surpasses state-of-the-art methods, particularly in enhancing forecasting accuracy for objects exhibiting both linear and nonlinear motion patterns, achieving up to a 2× improvement. This advancement highlights the critical role of incorporating acceleration data into predictive models, representing a substantial step forward in the development of safer autonomous driving systems.
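Why acceleration helps is visible in the simplest kinematic baseline: a constant-velocity extrapolation misses any agent that is braking or turning, while a constant-acceleration model captures the curvature of its path. A sketch of the latter (purely the textbook motion model, not DAT's learned forecasting module; the step size is an illustrative assumption):

```python
import numpy as np

def forecast_const_accel(p, v, a, horizon, dt=0.5):
    """Roll a constant-acceleration motion model forward:
    p_t = p0 + v0*t + 0.5*a*t^2, one row per future time step."""
    t = (np.arange(1, horizon + 1) * dt)[:, None]   # column of times
    return p + v * t + 0.5 * a * t ** 2             # (horizon, dims)
```

DAT's contribution is estimating the acceleration term from raw sensor data and learning richer motion models on top; the formula above is only the baseline such models generalize.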