{"title":"A Vision Method for Detecting Citrus Separation Lines Using Line-Structured Light.","authors":"Qingcang Yu, Song Xue, Yang Zheng","doi":"10.3390/jimaging11080265","DOIUrl":"10.3390/jimaging11080265","url":null,"abstract":"<p><p>The detection of citrus separation lines is a crucial step in the citrus processing industry. Inspired by the achievements of line-structured light technology in surface defect detection, this paper proposes a method for detecting citrus separation lines based on line-structured light. Firstly, a gamma-corrected Otsu method is employed to extract the laser stripe region from the image. Secondly, an improved skeleton extraction algorithm mitigates the bifurcation errors inherent in conventional skeleton extraction algorithms while simultaneously acquiring 3D point cloud data of the citrus surface. Finally, the least squares progressive iterative approximation algorithm is applied to approximate the ideal surface curve; subsequently, principal component analysis is used to derive the normals of this ideally fitted curve. The deviation between each point (along its corresponding normal direction) and the actual geometric characteristic curve is then adopted as a quantitative index for separation-line positioning. The average similarity between the extracted separation lines and the manually defined standard separation lines reaches 92.5%. In total, 95% of the points on the separation lines obtained by this method have an error of less than 4 pixels. Experimental results demonstrate that through quantitative deviation analysis of geometric features, automatic detection and positioning of the separation lines are achieved, satisfying the requirements of high precision and non-destructiveness for automatic citrus splitting.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387447/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
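The first step above — gamma correction followed by Otsu thresholding to isolate a bright laser stripe — can be sketched in a few lines of numpy. The gamma value and the synthetic stripe image below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Apply gamma correction to an 8-bit grayscale image (gamma value is a placeholder)."""
    norm = img.astype(np.float64) / 255.0
    return (255.0 * norm ** gamma).astype(np.uint8)

def otsu_threshold(img):
    """Return the Otsu threshold that maximizes between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bright laser stripe on a dark background
img = np.zeros((32, 32), dtype=np.uint8)
img[14:18, :] = 220                      # stripe rows
corrected = gamma_correct(img)
mask = corrected > otsu_threshold(corrected)   # laser stripe region
```

Gamma correction lifts mid-tone stripe pixels before thresholding, which is what makes plain Otsu usable on unevenly lit fruit surfaces.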
{"title":"SABE-YOLO: Structure-Aware and Boundary-Enhanced YOLO for Weld Seam Instance Segmentation.","authors":"Rui Wen, Wu Xie, Yong Fan, Lanlan Shen","doi":"10.3390/jimaging11080262","DOIUrl":"10.3390/jimaging11080262","url":null,"abstract":"<p><p>Accurate weld seam recognition is essential in automated welding systems, as it directly affects path planning and welding quality. With the rapid advancement of industrial vision, weld seam instance segmentation has emerged as a prominent research focus in both academia and industry. However, existing approaches still face significant challenges in boundary perception and structural representation. Due to the inherently elongated shapes, complex geometries, and blurred edges of weld seams, current segmentation models often struggle to maintain high accuracy in practical applications. To address this issue, a novel structure-aware and boundary-enhanced YOLO (SABE-YOLO) is proposed for weld seam instance segmentation. First, a Structure-Aware Fusion Module (SAFM) is designed to enhance structural feature representation through strip pooling attention and element-wise multiplicative fusion, targeting the difficulty in extracting elongated and complex features. Second, a C2f-based Boundary-Enhanced Aggregation Module (C2f-BEAM) is constructed to improve edge feature sensitivity by integrating multi-scale boundary detail extraction, feature aggregation, and attention mechanisms. Finally, the inner minimum point distance-based intersection over union (Inner-MPDIoU) is introduced to improve localization accuracy for weld seam regions. Experimental results on the self-built weld seam image dataset show that SABE-YOLO outperforms YOLOv8n-Seg by 3 percentage points in the AP(50-95) metric, reaching 46.3%. Meanwhile, it maintains a low computational cost (18.3 GFLOPs) and a small number of parameters (6.6M), while achieving an inference speed of 127 FPS, demonstrating a favorable trade-off between segmentation accuracy and computational efficiency. The proposed method provides an effective solution for high-precision visual perception of complex weld seam structures and demonstrates strong potential for industrial application.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
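The MPDIoU family of losses replaces center-distance penalties with normalized corner-point distances. Below is a minimal sketch of plain MPDIoU between two axis-aligned boxes; the paper's Inner-MPDIoU additionally scores scaled auxiliary ("inner") boxes, which is not reproduced here.

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """MPDIoU between two (x1, y1, x2, y2) boxes, normalized by image size.
    Sketch of the base metric only; Inner-MPDIoU adds scaled auxiliary boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # Squared distances between matching corners, normalized by image diagonal
    d1 = (ax1 - bx1) ** 2 + (ay1 - by1) ** 2
    d2 = (ax2 - bx2) ** 2 + (ay2 - by2) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm
```

Identical boxes score exactly 1.0, and any corner misalignment is penalized even when IoU is unchanged, which is what sharpens localization on thin, elongated seams.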
{"title":"Evaluating the Impact of 2D MRI Slice Orientation and Location on Alzheimer's Disease Diagnosis Using a Lightweight Convolutional Neural Network.","authors":"Nadia A Mohsin, Mohammed H Abdulameer","doi":"10.3390/jimaging11080260","DOIUrl":"10.3390/jimaging11080260","url":null,"abstract":"<p><p>Accurate detection of Alzheimer's disease (AD) is critical yet challenging for early medical intervention. Deep learning methods, especially convolutional neural networks (CNNs), have shown promising potential for improving diagnostic accuracy using magnetic resonance imaging (MRI). This study aims to identify the most informative combination of MRI slice orientation and anatomical location for AD classification. We propose an automated framework that first selects the most relevant slices using a feature entropy-based method applied to activation maps from a pretrained CNN model. For classification, we employ a lightweight CNN architecture based on depthwise separable convolutions to efficiently analyze the selected 2D MRI slices extracted from preprocessed 3D brain scans. To further interpret model behavior, an attention mechanism is integrated to analyze which feature level contributes the most to the classification process. The model is evaluated on three binary tasks: AD vs. mild cognitive impairment (MCI), AD vs. cognitively normal (CN), and MCI vs. CN. The experimental results show the highest accuracy (97.4%) in distinguishing AD from CN when utilizing the selected slices from the ninth axial segment, followed by the tenth segment of coronal and sagittal orientations. These findings demonstrate the significance of slice location and orientation in MRI-based AD diagnosis and highlight the potential of lightweight CNNs for clinical use.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387691/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
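Entropy-based slice selection ranks candidate 2D slices by the information content of their features and keeps the top scorers. The sketch below scores raw slice intensities for self-containment; the paper scores entropy on CNN activation maps, and the volume and `k` below are illustrative.

```python
import numpy as np

def slice_entropy(slice_2d, bins=32):
    """Shannon entropy of a 2D slice's intensity histogram."""
    hist, _ = np.histogram(slice_2d, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_slices(volume, k=3):
    """Indices of the k most informative slices (highest entropy) along axis 0."""
    scores = [slice_entropy(volume[i]) for i in range(volume.shape[0])]
    return sorted(np.argsort(scores)[-k:].tolist())

# Toy volume: only slices 4-6 carry structure, the rest are uniform
rng = np.random.default_rng(0)
vol = np.zeros((10, 16, 16))
vol[4:7] = rng.normal(size=(3, 16, 16))
```

Uniform slices collapse into a single histogram bin (entropy 0), so only the structured slices survive the ranking.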
{"title":"Quantitative Magnetic Resonance Imaging and Patient-Reported Outcomes in Patients Undergoing Hip Labral Repair or Reconstruction.","authors":"Kyle S J Jamar, Adam Peszek, Catherine C Alder, Trevor J Wait, Caleb J Wipf, Carson L Keeter, Stephanie W Mayer, Charles P Ho, James W Genuario","doi":"10.3390/jimaging11080261","DOIUrl":"10.3390/jimaging11080261","url":null,"abstract":"<p><p>This study evaluates the relationship between preoperative cartilage quality, measured by T2 mapping, and patient-reported outcomes following labral tear treatment. We retrospectively reviewed patients aged 14-50 who underwent primary hip arthroscopy with either labral repair or reconstruction. Preoperative T2 values of femoral, acetabular, and labral tissue were assessed from MRI by blinded reviewers. International Hip Outcome Tool (iHOT-12) scores were collected preoperatively and up to two years postoperatively. Associations between T2 values and iHOT-12 scores were analyzed using univariate mixed linear models. Twenty-nine patients were included (mean age of 32.5 years, BMI 24 kg/m<sup>2</sup>, 48.3% female, and 22 repairs). Across all patients, higher T2 values were associated with higher iHOT-12 scores at baseline and early postoperative timepoints (three months for cartilage and six months for labrum; <i>p</i> < 0.05). Lower T2 values were associated with higher 12- and 24-month iHOT-12 scores across all structures (<i>p</i> < 0.001). Similar trends were observed within the repair and reconstruction subgroups, with delayed negative associations correlating with worse tissue quality. T2 mapping showed time-dependent correlations with iHOT-12 scores, indicating that worse cartilage or labral quality predicts poorer long-term outcomes. These findings support the utility of T2 mapping as a preoperative tool for prognosis in hip preservation surgery.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387705/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive RGB-D Semantic Segmentation with Skip-Connection Fusion for Indoor Staircase and Elevator Localization.","authors":"Zihan Zhu, Henghong Lin, Anastasia Ioannou, Tao Wang","doi":"10.3390/jimaging11080258","DOIUrl":"10.3390/jimaging11080258","url":null,"abstract":"<p><p>Accurate semantic segmentation of indoor architectural elements, such as staircases and elevators, is critical for safe and efficient robotic navigation, particularly in complex multi-floor environments. Traditional fusion methods struggle with occlusions, reflections, and low-contrast regions. In this paper, we propose a novel feature fusion module, Skip-Connection Fusion (SCF), that dynamically integrates RGB (Red, Green, Blue) and depth features through an adaptive weighting mechanism and skip-connection integration. This approach enables the model to selectively emphasize informative regions while suppressing noise, effectively addressing challenging conditions such as partially blocked staircases, glossy elevator doors, and dimly lit stair edges, which improves obstacle detection and supports reliable human-robot interaction in complex environments. Extensive experiments on a newly collected dataset demonstrate that SCF consistently outperforms state-of-the-art methods, including PSPNet and DeepLabv3, in both overall mIoU (mean Intersection over Union) and challenging-case performance. Specifically, our SCF module improves segmentation accuracy by 5.23% in the top 10% of challenging samples, highlighting its robustness in real-world conditions. Furthermore, we conduct a sensitivity analysis on the learnable weights, demonstrating their impact on segmentation quality across varying scene complexities. Our work provides a strong foundation for real-world applications in autonomous navigation, assistive robotics, and smart surveillance.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387349/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
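The adaptive weighting plus skip-connection idea can be illustrated with a scalar gate: a learned parameter blends the RGB and depth feature maps, and the RGB stream is re-injected via the skip path. The scalar gate `w` is a stand-in for the paper's learnable weights, which are not specified in the abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def scf_fuse(rgb_feat, depth_feat, w):
    """Skip-Connection Fusion sketch: a learnable gate blends RGB and depth
    features, then a skip connection re-injects the RGB stream.
    `w` stands in for the module's learned weights (an assumption)."""
    alpha = sigmoid(w)                       # adaptive weight in (0, 1)
    fused = alpha * rgb_feat + (1.0 - alpha) * depth_feat
    return fused + rgb_feat                  # skip-connection integration
```

With `w` pushed negative the gate leans on depth (useful on glossy, reflective doors); pushed positive it leans on RGB — the sensitivity analysis in the paper probes exactly this trade-off.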
{"title":"Road Marking Damage Degree Detection Based on Boundary Features Enhanced and Asymmetric Large Field-of-View Contextual Features.","authors":"Zheng Wang, Ryojun Ikeura, Soichiro Hayakawa, Zhiliang Zhang","doi":"10.3390/jimaging11080259","DOIUrl":"10.3390/jimaging11080259","url":null,"abstract":"<p><p>Road markings, as critical components of transportation infrastructure, are crucial for ensuring traffic safety. Accurate quantification of their damage severity is vital for effective maintenance prioritization. However, existing methods are limited to detecting the presence of damage without assessing its extent. To address this limitation, we propose a novel segmentation-based framework for estimating the degree of road marking damage. The method comprises two stages: segmentation of residual pixels from the damaged markings and segmentation of the intact markings region. This dual-segmentation strategy enables precise reconstruction and comparison for severity estimation. To enhance segmentation performance, we propose two key modules: the Asymmetric Large Field-of-View Contextual (ALFVC) module, which captures rich multi-scale contextual features, and the supervised Boundary Feature Enhancement (BFE) module, which strengthens shape representation and boundary accuracy. The experimental results demonstrate that our method achieved an average segmentation accuracy of 89.44%, outperforming the baseline by 5.86 percentage points. Moreover, the damage quantification achieved a minimum error rate of just 0.22% on the proprietary dataset. The proposed approach is both effective and lightweight, providing valuable support for automated maintenance planning and significantly improving the efficiency and precision of road marking management.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387804/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
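Once both masks exist, the severity estimate reduces to comparing them. A plausible reading of the dual-segmentation comparison — damage as the fraction of the reconstructed intact marking missing from the surviving paint — can be sketched as below; the paper's exact metric is not given in the abstract, so this formula is an assumption.

```python
import numpy as np

def damage_degree(residual_mask, intact_mask):
    """Damage severity as the fraction of the reconstructed intact marking
    that is absent from the residual (surviving) paint pixels.
    Mask semantics and formula are assumptions for illustration."""
    intact_area = intact_mask.sum()
    if intact_area == 0:
        return 0.0
    surviving = np.logical_and(residual_mask, intact_mask).sum()
    return 1.0 - surviving / intact_area

# Toy example: a 6x6 marking with its upper half worn away
intact = np.zeros((10, 10), dtype=bool)
intact[2:8, 2:8] = True            # reconstructed intact region, 36 px
residual = intact.copy()
residual[2:5, 2:8] = False         # 18 px of paint lost
```

Here half the marking is gone, so the degree evaluates to 0.5.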
{"title":"A Novel Method for Analysing the Curvature of the Anterior Lens: Multi-Radial Scheimpflug Imaging and Custom Conic Fitting Algorithm.","authors":"María Arcas-Carbonell, Elvira Orduna-Hospital, María Mechó-García, Guisela Fernández-Espinosa, Ana Sanchez-Cano","doi":"10.3390/jimaging11080257","DOIUrl":"10.3390/jimaging11080257","url":null,"abstract":"<p><p>This study describes and validates a novel method for assessing anterior crystalline lens curvature along vertical and horizontal meridians using radial measurements derived from Scheimpflug imaging. The aim was to evaluate whether pupil diameter (PD), anterior lens curvature, and anterior chamber depth (ACD) change during accommodation and whether these changes are age-dependent. A cross-sectional study was conducted on 104 right eyes from healthy participants aged 21-62 years. Sixteen radial images per eye were acquired using the Galilei Dual Scheimpflug Placido Disk Topographer under four accommodative demands (0, 1, 3, and 5 dioptres (D)). Custom software analysed lens curvature by calculating eccentricity in both meridians. Participants were analysed as a total group and by age subgroups. Accommodative amplitude and monocular accommodative facility were inversely correlated with age. Both PD and ACD significantly decreased with higher accommodative demands and age. Relative eccentricity decreased under accommodation, indicating increased lens curvature, especially in younger participants. Significant curvature changes were detected in the horizontal meridian only, although no statistically significant differences between meridians were found overall. The vertical meridian showed slightly higher eccentricity values, suggesting that it remained less curved. By enabling detailed, meridionally stratified in vivo assessment of anterior lens curvature, this novel method provides a valuable non-invasive approach for characterizing age-related biomechanical changes during accommodation. The resulting insights enhance our understanding of presbyopia progression, particularly regarding the spatial remodelling of the anterior lens surface.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387339/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
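A generic conic fit of this kind solves the sagitta equation x² = 2·R·y − (1+Q)·y² for apical radius R and asphericity Q by linear least squares, then derives eccentricity from Q. This is a standard formulation, not the authors' custom algorithm, and the synthetic spherical profile below is purely illustrative.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of x^2 = 2*R*y - (1+Q)*y^2 to profile points with
    the apex at the origin. Returns (R, Q). Generic conic fit, not the
    paper's exact method."""
    A = np.column_stack([y, -y**2])       # model: x^2 = (2R)*y - (1+Q)*y^2
    coef, *_ = np.linalg.lstsq(A, x**2, rcond=None)
    R = coef[0] / 2.0
    Q = coef[1] - 1.0
    return R, Q

def eccentricity(Q):
    """For a prolate ellipse (Q < 0), e = sqrt(-Q); a sphere (Q = 0) gives 0."""
    return float(np.sqrt(-Q)) if Q < 0 else 0.0

# Synthetic profile: a sphere of radius 10 mm sampled near the apex
y = np.linspace(0.01, 1.0, 50)
x = np.sqrt(2 * 10.0 * y - y**2)
R, Q = fit_conic(x, y)
```

Applying the fit per radial meridian, as the paper does across sixteen Scheimpflug images, yields the meridionally stratified eccentricity values discussed above.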
{"title":"Advancing Early Blight Detection in Potato Leaves Through ZeroShot Learning.","authors":"Muhammad Shoaib Farooq, Ayesha Kamran, Syed Atir Raza, Muhammad Farooq Wasiq, Bilal Hassan, Nitsa J Herzog","doi":"10.3390/jimaging11080256","DOIUrl":"10.3390/jimaging11080256","url":null,"abstract":"<p><p>Potatoes are one of the world's most widely cultivated crops, but their yield is coming under mounting pressure from early blight, a fungal disease caused by <i>Alternaria solani</i>. Early detection and accurate identification are key to effective disease management and yield protection. This paper introduces a novel deep learning framework called ZeroShot CNN, which integrates convolutional neural networks (CNNs) and ZeroShot Learning (ZSL) for the efficient classification of seen and unseen disease classes. The model utilizes convolutional layers for feature extraction and employs semantic embedding techniques to identify previously untrained classes. Implemented on the Kaggle potato disease dataset, ZeroShot CNN achieved 98.50% accuracy for seen categories and 99.91% accuracy for unseen categories, outperforming conventional methods. The hybrid approach demonstrated superior generalization, providing a scalable, real-time solution for detecting agricultural diseases. The success of this solution validates the potential of harnessing deep learning and ZeroShot inference to transform plant pathology and crop protection practices.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387161/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
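The standard ZSL inference step behind such a framework matches a CNN feature vector against class semantic embeddings by cosine similarity, so classes never seen in training remain reachable if their embeddings exist. The attribute vectors below are hypothetical placeholders, not the paper's embedding space.

```python
import numpy as np

def zeroshot_classify(feat, class_embeds):
    """Assign a feature vector to the nearest class semantic embedding by
    cosine similarity — the generic ZSL inference step; the paper's CNN
    backbone and embedding space are not reproduced here."""
    feat = feat / np.linalg.norm(feat)
    names, sims = [], []
    for name, emb in class_embeds.items():
        names.append(name)
        sims.append(float(feat @ (emb / np.linalg.norm(emb))))
    return names[int(np.argmax(sims))]

# Hypothetical semantic attribute embeddings (e.g. lesion colour/texture)
classes = {
    "healthy":      np.array([1.0, 0.0, 0.0]),
    "early_blight": np.array([0.0, 1.0, 0.5]),
}
```

An unseen class is handled by adding its attribute embedding to `classes` — no retraining of the feature extractor is required.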
{"title":"An Automated Method for Identifying Voids and Severe Loosening in GPR Images.","authors":"Ze Chai, Zicheng Wang, Zeshan Xu, Ziyu Feng, Yafeng Zhao","doi":"10.3390/jimaging11080255","DOIUrl":"10.3390/jimaging11080255","url":null,"abstract":"<p><p>This paper proposes a novel automatic recognition method for distinguishing voids and severe loosening in road structures based on features of ground-penetrating radar (GPR) B-scan images. By analyzing differences in image texture, the intensity and clarity of top reflection interfaces, and the regularity of internal waveforms, a set of discriminative features is constructed. Based on these features, we develop the FKS-GPR dataset, a high-quality, manually annotated GPR dataset collected from real road environments, covering diverse and complex background conditions. Compared to datasets based on simulations, FKS-GPR offers higher practical relevance. An improved ACF-YOLO network is then designed for automatic detection, and the experimental results show that the proposed method achieves superior accuracy and robustness, validating its effectiveness and engineering applicability.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 8","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12387192/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
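The three discriminative cues this abstract names — image texture, top-interface reflection strength, and internal waveform regularity — can each be reduced to a scalar from a B-scan array. The feature definitions below are loose illustrative stand-ins, not the paper's actual feature set.

```python
import numpy as np

def bscan_features(bscan):
    """Three hand-crafted cues from a GPR B-scan (rows = depth samples,
    cols = traces), loosely following the abstract; definitions are
    assumptions. Returns (texture, top_interface_strength, regularity)."""
    # Texture: mean absolute horizontal gradient across adjacent traces
    texture = float(np.abs(np.diff(bscan, axis=1)).mean())
    # Top reflection interface: strongest mean amplitude in the upper quarter
    top = float(np.abs(bscan[: bscan.shape[0] // 4]).mean(axis=1).max())
    # Regularity: mean correlation between adjacent traces (1 = very regular)
    traces = bscan.T
    cors = [np.corrcoef(traces[i], traces[i + 1])[0, 1]
            for i in range(len(traces) - 1)]
    return texture, top, float(np.mean(cors))

# Perfectly periodic medium: identical traces, zero horizontal texture
bscan = np.tile(np.sin(np.linspace(0.0, 6.28, 64))[:, None], (1, 16))
texture, top, reg = bscan_features(bscan)
```

Intuitively, a void yields a strong, sharp top interface with chaotic internals (low regularity), while severe loosening blurs the interface and raises texture — exactly the separation the detector is trained to exploit.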