Journal of Imaging: Latest Publications

DP-AMF: Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion for Single-View 3D Reconstruction.
IF 2.7
Journal of Imaging Pub Date : 2025-07-21 DOI: 10.3390/jimaging11070246
Luoxi Zhang, Chun Xie, Itaru Kitahara
{"title":"DP-AMF: Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion for Single-View 3D Reconstruction.","authors":"Luoxi Zhang, Chun Xie, Itaru Kitahara","doi":"10.3390/jimaging11070246","DOIUrl":"https://doi.org/10.3390/jimaging11070246","url":null,"abstract":"<p><p>Single-view 3D reconstruction remains fundamentally ill-posed, as a single RGB image lacks scale and depth cues, often yielding ambiguous results under occlusion or in texture-poor regions. We propose DP-AMF, a novel Depth-Prior-Guided Adaptive Multi-Modal and Global-Local Fusion framework that integrates high-fidelity depth priors-generated offline by the MARIGOLD diffusion-based estimator and cached to avoid extra training cost-with hierarchical local features from ResNet-32/ResNet-18 and semantic global features from DINO-ViT. A learnable fusion module dynamically adjusts per-channel weights to balance these modalities according to local texture and occlusion, and an implicit signed-distance field decoder reconstructs the final mesh. Extensive experiments on 3D-FRONT and Pix3D demonstrate that DP-AMF reduces Chamfer Distance by 7.64%, increases F-Score by 2.81%, and boosts Normal Consistency by 5.88% compared to strong baselines, while qualitative results show sharper edges and more complete geometry in challenging scenes. DP-AMF achieves these gains without substantially increasing model size or inference time, offering a robust and effective solution for complex single-view reconstruction tasks.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
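As context for the adaptive fusion step described in the abstract above, here is a minimal PyTorch sketch of a learnable per-channel gate over three modality feature maps; the class name, shapes, and pooling choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveChannelFusion(nn.Module):
    """Fuse depth, local, and global feature maps with learned per-channel weights."""
    def __init__(self, channels: int):
        super().__init__()
        # one weight logit per modality per channel, predicted from pooled statistics
        self.gate = nn.Linear(3 * channels, 3 * channels)

    def forward(self, depth, local, glob):
        # depth/local/glob: (B, C, H, W), already projected to a common size (assumed)
        b, c, _, _ = depth.shape
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in (depth, local, glob)], dim=1)
        logits = self.gate(pooled).view(b, 3, c)
        w = torch.softmax(logits, dim=1)                   # weights sum to 1 across modalities
        stack = torch.stack([depth, local, glob], dim=1)   # (B, 3, C, H, W)
        return (w.unsqueeze(-1).unsqueeze(-1) * stack).sum(dim=1)

fused = AdaptiveChannelFusion(64)(torch.rand(2, 64, 32, 32),
                                  torch.rand(2, 64, 32, 32),
                                  torch.rand(2, 64, 32, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```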
Three-Dimensional Ultraviolet Fluorescence Imaging in Cultural Heritage: A Review of Applications in Multi-Material Artworks.
IF 2.7
Journal of Imaging Pub Date : 2025-07-21 DOI: 10.3390/jimaging11070245
Luca Lanteri, Claudia Pelosi, Paola Pogliani
{"title":"Three-Dimensional Ultraviolet Fluorescence Imaging in Cultural Heritage: A Review of Applications in Multi-Material Artworks.","authors":"Luca Lanteri, Claudia Pelosi, Paola Pogliani","doi":"10.3390/jimaging11070245","DOIUrl":"https://doi.org/10.3390/jimaging11070245","url":null,"abstract":"<p><p>Ultraviolet-induced fluorescence (UVF) imaging represents a simple but powerful technique in cultural heritage studies. It is a nondestructive and non-invasive imaging technique which can supply useful and relevant information to define the state of conservation of an artifact. UVF imaging also helps to establish the value of an artwork by indicating inpainting, repaired areas, grouting, etc. In general, ultraviolet fluorescence imaging output takes the form of 2D photographs in the case of both paintings and sculptures. For this reason, a few years ago the idea of applying the photogrammetric method to create 3D digital twins under ultraviolet fluorescence was developed to address the requirements of restorers who need daily documentation tools for their work that are simple to use and can display the entire 3D object in a single file. This review explores recent applications of this innovative method of ultraviolet fluorescence imaging with reference to the wider literature on the UVF technique to make evident the practical importance of its application in cultural heritage.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning-Based Algorithm for the Classification of Left Ventricle Segments by Hypertrophy Severity.
IF 2.7
Journal of Imaging Pub Date : 2025-07-20 DOI: 10.3390/jimaging11070244
Wafa Baccouch, Bilel Hasnaoui, Narjes Benameur, Abderrazak Jemai, Dhaker Lahidheb, Salam Labidi
{"title":"Deep Learning-Based Algorithm for the Classification of Left Ventricle Segments by Hypertrophy Severity.","authors":"Wafa Baccouch, Bilel Hasnaoui, Narjes Benameur, Abderrazak Jemai, Dhaker Lahidheb, Salam Labidi","doi":"10.3390/jimaging11070244","DOIUrl":"https://doi.org/10.3390/jimaging11070244","url":null,"abstract":"<p><p>In clinical practice, left ventricle hypertrophy (LVH) continues to pose a considerable challenge, highlighting the need for more reliable diagnostic approaches. This study aims to propose an automated framework for the quantification of LVH extent and the classification of myocardial segments according to hypertrophy severity using a deep learning-based algorithm. The proposed method was validated on 133 subjects, including both healthy individuals and patients with LVH. The process starts with automatic LV segmentation using U-Net and the segmentation of the left ventricle cavity based on the American Heart Association (AHA) standards, followed by the division of each segment into three equal sub-segments. Then, an automated quantification of regional wall thickness (RWT) was performed. Finally, a convolutional neural network (CNN) was developed to classify each myocardial sub-segment according to hypertrophy severity. The proposed approach demonstrates strong performance in contour segmentation, achieving a Dice Similarity Coefficient (DSC) of 98.47% and a Hausdorff Distance (HD) of 6.345 ± 3.5 mm. For thickness quantification, it reaches a minimal mean absolute error (MAE) of 1.01 ± 1.16. Regarding segment classification, it achieves competitive performance metrics compared to state-of-the-art methods with an accuracy of 98.19%, a precision of 98.27%, a recall of 99.13%, and an F1-score of 98.7%. The obtained results confirm the high performance of the proposed method and highlight its clinical utility in accurately assessing and classifying cardiac hypertrophy. This approach provides valuable insights that can guide clinical decision-making and improve patient management strategies.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
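The DSC reported above is the standard overlap metric for comparing segmentation masks; a small NumPy sketch of its computation (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty -> perfect overlap

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(f"{dice_coefficient(a, b):.4f}")  # 0.5625 (9 shared pixels, 16 + 16 total)
```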
A Novel 3D Convolutional Neural Network-Based Deep Learning Model for Spatiotemporal Feature Mapping for Video Analysis: Feasibility Study for Gastrointestinal Endoscopic Video Classification.
IF 2.7
Journal of Imaging Pub Date : 2025-07-18 DOI: 10.3390/jimaging11070243
Mrinal Kanti Dhar, Mou Deb, Poonguzhali Elangovan, Keerthy Gopalakrishnan, Divyanshi Sood, Avneet Kaur, Charmy Parikh, Swetha Rapolu, Gianeshwaree Alias Rachna Panjwani, Rabiah Aslam Ansari, Naghmeh Asadimanesh, Shiva Sankari Karuppiah, Scott A Helgeson, Venkata S Akshintala, Shivaram P Arunachalam
{"title":"A Novel 3D Convolutional Neural Network-Based Deep Learning Model for Spatiotemporal Feature Mapping for Video Analysis: Feasibility Study for Gastrointestinal Endoscopic Video Classification.","authors":"Mrinal Kanti Dhar, Mou Deb, Poonguzhali Elangovan, Keerthy Gopalakrishnan, Divyanshi Sood, Avneet Kaur, Charmy Parikh, Swetha Rapolu, Gianeshwaree Alias Rachna Panjwani, Rabiah Aslam Ansari, Naghmeh Asadimanesh, Shiva Sankari Karuppiah, Scott A Helgeson, Venkata S Akshintala, Shivaram P Arunachalam","doi":"10.3390/jimaging11070243","DOIUrl":"https://doi.org/10.3390/jimaging11070243","url":null,"abstract":"<p><p>Accurate analysis of medical videos remains a major challenge in deep learning (DL) due to the need for effective spatiotemporal feature mapping that captures both spatial detail and temporal dynamics. Despite advances in DL, most existing models in medical AI focus on static images, overlooking critical temporal cues present in video data. To bridge this gap, a novel DL-based framework is proposed for spatiotemporal feature extraction from medical video sequences. As a feasibility use case, this study focuses on gastrointestinal (GI) endoscopic video classification. A 3D convolutional neural network (CNN) is developed to classify upper and lower GI endoscopic videos using the hyperKvasir dataset, which contains 314 lower and 60 upper GI videos. To address data imbalance, 60 matched pairs of videos are randomly selected across 20 experimental runs. Videos are resized to 224 × 224, and the 3D CNN captures spatiotemporal information. A 3D version of the parallel spatial and channel squeeze-and-excitation (P-scSE) is implemented, and a new block called the residual with parallel attention (RPA) block is proposed by combining P-scSE3D with a residual block. To reduce computational complexity, a (2 + 1)D convolution is used in place of full 3D convolution. The model achieves an average accuracy of 0.933, precision of 0.932, recall of 0.944, F1-score of 0.935, and AUC of 0.933. It is also observed that the integration of P-scSE3D increased the F1-score by 7%. This preliminary work opens avenues for exploring various GI endoscopic video-based prospective studies.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
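The (2 + 1)D convolution mentioned above factorizes a full 3D convolution into a 2D spatial convolution followed by a 1D temporal one, which cuts parameters and adds an extra nonlinearity; a minimal PyTorch sketch, with layer sizes chosen only for illustration:

```python
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    """Factorize a k x k x k 3D convolution into spatial (1,k,k) + temporal (k,1,1)."""
    def __init__(self, in_ch: int, out_ch: int, mid_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, k, k), padding=(0, p, p))
        self.relu = nn.ReLU(inplace=True)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(k, 1, 1), padding=(p, 0, 0))

    def forward(self, x):                    # x: (B, C, T, H, W)
        return self.temporal(self.relu(self.spatial(x)))

clip = torch.rand(1, 3, 16, 224, 224)        # a 16-frame RGB clip, as in 224 x 224 input
print(Conv2Plus1D(3, 32, 16)(clip).shape)    # torch.Size([1, 32, 16, 224, 224])
```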
FakeMusicCaps: A Dataset for Detection and Attribution of Synthetic Music Generated via Text-to-Music Models.
IF 2.7
Journal of Imaging Pub Date : 2025-07-18 DOI: 10.3390/jimaging11070242
Luca Comanducci, Paolo Bestagini, Stefano Tubaro
{"title":"FakeMusicCaps: A Dataset for Detection and Attribution of Synthetic Music Generated via Text-to-Music Models.","authors":"Luca Comanducci, Paolo Bestagini, Stefano Tubaro","doi":"10.3390/jimaging11070242","DOIUrl":"https://doi.org/10.3390/jimaging11070242","url":null,"abstract":"<p><p>Text-to-music (TTM) models have recently revolutionized the automatic music generation research field, specifically by being able to generate music that sounds more plausible than all previous state-of-the-art models and by lowering the technical proficiency needed to use them. For these reasons, they have readily started to be adopted for commercial uses and music production practices. This widespread diffusion of TTMs poses several concerns regarding copyright violation and rightful attribution, posing the need of serious consideration of them by the audio forensics community. In this paper, we tackle the problem of detection and attribution of TTM-generated data. We propose a dataset, FakeMusicCaps, that contains several versions of the music-caption pairs dataset MusicCaps regenerated via several state-of-the-art TTM techniques. We evaluate the proposed dataset by performing initial experiments regarding the detection and attribution of TTM-generated audio considering both closed-set and open-set classification.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709260","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting Very Early-Stage Breast Cancer in BI-RADS 3 Lesions of Large Population with Deep Learning.
IF 2.7
Journal of Imaging Pub Date : 2025-07-15 DOI: 10.3390/jimaging11070240
Congyu Wang, Changzhen Li, Gengxiao Lin
{"title":"Predicting Very Early-Stage Breast Cancer in BI-RADS 3 Lesions of Large Population with Deep Learning.","authors":"Congyu Wang, Changzhen Li, Gengxiao Lin","doi":"10.3390/jimaging11070240","DOIUrl":"https://doi.org/10.3390/jimaging11070240","url":null,"abstract":"<p><p>Breast cancer accounts for one in four new malignant tumors in women, and misdiagnosis can lead to severe consequences, including delayed treatment. Among patients classified with a BI-RADS 3 rating, the risk of very early-stage malignancy remains over 2%. However, due to the benign imaging characteristics of these lesions, radiologists often recommend follow-up rather than immediate biopsy, potentially missing critical early interventions. This study aims to develop a deep learning (DL) model to accurately identify very early-stage malignancies in BI-RADS 3 lesions using ultrasound (US) images, thereby improving diagnostic precision and clinical decision-making. A total of 852 lesions (256 malignant and 596 benign) from 685 patients who underwent biopsies or 3-year follow-up were collected by Southwest Hospital (SW) and Tangshan People's Hospital (TS) to develop and validate a deep learning model based on a novel transfer learning method. To further evaluate the performance of the model, six radiologists independently reviewed the external testing set on a web-based rating platform. The proposed model achieved an area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of 0.880, 0.786, and 0.833 in predicting BI-RADS 3 malignant lesions in the internal testing set. The proposed transfer learning method improves the clinical AUC of predicting BI-RADS 3 malignancy from 0.721 to 0.880. In the external testing set, the model achieved AUC, sensitivity, and specificity of 0.910, 0.875, and 0.786 and outperformed the radiologists with an average AUC of 0.653 (<i>p</i> = 0.021). The DL model could detect very early-stage malignancy of BI-RADS 3 lesions in US images and had higher diagnostic capability compared with experienced radiologists.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
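The paper's novel transfer learning method is not detailed in the abstract; for orientation only, here is a generic fine-tuning baseline in PyTorch, assuming an ImageNet-pretrained ResNet-18 with a new binary head (the backbone choice and training setup are assumptions, not the authors' approach):

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic fine-tuning baseline (NOT the paper's method): reuse ImageNet
# features and retrain a new benign-vs-malignant classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                        # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.rand(4, 3, 224, 224)                # stand-in ultrasound batch
labels = torch.tensor([0, 1, 0, 1])                # 0 = benign, 1 = malignant
loss = criterion(model(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```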
A Self-Supervised Adversarial Deblurring Face Recognition Network for Edge Devices.
IF 2.7
Journal of Imaging Pub Date : 2025-07-15 DOI: 10.3390/jimaging11070241
Hanwen Zhang, Myun Kim, Baitong Li, Yanping Lu
{"title":"A Self-Supervised Adversarial Deblurring Face Recognition Network for Edge Devices.","authors":"Hanwen Zhang, Myun Kim, Baitong Li, Yanping Lu","doi":"10.3390/jimaging11070241","DOIUrl":"https://doi.org/10.3390/jimaging11070241","url":null,"abstract":"<p><p>With the advancement of information technology, human activity recognition (HAR) has been widely applied in fields such as intelligent surveillance, health monitoring, and human-computer interaction. As a crucial component of HAR, facial recognition plays a key role, especially in vision-based activity recognition. However, current facial recognition models on the market perform poorly in handling blurry images and dynamic scenarios, limiting their effectiveness in real-world HAR applications. This study aims to construct a fast and accurate facial recognition model based on novel adversarial learning and deblurring theory to enhance its performance in human activity recognition. The model employs a generative adversarial network (GAN) as the core algorithm, optimizing its generation and recognition modules by decomposing the global loss function and incorporating a feature pyramid, thereby solving the balance challenge in GAN training. Additionally, deblurring techniques are introduced to improve the model's ability to handle blurry and dynamic images. Experimental results show that the proposed model achieves high accuracy and recall rates across multiple facial recognition datasets, with an average recall rate of 87.40% and accuracy rates of 81.06% and 79.77% on the YTF, IMDB-WIKI, and WiderFace datasets, respectively. These findings confirm that the model effectively addresses the challenges of recognizing faces in dynamic and blurry conditions in human activity recognition, demonstrating significant application potential.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
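As a reference point for the adversarial deblurring setup described above, here is a minimal PyTorch training step for a deblurring GAN; the tiny stand-in networks and the L1-plus-adversarial loss weighting are assumptions for illustration, not the paper's architecture or loss decomposition:

```python
import torch
import torch.nn as nn

# Tiny stand-in networks (hypothetical; the paper's modules are far larger).
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))            # blurry -> deblurred
D = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 4, stride=2, padding=1))  # patch realism logits

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

blurry = torch.rand(4, 3, 64, 64)   # stand-in batch
sharp  = torch.rand(4, 3, 64, 64)

# --- discriminator step: real sharp images vs. generated ones ---
fake = G(blurry).detach()
d_real, d_fake = D(sharp), D(fake)
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- generator step: fool D while staying close to the sharp target ---
fake = G(blurry)
d_fake = D(fake)
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, sharp)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```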
Estimating Snow-Related Daily Change Events in the Canadian Winter Season: A Deep Learning-Based Approach.
IF 2.7
Journal of Imaging Pub Date : 2025-07-14 DOI: 10.3390/jimaging11070239
Karim Malik, Isteyak Isteyak, Colin Robertson
{"title":"Estimating Snow-Related Daily Change Events in the Canadian Winter Season: A Deep Learning-Based Approach.","authors":"Karim Malik, Isteyak Isteyak, Colin Robertson","doi":"10.3390/jimaging11070239","DOIUrl":"https://doi.org/10.3390/jimaging11070239","url":null,"abstract":"<p><p>Snow water equivalent (SWE), an essential parameter of snow, is largely studied to understand the impact of climate regime effects on snowmelt patterns. This study developed a Siamese Attention U-Net (Si-Att-UNet) model to detect daily change events in the winter season. The daily SWE change event detection task is treated as an image content comparison problem in which the Si-Att-UNet compares a pair of SWE maps sampled at two temporal windows. The model detected SWE similarity and dissimilarity with an F1 score of 99.3% at a 50% confidence threshold. The change events were derived from the model's prediction of SWE similarity using the 50% threshold. Daily SWE change events increased between 1979 and 2018. However, the SWE change events were significant in March and April, with a positive Mann-Kendall test statistic (<i>tau</i> = 0.25 and 0.38, respectively). The highest frequency of zero-change events occurred in February. A comparison of the SWE change events and mean change segments with those of the northern hemisphere's climate anomalies revealed that low temperature and low precipitation anomalies reduced the frequency of SWE change events. The findings highlight the influence of climate variables on daily changes in snow-related water storage in March and April.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
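The Mann-Kendall statistics quoted above (tau = 0.25 and 0.38) measure monotonic trend in a time series; a small NumPy sketch of the tau computation, ignoring tie corrections, on synthetic data shaped like the 1979-2018 study period:

```python
import numpy as np

def mann_kendall_tau(x: np.ndarray) -> float:
    """Kendall's tau for a monotonic-trend test: S normalized by n(n-1)/2."""
    n = len(x)
    s = sum(np.sign(x[j] - x[i])            # +1 for increases, -1 for decreases
            for i in range(n - 1) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

rng = np.random.default_rng(0)
years = np.arange(1979, 2019)
events = 0.5 * (years - 1979) + rng.normal(0, 5, size=years.size)  # noisy upward trend
print(round(mann_kendall_tau(events), 2))   # positive tau indicates an increasing trend
```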
Detection of Major Depressive Disorder from Functional Magnetic Resonance Imaging Using Regional Homogeneity and Feature/Sample Selective Evolving Voting Ensemble Approaches.
IF 2.7
Journal of Imaging Pub Date : 2025-07-14 DOI: 10.3390/jimaging11070238
Bindiya A R, B S Mahanand, Vasily Sachnev, Direct Consortium
{"title":"Detection of Major Depressive Disorder from Functional Magnetic Resonance Imaging Using Regional Homogeneity and Feature/Sample Selective Evolving Voting Ensemble Approaches.","authors":"Bindiya A R, B S Mahanand, Vasily Sachnev, Direct Consortium","doi":"10.3390/jimaging11070238","DOIUrl":"https://doi.org/10.3390/jimaging11070238","url":null,"abstract":"<p><p>Major depressive disorder is a mental illness characterized by persistent sadness or loss of interest that affects a person's daily life. Early detection of this disorder is crucial for providing timely and effective treatment. Neuroimaging modalities, namely, functional magnetic resonance imaging, can be used to identify changes in brain regions related to major depressive disorder. In this study, regional homogeneity images, one of the derivative of functional magnetic resonance imaging is employed to detect major depressive disorder using the proposed feature/sample evolving voting ensemble approach. A total of 2380 subjects consisting of 1104 healthy controls and 1276 patients with major depressive disorder from Rest-meta-MDD consortium are studied. Regional homogeneity features from 90 regions are extracted using automated anatomical labeling template. These regional homogeneity features are then fed as an input to the proposed feature/sample selective evolving voting ensemble for classification. The proposed approach achieves an accuracy of 91.93%, and discriminative features obtained from the classifier are used to identify brain regions which may be responsible for major depressive disorder. A total of nine brain regions, namely, left superior temporal gyrus, left postcentral gyrus, left anterior cingulate gyrus, right inferior parietal lobule, right superior medial frontal gyrus, left lingual gyrus, right putamen, left fusiform gyrus, and left middle temporal gyrus, are identified. This study clearly indicates that these brain regions play a critical role in detecting major depressive disorder.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
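The paper's feature/sample selective evolving voting ensemble is a custom algorithm; for orientation only, a plain soft-voting ensemble in scikit-learn on synthetic 90-feature vectors standing in for the 90-region ReHo features (standard API, not the authors' method):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for 90-region regional homogeneity feature vectors.
X, y = make_classification(n_samples=400, n_features=90, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Plain soft voting: average predicted probabilities across diverse members.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(f"held-out accuracy: {ensemble.score(X_te, y_te):.3f}")
```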
Visual Neuroplasticity: Modulating Cortical Excitability with Flickering Light Stimulation.
IF 2.7
Journal of Imaging Pub Date : 2025-07-14 DOI: 10.3390/jimaging11070237
Francisco J Ávila
{"title":"Visual Neuroplasticity: Modulating Cortical Excitability with Flickering Light Stimulation.","authors":"Francisco J Ávila","doi":"10.3390/jimaging11070237","DOIUrl":"https://doi.org/10.3390/jimaging11070237","url":null,"abstract":"<p><p>The balance between cortical excitation and inhibition (E/I balance) in the cerebral cortex is critical for cognitive processing and neuroplasticity. Modulation of this balance has been linked to a wide range of neuropsychiatric and neurodegenerative disorders. The human visual system has well-differentiated magnocellular (M) and parvocellular (P) pathways, which provide a useful model to study cortical excitability using non-invasive visual flicker stimulation. We present an Arduino-driven non-image forming system to deliver controlled flickering light stimuli at different frequencies and wavelengths. By triggering the critical flicker fusion (CFF) frequency, we attempt to modulate the M-pathway activity and attenuate P-pathway responses, in parallel with induced optical scattering. EEG recordings were used to monitor cortical excitability and oscillatory dynamics during visual stimulation. Visual stimulation in the CFF, combined with induced optical scattering, selectively enhanced magnocellular activity and suppressed parvocellular input. EEG analysis showed a modulation of cortical oscillations, especially in the high frequency beta and gamma range. Our results support the hypothesis that visual flicker in the CFF, in addition to spatial degradation, initiates detectable neuroplasticity and regulates cortical excitation and inhibition. These findings suggest new avenues for therapeutic manipulation through visual pathways in diseases such as Alzheimer's disease, epilepsy, severe depression, and schizophrenia.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-07-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144709268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
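For a sense of the timing such a stimulator must hit, a trivial Python helper for the half-period of a square-wave flicker at a given frequency (illustrative only; the study's stimulator is Arduino-based, and the example frequencies are assumptions):

```python
def flicker_half_period_us(f_hz: float) -> int:
    """Microseconds the light spends ON (and OFF) per cycle at 50% duty."""
    return int(round(1e6 / (2 * f_hz)))

# Photopic human CFF is typically a few tens of Hz, depending on conditions.
for f in (10, 40, 60):
    print(f, "Hz ->", flicker_half_period_us(f), "µs half-period")
```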