Vision (Switzerland) Latest Articles

Training Improves Avoidance of Natural Sick Faces: Changes in Visual Attention and Approach Decisions.
Vision (Switzerland) Pub Date: 2025-05-02 DOI: 10.3390/vision9020039
Tiffany S Leung, Krisztina V Jakobsen, Sarah E Maylott, Arushi Malik, Shuo Zhang, Elizabeth A Simpson
Abstract: Humans evolved a behavioral immune system to avoid infectious disease, including the ability to detect sickness in faces. However, it is unclear whether the ability to recognize and avoid facial cues of disease is malleable, flexibly calibrated by experience. Thus, we experimentally tested whether we can improve adults' (N = 133) lassitude (sick) face perception, measuring their recognition, avoidance, and visual attention to naturally sick and healthy faces. Participants randomly assigned to a training about disease, but not a control group, were better at avoiding sick people. The disease-trained group also looked more equally between sick and healthy faces when identifying who was sick, compared to the control group, who looked longer at the sick faces than the healthy faces. Though we detected no group differences in time looking at the eyes and at the mouths, the disease-trained group used these features more to decide who was sick, reflecting key features of the lassitude expression. Our findings suggest that facial sickness perception may be flexible, influenced by experience, and underscore the need for future studies to test how to further strengthen this skill. Ultimately, developing interventions that use this sick face plasticity may reduce disease transmission.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12101381/pdf/
Citations: 0
The Phenomenology of Offline Perception: Multisensory Profiles of Voluntary Mental Imagery and Dream Imagery.
Vision (Switzerland) Pub Date: 2025-04-21 DOI: 10.3390/vision9020037
Maren Bilzer, Merlin Monzel
Abstract: Both voluntary mental imagery and dream imagery involve multisensory representations without externally present stimuli and can therefore be categorized as offline perception. Because common mechanisms are assumed, correlations between multisensory dream imagery profiles and multisensory voluntary mental imagery profiles were hypothesized. In a sample of 226 participants, correlations within each state of consciousness were significantly larger than correlations across states, favouring two distinct networks. However, the association between the vividness of voluntary mental imagery and the vividness of dream imagery was moderated by the frequency of dream recall and of lucid dreaming, suggesting that the two networks become increasingly similar when higher metacognition is involved. Additionally, the vividness of emotional and visual imagery was significantly higher for dream imagery than for voluntary mental imagery, reflecting the immersive nature of dreams and the continuity of visual dominance across waking and sleep. In contrast, the vividness of auditory, olfactory, gustatory, and tactile imagery was higher for voluntary mental imagery, probably owing to greater cognitive control while awake. Most results were replicated four weeks later, weakening the notion of state influences. Overall, our results indicate similarities between dream imagery and voluntary mental imagery that justify a common classification as offline perception, but they also highlight important differences.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015918/pdf/
Citations: 0
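The central comparison in this study, correlations among sensory vividness ratings within a state of consciousness versus across states, can be illustrated with a minimal sketch. The ratings below are simulated with a shared trait factor plus state-specific factors; the numbers and modality count are placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 226, 6  # 226 participants (as in the abstract), 6 simulated sensory modalities

# Simulated vividness ratings: a shared "trait" factor plus a factor specific to each
# state of consciousness, so within-state correlations exceed across-state ones.
trait = rng.normal(size=(n, 1))
waking = trait + 2 * rng.normal(size=(n, 1)) + rng.normal(size=(n, k))
dream = trait + 2 * rng.normal(size=(n, 1)) + rng.normal(size=(n, k))

corr = np.corrcoef(np.hstack([waking, dream]), rowvar=False)
off_diag = ~np.eye(k, dtype=bool)
within = np.concatenate([corr[:k, :k][off_diag], corr[k:, k:][off_diag]]).mean()
across = corr[:k, k:].mean()

print(f"mean within-state correlation: {within:.2f}")
print(f"mean across-state correlation: {across:.2f}")
```

With these assumptions the within-state mean is markedly higher than the across-state mean, which is the pattern the abstract reports as favouring two distinct networks.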
Short-Wavelength and Infrared Autofluorescence Imaging in Pachychoroid Neovasculopathy.
Vision (Switzerland) Pub Date: 2025-04-21 DOI: 10.3390/vision9020038
Norihiko Nakagawa, Takuya Shunto, Issei Nishiyama, Kohei Maruyama, Miki Sawa
Abstract: Purpose: To investigate the relationship between short-wavelength autofluorescence (SWAF) and infrared autofluorescence (IRAF) patterns in pachychoroid neovasculopathy (PNV) with serous retinal detachment (SRD). Methods: An observational case series of 62 eyes of 58 consecutive patients diagnosed with symptomatic PNV between January 2019 and October 2021 at a single institution. SWAF and IRAF patterns were analyzed in relation to disease chronicity, and autofluorescence changes corresponding to macular neovascularization (MNV) were assessed in the two imaging modalities. Results: SWAF patterns and the mean duration of symptoms were as follows: blocked (15 eyes, 24%), 1.0 months; mottled (8 eyes, 13%), 2.8 months; hyper (24 eyes, 39%), 5.0 months; hyper/hypo (10 eyes, 16%), 7.0 months; descending tract (5 eyes, 8%), 12.0 months (p < 0.01). IRAF patterns and the mean duration of symptoms were as follows: blocked (17 eyes, 27%), 1.0 months; hyper (22 eyes, 35%), 4.0 months; mixed/hyper dominant (9 eyes, 15%), 5.0 months; mixed/hypo dominant (9 eyes, 15%), 6.8 months; descending tract (5 eyes, 8%), 12.0 months (p < 0.01). Abnormal autofluorescence corresponding to the MNV lesion was seen in 34 eyes (55%) with SWAF and 59 eyes (95%) with IRAF (p < 0.01). Conclusions: SWAF and IRAF show multiple patterns that are related to disease chronicity in symptomatic PNV. IRAF could be helpful in detecting the MNV lesion.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015855/pdf/
Citations: 0
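The SWAF results quoted in the abstract are easier to scan when laid out as a small table; a sketch using pandas, with the pattern labels, eye counts, and mean symptom durations copied from the abstract (the IRAF block would be tabulated analogously).

```python
import pandas as pd

# SWAF patterns as reported in the abstract: eyes (n, %) and mean symptom duration (months).
swaf = pd.DataFrame(
    {
        "pattern": ["blocked", "mottled", "hyper", "hyper/hypo", "descending tract"],
        "eyes_n": [15, 8, 24, 10, 5],
        "eyes_pct": [24, 13, 39, 16, 8],
        "mean_duration_months": [1.0, 2.8, 5.0, 7.0, 12.0],
    }
)

# Longer-standing disease tends toward the later patterns (p < 0.01 in the paper).
print(swaf.sort_values("mean_duration_months"))
```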
The Relationship Between Anxiety Sensitivity, Emotional States, and Dry Eye Disease Symptom Severity: A Cross-Sectional Study.
Vision (Switzerland) Pub Date: 2025-04-18 DOI: 10.3390/vision9020036
Marko Toth, Nataša Jokić-Begić, Sandro Krašić
Abstract: Dry eye disease (DED) is often comorbid with psychiatric conditions and psychological disturbances such as anxiety and depression. The psychological symptoms are mostly considered a consequence of DED or a side effect of medication, and the possible psychological etiology of DED is seldom explored. This study examines the relationship between anxiety sensitivity (AS), unpleasant emotional states, and the severity of DED symptoms in a healthy general population sample in Croatia. A total of 766 adults (62.27% female) aged between 18 and 88 years completed an online survey consisting of the Ocular Surface Disease Index (OSDI), the Anxiety Sensitivity Index (ASI), and the Depression Anxiety Stress Scales (DASS-21), together with socio-demographic data. The results revealed significant positive correlations between the ASI, emotional states, and the OSDI (r = 0.25-0.29, p < 0.01). Mediation analysis showed that DASS-21 scores significantly mediate the relationship between the ASI and the OSDI (B = 0.1, CI = [0.004, 0.2]). Highly anxiety-sensitive people are more sensitive to DED symptoms, and this sensitivity increases further under emotional stress, so DED symptoms are perceived more intensely and frequently than in less sensitive people. Understanding these associations is crucial for comprehensive DED management and points to potential benefits from addressing psychological health in DED patients and eye health in psychiatric patients.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015891/pdf/
Citations: 0
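A minimal sketch of the kind of mediation the abstract reports (distress mediating the ASI-OSDI link), using simulated scores and a bootstrap confidence interval for the indirect effect. The variable names, effect sizes, and the simple OLS formulation are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 766  # sample size reported in the abstract; the scores below are simulated

asi = rng.normal(size=n)                             # anxiety sensitivity (X)
dass = 0.3 * asi + rng.normal(size=n)                # emotional distress (mediator M)
osdi = 0.1 * asi + 0.3 * dass + rng.normal(size=n)   # dry eye symptoms (Y)


def slope(x, y):
    """OLS slope of y on x (single predictor with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]


def indirect(idx):
    a = slope(asi[idx], dass[idx])                   # a-path: X -> M
    X = np.column_stack([np.ones(len(idx)), asi[idx], dass[idx]])
    b = np.linalg.lstsq(X, osdi[idx], rcond=None)[0][2]   # b-path: M -> Y, controlling for X
    return a * b


all_idx = np.arange(n)
boot = [indirect(rng.choice(all_idx, size=n, replace=True)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect(all_idx):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A CI that excludes zero, as in the abstract's B = 0.1, CI [0.004, 0.2], is what licenses the claim that distress carries part of the ASI-OSDI association.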
Novel Therapeutic Approaches for Treatment of Diabetic Retinopathy and Age-Related Macular Degeneration.
Vision (Switzerland) Pub Date: 2025-04-17 DOI: 10.3390/vision9020035
Deokho Lee, Soo Jin Kim, Junyeop Lee
Abstract: The retina, a light-sensitive layer of tissue in the eye, requires high levels of oxygen for its physiology. Retinal ischemia occurs when the blood supply to the retina and choroid is inadequate, and it is implicated in the development or progression of many ocular diseases, such as diabetic retinopathy (DR) and age-related macular degeneration (AMD). To date, anti-vascular endothelial growth factor (VEGF) treatment has been widely used to manage neovascular diseases associated with retinal ischemia. Nonetheless, a substantial number of patients with DR or AMD show incomplete responses or adverse effects, underscoring the limitations of this therapy. Researchers have therefore been developing novel treatments to protect against or prevent vision loss in these diseases. In this review article, we summarize recent novel therapeutic approaches for the treatment of ischemic retinopathy (e.g., cell therapy, advanced molecular targeting, and drug delivery). This summary is intended to support further research toward more solid evidence for effective novel drugs in retinal ischemic diseases.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015909/pdf/
Citations: 0
Reconciling the Neurophysiological and Cognitive Theories of Stimulus-Response Spatial Compatibility Effects: A Visual-Motor Dissociation Approach.
Vision (Switzerland) Pub Date: 2025-04-17 DOI: 10.3390/vision9020034
Elton H Matsushima, Jose Antonio Aznar-Casanova
Abstract: This study investigated the differential impact of two visual dimensions (direction and spatial location) in two spatial Stroop tasks in which the dimension relevant for the response varied. Three studies compared the interactions between spatial compatibility and congruence effects on reaction time performance to infer how the dorsal pathway (DP) and ventral pathway (VP) of visual processing interfere with one another when processing relevant and irrelevant spatial information. This allowed us to bridge neurophysiological mechanisms with dual-process models of spatial compatibility. The participants responded from an avatar's perspective, manipulated through rotations relative to the forward position, along with independent rotations of the avatar's screen and keyboard. The results revealed two distinct response patterns: in the Direction Stroop, performance was influenced equally by the relevant direction (VP) and the automatic processing of the irrelevant location (DP); in the Location Stroop, the VP exerted minimal interference on the DP. Furthermore, the Only Keyboard rotation disrupted hand-eye coordination, modulating the DP interference on the VP in the Direction Stroop. These findings provide insights into the functional interaction of these visual pathways and their contributions to spatial compatibility effects, along with evidence for the dual-process model of spatial compatibility. Some issues concerning the separation of visual pathways are discussed in light of our results.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015925/pdf/
Citations: 0
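How a congruence effect and its interaction with stimulus-response compatibility are typically quantified from reaction times can be shown with a short sketch; the data frame below is simulated and the column names are illustrative, not the authors' dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
trials = 4000

df = pd.DataFrame(
    {
        "compatibility": rng.choice(["compatible", "incompatible"], size=trials),
        "congruence": rng.choice(["congruent", "incongruent"], size=trials),
        "rt_ms": rng.normal(500, 60, size=trials),
    }
)
# Build in an interaction: incongruent trials are slower, more so when incompatible.
df.loc[df.congruence == "incongruent", "rt_ms"] += 30
df.loc[(df.congruence == "incongruent") & (df.compatibility == "incompatible"), "rt_ms"] += 20

cell_means = df.groupby(["compatibility", "congruence"])["rt_ms"].mean().unstack()
congruence_effect = cell_means["incongruent"] - cell_means["congruent"]
print(cell_means.round(1))
print("congruence effect by compatibility (ms):")
print(congruence_effect.round(1))  # a difference between the two rows indicates an interaction
```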
Coherent Interpretation of Entire Visual Field Test Reports Using a Multimodal Large Language Model (ChatGPT).
Vision (Switzerland) Pub Date: 2025-04-11 DOI: 10.3390/vision9020033
Jeremy C K Tan
Abstract: This study assesses the accuracy and consistency of a commercially available large language model (LLM) in extracting and interpreting sensitivity and reliability data from entire visual field (VF) test reports for the evaluation of glaucomatous defects. Single-page anonymised VF test reports from 60 eyes of 60 subjects were analysed by an LLM (ChatGPT 4o) across four domains: test reliability, defect type, defect severity, and overall diagnosis. The main outcome measures were the accuracy of data extraction, the interpretation of glaucomatous field defects, and diagnostic classification. The LLM displayed 100% accuracy in extracting global sensitivity and reliability metrics and in classifying test reliability. It also demonstrated high accuracy (96.7%) in diagnosing whether the VF defect was consistent with a healthy, suspect, or glaucomatous eye. Accuracy in defining the type of defect was moderate (73.3%) and improved only partially when the model was given a more defined region of interest. Incorrect defect types were mostly attributed to errors of location, particularly confusion between the superior and inferior hemifields. Numerical and text-based data extraction and interpretation were overall notably superior to image-based interpretation of VF defects. This study demonstrates the potential, and also the limitations, of multimodal LLMs in processing multimodal medical investigation data such as VF reports.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015771/pdf/
Citations: 0
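A minimal sketch of the kind of multimodal query the study describes, sending a visual field report image together with a structured prompt to a vision-capable chat model. The model name, prompt wording, requested fields, and file path are assumptions for illustration; the study's actual protocol may differ.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical single-page VF report image; the path is a placeholder.
with open("vf_report.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

prompt = (
    "From this visual field report, extract the reliability indices "
    "(fixation losses, false positives, false negatives) and the global indices, then state "
    "the defect type, its severity, and whether it is consistent with a healthy, suspect, or glaucomatous eye."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```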
Cup and Disc Segmentation in Smartphone Handheld Ophthalmoscope Images with a Composite Backbone and Double Decoder Architecture.
Vision (Switzerland) Pub Date: 2025-04-11 DOI: 10.3390/vision9020032
Thiago Paiva Freire, Geraldo Braz Júnior, João Dallyson Sousa de Almeida, José Ribamar Durand Rodrigues Junior
Abstract: Glaucoma is a visual disease that affects millions of people, and early diagnosis can prevent total blindness. One way to diagnose the disease is through fundus image examination, which analyzes the optic disc and cup structures. However, screening programs in primary care are costly and unfeasible. Neural network models have been used to segment optic nerve structures, assisting physicians in this task and reducing fatigue. This work presents a methodology to enhance the segmentation of morphological biomarkers of the optic disc and cup in images obtained by a smartphone coupled to an ophthalmoscope, using a deep neural network that combines two backbones with a dual-decoder approach, as well as a new way of combining the loss weights in the training process. The models were evaluated numerically with the Dice and IoU measures. On the BrG dataset, the models reached Dice scores of 95.92% and 85.30% and IoU scores of 92.22% and 75.68% for the optic disc and cup, respectively. These findings indicate promising architectures for the fundus image segmentation task.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015843/pdf/
Citations: 0
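The Dice and IoU scores reported for the optic disc and cup are standard overlap measures between a predicted and a reference binary mask; a minimal numpy sketch with toy masks (not the paper's model or data):

```python
import numpy as np


def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)


def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)


# Toy example: a predicted disc mask shifted slightly against the ground truth.
truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 16:48] = True
pred = np.zeros((64, 64), dtype=bool)
pred[18:50, 18:50] = True

print(f"Dice = {dice(pred, truth):.4f}, IoU = {iou(pred, truth):.4f}")
```

Dice is always at least as large as IoU for the same pair of masks, which is why the paper's disc and cup Dice values sit above the corresponding IoU values.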
Artificial Intelligence vs. Human Cognition: A Comparative Analysis of ChatGPT and Candidates Sitting the European Board of Ophthalmology Diploma Examination.
Vision (Switzerland) Pub Date: 2025-04-09 DOI: 10.3390/vision9020031
Anna P Maino, Jakub Klikowski, Brendan Strong, Wahid Ghaffari, Michał Woźniak, Tristan Bourcier, Andrzej Grzybowski
Abstract: Background/Objectives: This paper assesses ChatGPT's performance in answering European Board of Ophthalmology Diploma (EBOD) examination papers and compares the results to the pass benchmarks and to candidate results. Methods: This cross-sectional study used a sample of past exam papers from the 2012, 2013, and 2020-2023 EBOD examinations. It analyzed ChatGPT's responses to 440 multiple choice questions (MCQs), each containing five true/false statements (2200 statements in total), and 48 single best answer (SBA) questions. Results: For MCQs, ChatGPT scored 64.39% on average, and its strongest metric was precision (68.76%). ChatGPT performed best at answering pathology MCQs (Grubbs test p < 0.05); optics and refraction was the lowest-scoring MCQ topic across all metrics. ChatGPT-3.5 Turbo performed worse than human candidates and ChatGPT-4o on easy questions (75% vs. 100% accuracy) but outperformed both on challenging questions (50% vs. 28% accuracy). ChatGPT's SBA performance averaged 28.43%, with the highest score and strongest performance in precision (29.36%). Pathology SBA questions were consistently the lowest-scoring topic across most metrics. ChatGPT showed a nonsignificant tendency to select option 1 more frequently (p = 0.19). When answering SBAs, human candidates scored higher than ChatGPT in all metric areas measured. Conclusions: ChatGPT performed more strongly on true/false questions, reaching a pass mark in most instances. Performance was poorer on SBA questions, suggesting that ChatGPT's ability in information retrieval is better than its knowledge integration. ChatGPT could become a valuable tool in ophthalmic education, allowing exam boards to test their exam papers to ensure they are pitched at the right level, marking open-ended questions, and providing detailed feedback.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015923/pdf/
Citations: 0
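The per-statement metrics the abstract reports for the MCQs (accuracy, precision) can be computed directly from the model's true/false answers against the key; a short sketch with made-up answers, not the exam data:

```python
# Toy answer key and model responses for ten true/false statements (1 = true, 0 = false);
# the values are made up for illustration.
key =   [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
model = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

tp = sum(1 for k, m in zip(key, model) if k == 1 and m == 1)
fp = sum(1 for k, m in zip(key, model) if k == 0 and m == 1)
correct = sum(1 for k, m in zip(key, model) if k == m)

accuracy = correct / len(key)
precision = tp / (tp + fp)  # of the statements marked "true" by the model, how many were actually true
print(f"accuracy = {accuracy:.2%}, precision = {precision:.2%}")
```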
Brain Functional Connectivity During First- and Third-Person Visual Imagery.
Vision (Switzerland) Pub Date: 2025-04-06 DOI: 10.3390/vision9020030
Ekaterina Pechenkova, Mary Rachinskaya, Varvara Vasilenko, Olesya Blazhenkova, Elena Mershina
Abstract: The ability to adopt different perspectives, or vantage points, is fundamental to human cognition, affecting reasoning, memory, and imagery. While the first-person perspective allows individuals to experience a scene through their own eyes, the third-person perspective involves an external viewpoint, which is thought to demand greater cognitive effort and different neural processing. Despite the frequent use of perspective switching across various contexts, including modern media and therapeutic settings, the neural mechanisms differentiating these two perspectives in visual imagery remain largely underexplored. In an exploratory fMRI study, we compared both activation and task-based functional connectivity underlying first-person and third-person perspective taking in the same 26 participants performing two spatial egocentric imagery tasks, namely imaginary tennis and house navigation. No significant differences in activation emerged between the first-person and third-person conditions. The network-based statistics analysis revealed a small subnetwork of early visual and posterior temporal areas with stronger functional connectivity during the first-person perspective, suggesting a closer sensory recruitment loop or, in different terms, a loop between long-term memory and the "visual buffer" circuits. The absence of a strong neural distinction between the first-person and third-person perspectives suggests that third-person imagery may not fully decenter individuals from the scene, as is often assumed.
Vision (Switzerland), 9(2). Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12015856/pdf/
Citations: 0
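Task-based functional connectivity of the kind compared here is, at its simplest, the correlation between regional time series within each condition; a minimal sketch with simulated ROI signals, not the study's fMRI data or its network-based statistics pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rois, n_timepoints = 8, 200

# Simulated ROI time series for the two imagery conditions.
first_person = rng.normal(size=(n_rois, n_timepoints))
third_person = rng.normal(size=(n_rois, n_timepoints))
# Add a shared signal to two ROIs in the first-person run only,
# mimicking an edge that is stronger in that condition.
shared = rng.normal(size=n_timepoints)
first_person[0] += shared
first_person[1] += shared

fc_first = np.corrcoef(first_person)   # ROI x ROI connectivity per condition
fc_third = np.corrcoef(third_person)
diff = fc_first - fc_third             # edges stronger during first-person imagery

i, j = np.unravel_index(np.abs(np.triu(diff, k=1)).argmax(), diff.shape)
print(f"largest connectivity difference: ROI {i} - ROI {j}, Δr = {diff[i, j]:.2f}")
```

Network-based statistics then asks whether such suprathreshold edges form a connected subnetwork larger than expected by chance, rather than testing each edge in isolation.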