Xiuwen Wu, Rongjie Hu, Jie Liang, Yanming Wang, Bensheng Qiu, Xiaoxiao Wang
{"title":"MRGazer:从个体空间的功能性磁共振成像解码眼球注视点。","authors":"Xiuwen Wu, Rongjie Hu, Jie Liang, Yanming Wang, Bensheng Qiu, Xiaoxiao Wang","doi":"10.1088/1741-2552/ad6185","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective</i>. Eye-tracking research has proven valuable in understanding numerous cognitive functions. Recently, Frey<i>et al</i>provided an exciting deep learning method for learning eye movements from functional magnetic resonance imaging (fMRI) data. It employed the multi-step co-registration of fMRI into the group template to obtain eyeball signal, and thus required additional templates and was time consuming. To resolve this issue, in this paper, we propose a framework named MRGazer for predicting eye gaze points from fMRI in individual space.<i>Approach</i>. The MRGazer consists of an eyeball extraction module and a residual network-based eye gaze prediction module. Compared to the previous method, the proposed framework skips the fMRI co-registration step, simplifies the processing protocol, and achieves end-to-end eye gaze regression.<i>Main results</i>. The proposed method achieved superior performance in eye fixation regression (Euclidean error, EE = 2.04°) than the co-registration-based method (EE = 2.89°), and delivered objective results within a shorter time (∼0.02 s volume<sup>-1</sup>) than prior method (∼0.3 s volume<sup>-1</sup>).<i>Significance</i>. The MRGazer is an efficient, simple, and accurate deep learning framework for predicting eye movement from fMRI data, and can be employed during fMRI scans in psychological and cognitive research. The code is available athttps://github.com/ustc-bmec/MRGazer.</p>","PeriodicalId":94096,"journal":{"name":"Journal of neural engineering","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MRGazer: decoding eye gaze points from functional magnetic resonance imaging in individual space.\",\"authors\":\"Xiuwen Wu, Rongjie Hu, Jie Liang, Yanming Wang, Bensheng Qiu, Xiaoxiao Wang\",\"doi\":\"10.1088/1741-2552/ad6185\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p><i>Objective</i>. Eye-tracking research has proven valuable in understanding numerous cognitive functions. Recently, Frey<i>et al</i>provided an exciting deep learning method for learning eye movements from functional magnetic resonance imaging (fMRI) data. It employed the multi-step co-registration of fMRI into the group template to obtain eyeball signal, and thus required additional templates and was time consuming. To resolve this issue, in this paper, we propose a framework named MRGazer for predicting eye gaze points from fMRI in individual space.<i>Approach</i>. The MRGazer consists of an eyeball extraction module and a residual network-based eye gaze prediction module. Compared to the previous method, the proposed framework skips the fMRI co-registration step, simplifies the processing protocol, and achieves end-to-end eye gaze regression.<i>Main results</i>. The proposed method achieved superior performance in eye fixation regression (Euclidean error, EE = 2.04°) than the co-registration-based method (EE = 2.89°), and delivered objective results within a shorter time (∼0.02 s volume<sup>-1</sup>) than prior method (∼0.3 s volume<sup>-1</sup>).<i>Significance</i>. 
The MRGazer is an efficient, simple, and accurate deep learning framework for predicting eye movement from fMRI data, and can be employed during fMRI scans in psychological and cognitive research. The code is available athttps://github.com/ustc-bmec/MRGazer.</p>\",\"PeriodicalId\":94096,\"journal\":{\"name\":\"Journal of neural engineering\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of neural engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1088/1741-2552/ad6185\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of neural engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/1741-2552/ad6185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MRGazer: decoding eye gaze points from functional magnetic resonance imaging in individual space.
Objective. Eye-tracking research has proven valuable in understanding numerous cognitive functions. Recently, Frey et al provided an exciting deep learning method for learning eye movements from functional magnetic resonance imaging (fMRI) data. It employed multi-step co-registration of the fMRI data into a group template to obtain the eyeball signal, and thus required additional templates and was time-consuming. To resolve this issue, we propose a framework named MRGazer for predicting eye gaze points from fMRI in individual space. Approach. MRGazer consists of an eyeball extraction module and a residual-network-based eye gaze prediction module. Compared to the previous method, the proposed framework skips the fMRI co-registration step, simplifies the processing protocol, and achieves end-to-end eye gaze regression. Main results. The proposed method achieved better eye fixation regression (Euclidean error, EE = 2.04°) than the co-registration-based method (EE = 2.89°), and delivered objective results within a shorter time (∼0.02 s per volume) than the prior method (∼0.3 s per volume). Significance. MRGazer is an efficient, simple, and accurate deep learning framework for predicting eye movement from fMRI data, and can be employed during fMRI scans in psychological and cognitive research. The code is available at https://github.com/ustc-bmec/MRGazer.
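The abstract does not spell out implementation details, so the following is only a minimal, hypothetical sketch of the kind of pipeline it describes: an eyeball-cropped fMRI volume fed to a residual network that regresses a 2D gaze point, evaluated with a Euclidean-error metric in degrees. The module names, input sizes, and architecture choices below are assumptions for illustration, not the released MRGazer code (see the linked repository for the actual implementation).

```python
# Hypothetical sketch (not the authors' released code): a small 3D residual
# regressor mapping an eyeball crop to (x, y) gaze coordinates, plus the
# Euclidean-error metric used to report accuracy in degrees.
import torch
import torch.nn as nn


class ResidualBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection


class GazeRegressor(nn.Module):
    """Maps an eyeball crop (1 x D x H x W, sizes assumed) to a 2D gaze point."""

    def __init__(self, in_channels=1, width=16, n_blocks=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
            nn.BatchNorm3d(width),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock3D(width) for _ in range(n_blocks)])
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.head = nn.Linear(width, 2)  # predicted gaze point (x, y) in degrees

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = self.pool(x).flatten(1)
        return self.head(x)


def euclidean_error(pred, target):
    """Mean Euclidean distance between predicted and true gaze points (degrees)."""
    return torch.linalg.norm(pred - target, dim=-1).mean()


if __name__ == "__main__":
    model = GazeRegressor()
    volume = torch.randn(4, 1, 16, 32, 32)  # batch of hypothetical eyeball crops
    target = torch.rand(4, 2) * 10.0        # hypothetical gaze targets in degrees
    pred = model(volume)
    print("EE (deg):", euclidean_error(pred, target).item())
```

Because the regressor operates directly on an eyeball crop in the subject's native space, no co-registration to a group template is needed at inference time, which is the simplification the abstract highlights.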