{"title":"Multimodality During Fixation - Part II: Evidence for Multimodality in Spatial Precision-Related Distributions and Impact on Precision Estimates.","authors":"Lee Friedman, Timothy Hanson, Oleg V Komogortsev","doi":"10.16910/jemr.14.3.4","DOIUrl":"https://doi.org/10.16910/jemr.14.3.4","url":null,"abstract":"<p><p>This paper is a follow-on to our earlier paper (7), which focused on the multimodality of angular offsets. This paper applies the same analysis to the measurement of spatial precision. Following the literature, we refer to these measurements as estimates of device precision, but, in fact, subject characteristics clearly affect the measurements. One typical measure of the spatial precision of an eye-tracking device is the standard deviation (SD) of the position signals (horizontal and vertical) during a fixation. The SD is a highly interpretable measure of spread if the underlying error distribution is unimodal and normal. However, in the context of an underlying multimodal distribution, the SD is less interpretable. We present evidence that the majority of such distributions are multimodal (68-70% strongly multimodal). Only 21-23% of position distributions were unimodal. We present an alternative method for measuring precision that is appropriate for both unimodal and multimodal distributions. This alternative method produces precision estimates that are substantially smaller than classic measures. We present illustrations of both unimodality and multimodality with either drift or a microsaccade present during fixation. 
At present, these observations apply only to the EyeLink 1000, and the subjects evaluated herein.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8566061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39864684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Silent versus Reading Out Loud modes: An eye-tracking study.","authors":"Ioannis Smyrnakis, Vassilios Andreadakis, Andriani Rina, Nadia Boufachrentin, Ioannis M Aslanides","doi":"10.16910/jemr.14.2.1","DOIUrl":"10.16910/jemr.14.2.1","url":null,"abstract":"<p><p>The main purpose of this study is to compare the silent and loud reading ability of typical and dyslexic readers, using eye-tracking technology to monitor the reading process. The participants (156 students of normal intelligence) were first divided into three groups based on their school grade, and each subgroup was then further separated into typical readers and students diagnosed with dyslexia. The students read the same text twice, once silently and once out loud. Various eye-tracking parameters were calculated for both types of reading. In general, the performance of the typical students was better for both modes of reading - regardless of age. In the older age groups, typical readers performed better at silent reading. The dyslexic readers in all age groups performed better at reading out loud. However, this was less prominent in secondary and upper secondary dyslexics, reflecting a slow shift towards silent reading mode as they age. Our results confirm that the eye-tracking parameters of dyslexics improve with age in both silent and loud reading, and their reading preference shifts slowly towards silent reading. 
Typical readers before the 4th grade do not show a clear reading-mode preference; after that age, however, they develop a clear preference for silent reading.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8565638/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39864683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual scanpath training to emotional faces following severe traumatic brain injury: A single case design.","authors":"Suzane Vassallo, Jacinta Douglas","doi":"10.16910/jemr.14.4.6","DOIUrl":"https://doi.org/10.16910/jemr.14.4.6","url":null,"abstract":"<p><p>The visual scanpath to emotional facial expressions was recorded in BR, a 35-year-old male with chronic severe traumatic brain injury (TBI), both before and after he underwent intervention. The novel intervention paradigm combined visual scanpath training with verbal feedback and was implemented over a 3-month period using a single case design (AB) with one follow up session. At baseline BR's scanpath was restricted, characterised by gaze allocation primarily to salient facial features on the right side of the face stimulus. Following intervention his visual scanpath became more lateralised, although he continued to demonstrate an attentional bias to the right side of the face stimulus. This study is the first to demonstrate change in both the pattern and the position of the visual scanpath to emotional faces following intervention in a person with chronic severe TBI. 
In addition, these findings extend upon our previous work to suggest that modification of the visual scanpath through targeted facial feature training can support improved facial recognition performance in a person with severe TBI.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8575428/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39716446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye Movements during Dynamic Scene Viewing are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos.","authors":"Suvi K Holm, Tuomo Häikiö, Konstantin Olli, Johanna K Kaakinen","doi":"10.16910/jemr.14.2.3","DOIUrl":"https://doi.org/10.16910/jemr.14.2.3","url":null,"abstract":"<p><p>The role of individual differences during dynamic scene viewing was explored. Participants (N=38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in visual attention tasks were associated with eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes, and fixation distances from the center of the screen. The individual differences were evident both during specific events of the video and across the video as a whole. The results highlight that an unedited, fast-paced and cluttered dynamic scene can bring about individual differences in dynamic scene viewing.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8566014/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39864685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The association of eye movements and performance accuracy in a novel sight-reading task.","authors":"Lucas Lörch","doi":"10.16910/jemr.14.4.5","DOIUrl":"https://doi.org/10.16910/jemr.14.4.5","url":null,"abstract":"<p><p>The present study investigated how eye movements were associated with performance accuracy during sight-reading. Participants performed a complex span task in which sequences of single quarter note symbols that either enabled chunking or did not enable chunking were presented for subsequent serial recall. In between the presentation of each note, participants sight-read a notated melody on an electric piano at a tempo of 70 bpm. All melodies were unique but contained four types of note pairs: eighth-eighth, eighth-quarter, quarter-eighth, quarter-quarter. Analyses revealed that reading with fewer fixations was associated with more accurate note onsets. Fewer fixations might be advantageous for sight-reading as fewer saccades have to be planned and less information has to be integrated. Moreover, the quarter-quarter note pair was read with a larger number of fixations and the eighth-quarter note pair was read with a longer gaze duration. This suggests that when rhythm is processed, additional beats might trigger re-fixations and unconventional rhythmical patterns might trigger longer gazes. 
Neither recall accuracy nor chunking processes were found to explain additional variance in the eye movement data.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8573852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39604841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can longer gaze duration determine risky investment decisions? An interactive perspective.","authors":"Yiheng Wang, Yanping Liu","doi":"10.16910/jemr.14.4.3","DOIUrl":"https://doi.org/10.16910/jemr.14.4.3","url":null,"abstract":"<p><p>Can longer gaze duration determine risky investment decisions? Recent studies have tested how gaze influences people's decisions and the boundary of the gaze effect. The current experiment used adaptive gaze-contingent manipulation by adding a self-determined option to test whether longer gaze duration can determine risky investment decisions. The results showed that both the expected value of each option and the gaze duration influenced people's decisions. This result was consistent with the attentional drift diffusion model (aDDM) proposed by Krajbich et al. (2010), which suggests that gaze can influence the choice process by amplifying the value of the attended option. Therefore, gaze duration would influence the decision when people do not have a clear preference. The results also showed that the similarity between options and the computational difficulty influenced the gaze effect. 
This result was inconsistent with prior research that used similarity between options to represent difficulty, suggesting that similarity between options and computational difficulty engage different underlying mechanisms of decision difficulty.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8562223/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39588926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rhythmic subvocalization: An eye-tracking study on silent poetry reading.","authors":"Judith Beck, Lars Konieczny","doi":"10.16910/jemr.13.3.5","DOIUrl":"https://doi.org/10.16910/jemr.13.3.5","url":null,"abstract":"<p><p>The present study investigates effects of conventionally metered and rhymed poetry on eye movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout, verse endings could occur mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic \"audible gestalt\" and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. 
This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557949/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Metacognitive monitoring and metacognitive strategies of gifted and average children on dealing with deductive reasoning task.","authors":"Ondřeji Straka, Šárka Portešová, Daniela Halámková, Michal Jabůrek","doi":"10.16910/jemr.14.4.1","DOIUrl":"https://doi.org/10.16910/jemr.14.4.1","url":null,"abstract":"<p><p>In this paper, we inquire into possible differences between children with exceptionally high intellectual abilities and their average peers as regards metacognitive monitoring and related metacognitive strategies. The question whether gifted children surpass their typically developing peers not only in the intellectual abilities, but also in their level of metacognitive skills, has not been convincingly answered so far. We sought to examine the indicators of metacognitive behavior by means of eye-tracking technology and to compare these findings with the participants' subjective confidence ratings. Eye-movement data of gifted and average students attending final grades of primary school (4th and 5th grades) were recorded while they dealt with a deductive reasoning task, and four metrics supposed to bear on metacognitive skills, namely the overall trial duration, mean fixation duration, number of regressions and normalized gaze transition entropy, were analyzed. No significant differences between gifted and average children were found in the normalized gaze transition entropy, in mean fixation duration, nor - after controlling for the trial duration - in number of regressions. Both groups of children differed in the time devoted to solving the task. Both groups significantly differed in the association between time devoted to the task and the participants' subjective confidence rating, where only the gifted children tended to devote more time when they felt less confident. 
Several implications of these findings are discussed.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8559419/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reading Eye Movements Performance on iPad vs Print Using a Visagraph.","authors":"Alicia Feis, Amanda Lallensack, Elizabeth Pallante, Melanie Nielsen, Nicole Demarco, Balamurali Vasudevan","doi":"10.16910/jemr.14.2.6","DOIUrl":"https://doi.org/10.16910/jemr.14.2.6","url":null,"abstract":"This study investigated reading comprehension, reading speed, and the quality of eye movements while reading on an iPad, as compared to printed text. Thirty-one visually normal subjects were enrolled. Two passages of the Visagraph standardized text were read, one on an iPad and one in print. Eye movement characteristics and comprehension were evaluated. Mean (SD) fixation duration was significantly longer with the iPad at 270 ms (40) than with the printed text at 260 ms (40) (p=0.04). Subjects’ mean reading rate was significantly lower on the iPad at 294 words per minute (wpm) than with the printed text at 318 wpm (p=0.03). The mean (SD) overall reading duration was significantly (p=0.02) longer on the iPad, at 31 s (9.3), than with the printed text, at 28 s (8.0). Overall reading performance is lower with an iPad than with printed text in normal individuals. These findings might be more consequential in children and slower adult readers when they read using iPads.","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8557948/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39585513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I2DNet - Design and Real-Time Evaluation of Appearance-based gaze estimation system.","authors":"L R D Murthy, Siddhi Brahmbhatt, Somnath Arjun, Pradipta Biswas","doi":"10.16910/jemr.14.4.2","DOIUrl":"https://doi.org/10.16910/jemr.14.4.2","url":null,"abstract":"<p><p>The gaze estimation problem can be addressed using either model-based or appearance-based approaches. Model-based approaches rely on features extracted from eye images to fit a 3D eyeball model and obtain a gaze point estimate, while appearance-based methods attempt to map captured eye images directly to a gaze point without any handcrafted features. Recently, the availability of large datasets and novel deep learning techniques has enabled appearance-based methods to achieve higher accuracy than model-based approaches. However, many appearance-based gaze estimation systems perform well in within-dataset validation but fail to provide the same degree of accuracy in cross-dataset evaluation. Hence, it is still unclear how well the current state-of-the-art approaches perform in real-time in an interactive setting on unseen users. This paper proposes I2DNet, a novel architecture aimed at improving subject-independent gaze estimation accuracy, which achieved state-of-the-art mean angle errors of 4.3 and 8.4 degrees on the MPIIGaze and RT-Gene datasets, respectively. We have evaluated the proposed system as a gaze-controlled interface in real-time for a 9-block pointing and selection task and compared it with Webgazer.js and OpenFace 2.0. 
We conducted a user study with 16 participants; our proposed system reduced selection time and the number of missed selections to a statistically significant degree compared with the other two systems.</p>","PeriodicalId":15813,"journal":{"name":"Journal of Eye Movement Research","volume":null,"pages":null},"PeriodicalIF":2.1,"publicationDate":"2021-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8561667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39588925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}