{"title":"Addressing long-standing controversies in conceptual knowledge representation in the temporal pole: A cross-modal paradigm.","authors":"Lora T Likova","doi":"10.2352/ISSN.2470-1173.2017.14.HVEI-155","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.14.HVEI-155","url":null,"abstract":"<p><p>Conceptual knowledge allows us to comprehend the multisensory stimulation impinging on our senses. Its representation in the anterior temporal lobe is a subject of considerable debate, with the \"enigmatic\" temporal pole (TP) being at the center of that debate. The controversial models of the organization of knowledge representation in TP range from unilateral to fully unified bilateral representational systems. To address the multitude of mutually exclusive options, we developed a novel cross-modal approach in a multifactorial brain-imaging study of the blind, manipulating the modality (verbal vs pictorial) of both the reception source (reading text/verbal vs images/pictorial) and the expression (writing text/verbal vs drawing/pictorial) of conceptual knowledge. Furthermore, we also varied the level of familiarity. This study is the first to investigate the functional organization of (amodal) conceptual knowledge in TP in the blind, as well as the first study of drawing based on conceptual knowledge remembered from sentences delivered through Braille reading. Through this paradigm, we were able to functionally identify two novel subdivisions of the temporal pole: the TPa, at the apex, and the TPdm, dorso-medially. Their response characteristics revealed a complex interplay of non-visual specializations within the temporal pole, with a diversity of excitatory/inhibitory inversions as a function of hemisphere, task domain and familiarity, which motivates an expanded neurocognitive analysis of conceptual knowledge. The interplay of inter-hemispheric specializations found here accounts for the variety of seemingly conflicting models of conceptual knowledge representation in previous research, reconciling them through the set of factors we investigated: the two main knowledge domains (verbal and pictorial/sensory-motor) and the two main knowledge-processing modes (receptive and expressive), with the level of familiarity as a modifier. Furthermore, the interplay of these factors allowed us to reveal for the first time a system of complementary symmetries, asymmetries and unexpected anti-symmetries in the TP organization. Taken together, these results constitute a unifying explanation of the conflicting models in previous research on conceptual knowledge representation.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2017 ","pages":"268-272"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2352/ISSN.2470-1173.2017.14.HVEI-155","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41222287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Cortical Network for Braille Writing in the Blind.","authors":"Lora T Likova, Christopher W Tyler, Laura Cacciamani, Kristyo Mineff, Spero Nicholas","doi":"10.2352/ISSN.2470-1173.2016.16.HVEI-095","DOIUrl":"10.2352/ISSN.2470-1173.2016.16.HVEI-095","url":null,"abstract":"<p><p>Fundamental forms of high-order cognition, such as reading and writing, are usually studied in the context of one modality: vision. People without sight, however, use kinesthetic-based Braille writing and haptic-based Braille reading. We asked whether the cognitive and motor-control mechanisms underlying writing and reading are modality-specific or supramodal. While a number of previous functional magnetic resonance imaging (fMRI) studies have investigated the brain network for Braille reading in the blind, such studies of Braille writing are lacking. Consequently, no comparative network analysis of Braille writing vs. reading exists. Here, we report the first study of Braille writing, and a comparison of the brain organization for Braille writing vs. Braille reading. fMRI was conducted in a Siemens 3T Trio scanner. Our custom MRI-compatible drawing/writing lectern was further modified to provide for Braille reading and writing. Each of five paragraphs of novel Braille text describing objects, faces and navigation sequences was read, then reproduced twice by Braille writing from memory, then read a second time. During Braille reading, the haptic sensing of the Braille letters strongly activated not only the early visual areas V1 and V2, but also some highly specialized areas, such as the classical visual grapheme area and the Exner motor grapheme area. Braille writing from memory engaged a significantly more extensive network in dorsal motor, somatosensory/kinesthetic, dorsal parietal and prefrontal cortex. However, in contrast to the largely extended V1 activation in drawing-from-memory in the blind after training (Likova, 2012), Braille writing from memory generated focal activation restricted to the most foveal part of V1, presumably reflecting topographically the focal demands of such a \"pin-pricking\" task.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2016 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5589194/pdf/nihms795213.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35498342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Positive and negative polarity contrast sensitivity measuring app.","authors":"Alex D Hwang, Eli Peli","doi":"10.2352/ISSN.2470-1173.2016.16.HVEI-122","DOIUrl":"10.2352/ISSN.2470-1173.2016.16.HVEI-122","url":null,"abstract":"<p><p>Contrast sensitivity (CS) quantifies an observer's ability to detect the smallest (threshold) luminance difference between a target and its surround. In clinical settings, printed letter contrast charts are commonly used, and the contrast of the letter stimuli is specified by the Weber contrast definition. These paper-printed charts use negative-polarity contrast (NP, dark letters on a bright background) and are not available with positive-polarity contrast (PP, bright letters on a dark background), as needed in a number of applications. We implemented a mobile CS-measuring app supporting both NP and PP contrast stimuli, with the NP stimuli mimicking the paper charts. A novel modified Weber definition was developed to specify the contrast of PP letters. The validity of the app was established by comparison with the paper chart. We found that our app generates more accurate contrast stimuli over a wider range than the paper chart (especially in the critical high-CS, low-contrast range), and found a clear difference between NP and PP CS measures (CS<sub>NP</sub>>CS<sub>PP</sub>) despite the symmetry afforded by the modified Weber contrast definition. Our app provides a convenient way to measure CS in both lighted and dark environments.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2016 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5481843/pdf/nihms868149.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35120202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing object recognition from binary and bipolar edge features.","authors":"Jae-Hyun Jung, Tian Pu, Eli Peli","doi":"10.2352/ISSN.2470-1173.2016.16.HVEI-111","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2016.16.HVEI-111","url":null,"abstract":"<p><p>Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary edge images (black edges on a white background, or white edges on a black background) have been used to represent features (edges and cusps) in scenes. However, the polarity of cusps and edges may contain important depth information (depth from shading), which is lost in the binary edge representation. This depth information may be restored, to some degree, using bipolar edges. We compared recognition rates by 26 subjects for 16 images rendered with either binary or bipolar edge features. Object recognition rates were higher with bipolar edges, and the improvement was significant in scenes with complex backgrounds.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2016 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2016-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2352/ISSN.2470-1173.2016.16.HVEI-111","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"34913547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}