{"title":"Mobile Devices and Multimedia: Enabling Technologies, Algorithms, and Applications 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.3.mobmu-a03","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-a03","url":null,"abstract":"Abstract The goal of this conference is to provide an international forum for presenting recent research results on multimedia for mobile devices, and to bring together experts from both academia and industry for a fruitful exchange of ideas and discussion on future challenges. The authors are encouraged to submit work-in-progress papers as well as updates on previously reported systems. Outstanding papers may be recommended for the publication in the Journal Electronic Imaging or the Journal of Imaging Science and Technology.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spatial cognition training rapidly induces cortical plasticity in blind navigation: Transfer of training effect & Granger causal connectivity analysis.","authors":"Lora T Likova, Zhangziyi Zhou, Michael Liang, Christopher W Tyler","doi":"10.2352/EI.2023.35.10.HVEI-256","DOIUrl":"https://doi.org/10.2352/EI.2023.35.10.HVEI-256","url":null,"abstract":"<p><p>How is the cortical navigation network reorganized by the Likova Cognitive-Kinesthetic Navigation Training? We measured Granger-causal connectivity of the frontal-hippocampal-insular-retrosplenial-V1 network of cortical areas before and after this one-week training in the blind. Primarily top-down influences were seen during two tasks of drawing-from-memory (drawing complex maps and drawing the shortest path between designated map locations), with the dominant role being congruent influences from the egocentric insular to the allocentric spatial retrosplenial cortex and the amodal-spatial sketchpad of V1, with concomitant influences of the frontal cortex on these areas. After training, and during planning-from-memory of the best on-demand path, the hippocampus played a much stronger role, with the V1 sketchpad feeding information forward to the retrosplenial region. The inverse causal influences among these regions generally followed a recursive feedback model of the opposite pattern to a subset of congruent influences. Thus, this navigational network reorganized its pattern of causal influences with task demands and the navigation training, which produced marked enhancement of the navigational skills.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"35 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10228514/pdf/nihms-1898995.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9553349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multipurpose Spatiomotor Capture System for Haptic and Visual Training and Testing in the Blind and Sighted","authors":"Lora T. Likova, Kristyo Mineff, C. Tyler","doi":"10.2352/issn.2470-1173.2021.11.hvei-160","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2021.11.hvei-160","url":null,"abstract":"We describe the development of a multipurpose haptic stimulus delivery and spatiomotor recording system with tactile map-overlays for electronic processing This innovative multipurpose spatiomotor capture system will serve a wide range of functions in the training and behavioral assessment of spatial memory and precise motor control for blindness rehabilitation, both for STEM learning and for navigation training and map reading. Capacitive coupling through the map-overlays to the touch-tablet screen below them allows precise recording i) of hand movements during haptic exploration of tactile raised-line images on one tablet and ii) of line-drawing trajectories on the other, for analysis of navigational errors, speed, time elapsed, etc. Thus, this system will provide for the first time in an integrated and automated manner quantitative assessments of the whole 'perception-cognition-action' loop - from non-visual exploration strategies, spatial memory, precise spatiomotor control and coordination, drawing performance, and navigation capabilities, as well as of haptic and movement planning and control. The accuracy of memory encoding, in particular, can be assessed by the memory-drawing operation of the capture system. Importantly, this system allows for both remote and in-person operation. Although the focus is on visually impaired populations, the system is designed to equally serve training and assessments in the normally sighted as well.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88929982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Controllable Medical Image Generation via Generative Adversarial Networks.","authors":"Zhihang Ren, Stella X Yu, David Whitney","doi":"10.2352/issn.2470-1173.2021.11.hvei-112","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2021.11.hvei-112","url":null,"abstract":"<p><p>Radiologists and pathologists frequently make highly consequential perceptual decisions. For example, visually searching for a tumor and recognizing whether it is malignant can have a life-changing impact on a patient. Unfortunately, all human perceivers-even radiologists-have perceptual biases. Because human perceivers (medical doctors) will, for the foreseeable future, be the final judges of whether a tumor is malignant, understanding and mitigating human perceptual biases is important. While there has been research on perceptual biases in medical image perception tasks, the stimuli used for these studies were highly artificial and often critiqued. Realistic stimuli have not been used because it has not been possible to generate or control them for psychophysical experiments. Here, we propose to use Generative Adversarial Networks (GAN) to create vivid and realistic medical image stimuli that can be used in psychophysical and computer vision studies of medical image perception. Our model can generate tumor-like stimuli with specified shapes and realistic textures in a controlled manner. Various experiments showed the authenticity of our GAN-generated stimuli and the controllability of our model.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"33 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9897627/pdf/nihms-1673431.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9229753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Approach for Dynamic Sparse Sampling for High-Throughput Mass Spectrometry Imaging.","authors":"David Helminiak, Hang Hu, Julia Laskin, Dong Hye Ye","doi":"10.2352/issn.2470-1173.2021.15.coimg-290","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2021.15.coimg-290","url":null,"abstract":"<p><p>A Supervised Learning Approach for Dynamic Sampling (SLADS) addresses traditional issues with the incorporation of stochastic processes into a compressed sensing method. Statistical features, extracted from a sample reconstruction, estimate entropy reduction with regression models, in order to dynamically determine optimal sampling locations. This work introduces an enhanced SLADS method, in the form of a Deep Learning Approach for Dynamic Sampling (DLADS), showing reductions in sample acquisition times for high-fidelity reconstructions between ~ 70-80% over traditional rectilinear scanning. These improvements are demonstrated for dimensionally asymmetric, high-resolution molecular images of mouse uterine and kidney tissues, as obtained using Nanospray Desorption ElectroSpray Ionization (nano-DESI) Mass Spectrometry Imaging (MSI). The methodology for training set creation is adjusted to mitigate stretching artifacts generated when using prior SLADS approaches. Transitioning to DLADS removes the need for feature extraction, further advanced with the employment of convolutional layers to leverage inter-pixel spatial relationships. Additionally, DLADS demonstrates effective generalization, despite dissimilar training and testing data. Overall, DLADS is shown to maximize potential experimental throughput for nano-DESI MSI.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2021 Computational Imaging XIX","pages":"2901-2907"},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8553253/pdf/nihms-1699290.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39580835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differences in the major fiber-tracts of people with congenital and acquired blindness.","authors":"Katherine E M Tregillus, Lora T Likova","doi":"10.2352/issn.2470-1173.2020.11.hvei-366","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2020.11.hvei-366","url":null,"abstract":"<p><p>In order to better understand how our visual system processes information, we must understand the underlying brain connectivity architecture, and how it can get reorganized under visual deprivation. The full extent to which visual development and visual loss affect connectivity is not well known. To investigate the effect of the onset of blindness on structural connectivity both at the whole-brain voxel-wise level and at the level of all major white-matter tracts, we applied two complementary Diffusion-Tension Imaging (DTI) methods, TBSS and AFQ. Diffusion-weighted brain images were collected from three groups of participants: congenitally blind (CB), acquired blind (AB), and fully sighted controls. The differences between these groups were evaluated on a voxel-wise scale with Tract-Based Spatial Statistics (TBSS) method, and on larger-scale with Automated Fiber Quantification (AFQ), a method that allows for between-group comparisons at the level of the major fiber tracts. TBSS revealed that both blind groups tended to have higher FA than sighted controls in the central structures of the brain. AFQ revealed that, where the three groups differed, congenitally blind participants tended to be more similar to sighted controls than to those participants who had acquired blindness later in life. These differences were specifically manifested in the left uncinated fasciculus, the right corticospinal fasciculus, and the left superior longitudinal fasciculus, areas broadly associated with a range of higher-level cognitive systems.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2020 ","pages":"3661-3667"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8445597/pdf/nihms-1616194.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39453010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning face perception without vision: Rebound learning effect and hemispheric differences in congenital vs late-onset blindness","authors":"Lora T. Likova, Ming Mei, Kristyo Mineff, S. Nicholas","doi":"10.2352/issn.2470-1173.2019.12.hvei-237","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2019.12.hvei-237","url":null,"abstract":"To address the longstanding questions of whether the blind-from-birth have an innate face-schema, what plasticity mechanisms underlie non-visual face learning, and whether there are interhemispheric face processing differences in face processing in the blind, we used a unique non-visual drawing-based training in congenitally blind (CB), late-blind (LB) and blindfolded-sighted (BF) groups of adults. This Cognitive-Kinesthetic Drawing approach previously developed by Likova (e.g., 2010, 2012, 2013) enabled us to rapidly train and study training-driven neuroplasticity in both the blind and sighted groups. The five-day two-hour training taught participants to haptically explore, recognize, memorize raised-line images, and draw them free-hand from memory, in detail, including the fine facial characteristics of the face stimuli. Such drawings represent an externalization of the formed memory. Functional MRI was run before and after the training. Tactile-face perception activated the occipito-temporal cortex in all groups. However, the training led to a strong, predominantly left-hemispheric reorganization in the two blind groups, in contrast to right-hemispheric in blindfolded-sighted, i.e., the post-training response-change was stronger in the left hemisphere in the blind, but in the right in the blindfolded. This is the first study to discover interhemispheric differences in non-visual face processing. Remarkably, for face perception this learning-based change was positive in the CB and BF groups, but negative in the LB-group. Both the lateralization and inversed-sign learning effects were specific to face perception, but absent for the control nonface categories of small objects and houses. The unexpected inversed-sign training effect in CB vs LB suggests different stages of brain plasticity in the ventral pathway specific to the face category. Importantly, the fact that only after a very few days of our training, the totally-blind-from-birth CB manifested a very good (haptic) face perception, and even developed strong empathy to the explored faces, implies a preexisting face schema that can be \"unmasked\" and \"tuned up\" by a proper learning procedure. The Likova Cognitive-Kinesthetic Training is a powerful tool for driving brain plasticity, and providing deeper insights into non-visual learning, including emergence of perceptual categories. A rebound learning model and a neuro-Bayesian economy principle are proposed to explain the multidimensional learning effects. 
The results provide new insights into the Nature-vs-Nurture interplay in rapid brain plasticity and neurorehabilitation.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"101 1","pages":"2371-23713"},"PeriodicalIF":0.0,"publicationDate":"2019-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85036531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haptic aesthetics in the blind: A behavioral and fMRI investigation","authors":"A. R. Karim, Lora T. Likova","doi":"10.2352/ISSN.2470-1173.2018.14.HVEI-532","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2018.14.HVEI-532","url":null,"abstract":"Understanding perception and aesthetic appeal of arts and environmental objects, what is appreciated, liked, or preferred, and why, is of prime importance for improving the functional capacity of the blind and visually impaired and the ergonomic design for their environment, which however so far, has been examined only in sighted individuals. This paper provides a general overview of the first experimental study of tactile aesthetics as a function of visual experience and level of visual deprivation, using both behavioral and brain imaging techniques. We investigated how blind people perceive 3D tactile objects, how they characterize them, and whether the tactile perception, and tactile shape preference (liking or disliking) and tactile aesthetic appreciation (judging tactile qualities of an object, such as pleasantness, comfortableness etc.) of 3D tactile objects can be affected by the level of visual experience. The study employed innovative behavioral measures, such as new forms of aesthetic preference-appreciation and perceptual discrimination questionnaires, in combination with advanced functional Magnetic Resonance Imaging (fMRI) techniques, and compared congenitally blind, late-onset blind and blindfolded (sighted) participants. Behavioral results demonstrated that both blind and blindfolded-sighted participants assessed curved or rounded 3D tactile objects as significantly more pleasing than sharp 3D tactile objects, and symmetric 3D tactile objects as significantly more pleasing than asymmetric 3D tactile objects. However, as compared to the sighted, blind people showed better skills in tactile discrimination as demonstrated by accuracy and speed of discrimination. Functional MRI results demonstrated that there was a large overlap and characteristic differences in the aesthetic appreciation brain networks in the blind and the sighted. As demonstrated both populations commonly recruited the somatosensory and motor areas of the brain, but with stronger activations in the blind as compared to the sighted. Secondly, sighted people recruited more frontal regions whereas blind people, in particular, the congenitally blind, paradoxically recruited more 'visual' areas of the brain. These differences were more pronounced between the sighted and the congenitally blind rather than between the sighted and the late-onset blind, indicating the key influence of the onset time of visual deprivation. 
Understanding of the underlying brain mechanisms should have a wide range of important implications for a generalized cross-sensory theory and practice in the rapidly evolving field of neuroaesthetics, as well as for 'cutting-edge' rehabilitation technologies for the blind and the visually impaired.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73538726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Integration of Health Tracking Sensor Applications and eLearning Environments for Cloud-Based Health Promotion Campaigns.","authors":"D Inupakutika, G Natarajan, S Kaghyan, D Akopian, M Evans, Y Zenong, D Parra-Medina","doi":"10.2352/issn.2470-1173.2018.06.mobmu-114","DOIUrl":"https://doi.org/10.2352/issn.2470-1173.2018.06.mobmu-114","url":null,"abstract":"<p><p>Rapidly evolving technologies like data analysis, smartphone and web-based applications, and the Internet of things have been increasingly used for healthy living, fitness and well-being. These technologies are being utilized by various research studies to reduce obesity. This paper demonstrates design and development of a dataflow protocol that integrates several applications. After registration of a user, activity, nutrition and other lifestyle data from participants are retrieved in a centralized cloud dedicated for health promotion. In addition, users are provided accounts in an e-Learning environment from which learning outcomes can be retrieved. Using the proposed system, health promotion campaigners have the ability to provide feedback to the participants using a dedicated messaging system. Participants authorize the system to use their activity data for the program participation. The implemented system and servicing protocol minimize personnel overhead of large-scale health promotion campaigns and are scalable to assist automated interventions, from automated data retrieval to automated messaging feedback. This paper describes end-to -end workflow of the proposed system. The case study tests are carried with Fitbit Flex2 activity trackers, Withings Scale, Verizon Android-based tablets, Moodle learning management system, and Articulate RISE for learning content development.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2018 ","pages":"1141-1148"},"PeriodicalIF":0.0,"publicationDate":"2018-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2352/issn.2470-1173.2018.06.mobmu-114","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39102931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synchrotron X-Ray Diffraction Dynamic Sampling for Protein Crystal Centering.","authors":"Nicole M Scarborough, G M Dilshan P Godaliyadda, Dong Hye Ye, David J Kissick, Shijie Zhang, Justin A Newman, Michael J Sheedlo, Azhad Chowdhury, Robert F Fischetti, Chittaranjan Das, Gregery T Buzzard, Charles A Bouman, Garth J Simpson","doi":"10.2352/ISSN.2470-1173.2017.17.COIMG-415","DOIUrl":"https://doi.org/10.2352/ISSN.2470-1173.2017.17.COIMG-415","url":null,"abstract":"<p><p>A supervised learning approach for dynamic sampling (SLADS) was developed to reduce X-ray exposure prior to data collection in protein structure determination. Implementation of this algorithm allowed reduction of the X-ray dose to the central core of the crystal by up to 20-fold compared to current raster scanning approaches. This dose reduction corresponds directly to a reduction on X-ray damage to the protein crystals prior to data collection for structure determination. Implementation at a beamline at Argonne National Laboratory suggests promise for the use of the SLADS approach to aid in the analysis of X-ray labile crystals. The potential benefits match a growing need for improvements in automated approaches for microcrystal positioning.</p>","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2017 ","pages":"6-9"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2352/ISSN.2470-1173.2017.17.COIMG-415","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"35903378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}