International Conference on Information Photonics: Latest Publications

Learning Parseval Frames for Sparse Representation with Frame Perspective
International Conference on Information Photonics Pub Date : 2018-10-01 DOI: 10.1109/ICIP.2018.8451713
Ping-Tzan Huang, W. Hwang, T. Jong
{"title":"Learning Parseval Frames for Sparse Representation with Frame Perspective","authors":"Ping-Tzan Huang, W. Hwang, T. Jong","doi":"10.1109/ICIP.2018.8451713","DOIUrl":"https://doi.org/10.1109/ICIP.2018.8451713","url":null,"abstract":"Frames are the foundation of the linear operators used in the decomposition and reconstruction of signals, such as the discrete Fourier transform, Gabor, wavelets, and curvelet transforms. The emergence of sparse representation models has shifted of the emphasis of signal representation in frame theory toward sparse $l_{1}$ -minimization problems. In this paper, we apply frame theory to the sparse representation of signals in which a synthesis dictionary is used for a frame and an analysis dictionary is used for a dual frame. We developed a novel dictionary learning algorithm (called Parseval K-SVD) to learn a tight-frame dictionary. We then leveraged the analysis and synthesis perspectives of signal representation with frames to derive optimization formulations for problems pertaining to image recovery. Our preliminary results demonstrate that the images recovered using this approach are correlated to the frame bounds of dictionaries, thereby demonstrating the importance of using different dictionaries for different applications.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131635215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
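As a quick illustration of the tight (Parseval) frame property that this abstract builds on, the following minimal numpy sketch constructs a hypothetical Parseval frame by orthonormalizing a random matrix via SVD (this construction is an illustrative assumption, not the authors' Parseval K-SVD algorithm) and verifies that analysis followed by synthesis reproduces the signal exactly.

import numpy as np

rng = np.random.default_rng(0)

# Build a hypothetical tight (Parseval) frame D of shape (n, m), m > n:
# orthonormalize the rows of a random matrix so that D @ D.T = I.
n, m = 16, 64
A = rng.standard_normal((n, m))
U, _, Vt = np.linalg.svd(A, full_matrices=False)
D = U @ Vt                      # rows are orthonormal -> tight frame with bound 1

# Parseval property: analysis followed by synthesis is the identity.
x = rng.standard_normal(n)
coeffs = D.T @ x                # analysis (the canonical dual of a Parseval frame is D itself)
x_rec = D @ coeffs              # synthesis
print(np.allclose(D @ D.T, np.eye(n)))   # True: the frame operator is the identity
print(np.allclose(x, x_rec))             # True: perfect reconstruction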
VR+HDR: A system for view-dependent rendering of HDR video in virtual reality
International Conference on Information Photonics Pub Date : 2017-09-01 DOI: 10.1109/ICIP.2017.8296438
Hossein Najaf-Zadeh, M. Budagavi, Esmaeil Faramarzi
{"title":"VR+HDR: A system for view-dependent rendering of HDR video in virtual reality","authors":"Hossein Najaf-Zadeh, M. Budagavi, Esmaeil Faramarzi","doi":"10.1109/ICIP.2017.8296438","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296438","url":null,"abstract":"This paper introduces a view-dependent method for rendering high-dynamic-range (HDR) video in virtual reality (VR) on VR displays such as head mounted displays (HMD), mobile phones, TVs, and computer monitors. The user's view direction is taken into account to design a tone-mapping operator which appropriately displays the HDR content on the display device. The proposed method can be utilized if HDR capturing and playback (i.e. HDR camera and 10-bit video codec) are available. However, it can also be used on the 8-bit pipeline (i.e. 8-bit camera and 8-bit video codec). A VR+HDR prototype was implemented using Samsung Gear360 camera and SamsungVR Android app on Samsung Galaxy Note 5 smartphone. The subjective comparison between the proposed method and rendering of 360-degree VR video through global tone mapping was performed using this prototype and showed that for all test sequences, the proposed technique significantly improves the video quality in VR rendering.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131124913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
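The core idea the abstract describes, adapting the tone-mapping operator to the region the user is currently viewing, can be sketched as follows. The Reinhard-style compression curve, the window-based adaptation region, and all parameter names are illustrative simplifications, not the authors' operator.

import numpy as np

def view_dependent_tonemap(hdr, center, window=128, key=0.18):
    """Tone-map an HDR luminance image, adapting exposure to the viewed region.

    hdr    : 2-D array of linear HDR luminance values.
    center : (row, col) pixel that the viewer's gaze / view direction maps to.
    window : half-size of the region used to estimate adaptation luminance.
    key    : target mid-grey after exposure scaling (Reinhard-style).
    """
    r, c = center
    r0, r1 = max(0, r - window), min(hdr.shape[0], r + window)
    c0, c1 = max(0, c - window), min(hdr.shape[1], c + window)
    region = hdr[r0:r1, c0:c1]

    # Log-average luminance of the viewed region drives the exposure.
    eps = 1e-6
    l_avg = np.exp(np.mean(np.log(region + eps)))
    scaled = key / l_avg * hdr

    # Simple global compression curve applied after view-dependent exposure.
    return scaled / (1.0 + scaled)

# Example: a synthetic HDR frame with a very bright right-hand region.
frame = np.full((512, 1024), 0.5)
frame[:, 700:] = 50.0
dark_view = view_dependent_tonemap(frame, center=(256, 200))
bright_view = view_dependent_tonemap(frame, center=(256, 860))
print(dark_view.max(), bright_view.max())   # exposure differs with view direction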
Active learning for hyperspectral image classification with a stacked autoencoders based neural network
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532520
Jiming Li
{"title":"Active learning for hyperspectral image classification with a stacked autoencoders based neural network","authors":"Jiming Li","doi":"10.1109/ICIP.2016.7532520","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532520","url":null,"abstract":"","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130945064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Kernel-free video deblurring via synthesis
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532846
Feitong Tan, Shuaicheng Liu, Liaoyuan Zeng, B. Zeng
{"title":"Kernel-free video deblurring via synthesis","authors":"Feitong Tan, Shuaicheng Liu, Liaoyuan Zeng, B. Zeng","doi":"10.1109/ICIP.2016.7532846","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532846","url":null,"abstract":"Shaky cameras often capture videos with motion blurs, especially when the light is insufficient (e.g., dimly-lit indoor environment or outdoor in a cloudy day). In this paper, we present a framework that can restore blurry frames effectively by synthesizing the details from sharp frames. The uniqueness of our approach is that we do not require blur kernels, which are needed previously either for deconvolution or convolving with sharp frames before patch matching. We develop this kernel-free method mainly because accurate kernel estimation is challenging due to noises, depth variations, and dynamic objects. Our method compares a blur patch directly against sharp candidates, in which the nearest neighbor matches can be recovered with sufficient accuracy for the deblurring. Moreover, to restore one blurry frame, instead of searching over a number of nearby sharp frames, we only search from a synthesized sharp frame that is merged by different regions from different sharp frames via an MRF-based region selection. Our experiments show that this method achieves a competitive quality in comparison with the state-of-the-art approaches with an improved efficiency and robustness.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115730612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
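A minimal sketch of the kernel-free matching step described in the abstract: each blurry patch is compared directly against candidate patches from a sharp frame, with no blur-kernel estimation or convolution. The brute-force search, patch size, and toy frames are illustrative assumptions; the paper's MRF-based sharp-frame synthesis is not reproduced here.

import numpy as np

def extract_patches(img, size, stride):
    """Collect (patch, top-left) pairs from a grayscale image."""
    patches = []
    for r in range(0, img.shape[0] - size + 1, stride):
        for c in range(0, img.shape[1] - size + 1, stride):
            patches.append((img[r:r + size, c:c + size], (r, c)))
    return patches

def kernel_free_restore(blurry, sharp, size=8):
    """Replace each blurry patch with its nearest sharp patch (L2 distance),
    without estimating any blur kernel."""
    restored = blurry.copy()
    sharp_patches = extract_patches(sharp, size, stride=4)
    bank = np.stack([p for p, _ in sharp_patches])        # (N, size, size)
    for patch, (r, c) in extract_patches(blurry, size, stride=size):
        d = np.sum((bank - patch) ** 2, axis=(1, 2))       # direct patch comparison
        restored[r:r + size, c:c + size] = bank[np.argmin(d)]
    return restored

# Toy usage: the "blurry" frame is just a noisy stand-in for real motion blur.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
blurry = sharp + 0.05 * rng.standard_normal((64, 64))
print(np.abs(kernel_free_restore(blurry, sharp) - sharp).mean())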
Points classification for non-rigid structure from motion
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532826
Junjie Hu, T. Aoki
{"title":"Points classification for non-rigid structure from motion","authors":"Junjie Hu, T. Aoki","doi":"10.1109/ICIP.2016.7532826","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532826","url":null,"abstract":"","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129652382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Efficient misalignment-robust face recognition via locality-constrained representation
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532914
Yandong Wen, Weiyang Liu, Meng Yang, Ming Li
{"title":"Efficient misalignment-robust face recognition via locality-constrained representation","authors":"Yandong Wen, Weiyang Liu, Meng Yang, Ming Li","doi":"10.1109/ICIP.2016.7532914","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532914","url":null,"abstract":"Current prevailing approaches for misaligned face recognition achieve satisfactory accuracy. However, the efficiency and scalability have not yet been well addressed, which limits their applications in practical systems. To address this problem, we propose a highly efficient algorithm for misaligned face recognition, namely misalignment-robust locality-constrained representation (MRLR). Specifically, MRLR aligns the query face by appropriately harnessing the locality constraint in representation. Since MRLR avoids the exhaustive subject-by-subject search in datasets and complex operation on large matrix, the efficiency is significantly boosted. Moreover, we take the advantage of the block structure in dictionary to accelerate the derived analytical solution, making the algorithm more scalable to the large-scale datasets. Experimental results on public datasets show that MRLR substantially improves the efficiency and scalability with even better accuracy.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114551123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
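A minimal sketch in the spirit of the locality constraint the abstract describes: the query is coded only over its k nearest dictionary atoms, which yields a small closed-form least-squares solve instead of an exhaustive subject-by-subject search. This is generic locality-constrained coding under illustrative assumptions (the choice of k, the regularizer, the random dictionary), not the paper's full MRLR alignment procedure.

import numpy as np

def locality_constrained_code(y, D, k=10, lam=1e-4):
    """Code query y over dictionary D (atoms in columns) using only its
    k nearest atoms; the restricted problem has a small closed-form solution."""
    # Locality constraint: keep only the k atoms closest to the query.
    dist = np.linalg.norm(D - y[:, None], axis=0)
    idx = np.argsort(dist)[:k]
    B = D[:, idx]                                   # local basis, shape (d, k)

    # Small regularized least squares on the selected atoms.
    G = B.T @ B + lam * np.eye(k)
    w = np.linalg.solve(G, B.T @ y)

    code = np.zeros(D.shape[1])
    code[idx] = w
    return code

# Toy usage: dictionary of 200 unit-norm atoms in 120 dimensions.
rng = np.random.default_rng(2)
D = rng.standard_normal((120, 200))
D /= np.linalg.norm(D, axis=0)
y = D[:, 7] + 0.01 * rng.standard_normal(120)       # query close to atom 7
code = locality_constrained_code(y, D)
print(np.argmax(np.abs(code)))                       # dominant coefficient: atom 7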
Learning weighted hashing on local structured data
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532349
Yun-qiang Li, Yufei Zha, Huanyu Li, Shengjie Zhang, Ku Tao, Yang Yuan
{"title":"Learning weighted hashing on local structured data","authors":"Yun-qiang Li, Yufei Zha, Huanyu Li, Shengjie Zhang, Ku Tao, Yang Yuan","doi":"10.1109/ICIP.2016.7532349","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532349","url":null,"abstract":"","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125520788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A novel framework of frame rate up conversion integrated within HEVC coding
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7533159
Guo Lu, Xiaoyun Zhang, Zhiyong Gao
{"title":"A novel framework of frame rate up conversion integrated within HEVC coding","authors":"Guo Lu, Xiaoyun Zhang, Zhiyong Gao","doi":"10.1109/ICIP.2016.7533159","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533159","url":null,"abstract":"Frame rate up conversion (FRUC) is a key technology for providing high frame rate video with better visual quality, but at the cost of huge computational complexity and bandwidth consumption. In this paper, a standard compatible framework of FRUC integrated within high efficiency video coding (HEVC) is innovatively proposed. First, spatial and temporal motion candidates of skip mode are collected and employed for the motion estimation of FRUC, where FRUC selects the best motion vectors (MVs) according to the texture-aware criterion based on coding block size. Then, the interpolated frame is handed over to HEVC coding and mostly encoded as skip mode, which consumes very limited bits. Experimental results show that the proposed method has a great advantage both in computational complexity and bandwidth consumption compared with the traditional method of FRUC cascaded with coding.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121484537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
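A minimal sketch of the candidate-reuse idea in the abstract: rather than a full motion search, the interpolator evaluates a short list of motion-vector candidates (standing in for the skip-mode spatial/temporal candidates) with a matching cost and bidirectionally motion-compensates the in-between block with the winner. The block size, SAD cost, and candidate list are illustrative assumptions, not the paper's texture-aware criterion.

import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def interpolate_block(prev, nxt, r, c, bs, candidates):
    """Pick the best MV from a candidate list and motion-compensate the
    halfway block between two frames (one block of a simple FRUC pass)."""
    best_mv, best_cost = (0, 0), None
    for dr, dc in candidates:
        pr, pc = r - dr, c - dc          # block position in the previous frame
        nr, nc = r + dr, c + dc          # mirrored position in the next frame
        if not (0 <= pr <= prev.shape[0] - bs and 0 <= pc <= prev.shape[1] - bs
                and 0 <= nr <= nxt.shape[0] - bs and 0 <= nc <= nxt.shape[1] - bs):
            continue
        cost = sad(prev[pr:pr + bs, pc:pc + bs], nxt[nr:nr + bs, nc:nc + bs])
        if best_cost is None or cost < best_cost:
            best_cost, best_mv = cost, (dr, dc)
    dr, dc = best_mv
    return (prev[r - dr:r - dr + bs, c - dc:c - dc + bs].astype(np.uint16) +
            nxt[r + dr:r + dr + bs, c + dc:c + dc + bs].astype(np.uint16)) // 2

# Toy usage: a bright square moving to the right between two frames.
prev = np.zeros((64, 64), dtype=np.uint8); prev[24:40, 20:36] = 200
nxt = np.zeros((64, 64), dtype=np.uint8);  nxt[24:40, 28:44] = 200
candidates = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0)]   # stand-in for skip-mode MVs
block = interpolate_block(prev, nxt, r=24, c=24, bs=16, candidates=candidates)
print(block.max())   # the interpolated block carries the moving object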
Word recognition with deep conditional random fields
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532694
Gang Chen, Yawei Li, S. Srihari
{"title":"Word recognition with deep conditional random fields","authors":"Gang Chen, Yawei Li, S. Srihari","doi":"10.1109/ICIP.2016.7532694","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532694","url":null,"abstract":"Recognition of handwritten words continues to be an important problem in document analysis and recognition. Existing approaches extract hand-engineered features from word images--which can perform poorly with new data sets. Recently, deep learning has attracted great attention because of the ability to learn features from raw data. Moreover they have yielded state-of-the-art results in classification tasks including character recognition and scene recognition. On the other hand, word recognition is a sequential problem where we need to model the correlation between characters. In this paper, we propose using deep Conditional Random Fields (deep CRFs) for word recognition. Basically, we combine CRFs with deep learning, in which deep features are learned and sequences are labeled in a unified framework. We pre-train the deep structure with stacked restricted Boltzmann machines (RBMs) for feature learning and optimize the entire network with an online learning algorithm. The proposed model was evaluated on two datasets, and seen to perform significantly better than competitive baseline models. The source code is available at this https URL","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124684679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 14
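A minimal sketch of the sequence-labelling half of such a model: given per-position label scores (which in the paper come from the deep, RBM-pretrained layers), a linear-chain CRF combines them with transition scores, and Viterbi decoding recovers the best label sequence. The toy emission and transition values below are hypothetical; only the decoding step is shown, not the paper's training procedure.

import numpy as np

def viterbi_decode(emissions, transitions):
    """Most likely label sequence under a linear-chain CRF.

    emissions   : (T, L) per-position label scores (e.g. from a deep network).
    transitions : (L, L) score of moving from label i to label j.
    """
    T, L = emissions.shape
    score = emissions[0].copy()            # best score ending in each label
    backptr = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        # total[i, j] = score[i] + transitions[i, j] + emissions[t, j]
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = np.argmax(total, axis=0)
        score = np.max(total, axis=0)
    # Follow back-pointers from the best final label.
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy usage over a 3-letter alphabet: emissions favour labels 0,1,1,2 and the
# transition matrix mildly penalizes repeating the same label.
emissions = np.array([[2.0, 0.1, 0.1],
                      [0.2, 1.5, 0.3],
                      [0.1, 1.4, 0.2],
                      [0.1, 0.2, 1.8]])
transitions = -1.0 * np.eye(3)             # hypothetical anti-repetition prior
print(viterbi_decode(emissions, transitions))   # -> [0, 1, 1, 2]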
Hyperspectral material classification under monochromatic and trichromatic sampling rates
International Conference on Information Photonics Pub Date : 2016-09-01 DOI: 10.1109/ICIP.2016.7532747
M. Aghagolzadeh, H. Radha
{"title":"Hyperspectral material classification under monochromatic and trichromatic sampling rates","authors":"M. Aghagolzadeh, H. Radha","doi":"10.1109/ICIP.2016.7532747","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532747","url":null,"abstract":"","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127399747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0