2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI): Latest Publications

REVERSIBLE COLOR-TO-GRAY MAPPING WITH RESISTANCE TO JPEG ENCODING
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470306
T. Horiuchi, Xu Wen, K. Hirai
Abstract: The use of a reversible color-to-gray algorithm is an effective technique in practical applications, in terms of running cost, data quantity, security, etc. Most conventional image processing approaches cannot apply image encoding to color-embedded gray images. In this study, we propose a reversible color-to-gray method with resistance to JPEG encoding. To embed color information, the discrete cosine transform (DCT) is employed, owing to its good affinity with JPEG encoding. In the proposed method, an input color image is first converted to the YCbCr color space, and the DCT coefficients of the Cb and Cr components are embedded into the DCT coefficients of the Y component. An appropriate embedding position is then determined using the JPEG quantization table. Experiments were performed on standard color images, and the color recovery error after JPEG coding was compared and verified against conventional methods using PSNR and CIEDE2000.
Citations: 1
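The DCT-domain embedding described above lends itself to a compact illustration. The following is a minimal sketch, not the authors' exact scheme: the embedding positions are fixed here rather than derived from the JPEG quantization table, and only one Cb and one Cr coefficient are embedded per 8x8 luminance block.

```python
import numpy as np
from scipy.fft import dctn, idctn

def rgb_to_ycbcr(img):
    # ITU-R BT.601 full-range RGB -> YCbCr conversion
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = img @ m.T
    ycc[..., 1:] += 128.0
    return ycc

def embed_block(y_block, cb_coef, cr_coef, pos_cb=(6, 7), pos_cr=(7, 6)):
    """Hide one Cb and one Cr DCT coefficient inside an 8x8 luminance block.
    pos_cb/pos_cr are illustrative high-frequency positions, not the ones the
    paper selects from the JPEG quantization table."""
    Y = dctn(y_block, norm='ortho')   # 8x8 DCT of the luminance block
    Y[pos_cb] = cb_coef               # overwrite coefficients that JPEG
    Y[pos_cr] = cr_coef               # quantization is expected to preserve
    return idctn(Y, norm='ortho')     # color-embedded gray block
```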
Automatic Assessment of Hoarding Clutter from Images Using Convolutional Neural Networks
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470375
M. Tezcan, J. Konrad, Jordana Muroff
Abstract: Hoarding is a mental and public health problem stemming from the difficulty of discarding one's possessions and the resulting clutter. In the last decade, a visual method called the "Clutter Image Rating" (CIR) has been developed for the assessment of hoarding severity. It involves rating clutter in a patient's home on the CIR scale from 1 to 9 using a set of reference images. Such assessment, however, is time-consuming, subjective, and may be non-repeatable. In this paper, we propose a new automatic clutter assessment method from images, according to the CIR scale, based on deep learning. While, ideally, the goal is to perfectly classify clutter, trained professionals admit assigning CIR values within ±1. Therefore, we study two loss functions for our network: one that aims to precisely assign a CIR value and one that aims to do so within ±1. We also propose a weighted combination of these loss functions that, as a byproduct, allows us to control the CIR mean absolute error (MAE). On a recently collected dataset, we achieved a ±1 accuracy of 82% and an MAE of 0.88, significantly outperforming our previous results of 60% and 1.58, respectively.
Citations: 1
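A hypothetical sketch of the weighted two-term loss mentioned above, written in PyTorch. The paper does not state the framework or the exact form of the ±1 term, so the tolerant term below (negative log of the probability mass on the target class and its two neighbors) is an assumption for illustration only.

```python
import torch
import torch.nn.functional as F

def combined_cir_loss(logits, target, alpha=0.5):
    """logits: (B, 9) scores over CIR levels 1..9; target: (B,) indices 0..8.
    alpha trades exact-class accuracy against +/-1 tolerance (and hence MAE)."""
    exact = F.cross_entropy(logits, target)                # push toward the exact CIR value
    probs = F.softmax(logits, dim=1)
    classes = torch.arange(9, device=logits.device).unsqueeze(0)
    within1 = (classes - target.unsqueeze(1)).abs() <= 1   # classes t-1, t, t+1
    tolerant = -(probs * within1).sum(dim=1).clamp_min(1e-8).log().mean()
    return alpha * exact + (1.0 - alpha) * tolerant
```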
Golden Number Sampling Applied to Compressive Sensing
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470345
F. B. D. Silva, R. V. Borries, C. Miosso
Abstract: In a common compressive sensing (CS) formulation, a limited set of Discrete Fourier Transform samples of a signal allows its reconstruction via an optimization procedure, provided that certain well-known conditions hold. However, the frequencies in the Discrete Fourier Transform correspond to equally spaced samples of the continuous frequency domain, and other possible frequency distributions are not usually considered in compressive sensing. This paper presents an irregular sampling of the normalized frequencies of the Discrete Fourier Transform which converges to an equidistributed sequence. This is done by taking the sequence of the fractional parts of the successive multiples of the golden number. That sequence has been considered in applications in computer graphics and in magnetic resonance imaging [1], [2]. We also show that sub-matrices of the Discrete Fourier Transform with frequencies corresponding to fractional parts of multiples of the golden number produce signal-to-error ratios almost as high as the equally spaced counterpart. In addition, we show that the proposed irregular sampling converges faster to a uniform distribution in the range (0, 1). Thus, it reduces the discrepancy of pairwise distances of consecutive elements in the frequency sampling.
Citations: 1
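The sampling sequence itself is simple to generate. The sketch below produces the equidistributed frequencies and, under the assumption that measurements are taken at the nearest DFT bins, builds the corresponding partial DFT matrix; the bin-mapping step is our simplification, not necessarily the construction used in the paper.

```python
import numpy as np

def golden_frequencies(m):
    """Fractional parts of successive multiples of the golden number,
    equidistributed in (0, 1)."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    return np.mod(np.arange(1, m + 1) * phi, 1.0)

def golden_partial_dft(n, m):
    """Rows of the n-point DFT at the bins nearest the golden frequencies."""
    bins = np.unique(np.round(golden_frequencies(m) * n).astype(int) % n)
    k = np.outer(bins, np.arange(n))
    return np.exp(-2j * np.pi * k / n) / np.sqrt(n)
```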
A New Hardware Architecture for the Ridge Regression Optical Flow Algorithm
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470370
Taylor Simons, Dah-Jye Lee
Abstract: We present a new hardware architecture for calculating the optical flow of real-time video streams. Our system produces dense motion fields in real time at high resolutions. We implemented a new version of the Ridge Regression Optical Flow algorithm. The architecture design focuses on maximizing parallel operations over large amounts of pixel data and pipelining the data flow to allow real-time throughput. A specialized memory controller unit was designed to access pixel data from seven different frames; this memory controller alleviates any memory bottleneck. The new architecture can process 1080p HD video streams at over 60 frames per second. The design requires neither a processor nor a data bus, which allows it to be more easily manufactured as an ASIC.
Citations: 0
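For reference, the per-pixel computation that such a pipeline parallelizes can be written compactly in software. The sketch below assumes a Lucas-Kanade-style windowed least-squares formulation with a ridge (Tikhonov) term; the paper's exact formulation over seven frames may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ridge_flow(Ix, Iy, It, lam=1e-2, win=7):
    """Dense flow (u, v) from spatial (Ix, Iy) and temporal (It) derivatives."""
    # Windowed sums of the normal-equation entries.
    Sxx = uniform_filter(Ix * Ix, win); Sxy = uniform_filter(Ix * Iy, win)
    Syy = uniform_filter(Iy * Iy, win)
    Sxt = uniform_filter(Ix * It, win); Syt = uniform_filter(Iy * It, win)
    # Closed-form solve of (A^T A + lam*I) [u v]^T = -A^T b per pixel (2x2 system).
    det = (Sxx + lam) * (Syy + lam) - Sxy * Sxy
    u = (-(Syy + lam) * Sxt + Sxy * Syt) / det
    v = (Sxy * Sxt - (Sxx + lam) * Syt) / det
    return u, v
```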
Complex Correntropy Induced Metric Applied to Compressive Sensing with Complex-Valued Data
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470371
João P. F. Guimarães, A. I. R. Fontes, F. B. D. Silva, A. Martins, R. V. Borries
Abstract: The correntropy induced metric (CIM) is a well-defined metric induced by the correntropy function and has been applied to different problems in signal processing and machine learning, but CIM was limited to the case of real-valued data. This paper extends the CIM to the case of complex-valued data, denoted the Complex Correntropy Induced Metric (CCIM). The new metric preserves the well-known benefits of extracting high-order statistical information from correntropy, but now deals with complex-valued data. As an example, the paper shows the CCIM applied to the approximation of ℓ0-minimization in the reconstruction of complex-valued sparse signals in a compressive sensing problem formulation. A mathematical proof is presented as well as simulation results that indicate the viability of the proposed new metric.
Citations: 2
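As a rough illustration of how such a metric is evaluated on samples, the sketch below uses a Gaussian kernel on the complex difference; the paper's complex correntropy definition and kernel may differ, so treat this as an assumption rather than the CCIM itself.

```python
import numpy as np

def ccim(x, y, sigma=1.0):
    """Sample estimate of a correntropy-induced metric for complex vectors x, y."""
    k = np.exp(-np.abs(x - y) ** 2 / (2.0 * sigma ** 2))  # Gaussian kernel on |x - y|
    v = k.mean()                                           # sample correntropy estimate
    return np.sqrt(1.0 - v)                                # CIM = sqrt(k(0) - V), with k(0) = 1
```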
STRATEGIES FOR QUALITY-AWARE VIDEO CONTENT ANALYTICS
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470354
A. Reibman
Abstract: Recent research in video analytics promises the capability to automatically detect and extract information from video. Potential tasks include object and pedestrian detection, object and face recognition, motion detection, object tracking, as well as background subtraction and activity recognition. However, in many instances, the quality of the video from which information is to be extracted is not very high. This may be because of system constraints (like a bandwidth constraint or VHS recorder), environmental conditions (fog or low light), or a poor camera (wobbly/moving camera, limited FOV, or just a low-quality lens).
Citations: 2
Fully Automatic Baseline Correction in Magnetic Resonance Spectroscopy
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470319
Omid Bazgir, S. Mitra, B. Nutter, E. Walden
Abstract: Proton Magnetic Resonance Spectroscopy (1H MRS) in conjunction with Magnetic Resonance Imaging (MRI) has been a significant topic of research for quantitative assessment and early detection of neurodegenerative disorders for more than two decades. However, robust techniques for MRS data analysis are still being developed for wide clinical use. Many neurodegenerative diseases exhibit changes in concentrations of specific metabolites. One of the challenging problems in developing consistent quantitative estimation of metabolite concentrations is proper correction of the MRS baseline due to the contributions from macromolecules and lipids. We propose a novel approach based on interpolation of minima in MR spectra and apply this technique to both in vitro and in vivo MRS data analysis. Our results demonstrate that the proposed method is fast, independent of tuning, and provides an accurate estimation of the MRS baseline, leading to improved computational estimates of metabolite concentrations.
Citations: 1
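A minimal sketch of the minima-interpolation idea described above, assuming local minima are detected with a fixed neighborhood and joined by a cubic spline; the paper's minima selection and interpolation details are not reproduced here.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def baseline_correct(ppm, spectrum, order=20):
    """Estimate a smooth baseline through local minima of a 1D spectrum and
    subtract it. Assumes ppm is strictly increasing and the spectrum has
    enough minima to anchor a spline."""
    mins = argrelextrema(spectrum, np.less, order=order)[0]          # local minima indices
    knots = np.unique(np.concatenate(([0], mins, [len(spectrum) - 1])))
    baseline = CubicSpline(ppm[knots], spectrum[knots])(ppm)          # spline through minima
    return spectrum - baseline, baseline
```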
Classification of Primary Cilia in Microscopy Images Using Convolutional Neural Random Forests
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470320
Sundaresh Ram, Mohammed S. Majdi, Jeffrey J. Rodríguez, Yang Gao, H. Brooks
Abstract: Accurate detection and classification of primary cilia in microscopy images is an essential and fundamental task for many biological studies, including diagnosis of primary ciliary dyskinesia. Manual detection and classification of individual primary cilia by visual inspection is time-consuming and prone to subjective bias. However, automation of this process is challenging as well, due to clutter, bleed-through, imaging noise, and the similar characteristics of the non-cilia candidates present within the image. We propose a convolutional neural random forest classifier that combines a convolutional neural network with random decision forests to classify the primary cilia in fluorescence microscopy images. We compare the performance of the proposed classifier with that of an unsupervised k-means classifier and a supervised multi-layer perceptron classifier on real data consisting of 8 representative cilia images containing more than 2300 primary cilia, using precision/recall rates, ROC curves, AUC, and the Fβ-score to measure classification accuracy. Results show that our proposed classifier achieves better classification accuracy.
Citations: 6
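One hedged way to realize the CNN-plus-random-forest combination is to use a small CNN as a feature extractor and feed its penultimate-layer features to a random forest. The network layout, feature dimension, and training procedure below are illustrative assumptions, not the architecture from the paper; the CNN is assumed to have been trained beforehand.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class CiliaFeatures(nn.Module):
    """Small CNN used only as a feature extractor (illustrative layout)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 16, dim), nn.ReLU())
    def forward(self, x):
        return self.net(x)

def fit_random_forest(feature_net, patches, labels):
    """patches: (N, 1, H, W) float tensor of candidate crops; labels: (N,) ints."""
    feature_net.eval()
    with torch.no_grad():
        feats = feature_net(patches).cpu().numpy()   # CNN features per candidate
    return RandomForestClassifier(n_estimators=200).fit(feats, labels)
```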
Underwater Image Restoration using Deep Networks to Estimate Background Light and Scene Depth
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470347
Keming Cao, Yan-Tsung Peng, P. Cosman
Abstract: Images taken underwater often suffer color distortion and low contrast because of light scattering and absorption. An underwater image can be modeled as a blend of a clear image and a background light, with the relative amounts of each determined by the depth from the camera. In this paper, we propose two neural network structures to estimate background light and scene depth, to restore underwater images. Experimental results on synthetic and real underwater images demonstrate the effectiveness of the proposed method.
Citations: 41
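The image-formation model stated above (the observed image as a depth-dependent blend of the clear image and the background light) can be inverted once the two networks have produced their estimates. The attenuation coefficient and the clipping below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def restore(I, B, depth, beta=0.8, t_min=0.1):
    """I: (H, W, 3) observed image in [0, 1]; B: (3,) estimated background light;
    depth: (H, W) estimated scene depth. Inverts I = J*t + B*(1 - t)."""
    t = np.clip(np.exp(-beta * depth), t_min, 1.0)[..., None]  # transmission from depth
    J = (I - B[None, None, :] * (1.0 - t)) / t                  # recover the clear image
    return np.clip(J, 0.0, 1.0)
```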
Thermal Image Enhancement Algorithm Using Local And Global Logarithmic Transform Histogram Matching With Spatial Equalization
2018 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI) Pub Date: 2018-04-01 DOI: 10.1109/SSIAI.2018.8470344
V. Voronin, S. Tokareva, E. Semenishchev, S. Agaian
Abstract: This paper presents a new thermal image enhancement algorithm based on combined local and global image processing in the frequency domain. The presented approach uses the fact that the relationship between stimulus and perception is logarithmic. The basic idea is to apply a logarithmic-transform histogram matching with spatial equalization approach to different image blocks. The resulting image is a weighted mean of all processed blocks. The weights for each locally and globally enhanced image are obtained by optimizing a measure of enhancement (EME). Experimental results illustrate the performance of the proposed algorithm on real thermal images in comparison with traditional methods.
Citations: 22
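A simplified sketch of the block-wise part of the idea described above: each block is enhanced in the logarithmic domain (plain histogram equalization stands in for the paper's logarithmic-transform histogram matching) and the blocks are blended with EME-derived weights. The global term and the exact EME optimization are omitted, so this is an assumption-laden illustration rather than the published algorithm.

```python
import numpy as np
from skimage import exposure

def eme(block, eps=1e-3):
    """EME-style contrast measure: log ratio of block max to block min."""
    return 20.0 * np.log10((block.max() + eps) / (block.min() + eps))

def enhance_blocks(img, bs=64):
    """img: 2-D thermal image scaled to [0, 1]."""
    out = np.zeros_like(img, dtype=float)
    wsum = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], bs):
        for j in range(0, img.shape[1], bs):
            blk = np.log1p(img[i:i + bs, j:j + bs].astype(float))  # logarithmic domain
            enh = exposure.equalize_hist(blk)                      # local enhancement
            w = eme(enh)                                           # contrast-based weight
            out[i:i + bs, j:j + bs] += w * enh
            wsum[i:i + bs, j:j + bs] += w
    return out / np.maximum(wsum, 1e-8)
```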