2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Virtual Biopsy: Distinguishing Post-traumatic Stress from Mild Traumatic Brain Injury Using Magnetic Resonance Spectroscopy
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457941
L. Mariano, John M. Irvine, B. Rowland, HuiJun Liao, K. Heaton, Irina Orlovsky, Katherine Finkelstein, A. Lin
Post-Traumatic Stress Disorder (PTSD) and mild Traumatic Brain Injury (mTBI) affect soldiers returning from recent conflicts at an elevated rate. Our study focuses on the use of magnetic resonance spectroscopy (MRS) measurements to distinguish subjects having mTBI, PTSD, or both, with the goal of identifying biomarkers for these specific disorders from the MRS data. MRS provides a non-invasive in vivo technique for measuring the concentration of metabolites in the brain, thus serving as a "virtual biopsy" that can be used to monitor a range of neurological diseases. The traditional method for analyzing MRS data assumes that the signal arises from a known set of metabolites and finds the best fit to a collection of pre-defined basis functions representing this set. Our novel approach makes no assumptions about the underlying metabolite population and instead extracts a rich set of wavelet-based features from the entire MRS signal. Capturing the structure of all significant peaks in the signal allows for the discovery of previously unknown signatures related to disease state. We applied this approach to MRS data from 100 participants across five categories: civilian control subjects, military control subjects, military with PTSD, military with mTBI, and military with both PTSD and mTBI. After signal processing to remove artifacts, features were extracted from each signal using a wavelet decomposition approach, and MRS features from subjects with PTSD, mTBI, or both were compared to both military and civilian control subjects. Our analysis identified significant changes in many different regions of the MR spectrum, including regions corresponding to glutamate, glutamine, GABA, creatine, and lactate. Classifiers based on these features exhibit correct classification rates of 80% or better in cross-validation, demonstrating the value of MRS as a non-invasive means of measuring biochemical signatures associated with PTSD and mTBI in military service men and women.
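The wavelet-based feature idea can be sketched as follows. This is a minimal Haar-wavelet illustration that uses the detail-coefficient energy at each scale as a feature; the paper does not specify its wavelet family or exact feature set, so both choices here are assumptions.

```python
import math

def haar_step(signal):
    """One level of Haar decomposition: approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_features(signal, levels=3):
    """Decompose a 1-D spectrum and collect the detail energy at each scale.

    A flat signal yields zero energy at every level; sharp spectral peaks
    concentrate energy in the fine-scale details.
    """
    feats = []
    current = list(signal)
    for _ in range(levels):
        if len(current) < 2:
            break
        current, detail = haar_step(current)
        feats.append(sum(d * d for d in detail))  # energy at this scale
    return feats
```

A classifier would then be trained on these per-scale energies rather than on fitted metabolite concentrations.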
Citations: 1
Sparse Unsupervised Clustering with Mixture Observations for Video Summarization
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457955
Xiang Xiang, D. Tran, T. Tran
This paper designs a rapid robot-movement strategy based on a curve model. Virtual target points are introduced into the robot's path planning so that the robot can complete its task smoothly and quickly. We give the method for solving the curve model in detail. At the same time, state feedback in the robot control model, based on the turning radius, is designed to address practical error. Simulation experiments show that the virtual target points not only let the robot complete its task faster but can also be applied to multi-robot formation control. A real-world experiment shows that the curve model can correct errors through robot state feedback and ultimately bring the robots to the target point successfully.
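The virtual-target-point idea amounts to steering toward an intermediate waypoint rather than the final goal. A hypothetical helper for the heading error toward such a point might look like this; the function name and the yaw-only pose model are illustrative assumptions, not from the paper.

```python
import math

def steering_error(x, y, heading, tx, ty):
    """Signed heading error (radians, in [-pi, pi]) from the robot's current
    heading to the bearing of a (virtual) target point (tx, ty)."""
    desired = math.atan2(ty - y, tx - x)
    # Wrap the difference into [-pi, pi] so the robot always turns the short way.
    return (desired - heading + math.pi) % (2 * math.pi) - math.pi
```

A controller would feed this error into the turning-radius state feedback the abstract describes.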
Citations: 0
Detecting and Segmenting White Blood Cells in Microscopy Images of Thin Blood Smears
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457970
Golnaz Moallem, M. Poostchi, Hang Yu, K. Silamut, N. Palaniappan, Sameer Kiran Antani, M. A. Hossain, R. Maude, Stefan Jaeger, G. Thoma
A malarial infection is diagnosed and monitored by screening microscope images of blood smears for parasite-infected red blood cells. Millions of blood slides are manually screened for parasites every year, a tedious and error-prone process that depends largely on the expertise of the microscopists. We have developed software that performs this task on a smartphone, using machine learning and image analysis methods to count infected red blood cells automatically. The method first needs to detect and segment red blood cells. However, the presence of white blood cells (WBCs) contaminates the red blood cell detection and segmentation process, because automatic cell detection methods can miscount WBCs as red blood cells. As a result, a preprocessing step for WBC elimination is essential. This paper proposes a novel method for white blood cell segmentation in microscopic images of blood smears. First, a range filtering algorithm locates the white blood cells in the image; a Chan-Vese level-set algorithm then estimates the boundaries of each white blood cell present. The proposed segmentation algorithm is systematically tested on a database of more than 1300 thin blood smear images exhibiting approximately 1350 WBCs. We evaluate the performance of the WBC detection and WBC segmentation steps by comparing annotations provided by a human expert with the results produced by the proposed algorithm. Our detection technique achieves 96.37% overall precision, 98.37% recall, and a 97.36% F1-score. The proposed segmentation method achieves an overall Jaccard similarity index of 82.28%. These results demonstrate that our approach can filter out WBCs, which significantly improves the precision of the cell counts for malaria diagnosis.
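The range filter used for WBC localization replaces each pixel with the spread of intensities in its neighborhood, so strongly stained WBC nuclei stand out against smoother background. A minimal sketch, assuming a square window and a plain list-of-lists grayscale image (the paper's window size is not stated):

```python
def range_filter(image, radius=1):
    """Local range (max - min) over a (2*radius+1)^2 window.

    High output values mark regions of strong local contrast, such as the
    darkly stained nuclei of white blood cells in a thin smear.
    """
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = max(vals) - min(vals)
    return out
```

Thresholding this response gives candidate WBC locations, which the Chan-Vese level set would then refine into boundaries.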
Citations: 10
Ballistic Missile Boost Phase Acceleration Reconstruction using Wavelet Multi-resolution Analysis
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457975
Colter Long, Soundararajan Ezekiel, Larry Pearlstein, J. Raquepas
In recent years, accurately characterizing the boost phase of a missile's flight has become a more challenging and prominent research topic, as the noise level is extremely large relative to the quantity of interest. Reconstructing the boost-phase acceleration profile of a ballistic missile from state observations is of interest to the technical intelligence, ballistic missile defense, and missile warning communities. Methods such as Tikhonov regularization are available if the noise level is not too large; in a very noisy environment, however, most algorithms perform poorly. In this paper, we explore the problem of estimating the thrust of a missile from very noisy estimates of its position over time using wavelet techniques. Several wavelet basis functions and multi-resolution methods are explored to yield the most effective solution to this problem. These techniques have been successfully used on actual rocket-launch data in the past. Our method can be applied to US boost-phase missile defense, such as protecting the US homeland against nuclear attacks, other weapons of mass destruction, or conventional ballistic missile attacks; protecting military bases; and protecting US allies and partners.
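The core difficulty is that acceleration is a second derivative of position, and finite differencing amplifies noise, which is what motivates regularization. A sketch of the naive estimator plus the soft-thresholding rule commonly applied to wavelet coefficients for denoising; both helpers are illustrative, not the paper's exact pipeline.

```python
import math

def second_difference(positions, dt):
    """Naive acceleration estimate a[i] = (x[i-1] - 2*x[i] + x[i+1]) / dt**2.

    Exact for noise-free data, but any observation noise is scaled by 1/dt**2,
    which is why direct differencing fails in high-noise regimes.
    """
    return [(positions[i - 1] - 2 * positions[i] + positions[i + 1]) / dt ** 2
            for i in range(1, len(positions) - 1)]

def soft_threshold(coeffs, t):
    """Shrink wavelet coefficients toward zero by t; small (noise-dominated)
    coefficients vanish while large (signal) coefficients survive."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]
```

In a wavelet multi-resolution scheme, the noisy position track is decomposed, the detail coefficients are thresholded, and the thrust profile is reconstructed from what remains.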
Citations: 0
Categorizating 3D Fetal Ultrasound Image Database in First Trimester Pregnancy based on Mid-Sagittal Plane Assessments
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457976
Cheung-Wen Chang, Shih-Ting Huang, Yu-Han Huang, Yung-Nien Sun, Pei-Ying Tsai
Mid-sagittal plane (MSP) detection is crucial for biometry assessments in ultrasound examinations. Screening on the correct MSP has been proven to be the key condition for acquiring good-quality measurements of specified biometry. In this paper, we propose categorizing 3D fetal ultrasound volume images based on the results of MSP detection. Our main focus is to find distinct descriptions or factors for database categorization and to determine how robust and effective the MSP-detection algorithm is with respect to these factors. The database, comprising 381 fetal ultrasound image volumes from 141 different normal pregnant women, was collected over more than three years at NCKU Hospital. The five factors adopted in categorizing the database are the level of image blurring, the level of weak edges, fetal adhesion, fetal posture, and fetal size. The proposed MSP detection algorithm was applied to 268 cases from the whole database (excluding the worst levels) and achieved a correct rate of 85.1%. The correct rate increases to 90.0% when using only the cases with the best conditions for all factors. Furthermore, we discuss the degree to which each factor influences MSP detection. First, the results show that images with highly weak edges (level 3) result in poor detections. Second, poor fetal posture has the strongest effect on MSP detection (a 32% incorrect rate), possibly because deep adhesion to the endometrium prevents the fetal head boundary from being fitted well. In fine-quality images, the adhesion factor proves more determinative than the image-quality factors. Third, the adhesion and weak-edge factors have similar effects (not statistically significant), with incorrect rates of 23% and 25.7%, respectively. The less influential factors are fetal size and image blurring, with incorrect rates of up to 14% and 16%, respectively.
Citations: 2
Gaze Tracking in 3D Space with a Convolution Neural Network "See What I See"
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457962
A. Adiba, Satoshi Asatani, Seiichi Tagawa, H. Niioka, Jun Miyake
This paper presents an integrated architecture to estimate gaze vectors under unrestricted head motions. Since previous approaches focused on estimating gaze toward a small planar screen, calibration was needed prior to use. With a Kinect device, we develop a method that relies on depth sensing to obtain robust and accurate head pose tracking, and we obtain the eye-in-head gaze direction by training on visual data from eye images with a neural network (NN) model. Our model uses a convolutional neural network (CNN) with five layers: two convolution-pooling pairs and a fully connected output layer. The filters are learned from random patches of the images in an unsupervised way by k-means clustering. The learned filters are fed to convolution layers, each followed by a pooling layer, to reduce the resolution of the feature map and the sensitivity of the output to shifts and distortions. Finally, the fully connected layers serve as a classifier in a feed-forward process. We reconstruct the gaze vectors from a set of head and eye pose orientations. The results of this approach suggest that the gaze estimation error is 5 degrees. This model is more accurate than a simple NN and an adaptive linear regression (ALR) approach.
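Reconstructing a gaze vector from head and eye orientations is a composition of rotations: the eye-in-head direction is rotated into the world frame by the head pose. A minimal sketch, assuming a yaw-pitch angle convention with +z forward and head pitch/roll omitted for brevity (the paper does not specify its parameterization):

```python
import math

def direction(yaw, pitch):
    """Unit vector for yaw-pitch angles in radians; +z forward, +x right, +y up."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

def rotate_yaw(v, yaw):
    """Rotate a vector about the vertical (y) axis by the head yaw."""
    x, y, z = v
    return (x * math.cos(yaw) + z * math.sin(yaw),
            y,
            -x * math.sin(yaw) + z * math.cos(yaw))

def gaze_vector(head_yaw, eye_yaw, eye_pitch):
    """Compose the eye-in-head direction with the head orientation."""
    return rotate_yaw(direction(eye_yaw, eye_pitch), head_yaw)
```

With both eye angles at zero, the gaze simply follows the head; turning the head 90 degrees swings the same eye-in-head direction to the side.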
Citations: 1
A virtual filesystem approach to storage, analysis and delivery of volumetric image data for connectomics
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457951
Arthur W. Wetzel, Greg Hood, A. Ropelewski
Biology and medicine are increasingly driven by analyses of 3D and time-series imagery for studies that are not possible with 2D images. Structural data required for building spatially realistic cell and connectomics models are particularly demanding of both resolution and spatial extent. Image capture methods for optical and electron microscopy at gigapixel-per-second rates are now routine. In combination, these factors can currently produce hundreds of terabytes per specimen, at data densities up to 1 PB per cubic mm of tissue. New techniques are needed to handle these speeds and data scales economically and to distribute results for on-demand analyses by researchers and students nationwide. A virtual volume file system (VVFS) approach to these problems is suggested by trends in the economics of computation and data storage, along with typical data access patterns. In recent years, improvements in the speed and cost of computation have dramatically outpaced gains in storage cost and performance. This is particularly true in GPGPU computation, where data bandwidth is often the limiting factor for overall throughput. The essence of the VVFS mechanism is to apply on-the-fly computation in place of redundant data storage in critical operations such as registration, rendering, and automated recognition. This is accomplished using the Linux Filesystem in Userspace (FUSE) mechanism to provide file-compatible interfaces to programs that operate on data files. The interface produces the appropriate content on demand as applications such as TensorFlow or other analysis systems access the virtual files. The VVFS provides a flexible framework for connecting multiple program units into large-scale applications while also reducing redundant data storage. By moving computation directly into the access path, it minimizes data traffic while processing only those parts of the virtual data that end-user applications consume.
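The on-demand principle can be illustrated without FUSE itself: content is computed per byte range at read time and never materialized in full. This plain-Python sketch is only an analogy for the idea; a real VVFS would expose the same read(offset, length) behavior through FUSE callbacks, and the class and parameter names here are invented for illustration.

```python
class VirtualFile:
    """A file-like object whose bytes are computed on read, not stored.

    `compute` maps a byte offset to a single byte value, standing in for
    whatever transformation (registration, rendering, recognition) a VVFS
    would apply on the fly.
    """

    def __init__(self, size, compute):
        self.size = size        # virtual file size in bytes
        self.compute = compute  # offset -> byte value (0..255)

    def read(self, offset, length):
        """Return up to `length` bytes starting at `offset`, computed lazily."""
        end = min(offset + length, self.size)
        return bytes(self.compute(i) for i in range(offset, end))
```

Only the requested range is ever computed, which is exactly how the access-path computation avoids storing redundant derived volumes.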
Citations: 0
Satellite Image Classification with Deep Learning
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457969
M. D. Pritt, Gary Chern
Satellite imagery is important for many applications, including disaster response, law enforcement, and environmental monitoring. These applications require the manual identification of objects and facilities in the imagery. Because the geographic expanses to be covered are great and the analysts available to conduct the searches are few, automation is required, yet traditional object detection and classification algorithms are too inaccurate and unreliable to solve the problem. Deep learning is a family of machine learning algorithms that has shown promise for automating such tasks and has achieved success in image understanding by means of convolutional neural networks. In this paper we apply them to the problem of object and facility recognition in high-resolution, multi-spectral satellite imagery. We describe a deep learning system for classifying objects and facilities from the IARPA Functional Map of the World (fMoW) dataset into 63 different classes. The system consists of an ensemble of convolutional neural networks and additional neural networks that integrate satellite metadata with image features. It is implemented in Python using the Keras and TensorFlow deep learning libraries and runs on a Linux server with an NVIDIA Titan X graphics card. At the time of writing, the system is in 2nd place in the fMoW TopCoder competition. Its total accuracy is 83%, its F1-score is 0.797, and it classifies 15 of the classes with accuracies of 95% or better.
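Ensembling CNNs typically means averaging the per-class probabilities each member predicts and taking the argmax. A minimal sketch of that voting step; the paper's ensemble additionally fuses satellite metadata features, which are omitted here, and the weighting scheme is an assumption.

```python
def ensemble_predict(model_probs):
    """Average class-probability vectors from several models.

    `model_probs` is a list of per-model probability lists, all the same
    length. Returns (winning class index, averaged probabilities).
    """
    n_models = len(model_probs)
    n_classes = len(model_probs[0])
    avg = [sum(p[c] for p in model_probs) / n_models for c in range(n_classes)]
    return avg.index(max(avg)), avg
```

Averaging smooths out individual models' errors, which is the usual reason an ensemble outperforms its members on datasets like fMoW.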
Citations: 84
Vehicle Tracking in Wide Area Motion Imagery using KC-LoFT Multi-Feature Discriminative Modeling
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457953
Noor M. Al-Shakarji, F. Bunyak, G. Seetharaman, K. Palaniappan
Our group recently proposed the LoFT (Likelihood of Features Tracking) system [1], which can successfully track objects of interest under different scenarios in wide-area motion imagery and full motion video. LoFT is a recognition-based single-target tracker that relies on fusion of multiple complementary features. In this paper, LoFT is extended with a kernelized correlation filter (KCF) module to incorporate a robust, continuous target-template update scheme, to better localize the target, and to recover from sudden appearance changes and occlusions. A decision module using the peak-to-sidelobe ratio is added to the KCF module to prevent error accumulation from blending non-target regions into the target template during updates, and to prevent fusion of the KCF likelihood map with the other LoFT feature likelihood maps when the KCF response is not reliable. KC-LoFT is a single-object tracker that fuses the most discriminative features from LoFT and KCF to better localize the target object in the search window. KC-LoFT was tested on the ABQ aerial wide-area motion imagery dataset [2] and produced promising results compared to recent state-of-the-art tracking systems in terms of accuracy and robustness.
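The peak-to-sidelobe ratio (PSR) that gates the KCF output compares the correlation peak against the statistics of the surrounding response. A sketch of the standard formulation; the exclusion-window size and the reliability threshold are tracker-specific choices not given in the abstract.

```python
import statistics

def peak_to_sidelobe_ratio(response, exclude=1):
    """PSR = (peak - mean(sidelobes)) / std(sidelobes).

    `response` is a 2-D correlation map (list of rows); `exclude` is the
    radius of the window around the peak left out of the sidelobe
    statistics. A low PSR signals an unreliable response (e.g. occlusion),
    so the tracker should skip the template update. Assumes the sidelobes
    are not all identical (std > 0).
    """
    peak = max(max(row) for row in response)
    py, px = next((y, x) for y, row in enumerate(response)
                  for x, v in enumerate(row) if v == peak)
    side = [v for y, row in enumerate(response) for x, v in enumerate(row)
            if abs(y - py) > exclude or abs(x - px) > exclude]
    return (peak - statistics.mean(side)) / statistics.pstdev(side)
```

In KC-LoFT's decision module, a response whose PSR falls below a threshold would be neither blended into the template nor fused with the other likelihood maps.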
Citations: 3
Generative Adversarial Networks for Classification
2017 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date : 2017-10-01 DOI: 10.1109/AIPR.2017.8457952
S. Israel, J. Goldstein, Jeffrey Klein, J. Talamonti, Franklin R. Tanner, Shane Zabel, Phil Sallee, Lisa McCoy
Our team is reviewing tools and techniques that enable rapid prototyping. Generative Adversarial Networks (GANs) have been shown to reduce training requirements for detection problems. GANs pit generative and discriminative classifiers against each other to improve detection performance. This paper expands the use of GANs from detection (k = 2) to classification (k > 2) problems. Several GAN network structures and training set sizes were compared against the baseline discriminative network and Bayes classifiers. The results show no significant performance differences among any of the network configurations or training set sizes. However, the GANs trained with fewer network nodes and iterations than the discriminative classifiers alone required.
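One common way to extend a GAN discriminator from detection to k-class classification is to give it k real-class outputs plus one extra "fake" output, then take the argmax over all k+1. This sketch shows only that labeling scheme, which is an assumption about one standard approach, not necessarily the configuration the paper evaluated.

```python
import math

def softmax(logits):
    """Numerically stable softmax over raw scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Label a sample from a (k+1)-output discriminator.

    The first k logits score the real classes; the final logit scores
    'generated/fake'. Returns the class index, or the string 'fake'.
    """
    probs = softmax(logits)
    k = len(logits) - 1
    label = probs.index(max(probs))
    return "fake" if label == k else label
```

During adversarial training the generator's samples supply the fake class, so labeled data is only needed for the k real classes, which is how GANs can reduce training requirements.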
Citations: 15