2022 IEEE International Conference on Image Processing (ICIP): Latest Publications

Graph Autoencoder-Based Embedded Learning in Dynamic Brain Networks for Autism Spectrum Disorder Identification
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9898034
Fuad M. Noman, S. Yap, R. Phan, H. Ombao, C. Ting
Abstract: Recent applications of pattern recognition techniques to brain connectome-based classification focus on static functional connectivity (FC), neglecting the dynamics of FC over time, and use input connectivity matrices on a regular Euclidean grid. We exploit graph convolutional networks (GCNs) to learn irregular structural patterns in brain FC networks and propose extensions to capture dynamic changes in network topology. We develop a dynamic graph autoencoder (DyGAE)-based framework that leverages the time-varying topological structure of dynamic brain networks to identify autism spectrum disorder (ASD). The framework combines a GCN-based DyGAE, which encodes individual-level dynamic networks into time-varying low-dimensional network embeddings, with classifiers based on a weighted fully-connected neural network (FCNN) and long short-term memory (LSTM) that perform dynamic graph classification from the learned spatial-temporal information. Evaluation on a large ABIDE resting-state functional magnetic resonance imaging (rs-fMRI) dataset shows that our method outperformed state-of-the-art methods in detecting altered FC in ASD. Dynamic FC analyses with DyGAE-learned embeddings also reveal apparent group differences between ASD and healthy controls in network profiles and in the switching dynamics of brain states.
Citations: 3
Aggregated Context Network For Semantic Segmentation Of Aerial Images
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9898016
A. Chouhan, A. Sur, D. Chutia
Abstract: With the considerable advancement of remote sensing technology and computer vision, automatic scene understanding for very high-resolution (VHR) aerial imagery has become a necessary research topic. Semantic segmentation of VHR imagery is an important task in which context information plays a crucial role, and adequate feature delineation is difficult due to the high class imbalance in remotely sensed data. In this work, we propose a variant of an encoder-decoder architecture that incorporates residual attentive skip connections. We add a multi-context block to each encoder unit to capture multi-scale and multi-context features, and use dense connections for effective feature extraction. A comprehensive set of experiments reveals that the proposed scheme outperforms recently published work by 3% in overall accuracy and F1 score on the ISPRS Vaihingen and ISPRS Potsdam benchmark datasets.
Citations: 0
Automatic Illumination of Flat-Colored Drawings by 3D Augmentation of 2D Silhouettes
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897386
D. Tschumperlé, C. Porquet, A. Mahboubi
Abstract: In this paper, a new automatic method for the illumination of flat-colored drawings is proposed. First, we reconstruct a 3D augmentation of a 2D silhouette from an analysis of its skeleton. Then, we apply the Phong lighting model, which relies on the estimated normal map, to generate an illuminated drawing. This method compares favorably to recent state-of-the-art methods, e.g. those using convolutional neural networks.
Citations: 0
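The Phong model referenced in the abstract above has a compact closed form. The sketch below is a generic per-pixel Phong shader over a normal map, not the authors' implementation; the coefficients `ka`, `kd`, `ks` and the `shininess` exponent are illustrative choices:

```python
import numpy as np

def phong_shade(normals, light_dir, view_dir, ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Per-pixel Phong intensity from a normal map of shape (H, W, 3).

    I = ka + kd * max(0, N.L) + ks * max(0, R.V)^shininess,
    where R is the light direction reflected about the surface normal.
    """
    L = np.asarray(light_dir, float); L /= np.linalg.norm(L)
    V = np.asarray(view_dir, float); V /= np.linalg.norm(V)
    N = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    ndotl = np.clip((N * L).sum(-1), 0.0, None)        # diffuse term
    R = 2.0 * ndotl[..., None] * N - L                 # reflected light direction
    rdotv = np.clip((R * V).sum(-1), 0.0, None)
    return ka + kd * ndotl + ks * rdotv ** shininess

# A flat patch facing both the light and the viewer gets full diffuse + specular.
flat = np.zeros((2, 2, 3)); flat[..., 2] = 1.0
img = phong_shade(flat, light_dir=[0, 0, 1], view_dir=[0, 0, 1])
```

With the default coefficients summing to 1.0, that head-on patch shades to full intensity everywhere.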
Deformable Alignment And Scale-Adaptive Feature Extraction Network For Continuous-Scale Satellite Video Super-Resolution
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897998
Ning Ni, Hanlin Wu, Li-bao Zhang
Abstract: Video super-resolution (VSR), and especially continuous-scale VSR, plays a crucial role in improving the quality of satellite video. Continuous-scale VSR aims to handle arbitrary (integer or non-integer) scale factors with a single model, which helps meet the needs of video transmission at different compression ratios and of arbitrary zooming with the mouse wheel. In this article, we propose a novel network for continuous-scale satellite VSR (CAVSR). Specifically, we first propose a time-series-aware dynamic routing deformable alignment module (TDAM) for feature alignment. Second, we develop a scale-adaptive feature extraction module (SFEM), which uses the proposed scale-adaptive convolution (SA-Conv) to dynamically generate different filters based on the input scale information. Finally, we design a global implicit-function feature-adaptive walk continuous-scale upsampling module (GFCUM), which performs feature-adaptive walks according to input features at different scales and completes the continuous-scale mapping from coordinates to pixel values. Experimental results demonstrate that CAVSR has superior reconstruction performance.
Citations: 3
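The "mapping from coordinates to pixel values" idea behind continuous-scale upsampling can be illustrated with plain bilinear resampling at an arbitrary scale factor; the learned implicit function of GFCUM is replaced here by fixed bilinear weights, so this is only a sketch of the coordinate-sampling mechanism, not of the paper's module:

```python
import numpy as np

def upsample_continuous(img, scale):
    """Resample an (H, W) image to an arbitrary (possibly non-integer) scale
    by bilinear interpolation at continuous target coordinates."""
    h, w = img.shape
    oh, ow = int(round(h * scale)), int(round(w * scale))
    # Continuous source coordinates of each output pixel center.
    ys = (np.arange(oh) + 0.5) / scale - 0.5
    xs = (np.arange(ow) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = np.clip(ys - y0, 0, 1)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0, 1)[None, :]   # horizontal blend weights
    tl = img[y0][:, x0]; tr = img[y0][:, x0 + 1]
    bl = img[y0 + 1][:, x0]; br = img[y0 + 1][:, x0 + 1]
    top = tl * (1 - wx) + tr * wx
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy

out = upsample_continuous(np.full((4, 4), 3.0), 1.5)  # 4x4 -> 6x6
```

The same function accepts any positive scale, which is the property a continuous-scale VSR head must provide.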
Multi-Stage Feature Alignment Network for Video Super-Resolution
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897627
Keito Suzuki, M. Ikehara
Abstract: Video super-resolution aims at generating high-resolution video frames from multiple adjacent low-resolution frames. An important aspect of video super-resolution is the alignment of neighboring frames to the reference frame. Previous methods align the frames directly, using either optical flow or deformable convolution. However, estimating motion directly from low-resolution inputs is hard, since they often contain blur and noise that degrade image quality. To address this problem, we propose conducting feature alignment across multiple stages to align the frames more accurately. Furthermore, to fuse the aligned features, we introduce a novel Attentional Feature Fusion Block that applies a spatial attention mechanism to avoid areas with occlusion or misalignment. Experimental results show that the proposed method achieves performance competitive with other state-of-the-art super-resolution methods while reducing network parameters.
Citations: 0
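Spatial-attention-gated fusion of the kind the Attentional Feature Fusion Block describes can be caricatured with a similarity-gated blend: a mask close to 1 where reference and aligned features agree, close to 0 where they disagree (as in occluded regions). This is a hypothetical toy, not the paper's block:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attentional_fusion(ref, aligned):
    """Fuse aligned neighbor features into the reference with a spatial mask.

    The per-pixel mask is high where reference and aligned features agree,
    so occluded or misaligned regions contribute less to the output.
    """
    score = (ref * aligned).sum(axis=-1, keepdims=True)  # per-pixel similarity
    mask = sigmoid(score)                                # attention weight in (0, 1)
    return ref + mask * aligned

ref = np.full((1, 1, 4), 2.0)
agree = attentional_fusion(ref, ref)       # well-aligned: neighbor passes through
occluded = attentional_fusion(ref, -ref)   # mismatch: neighbor is suppressed
```

In a real network the mask would come from learned convolutions rather than a raw dot product; the gating structure is the point here.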
Linear Discriminant Analysis Metric Learning Using Siamese Neural Networks
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897925
Abin Jose, Qinglin Mei, D. Eschweiler, Ina Laube, J. Stegmaier
Abstract: We propose a method for learning Linear Discriminant Analysis (LDA) with a Siamese Neural Network (SNN) architecture to obtain a low-dimensional image descriptor. The novelty of our work is that we learn the LDA projection matrix between the final fully-connected layers of an SNN. An SNN architecture is used because the proposed loss maximizes the Kullback-Leibler divergence between the feature distributions of the two branches. The network learns an optimized feature space with the inherent properties of LDA: the learned image descriptors (a) are low-dimensional, (b) have small intra-class variance, (c) have large inter-class variance, and (d) can distinguish the classes with linear decision hyperplanes. The proposed method has the advantage that LDA learning happens end-to-end. We measured classification accuracy on three datasets, MNIST, CIFAR-10, and STL-10, and compared the performance with other state-of-the-art methods. We also measured the KL divergence between class pairs and visualized the projections of feature vectors along the learned discriminant directions.
Citations: 0
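The KL divergence between two feature distributions, as used in the loss above, has a closed form when each branch's features are summarized by a Gaussian. The sketch below assumes Gaussian-modeled branch features (an illustrative simplification, not the paper's exact loss):

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL(N0 || N1) between two multivariate Gaussians:

    0.5 * ( tr(S1^-1 S0) + (m1-m0)^T S1^-1 (m1-m0) - k + ln(det S1 / det S0) )
    """
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0)
                  + diff @ inv1 @ diff
                  - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

mu = np.zeros(2); cov = np.eye(2)
same = gaussian_kl(mu, cov, mu, cov)                       # identical: 0
shifted = gaussian_kl(mu, cov, np.array([1.0, 0.0]), cov)  # unit mean shift: 0.5
```

Maximizing this quantity between class-conditional feature distributions pushes the branches toward the large inter-class separation that LDA assumes.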
Revisiting Artistic Style Transfer for Data Augmentation in A Real-Case Scenario
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897728
Stefano D'Angelo, F. Precioso, F. Gandon
Abstract: A tremendous number of techniques have been proposed to transfer artistic style from one image to another, in particular techniques exploiting neural representations of data, from Convolutional Neural Networks to Generative Adversarial Networks. However, most of these techniques either do not accurately account for the semantic information of the objects present in both images or require a considerable training set. In this paper, we provide a data augmentation technique that is as faithful as possible to the style of the reference artist while requiring as few training samples as possible, since artworks containing the same semantics as an artist's are usually rare. Hence, this paper aims to improve the state of the art by first applying semantic segmentation to both images and then transferring the style from the painting to a photo while preserving common semantic regions. The method is exemplified on Van Gogh's paintings, which are shown to be challenging to segment.
Citations: 0
High-Resolution NIR Prediction from RGB Images: Application to Plant Phenotyping
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897670
Ankit Shukla, Avinash Upadhyay, Manoj Sharma, V. Chinnusamy, Sudhir Kumar
Abstract: In contrast to conventional RGB cameras, near-infrared (NIR) spectroscopy provides images with rich information about the biological processes of plants. However, NIR spectroscopy is costly and produces low-resolution (LR) images. In this context, deep learning-based methods have recently been proposed in computer vision, and the development of phenomics facilities has enabled the generation of the large plant datasets these methods require. Motivated by these developments, we propose a novel attention-based pix-to-pix generative adversarial network (GAN), followed by a super-resolution (SR) module, to generate high-resolution (HR) NIR images from corresponding RGB images. An experiment extracting phenotypic data from the HR NIR images was also conducted to evaluate the method's efficacy from an agricultural perspective. Our proposed architecture achieved state-of-the-art performance in terms of MRAE and RMSE on the wheat plant multi-modality dataset.
Citations: 1
AI4EO Hyperview: A Spectralnet3d and Rnnplus Approach for Sustainable Soil Parameter Estimation on Hyperspectral Image Data
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897889
Claudius Zelenka, Andreas Lohrer, Mirjam Bayer, Peer Kröger
Abstract: The goal of the #Hyperview challenge is to use Hyperspectral Imaging (HSI) to predict the soil parameters potassium (K), phosphorus pentoxide (P2O5), magnesium (Mg), and pH. These parameters determine the need for fertilization in agriculture; with this knowledge, fertilizers can be applied in a targeted rather than prophylactic way, the latter being the current procedure of choice. In this context, we introduce two different approaches to this regression task, based on 3D CNNs with Huber-loss regression (SpectralNet3D) and on 1D RNNs. Both methods show distinct advantages, with a peak challenge metric score of 0.808 on the provided validation data.
Citations: 0
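The Huber loss behind the SpectralNet3D regression head is standard: quadratic for small residuals, linear beyond a threshold, which makes it robust to outlier soil measurements. A plain NumPy version (`delta` here is an illustrative threshold, not a value from the paper):

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Mean Huber loss: 0.5*r^2 for |r| <= delta, delta*(|r| - 0.5*delta) beyond."""
    r = np.abs(pred - target)
    quad = 0.5 * r ** 2                 # quadratic branch near zero
    lin = delta * (r - 0.5 * delta)     # linear branch for large residuals
    return np.where(r <= delta, quad, lin).mean()

small = huber_loss(np.array([0.5]), np.array([0.0]))  # inside delta: 0.5*0.25
large = huber_loss(np.array([2.0]), np.array([0.0]))  # outside delta: 1*(2-0.5)
```

The two branches meet with matching value and slope at |r| = delta, so the loss stays differentiable for gradient-based training.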
Strong-Weak Integrated Semi-Supervision for Unsupervised Domain Adaptation
2022 IEEE International Conference on Image Processing (ICIP), Pub Date: 2022-10-16, DOI: 10.1109/ICIP46576.2022.9897242
Xiaohu Lu, H. Radha
Abstract: Unsupervised domain adaptation (UDA) focuses on transferring knowledge learned in a labeled source domain to an unlabeled target domain, and semi-supervised learning is a proven strategy for improving UDA performance. In this paper, we propose a novel strong-weak integrated semi-supervision (SWISS) learning strategy for unsupervised domain adaptation. Under the proposed SWISS-UDA framework, a strong representative set of high-confidence but low-diversity target-domain samples and a weak representative set of low-confidence but high-diversity target-domain samples are updated constantly during training. At every iteration, both sets are fused randomly to generate an augmented strong-weak training batch with pseudo-labels. Moreover, a novel adversarial logit loss is proposed to reduce the intra-class divergence between source and target domains; it is back-propagated adversarially through a gradient reversal layer between the classifier and the rest of the network. Experimental results on two popular benchmarks, Office-Home and DomainNet, show the effectiveness of the proposed SWISS framework, with our method achieving the best performance on both.
Citations: 1
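The strong-weak batch construction can be caricatured as confidence-based sample selection: take the most confident target samples (strong) and the least confident ones (weak), pseudo-label both by argmax, and mix them. The fraction and selection rule below are illustrative assumptions, not the paper's update procedure:

```python
import numpy as np

def swiss_batch(probs, frac=0.25, rng=None):
    """Build a shuffled strong-weak pseudo-labeled batch (illustrative sketch).

    probs: (N, C) softmax outputs on target-domain samples.
    Returns sample indices and their argmax pseudo-labels.
    """
    rng = rng or np.random.default_rng(0)
    conf = probs.max(axis=1)            # prediction confidence per sample
    order = np.argsort(conf)            # ascending confidence
    n = max(1, int(frac * len(conf)))
    weak, strong = order[:n], order[-n:]  # least vs most confident
    idx = np.concatenate([strong, weak])
    rng.shuffle(idx)                    # random strong-weak fusion
    return idx, probs.argmax(axis=1)[idx]

probs = np.array([[0.9, 0.1], [0.6, 0.4], [0.01, 0.99], [0.55, 0.45]])
idx, labels = swiss_batch(probs, frac=0.25)
```

In the real framework the two sets would also be refreshed as the network's confidence evolves over training; this snapshot shows a single selection step.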