Salient Object Detection Based on Unified Graph Neural Network Joint Learning

Tiantian Wang, Yunbo Hu, Zheng Yan, Jiaqing Qiao, Bing Liu
{"title":"基于统一图神经网络联合学习的显著目标检测","authors":"Tiantian Wang, Yunbo Hu, Zheng Yan, Jiaqing Qiao, Bing Liu","doi":"10.1109/ICSMD57530.2022.10058426","DOIUrl":null,"url":null,"abstract":"In complex visual scene, the performance of existing deep convolutional neural network based methods of salient object detection still suffer from the loss of high-frequency visual information and global structure information of the object, which can be attributed to the weakness of convolutional neural network in capability of learning from the data in non-Euclidean space. To solve these problems, an end-to-end unified graph neural network joint learning framework is proposed, which realizes the joint learning process of salient edge features and salient region features. In this learning framework, we construct a multi-relations dynamic attention graph convolution operator, which captures non-Euclidean space global context structure information by enhancing message transfer between different graph nodes. Further, by introducing a graph attention fusion module, the full use of salient edge cues and salient region cues is achieved. Finally, by explicitly encoding the salient edge information to guide the feature learning of salient regions, salient regions in complex scenes can be located more accurately. The experiments on three public benchmark datasets show that our method has competitive detection results compared with the current mainstream deep convolutional neural network based salient object detection methods. More importantly, it uses fewer parameters and less computation, so it is a lightweight salient object detection model.","PeriodicalId":396735,"journal":{"name":"2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Salient Object Detection Based on Unified Graph Neural Network Joint Learning\",\"authors\":\"Tiantian Wang, Yunbo Hu, Zheng Yan, Jiaqing Qiao, Bing Liu\",\"doi\":\"10.1109/ICSMD57530.2022.10058426\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In complex visual scene, the performance of existing deep convolutional neural network based methods of salient object detection still suffer from the loss of high-frequency visual information and global structure information of the object, which can be attributed to the weakness of convolutional neural network in capability of learning from the data in non-Euclidean space. To solve these problems, an end-to-end unified graph neural network joint learning framework is proposed, which realizes the joint learning process of salient edge features and salient region features. In this learning framework, we construct a multi-relations dynamic attention graph convolution operator, which captures non-Euclidean space global context structure information by enhancing message transfer between different graph nodes. Further, by introducing a graph attention fusion module, the full use of salient edge cues and salient region cues is achieved. Finally, by explicitly encoding the salient edge information to guide the feature learning of salient regions, salient regions in complex scenes can be located more accurately. 
The experiments on three public benchmark datasets show that our method has competitive detection results compared with the current mainstream deep convolutional neural network based salient object detection methods. More importantly, it uses fewer parameters and less computation, so it is a lightweight salient object detection model.\",\"PeriodicalId\":396735,\"journal\":{\"name\":\"2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICSMD57530.2022.10058426\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSMD57530.2022.10058426","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In complex visual scenes, the performance of existing deep convolutional neural network (CNN) based salient object detection methods still suffers from the loss of high-frequency visual information and of the global structure information of the object, which can be attributed to the limited capability of convolutional neural networks to learn from data in non-Euclidean space. To address these problems, an end-to-end unified graph neural network joint learning framework is proposed, which realizes the joint learning of salient edge features and salient region features. Within this framework, we construct a multi-relation dynamic attention graph convolution operator, which captures global context structure information in non-Euclidean space by enhancing message passing between graph nodes. Furthermore, a graph attention fusion module is introduced to make full use of both salient edge cues and salient region cues. Finally, by explicitly encoding salient edge information to guide the feature learning of salient regions, salient regions in complex scenes can be located more accurately. Experiments on three public benchmark datasets show that our method achieves detection results competitive with current mainstream deep CNN based salient object detection methods. More importantly, it uses fewer parameters and less computation, making it a lightweight salient object detection model.
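The abstract names two architectural components, a multi-relation dynamic attention graph convolution operator and a graph attention fusion module, without giving implementation details. The sketch below is only an illustration of how such components could be written in PyTorch: the class names, the attention formulation, and the tensor shapes are assumptions made for clarity, not the authors' actual implementation.

```python
# Illustrative sketch only. All names, shapes, and the exact attention formulation
# are assumptions; the paper's own architecture may differ substantially.
from typing import List

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiRelationDynamicAttentionGraphConv(nn.Module):
    """Message passing over several relation graphs, with attention weights
    computed dynamically from the current node features (hypothetical design)."""

    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        # One linear projection per relation type.
        self.rel_proj = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)]
        )
        # Shared attention scorer over concatenated (source, target) features.
        self.att = nn.Linear(2 * out_dim, 1, bias=False)
        self.self_proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adjs: List[torch.Tensor]) -> torch.Tensor:
        # x: (N, in_dim) node features; adjs: one (N, N) {0,1} adjacency per relation.
        out = self.self_proj(x)
        n = x.size(0)
        for proj, adj in zip(self.rel_proj, adjs):
            h = proj(x)                                   # (N, out_dim)
            # Pairwise attention logits e_ij computed from [h_i ; h_j].
            hi = h.unsqueeze(1).expand(n, n, -1)          # (N, N, out_dim)
            hj = h.unsqueeze(0).expand(n, n, -1)          # (N, N, out_dim)
            e = self.att(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (N, N)
            # Mask non-edges, normalise over each node's neighbours.
            e = e.masked_fill(adj == 0, float("-inf"))
            alpha = torch.nan_to_num(torch.softmax(e, dim=-1))  # isolated nodes -> 0
            out = out + alpha @ h                         # aggregate messages
        return F.relu(out)


class GraphAttentionFusion(nn.Module):
    """Fuse salient-edge node features with salient-region node features via a
    learned gate (again, an assumed formulation for illustration)."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, region_feat: torch.Tensor, edge_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([region_feat, edge_feat], dim=-1))
        # Edge cues gate the region features, so boundaries guide region learning.
        return region_feat + g * edge_feat


if __name__ == "__main__":
    n_nodes, d_in, d_out, n_rel = 16, 32, 64, 3
    x = torch.randn(n_nodes, d_in)
    adjs = [(torch.rand(n_nodes, n_nodes) > 0.7).float() for _ in range(n_rel)]
    conv = MultiRelationDynamicAttentionGraphConv(d_in, d_out, n_rel)
    fuse = GraphAttentionFusion(d_out)
    region, edge = conv(x, adjs), conv(x, adjs)
    print(fuse(region, edge).shape)  # torch.Size([16, 64])
```

The gated residual fusion is one simple way to realize "explicitly encoding salient edge information to guide the feature learning of salient regions"; the paper may use a different fusion scheme.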