Latest articles: 中国图象图形学报 (Journal of Image and Graphics)

Multi-loss siamese convolutional neural network for Chinese calligraphy font and style classification
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.220252
Wenyan Cheng, Zhou Yong, Chengying Tao, Liu Li, Zhigang Li, Taorong Qiu
{"title":"Multi-loss siamese convolutional neural network for Chinese calligraphy font and style classification","authors":"Wenyan Cheng, Zhou Yong, Chengying Tao, Liu Li, Zhigang Li, Taorong Qiu","doi":"10.11834/jig.220252","DOIUrl":"https://doi.org/10.11834/jig.220252","url":null,"abstract":": Objective Chinese calligraphy can be seen as one of the symbolized icons in Chinese culture. Nowadays , Machine learning and pattern recognition techniques - derived calligraphy artworks are required for digitalization and preser‐ vation intensively. Our research is mainly focused on Chinese calligraphy classification in related to such font and style classification. However , the difference between calligraphy font and calligraphy style is often distorted. To resolve the problem of style classification , first , we distinguish the difference between font and style. Then , we illustrate a novel multi - loss siamese convolutional neural network to cope with the two mentioned problems simultaneously. Method The difference","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"37 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78818711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mutual attention mechanism-driven lightweight semantic segmentation network
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.211127
Li Fengyong, Ye Bin, Qin Chuan
{"title":"Mutual attention mechanism-driven lightweight semantic segmentation network","authors":"Li Fengyong, Ye Bin, Qin Chuan","doi":"10.11834/jig.211127","DOIUrl":"https://doi.org/10.11834/jig.211127","url":null,"abstract":"","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"62 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72869604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Consensus graph learning-based self-supervised ensemble clustering
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.210947
Geng Weifeng, Wang Xiang, Jing Liping, Yu Jian
{"title":"Consensus graph learning-based self-supervised ensemble clustering","authors":"Geng Weifeng, Wang Xiang, Jing Liping, Yu Jian","doi":"10.11834/jig.210947","DOIUrl":"https://doi.org/10.11834/jig.210947","url":null,"abstract":"","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83779974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lightweight object detection model in remote sensing image by combining rotation box and attention mechanism
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.220839
Li Zhaohui, An Jintang, Jia Hongyu, Fang Yan
{"title":"Lightweight object detection model in remote sensing image by combining rotation box and attention mechanism","authors":"Li Zhaohui, An Jintang, Jia Hongyu, Fang Yan","doi":"10.11834/jig.220839","DOIUrl":"https://doi.org/10.11834/jig.220839","url":null,"abstract":"目的 遥感图像目标检测在国防安全、智能监测等领域扮演着重要的角色。面对遥感图像中排列密集且方向任意分布的目标,传统水平框目标检测不能实现精细定位,大型和超大型的目标检测网络虽然有强大表征学习能力,但是忽略了模型准确率与计算量、参数量之间的性价比,也满足不了实时检测的要求,庞大的参数量和计算量在模型部署上也非常受限,针对以上问题,设计了一种轻量级的旋转框遥感图像目标检测模型(YOLO-RMV4)。方法 对原 MobileNetv3 网络进行改进,在特征提取网络中加入性能更好的通道注意力机制模块(efficient channelattention,ECA),并且对网络规模进行适当扩展,同时加入路径聚合网络(path aggregation network,PANet),对主干网络提取特征进行多尺度融合,为网络提供更丰富可靠的目标特征。网络检测头中则采用多尺度检测技术,来应对不同尺寸的目标物体,检测头中的角度预测加入了环形圆滑标签(circular smooth label,CSL),将角度回归问题转换为分类问题,从而使预测角度和真实角度之间的距离可以衡量。结果 将提出的检测模型在制备的 AVSP(aerialimages of vehicle ship and plane)数据集上进行实验验证,并对主流的 7 种轻量级网络模型进行了对比实验,相比RYOLOv5l,该模型大小(5.3 MB)仅为 RYOLOv5(l 45.3 MB)的 1/8,平均精度均值(mean average precision,mAP)提高了 1.2%,平均召回率(average recall,AR)提高了 1.6%。并且 mAP 和 AR 均远高于其他的轻量级网络模型。同时也对各个改进模块进行了消融实验,验证了不同模块对模型性能的提升程度。结论 本文提出的模型在轻量的网络结构下辅以多尺度融合和旋转框检测,使该模型在极有限参数量下实现实时推理和高精度检测。;Objective Remote sensing image object detection plays an important role in military security, maritime traffic supervision, intelligent monitoring, and other fields.Remote sensing images are different from natural images.Most remote sensing images are taken at altitudes ranging from several kilometers to tens of thousands of meters.Therefore, the scale of target objects in remote sensing images is large.Most of the target objects are small, such as small vehicles.The other target objects are huge, such as ships.The angles of the objects in the remote sensing images are distributed arbitrarily because of the shooting angle.Therefore, this scenario is a huge challenge for the feature extraction network in remote sensing image target detection, particularly in complex backgrounds.Given the continuous improvement in the computing power of hardware devices and the rapid development of deep learning theory, large and ultralarge object detection networks have been continuously proposed in recent years to improve detection accuracy.Although these detection networks have strong representation learning capabilities, they ignore the cost-effectiveness gained from the relationship of detection accuracy with model calculation amount and the number of parameters.Moreover, real-time detection requirements are difficult to achieve, and the number of parameters and amount of calculation are very limited in model deployment.In addition, most of the general target detection models are designed for natural field datasets.The detection effect in remote sensing image target detection is unsatisfactory, particularly for densely arranged objects.The traditional horizontal box object detection cannot achieve precise detection, such as ships in port and cars in parking lots.Aiming at the above problems, a lightweight rotating box remote sensing image object detection model (YOLO-RMV4)is designed.Method In the experiment, the open-source datasets DOTA2.0, FAIR1M, and HRSC2016 are used as the basic datasets.Moreover, four common vehicles, including a ship, a plane, a small vehicle, and a large vehicle, are selected as objects.A aerial images of vehicle ship and ","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135599819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A dual of Transformer features-related map-intelligent generation method
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.220887
Fang Zheng, Fu Ying, Liu Lixiong
{"title":"A dual of Transformer features-related map-intelligent generation method","authors":"Fang Zheng, Fu Ying, Liu Lixiong","doi":"10.11834/jig.220887","DOIUrl":"https://doi.org/10.11834/jig.220887","url":null,"abstract":"目的 现有的地图智能生成技术没有考虑到地图生成任务存在的地理要素类内差异性和地理要素域间差异性,这使得生成的地图质量难以满足实际需要。针对地理要素类内差异性和地理要素域间差异性,提出了一种Transformer特征引导的双阶段地图智能生成方法。方法 首先基于最新的Transformer网络,设计了一个基于该网络的特征提取模块,该模块提取遥感图像中的地理要素特征用于引导地图生成,解决了地理要素类内差异性导致的地图生成困难的问题。然后设计双阶段生成框架,该框架具备两个生成对抗网络,第1个生成对抗网络为初步生成对抗网络,利用遥感图像和Transformer特征得到初步的地图图像;第2个生成对抗网络为精修生成对抗网络利用初步地图图像生成高质量的精修地图图像,缓解了地理要素域间差异性导致的地图地理要素生成不准确问题。结果 在AIDOMG(aerial image dataset for online map generation)数据集上的9个区域进行了实验,与10种经典的和最新方法进行了比较,提出方法取得了最优的结果。其中,在海口区域,相比于Creative GAN方法,FID (Frechet inception distance)值降低了16.0%,WD (Wasserstein distance)降低了4.2%,1-NN (1-nearest neighbor)降低了5.9%;在巴黎区域,相比于Creative GAN方法,FID值降低了2.9%,WD降低了1.0%,1-NN降低了2.1%。结论 提出的Transformer特征引导的双阶段地图智能生成方法通过高质量的Transformer特征引导和双阶段生成框架解决了地理要素类内差异性和地理要素域间差异性所带来的地图生成质量较差的问题。;Objective Map intelligent generation technique is focused on generating map images quickly and cost efficiently. For existing intelligent map generation technique,to get quick-responsed and low-cost map generation,remote sensing image is taken as the input,and its generative adversarial network(GAN) is used to generate the corresponding map image. Inevitably,it is challenged that the intra-class differences within geographical elements in remote sensing images and the differences of geographical elements between domains in the map generation task are still not involved in. The intra-class difference of geographical elements refers that similar geographical elements in remote sensing images have several of appearances,which are difficult to be interpreted. Geographical elements segmentation is required for map generation in relevance to melting obvious intra-class differences into corresponding categories. The difference of geographical elements between different domains means that the corresponding geographical elements in remote sensing images and map images are not exactly matched well. For example,the edges of vegetation elements in remote sensing images are irregular, while the edges of vegetation elements in map images are flat. Another challenge for map generation is to generate and keep consistency to the features of map elements. Aiming at the intra-class difference of geographical elements and the superposition of geographical elements,we develop a dual of map-intelligent generation method based on Transformer features. Method The model consists of three sorts of modules relevant to feature extraction,preliminary and refined generative adversarial contexts. First,feature extraction module is developed based on the latest Transformer network. It consists of a backbone and segmentation branch in terms of Swin-Transformer structure. Self-attention mechanism based Transformer can be used to construct the global relationship of the image,and it has a larger receptive field and it can extract feature information effectively. The segmentation branch is composed of a pyramid pooling module(PPM) and a feature pyramid network(FPN). 
To get more effective geographic element features,feature pyramid is employ","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"2013 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135103457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
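The two-stage wiring described above (a preliminary GAN conditioned on the remote sensing image plus extracted geographic-element features, then a refinement GAN over the preliminary map) can be sketched as follows. This is a minimal, hypothetical PyTorch skeleton, assuming the element features are upsampled to image resolution and concatenated channel-wise; the actual method uses a Swin-Transformer extractor with PPM/FPN branches and adversarial discriminators, none of which are shown.

```python
import torch
import torch.nn as nn

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.InstanceNorm2d(cout), nn.ReLU(inplace=True))

class TwoStageGenerator(nn.Module):
    """Stage 1 drafts a map from the RS image and element features;
    stage 2 refines the draft with the RS image still in view."""
    def __init__(self, feat_channels: int):
        super().__init__()
        self.coarse = nn.Sequential(conv_block(3 + feat_channels, 64),
                                    conv_block(64, 64),
                                    nn.Conv2d(64, 3, 1), nn.Tanh())
        self.refine = nn.Sequential(conv_block(3 + 3, 64),
                                    conv_block(64, 64),
                                    nn.Conv2d(64, 3, 1), nn.Tanh())

    def forward(self, rs_image, geo_feat):
        # rs_image: (B, 3, H, W); geo_feat: (B, feat_channels, H, W)
        coarse_map = self.coarse(torch.cat([rs_image, geo_feat], dim=1))
        refined_map = self.refine(torch.cat([rs_image, coarse_map], dim=1))
        return coarse_map, refined_map
```

Each stage would be trained against its own discriminator; the split lets the first stage absorb intra-class appearance variation while the second flattens domain-specific rendering details such as vegetation edges.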
Citations: 0
Scene-constrained spatial-temporal graph convolutional network for pedestrian trajectory prediction
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.221027
Chen Haodong, Ji Qingge
{"title":"Scene-constrained spatial-temporal graph convolutional network for pedestrian trajectory prediction","authors":"Chen Haodong, Ji Qingge","doi":"10.11834/jig.221027","DOIUrl":"https://doi.org/10.11834/jig.221027","url":null,"abstract":"目的 针对行人轨迹预测问题,已有的几种结合场景信息的方法基于合并操作通过神经网络隐式学习场景与行人运动的关联,无法直观地解释场景对单个行人运动的调节作用。除此之外,基于图注意力机制的时空图神经网络旨在学习全局模式下行人之间的社会交互,在人群拥挤场景下精度不佳。鉴于此,本文提出一种场景限制时空图卷积神经网络(scene-constrained spatial-temporal graph convolutional neural network,Scene-STGCNN)。方法 Scene-STGCNN由运动模块、基于场景的微调模块、时空卷积和时空外推卷积组成。运动模块以时空图卷积提取局部行人时空特征,避免了时空图神经网络在全局模式下学习交互的局限性。基于场景的微调模块将场景信息嵌入为掩模矩阵,用来调节运动模块生成的中间运动特征,具备实际场景下的物理解释性。通过最小化核密度估计下真实轨迹的负对数似然,增强Scene-STGCNN输出的多模态性,减少预测误差。结果 实验在公开数据集ETH (包含ETH和HOTEL)和UCY (包含UNIV、ZARA1和ZARA2)上与其他7种主流方法进行比较,就平均值而言,相对于性能第2的模型,平均位移误差(average displacement error,ADE)值减少了12%,最终位移误差(final displacement error,FDE)值减少了9%。在同样的数据集上进行了消融实验以验证基于场景的微调模块的有效性,结果表明基于场景的微调模块能有效建模场景对行人轨迹的调节作用,从而减小算法的预测误差。结论 本文提出的场景限制时空图卷积网络能有效融合场景和行人运动,在学习局部模式下行人交互的同时基于场景特征对轨迹特征做实时性调节,相比于其他主流方法,具有更优的性能。;Objective Pedestrian trajectory prediction is essential for such domains like unmanned vehicles,security surveillance,and social robotics nowadays. Trajectory prediction is beneficial for computer systems to perform better decision making and planning to some extent. Current methods are focused on pedestrian trajectory information,and scene elements-related spatial constraints on pedestrian motion in the same space are challenged to explain human-to-human social interactions further,in which future location of pedestrians cannot be located in building walls,and pedestrians at building corners undergo large velocity direction deflections due to cornering behavior. The pathways can be focused on the integrated scene information,for which the scene image is melted into a one-dimensional vector and merged with the trajectory information. Two-dimensional spatial signal of the scene will be distorted and it cannot be intuitively explained according to the modulating effect of the scene on pedestrian motion. To build a spatiotemporal graph representation of pedestrians,recent graph neural network(GNN) is used to develop a method based on graph attention network(GAT),in which pedestrians are as the graph nodes,trajectory features as the node attributes,and pedestrians-between spatial interactions are as the edges in the graph. These sorts of methods can be used to focus on pedestrians-between social interactions in the global scale. However,for crowded scenes,graph attention mechanism may not be able to assign appropriate weights to each pedestrian accurately,resulting in poor algorithm accuracy. To resolve the two problems mentioned above,we develop a scene constraints-based spatiotemporal graph convolutional network,called Scene-STGCNN,which aggregates pedestrian motion status with a graph convolutional neural network for local interactions,and it achieves accurate aggregation of pedestrian motion status with a small number of parameters. At the same time,we design a scene-based fine-tuning module to explicitly model the modulating effect of scenes on pedestrian motion with the information of neighboring scene changes as input. 
Method Scene-STGCNN consists of a motion module,a scene-based fine-tunin","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135104243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
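The scene-based fine-tuning module is the most distinctive piece: scene information is embedded as a mask matrix that gates the motion module's intermediate features, which is what makes the scene's effect inspectable. The sketch below is one plausible reading, with an assumed tensor layout (channels by time steps by pedestrians) and an invented two-layer mask network; the paper's actual embedding is not specified in this listing.

```python
import torch
import torch.nn as nn

class SceneMaskModulation(nn.Module):
    """Embed a scene map into a mask in (0, 1) that gates intermediate
    motion features element-wise; a minimal reading of the module."""
    def __init__(self, scene_channels: int, feat_channels: int):
        super().__init__()
        self.mask_net = nn.Sequential(
            nn.Conv2d(scene_channels, feat_channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 1),
            nn.Sigmoid(),                     # mask entries in (0, 1)
        )

    def forward(self, motion_feat, scene_map):
        # motion_feat: (B, C, T, N); scene_map: (B, S, T, N), scene patch
        # features sampled along each pedestrian's trajectory (assumed).
        mask = self.mask_net(scene_map)
        return motion_feat * mask             # scene-gated motion features
```

Because the mask is an explicit tensor in (0, 1), it can be visualized directly to show where the scene suppresses or passes through motion features, which is the interpretability argument made in the abstract.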
Citations: 0
Ship hull number detection and recognition under sparse samples
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.211167
Hong Hanyu, Chen Bingchuan, Ma Lei, Zhang Biyin
{"title":"Ship hull number detection and recognition under sparse samples","authors":"Hong Hanyu, Chen Bingchuan, Ma Lei, Zhang Biyin","doi":"10.11834/jig.211167","DOIUrl":"https://doi.org/10.11834/jig.211167","url":null,"abstract":"","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"118 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87645385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recent progress in person re-ID
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.230022
Yongfei Zhang, Hangyuan Yang, Zhang Yujia, Zhaopeng Dou, Shengcai Liao, Weishi Zheng, Shiliang Zhang, Ye Mang, Yichao Yan, Junjie Li, Shengjin Wang
{"title":"Recent progress in person re-ID","authors":"Yongfei Zhang, Hangyuan Yang, Zhang Yujia, Zhaopeng Dou, Shengcai Liao, Weishi Zheng, Shiliang Zhang, Ye Mang, Yichao Yan, Junjie Li, Shengjin Wang","doi":"10.11834/jig.230022","DOIUrl":"https://doi.org/10.11834/jig.230022","url":null,"abstract":"","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"47 31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84199909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Open set text recognition technology
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.230018
Yang Chun, Liu Chang, Fang Zhiyu, Han Zheng, Cheng Liu, Xucheng Yin
{"title":"Open set text recognition technology","authors":"Yang Chun, Liu Chang, Fang Zhiyu, Han Zheng, Cheng Liu, Xucheng Yin","doi":"10.11834/jig.230018","DOIUrl":"https://doi.org/10.11834/jig.230018","url":null,"abstract":"","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76922757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Survey of imitation learning: tradition and new advances
中国图象图形学报 Pub Date: 2023-01-01 DOI: 10.11834/jig.230028
Zhang Chao, Bai Wensong, Du Xin, Liu Weijie, Zhou Chenhao, Qian Hui
{"title":"Survey of imitation learning: tradition and new advances","authors":"Zhang Chao, Bai Wensong, Du Xin, Liu Weijie, Zhou Chenhao, Qian Hui","doi":"10.11834/jig.230028","DOIUrl":"https://doi.org/10.11834/jig.230028","url":null,"abstract":"","PeriodicalId":36336,"journal":{"name":"中国图象图形学报","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75751425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0