Latest Articles from 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics)

Dynamic Obstacle Avoidance Method for Carrier Aircraft Based on Deep Reinforcement Learning 基于深度强化学习的舰载机动态避障方法
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18637
Junxiao Xue, Xiangya Kong, Yibo Guo, Aiguo Lu, Jian Li, Xi Wan, Mingliang Xu
{"title":"Dynamic Obstacle Avoidance Method for Carrier Aircraft Based on Deep Reinforcement Learning","authors":"Junxiao Xue, Xiangya Kong, Yibo Guo, Aiguo Lu, Jian Li, Xi Wan, Mingliang Xu","doi":"10.3724/sp.j.1089.2021.18637","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18637","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43778362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Efficient 3D Object Detection of Indoor Scenes Based on RGB-D Video Stream 基于RGB-D视频流的室内场景三维目标高效检测
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18630
Miao Yongwei, Jiahui Chen, Xinjie Zhang, Ma Wenjuan, S. Sun
{"title":"Efficient 3D Object Detection of Indoor Scenes Based on RGB-D Video Stream","authors":"Miao Yongwei, Jiahui Chen, Xinjie Zhang, Ma Wenjuan, S. Sun","doi":"10.3724/sp.j.1089.2021.18630","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18630","url":null,"abstract":": For indoor object detection, the input complex scenes often have some defects such as incomplete RGB-D scanning data or mutual occlusion of its objects. Meanwhile, due to the limitations of frame in the RGB-D video stream. Using SUN RGB-D dataset to train the object detection network of key frame, the detection result of proposed method is accurate, and the overall detection time is greatly reduced if com-paring with the VoteNet based frame-by-frame detection scheme. Experimental results demonstrate that proposed method is effective and efficient.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41602686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
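The abstract above describes running the 3D detector only on key frames of the RGB-D stream rather than on every frame. Below is a minimal, hypothetical Python sketch of that idea: `is_key_frame` and `detect_3d_boxes` are illustrative names (the latter standing in for a VoteNet-style detector trained on SUN RGB-D), and the depth-difference criterion and threshold are assumptions, not the paper's key-frame rule.

```python
# Hedged sketch of key-frame-based detection over an RGB-D stream.
import numpy as np

def is_key_frame(prev_depth: np.ndarray, depth: np.ndarray, threshold: float = 0.05) -> bool:
    """Flag a frame as a key frame when its depth map differs enough from the last key frame."""
    return float(np.mean(np.abs(depth - prev_depth))) > threshold

def detect_on_stream(frames, detect_3d_boxes):
    """Run the (expensive) 3D detector only on key frames; reuse its boxes in between."""
    detections, last_key_depth, last_result = [], None, []
    for rgb, depth in frames:
        if last_key_depth is None or is_key_frame(last_key_depth, depth):
            last_result = detect_3d_boxes(rgb, depth)   # expensive network call on a key frame
            last_key_depth = depth
        detections.append(last_result)                   # intermediate frames reuse key-frame boxes
    return detections
```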
Information Hiding Scheme Based on Quantum Generative Adversarial Network 基于量子生成对抗网络的信息隐藏方案
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18617
Jia Luo, Rigui Zhou, Yaochong Li, Guangzhong Liu
{"title":"Information Hiding Scheme Based on Quantum Generative Adversarial Network","authors":"Jia Luo, Rigui Zhou, Yaochong Li, Guangzhong Liu","doi":"10.3724/sp.j.1089.2021.18617","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18617","url":null,"abstract":": Due to the insecurity of quantum image information hiding technology in the face of statisti-cal-based steganalysis algorithm detection, an information hiding scheme based on quantum generative adversarial network (QGAN) is proposed. This scheme first uses the mapping rules to map the secret information into the single qubit gate to prepare for the input state of the parameterized quantum circuit of the gen-erator G . Then the stego quantum image is generated by the generating circuit in QGAN. Finally, the sample data obtained by measuring the stego image and the real data are used as the input of the discriminator D . The iterative optimization is performed so that G can obtain a stego image close to the target image. The ex-perimental results show that proposed scheme can generate stego images that fit the target image distribution well and achieve the non-embedded hiding of information.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46509410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
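The abstract above maps secret information into single-qubit gates that prepare the input state of the generator's parameterized circuit. The following minimal sketch illustrates only that preparation step, under stated assumptions: one secret bit per qubit and RY rotations as the mapping rule, which are illustrative choices rather than the paper's actual mapping.

```python
# Minimal sketch (assumption: one secret bit per qubit, RY rotations as the mapping rule).
import numpy as np

def ry(theta: float) -> np.ndarray:
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def encode_secret_bits(bits):
    """Map each secret bit to a single-qubit state RY(theta)|0>, theta in {0, pi/2} (hypothetical rule)."""
    zero = np.array([1.0, 0.0])
    return [ry(np.pi / 2 if b else 0.0) @ zero for b in bits]

# Example: the resulting states would serve as the input register of the generator circuit.
print(encode_secret_bits([1, 0, 1]))
```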
Automatic Poetry Generation Based on Ancient Chinese Paintings 基于中国古代绘画的诗歌自动生成
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18633
Jiazhou Chen, Keyu Huang, Yingchaojie Feng, Wei Zhang, Siwei Tan, Wei Chen
{"title":"Automatic Poetry Generation Based on Ancient Chinese Paintings","authors":"Jiazhou Chen, Keyu Huang, Yingchaojie Feng, Wei Zhang, Siwei Tan, Wei Chen","doi":"10.3724/sp.j.1089.2021.18633","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18633","url":null,"abstract":": The Chinese painting poem is a very special art form in the history of world art. It combines ancient Chinese literature and fine arts, complements each other and blends together. In order to obtain com-puter-based painting poetry, an automatic poetry generation is proposed based on ancient Chinese paintings. It extracts multiple sentences from ancient paintings, which improves the literary expression ability of ancient poems in paintings. Firstly, a multi-sentence annotation data set for ancient paintings is established, and then semantic features of ancient paintings are extracted through an improved image captioning method. Finally, these modern text descriptions are converted into a four-character poem through a two-way LSTM encoding and decoding framework. The experiment on the paintings of the Song Dynasty demonstrates that the coherent and prosodic poems generated by our method are consistent with the original content and con-text of the ancient paintings. User study shows that the content consistency and user satisfaction of our method are better than keyword-based methods, which proves the validity of the proposed method","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"69686226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
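The abstract above converts modern text descriptions into a poem through a bidirectional LSTM encoder-decoder framework. The sketch below shows one plausible PyTorch realization of such a framework; the vocabulary size, dimensions, and the way the encoder states initialize the decoder are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of a bidirectional-LSTM encoder / LSTM decoder for text-to-poem generation.
import torch
import torch.nn as nn

class Seq2SeqPoem(nn.Module):
    def __init__(self, vocab_size=5000, emb=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        # Bidirectional encoder over the modern-text description of the painting.
        self.encoder = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        # Decoder generates the poem token by token.
        self.decoder = nn.LSTM(emb, hidden * 2, batch_first=True)
        self.out = nn.Linear(hidden * 2, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        _, (h, c) = self.encoder(self.embed(src_tokens))
        # Concatenate forward/backward final states to initialise the decoder.
        h0 = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
        c0 = torch.cat([c[0], c[1]], dim=-1).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(tgt_tokens), (h0, c0))
        return self.out(dec_out)   # logits over the poem vocabulary

model = Seq2SeqPoem()
logits = model(torch.randint(0, 5000, (2, 20)), torch.randint(0, 5000, (2, 28)))
```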
L0 Optimization Using Laplacian Operator for Image Smoothing 基于拉普拉斯算子的图像平滑L0优化
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18627
Menghang Li, Shanshan Gao, Huijian Han, Caiming Zhang
{"title":"L0 Optimization Using Laplacian Operator for Image Smoothing","authors":"Menghang Li, Shanshan Gao, Huijian Han, Caiming Zhang","doi":"10.3724/sp.j.1089.2021.18627","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18627","url":null,"abstract":": Image smoothing often leads to the loss of image details and distortion because of over smoothing. An image smoothing method is presented which combines 0 L optimization and the second-order Laplacian operator. Laplacian operator is used to constrain the color change of the image, and 0 L optimization is used to minimize the change of the color gradient, so as to achieve the purpose of smooth color transition of the image. In order to keep the edge features of the image better in the process of smoothing, Sobel operator is introduced as the regular term of energy function, and the alternating solution strategy is adopted to solve the energy function. In the ex-periment, using the classical image in the field of image smoothing and the image obtained through network en-gine, the proposed method is compared qualitatively and quantitatively with 6 smoothing methods and 7 denois-第 ing methods. The experimental results show that the proposed method can reduce the loss of image details while smoothing the image, effectively deal with the phenomenon of stepped edges and color block distribution in the image smoothing, and effectively remove various noises in the image. And the peak signal-to-noise ratio and run-ning time of the proposed method are improved compared with other methods.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42793534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
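The abstract above combines an L0 term on the color gradient with a second-order Laplacian constraint and a Sobel regularization term, solved by alternating minimization. The energy below is a hedged reconstruction of what such a formulation could look like, written only to make the ingredients concrete; the exact terms and weights of the paper are not reproduced here.

```latex
% One plausible form of the energy (assumption, not the paper's exact formula):
% S is the smoothed image, I the input, \Delta the Laplacian, \nabla the gradient,
% \phi_{\mathrm{Sobel}} an edge-preserving term built from Sobel responses.
\min_{S}\; \sum_{p}\Big[(S_p - I_p)^2
      + \beta\,\big(\Delta S_p - \Delta I_p\big)^2
      + \gamma\,\phi_{\mathrm{Sobel}}(S_p)\Big]
      + \lambda\,\#\{\,p : |\nabla S_p| \neq 0\,\}
```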
A Real-Time Semantic Segmentation Approach for Autonomous Driving Scenes 一种自动驾驶场景的实时语义分割方法
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18631
Feiwei Qin, Xiyue Shen, Yong Peng, Yanli Shao, Wenqiang Yuan, Zhongping Ji, Jing Bai
{"title":"A Real-Time Semantic Segmentation Approach for Autonomous Driving Scenes","authors":"Feiwei Qin, Xiyue Shen, Yong Peng, Yanli Shao, Wenqiang Yuan, Zhongping Ji, Jing Bai","doi":"10.3724/sp.j.1089.2021.18631","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18631","url":null,"abstract":"An important part of autonomous driving is the perception of the driving environment of the car, which has created a strong demand for high precision semantic segmentation algorithms that can be run in real time on low-power mobile devices. However, when analyzing the factors that affect the accuracy and speed of the semantic segmentation network, it can be found that in the structure of the previous semantic segmentation algorithm, spatial information and context features are difficult to take into account at the same time, and using two networks to obtain spatial information and context information separately will increase the amount of calculation and storage. Therefore, a new structure is proposed that divides the spatial path and context path from the network based on the residual structure, and a two-path real-time semantic segmentation network is designed based on this structure. The network contains a feature fusion module and an attention refinement module, which are used to realize the function of fusing the multi-scale features of two 第 7 期 秦飞巍, 等: 无人驾驶中的场景实时语义分割方法 1027 paths and optimizing the output results of context path. The network is based on the PyTorch framework and uses NVIDIA 1080Ti graphics cards for experiments. On the road scene data set Cityscapes, mIoU reached 78.8%, and the running speed reached 27.5 fps.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49617302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
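The abstract above names a feature fusion module and an attention refinement module in the two-path network. The PyTorch snippet below sketches a generic attention-refinement-style block (channel attention from global average pooling), which matches the common usage of the term; the channel count and exact layers are illustrative, not the paper's design.

```python
# Hedged sketch of an attention-refinement-style module (channel-wise attention).
import torch
import torch.nn as nn

class AttentionRefinement(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context vector
            nn.Conv2d(channels, channels, 1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),                     # per-channel attention weights
        )

    def forward(self, x):
        return x * self.attn(x)               # re-weight context-path features

arm = AttentionRefinement(256)
y = arm(torch.randn(1, 256, 32, 64))          # output has the same shape as the input
```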
Dongba Painting Few-Shot Classification Based on Graph Neural Network 基于图神经网络的东巴绘画少镜头分类
计算机辅助设计与图形学学报 Pub Date : 2021-07-01 DOI: 10.3724/sp.j.1089.2021.18618
Ke Li, Wenhua Qian, Chengxue Wang, Dan Xu
{"title":"Dongba Painting Few-Shot Classification Based on Graph Neural Network","authors":"Ke Li, Wenhua Qian, Chengxue Wang, Dan Xu","doi":"10.3724/sp.j.1089.2021.18618","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18618","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48234455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-Real-Time Bearing Fault Diagnosis Method Combined Image Method 半实时轴承故障诊断方法——组合图像法
计算机辅助设计与图形学学报 Pub Date : 2021-06-01 DOI: 10.3724/sp.j.1089.2021.18579
Pengzhi Wang, Mandun Zhang, Yahong Han, Xu Zhao, Zhengjun Wang
{"title":"Semi-Real-Time Bearing Fault Diagnosis Method Combined Image Method","authors":"Pengzhi Wang, Mandun Zhang, Yahong Han, Xu Zhao, Zhengjun Wang","doi":"10.3724/sp.j.1089.2021.18579","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18579","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46889049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatial Positioning Method of Vehicle in Cross-Camera Traffic Scene 跨摄像头交通场景中车辆空间定位方法
计算机辅助设计与图形学学报 Pub Date : 2021-06-01 DOI: 10.3724/sp.j.1089.2021.18612
Wen Wang, Xinyao Tang, Chaoyang Zhang, Huansheng Song, Hua Cui
{"title":"Spatial Positioning Method of Vehicle in Cross-Camera Traffic Scene","authors":"Wen Wang, Xinyao Tang, Chaoyang Zhang, Huansheng Song, Hua Cui","doi":"10.3724/sp.j.1089.2021.18612","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18612","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47016303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Pix2Pix-Based Grayscale Image Coloring Method 基于Pix2Pix的灰度图像着色方法
计算机辅助设计与图形学学报 Pub Date : 2021-06-01 DOI: 10.3724/sp.j.1089.2021.18596
Hong Li, Qiaoxue Zheng, Jing Zhang, Zhuo-Ming Du, Zhanli Li, Baosheng Kang
{"title":"Pix2Pix-Based Grayscale Image Coloring Method","authors":"Hong Li, Qiaoxue Zheng, Jing Zhang, Zhuo-Ming Du, Zhanli Li, Baosheng Kang","doi":"10.3724/sp.j.1089.2021.18596","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18596","url":null,"abstract":": In this study, a grayscale image coloring method combining the Pix2Pix model is proposed to solve the problem of unclear object boundaries and low image coloring quality in colorization neural net-works. First, an improved U-Net structure, using eight down-sampling and up-sampling layers, is adopted to extract features and predict the image color, which improves the network model’s ability to extract deep image features. Second, the coloring image quality is tested under different loss functions, 1 L loss and smooth 1 L loss, to measure the distance between the generated image and ground truth. Finally, gradient penalty is added to improve the network stability of the training process. The gradient of each input data is penalized by constructing a new data distribution between the generated and real image distribution to limit the dis-criminator gradient. In the same experimental environment, the Pix2Pix model and summer2winter data are utilized for comparative analysis. The experiments demonstrate that the improved U-Net using the smooth 1 L loss as generator loss generates better colored images, whereas the 1 L loss better maintains the structural information of the image. Furthermore, the gradient penalty accelerates the model convergence speed, and improves the model stability and image quality. The proposed image coloring method learns deep image features and reduces the image blurs. The model raises the image quality while effectively maintaining the image structure similarity.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47201506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
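The abstract above adds a gradient penalty computed on samples drawn between the generated and real image distributions to constrain the discriminator's gradient. The sketch below shows a standard WGAN-GP-style implementation of that mechanism in PyTorch; the penalty weight and how it enters the total loss are omitted, and the discriminator is a placeholder.

```python
# Hedged sketch of a WGAN-GP-style gradient penalty on real/fake interpolations.
import torch

def gradient_penalty(discriminator, real, fake):
    """Penalize deviation of the discriminator's gradient norm from 1 on interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = discriminator(mixed)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=mixed, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```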