{"title":"t-SNE for Complex Multi-Manifold High-Dimensional Data","authors":"Rongzhen Bian, Jian Zhang, Liang Zhou, Peng Jiang, Baoquan Chen, Yunhai Wang","doi":"10.3724/sp.j.1089.2021.18806","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18806","url":null,"abstract":"To address the problem that t-SNE cannot distinguish well between multiple mutually intersecting manifolds, a visual dimensionality reduction method is proposed. Building on t-SNE, the Euclidean metric and local PCA are combined when computing the high-dimensional probabilities so that different manifolds can be told apart; the standard t-SNE gradient solver can then be applied directly to obtain the embedding. Finally, three synthetic datasets and two real datasets are used to test the proposed method, quantitatively evaluating both the separation between different manifolds and the degree of neighborhood preservation within each manifold. The results show that the proposed method handles multi-manifold data better and preserves the neighborhood structure of each manifold well.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45016896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
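The modified high-dimensional affinity the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `affinity`, the mixing weight `alpha`, and the assumption that each point's local principal direction (from a local PCA over its neighbors) is precomputed are all mine. It only shows the core idea: a standard Gaussian distance term is down-weighted when two points' local tangent directions disagree, so points on different intersecting manifolds get lower affinity even when they are close in Euclidean distance.

```python
import math

def affinity(xi, xj, vi, vj, sigma=1.0, alpha=0.5):
    """Pairwise high-dimensional affinity mixing a Gaussian distance
    term (as in plain t-SNE) with a local-PCA alignment term.
    xi, xj: the two points; vi, vj: their unit local principal directions."""
    # Euclidean term, identical in spirit to standard t-SNE
    d2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    gauss = math.exp(-d2 / (2.0 * sigma ** 2))
    # Local-PCA term: |cos| of the angle between the two local principal
    # directions; tangents of different manifolds at an intersection tend
    # to be misaligned, which shrinks the affinity across manifolds
    align = abs(sum(a * b for a, b in zip(vi, vj)))
    return gauss * (alpha + (1.0 - alpha) * align)
```

With these affinities in place of the usual Gaussian kernel, the ordinary t-SNE gradient descent can be reused unchanged, which matches the abstract's claim that the standard solver applies directly.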
{"title":"Multi-Focus Image Fusion Based on Generative Adversarial Network","authors":"Liubing Jiang, Dian Zhang, Bo Pan, Peng Zheng, L. Che","doi":"10.3724/sp.j.1089.2021.18770","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18770","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42451378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-in-the-Loop Based Online Handwriting Mathematical Expressions Recognition","authors":"Wenhui Kang, Jin Huang, Feng Tian, Xiangmin Fan, Jie Liu, G. Dai","doi":"10.3724/sp.j.1089.2021.18796","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18796","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44590528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Freehand-Sketched Part Recognition Using VGG-CapsNet","authors":"Zhongliang Yang, Ruihong Huang, Yumiao Chen, Song Zhang, Xinhua Mao","doi":"10.3724/sp.j.1089.2021.18774","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18774","url":null,"abstract":"To solve the problem that existing CAD systems struggle to accurately match the corresponding parts from freehand sketches during conceptual design, a recognition model for freehand part sketches, VGG-CapsNet, is proposed, combining a pre-trained network (VGG) with a capsule network (CapsNet). Five designers are recruited to sketch parts, and a dataset covering 23 categories of freehand part sketches, including standard and non-standard parts, is built. Between-group and within-group experiments are designed, and VGG-CapsNet recognition models are constructed for each. The recognition results of the VGG-CapsNet models are compared with those of the rVGG-13 and rCNN-13 models. The experimental results show that the mean accuracy of the VGG-CapsNet model is higher than that of the other two models, providing technical support for the retrieval and reuse of part design knowledge.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42714045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
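The capsule half of a VGG-CapsNet pipeline rests on the `squash` non-linearity from the original CapsNet design, which lets a capsule vector's length act as a class probability. A small pure-Python sketch of that standard function (the setting and names are mine, not the authors' code):

```python
import math

def squash(v, eps=1e-12):
    """CapsNet activation: preserves a capsule vector's direction and
    maps its length into [0, 1), so length can encode probability."""
    norm_sq = sum(a * a for a in v)
    norm = math.sqrt(norm_sq)
    if norm < eps:
        # zero vector stays zero (avoids division by zero)
        return [0.0 for _ in v]
    scale = norm_sq / (1.0 + norm_sq)  # short vectors shrink toward 0
    return [scale * a / norm for a in v]  # long vectors approach length 1
```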
{"title":"Automatic Measurement of Elongation at Break of Cable Sheath Based on Binocular Vision","authors":"G. Zhang, Jun Gong, Junsong Chen, Zhidong Zhang, Ce Zhu, Kai Liu","doi":"10.3724/sp.j.1089.2021.18779","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18779","url":null,"abstract":"","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45781114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ship Target Recognition Under Different Sunlight Intensity","authors":"Kun Liu, L. Mi","doi":"10.3724/sp.j.1089.2021.18777","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18777","url":null,"abstract":"In surface target monitoring, the clarity of a ship target often varies with the reflection intensity of the sea surface under different sunlight intensities, which leads to an unstable ship recognition rate and an increased false-alarm rate. For this reason, a ship target recognition algorithm based on ResNet-50 is proposed. First, a ResNet-50 network is used to extract image features, and a sunlight-robust loss constraint is applied to the features before and after a sunlight-intensity change to reduce the feature difference. Then, a gray-scale histogram is used to compute statistical measures of the features, yielding six features: contrast, brightness, smoothness, an information measure, the third-order moment, and entropy; a new feature vector is generated, and the sunlight-robust loss constraint is applied again to the features before and after the sunlight change. Finally, the two constraints are combined into a loss function, and Bayesian adaptive hyperparameter optimization is used during training to find the optimal weights. The experimental results show that the average recognition rate on the ship sunlight-variation database reaches 90.47%, an improvement of about 4.00%, and that the recognition rates on ship images under sunlight variation increase by 3.14%, 6.07%, and 16.41%, respectively, showing that the algorithm constrains sunlight variation effectively and improves the recognition rate significantly.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43438275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
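The six histogram features the abstract lists match the classic gray-level histogram texture descriptors. A minimal sketch of computing them from a histogram; the function name and the exact formulas are my assumption of those standard definitions, not the paper's code:

```python
import math

def histogram_stats(hist):
    """Six texture statistics from a gray-level histogram `hist`
    (counts per level z = 0..L-1). Returns (brightness, contrast,
    smoothness, third_moment, uniformity, entropy)."""
    total = sum(hist)
    p = [h / total for h in hist]                      # normalize to probabilities
    mean = sum(i * pi for i, pi in enumerate(p))       # brightness (mean level)
    var = sum((i - mean) ** 2 * pi for i, pi in enumerate(p))
    contrast = math.sqrt(var)                          # standard deviation
    smoothness = 1.0 - 1.0 / (1.0 + var)               # 0 for flat regions
    third = sum((i - mean) ** 3 * pi for i, pi in enumerate(p))  # skewness measure
    uniformity = sum(pi ** 2 for pi in p)              # max for single-level images
    entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0.0)
    return mean, contrast, smoothness, third, uniformity, entropy
```

Concatenating these six values per region gives the kind of compact statistical feature vector the abstract says is appended before the second robustness constraint.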
{"title":"Hyperspectral Image Classification Based on SE-Res2Net and Multi-Scale Spatial Spectral Fusion Attention Mechanism","authors":"Qin Xu, Yulian Liang, Dongyue Wang, B. Luo","doi":"10.3724/sp.j.1089.2021.18778","DOIUrl":"https://doi.org/10.3724/sp.j.1089.2021.18778","url":null,"abstract":"In order to extract more discriminative features from hyperspectral images and prevent the degradation caused by deepening the network, a novel multi-scale feature extraction module, SE-Res2Net, based on the multi-scale residual network Res2Net and the squeeze-and-excitation network (SENet), together with a multi-scale spectral-spatial fusion attention module, is developed for hyperspectral image classification. To overcome the degradation problem caused by network deepening, the SE-Res2Net module uses channel grouping to extract fine-grained multi-scale features of hyperspectral images, obtaining multiple receptive fields of different granularity; a channel optimization module then quantifies the importance of the feature maps at the channel level. To optimize features in the spatial and spectral dimensions simultaneously, the multi-scale spectral-spatial fusion attention module mines the relationships between different spatial positions and different spectral dimensions at multiple scales using asymmetric convolution, which not only reduces computation but also effectively extracts discriminative spectral-spatial fusion features, further improving the accuracy of hyperspectral image classification. Comparison experiments on three public datasets, Indian Pines, University of Pavia and Grss_dfc_2013, show that the proposed method achieves higher overall accuracy (OA), average accuracy (AA) and Kappa coefficient than other state-of-the-art deep networks.","PeriodicalId":52442,"journal":{"name":"计算机辅助设计与图形学学报","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45625351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
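The channel-grouping idea behind Res2Net (and hence SE-Res2Net) can be shown with a toy forward pass: the input channels are split into groups, the first group passes through untouched, and each later group is transformed after adding the previous group's output, so successive groups see progressively larger receptive fields. The function and the stand-in `conv` below are illustrative only, not the paper's implementation:

```python
def res2net_split_forward(groups, conv):
    """Hierarchical Res2Net-style pass over channel splits.
    groups: list of s channel splits; conv: stand-in for a 3x3 conv."""
    outs = [groups[0]]          # first split: identity, no conv applied
    prev = None
    for x in groups[1:]:
        # each later split also receives the previous split's output,
        # so its effective receptive field grows with the split index
        inp = x if prev is None else [a + b for a, b in zip(x, prev)]
        prev = conv(inp)
        outs.append(prev)
    return [c for g in outs for c in g]   # concatenate splits back together
```

In the full SE-Res2Net module, an SE-style channel reweighting would then scale the concatenated output per channel, which is the "channel optimization" step the abstract describes.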