Signal Processing-Image Communication: Latest Articles

An adaptive contextual learning network for image inpainting
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-21 DOI: 10.1016/j.image.2025.117326
Feilong Cao, Xinru Shao, Rui Zhang, Chenglin Wen
{"title":"An adaptive contextual learning network for image inpainting","authors":"Feilong Cao ,&nbsp;Xinru Shao ,&nbsp;Rui Zhang ,&nbsp;Chenglin Wen","doi":"10.1016/j.image.2025.117326","DOIUrl":"10.1016/j.image.2025.117326","url":null,"abstract":"<div><div>Deep-learning-based methods for image inpainting have been intensively researched because of deep neural networks’ powerful approximation capabilities. In particular, the context-reasoning-based methods have shown significant success. Nonetheless, images generated using these methods tend to suffer from visually inappropriate content. This is due to the fact that their context reasoning processes are weakly adaptive, limiting the flexibility of generation. To this end, this paper presents an adaptive contextual learning network (ACLNet) for image inpainting. The main contribution of the proposed method is to significantly improve the adaptive capability of the context reasoning. The method can adaptively weigh the importance of known contexts for filling missing regions, ensuring that the filled content is finely filtered rather than raw, which improves the reliability of the generated content. Specifically, a modular hybrid dilated residual unit and an adaptive region affinity learning attention are created, which can adaptively choose and aggregate contexts based on the sample itself through gating mechanism and similarity filtering respectively. The extensive comparisons reveal that ACLNet exceeds the state-of-the-art, improving peak signal-to-noise ratio (PSNR) by 0.25 dB and structural similarity index measure (SSIM) by 0.017 on average and that it can generate more aesthetically realistic images than other approaches. The implemented ablation experiments also confirm the effectiveness of ACLNet.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117326"},"PeriodicalIF":3.4,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143868668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AMFMER: A multimodal full transformer for unifying aesthetic assessment tasks
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-21 DOI: 10.1016/j.image.2025.117320
Jin Qi, Can Su, Xiaoxuan Hu, Mengwei Chen, Yanfei Sun, Zhenjiang Dong, Tianliang Liu, Jiebo Luo
{"title":"AMFMER: A multimodal full transformer for unifying aesthetic assessment tasks","authors":"Jin Qi ,&nbsp;Can Su ,&nbsp;Xiaoxuan Hu ,&nbsp;Mengwei Chen ,&nbsp;Yanfei Sun ,&nbsp;Zhenjiang Dong ,&nbsp;Tianliang Liu ,&nbsp;Jiebo Luo","doi":"10.1016/j.image.2025.117320","DOIUrl":"10.1016/j.image.2025.117320","url":null,"abstract":"<div><div>Computational aesthetics aims to simulate the human visual perception process via the computers to automatically evaluate aesthetic quality with automatic methods. This topic has been widely studied by numerous researchers. However, existing research mostly focuses on image content while disregarding high-level semantics in the related image comments. In addition, most major assessment methods are based on convolutional neural networks (CNNs) for learning the distinctive features, which lack representational power and modeling capabilities for multimodal assessment requirement. Furthermore, many transformer-based model approaches suffer from limited information flow between different parts of the assumed model, and many multimodal fusion methods are used to extract image features and text features, and cannot handle multi-modal information well. Inspired by the above questions, in this paper, A novel Multimodal Full transforMER (AMFMER) evaluation model without aesthetic style information is proposed, consisting of three components: visual stream, textual stream and multimodal fusion layer. Firstly, the visual stream exploits the improved Swin transformer to extract the distinctive layer features of the input image. Secondly, the textual stream is based on the robustly optimized bidirectional encoder representations from transformers (RoBERTa) text encoder to extract semantic information from the corresponding comments. Thirdly, the multimodal fusion layer fuses visual features, textual features and low-layer salient features in a cross-attention manner to extract the multimodal distinctive features. Experimental results show that the proposed AMFMER approach in this paper outperforms current mainstream methods in a unified aesthetic prediction task, especially in terms of the correlation between the objective model evaluation and subjective human evaluation.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117320"},"PeriodicalIF":3.4,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143868669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Higher-order motion calibration and sparsity based outlier correction for video FRUC
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-17 DOI: 10.1016/j.image.2025.117327
Jiale He, Qunbing Xia, Gaobo Yang, Xiangling Ding
{"title":"Higher-order motion calibration and sparsity based outlier correction for video FRUC","authors":"Jiale He ,&nbsp;Qunbing Xia ,&nbsp;Gaobo Yang ,&nbsp;Xiangling Ding","doi":"10.1016/j.image.2025.117327","DOIUrl":"10.1016/j.image.2025.117327","url":null,"abstract":"<div><div>For frame rate up-conversion (FRUC), one of the key challenges is to deal with irregular and large motions that are widely existed in video scenes. However, most existing FRUC works make constant brightness and linear motion assumptions, easily leading to undesirable artifacts such as motion blurriness and frame flickering. In this work, we propose an advanced FRUC work by using a high-order model for motion calibration and a sparse sampling strategy for outlier correction. Unidirectional motion estimation is used to accurately locate object from the previous frame to the following frame in a coarse-to-fine pyramid structure. Then, object motion trajectory is fine-tuned to approximate real motion, and possible outlier regions are located and recorded. Moreover, image sparsity is exploited as the prior knowledge for outlier correction, and the outlier index map is used to design the measurement matrix. Based on the theory of sparse sampling, the outlier regions are reconstructed to eliminate the side effects such as overlapping, holes and blurring. Extensive experimental results demonstrate that the proposed approach outperforms the state-of-the-art FRUC works in terms of both objective and subjective qualities of interpolated frames.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117327"},"PeriodicalIF":3.4,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143864458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
FANet: Feature attention network for semantic segmentation
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-17 DOI: 10.1016/j.image.2025.117330
Lin Zhu, Linxi Li, Mingwei Tang, Wenrui Niu, Jianhua Xie, Hongyun Mao
{"title":"FANet: Feature attention network for semantic segmentation","authors":"Lin Zhu,&nbsp;Linxi Li,&nbsp;Mingwei Tang,&nbsp;Wenrui Niu,&nbsp;Jianhua Xie,&nbsp;Hongyun Mao","doi":"10.1016/j.image.2025.117330","DOIUrl":"10.1016/j.image.2025.117330","url":null,"abstract":"<div><div>Semantic segmentation based on scene parsing specifies a category label for each pixel in the image. Existing neural network models are useful tools for understanding the objects in the scene. However, they ignore the heterogeneity of information carried by individual features, leading to pixel classification confusion and unclear boundaries. Therefore, this paper proposes a novel Feature Attention Network (FANet). Firstly, the adjustment algorithm is presented to capture attention feature matrices that can effectively cherry-pick feature dependencies. Secondly, the hybrid extraction module (HEM) is constructed to aggregate long-term dependencies based on proposed adjustment algorithm. Finally, the proposed adaptive hierarchical fusion module (AHFM) is employed to aggregated multi-scale features by learning spatially filtering conflictive information, which improves the scale invariance of features. Experimental results on popular Benchmarks (such as PASCAL VOC 2012, Cityscapes and ADE20K) indicate that our algorithm achieves better performance than other algorithms.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117330"},"PeriodicalIF":3.4,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143851552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive cross-channel transformation based on self-modulation for learned image compression
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-17 DOI: 10.1016/j.image.2025.117325
Wen Tan, Youneng Bao, Fanyang Meng, Chao Li, Yongsheng Liang
{"title":"Adaptive cross-channel transformation based on self-modulation for learned image compression","authors":"Wen Tan ,&nbsp;Youneng Bao ,&nbsp;Fanyang Meng ,&nbsp;Chao Li ,&nbsp;Yongsheng Liang","doi":"10.1016/j.image.2025.117325","DOIUrl":"10.1016/j.image.2025.117325","url":null,"abstract":"<div><div>Recently learned image compression has achieved excellent rate–distortion performance, and nonlinear transformation becomes a critical component for performance improvement. While Generalized Divisible Normalization (GDN) is a widely used method that exploits channel correlation for effective nonlinear representation, its utilization of cross-channel relationship for each element of features remains limited. In this paper, we propose a novel cross-channel transformation based on self-modulation, named SMCCT. The SMCCT takes the intermediate feature maps as input to capture cross-channel correlation and generate affine transformation parameters for element-wise feature modulation. The proposed transformation enables adaptive weighting and fine-grained control over the features, which helps to learn expressive features and further reduce redundancies. The SMCCT can be flexibly employed into learned image compression models. Experimental results demonstrate that the proposed method can achieve superior rate–distortion performance with the existing learned image compression methods and outperform traditional codecs under the quality metric such as PSNR and MS-SSIM. Specifically, when using the PSNR metric, our proposed method outperforms latest codec VTM-12.1 by 5.47%, 10.25% in BD-rate on Kodak and Tecnick datasets. When using the MS-SSIM metric, it outperforms latest codec VTM-12.1 by 50.97%, 49.81% in BD-rate on Kodak and Tecnick datasets.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117325"},"PeriodicalIF":3.4,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical contrastive learning for unsupervised 3D action recognition
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-17 DOI: 10.1016/j.image.2025.117329
Haoyuan Zhang, Qingquan Li
{"title":"Hierarchical contrastive learning for unsupervised 3D action recognition","authors":"Haoyuan Zhang ,&nbsp;Qingquan Li","doi":"10.1016/j.image.2025.117329","DOIUrl":"10.1016/j.image.2025.117329","url":null,"abstract":"<div><div>Unsupervised contrastive 3D action representation learning has made great progress recently. However, most works rely on only the direct instance-level comparison with unreasonable positive/negative constraint, which degrades the learning performance. In this paper, we propose a Hierarchical Contrastive Scheme (HCS) for unsupervised skeleton 3D action representation learning, which takes advantage of multi-level contrast. Specifically, we keep the instance-level contrast to draw the different augmentations of the same instance close, targets to learn intra-instance consistency. Then we extend the contrastive objective from individual instances to clusters by enforcing consistency between cluster assignment from different instance of same category, aims at learning inter-instance consistency. Compared with previous methods, HCS enables intra/inter-instance consistency pursuit via multi-level contrast, without inflexible positive/negative constraint, which leads to a more discriminative feature space. Experimental results validate that the proposed framework outperforms the previous state-of-the-art methods on the challenging NTU RGB+D and PKU-MMD datasets.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117329"},"PeriodicalIF":3.4,"publicationDate":"2025-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143855689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Camera calibration using property of asymptotes with application to sports scenes
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-12 DOI: 10.1016/j.image.2025.117331
Fengli Yang, Xuechun Wang, Yue Zhao
{"title":"Camera calibration using property of asymptotes with application to sports scenes","authors":"Fengli Yang,&nbsp;Xuechun Wang,&nbsp;Yue Zhao","doi":"10.1016/j.image.2025.117331","DOIUrl":"10.1016/j.image.2025.117331","url":null,"abstract":"<div><div>Inspired by Ying's work on the calibration technique, this study proposes a new planar pattern (referred to as the phi-type model hereinafter), which includes a circle and diameter, as the calibration scene. In sports scenarios, such as a soccer match or basketball court, most existing methods require information of the scene points in a three-dimensional space. However, an interesting observation in the midfield is that the centre circle and the halfway line form a phi-type template. A new automatic method using the properties of asymptotes is proposed based on the images of the midfield. All intrinsic parameters of the camera can be determined without any assumptions such as zero skew or unitary aspect ratio. The main advantages of our technique are that it neither involves point or line matching nor does it require the metric information of the model plane. The feasibility and validity of the proposed algorithm were verified by testing the noise sensitivity and performing image metric rectification.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"138 ","pages":"Article 117331"},"PeriodicalIF":3.4,"publicationDate":"2025-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143859340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hidden dangerous object detection for terahertz body security check images based on adaptive multi-scale decomposition convolution
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-10 DOI: 10.1016/j.image.2025.117323
Zijie Guo, Heng Wu, Shaojuan Luo, Genping Zhao, Chunhua He, Tao Wang
{"title":"Hidden dangerous object detection for terahertz body security check images based on adaptive multi-scale decomposition convolution","authors":"Zijie Guo ,&nbsp;Heng Wu ,&nbsp;Shaojuan Luo ,&nbsp;Genping Zhao ,&nbsp;Chunhua He ,&nbsp;Tao Wang","doi":"10.1016/j.image.2025.117323","DOIUrl":"10.1016/j.image.2025.117323","url":null,"abstract":"<div><div>Recently, detecting hidden dangerous objects with the terahertz technique has attracted extensive attention. Many convolutional neural network-based object detection methods can achieve excellent results in common object detection. However, the existing object detection methods generally have low detection accuracy and large model parameter issues for hidden dangerous objects in terahertz body security check images due to the blurring and poor quality of terahertz images and ignoring the global context information. To address these issues, we propose an enhanced You Only Look Once network (YOLO-AMDC), which is integrated with an adaptive multi-scale large-kernel decomposition convolution (AMDC) module. Specifically, we design an AMDC module to enhance the feature expression ability of the YOLO framework. Moreover, we develop the Bi-Level Routing Attention (BRA) mechanism and a simple parameter-free attention module (SimAM) to integrate and utilize contextual information to improve the performance of dangerous object detection. Additionally, we adopt a model pruning approach to reduce the number of model parameters. The experimental results show that YOLO-AMDC outperforms other state-of-the-art methods. Compared with YOLOv8s, YOLO-AMDC reduces the parameters by 3.9 M and improves mAP@50 by 5 %. The detection performance is still competitive when the number of parameters is significantly reduced by model pruning.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"137 ","pages":"Article 117323"},"PeriodicalIF":3.4,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143834656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ORB-SLAM3 and dense mapping algorithm based on improved feature matching
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-10 DOI: 10.1016/j.image.2025.117322
Delin Zhang, Guangxiang Yang, Guangling Yu, Baofeng Yang, Xiaoheng Wang
{"title":"ORB-SLAM3 and dense mapping algorithm based on improved feature matching","authors":"Delin Zhang ,&nbsp;Guangxiang Yang ,&nbsp;Guangling Yu ,&nbsp;Baofeng Yang ,&nbsp;Xiaoheng Wang","doi":"10.1016/j.image.2025.117322","DOIUrl":"10.1016/j.image.2025.117322","url":null,"abstract":"<div><div>ORB-SLAM3 is currently the mainstream visual SLAM system, which uses feature matching based on ORB keypoints. However, ORB-SLAM3 faces two main issues: Firstly, feature matching is time-consuming, and the insufficient number of feature point matches results in lower algorithmic localization accuracy. Secondly, it lacks the capability to construct dense point cloud maps, therefore limiting its applicability in high-demand scenarios such as path planning. To address these issues, this paper proposes an ORB-SLAM3 and dense mapping algorithm based on improved feature matching. In the feature matching process of ORB-SLAM3, motion smoothness constraints are introduced and the image is gridded. The feature points that are at the edge of the grid are divided into multiple adjacent grids to solve the problems, which are unable to correctly partition the feature points to the corresponding grid and algorithm time consumption. This reduces matched time and increases the number of matched pairs, improving the positioning accuracy of ORB-SLAM3. Moreover, a dense mapping construction thread has been added to construct dense point cloud maps in real-time using keyframes and corresponding poses filtered from the feature matching stage. Finally, simulation experiments were conducted using the TUM dataset for validation. The results demonstrate that the improved algorithm reduced feature matching time by 75.71 % compared to ORB-SLAM3, increased the number of feature point matches by 88.69 %, and improved localization accuracy by 9.44 %. Furthermore, the validation confirmed that the improved algorithm is capable of constructing dense maps in real-time. In conclusion, the improved algorithm demonstrates excellent performance in terms of localization accuracy and dense mapping.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"137 ","pages":"Article 117322"},"PeriodicalIF":3.4,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143829426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Camouflaged instance segmentation based on multi-scale feature contour fusion swin transformer
IF 3.4, CAS Tier 3 (Engineering & Technology)
Signal Processing-Image Communication Pub Date: 2025-04-09 DOI: 10.1016/j.image.2025.117328
Yin-Fu Huang, Feng-Yen Jen
{"title":"Camouflaged instance segmentation based on multi-scale feature contour fusion swin transformer","authors":"Yin-Fu Huang,&nbsp;Feng-Yen Jen","doi":"10.1016/j.image.2025.117328","DOIUrl":"10.1016/j.image.2025.117328","url":null,"abstract":"<div><div>Camouflaged instance segmentation is the latest detection issue for finding hidden objects in an image. Since camouflaged objects hide with similar background colors, it is difficult to detect objects' existence. In this paper, we proposed an instance segmentation model called Multi-scale Feature Contour Fusion Swin Transformer (MFCFSwinT) consisting of seven modules; i.e., Swin Transformer as the backbone for feature extraction, Pyramid of Kernel with Dilation (PKD) and Multi-Feature Fusion (MFF) for multi-scale features, Contour Branch and Contour Feature Fusion (CFF) for feature fusion, and Region Proposal Network (RPN) and Cascade Head for bounding boxes and masks detection. In the experiments, four datasets are used to evaluate the proposed model; i.e., COCO (Common Objects in Context), LVIS v1.0 (Large Vocabulary Instance Segmentation), COD10K (Camouflaged Object Detection), and NC4K. Finally, the experimental results show that MFCFSwinT can achieve better performances than most state-of-the-art models.</div></div>","PeriodicalId":49521,"journal":{"name":"Signal Processing-Image Communication","volume":"137 ","pages":"Article 117328"},"PeriodicalIF":3.4,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143826394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0