Digital Signal Processing: Latest Articles

TMN: Transformer in matrix network for single image super-resolution with enhanced shallow feature preservation
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-31 DOI: 10.1016/j.dsp.2025.105207
Ou Ao, Zhenhong Shang
{"title":"TMN: Transformer in matrix network for single image super-resolution with enhanced shallow feature preservation","authors":"Ou Ao ,&nbsp;Zhenhong Shang","doi":"10.1016/j.dsp.2025.105207","DOIUrl":"10.1016/j.dsp.2025.105207","url":null,"abstract":"<div><div>Transformer-based image super-resolution has witnessed remarkable advancements in recent years. However, as transformer networks grow in depth, numerous existing super-resolution methods encounter challenges in effectively preserving shallow features, which play a crucial role in single image super-resolution. The low-resolution input image contains crucial structural and contextual information, and the shallow features serve as the carriers of this information. To address the challenge of preserving shallow features, we propose the Transformer in Matrix Network (TMN), a novel architecture specifically tailored for single image super-resolution. TMN incorporates a redesigned and optimized matrix mapping module, which arranges transformer blocks in a matrix structure to preserve and effectively exploit shallow features while facilitating the efficient reuse of hierarchical feature representations across the network. Additionally, TMN refines the efficient transformer to augment its capacity for modelling long-range dependencies, thereby enabling enhanced integration of information from spatially correlated regions within the image. To further enhance the reconstruction performance, TMN incorporates the structural loss into the loss function. By constraining the relevant statistical quantities, it improves the perceptual fidelity and preserves the intricate details. Experimental results show that TMN achieves competitive performance, with a reduction in computational costs by approximately one-third compared to leading methods like SwinIR. TMN's efficient design and high-quality reconstruction make it particularly suitable for deployment on resource-constrained devices, addressing a critical need in practical applications. The implementation code is publicly available at <span><span>https://github.com/13752849314/TMN</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"162 ","pages":"Article 105207"},"PeriodicalIF":2.9,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143746338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
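The structural loss mentioned in the abstract is not spelled out there. As a hedged illustration only, one common way to constrain local image statistics is an SSIM term weighted against an L1 pixel loss; the uniform 11x11 window and the weight `lambda_struct` below are assumptions, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2, window=11):
    """Mean SSIM over a local uniform window; inputs are (N, C, H, W) in [0, 1]."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, stride=1, padding=pad)
    mu_y = F.avg_pool2d(y, window, stride=1, padding=pad)
    sigma_x = F.avg_pool2d(x * x, window, stride=1, padding=pad) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, stride=1, padding=pad) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, stride=1, padding=pad) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return ssim_map.mean()

def sr_loss(sr, hr, lambda_struct=0.1):
    """Hypothetical combined objective: pixel-wise L1 plus a structural (1 - SSIM) term."""
    return F.l1_loss(sr, hr) + lambda_struct * (1.0 - ssim(sr, hr))

print(sr_loss(torch.rand(1, 3, 48, 48), torch.rand(1, 3, 48, 48)))
```

Minimizing 1 - SSIM pulls the local means, variances, and covariances of the reconstruction toward those of the ground truth, which is one plausible reading of "constraining the relevant statistical quantities".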
Performance analysis of active STAR-RIS-assisted NOMA with hardware impairments at finite blocklength
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-31 DOI: 10.1016/j.dsp.2025.105206
Shiv Kumar, Brijesh Kumbhani
{"title":"Performance analysis of active STAR-RIS-assisted NOMA with hardware impairments at finite blocklength","authors":"Shiv Kumar,&nbsp;Brijesh Kumbhani","doi":"10.1016/j.dsp.2025.105206","DOIUrl":"10.1016/j.dsp.2025.105206","url":null,"abstract":"<div><div>The active simultaneously transmitting and reflecting-reconfigurable intelligent surface's (ASTAR-RIS) potential to avoid multiplicative fading loss by utilizing integrated reflection-type amplifiers has drawn considerable interest. This paper investigates the finite blocklength (FBL) analysis of ASTAR-RIS-assisted non-orthogonal multiple access (NOMA) with perfect and imperfect successive interference cancellation (SIC) in the presence of hardware impairments over cascaded Rician fading channels. Firstly, we derive the statistical distribution of cascaded Rician fading channels with the help of Laguerre polynomial series approximation. Secondly, we derive the novel analytical expression for average block error rate (ABLER), ergodic rate (ER), and system throughput with the help of the Gauss Chebyshev quadrature and the Gauss Laguerre quadrature method. Thirdly, the asymptotic expression for ABLER is derived to gain useful insights. Finally, the Monte Carlo simulations are used to verify the analytical results. Numerical results verify the correctness and superior performance of ASTAR-RIS-assisted NOMA (ASTAR-RIS-NOMA) over passive STAR-RIS-assisted NOMA (PSTAR-RIS-NOMA) and ASTAR-RIS-assisted orthogonal multiple access (OMA) (ASTAR-RIS-OMA). Additionally, the impact of other system parameters like imperfect SIC, hardware impairments, and block length are analyzed.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"163 ","pages":"Article 105206"},"PeriodicalIF":2.9,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
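For readers unfamiliar with finite-blocklength analysis, the block error rate at blocklength n is commonly approximated with the normal approximation of Polyanskiy, Poor, and Verdu. The sketch below evaluates that approximation for a plain AWGN link; the paper's ABLER additionally averages a similar per-SNR expression over the cascaded Rician fading via quadrature, which is not reproduced here, and the n, k, and SNR values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def fbl_bler(snr_linear, n, k):
    """Normal approximation to the block error rate at finite blocklength n for an
    AWGN channel carrying k information bits over n channel uses."""
    c = np.log2(1.0 + snr_linear)                                  # Shannon capacity (bits/use)
    v = (np.log2(np.e) ** 2) * (1.0 - (1.0 + snr_linear) ** -2)   # channel dispersion (bits^2/use)
    return norm.sf((n * c - k) / np.sqrt(n * v))                   # Q-function via survival function

# Example: 256 channel uses, 128 information bits, a few SNR points.
for snr_db in (0, 5, 10):
    print(snr_db, "dB ->", fbl_bler(10 ** (snr_db / 10), n=256, k=128))
```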
Dual-stream feature pyramid network with task interaction for underwater object detection
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-31 DOI: 10.1016/j.dsp.2025.105199
Wenming Zhang, Haobo Wang, Haibin Li, Yaqian Li, Tao Song, Haoran Liu, GuanCheng Wang
{"title":"Dual-stream feature pyramid network with task interaction for underwater object detection","authors":"Wenming Zhang ,&nbsp;Haobo Wang ,&nbsp;Haibin Li ,&nbsp;Yaqian Li ,&nbsp;Tao Song ,&nbsp;Haoran Liu ,&nbsp;GuanCheng Wang","doi":"10.1016/j.dsp.2025.105199","DOIUrl":"10.1016/j.dsp.2025.105199","url":null,"abstract":"<div><div>Underwater object detection presents significant challenges due to factors such as image degradation caused by light absorption and scattering, insufficient multi-scale feature representation, and the high similarity between objects and the background. This paper proposes a novel underwater object detection method that integrates the Dual-Stream Feature Pyramid Network (DS-FPN) with a Task Interaction Module (TIM) to enhance detection performance in complex underwater environments. The DS-FPN effectively captures multi-scale features through a parallel path fusion strategy, significantly improving both semantic feature extraction and fine-grained object representation. The TIM module dynamically adjusts feature weights by facilitating feature interaction between tasks, allowing the classification and regression tasks to work in tandem and mitigating the trade-off between classification accuracy and object localization precision. Extensive experiments on multiple public datasets demonstrate the effectiveness of the proposed method, which achieves improvements of 2.2%, 2.5%, and 3.5% in mAP on the DUO, UTDAC, and TrashCan datasets, respectively, outperforming existing state-of-the-art methods. Moreover, applying the DS-FPN and TIM modules to various detection frameworks further validates the method's strong generalization capability and its applicability in multi-task scenarios. These results confirm that the proposed approach provides an efficient and versatile solution for advancing underwater object detection performance.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"163 ","pages":"Article 105199"},"PeriodicalIF":2.9,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143759200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
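The abstract describes a "parallel path fusion strategy" without giving its structure. Purely as a schematic guess at what fusing two parallel streams at one pyramid level can look like (the 1x1 projections, channel width, and nearest-neighbour upsampling are assumptions, not the DS-FPN design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamFusion(nn.Module):
    """Toy fusion of a coarse top-down (semantic) stream with a fine bottom-up
    (detail) stream at one pyramid level, using learned 1x1 projections."""
    def __init__(self, channels=256):
        super().__init__()
        self.proj_td = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_bu = nn.Conv2d(channels, channels, kernel_size=1)
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, top_down_feat, bottom_up_feat):
        # Resize the coarser top-down map to match the finer bottom-up map, then add.
        td = F.interpolate(self.proj_td(top_down_feat),
                           size=bottom_up_feat.shape[-2:], mode="nearest")
        return self.smooth(td + self.proj_bu(bottom_up_feat))

fused = DualStreamFusion()(torch.randn(1, 256, 16, 16), torch.randn(1, 256, 32, 32))
print(fused.shape)  # torch.Size([1, 256, 32, 32])
```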
Desmoke-VCU: Improved unpaired image-to-image translation for removing smoke from laparoscopic images
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-28 DOI: 10.1016/j.dsp.2025.105177
Wenjie Wang, Qi Yuan, Pengtai Huang, Xiaohua Wang, Huajian Song
{"title":"Desmoke-VCU: Improved unpaired image-to-image translation for removing smoke from laparoscopic images","authors":"Wenjie Wang ,&nbsp;Qi Yuan ,&nbsp;Pengtai Huang ,&nbsp;Xiaohua Wang ,&nbsp;Huajian Song","doi":"10.1016/j.dsp.2025.105177","DOIUrl":"10.1016/j.dsp.2025.105177","url":null,"abstract":"<div><div>In laparoscopic surgery, maintaining a clear field of view is crucial; however, smoke generated during the procedure can impair surgical judgment. Cycle-consistent generative adversarial network (CycleGAN) has been widely applied to image dehazing tasks as it does not require paired smoke and clear images for training. However, it has limited intrinsic smoke removal effectiveness. To improve the smoke removal performance, this paper proposes a smoke removal model named the Desmoke-VCU based on the DeSmoke-LAP. This method combines the UNet and vision transformer (ViT) as generators within the CycleGAN framework and employs self-supervised pre-training techniques to pre-train the generator, aiming to generate realistic smoke-free images. Furthermore, a structural similarity index metric (SSIM) loss function is introduced to preserve textures and details in the generated images. Experimental results demonstrate that the proposed approach enhances the quality and realism of generated images, maintains strong correlations between smokey and smoke-free images, and exhibits superiority over recent models proposed in this field.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"162 ","pages":"Article 105177"},"PeriodicalIF":2.9,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
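The backbone constraint of any CycleGAN-style desmoker is cycle consistency on unpaired data. A minimal sketch of that constraint follows; the adversarial terms, the UNet/ViT generator design, the self-supervised pre-training, and the paper's SSIM loss are all omitted, and the weight `lam` is the conventional CycleGAN value, assumed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cycle_consistency_loss(G_desmoke, G_resmoke, smoke_batch, clear_batch, lam=10.0):
    """Unpaired cycle-consistency: smoke -> clear -> smoke and clear -> smoke -> clear
    should both reconstruct their inputs."""
    rec_smoke = G_resmoke(G_desmoke(smoke_batch))   # smoke -> clear -> smoke
    rec_clear = G_desmoke(G_resmoke(clear_batch))   # clear -> smoke -> clear
    return lam * (F.l1_loss(rec_smoke, smoke_batch) + F.l1_loss(rec_clear, clear_batch))

# Tiny smoke test with identity "generators" just to show the call shape.
ident = nn.Identity()
print(cycle_consistency_loss(ident, ident, torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)))
```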
Orbital angular momentum radar detection characteristics analysis based on ambiguity function
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-28 DOI: 10.1016/j.dsp.2025.105197
Yin Dang, Deyu Li, Yusheng Fu, Yi Liao
{"title":"Orbital angular momentum radar detection characteristics analysis based on ambiguity function","authors":"Yin Dang,&nbsp;Deyu Li,&nbsp;Yusheng Fu,&nbsp;Yi Liao","doi":"10.1016/j.dsp.2025.105197","DOIUrl":"10.1016/j.dsp.2025.105197","url":null,"abstract":"<div><div>As a new radio wave carrying orbital angular momentum (OAM), the vortex beam brings a breakthrough to the radar system. To better apply the vortex wave to the radar imaging and remote sensing fields, it is necessary to analyze the nature of the OAM signal. This paper utilizes the radar ambiguity function (AF) as an effective tool to evaluate the resolution performance of the vortex radio beam. The theoretical AF formulations of the OAM signal are deduced. The corresponding properties are given as well. On this basis, the AFs of single- and multi-mode OAM waveforms are compared with classical linear frequency modulation (LFM), nonlinear frequency modulation (NLFM), and orthogonal frequency division multiplexing (OFDM) waveforms. Simulation results are provided to validate the resolution performance of the OAM signals.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"162 ","pages":"Article 105197"},"PeriodicalIF":2.9,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
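The analysis tool here is the standard narrowband ambiguity function, chi(tau, nu) = integral of s(t) s*(t - tau) exp(j 2 pi nu t) dt, whose magnitude describes joint delay-Doppler resolution. A small numerical sketch for a generic complex baseband waveform follows, evaluated on an LFM pulse (one of the comparison waveforms); the pulse parameters and grid are arbitrary, and the OAM-specific formulations derived in the paper are not reproduced.

```python
import numpy as np

def ambiguity_function(s, fs, delays, dopplers):
    """Evaluate |chi(tau, nu)| on a grid of delays (seconds) and Doppler shifts (Hz)
    for a sampled complex baseband waveform s at sampling rate fs."""
    t = np.arange(len(s)) / fs
    af = np.empty((len(dopplers), len(delays)))
    for i, nu in enumerate(dopplers):
        for j, tau in enumerate(delays):
            shift = int(round(tau * fs))
            s_shift = np.roll(s, shift)
            if shift > 0:
                s_shift[:shift] = 0        # zero the wrap-around samples
            elif shift < 0:
                s_shift[shift:] = 0
            af[i, j] = np.abs(np.sum(s * np.conj(s_shift) * np.exp(2j * np.pi * nu * t)))
    return af / af.max()

# Example: baseband LFM pulse, 1 us long, 10 MHz sweep, sampled at 50 MHz.
fs, T, B = 50e6, 1e-6, 10e6
t = np.arange(0, T, 1 / fs)
lfm = np.exp(1j * np.pi * (B / T) * t ** 2)
af = ambiguity_function(lfm, fs,
                        delays=np.linspace(-T, T, 101),
                        dopplers=np.linspace(-2e6, 2e6, 101))
print(af.shape)
```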
Minimum error entropy with affine projection algorithm for robust adaptive filtering
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-28 DOI: 10.1016/j.dsp.2025.105198
Huaiyuan Zhang, Guoliang Li, Yunxian Hou, Hongbin Zhang, Shan Zhong
{"title":"Minimum error entropy with affine projection algorithm for robust adaptive filtering","authors":"Huaiyuan Zhang ,&nbsp;Guoliang Li ,&nbsp;Yunxian Hou ,&nbsp;Hongbin Zhang ,&nbsp;Shan Zhong","doi":"10.1016/j.dsp.2025.105198","DOIUrl":"10.1016/j.dsp.2025.105198","url":null,"abstract":"<div><div>Considering the similarity and correlation between the method of affine projection (AP) and minimum error entropy (MEE), we find that AP and MEE can be combined well, so a new robust adaptive filtering algorithm named APMEE was proposed. By optimizing the non-parametric estimator of quadratic Renyi's entropy of two individual errors in an observation interval with a <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span>-norm constraint on the weight vector, APMEE greatly improves the convergence speed and steady-state accuracy of adaptive filtering. Numerical simulation experiments of system identification and echo cancellation indicate that APMEE outperforms other algorithms based on affine projection.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"163 ","pages":"Article 105198"},"PeriodicalIF":2.9,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143769175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
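The quantity at the heart of MEE-type criteria is the quadratic Renyi entropy of the errors, estimated non-parametrically through the information potential. A short sketch of that estimator is given below; the affine-projection update and the ℓ2-norm constraint that define APMEE itself are not reproduced, and the kernel bandwidth `sigma` is an assumption.

```python
import numpy as np

def information_potential(errors, sigma=1.0):
    """Quadratic information potential V(e) = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j),
    where G is a Gaussian kernel; its negative log is the quadratic Renyi entropy."""
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]
    kernel = np.exp(-diff ** 2 / (4.0 * sigma ** 2)) / (2.0 * sigma * np.sqrt(np.pi))
    return kernel.mean()

def quadratic_renyi_entropy(errors, sigma=1.0):
    return -np.log(information_potential(errors, sigma))

# Tightly clustered (small) errors yield a lower entropy than widely spread errors.
print(quadratic_renyi_entropy(np.random.randn(64) * 0.1))
print(quadratic_renyi_entropy(np.random.randn(64) * 2.0))
```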
RIE-GAN: A retrievable image encryption method based on GAN
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-28 DOI: 10.1016/j.dsp.2025.105202
Yang Nan, Yan Wo
{"title":"RIE-GAN: A retrievable image encryption method based on GAN","authors":"Yang Nan,&nbsp;Yan Wo","doi":"10.1016/j.dsp.2025.105202","DOIUrl":"10.1016/j.dsp.2025.105202","url":null,"abstract":"<div><div>With the development of cloud computing, people tend to encrypt images and upload them to the cloud for saving storage space and protecting privacy. However, these image encryption methods will hinder the availability of images such as similarity retrieval. To address the issue, this paper proposes a retrievable image encryption method based on GAN(RIE-GAN), which ensures the high security and good retrieval performance of the ciphertext. RIE-GAN uses the convolutional neural network to extract image feature and utilizes the trained weight vector from the Group Normalization (GN) layer to evaluate the importance of each channel for retrieval and divide the feature into two subsets of different importance, the important subset for retrieval and the non-important subset. To achieve similarity retrieval of the encrypted important subset, we utilize Variable Feature Thumbnail Preserving Encryption (VF-TPE) to ensure that the mean value within a block of the encrypted important subset remains unchanged for retrieval. To further enhance security, we use a three-dimensional Lorenz chaotic system to encrypt the non-important subset. The two encrypted subsets are then merged into a whole, serving as the target domain for Cycle-GAN. By training the Cycle-GAN, we perform the feature transformation from the source domain to the target domain, thus ensuring both the security and retrieval functionality of the encrypted data. Experimental results on the Corel-1000 dataset demonstrate that our method can reach 0.886 in the accuracy of ciphertext retrieval, significantly outperforming other method and the security of the ciphertext is close to that of other encryption methods.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"163 ","pages":"Article 105202"},"PeriodicalIF":2.9,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143769179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
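To make the chaotic-encryption step concrete, the sketch below generates a keystream from the three-dimensional Lorenz system, dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta*z, and XORs it with the data. The Euler integration step, the byte-extraction rule, and the use of the initial condition as the key are illustrative assumptions; the paper only states that a Lorenz system encrypts the non-important feature subset.

```python
import numpy as np

def lorenz_keystream(n_bytes, x0=0.1, y0=0.2, z0=0.3,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.001, burn_in=5000):
    """Pseudo-random byte stream from the Lorenz system integrated with Euler steps;
    the initial condition (x0, y0, z0) plays the role of the secret key."""
    x, y, z = x0, y0, z0
    out = np.empty(n_bytes, dtype=np.uint8)
    for i in range(burn_in + n_bytes):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
        if i >= burn_in:
            out[i - burn_in] = int(abs(x + y + z) * 1e6) % 256
    return out

data = np.frombuffer(b"non-important feature subset", dtype=np.uint8)
cipher = data ^ lorenz_keystream(len(data))
plain = cipher ^ lorenz_keystream(len(data))   # the same keystream decrypts
print(plain.tobytes())
```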
Contrastive activation maps with superpixel rectification for weakly supervised semantic segmentation: When superpixels meet CAMs
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-27 DOI: 10.1016/j.dsp.2025.105196
Huilin Shi, Yukun Liu, Shaofan Wang, Yanfeng Sun, Baocai Yin
{"title":"Contrastive activation maps with superpixel rectification for weakly supervised semantic segmentation: When superpixels meet CAMs","authors":"Huilin Shi,&nbsp;Yukun Liu,&nbsp;Shaofan Wang,&nbsp;Yanfeng Sun,&nbsp;Baocai Yin","doi":"10.1016/j.dsp.2025.105196","DOIUrl":"10.1016/j.dsp.2025.105196","url":null,"abstract":"<div><div>The weakly supervised semantic segmentation (WSSS) task amounts to segmenting all pixels by using weaker annotations instead of pixel-level ones. It suffers from two ubiquitous issues: over-activation and under-activation, incurred from unsatisfactory class activation maps (CAMs). Existing methods cannot balance this dilemma due to two reasons. (a) Most of methods learn less discriminative parts iteratively, leading to discontinuous parts of activation regions. (b) Most of methods refine CAMs by learning inter-pixel affinity from a global perspective which takes high complexity. We propose <u>C</u>ontrastive <u>A</u>ctivation <u>M</u>aps with <u>S</u>uperpixel <u>R</u>ectification (SRCAM) for WSSS by incorporating two effective tools: superpixel segmentation and contrastive learning with CAMs together. Typically, superpixels rectify those incomplete vanilla CAMs by examining each activation region based on its proportion with respect to each superpixel. After learning CAMs, SRCAM disentangles foreground objects and background from local patches by learning a cross-patch foreground-background contrast, and then preserves the invariance of activation maps from patch level to image level by learning a local-to-global activation map contrast. Experiments show that SRCAM achieves the state-of-the-art performance (72.8% mIoU) on the PASCAL VOC 2012 among 54 WSSS methods, and achieves the top-1 performance (44.8% mIoU) on the MS COCO 2014 among 27 WSSS methods. Code is available at <span><span>https://github.com/wangsfan/SRCAM</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"162 ","pages":"Article 105196"},"PeriodicalIF":2.9,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143738296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
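As a rough illustration of "examining each activation region based on its proportion with respect to each superpixel", the toy function below fills or suppresses each superpixel depending on how much of it the CAM activates. The thresholds and the fill rule are guesses, not SRCAM's rectification rule, and the grid-shaped "superpixels" stand in for a real superpixel segmentation such as SLIC.

```python
import numpy as np

def superpixel_rectify_cam(cam, superpixels, act_thresh=0.5, prop_thresh=0.3):
    """For every superpixel, measure what proportion of its pixels the CAM activates;
    fill the whole superpixel with its mean activation if the proportion is large,
    otherwise suppress it."""
    rectified = np.zeros_like(cam)
    for label in np.unique(superpixels):
        mask = superpixels == label
        proportion = (cam[mask] > act_thresh).mean()
        rectified[mask] = cam[mask].mean() if proportion > prop_thresh else 0.0
    return rectified

cam = np.random.rand(64, 64)
# A regular 8x8 grid of labels stands in for real superpixels in this toy example.
superpixels = (np.arange(64)[:, None] // 8) * 8 + (np.arange(64)[None, :] // 8)
print(superpixel_rectify_cam(cam, superpixels).shape)
```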
Few-shot specific emitter identification via asymmetric dual-path masked autoencoder
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-27 DOI: 10.1016/j.dsp.2025.105201
Sen Yang, Shui Yu, Qi Li, Keyang Xia, Hongna Zhu
{"title":"Few-shot specific emitter identification via asymmetric dual-path masked autoencoder","authors":"Sen Yang ,&nbsp;Shui Yu ,&nbsp;Qi Li ,&nbsp;Keyang Xia ,&nbsp;Hongna Zhu","doi":"10.1016/j.dsp.2025.105201","DOIUrl":"10.1016/j.dsp.2025.105201","url":null,"abstract":"<div><div>Specific emitter identification (SEI) occupies a pivotal position in device management and communication security. Due to hardware variations, each radiation source produces a distinctive radio frequency fingerprint, acting as a fundamental means for signal feature extraction in deep learning based on SEI. Traditional methods necessitate substantial labeled data, which are expensive and challenging. However, we propose a few-shot SEI (FS-SEI) method, based on an asymmetric dual-path masked autoencoder framework (ADPMAE). Especially, the encoder architecture incorporates two asymmetric branches, the main branch utilizing a time attention mechanism that integrating masking block to accurately capture local features of masked data. The other serves as the auxiliary branch and is applied to extracting global features of unmasked data. During the fine-tuning phase, a center loss function is introduced to optimize the encoder, which further enhances the identification performance. To evaluate the effectiveness of the proposed approach, we conduct several experiments on large-scale automatic dependent surveillance-broadcast (ADS-B) dataset and Wi-Fi dataset. The experiment results demonstrate that the ADPMAE outperforms other FS-SEI methods. Note that it can achieve 97.29 % identification accuracy, when 10 samples per ADS-B device. In addition, ADPMAE obtains an accuracy of 95.75 % under an identification task of Wi-Fi data.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"163 ","pages":"Article 105201"},"PeriodicalIF":2.9,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143746769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
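The center loss used during fine-tuning is a well-known criterion (Wen et al., 2016) that pulls each embedding toward a learnable per-class center, tightening intra-class clusters. A compact PyTorch sketch follows; the feature dimension, class count, and how the term is weighted against the classification loss in ADPMAE are placeholders.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss: 0.5 * mean_i ||f_i - c_{y_i}||^2 with learnable per-class centers."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        return 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()

features = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = CenterLoss(num_classes=10, feat_dim=128)(features, labels)
loss.backward()          # gradients flow to both the embeddings and the centers
print(loss.item())
```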
Adaptive Poisson multi-Bernoulli filter for multiple extended targets with Gamma and Beta estimator
IF 2.9 | CAS Tier 3 | Engineering & Technology
Digital Signal Processing Pub Date : 2025-03-27 DOI: 10.1016/j.dsp.2025.105204
Cheng Chen, Jinlong Yang, Jianjun Liu
{"title":"Adaptive Poisson multi-Bernoulli filter for multiple extended targets with Gamma and Beta estimator","authors":"Cheng Chen,&nbsp;Jinlong Yang,&nbsp;Jianjun Liu","doi":"10.1016/j.dsp.2025.105204","DOIUrl":"10.1016/j.dsp.2025.105204","url":null,"abstract":"<div><div>The Poisson multi-Bernoulli (PMB) filter has been proven to be an effective method for multiple target tracking (MTT), however, some parameters such as clutter rate and detection probability are usually unknown in practical tracking scenarios, which can affect the tracking accuracy of the algorithm. To solve this problem, we propose a robust Poisson multi-Bernoulli filter with independent clutter rate estimator and detection probability estimator, referred to as GBePMB, which can online estimate the unknown parameters for extended target tracking (ETT) scenario. The closed-form solution to the clutter rate estimator is derived by using the maximum likelihood estimation (MLE) technique and Gamma conjugate prior. The detection probability estimator uses the Beta distribution to describe the unknown detection probability, and the Beta variational approximation is proposed to adapt to the iterative requirements of PMB. Finally, simulation results show that the proposed algorithm has a good performance and robustness under unknown clutter rate and detection probability.</div></div>","PeriodicalId":51011,"journal":{"name":"Digital Signal Processing","volume":"163 ","pages":"Article 105204"},"PeriodicalIF":2.9,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143807290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
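Because the Gamma distribution is conjugate to the Poisson likelihood, the clutter-rate estimate admits a closed-form update, which is presumably what makes the paper's MLE/conjugate-prior derivation tractable. The sketch below shows the textbook Gamma-Poisson recursion on per-scan clutter counts; the coupling with the PMB filter's data association, and the Beta-based detection-probability estimator, are not modelled here.

```python
import numpy as np

def update_clutter_rate(alpha, beta, clutter_counts):
    """Conjugate Gamma update for a Poisson clutter rate: with prior Gamma(alpha, beta)
    and observed per-scan clutter counts m_1..m_K, the posterior is
    Gamma(alpha + sum(m), beta + K); its mean serves as the point estimate."""
    m = np.asarray(clutter_counts)
    alpha_post = alpha + m.sum()
    beta_post = beta + len(m)
    return alpha_post, beta_post, alpha_post / beta_post

# Vague prior, then three scans with 12, 9, and 15 clutter detections.
print(update_clutter_rate(alpha=1.0, beta=0.1, clutter_counts=[12, 9, 15]))
```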