IEEE Open Journal of Signal Processing: Latest Publications

Unsupervised Action Anticipation Through Action Cluster Prediction
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-06-09 DOI: 10.1109/OJSP.2025.3578300
Jiuxu Chen;Nupur Thakur;Sachin Chhabra;Baoxin Li
{"title":"Unsupervised Action Anticipation Through Action Cluster Prediction","authors":"Jiuxu Chen;Nupur Thakur;Sachin Chhabra;Baoxin Li","doi":"10.1109/OJSP.2025.3578300","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3578300","url":null,"abstract":"Predicting near-future human actions in videos has become a focal point of research, driven by applications such as human-helping robotics, collaborative AI services, and surveillance video analysis. However, the inherent challenge lies in deciphering the complex spatial-temporal dynamics inherent in typical video feeds. While existing works excel in constrained settings with fine-grained action ground-truth labels, the general unavailability of such labeling at the frame level poses a significant hurdle. In this paper, we present an innovative solution to anticipate future human actions without relying on any form of supervision. Our approach involves generating pseudo-labels for video frames through the clustering of frame-wise visual features. These pseudo-labels are then input into a temporal sequence modeling module that learns to predict future actions in terms of pseudo-labels. Apart from the action anticipation method, we propose an innovative evaluation scheme GreedyMapper, a unique many-to-one mapping scheme that provides a practical solution to the many-to-one mapping challenge, a task that existing mapping algorithms struggle to address. Through comprehensive experimentation conducted on demanding real-world cooking datasets, our unsupervised method demonstrates superior performance compared to weakly-supervised approaches by a significant margin on the 50Salads dataset. When applied to the Breakfast dataset, our approach yields strong performance compared to the baselines in an unsupervised setting and delivers competitive results to (weakly) supervised methods under a similar setting.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"641-650"},"PeriodicalIF":2.9,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11029147","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144366940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multidimensional Polynomial Phase Estimation
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-06-06 DOI: 10.1109/OJSP.2025.3577503
Heedong Do;Namyoon Lee;Angel Lozano
{"title":"Multidimensional Polynomial Phase Estimation","authors":"Heedong Do;Namyoon Lee;Angel Lozano","doi":"10.1109/OJSP.2025.3577503","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3577503","url":null,"abstract":"An estimation method is presented for polynomial phase signals, i.e., those adopting the form of a complex exponential whose phase is polynomial in its indices. Transcending the scope of existing techniques, the proposed estimator can handle an arbitrary number of dimensions and an arbitrary set of polynomial degrees along each dimension; the only requirement is that the number of observations per dimension exceeds the highest degree thereon. Embodied by a highly compact sequential algorithm, this estimator is efficient at high signal-to-noise ratios (SNRs), exhibiting a computational complexity that is strictly linear in the number of observations and at most quadratic in the number of polynomial terms. To reinforce the performance at low and medium SNRs, where any phase estimator is bound to be hampered by the inherent ambiguity caused by phase wrappings, suitable functionalities are incorporated and shown to be highly effective.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"651-681"},"PeriodicalIF":2.9,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11027552","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144367013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Learning-Based Cross-Modality Prediction for Lossless Medical Imaging Compression
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-04-28 DOI: 10.1109/OJSP.2025.3564830
Daniel S. Nicolau;Lucas A. Thomaz;Luis M. N. Tavora;Sergio M. M. Faria
{"title":"Enhancing Learning-Based Cross-Modality Prediction for Lossless Medical Imaging Compression","authors":"Daniel S. Nicolau;Lucas A. Thomaz;Luis M. N. Tavora;Sergio M. M. Faria","doi":"10.1109/OJSP.2025.3564830","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3564830","url":null,"abstract":"Multimodal medical imaging, which involves the simultaneous acquisition of different modalities, enhances diagnostic accuracy and provides comprehensive visualization of anatomy and physiology. However, this significantly increases data size, posing storage and transmission challenges. Standard image codecs fail to properly exploit cross-modality redundancies, limiting coding efficiency. In this paper, a novel approach is proposed to enhance the compression gain and to reduce the computational complexity of a lossless cross-modality coding scheme for multimodal image pairs. The scheme uses a deep learning-based approach with Image-to-Image translation based on a Generative Adversarial Network architecture to generate an estimated image of one modality from its cross-modal pair. Two different approaches for inter-modal prediction are considered: one using the original and the estimated images for the inter-prediction scheme and another considering a weighted sum of both images. Subsequently, a decider based on a Convolutional Neural Network is employed to estimate the best coding approach to be selected among the two alternatives, before the coding step. A novel loss function that considers the decision accuracy and the compression gain of the chosen prediction approach is applied to improve the decision-making task. The experimental results on PET-CT and PET-MRI datasets demonstrate that the proposed approach improves by 11.76% and 4.61% the compression efficiency when compared with the single modality intra-coding of the Versatile Video Coding. Additionally, this approach allows to reduce the computational complexity by almost half in comparison to selecting the most compression-efficient after testing both schemes.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"489-497"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10978054","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Content-Adaptive Inference for State-of-the-Art Learned Video Compression
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-04-28 DOI: 10.1109/OJSP.2025.3564817
Ahmet Bilican;M. Akın Yılmaz;A. Murat Tekalp
{"title":"Content-Adaptive Inference for State-of-the-Art Learned Video Compression","authors":"Ahmet Bilican;M. Akın Yılmaz;A. Murat Tekalp","doi":"10.1109/OJSP.2025.3564817","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3564817","url":null,"abstract":"While the BD-rate performance of recent learned video codec models in both low-delay and random-access modes exceed that of respective modes of traditional codecs on average over common benchmarks, the performance improvements for individual videos with complex/large motions is much smaller compared to scenes with simple motion. This is related to the inability of a learned encoder model to generalize to motion vector ranges that have not been seen in the training set, which causes loss of performance in both coding of flow fields as well as frame prediction and coding. As a remedy, we propose a generic (model-agnostic) framework to control the scale of motion vectors in a scene during inference (encoding) to approximately match the range of motion vectors in the test and training videos by adaptively downsampling frames. This results in down-scaled motion vectors enabling: i) better flow estimation; hence, frame prediction and ii) more efficient flow compression. We show that the proposed framework for content-adaptive inference improves the BD-rate performance of already state-of-the-art low-delay video codec DCVC-FM by up to 41% on individual videos without any model fine tuning. We present ablation studies to show measures of motion and scene complexity can be used to predict the effectiveness of the proposed framework.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"498-506"},"PeriodicalIF":2.9,"publicationDate":"2025-04-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10978087","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143943980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adversarial Robustness of Self-Supervised Learning Features
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-04-21 DOI: 10.1109/OJSP.2025.3562797
Nicholas Mehlman;Shri Narayanan
{"title":"Adversarial Robustness of Self-Supervised Learning Features","authors":"Nicholas Mehlman;Shri Narayanan","doi":"10.1109/OJSP.2025.3562797","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3562797","url":null,"abstract":"As deep learning models have proliferated, concerns about their reliability and security have also increased. One significant challenge is understanding adversarial perturbations, which can alter a model's predictions despite being very small in magnitude. Prior work has proposed that this phenomenon results from a fundamental deficit in supervised learning, by which classifiers exploit whatever input features are more predictive, regardless of whether or not these features are robust to adversarial attacks. In this paper, we consider feature robustness in the context of contrastive self-supervised learning methods that have become especially common in recent years. Our findings suggest that the features learned during self-supervision are, in fact, more resistant to adversarial perturbations than those generated from supervised learning. However, we also find that these self-supervised features exhibit poorer inter-class disentanglement, limiting their contribution to overall classifier robustness.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"468-477"},"PeriodicalIF":2.9,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10971198","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143908351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Array Design for Angle of Arrival Estimation Using the Worst-Case Two-Target Cramér-Rao Bound
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-04-07 DOI: 10.1109/OJSP.2025.3558686
Costas A. Kokke;Mario Coutino;Richard Heusdens;Geert Leus
{"title":"Array Design for Angle of Arrival Estimation Using the Worst-Case Two-Target Cramér-Rao Bound","authors":"Costas A. Kokke;Mario Coutino;Richard Heusdens;Geert Leus","doi":"10.1109/OJSP.2025.3558686","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3558686","url":null,"abstract":"Sparse array design is used to help reduce computational, hardware, and power requirements compared to uniform arrays while maintaining acceptable performance. Although minimizing the Cramér-Rao bound has been adopted previously for sparse sensing, it did not consider multiple targets and unknown target directions. To handle the unknown target directions when optimizing the Cramér-Rao bound, we propose to use the worst-case Cramér-Rao bound of two uncorrelated equal power sources with arbitrary angles. This new worst-case two-target Cramér-Rao bound metric has some resemblance to the peak sidelobe level metric which is commonly used in unknown multi-target scenarios. We cast the sensor selection problem for 3-D arrays using the worst-case two-target Cramér-Rao bound as a convex semi-definite program and obtain the binary selection by randomized rounding. We illustrate the proposed method through numerical examples, comparing it to solutions obtained by minimizing the single-target Cramér-Rao bound, minimizing the Cramér-Rao bound for known target angles, the concentric rectangular array and the boundary array. We show that our method selects a combination of edge and center elements, which contrasts with solutions obtained by minimizing the single-target Cramér-Rao bound. The proposed selections also exhibit lower peak sidelobe levels without the need for sidelobe level constraints.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"453-467"},"PeriodicalIF":2.9,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10955272","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143896495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unified Analysis of Decentralized Gradient Descent: A Contraction Mapping Framework
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-04-02 DOI: 10.1109/OJSP.2025.3557332
Erik G. Larsson;Nicolò Michelusi
{"title":"Unified Analysis of Decentralized Gradient Descent: A Contraction Mapping Framework","authors":"Erik G. Larsson;Nicolò Michelusi","doi":"10.1109/OJSP.2025.3557332","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3557332","url":null,"abstract":"The decentralized gradient descent (DGD) algorithm, and its sibling, diffusion, are workhorses in decentralized machine learning, distributed inference and estimation, and multi-agent coordination. We propose a novel, principled framework for the analysis of DGD and diffusion for strongly convex, smooth objectives, and arbitrary undirected topologies, using contraction mappings coupled with a result called the mean Hessian theorem (MHT). The use of these tools yields tight convergence bounds, both in the noise-free and noisy regimes. While these bounds are qualitatively similar to results found in the literature, our approach using contractions together with the MHT decouples the algorithm dynamics (how quickly the algorithm converges to its fixed point) from its asymptotic convergence properties (how far the fixed point is from the global optimum). This yields a simple, intuitive analysis that is accessible to a broader audience. Extensions are provided to multiple local gradient updates, time-varying step sizes, noisy gradients (stochastic DGD and diffusion), communication noise, and random topologies.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"507-529"},"PeriodicalIF":2.9,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10947567","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144117149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VAMP-Based Kalman Filtering Under Non-Gaussian Process Noise
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-04-02 DOI: 10.1109/OJSP.2025.3557271
Tiancheng Gao;Mohamed Akrout;Faouzi Bellili;Amine Mezghani
{"title":"VAMP-Based Kalman Filtering Under Non-Gaussian Process Noise","authors":"Tiancheng Gao;Mohamed Akrout;Faouzi Bellili;Amine Mezghani","doi":"10.1109/OJSP.2025.3557271","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3557271","url":null,"abstract":"Estimating time-varying signals becomes particularly challenging in the face of non-Gaussian (e.g., sparse) and/or rapidly time-varying process noise. By building upon the recent progress in the approximate message passing (AMP) paradigm, this paper unifies the vector variant of AMP (i.e., VAMP) with the Kalman filter (KF) into a unified message passing framework. The new algorithm (coined VAMP-KF) does not restrict the process noise to a specific structure (e.g., same support over time), thereby accounting for non-Gaussian process noise sources that are uncorrelated both component-wise and over time. For the sake of theoretical performance prediction, we conduct a state evolution (SE) analysis of the proposed algorithm and show its consistency with the asymptotic empirical mean-squared error (MSE). Numerical results using sparse noise dynamics with different sparsity ratios demonstrate unambiguously the effectiveness of the proposed VAMP-KF algorithm and its superiority over state-of-the-art algorithms both in terms of reconstruction accuracy and computational complexity.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"434-452"},"PeriodicalIF":2.9,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10947573","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143908383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Swin Transformer With Spatial and Local Context Augmentation for Enhanced Semantic Segmentation of Remote Sensing Images
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-03-23 DOI: 10.1109/OJSP.2025.3573202
Rong-Xing Ding;Yi-Han Xu;Gang Yu;Wen Zhou;Ding Zhou
{"title":"Swin Transformer With Spatial and Local Context Augmentation for Enhanced Semantic Segmentation of Remote Sensing Images","authors":"Rong-Xing Ding;Yi-Han Xu;Gang Yu;Wen Zhou;Ding Zhou","doi":"10.1109/OJSP.2025.3573202","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3573202","url":null,"abstract":"Semantic segmentation of remote sensing images is extensively used in crop cover and type analysis, and environmental monitoring. In the semantic segmentation of remote sensing images, owning to the specificity of remote sensing images, not only the local context is required, but also the global context information makes an important role in it. Inspired by the powerful global modelling capability of Swin Transformer, we propose the LSENet network, which follows the encoder-decoder architecture of the UNet network. In encoding phase, we propose spatial enhancement module (SEM), which helps Swin Transformer further enhance feature extraction by encoding spatial information. In decoding stage, we propose local enhancement module (LEM), which is embedded in the Swin Transformer to improve the Swin Transformer to assist the network to obtain more local semantic information so as to classify pixels more accurately, especially in the edge region, the adding of LEM enables to obtain smoother edges. The experimental results on the Vaihingen and Potsdam datasets demonstrate the effectiveness of our proposed method. Specifically, the mIoU metric is 78.58% on the Potsdam dataset, 72.59% on the Vaihingen dataset and 64.49% on the OpenEarthMap dataset.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"608-620"},"PeriodicalIF":2.9,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11011931","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144299229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Streaming LiDAR Scene Flow Estimation
IF 2.9
IEEE Open Journal of Signal Processing Pub Date: 2025-03-23 DOI: 10.1109/OJSP.2025.3572759
Mazen Abdelfattah;Z. Jane Wang;Rabab Ward
{"title":"Streaming LiDAR Scene Flow Estimation","authors":"Mazen Abdelfattah;Z. Jane Wang;Rabab Ward","doi":"10.1109/OJSP.2025.3572759","DOIUrl":"https://doi.org/10.1109/OJSP.2025.3572759","url":null,"abstract":"Safe navigation of autonomous vehicles requires accurate and rapid understanding of their dynamic 3D environment. Scene flow estimation models this dynamic environment by predicting point motion between sequential point cloud scans, and is crucial for safe navigation. Existing state-of-the-art scene flow estimation methods, based on test-time optimization, achieve high accuracy but suffer from significant latency, limiting their applicability in real-time onboard systems. This latency stems from both the iterative test-time optimization process and the inherent delay of waiting for the LiDAR to acquire a complete <inline-formula><tex-math>$360^circ$</tex-math></inline-formula> scan. To overcome this bottleneck, we introduce a novel <italic>streaming</i> scene flow framework leveraging the sequential nature of LiDAR slice acquisition, demonstrating a dramatic reduction in end-to-end latency. Instead of waiting for the full <inline-formula><tex-math>$360^circ$</tex-math></inline-formula> scan, our method immediately estimates scene flow using each LiDAR slice once it is captured. To mitigate the reduced context of individual slices, we propose a novel contextual augmentation technique that expands the target slice by a small angular margin, incorporating crucial slice boundary information. Furthermore, to enhance test-time optimization within our streaming framework, our novel initialization scheme ’warm-starts' the current optimization using optimized parameters from the preceding slice. This achieves substantial speedups while maintaining, and in some cases surpassing, full-scan accuracy. We rigorously evaluate our approach on the challenging Waymo and Argoverse datasets, demonstrating significant latency reduction without compromising scene flow quality. This work paves the way for deploying high-accuracy, real-time scene flow algorithms in autonomous driving, advancing the field towards more responsive and safer autonomous systems.","PeriodicalId":73300,"journal":{"name":"IEEE open journal of signal processing","volume":"6 ","pages":"590-598"},"PeriodicalIF":2.9,"publicationDate":"2025-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11012710","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144243669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0