Computer Speech and Language: Latest Articles

Complementary regional energy features for spoofed speech detection
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-12-16 DOI: 10.1016/j.csl.2023.101602
Gökay Dişken
Automatic speaker verification systems are vulnerable to spoof attacks such as voice conversion, text-to-speech, and replayed speech. As the security of biometric systems is vital, many countermeasures have been developed for spoofed speech detection. To keep pace with recent developments in speech synthesis, publicly available datasets have become more and more challenging (e.g., the ASVspoof 2019 and 2021 datasets). A variety of replay attack configurations were also considered in those datasets, as replay attacks require no expertise and are hence easily performed. This work utilizes regional energy features, which are experimentally shown to be more effective than traditional frame-based energy features. The proposed energy features are independent of the utterance length and are extracted over nonoverlapping time–frequency regions of the magnitude spectrum. Different configurations are considered in the experiments to verify the regional energy features' contribution to performance. First, a light convolutional neural network – long short-term memory (LCNN–LSTM) model with linear frequency cepstral coefficients is used to determine the optimal number of regional energy features. Then, an SE-Res2Net model with log power spectrogram features is used, achieving results comparable to the state of the art for the ASVspoof 2019 logical access condition. The physical access condition from the ASVspoof 2019 dataset, and the logical access and deepfake conditions from the ASVspoof 2021 dataset, are also used in the experiments. The regional energy features achieved improvements for all conditions with almost no additional computational or memory load (less than a 1% increase in model size for SE-Res2Net). The main advantages of the regional energy features are (i) capturing nonspeech segments and (ii) extracting band-limited information; both aspects are found to be discriminative for spoofed speech detection.
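As an editor's illustration (not the authors' exact implementation), utterance-length-independent features computed over nonoverlapping time–frequency regions of the magnitude spectrum can be sketched as follows; the grid size (4 × 8 regions) and the log compression are assumptions:

```python
import numpy as np

def regional_energy_features(mag_spec, n_time_regions=4, n_freq_regions=8):
    """Sum energy over a fixed grid of nonoverlapping time-frequency regions.

    mag_spec: (n_freq_bins, n_frames) magnitude spectrogram.
    Returns a fixed-length vector of n_time_regions * n_freq_regions values,
    independent of the utterance length.
    """
    power = mag_spec ** 2
    # Split both axes into equal nonoverlapping regions; np.array_split
    # tolerates sizes that do not divide evenly.
    feats = []
    for t_block in np.array_split(power, n_time_regions, axis=1):
        for f_block in np.array_split(t_block, n_freq_regions, axis=0):
            feats.append(f_block.sum())
    return np.log(np.asarray(feats) + 1e-10)  # log-energy for dynamic range

rng = np.random.default_rng(0)
feat_short = regional_energy_features(np.abs(rng.normal(size=(257, 50))))
feat_long = regional_energy_features(np.abs(rng.normal(size=(257, 500))))
print(feat_short.shape, feat_long.shape)  # both (32,): length-independent
```

Because the grid is fixed, a 0.5 s clip and a 5 s clip both yield the same 32-dimensional vector, which is what allows these features to be appended to fixed-size model inputs at negligible cost.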
Citations: 0
Rep-MCA-former: An efficient multi-scale convolution attention encoder for text-independent speaker verification
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-12-10 DOI: 10.1016/j.csl.2023.101600
Xiaohu Liu, Defu Chen, Xianbao Wang, Sheng Xiang, Xuwen Zhou
In many speaker verification tasks, the quality of the speaker embedding is an important factor affecting system performance. Advanced speaker embedding extraction networks aim to capture richer speaker features through multi-branch network architectures. Recently, speaker verification systems based on transformer encoders have received much attention and achieved many satisfactory results, because transformer encoders can efficiently extract global speaker features (e.g., MFA-Conformer). However, the large number of model parameters and high computational latency are common problems faced by these approaches, making them difficult to deploy on resource-constrained edge devices. To address this issue, this paper proposes an effective, lightweight transformer model (MCA-former) with multi-scale convolutional self-attention (MCA), which performs multi-scale modeling and channel modeling along the temporal direction of the input at low computational cost. In addition, for the inference phase, we develop a systematic re-parameterization method that converts the multi-branch network structure into a single-path topology, effectively improving inference speed. We investigate the performance of the MCA-former for speaker verification on the VoxCeleb1 test set. The results show that the MCA-based transformer model is more advantageous in terms of parameter count and inference efficiency. By applying the re-parameterization, the inference speed of the model is increased by about 30%, and memory consumption is significantly reduced.
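The re-parameterization idea rests on the linearity of convolution: parallel linear branches summed at training time can be folded into a single kernel for inference. A minimal 1-D sketch in the RepVGG style (not the paper's exact MCA-former procedure; kernels and sizes are illustrative):

```python
import numpy as np

# Two parallel conv branches used at training time: a 3-tap kernel and a
# pointwise (1-tap) kernel whose outputs are summed.
k3 = np.array([0.2, 0.5, 0.3])
k1 = np.array([0.7])

# Fold: zero-pad the 1-tap kernel to the larger size and add the kernels.
k_merged = k3 + np.pad(k1, (1, 1))

x = np.random.default_rng(1).normal(size=64)
# 'same'-mode convolution keeps the signal length.
y_branches = np.convolve(x, k3, mode="same") + np.convolve(x, k1, mode="same")
y_merged = np.convolve(x, k_merged, mode="same")
print(np.allclose(y_branches, y_merged))  # True: one conv replaces two
```

The merged network computes exactly the same function with one branch instead of two, which is where the reported inference speed-up and memory savings come from.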
Citations: 0
New research on monaural speech segregation based on quality assessment
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-12-05 DOI: 10.1016/j.csl.2023.101601
Xiaoping Xie, Can Li, Dan Tian, Rufeng Shen, Fei Ding
Speech enhancement (SE) is a pivotal technology for improving the quality and intelligibility of speech signals. Nevertheless, when processing speech under high signal-to-noise ratio (SNR) conditions, conventional SE techniques may inadvertently reduce the perceptual evaluation of speech quality (PESQ) and short-time objective intelligibility (STOI) scores. This article introduces the Non-Intrusive Speech Quality Assessment (NISQA) algorithm into SE systems. By comparing pre- and post-enhancement speech quality scores, the system decides whether a given speech signal warrants enhancement processing, thereby mitigating potential deterioration in PESQ and STOI. Furthermore, this study examines the effect of five prevalent speech features, namely Mel-frequency cepstral coefficients (MFCC), gammatone frequency cepstral coefficients (GFCC), relative spectral transformed perceptual linear prediction coefficients (RASTA-PLP), amplitude modulation spectrogram (AMS), and multi-resolution cochleagram (MRCG), on PESQ and STOI under varying noise conditions. Experimental results show that MRCG consistently emerges as the optimal and most stable feature for STOI, while the feature yielding the highest PESQ score depends in complex ways on the background noise type, the SNR level, and the compatibility of the noise with the speech signal. Consequently, we propose an SE methodology founded on quality assessment and feature selection, enabling adaptive selection of the optimal feature for each background noise scenario and thereby maintaining the best enhancement effect with regard to PESQ.
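The quality-gating step can be sketched as a simple guard around the enhancer. This is an editor's illustration of the idea only: `predict_quality` and `enhance` stand in for NISQA and a real SE model, and the toy scoring functions below are hypothetical:

```python
import numpy as np

def quality_gated_enhance(x, enhance, predict_quality):
    """Apply enhancement only if the predicted quality score improves."""
    y = enhance(x)
    # Keep the enhanced signal only when the non-intrusive quality score
    # rises; otherwise pass the input through unchanged, avoiding the
    # PESQ/STOI degradation seen at high SNR.
    return y if predict_quality(y) > predict_quality(x) else x

# Toy stand-ins: "quality" is the negative mean absolute amplitude
# (pretending the signal is residual noise), "enhancement" halves it.
predict = lambda s: -np.abs(s).mean()
enhance = lambda s: 0.5 * s

x = np.ones(10)
out = quality_gated_enhance(x, enhance, predict)
print(out[0])    # 0.5: enhancement accepted, toy score improved

worse = quality_gated_enhance(x, lambda s: 2 * s, predict)
print(worse[0])  # 1.0: enhancement rejected, input passed through
```

The guard is model-agnostic: any non-intrusive quality predictor can be dropped in, since only the comparison of the two scores matters.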
Citations: 0
Integrating frame-level boundary detection and deepfake detection for locating manipulated regions in partially spoofed audio forgery attacks
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-12-05 DOI: 10.1016/j.csl.2023.101597
Zexin Cai, Ming Li
Partially fake audio, a variant of deepfake that manipulates audio utterances by incorporating fake or externally sourced bona fide audio clips, constitutes a growing threat as an audio forgery attack affecting both human listeners and artificial intelligence applications. Researchers have recently developed valuable databases to aid the development of effective countermeasures against such attacks. While existing countermeasures mainly identify partially fake audio at the level of entire utterances or segments, this paper introduces a paradigm shift by proposing frame-level systems. These systems are designed to detect manipulated utterances and pinpoint the specific regions within partially fake audio where the manipulation occurs. Our approach leverages acoustic features extracted from large-scale self-supervised pre-trained models, delivering promising results on diverse, publicly accessible databases. Additionally, we study the integration of boundary detection and deepfake detection systems, exploring their potential synergies and shortcomings. Our techniques achieve state-of-the-art performance on the test set of Track 2 of the ADD 2022 challenge, with an equal error rate of 4.4%. Furthermore, our methods exhibit remarkable performance in locating manipulated regions in Track 2 of the ADD 2023 challenge, yielding a final ADD score of 0.6713 and securing the top position.
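Once a frame-level system emits a fake probability per frame, locating manipulated regions reduces to thresholding and extracting contiguous runs. A minimal sketch of that post-processing step (the detection model itself, and the threshold value, are not from the paper):

```python
import numpy as np

def locate_regions(frame_probs, threshold=0.5):
    """Turn per-frame fake probabilities into (start, end) frame spans."""
    flags = np.asarray(frame_probs) > threshold
    regions, start = [], None
    for i, flagged in enumerate(flags):
        if flagged and start is None:
            start = i                      # a manipulated run begins
        elif not flagged and start is not None:
            regions.append((start, i))     # half-open [start, end) span
            start = None
    if start is not None:                  # run extends to the last frame
        regions.append((start, len(flags)))
    return regions

probs = [0.1, 0.2, 0.9, 0.95, 0.8, 0.1, 0.7, 0.6]
print(locate_regions(probs))  # [(2, 5), (6, 8)]
```

Frame indices convert to timestamps by multiplying by the frame hop, which is how such spans would be mapped back onto the waveform.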
Citations: 0
A knowledge-augmented heterogeneous graph convolutional network for aspect-level multimodal sentiment analysis
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-11-23 DOI: 10.1016/j.csl.2023.101587
Yujie Wan, Yuzhong Chen, Jiali Lin, Jiayuan Zhong, Chen Dong
Aspect-level multimodal sentiment analysis has become a new challenge in the field of sentiment analysis. Although there has been significant progress on this task with image–text data, existing works do not fully handle implicit sentiment expression, nor do they fully exploit the important information available from external knowledge and image tags. To address these problems, we propose a knowledge-augmented heterogeneous graph convolutional network (KAHGCN). First, we propose a dynamic knowledge selection algorithm to select the most relevant external knowledge, enhancing KAHGCN's ability to understand implicit sentiment expression in review texts. Second, we propose a graph construction strategy to build a heterogeneous graph that aggregates review texts, image tags, and external knowledge. Third, we propose a multilayer heterogeneous graph convolutional network to strengthen the interaction between information from external knowledge, review texts, and image tags. Experimental results on two datasets demonstrate the effectiveness of KAHGCN.
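To make the heterogeneous-graph idea concrete, here is an editor's toy sketch of a graph mixing the three node types, followed by one mean-aggregation GCN layer. The edge rules, example nodes, and feature dimension are all illustrative assumptions, not the paper's construction:

```python
import numpy as np

# Three node types: review-text tokens, image tags, external-knowledge nodes.
tokens = ["food", "was", "amazing"]
tags = ["plate", "restaurant"]
knowledge = ["amazing->positive"]
nodes = tokens + tags + knowledge
idx = {n: i for i, n in enumerate(nodes)}

A = np.zeros((len(nodes), len(nodes)))
def connect(a, b):
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1  # undirected edge

# Illustrative edge rules: sequential edges between adjacent tokens,
# tag-to-aspect-token edges, and a knowledge edge attached to the token
# the knowledge entry explains.
for a, b in zip(tokens, tokens[1:]):
    connect(a, b)
for t in tags:
    connect(t, "food")
connect("amazing->positive", "amazing")

A_hat = A + np.eye(len(nodes))           # add self-loops
deg = A_hat.sum(axis=1)
H = np.random.default_rng(2).normal(size=(len(nodes), 4))  # node features
H_next = (A_hat / deg[:, None]) @ H      # one mean-aggregation GCN layer
print(H_next.shape)  # (6, 4)
```

Each layer lets a token's representation absorb information from connected tags and knowledge nodes, which is the mechanism the multilayer network stacks.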
Citations: 0
A semi-supervised high-quality pseudo labels algorithm based on multi-constraint optimization for speech deception detection
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-11-22 DOI: 10.1016/j.csl.2023.101586
Huawei Tao, Hang Yu, Man Liu, Hongliang Fu, Chunhua Zhu, Yue Xie
Deep learning-based speech deception detection relies on a large amount of labeled data. However, when collecting speech deception detection data, distinguishing truth from lies requires professional expertise, which greatly limits the number of annotated samples. Improving the accuracy of lie detection with insufficient annotated data is the focus of this study. This paper proposes a semi-supervised high-quality pseudo-label algorithm based on multi-constraint optimization (HQPL-MC) for speech deception detection. First, the algorithm exploits the latent feature information of unlabeled data using deep auto-encoder networks; second, it achieves entropy minimization with the help of pseudo-labeling to reduce the class overlap between the distributions of truthful and deceptive data; finally, it improves the quality of the pseudo labels by optimizing the unlabeled loss and the reconstruction loss, further enhancing classification performance when labeled data are insufficient. We recorded an interview-style corpus ourselves and use it, together with the Columbia/SRI/Colorado (CSC) corpus, for the experimental evaluation. The detection performance of the proposed algorithm surpasses most state-of-the-art algorithms.
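The pseudo-labeling step behind entropy minimization can be sketched as a confidence filter: unlabeled examples whose predicted class probability clears a threshold receive hard pseudo labels, and ambiguous examples are held out. This is a generic sketch of the technique, not the HQPL-MC multi-constraint optimization itself; the threshold is an assumption:

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Return indices and hard labels for confidently predicted examples.

    probs: (n_examples, n_classes) predicted class probabilities for
    unlabeled data. Training on only the confident hard labels pushes the
    model toward low-entropy, well-separated predictions.
    """
    probs = np.asarray(probs)
    keep = probs.max(axis=1) >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

probs = [[0.97, 0.03],   # confident "truth"
         [0.55, 0.45],   # ambiguous: held out
         [0.05, 0.95]]   # confident "lie"
idx, labels = select_pseudo_labels(probs)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

In a training loop, the selected pairs are added to the labeled pool each round, with the threshold controlling the trade-off between pseudo-label quantity and quality.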
Citations: 0
Representation learning strategies to model pathological speech: Effect of multiple spectral resolutions
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-11-15 DOI: 10.1016/j.csl.2023.101584
Gabriel Figueiredo Miller, Juan Camilo Vásquez-Correa, Juan Rafael Orozco-Arroyave, Elmar Nöth
This paper considers a representation learning strategy to model speech signals from patients with Parkinson's disease, with the goals of predicting the presence of the disease and evaluating the level of degradation of a patient's speech. In particular, we propose a novel fusion strategy that combines wideband and narrowband spectral resolutions using an autoencoder-based representation learning strategy, called the multi-spectral autoencoder. The proposed model classifies the speech of Parkinson's disease patients with accuracy up to 97%, and assesses the dysarthria severity of Parkinson's disease patients with a Spearman correlation up to 0.79. These results outperform those reported in the literature for the same problem on the same corpus.
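The two spectral resolutions being fused come from the analysis window length: a short window yields a wideband spectrogram (fine time resolution), a long window a narrowband one (fine frequency resolution). A minimal sketch with assumed, typical window lengths (5 ms and 30 ms; not necessarily the paper's settings):

```python
import numpy as np

def spectrogram(x, win, hop):
    """Magnitude spectrogram via a framed, Hann-windowed real FFT."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (bins, frames)

fs = 16000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s of a 440 Hz tone

wide = spectrogram(x, win=int(0.005 * fs), hop=int(0.002 * fs))    # 5 ms window
narrow = spectrogram(x, win=int(0.030 * fs), hop=int(0.010 * fs))  # 30 ms window

# Narrowband: more frequency bins, fewer frames; wideband: the reverse.
print(wide.shape, narrow.shape)
```

A multi-spectral autoencoder can then encode each representation separately and fuse the latent codes, letting the model draw on both temporal detail (voicing onsets) and spectral detail (harmonic structure).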
Citations: 0
Though this be hesitant, yet there is method in 't: Effects of disfluency patterns in neural speech synthesis for cultural heritage presentations
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-11-11 DOI: 10.1016/j.csl.2023.101585
Loredana Schettino, Antonio Origlia, Francesco Cutugno
This study presents the results of two perception experiments evaluating the effect that specific patterns of disfluencies have on people listening to synthetic speech. We consider the particular case of cultural heritage presentations and propose a linguistic model to support the positioning of disfluencies throughout utterances in Italian. A state-of-the-art speech synthesizer based on deep neural networks is used to prepare a set of experimental stimuli, and two experiments provide both subjective evaluations and behavioural assessments from human subjects. Results show that synthetic utterances including disfluencies predicted by the linguistic model are judged more natural, and that the presence of disfluencies improves listeners' recall of the presented information.
Citations: 0
Dual Knowledge Distillation for neural machine translation
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-11-09 DOI: 10.1016/j.csl.2023.101583
Yuxian Wan, Wenlin Zhang, Zhen Li, Hao Zhang, Yanxia Li
Existing knowledge distillation methods use large amounts of bilingual data and focus on mining the corresponding knowledge distribution between the source and target languages. However, for some language pairs, bilingual data is not abundant. To make better use of both monolingual and limited bilingual data, we propose a new knowledge distillation method called Dual Knowledge Distillation (DKD). For monolingual data, we use a self-distillation strategy that combines self-training and knowledge distillation, letting the encoder extract more consistent monolingual representations. For bilingual data, on top of the k-Nearest-Neighbor Knowledge Distillation (kNN-KD) method, a similar self-distillation strategy is adopted as a consistency regularization that forces the decoder to produce consistent output. Experiments on standard datasets, multi-domain translation datasets, and low-resource datasets show that DKD achieves consistent improvements over state-of-the-art baselines, including kNN-KD.
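For readers unfamiliar with the kNN component underlying kNN-KD, the retrieval step can be sketched as follows: target tokens stored in a datastore (keyed by decoder hidden states) are retrieved by distance, softmax-weighted, and interpolated with the model's own distribution. The datastore contents, vocabulary size, and interpolation weight below are illustrative assumptions:

```python
import numpy as np

def knn_distribution(query, keys, values, vocab_size, k=2, temp=1.0):
    """Distance-weighted token distribution from the k nearest datastore entries."""
    d = np.linalg.norm(keys - query, axis=1)  # distances to all stored keys
    nn = np.argsort(d)[:k]                    # indices of the k nearest
    w = np.exp(-d[nn] / temp)
    w /= w.sum()                              # softmax over negative distance
    p = np.zeros(vocab_size)
    for token, weight in zip(values[nn], w):
        p[token] += weight                    # aggregate weight per token
    return p

keys = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])  # stored hidden states
values = np.array([7, 7, 2])                           # stored target tokens
p_knn = knn_distribution(np.zeros(2), keys, values, vocab_size=10)

p_model = np.full(10, 0.1)                   # the model's own distribution
lam = 0.5
p_final = lam * p_knn + (1 - lam) * p_model  # interpolated prediction
print(p_final.argmax())  # 7
```

kNN-KD then uses this retrieval-augmented distribution as the teacher signal during training, rather than only at decoding time.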
Citations: 0
Speaking to remember: Model-based adaptive vocabulary learning using automatic speech recognition
IF 4.3 · CAS Q3 · Computer Science
Computer Speech and Language Pub Date : 2023-10-31 DOI: 10.1016/j.csl.2023.101578
Thomas Wilschut, Florian Sense, Hedderik van Rijn
Memorizing vocabulary is a crucial aspect of learning a new language. While personalized learning and intelligent tutoring systems can assist learners in memorizing vocabulary, most such systems are limited to typing-based learning and do not allow for speech practice. Here, we aim to compare the efficiency of typing-based and speech-based vocabulary learning, and we explore the possibility of improving speech-based learning with an adaptive algorithm based on a cognitive model of memory retrieval. We combined a response-time-based algorithm for adaptive item scheduling, originally developed for typing-based learning, with automatic speech recognition technology, and tested the system with 50 participants. We show that typing-based and speech-based learning result in similar learning outcomes, and that the model-based adaptive scheduling algorithm improves recall performance relative to traditional learning in both modalities, both immediately after learning and on follow-up tests. These results can inform the development of vocabulary learning applications that, unlike traditional systems, allow for speech-based input.
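Adaptive schedulers of this kind are typically built on an ACT-R-style activation equation: each practice event leaves a decaying memory trace, and an item is re-presented when its summed activation falls toward the retrieval threshold. The sketch below shows that scheduling rule only; the paper's estimation of per-item decay from response times is not reproduced, and the decay and threshold values are assumptions:

```python
import math

def activation(presentation_times, now, decay=0.5):
    """ACT-R-style activation: log of summed power-law decaying traces."""
    return math.log(sum((now - t) ** (-decay) for t in presentation_times))

def needs_rehearsal(presentation_times, now, threshold=-0.8):
    # Re-present an item when activation drops below the retrieval
    # threshold, i.e. just before the model predicts it will be forgotten.
    return activation(presentation_times, now) < threshold

times = [0.0, 10.0, 30.0]            # seconds at which the word was practiced
print(needs_rehearsal(times, 31.0))  # False: just practiced, still active
print(needs_rehearsal(times, 300.0)) # True: traces have decayed
```

Because repeated practice adds traces, well-learned items drift below threshold ever more slowly, which is what produces the expanding rehearsal intervals characteristic of these systems.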
Citations: 0