arXiv - CS - Sound: Latest Publications

Enhancing Modal Fusion by Alignment and Label Matching for Multimodal Emotion Recognition
arXiv - CS - Sound Pub Date : 2024-08-18 DOI: arxiv-2408.09438
Qifei Li, Yingming Gao, Yuhua Wen, Cong Wang, Ya Li
{"title":"Enhancing Modal Fusion by Alignment and Label Matching for Multimodal Emotion Recognition","authors":"Qifei Li, Yingming Gao, Yuhua Wen, Cong Wang, Ya Li","doi":"arxiv-2408.09438","DOIUrl":"https://doi.org/arxiv-2408.09438","url":null,"abstract":"To address the limitation in multimodal emotion recognition (MER) performance\u0000arising from inter-modal information fusion, we propose a novel MER framework\u0000based on multitask learning where fusion occurs after alignment, called\u0000Foal-Net. The framework is designed to enhance the effectiveness of modality\u0000fusion and includes two auxiliary tasks: audio-video emotion alignment (AVEL)\u0000and cross-modal emotion label matching (MEM). First, AVEL achieves alignment of\u0000emotional information in audio-video representations through contrastive\u0000learning. Then, a modal fusion network integrates the aligned features.\u0000Meanwhile, MEM assesses whether the emotions of the current sample pair are the\u0000same, providing assistance for modal information fusion and guiding the model\u0000to focus more on emotional information. The experimental results conducted on\u0000IEMOCAP corpus show that Foal-Net outperforms the state-of-the-art methods and\u0000emotion alignment is necessary before modal fusion.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142198488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
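The AVEL task in the Foal-Net abstract above aligns audio and video emotion representations via contrastive learning. Below is a minimal sketch of a symmetric contrastive alignment loss in that spirit; the temperature, embedding size, and batch size are illustrative assumptions, not the paper's settings.

```python
# Sketch of an InfoNCE-style audio-video alignment loss (assumed setup).
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(audio_emb, video_emb, temperature=0.07):
    """Symmetric contrastive loss that pulls paired audio/video embeddings together.

    audio_emb, video_emb: (batch, dim) projections of the two modalities.
    """
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(video_emb, dim=-1)
    logits = a @ v.t() / temperature                  # (batch, batch) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Each audio clip should match its own video clip, and vice versa.
    loss_a2v = F.cross_entropy(logits, targets)
    loss_v2a = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_a2v + loss_v2a)

# Example: align a batch of 8 randomly projected audio/video features.
audio = torch.randn(8, 256)
video = torch.randn(8, 256)
print(contrastive_alignment_loss(audio, video).item())
```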
A New Dataset, Notation Software, and Representation for Computational Schenkerian Analysis
arXiv - CS - Sound Pub Date : 2024-08-13 DOI: arxiv-2408.07184
Stephen Ni-Hahn, Weihan Xu, Jerry Yin, Rico Zhu, Simon Mak, Yue Jiang, Cynthia Rudin
{"title":"A New Dataset, Notation Software, and Representation for Computational Schenkerian Analysis","authors":"Stephen Ni-Hahn, Weihan Xu, Jerry Yin, Rico Zhu, Simon Mak, Yue Jiang, Cynthia Rudin","doi":"arxiv-2408.07184","DOIUrl":"https://doi.org/arxiv-2408.07184","url":null,"abstract":"Schenkerian Analysis (SchA) is a uniquely expressive method of music\u0000analysis, combining elements of melody, harmony, counterpoint, and form to\u0000describe the hierarchical structure supporting a work of music. However,\u0000despite its powerful analytical utility and potential to improve music\u0000understanding and generation, SchA has rarely been utilized by the computer\u0000music community. This is in large part due to the paucity of available\u0000high-quality data in a computer-readable format. With a larger corpus of\u0000Schenkerian data, it may be possible to infuse machine learning models with a\u0000deeper understanding of musical structure, thus leading to more \"human\"\u0000results. To encourage further research in Schenkerian analysis and its\u0000potential benefits for music informatics and generation, this paper presents\u0000three main contributions: 1) a new and growing dataset of SchAs, the largest in\u0000human- and computer-readable formats to date (>140 excerpts), 2) a novel\u0000software for visualization and collection of SchA data, and 3) a novel,\u0000flexible representation of SchA as a heterogeneous-edge graph data structure.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142198489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
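The third contribution above is a heterogeneous-edge graph representation of analyses. A minimal sketch of what such a structure could look like is below; the node and edge attribute names ("adjacency", "prolongation", etc.) are hypothetical and not the paper's actual schema.

```python
# Sketch of a heterogeneous-edge graph for music-analysis data (assumed schema).
from dataclasses import dataclass, field

@dataclass
class HeteroGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> attributes
    edges: list = field(default_factory=list)   # (src, dst, edge_type, attrs)

    def add_note(self, node_id, pitch, onset):
        self.nodes[node_id] = {"pitch": pitch, "onset": onset}

    def add_edge(self, src, dst, edge_type, **attrs):
        # edge_type distinguishes e.g. surface adjacency from hierarchical
        # analytical relations, which is what makes the edges heterogeneous.
        self.edges.append((src, dst, edge_type, attrs))

g = HeteroGraph()
g.add_note("n0", pitch=60, onset=0.0)   # C4
g.add_note("n1", pitch=64, onset=1.0)   # E4
g.add_edge("n0", "n1", edge_type="adjacency")
g.add_edge("n0", "n1", edge_type="prolongation", depth=1)
print(len(g.edges))
```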
MIDI-to-Tab: Guitar Tablature Inference via Masked Language Modeling
arXiv - CS - Sound Pub Date : 2024-08-09 DOI: arxiv-2408.05024
Drew Edwards, Xavier Riley, Pedro Sarmento, Simon Dixon
{"title":"MIDI-to-Tab: Guitar Tablature Inference via Masked Language Modeling","authors":"Drew Edwards, Xavier Riley, Pedro Sarmento, Simon Dixon","doi":"arxiv-2408.05024","DOIUrl":"https://doi.org/arxiv-2408.05024","url":null,"abstract":"Guitar tablatures enrich the structure of traditional music notation by\u0000assigning each note to a string and fret of a guitar in a particular tuning,\u0000indicating precisely where to play the note on the instrument. The problem of\u0000generating tablature from a symbolic music representation involves inferring\u0000this string and fret assignment per note across an entire composition or\u0000performance. On the guitar, multiple string-fret assignments are possible for\u0000most pitches, which leads to a large combinatorial space that prevents\u0000exhaustive search approaches. Most modern methods use constraint-based dynamic\u0000programming to minimize some cost function (e.g. hand position movement). In\u0000this work, we introduce a novel deep learning solution to symbolic guitar\u0000tablature estimation. We train an encoder-decoder Transformer model in a masked\u0000language modeling paradigm to assign notes to strings. The model is first\u0000pre-trained on DadaGP, a dataset of over 25K tablatures, and then fine-tuned on\u0000a curated set of professionally transcribed guitar performances. Given the\u0000subjective nature of assessing tablature quality, we conduct a user study\u0000amongst guitarists, wherein we ask participants to rate the playability of\u0000multiple versions of tablature for the same four-bar excerpt. The results\u0000indicate our system significantly outperforms competing algorithms.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141943336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
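The MIDI-to-Tab abstract frames string assignment as masked language modeling. The sketch below illustrates only the data-side idea of masking string tokens so a sequence model must recover them; the vocabulary layout and mask rate are assumptions, and the actual Transformer is omitted.

```python
# Sketch of building masked-prediction targets for string assignment (assumed setup).
import random

STRINGS = ["E2", "A2", "D3", "G3", "B3", "E4"]   # standard 6-string tuning
MASK = "<mask>"

def mask_string_tokens(string_labels, mask_rate=0.3, seed=0):
    """Return (inputs, targets): masked string tokens and the labels to recover."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for label in string_labels:
        if rng.random() < mask_rate:
            inputs.append(MASK)
            targets.append(label)    # the model must predict the hidden assignment
        else:
            inputs.append(label)
            targets.append(None)     # unmasked positions are not scored
    return inputs, targets

notes_on_strings = ["E4", "B3", "G3", "B3", "E4", "A2"]
print(mask_string_tokens(notes_on_strings))
```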
DiM-Gesture: Co-Speech Gesture Generation with Adaptive Layer Normalization Mamba-2 framework
arXiv - CS - Sound Pub Date : 2024-08-01 DOI: arxiv-2408.00370
Fan Zhang, Naye Ji, Fuxing Gao, Bozuo Zhao, Jingmei Wu, Yanbing Jiang, Hui Du, Zhenqing Ye, Jiayang Zhu, WeiFan Zhong, Leyao Yan, Xiaomeng Ma
{"title":"DiM-Gesture: Co-Speech Gesture Generation with Adaptive Layer Normalization Mamba-2 framework","authors":"Fan Zhang, Naye Ji, Fuxing Gao, Bozuo Zhao, Jingmei Wu, Yanbing Jiang, Hui Du, Zhenqing Ye, Jiayang Zhu, WeiFan Zhong, Leyao Yan, Xiaomeng Ma","doi":"arxiv-2408.00370","DOIUrl":"https://doi.org/arxiv-2408.00370","url":null,"abstract":"Speech-driven gesture generation is an emerging domain within virtual human\u0000creation, where current methods predominantly utilize Transformer-based\u0000architectures that necessitate extensive memory and are characterized by slow\u0000inference speeds. In response to these limitations, we propose\u0000textit{DiM-Gestures}, a novel end-to-end generative model crafted to create\u0000highly personalized 3D full-body gestures solely from raw speech audio,\u0000employing Mamba-based architectures. This model integrates a Mamba-based fuzzy\u0000feature extractor with a non-autoregressive Adaptive Layer Normalization\u0000(AdaLN) Mamba-2 diffusion architecture. The extractor, leveraging a Mamba\u0000framework and a WavLM pre-trained model, autonomously derives implicit,\u0000continuous fuzzy features, which are then unified into a singular latent\u0000feature. This feature is processed by the AdaLN Mamba-2, which implements a\u0000uniform conditional mechanism across all tokens to robustly model the interplay\u0000between the fuzzy features and the resultant gesture sequence. This innovative\u0000approach guarantees high fidelity in gesture-speech synchronization while\u0000maintaining the naturalness of the gestures. Employing a diffusion model for\u0000training and inference, our framework has undergone extensive subjective and\u0000objective evaluations on the ZEGGS and BEAT datasets. These assessments\u0000substantiate our model's enhanced performance relative to contemporary\u0000state-of-the-art methods, demonstrating competitive outcomes with the DiTs\u0000architecture (Persona-Gestors) while optimizing memory usage and accelerating\u0000inference speed.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141885341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
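The abstract names adaptive layer normalization (AdaLN) as the mechanism that applies the speech-derived condition uniformly across all tokens. Here is a generic sketch of AdaLN-style conditioning; the dimensions and module layout are assumptions for illustration, not the DiM-Gesture implementation.

```python
# Sketch of adaptive layer normalization conditioning (assumed dimensions).
import torch
import torch.nn as nn

class AdaLN(nn.Module):
    def __init__(self, dim, cond_dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # The condition predicts a per-channel scale and shift.
        self.to_scale_shift = nn.Linear(cond_dim, 2 * dim)

    def forward(self, x, cond):
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        # The same conditioning is applied uniformly across all tokens.
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)

x = torch.randn(2, 64, 256)   # (batch, tokens, dim) gesture latents
cond = torch.randn(2, 128)    # fused speech feature
print(AdaLN(256, 128)(x, cond).shape)
```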
Towards Robust Few-shot Class Incremental Learning in Audio Classification using Contrastive Representation
arXiv - CS - Sound Pub Date : 2024-07-27 DOI: arxiv-2407.19265
Riyansha Singh (IIT Kanpur, India), Parinita Nema (IISER Bhopal, India), Vinod K Kurmi (IISER Bhopal, India)
{"title":"Towards Robust Few-shot Class Incremental Learning in Audio Classification using Contrastive Representation","authors":"Riyansha SinghIIT Kanpur, India, Parinita NemaIISER Bhopal, India, Vinod K KurmiIISER Bhopal, India","doi":"arxiv-2407.19265","DOIUrl":"https://doi.org/arxiv-2407.19265","url":null,"abstract":"In machine learning applications, gradual data ingress is common, especially\u0000in audio processing where incremental learning is vital for real-time\u0000analytics. Few-shot class-incremental learning addresses challenges arising\u0000from limited incoming data. Existing methods often integrate additional\u0000trainable components or rely on a fixed embedding extractor post-training on\u0000base sessions to mitigate concerns related to catastrophic forgetting and the\u0000dangers of model overfitting. However, using cross-entropy loss alone during\u0000base session training is suboptimal for audio data. To address this, we propose\u0000incorporating supervised contrastive learning to refine the representation\u0000space, enhancing discriminative power and leading to better generalization\u0000since it facilitates seamless integration of incremental classes, upon arrival.\u0000Experimental results on NSynth and LibriSpeech datasets with 100 classes, as\u0000well as ESC dataset with 50 and 10 classes, demonstrate state-of-the-art\u0000performance.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141864587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
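The abstract proposes adding supervised contrastive learning alongside cross-entropy during base-session training. Below is a minimal sketch of a standard supervised contrastive loss of that kind; the temperature and embedding size are illustrative assumptions.

```python
# Sketch of a supervised contrastive loss (assumed hyperparameters).
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull same-class embeddings together, push different classes apart."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                                 # (N, N)
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Log-softmax over all other samples, then average over positive pairs.
    logits = sim.masked_fill(self_mask, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_counts
    return loss.mean()

emb = torch.randn(16, 128)
labels = torch.randint(0, 4, (16,))
print(supervised_contrastive_loss(emb, labels).item())
```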
Implementation and Applications of WakeWords Integrated with Speaker Recognition: A Case Study
arXiv - CS - Sound Pub Date : 2024-07-25 DOI: arxiv-2407.18985
Alexandre Costa Ferro Filho, Elisa Ayumi Masasi de Oliveira, Iago Alves Brito, Pedro Martins Bittencourt
{"title":"Implementation and Applications of WakeWords Integrated with Speaker Recognition: A Case Study","authors":"Alexandre Costa Ferro Filho, Elisa Ayumi Masasi de Oliveira, Iago Alves Brito, Pedro Martins Bittencourt","doi":"arxiv-2407.18985","DOIUrl":"https://doi.org/arxiv-2407.18985","url":null,"abstract":"This paper explores the application of artificial intelligence techniques in\u0000audio and voice processing, focusing on the integration of wake words and\u0000speaker recognition for secure access in embedded systems. With the growing\u0000prevalence of voice-activated devices such as Amazon Alexa, ensuring secure and\u0000user-specific interactions has become paramount. Our study aims to enhance the\u0000security framework of these systems by leveraging wake words for initial\u0000activation and speaker recognition to validate user permissions. By\u0000incorporating these AI-driven methodologies, we propose a robust solution that\u0000restricts system usage to authorized individuals, thereby mitigating\u0000unauthorized access risks. This research delves into the algorithms and\u0000technologies underpinning wake word detection and speaker recognition,\u0000evaluates their effectiveness in real-world applications, and discusses the\u0000potential for their implementation in various embedded systems, emphasizing\u0000security and user convenience. The findings underscore the feasibility and\u0000advantages of employing these AI techniques to create secure, user-friendly\u0000voice-activated systems.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141864588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
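The case study above describes a two-stage gate: wake-word detection triggers activation, and speaker recognition then decides whether the command is accepted. The sketch below shows that control flow only; the detector and embedder callables, the enrolled-speaker store, and the threshold are placeholders, not a real library API.

```python
# Sketch of a wake-word-then-speaker-verification gate (placeholder components).
import numpy as np

ENROLLED = {"alice": np.random.rand(192)}   # enrolled speaker embeddings (placeholder)
THRESHOLD = 0.75                            # cosine-similarity acceptance threshold (assumed)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def handle_audio(frame, detect_wakeword, embed_speaker):
    """Run wake-word detection first; only then verify the speaker."""
    if not detect_wakeword(frame):          # stage 1: wake-word gate
        return "idle"
    emb = embed_speaker(frame)              # stage 2: speaker embedding
    best = max(ENROLLED, key=lambda name: cosine(emb, ENROLLED[name]))
    if cosine(emb, ENROLLED[best]) >= THRESHOLD:
        return f"accepted:{best}"
    return "rejected: unknown speaker"

# Hypothetical usage with stand-in detector/embedder callables.
print(handle_audio(np.zeros(16000),
                   detect_wakeword=lambda f: True,
                   embed_speaker=lambda f: np.random.rand(192)))
```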
Towards Enhanced Classification of Abnormal Lung sound in Multi-breath: A Light Weight Multi-label and Multi-head Attention Classification Method
arXiv - CS - Sound Pub Date : 2024-07-15 DOI: arxiv-2407.10828
Yi-Wei Chua, Yun-Chien Cheng
{"title":"Towards Enhanced Classification of Abnormal Lung sound in Multi-breath: A Light Weight Multi-label and Multi-head Attention Classification Method","authors":"Yi-Wei Chua, Yun-Chien Cheng","doi":"arxiv-2407.10828","DOIUrl":"https://doi.org/arxiv-2407.10828","url":null,"abstract":"This study aims to develop an auxiliary diagnostic system for classifying\u0000abnormal lung respiratory sounds, enhancing the accuracy of automatic abnormal\u0000breath sound classification through an innovative multi-label learning approach\u0000and multi-head attention mechanism. Addressing the issue of class imbalance and\u0000lack of diversity in existing respiratory sound datasets, our study employs a\u0000lightweight and highly accurate model, using a two-dimensional label set to\u0000represent multiple respiratory sound characteristics. Our method achieved a\u000059.2% ICBHI score in the four-category task on the ICBHI2017 dataset,\u0000demonstrating its advantages in terms of lightweight and high accuracy. This\u0000study not only improves the accuracy of automatic diagnosis of lung respiratory\u0000sound abnormalities but also opens new possibilities for clinical applications.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141718909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
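The multi-label idea above means each recording can carry several abnormal-sound attributes at once rather than a single class. A minimal sketch of a multi-label output head trained with a binary cross-entropy objective follows; the feature size, number of labels, and label names are illustrative assumptions.

```python
# Sketch of a multi-label classification head with BCE loss (assumed sizes).
import torch
import torch.nn as nn

class MultiLabelHead(nn.Module):
    def __init__(self, feat_dim=256, num_labels=4):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_labels)

    def forward(self, features):
        return self.fc(features)                    # raw logits, one per label

head = MultiLabelHead()
features = torch.randn(8, 256)                      # pooled attention features
targets = torch.randint(0, 2, (8, 4)).float()       # e.g. [crackle, wheeze, ...] flags
loss = nn.BCEWithLogitsLoss()(head(features), targets)
print(loss.item())
```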
Towards zero-shot amplifier modeling: One-to-many amplifier modeling via tone embedding control
arXiv - CS - Sound Pub Date : 2024-07-15 DOI: arxiv-2407.10646
Yu-Hua Chen, Yen-Tung Yeh, Yuan-Chiao Cheng, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang
{"title":"Towards zero-shot amplifier modeling: One-to-many amplifier modeling via tone embedding control","authors":"Yu-Hua Chen, Yen-Tung Yeh, Yuan-Chiao Cheng, Jui-Te Wu, Yu-Hsiang Ho, Jyh-Shing Roger Jang, Yi-Hsuan Yang","doi":"arxiv-2407.10646","DOIUrl":"https://doi.org/arxiv-2407.10646","url":null,"abstract":"Replicating analog device circuits through neural audio effect modeling has\u0000garnered increasing interest in recent years. Existing work has predominantly\u0000focused on a one-to-one emulation strategy, modeling specific devices\u0000individually. In this paper, we tackle the less-explored scenario of\u0000one-to-many emulation, utilizing conditioning mechanisms to emulate multiple\u0000guitar amplifiers through a single neural model. For condition representation,\u0000we use contrastive learning to build a tone embedding encoder that extracts\u0000style-related features of various amplifiers, leveraging a dataset of\u0000comprehensive amplifier settings. Targeting zero-shot application scenarios, we\u0000also examine various strategies for tone embedding representation, evaluating\u0000referenced tone embedding against two retrieval-based embedding methods for\u0000amplifiers unseen in the training time. Our findings showcase the efficacy and\u0000potential of the proposed methods in achieving versatile one-to-many amplifier\u0000modeling, contributing a foundational step towards zero-shot audio modeling\u0000applications.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141718910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
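One strategy the abstract evaluates for unseen amplifiers is retrieval-based: pick the closest tone embedding from a bank of known amplifiers and use it as the condition. A minimal sketch of that retrieval step is below; the bank contents and embedding size are illustrative assumptions.

```python
# Sketch of retrieval-based tone-embedding selection (assumed embedding bank).
import torch
import torch.nn.functional as F

def retrieve_tone_embedding(query_emb, embedding_bank):
    """Return the bank embedding most similar to the query (cosine similarity)."""
    q = F.normalize(query_emb, dim=-1)
    bank = F.normalize(embedding_bank, dim=-1)
    scores = bank @ q                       # (num_amps,) similarity scores
    return embedding_bank[scores.argmax()]

bank = torch.randn(32, 128)                 # embeddings of seen amplifier settings
query = torch.randn(128)                    # embedding of an unseen reference tone
cond = retrieve_tone_embedding(query, bank)
print(cond.shape)                           # condition fed to the one-to-many model
```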
GROOT: Generating Robust Watermark for Diffusion-Model-Based Audio Synthesis
arXiv - CS - Sound Pub Date : 2024-07-15 DOI: arxiv-2407.10471
Weizhi Liu, Yue Li, Dongdong Lin, Hui Tian, Haizhou Li
{"title":"GROOT: Generating Robust Watermark for Diffusion-Model-Based Audio Synthesis","authors":"Weizhi Liu, Yue Li, Dongdong Lin, Hui Tian, Haizhou Li","doi":"arxiv-2407.10471","DOIUrl":"https://doi.org/arxiv-2407.10471","url":null,"abstract":"Amid the burgeoning development of generative models like diffusion models,\u0000the task of differentiating synthesized audio from its natural counterpart\u0000grows more daunting. Deepfake detection offers a viable solution to combat this\u0000challenge. Yet, this defensive measure unintentionally fuels the continued\u0000refinement of generative models. Watermarking emerges as a proactive and\u0000sustainable tactic, preemptively regulating the creation and dissemination of\u0000synthesized content. Thus, this paper, as a pioneer, proposes the generative\u0000robust audio watermarking method (Groot), presenting a paradigm for proactively\u0000supervising the synthesized audio and its source diffusion models. In this\u0000paradigm, the processes of watermark generation and audio synthesis occur\u0000simultaneously, facilitated by parameter-fixed diffusion models equipped with a\u0000dedicated encoder. The watermark embedded within the audio can subsequently be\u0000retrieved by a lightweight decoder. The experimental results highlight Groot's\u0000outstanding performance, particularly in terms of robustness, surpassing that\u0000of the leading state-of-the-art methods. Beyond its impressive resilience\u0000against individual post-processing attacks, Groot exhibits exceptional\u0000robustness when facing compound attacks, maintaining an average watermark\u0000extraction accuracy of around 95%.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141718914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
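Structurally, the paradigm above pairs a watermark encoder that conditions generation with a lightweight decoder that recovers the bits afterwards. The heavily simplified sketch below mimics only that encode/decode shape; the modules are untrained stand-ins, the diffusion model is omitted, and all sizes are assumptions rather than Groot's design.

```python
# Simplified sketch of watermark encode -> generate -> decode flow (assumed modules).
import torch
import torch.nn as nn

bits = 16
# In practice these would be trained jointly with a parameter-fixed diffusion model.
encoder = nn.Sequential(nn.Linear(bits, 64), nn.Tanh(), nn.Linear(64, 256))
decoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, bits))

watermark = torch.randint(0, 2, (1, bits)).float()
latent = torch.randn(1, 256) + encoder(watermark)     # watermark-conditioned latent
recovered = (torch.sigmoid(decoder(latent)) > 0.5).float()
accuracy = (recovered == watermark).float().mean()    # watermark extraction accuracy metric
print(accuracy.item())
```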
LiteFocus: Accelerated Diffusion Inference for Long Audio Synthesis
arXiv - CS - Sound Pub Date : 2024-07-15 DOI: arxiv-2407.10468
Zhenxiong Tan, Xinyin Ma, Gongfan Fang, Xinchao Wang
{"title":"LiteFocus: Accelerated Diffusion Inference for Long Audio Synthesis","authors":"Zhenxiong Tan, Xinyin Ma, Gongfan Fang, Xinchao Wang","doi":"arxiv-2407.10468","DOIUrl":"https://doi.org/arxiv-2407.10468","url":null,"abstract":"Latent diffusion models have shown promising results in audio generation,\u0000making notable advancements over traditional methods. However, their\u0000performance, while impressive with short audio clips, faces challenges when\u0000extended to longer audio sequences. These challenges are due to model's\u0000self-attention mechanism and training predominantly on 10-second clips, which\u0000complicates the extension to longer audio without adaptation. In response to\u0000these issues, we introduce a novel approach, LiteFocus that enhances the\u0000inference of existing audio latent diffusion models in long audio synthesis.\u0000Observed the attention pattern in self-attention, we employ a dual sparse form\u0000for attention calculation, designated as same-frequency focus and\u0000cross-frequency compensation, which curtails the attention computation under\u0000same-frequency constraints, while enhancing audio quality through\u0000cross-frequency refillment. LiteFocus demonstrates substantial reduction on\u0000inference time with diffusion-based TTA model by 1.99x in synthesizing\u000080-second audio clips while also obtaining improved audio quality.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141718911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
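To make the "same-frequency focus plus cross-frequency compensation" idea concrete, here is a sketch of a dual sparse attention mask over a (time, frequency) token grid: dense attention within each frequency band plus a strided subset of cross-band positions. The grid layout, token ordering, and sampling rule are illustrative assumptions, not the LiteFocus implementation.

```python
# Sketch of a dual sparse attention mask (assumed token grid and stride).
import torch

def dual_sparse_mask(n_time, n_freq, comp_stride=4):
    n = n_time * n_freq
    freq_idx = torch.arange(n) % n_freq               # frequency bin of each token (assumed ordering)
    same_freq = freq_idx.unsqueeze(0) == freq_idx.unsqueeze(1)
    # Cross-frequency compensation: every token may also attend to a strided
    # subset of all positions, regardless of frequency bin.
    comp = torch.zeros(n, n, dtype=torch.bool)
    comp[:, ::comp_stride] = True
    return same_freq | comp                           # True = attention allowed

mask = dual_sparse_mask(n_time=8, n_freq=16)
density = mask.float().mean().item()
print(f"attention density: {density:.2f}")            # well below 1.0 (full attention)
```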