arXiv - CS - Sound: Latest Papers

Music Era Recognition Using Supervised Contrastive Learning and Artist Information
arXiv - CS - Sound Pub Date: 2024-07-07 DOI: arxiv-2407.05368
Qiqi He, Xuchen Song, Weituo Hao, Ju-Chiang Wang, Wei-Tsung Lu, Wei Li
{"title":"Music Era Recognition Using Supervised Contrastive Learning and Artist Information","authors":"Qiqi He, Xuchen Song, Weituo Hao, Ju-Chiang Wang, Wei-Tsung Lu, Wei Li","doi":"arxiv-2407.05368","DOIUrl":"https://doi.org/arxiv-2407.05368","url":null,"abstract":"Does popular music from the 60s sound different than that of the 90s? Prior\u0000study has shown that there would exist some variations of patterns and\u0000regularities related to instrumentation changes and growing loudness across\u0000multi-decadal trends. This indicates that perceiving the era of a song from\u0000musical features such as audio and artist information is possible. Music era\u0000information can be an important feature for playlist generation and\u0000recommendation. However, the release year of a song can be inaccessible in many\u0000circumstances. This paper addresses a novel task of music era recognition. We\u0000formulate the task as a music classification problem and propose solutions\u0000based on supervised contrastive learning. An audio-based model is developed to\u0000predict the era from audio. For the case where the artist information is\u0000available, we extend the audio-based model to take multimodal inputs and\u0000develop a framework, called MultiModal Contrastive (MMC) learning, to enhance\u0000the training. Experimental result on Million Song Dataset demonstrates that the\u0000audio-based model achieves 54% in accuracy with a tolerance of 3-years range;\u0000incorporating the artist information with the MMC framework for training leads\u0000to 9% improvement further.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
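The paper's title names supervised contrastive learning as the core training objective. As a hedged illustration, here is a minimal PyTorch sketch of a supervised contrastive (SupCon-style) loss over clip embeddings with decade labels; the embedding dimension, temperature, and batch construction are assumptions for the example, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings: torch.Tensor, labels: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss: pull clips with the same era label together,
    push clips from other eras apart."""
    z = F.normalize(embeddings, dim=1)                    # (N, D) unit-norm embeddings
    sim = z @ z.T / temperature                           # (N, N) scaled cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                # exclude self-comparisons
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    valid = pos_counts > 0                                # anchors with at least one positive
    loss = -(log_prob * pos_mask)[valid].sum(1) / pos_counts[valid]
    return loss.mean()

# Toy usage: 8 clip embeddings with decade labels (0 = 60s, 1 = 70s, 2 = 80s, 3 = 90s).
emb = torch.randn(8, 128)
eras = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supcon_loss(emb, eras).item())
```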
Morse Code-Enabled Speech Recognition for Individuals with Visual and Hearing Impairments
arXiv - CS - Sound Pub Date: 2024-07-07 DOI: arxiv-2407.14525
Ritabrata Roy Choudhury
{"title":"Morse Code-Enabled Speech Recognition for Individuals with Visual and Hearing Impairments","authors":"Ritabrata Roy Choudhury","doi":"arxiv-2407.14525","DOIUrl":"https://doi.org/arxiv-2407.14525","url":null,"abstract":"The proposed model aims to develop a speech recognition technology for\u0000hearing, speech, or cognitively disabled people. All the available technology\u0000in the field of speech recognition doesn't come with an interface for\u0000communication for people with hearing, speech, or cognitive disabilities. The\u0000proposed model proposes the speech from the user, is transmitted to the speech\u0000recognition layer where it is converted into text and then that text is then\u0000transmitted to the morse code conversion layer where the morse code of the\u0000corresponding speech is given as the output. The accuracy of the model is\u0000completely dependent on speech recognition, as the morse code conversion is a\u0000process. The model is tested with recorded audio files with different\u0000parameters. The proposed model's WER and accuracy are both determined to be\u000010.18% and 89.82%, respectively.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141773475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
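Since the abstract describes the Morse stage as a fixed conversion applied to the ASR output, a small sketch makes that second layer concrete. The table is standard International Morse Code; the speech recognition front end (which drives the reported 10.18% WER) is not reproduced, and the word/letter separators are illustrative choices.

```python
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".", "F": "..-.",
    "G": "--.", "H": "....", "I": "..", "J": ".---", "K": "-.-", "L": ".-..",
    "M": "--", "N": "-.", "O": "---", "P": ".--.", "Q": "--.-", "R": ".-.",
    "S": "...", "T": "-", "U": "..-", "V": "...-", "W": ".--", "X": "-..-",
    "Y": "-.--", "Z": "--..", "0": "-----", "1": ".----", "2": "..---",
    "3": "...--", "4": "....-", "5": ".....", "6": "-....", "7": "--...",
    "8": "---..", "9": "----.",
}

def text_to_morse(text: str) -> str:
    """Deterministic lookup applied to recognized text: '/' separates words, spaces separate letters."""
    words = text.upper().split()
    return " / ".join(" ".join(MORSE[c] for c in word if c in MORSE) for word in words)

print(text_to_morse("hello world"))  # .... . .-.. .-.. --- / .-- --- .-. .-.. -..
```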
CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens
arXiv - CS - Sound Pub Date: 2024-07-07 DOI: arxiv-2407.05407
Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, Zhijie Yan
{"title":"CosyVoice: A Scalable Multilingual Zero-shot Text-to-speech Synthesizer based on Supervised Semantic Tokens","authors":"Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, Zhijie Yan","doi":"arxiv-2407.05407","DOIUrl":"https://doi.org/arxiv-2407.05407","url":null,"abstract":"Recent years have witnessed a trend that large language model (LLM) based\u0000text-to-speech (TTS) emerges into the mainstream due to their high naturalness\u0000and zero-shot capacity. In this paradigm, speech signals are discretized into\u0000token sequences, which are modeled by an LLM with text as prompts and\u0000reconstructed by a token-based vocoder to waveforms. Obviously, speech tokens\u0000play a critical role in LLM-based TTS models. Current speech tokens are learned\u0000in an unsupervised manner, which lacks explicit semantic information and\u0000alignment to the text. In this paper, we propose to represent speech with\u0000supervised semantic tokens, which are derived from a multilingual speech\u0000recognition model by inserting vector quantization into the encoder. Based on\u0000the tokens, we further propose a scalable zero-shot TTS synthesizer, CosyVoice,\u0000which consists of an LLM for text-to-token generation and a conditional flow\u0000matching model for token-to-speech synthesis. Experimental results show that\u0000supervised semantic tokens significantly outperform existing unsupervised\u0000tokens in terms of content consistency and speaker similarity for zero-shot\u0000voice cloning. Moreover, we find that utilizing large-scale data further\u0000improves the synthesis performance, indicating the scalable capacity of\u0000CosyVoice. To the best of our knowledge, this is the first attempt to involve\u0000supervised speech tokens into TTS models.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
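The key idea in the abstract is inserting vector quantization into a speech recognition encoder so that frame embeddings become discrete, semantically grounded tokens. Below is a minimal sketch of such a VQ layer; the codebook size, hidden dimension, and straight-through gradient trick are assumptions for illustration, not CosyVoice's exact recipe.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Snap encoder frame embeddings to their nearest codebook entries; the entry
    indices serve as the discrete speech tokens."""

    def __init__(self, codebook_size: int = 4096, dim: int = 512):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, h: torch.Tensor):
        # h: (batch, frames, dim) hidden states from the recognition encoder.
        flat = h.reshape(-1, h.size(-1))
        dists = torch.cdist(flat, self.codebook.weight)        # distance to every codebook entry
        tokens = dists.argmin(dim=-1).view(h.shape[:-1])       # (batch, frames) discrete tokens
        quantized = self.codebook(tokens)                      # nearest codebook vectors
        # Straight-through estimator: forward pass uses the quantized values,
        # gradients flow back to the continuous encoder output.
        quantized = h + (quantized - h).detach()
        return tokens, quantized

vq = VectorQuantizer()
tokens, q = vq(torch.randn(2, 100, 512))
print(tokens.shape, q.shape)   # torch.Size([2, 100]) torch.Size([2, 100, 512])
```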
A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition
arXiv - CS - Sound Pub Date: 2024-07-06 DOI: arxiv-2407.04966
Shreya G. Upadhyay, Carlos Busso, Chi-Chun Lee
{"title":"A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition","authors":"Shreya G. Upadhyay, Carlos Busso, Chi-Chun Lee","doi":"arxiv-2407.04966","DOIUrl":"https://doi.org/arxiv-2407.04966","url":null,"abstract":"Cross-lingual speech emotion recognition (SER) is important for a wide range\u0000of everyday applications. While recent SER research relies heavily on large\u0000pretrained models for emotion training, existing studies often concentrate\u0000solely on the final transformer layer of these models. However, given the\u0000task-specific nature and hierarchical architecture of these models, each\u0000transformer layer encapsulates different levels of information. Leveraging this\u0000hierarchical structure, our study focuses on the information embedded across\u0000different layers. Through an examination of layer feature similarity across\u0000different languages, we propose a novel strategy called a layer-anchoring\u0000mechanism to facilitate emotion transfer in cross-lingual SER tasks. Our\u0000approach is evaluated using two distinct language affective corpora\u0000(MSP-Podcast and BIIC-Podcast), achieving a best UAR performance of 60.21% on\u0000the BIIC-podcast corpus. The analysis uncovers interesting insights into the\u0000behavior of popular pretrained models.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
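The abstract mentions examining layer-wise feature similarity across languages to decide where to anchor emotion transfer, without spelling out the measure. As a loosely hedged illustration only, the sketch below scores each transformer layer by the cosine similarity of mean-pooled features from the source- and target-language corpora and picks the best-aligned layer; the similarity measure, pooling, and selection rule are all assumptions, not the paper's mechanism.

```python
import numpy as np

def layer_similarity(src_feats, tgt_feats):
    """Cosine similarity between mean-pooled features of two languages, per layer.
    Each element of src_feats/tgt_feats is a (num_utterances, dim) matrix for one layer."""
    sims = []
    for s, t in zip(src_feats, tgt_feats):
        a, b = s.mean(0), t.mean(0)
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
    return sims

rng = np.random.default_rng(0)
src = [rng.standard_normal((200, 768)) for _ in range(12)]   # e.g., MSP-Podcast layer features
tgt = [rng.standard_normal((200, 768)) for _ in range(12)]   # e.g., BIIC-Podcast layer features
sims = layer_similarity(src, tgt)
anchor = int(np.argmax(sims))                                # candidate anchor layer
print(anchor, round(sims[anchor], 3))
```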
Few-Shot Keyword Spotting from Mixed Speech
arXiv - CS - Sound Pub Date: 2024-07-05 DOI: arxiv-2407.06078
Junming Yuan, Ying Shi, LanTian Li, Dong Wang, Askar Hamdulla
{"title":"Few-Shot Keyword Spotting from Mixed Speech","authors":"Junming Yuan, Ying Shi, LanTian Li, Dong Wang, Askar Hamdulla","doi":"arxiv-2407.06078","DOIUrl":"https://doi.org/arxiv-2407.06078","url":null,"abstract":"Few-shot keyword spotting (KWS) aims to detect unknown keywords with limited\u0000training samples. A commonly used approach is the pre-training and fine-tuning\u0000framework. While effective in clean conditions, this approach struggles with\u0000mixed keyword spotting -- simultaneously detecting multiple keywords blended in\u0000an utterance, which is crucial in real-world applications. Previous research\u0000has proposed a Mix-Training (MT) approach to solve the problem, however, it has\u0000never been tested in the few-shot scenario. In this paper, we investigate the\u0000possibility of using MT and other relevant methods to solve the two practical\u0000challenges together: few-shot and mixed speech. Experiments conducted on the\u0000LibriSpeech and Google Speech Command corpora demonstrate that MT is highly\u0000effective on this task when employed in either the pre-training phase or the\u0000fine-tuning phase. Moreover, combining SSL-based large-scale pre-training\u0000(HuBert) and MT fine-tuning yields very strong results in all the test\u0000conditions.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141576057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
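Mix-Training builds training examples in which several keywords are blended into one utterance. The sketch below constructs one such example by overlaying two keyword recordings at a chosen SNR and emitting a multi-hot target; the padding, gain rule, and multi-hot label format are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np

def mix_training_example(wav_a, wav_b, label_a, label_b, num_keywords, snr_db=0.0):
    """Overlay two keyword utterances and mark both keywords as present."""
    n = max(len(wav_a), len(wav_b))
    a = np.pad(wav_a, (0, n - len(wav_a)))
    b = np.pad(wav_b, (0, n - len(wav_b)))
    # Scale the second utterance to the requested mixing SNR relative to the first.
    gain = np.sqrt((a**2).mean() / ((b**2).mean() * 10 ** (snr_db / 10) + 1e-12))
    mixture = a + gain * b
    target = np.zeros(num_keywords, dtype=np.float32)
    target[[label_a, label_b]] = 1.0          # multi-hot: both keywords are present
    return mixture, target

x, y = mix_training_example(np.random.randn(16000), np.random.randn(12000), 3, 7, num_keywords=10)
print(x.shape, y)
```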
MUSIC-lite: Efficient MUSIC using Approximate Computing: An OFDM Radar Case Study
arXiv - CS - Sound Pub Date: 2024-07-05 DOI: arxiv-2407.04849
Rajat Bhattacharjya, Arnab Sarkar, Biswadip Maity, Nikil Dutt
{"title":"MUSIC-lite: Efficient MUSIC using Approximate Computing: An OFDM Radar Case Study","authors":"Rajat Bhattacharjya, Arnab Sarkar, Biswadip Maity, Nikil Dutt","doi":"arxiv-2407.04849","DOIUrl":"https://doi.org/arxiv-2407.04849","url":null,"abstract":"Multiple Signal Classification (MUSIC) is a widely used Direction of Arrival\u0000(DoA)/Angle of Arrival (AoA) estimation algorithm applied to various\u0000application domains such as autonomous driving, medical imaging, and astronomy.\u0000However, MUSIC is computationally expensive and challenging to implement in\u0000low-power hardware, requiring exploration of trade-offs between accuracy, cost,\u0000and power. We present MUSIC-lite, which exploits approximate computing to\u0000generate a design space exploring accuracy-area-power trade-offs. This is\u0000specifically applied to the computationally intensive singular value\u0000decomposition (SVD) component of the MUSIC algorithm in an orthogonal\u0000frequency-division multiplexing (OFDM) radar use case. MUSIC-lite incorporates\u0000approximate adders into the iterative CORDIC algorithm that is used for\u0000hardware implementation of MUSIC, generating interesting accuracy-area-power\u0000trade-offs. Our experiments demonstrate MUSIC-lite's ability to save an average\u0000of 17.25% on-chip area and 19.4% power with a minimal 0.14% error for efficient\u0000MUSIC implementations.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141575858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
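For readers unfamiliar with the baseline, the sketch below implements the standard (exact) MUSIC pseudospectrum for a uniform linear array, i.e., the algorithm whose SVD/eigendecomposition step MUSIC-lite approximates in hardware; the approximate-adder CORDIC implementation itself is a hardware detail not reproduced here, and the array geometry and noise levels in the example are arbitrary.

```python
import numpy as np

def music_spectrum(X, num_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for a uniform linear array (element spacing d in wavelengths).
    X: (num_antennas, num_snapshots) complex baseband snapshots."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, : m - num_sources]               # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(m) * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)                # peaks indicate estimated DoAs

# Example: 8-element array, 2 sources at -20 and 30 degrees, 200 snapshots, light noise.
m, n = 8, 200
true = np.deg2rad([-20, 30])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(m), np.sin(true)))
S = (np.random.randn(2, n) + 1j * np.random.randn(2, n)) / np.sqrt(2)
X = A @ S + 0.1 * (np.random.randn(m, n) + 1j * np.random.randn(m, n))
ang, P = music_spectrum(X, num_sources=2)
print(ang[np.argmax(P)])                             # highest peak, near one of the true DoAs
```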
Semantic Grouping Network for Audio Source Separation
arXiv - CS - Sound Pub Date: 2024-07-04 DOI: arxiv-2407.03736
Shentong Mo, Yapeng Tian
{"title":"Semantic Grouping Network for Audio Source Separation","authors":"Shentong Mo, Yapeng Tian","doi":"arxiv-2407.03736","DOIUrl":"https://doi.org/arxiv-2407.03736","url":null,"abstract":"Recently, audio-visual separation approaches have taken advantage of the\u0000natural synchronization between the two modalities to boost audio source\u0000separation performance. They extracted high-level semantics from visual inputs\u0000as the guidance to help disentangle sound representation for individual\u0000sources. Can we directly learn to disentangle the individual semantics from the\u0000sound itself? The dilemma is that multiple sound sources are mixed together in\u0000the original space. To tackle the difficulty, in this paper, we present a novel\u0000Semantic Grouping Network, termed as SGN, that can directly disentangle sound\u0000representations and extract high-level semantic information for each source\u0000from input audio mixture. Specifically, SGN aggregates category-wise source\u0000features through learnable class tokens of sounds. Then, the aggregated\u0000semantic features can be used as the guidance to separate the corresponding\u0000audio sources from the mixture. We conducted extensive experiments on\u0000music-only and universal sound separation benchmarks: MUSIC, FUSS, MUSDB18, and\u0000VGG-Sound. The results demonstrate that our SGN significantly outperforms\u0000previous audio-only methods and audio-visual models without utilizing\u0000additional visual cues.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141578116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
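The abstract's central mechanism is aggregating category-wise source features through learnable class tokens. The sketch below shows one plausible form of that grouping step, with class tokens cross-attending to mixture features; the single attention layer, dimensions, and number of classes are assumptions for illustration, not the SGN architecture itself.

```python
import torch
import torch.nn as nn

class ClassTokenGrouping(nn.Module):
    """Learnable class tokens pool per-category semantics from mixture features."""

    def __init__(self, num_classes: int = 50, dim: int = 256, heads: int = 4):
        super().__init__()
        self.class_tokens = nn.Parameter(torch.randn(num_classes, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, mixture_feats: torch.Tensor) -> torch.Tensor:
        # mixture_feats: (batch, time, dim) features of the audio mixture.
        b = mixture_feats.size(0)
        queries = self.class_tokens.unsqueeze(0).expand(b, -1, -1)
        grouped, _ = self.attn(queries, mixture_feats, mixture_feats)
        # (batch, num_classes, dim): per-category semantic features used downstream
        # as guidance to separate each source from the mixture.
        return grouped

feats = torch.randn(2, 200, 256)
print(ClassTokenGrouping()(feats).shape)   # torch.Size([2, 50, 256])
```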
What Does it Take to Generalize SER Model Across Datasets? A Comprehensive Benchmark
arXiv - CS - Sound Pub Date: 2024-06-14 DOI: arxiv-2406.09933
Adham Ibrahim, Shady Shehata, Ajinkya Kulkarni, Mukhtar Mohamed, Muhammad Abdul-Mageed
{"title":"What Does it Take to Generalize SER Model Across Datasets? A Comprehensive Benchmark","authors":"Adham Ibrahim, Shady Shehata, Ajinkya Kulkarni, Mukhtar Mohamed, Muhammad Abdul-Mageed","doi":"arxiv-2406.09933","DOIUrl":"https://doi.org/arxiv-2406.09933","url":null,"abstract":"Speech emotion recognition (SER) is essential for enhancing human-computer\u0000interaction in speech-based applications. Despite improvements in specific\u0000emotional datasets, there is still a research gap in SER's capability to\u0000generalize across real-world situations. In this paper, we investigate\u0000approaches to generalize the SER system across different emotion datasets. In\u0000particular, we incorporate 11 emotional speech datasets and illustrate a\u0000comprehensive benchmark on the SER task. We also address the challenge of\u0000imbalanced data distribution using over-sampling methods when combining SER\u0000datasets for training. Furthermore, we explore various evaluation protocols for\u0000adeptness in the generalization of SER. Building on this, we explore the\u0000potential of Whisper for SER, emphasizing the importance of thorough\u0000evaluation. Our approach is designed to advance SER technology by integrating\u0000speaker-independent methods.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
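The abstract notes that pooling 11 corpora yields an imbalanced label distribution, handled with over-sampling. The sketch below shows plain random over-sampling of the pooled training list, one standard variant; the abstract does not specify which method the paper uses, so treat this as an assumption.

```python
import random
from collections import defaultdict

def oversample(examples, seed=0):
    """Replicate minority-class items until every emotion class in the combined
    corpus matches the largest class. `examples` is a list of (features, label) pairs."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for x, y in examples:
        by_label[y].append((x, y))
    target = max(len(v) for v in by_label.values())
    balanced = []
    for items in by_label.values():
        balanced.extend(items)
        balanced.extend(rng.choices(items, k=target - len(items)))
    rng.shuffle(balanced)
    return balanced

combined = [("utt1", "angry"), ("utt2", "happy"), ("utt3", "happy"), ("utt4", "neutral"),
            ("utt5", "happy"), ("utt6", "neutral")]
print(len(oversample(combined)))   # 9: three classes, each brought up to 3 items
```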
Separating the "Chirp" from the "Chat": Self-supervised Visual Grounding of Sound and Language
arXiv - CS - Sound Pub Date: 2024-06-09 DOI: arxiv-2406.05629
Mark Hamilton, Andrew Zisserman, John R. Hershey, William T. Freeman
{"title":"Separating the \"Chirp\" from the \"Chat\": Self-supervised Visual Grounding of Sound and Language","authors":"Mark Hamilton, Andrew Zisserman, John R. Hershey, William T. Freeman","doi":"arxiv-2406.05629","DOIUrl":"https://doi.org/arxiv-2406.05629","url":null,"abstract":"We present DenseAV, a novel dual encoder grounding architecture that learns\u0000high-resolution, semantically meaningful, and audio-visually aligned features\u0000solely through watching videos. We show that DenseAV can discover the\u0000``meaning'' of words and the ``location'' of sounds without explicit\u0000localization supervision. Furthermore, it automatically discovers and\u0000distinguishes between these two types of associations without supervision. We\u0000show that DenseAV's localization abilities arise from a new multi-head feature\u0000aggregation operator that directly compares dense image and audio\u0000representations for contrastive learning. In contrast, many other systems that\u0000learn ``global'' audio and video representations cannot localize words and\u0000sound. Finally, we contribute two new datasets to improve the evaluation of AV\u0000representations through speech and sound prompted semantic segmentation. On\u0000these and other datasets we show DenseAV dramatically outperforms the prior art\u0000on speech and sound prompted semantic segmentation. DenseAV outperforms the\u0000previous state-of-the-art, ImageBind, on cross-modal retrieval using fewer than\u0000half of the parameters. Project Page:\u0000href{https://aka.ms/denseav}{https://aka.ms/denseav}","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
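DenseAV's localization is attributed to an operator that directly compares dense image and audio representations. As a simplified, hedged sketch, the function below forms the full frame-by-pixel similarity volume and pools it into a clip-level contrastive score; the max-over-space/mean-over-time pooling and the single "head" are simplifications of the paper's multi-head aggregation operator, not its definition.

```python
import torch

def dense_av_similarity(audio_feats: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
    """Clip-level score from dense comparison of modalities.
    audio_feats: (T, D) per-frame features; image_feats: (H, W, D) per-pixel features."""
    t, d = audio_feats.shape
    h, w, _ = image_feats.shape
    sim = audio_feats @ image_feats.reshape(h * w, d).T        # (T, H*W) dense similarities
    per_frame = sim.max(dim=1).values                          # where each audio frame "looks"
    return per_frame.mean()                                    # aggregate to a clip-level score

score = dense_av_similarity(torch.randn(40, 128), torch.randn(14, 14, 128))
print(float(score))
```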
VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers
arXiv - CS - Sound Pub Date: 2024-06-08 DOI: arxiv-2406.05370
Sanyuan Chen, Shujie Liu, Long Zhou, Yanqing Liu, Xu Tan, Jinyu Li, Sheng Zhao, Yao Qian, Furu Wei
{"title":"VALL-E 2: Neural Codec Language Models are Human Parity Zero-Shot Text to Speech Synthesizers","authors":"Sanyuan Chen, Shujie Liu, Long Zhou, Yanqing Liu, Xu Tan, Jinyu Li, Sheng Zhao, Yao Qian, Furu Wei","doi":"arxiv-2406.05370","DOIUrl":"https://doi.org/arxiv-2406.05370","url":null,"abstract":"This paper introduces VALL-E 2, the latest advancement in neural codec\u0000language models that marks a milestone in zero-shot text-to-speech synthesis\u0000(TTS), achieving human parity for the first time. Based on its predecessor,\u0000VALL-E, the new iteration introduces two significant enhancements: Repetition\u0000Aware Sampling refines the original nucleus sampling process by accounting for\u0000token repetition in the decoding history. It not only stabilizes the decoding\u0000but also circumvents the infinite loop issue. Grouped Code Modeling organizes\u0000codec codes into groups to effectively shorten the sequence length, which not\u0000only boosts inference speed but also addresses the challenges of long sequence\u0000modeling. Our experiments on the LibriSpeech and VCTK datasets show that VALL-E\u00002 surpasses previous systems in speech robustness, naturalness, and speaker\u0000similarity. It is the first of its kind to reach human parity on these\u0000benchmarks. Moreover, VALL-E 2 consistently synthesizes high-quality speech,\u0000even for sentences that are traditionally challenging due to their complexity\u0000or repetitive phrases. The advantages of this work could contribute to valuable\u0000endeavors, such as generating speech for individuals with aphasia or people\u0000with amyotrophic lateral sclerosis. Demos of VALL-E 2 will be posted to\u0000https://aka.ms/valle2.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141509130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
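Of the two enhancements, Repetition Aware Sampling is described only at a high level in the abstract. The sketch below shows one plausible reading: sample from the nucleus as usual, but if the drawn token already dominates a recent window of the decoding history, fall back to sampling from the full distribution to break the loop. The window size, ratio threshold, and fallback rule are assumptions, not the procedure defined in the paper.

```python
import torch

def repetition_aware_sample(logits, history, top_p=0.8, window=10, max_ratio=0.5):
    """Nucleus sampling with a repetition-based fallback over the decoding history."""
    probs = torch.softmax(logits, dim=-1)
    # Nucleus (top-p) sampling: keep the smallest prefix of sorted tokens covering top_p mass.
    sorted_p, sorted_idx = probs.sort(descending=True)
    keep = sorted_p.cumsum(-1) - sorted_p < top_p            # always keeps the top token
    nucleus = sorted_p * keep
    token = sorted_idx[torch.multinomial(nucleus / nucleus.sum(), 1)].item()
    # If the sampled token already dominates the recent history, resample from the full distribution.
    recent = history[-window:]
    if recent and recent.count(token) / len(recent) >= max_ratio:
        token = torch.multinomial(probs, 1).item()
    return token

history = [5, 5, 5, 5, 5, 5]
print(repetition_aware_sample(torch.randn(1024), history))
```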