arXiv - CS - Sound: Latest Publications

Comparative Study of Recurrent Neural Networks for Virtual Analog Audio Effects Modeling
arXiv - CS - Sound Pub Date: 2024-05-07 DOI: arxiv-2405.04124
Riccardo Simionato, Stefano Fasciani
{"title":"Comparative Study of Recurrent Neural Networks for Virtual Analog Audio Effects Modeling","authors":"Riccardo Simionato, Stefano Fasciani","doi":"arxiv-2405.04124","DOIUrl":"https://doi.org/arxiv-2405.04124","url":null,"abstract":"Analog electronic circuits are at the core of an important category of\u0000musical devices. The nonlinear features of their electronic components give\u0000analog musical devices a distinctive timbre and sound quality, making them\u0000highly desirable. Artificial neural networks have rapidly gained popularity for\u0000the emulation of analog audio effects circuits, particularly recurrent\u0000networks. While neural approaches have been successful in accurately modeling\u0000distortion circuits, they require architectural improvements that account for\u0000parameter conditioning and low latency response. In this article, we explore\u0000the application of recent machine learning advancements for virtual analog\u0000modeling. We compare State Space models and Linear Recurrent Units against the\u0000more common Long Short Term Memory networks. These have shown promising ability\u0000in sequence to sequence modeling tasks, showing a notable improvement in signal\u0000history encoding. Our comparative study uses these black box neural modeling\u0000techniques with a variety of audio effects. We evaluate the performance and\u0000limitations using multiple metrics aiming to assess the models' ability to\u0000accurately replicate energy envelopes, frequency contents, and transients in\u0000the audio signal. To incorporate control parameters we employ the Feature wise\u0000Linear Modulation method. Long Short Term Memory networks exhibit better\u0000accuracy in emulating distortions and equalizers, while the State Space model,\u0000followed by Long Short Term Memory networks when integrated in an encoder\u0000decoder structure, outperforms others in emulating saturation and compression.\u0000When considering long time variant characteristics, the State Space model\u0000demonstrates the greatest accuracy. The Long Short Term Memory and, in\u0000particular, Linear Recurrent Unit networks present more tendency to introduce\u0000audio artifacts.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140927682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
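The paper conditions its black-box models on user controls via Feature-wise Linear Modulation (FiLM). The sketch below shows one minimal way FiLM can be attached to an LSTM effect model; the layer sizes, number of controls, and placement of the modulation are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal FiLM-conditioned recurrent effect model (illustrative assumptions only).
import torch
import torch.nn as nn

class FiLMConditionedLSTM(nn.Module):
    def __init__(self, hidden_size=32, num_controls=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        # FiLM generator: maps control parameters (e.g. drive, tone) to a
        # per-channel scale (gamma) and shift (beta).
        self.film = nn.Linear(num_controls, 2 * hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, audio, controls):
        # audio: (batch, samples, 1), controls: (batch, num_controls)
        h, _ = self.lstm(audio)
        gamma, beta = self.film(controls).chunk(2, dim=-1)
        h = gamma.unsqueeze(1) * h + beta.unsqueeze(1)  # feature-wise affine modulation
        return self.out(h)

# Example: one second of audio at 48 kHz with two control knobs.
y = FiLMConditionedLSTM()(torch.randn(1, 48000, 1), torch.rand(1, 2))
```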
POPDG: Popular 3D Dance Generation with PopDanceSet
arXiv - CS - Sound Pub Date: 2024-05-06 DOI: arxiv-2405.03178
Zhenye Luo, Min Ren, Xuecai Hu, Yongzhen Huang, Li Yao
{"title":"POPDG: Popular 3D Dance Generation with PopDanceSet","authors":"Zhenye Luo, Min Ren, Xuecai Hu, Yongzhen Huang, Li Yao","doi":"arxiv-2405.03178","DOIUrl":"https://doi.org/arxiv-2405.03178","url":null,"abstract":"Generating dances that are both lifelike and well-aligned with music\u0000continues to be a challenging task in the cross-modal domain. This paper\u0000introduces PopDanceSet, the first dataset tailored to the preferences of young\u0000audiences, enabling the generation of aesthetically oriented dances. And it\u0000surpasses the AIST++ dataset in music genre diversity and the intricacy and\u0000depth of dance movements. Moreover, the proposed POPDG model within the iDDPM\u0000framework enhances dance diversity and, through the Space Augmentation\u0000Algorithm, strengthens spatial physical connections between human body joints,\u0000ensuring that increased diversity does not compromise generation quality. A\u0000streamlined Alignment Module is also designed to improve the temporal alignment\u0000between dance and music. Extensive experiments show that POPDG achieves SOTA\u0000results on two datasets. Furthermore, the paper also expands on current\u0000evaluation metrics. The dataset and code are available at\u0000https://github.com/Luke-Luo1/POPDG.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
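One common way dance-music temporal alignment is quantified in this literature is a beat-alignment score between musical beats and "kinematic beats" (local extrema of joint velocity). The sketch below is a generic version of that metric, offered only to make "temporal alignment between dance and music" concrete; it is not necessarily the exact metric used or proposed in the paper, and the sigma value is an arbitrary assumption.

```python
# Generic beat-alignment score between kinematic beats and music beats.
import numpy as np

def beat_align_score(kinematic_beats, music_beats, sigma=0.1):
    """Both arguments are 1-D arrays of beat times in seconds."""
    dists = np.abs(kinematic_beats[:, None] - music_beats[None, :]).min(axis=1)
    return float(np.mean(np.exp(-(dists ** 2) / (2 * sigma ** 2))))

# Example: dance beats that land close to a 120 BPM music grid score near 1.
music = np.arange(0, 10, 0.5)
dance = music[::2] + np.random.normal(0, 0.03, size=music[::2].shape)
print(beat_align_score(dance, music))
```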
Transhuman Ansambl - Voice Beyond Language
arXiv - CS - Sound Pub Date: 2024-05-06 DOI: arxiv-2405.03134
Lucija Ivsic, Jon McCormack, Vince Dziekan
{"title":"Transhuman Ansambl - Voice Beyond Language","authors":"Lucija Ivsic, Jon McCormack, Vince Dziekan","doi":"arxiv-2405.03134","DOIUrl":"https://doi.org/arxiv-2405.03134","url":null,"abstract":"In this paper we present the design and development of the Transhuman\u0000Ansambl, a novel interactive singing-voice interface which senses its\u0000environment and responds to vocal input with vocalisations using human voice.\u0000Designed for live performance with a human performer and as a standalone sound\u0000installation, the ansambl consists of sixteen bespoke virtual singers arranged\u0000in a circle. When performing live, the virtual singers listen to the human\u0000performer and respond to their singing by reading pitch, intonation and volume\u0000cues. In a standalone sound installation mode, singers use ultrasonic distance\u0000sensors to sense audience presence. Developed as part of the 1st author's\u0000practice-based PhD and artistic practice as a live performer, this work employs\u0000the singing-voice to explore voice interactions in HCI beyond language, and\u0000innovative ways of live performing. How is technology supporting the effect of\u0000intimacy produced through voice? Does the act of surrounding the audience with\u0000responsive virtual singers challenge the traditional roles of\u0000performer-listener? To answer these questions, we draw upon the 1st author's\u0000experience with the system, and the interdisciplinary field of voice studies\u0000that consider the voice as the sound medium independent of language, capable of\u0000enacting a reciprocal connection between bodies.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
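The virtual singers respond to pitch, intonation and volume cues from the performer. As a rough illustration of how such cues can be extracted from a vocal buffer, the sketch below uses librosa's probabilistic YIN pitch tracker and per-frame RMS energy; the frame settings, pitch range, and summary statistics are assumptions, and the installation's actual analysis chain is not described at this level of detail.

```python
# Illustrative pitch/volume cue extraction from a mono vocal buffer.
import librosa
import numpy as np

def vocal_cues(y: np.ndarray, sr: int = 22050):
    # Fundamental frequency (pitch) via probabilistic YIN; NaN for unvoiced frames.
    f0, voiced_flag, _ = librosa.pyin(y, fmin=65.0, fmax=1000.0, sr=sr)
    # Loudness proxy: per-frame RMS energy.
    rms = librosa.feature.rms(y=y)[0]
    return {
        "median_pitch_hz": float(np.nanmedian(f0)),
        "voiced_ratio": float(np.mean(voiced_flag)),
        "mean_rms": float(rms.mean()),
    }

y, sr = librosa.load(librosa.example("trumpet"))  # stand-in for a live vocal buffer
print(vocal_cues(y, sr))
```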
Determined Multichannel Blind Source Separation with Clustered Source Model
arXiv - CS - Sound Pub Date: 2024-05-06 DOI: arxiv-2405.03118
Jianyu Wang, Shanzheng Guan
{"title":"Determined Multichannel Blind Source Separation with Clustered Source Model","authors":"Jianyu Wang, Shanzheng Guan","doi":"arxiv-2405.03118","DOIUrl":"https://doi.org/arxiv-2405.03118","url":null,"abstract":"The independent low-rank matrix analysis (ILRMA) method stands out as a\u0000prominent technique for multichannel blind audio source separation. It\u0000leverages nonnegative matrix factorization (NMF) and nonnegative canonical\u0000polyadic decomposition (NCPD) to model source parameters. While it effectively\u0000captures the low-rank structure of sources, the NMF model overlooks\u0000inter-channel dependencies. On the other hand, NCPD preserves intrinsic\u0000structure but lacks interpretable latent factors, making it challenging to\u0000incorporate prior information as constraints. To address these limitations, we\u0000introduce a clustered source model based on nonnegative block-term\u0000decomposition (NBTD). This model defines blocks as outer products of vectors\u0000(clusters) and matrices (for spectral structure modeling), offering\u0000interpretable latent vectors. Moreover, it enables straightforward integration\u0000of orthogonality constraints to ensure independence among source images.\u0000Experimental results demonstrate that our proposed method outperforms ILRMA and\u0000its extensions in anechoic conditions and surpasses the original ILRMA in\u0000simulated reverberant environments.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
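The low-rank source model that ILRMA builds on approximates each source's power spectrogram with NMF under the Itakura-Saito divergence. The sketch below shows minimal multiplicative-update IS-NMF on a single power spectrogram; the full ILRMA and the proposed NBTD method additionally update demixing filters and couple sources across channels, which is not shown, and the rank and iteration count are arbitrary assumptions.

```python
# Minimal Itakura-Saito NMF (the per-source spectral model used in ILRMA-style methods).
import numpy as np

def is_nmf(X, K=8, n_iter=100, eps=1e-12):
    """X: nonnegative power spectrogram (freq, frames); returns W (freq, K), H (K, frames)."""
    F, T = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, K)) + eps
    H = rng.random((K, T)) + eps
    for _ in range(n_iter):
        V = W @ H
        W *= ((X / (V ** 2)) @ H.T) / ((1.0 / V) @ H.T)   # IS multiplicative update for W
        V = W @ H
        H *= (W.T @ (X / (V ** 2))) / (W.T @ (1.0 / V))   # IS multiplicative update for H
    return W, H

# Example: model the power spectrogram of one mixture channel.
X = np.abs(np.random.randn(513, 200)) ** 2 + 1e-12
W, H = is_nmf(X)
```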
Whispy: Adapting STT Whisper Models to Real-Time Environments
arXiv - CS - Sound Pub Date: 2024-05-06 DOI: arxiv-2405.03484
Antonio Bevilacqua, Paolo Saviano, Alessandro Amirante, Simon Pietro Romano
{"title":"Whispy: Adapting STT Whisper Models to Real-Time Environments","authors":"Antonio Bevilacqua, Paolo Saviano, Alessandro Amirante, Simon Pietro Romano","doi":"arxiv-2405.03484","DOIUrl":"https://doi.org/arxiv-2405.03484","url":null,"abstract":"Large general-purpose transformer models have recently become the mainstay in\u0000the realm of speech analysis. In particular, Whisper achieves state-of-the-art\u0000results in relevant tasks such as speech recognition, translation, language\u0000identification, and voice activity detection. However, Whisper models are not\u0000designed to be used in real-time conditions, and this limitation makes them\u0000unsuitable for a vast plethora of practical applications. In this paper, we\u0000introduce Whispy, a system intended to bring live capabilities to the Whisper\u0000pretrained models. As a result of a number of architectural optimisations,\u0000Whispy is able to consume live audio streams and generate high level, coherent\u0000voice transcriptions, while still maintaining a low computational cost. We\u0000evaluate the performance of our system on a large repository of publicly\u0000available speech datasets, investigating how the transcription mechanism\u0000introduced by Whispy impacts on the Whisper output. Experimental results show\u0000how Whispy excels in robustness, promptness, and accuracy.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
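To see why adaptation is needed, the sketch below shows the naive way of feeding a live stream to the offline openai-whisper API: accumulate chunks in a buffer and repeatedly re-transcribe the tail. Whispy's real pipeline adds the architectural optimisations the abstract refers to (not shown here); the window length, chunk handling, and language setting are arbitrary assumptions.

```python
# Naive chunked-buffer baseline for live transcription with openai-whisper.
import numpy as np
import whisper

model = whisper.load_model("base")
SAMPLE_RATE = 16_000      # Whisper expects 16 kHz mono float32
WINDOW_SECONDS = 5        # re-transcribe only the last few seconds on each update

buffer = np.zeros(0, dtype=np.float32)

def on_audio_chunk(chunk: np.ndarray) -> str:
    """Append a new chunk from the capture device and re-transcribe the tail."""
    global buffer
    buffer = np.concatenate([buffer, chunk.astype(np.float32)])
    window = buffer[-WINDOW_SECONDS * SAMPLE_RATE:]
    result = model.transcribe(window, fp16=False, language="en")
    return result["text"]
```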
Deep Space Separable Distillation for Lightweight Acoustic Scene Classification
arXiv - CS - Sound Pub Date: 2024-05-06 DOI: arxiv-2405.03567
ShuQi Ye, Yuan Tian
{"title":"Deep Space Separable Distillation for Lightweight Acoustic Scene Classification","authors":"ShuQi Ye, Yuan Tian","doi":"arxiv-2405.03567","DOIUrl":"https://doi.org/arxiv-2405.03567","url":null,"abstract":"Acoustic scene classification (ASC) is highly important in the real world.\u0000Recently, deep learning-based methods have been widely employed for acoustic\u0000scene classification. However, these methods are currently not lightweight\u0000enough as well as their performance is not satisfactory. To solve these\u0000problems, we propose a deep space separable distillation network. Firstly, the\u0000network performs high-low frequency decomposition on the log-mel spectrogram,\u0000significantly reducing computational complexity while maintaining model\u0000performance. Secondly, we specially design three lightweight operators for ASC,\u0000including Separable Convolution (SC), Orthonormal Separable Convolution (OSC),\u0000and Separable Partial Convolution (SPC). These operators exhibit highly\u0000efficient feature extraction capabilities in acoustic scene classification\u0000tasks. The experimental results demonstrate that the proposed method achieves a\u0000performance gain of 9.8% compared to the currently popular deep learning\u0000methods, while also having smaller parameter count and computational\u0000complexity.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
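The SC/OSC/SPC operators are refinements of the standard depthwise-separable convolution, shown below as a generic baseline: a per-channel spatial filter followed by a 1x1 channel-mixing convolution, which is what makes such blocks cheap on log-mel spectrogram inputs. This is not the authors' exact operator definitions, and the channel counts are illustrative.

```python
# Standard depthwise-separable convolution block (generic baseline, not the paper's operators).
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A log-mel spectrogram batch: (batch, 1, mel_bins, frames).
feat = SeparableConv2d(1, 16)(torch.randn(8, 1, 128, 256))
```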
Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models
arXiv - CS - Sound Pub Date: 2024-05-05 DOI: arxiv-2405.02801
Tianze Xu, Jiajun Li, Xuesong Chen, Yinrui Yao, Shuchang Liu
{"title":"Mozart's Touch: A Lightweight Multi-modal Music Generation Framework Based on Pre-Trained Large Models","authors":"Tianze Xu, Jiajun Li, Xuesong Chen, Yinrui Yao, Shuchang Liu","doi":"arxiv-2405.02801","DOIUrl":"https://doi.org/arxiv-2405.02801","url":null,"abstract":"In recent years, AI-Generated Content (AIGC) has witnessed rapid\u0000advancements, facilitating the generation of music, images, and other forms of\u0000artistic expression across various industries. However, researches on general\u0000multi-modal music generation model remain scarce. To fill this gap, we propose\u0000a multi-modal music generation framework Mozart's Touch. It could generate\u0000aligned music with the cross-modality inputs, such as images, videos and text.\u0000Mozart's Touch is composed of three main components: Multi-modal Captioning\u0000Module, Large Language Model (LLM) Understanding & Bridging Module, and Music\u0000Generation Module. Unlike traditional approaches, Mozart's Touch requires no\u0000training or fine-tuning pre-trained models, offering efficiency and\u0000transparency through clear, interpretable prompts. We also introduce\u0000\"LLM-Bridge\" method to resolve the heterogeneous representation problems\u0000between descriptive texts of different modalities. We conduct a series of\u0000objective and subjective evaluations on the proposed model, and results\u0000indicate that our model surpasses the performance of current state-of-the-art\u0000models. Our codes and examples is availble at:\u0000https://github.com/WangTooNaive/MozartsTouch","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
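As a conceptual stand-in for the image-to-caption-to-music pipeline, the sketch below chains two off-the-shelf Hugging Face pipelines. The model choices, the trivial prompt template, and the omission of a dedicated LLM bridging step are all assumptions made for illustration; the framework's actual captioning, LLM-Bridge, and music modules are in the linked repository.

```python
# Illustrative image -> caption -> prompt -> music chain using off-the-shelf models.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
music_gen = pipeline("text-to-audio", model="facebook/musicgen-small")

def image_to_music(image_path: str):
    caption = captioner(image_path)[0]["generated_text"]
    # A trivial "bridge" from a visual description to a music prompt.
    prompt = f"Background music matching this scene: {caption}"
    out = music_gen(prompt, forward_params={"do_sample": True})
    return out["audio"], out["sampling_rate"]
```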
Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction
arXiv - CS - Sound Pub Date: 2024-05-05 DOI: arxiv-2405.02821
Changan Chen, Jordi Ramos, Anshul Tomar, Kristen Grauman
{"title":"Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction","authors":"Changan Chen, Jordi Ramos, Anshul Tomar, Kristen Grauman","doi":"arxiv-2405.02821","DOIUrl":"https://doi.org/arxiv-2405.02821","url":null,"abstract":"Sim2real transfer has received increasing attention lately due to the success\u0000of learning robotic tasks in simulation end-to-end. While there has been a lot\u0000of progress in transferring vision-based navigation policies, the existing\u0000sim2real strategy for audio-visual navigation performs data augmentation\u0000empirically without measuring the acoustic gap. The sound differs from light in\u0000that it spans across much wider frequencies and thus requires a different\u0000solution for sim2real. We propose the first treatment of sim2real for\u0000audio-visual navigation by disentangling it into acoustic field prediction\u0000(AFP) and waypoint navigation. We first validate our design choice in the\u0000SoundSpaces simulator and show improvement on the Continuous AudioGoal\u0000navigation benchmark. We then collect real-world data to measure the spectral\u0000difference between the simulation and the real world by training AFP models\u0000that only take a specific frequency subband as input. We further propose a\u0000frequency-adaptive strategy that intelligently selects the best frequency band\u0000for prediction based on both the measured spectral difference and the energy\u0000distribution of the received audio, which improves the performance on the real\u0000data. Lastly, we build a real robot platform and show that the transferred\u0000policy can successfully navigate to sounding objects. This work demonstrates\u0000the potential of building intelligent agents that can see, hear, and act\u0000entirely from simulation, and transferring them to the real world.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
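To make the frequency-adaptive idea concrete, the sketch below scores each subband by the energy of the received audio, discounted by a precomputed sim-to-real spectral gap for that band, and picks the best one. The band layout, the exponential discount, and the gap values are assumptions for illustration only; the paper measures these quantities with per-band AFP models rather than with a hand-written rule.

```python
# Hedged sketch of energy- and gap-aware frequency band selection.
import numpy as np
from scipy.signal import stft

def select_band(audio, fs, band_edges_hz, spectral_gap):
    """band_edges_hz: list of (low, high) tuples; spectral_gap: per-band
    sim-vs-real mismatch (larger = less trustworthy), same length."""
    f, _, Z = stft(audio, fs=fs, nperseg=512)
    power = np.abs(Z) ** 2
    scores = []
    for (lo, hi), gap in zip(band_edges_hz, spectral_gap):
        band_energy = power[(f >= lo) & (f < hi)].sum()
        scores.append(band_energy * np.exp(-gap))  # favour energetic, well-matched bands
    return int(np.argmax(scores))

bands = [(0, 1000), (1000, 4000), (4000, 8000)]
gaps = np.array([0.2, 0.5, 1.5])                   # hypothetical measured gaps
best = select_band(np.random.randn(16000), 16000, bands, gaps)
```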
RepAugment: Input-Agnostic Representation-Level Augmentation for Respiratory Sound Classification
arXiv - CS - Sound Pub Date: 2024-05-05 DOI: arxiv-2405.02996
June-Woo Kim, Miika Toikkanen, Sangmin Bae, Minseok Kim, Ho-Young Jung
{"title":"RepAugment: Input-Agnostic Representation-Level Augmentation for Respiratory Sound Classification","authors":"June-Woo Kim, Miika Toikkanen, Sangmin Bae, Minseok Kim, Ho-Young Jung","doi":"arxiv-2405.02996","DOIUrl":"https://doi.org/arxiv-2405.02996","url":null,"abstract":"Recent advancements in AI have democratized its deployment as a healthcare\u0000assistant. While pretrained models from large-scale visual and audio datasets\u0000have demonstrably generalized to this task, surprisingly, no studies have\u0000explored pretrained speech models, which, as human-originated sounds,\u0000intuitively would share closer resemblance to lung sounds. This paper explores\u0000the efficacy of pretrained speech models for respiratory sound classification.\u0000We find that there is a characterization gap between speech and lung sound\u0000samples, and to bridge this gap, data augmentation is essential. However, the\u0000most widely used augmentation technique for audio and speech, SpecAugment,\u0000requires 2-dimensional spectrogram format and cannot be applied to models\u0000pretrained on speech waveforms. To address this, we propose RepAugment, an\u0000input-agnostic representation-level augmentation technique that outperforms\u0000SpecAugment, but is also suitable for respiratory sound classification with\u0000waveform pretrained models. Experimental results show that our approach\u0000outperforms the SpecAugment, demonstrating a substantial improvement in the\u0000accuracy of minority disease classes, reaching up to 7.14%.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
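The key idea, augmenting in representation space rather than on the input spectrogram or waveform, can be illustrated with a generic operation on pretrained-encoder embeddings: random dimension masking plus small Gaussian noise, applied before the classifier head. This is only a sketch of the concept; the masking probability, noise level, and the specific operations are assumptions, not the authors' exact RepAugment recipe.

```python
# Generic representation-level augmentation on pretrained-encoder embeddings.
import torch

def augment_representation(z: torch.Tensor, mask_prob=0.1, noise_std=0.05):
    """z: (batch, dim) embeddings from a frozen/pretrained audio or speech encoder."""
    keep = (torch.rand_like(z) > mask_prob).float()  # randomly zero some dimensions
    noise = noise_std * torch.randn_like(z)
    return z * keep + noise

z = torch.randn(16, 768)          # e.g. wav2vec-style utterance embeddings
z_aug = augment_representation(z)
```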
Steered Response Power for Sound Source Localization: A Tutorial Review
arXiv - CS - Sound Pub Date: 2024-05-05 DOI: arxiv-2405.02991
Eric Grinstein, Elisa Tengan, Bilgesu Çakmak, Thomas Dietzen, Leonardo Nunes, Toon van Waterschoot, Mike Brookes, Patrick A. Naylor
{"title":"Steered Response Power for Sound Source Localization: A Tutorial Review","authors":"Eric Grinstein, Elisa Tengan, Bilgesu Çakmak, Thomas Dietzen, Leonardo Nunes, Toon van Waterschoot, Mike Brookes, Patrick A. Naylor","doi":"arxiv-2405.02991","DOIUrl":"https://doi.org/arxiv-2405.02991","url":null,"abstract":"In the last three decades, the Steered Response Power (SRP) method has been\u0000widely used for the task of Sound Source Localization (SSL), due to its\u0000satisfactory localization performance on moderately reverberant and noisy\u0000scenarios. Many works have analyzed and extended the original SRP method to\u0000reduce its computational cost, to allow it to locate multiple sources, or to\u0000improve its performance in adverse environments. In this work, we review over\u0000200 papers on the SRP method and its variants, with emphasis on the SRP-PHAT\u0000method. We also present eXtensible-SRP, or X-SRP, a generalized and modularized\u0000version of the SRP algorithm which allows the reviewed extensions to be\u0000implemented. We provide a Python implementation of the algorithm which includes\u0000selected extensions from the literature.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140886308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
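For readers unfamiliar with SRP-PHAT, the sketch below computes a single-frame steered response power over a grid of far-field candidate directions: for every microphone pair, the PHAT-weighted cross-spectrum is steered to the pair's expected time difference of arrival and accumulated. The array geometry, far-field assumption, and grid are arbitrary illustrations, and the authors' X-SRP framework is far more general than this minimal version.

```python
# Minimal single-frame SRP-PHAT over a far-field direction grid.
import numpy as np

def srp_phat(frames, mic_pos, cand_dirs, fs, c=343.0, nfft=1024):
    """frames: (num_mics, nfft) time-domain snapshot; mic_pos: (num_mics, 3);
    cand_dirs: (num_cands, 3) unit vectors from the array towards candidate sources."""
    X = np.fft.rfft(frames, n=nfft, axis=1)                # (M, F)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)              # (F,)
    M = mic_pos.shape[0]
    power = np.zeros(len(cand_dirs))
    for m in range(M):
        for n in range(m + 1, M):
            cross = X[m] * np.conj(X[n])
            cross /= np.abs(cross) + 1e-12                 # PHAT weighting
            # Far-field TDOA (arrival at mic m minus arrival at mic n) per candidate.
            tau = (cand_dirs @ (mic_pos[n] - mic_pos[m])) / c
            steer = np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
            power += np.real(steer @ cross)                # accumulate GCC-PHAT values
    return power  # argmax over candidates gives the estimated direction

# Example: 4-mic square array, 72 azimuth candidates in the horizontal plane.
mics = np.array([[0.05, 0.05, 0], [-0.05, 0.05, 0], [-0.05, -0.05, 0], [0.05, -0.05, 0]])
az = np.linspace(0, 2 * np.pi, 72, endpoint=False)
dirs = np.stack([np.cos(az), np.sin(az), np.zeros_like(az)], axis=1)
p = srp_phat(np.random.randn(4, 1024), mics, dirs, fs=16000)
```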