2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP): Latest Publications

MoEVC: A Mixture of Experts Voice Conversion System With Sparse Gating Mechanism for Online Computation Acceleration
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362072
Yu-Tao Chang, Yuan-Hong Yang, Yu-Huai Peng, Syu-Siang Wang, T. Chi, Yu Tsao, Hsin-Min Wang
Abstract: Owing to recent advancements in deep learning technology, the performance of voice conversion (VC) in terms of quality and similarity has significantly improved. However, deep-learning-based VC systems generally require complex computation, which can cause notable latency and limits their deployment in real-world applications. Increasing the efficiency of online computation has therefore become an important task. In this study, we propose a novel mixture-of-experts (MoE) based VC system, termed MoEVC. The MoEVC system uses a gating mechanism to assign weights to feature maps to improve VC performance. In addition, applying sparse constraints to the gating mechanism allows some convolution processes to be skipped by eliminating redundant feature maps, thereby accelerating online computation. Experimental results show that with proper sparse constraints we can reduce the FLOPs (floating-point operations) count by 70% while improving VC performance in both objective evaluation and human subjective listening tests.
Citations: 1
Improves Neural Acoustic Word Embeddings Query by Example Spoken Term Detection with Wav2vec Pretraining and Circle Loss
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362065
Zhaoqi Li, Long Wu, Ta Li, Yonghong Yan
Abstract: Query-by-example spoken term detection (QbE-STD) is a popular keyword detection method in the absence of speech resources: it can build a keyword query system with decent performance when little labeled speech is available and pronunciation dictionaries are lacking. In recent years, neural acoustic word embeddings (NAWEs) have become a commonly used QbE-STD method. To make the embedded features extracted by the neural network carry more accurate context information, we use wav2vec pretraining to improve the performance of the network. Compared with a Mel-frequency cepstral coefficients (MFCC) system, the average precision (AP) is relatively improved by 11.1%. We also find that a system splicing wav2vec and MFCC features performs better still, demonstrating that wav2vec does not capture all spectral information. To accelerate the convergence of the splicing system, we replace the triplet loss with circle loss, which on average reaches convergence in about 40% fewer epochs and relatively increases AP by more than 4.9%. The AP of our best-performing system is 7.7% better than the wav2vec baseline system and 19.7% better than the MFCC baseline system.
Citations: 0
Speaker Charisma Analyzed through the Cultural Lens
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362100
Anna Gutnyk, Oliver Niebuhr, Wentao Gu
Abstract: Speaker charisma is conveyed through multiple channels: ideas, visions, and perceivable verbal and non-verbal behaviors. Among these perceivable behaviors, probably the most prominent are the acoustic features of one's voice. We present a cross-cultural study on charismatic speech, combining acoustic-prosodic analysis with a perception experiment on speeches given by presenters from 6 countries on 4 continents, to shed initial light on cultural differences in producing and perceiving speaker charisma, as well as on the impact of speaker gender. Results show that charisma production and perception are affected by prosodic features to different extents across both countries and speaker genders.
Citations: 4
LDA-based Speaker Verification in Multi-Enrollment Scenario using Expected Vector Approach
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362113
Meet H. Soni, Ashish Panda
Abstract: The multi-enrollment scoring scenario, where multiple utterances are available for an enrollment speaker, is one of the less explored problems in the Probabilistic Linear Discriminant Analysis (PLDA) scoring literature. Since the closed-form PLDA scoring formula for the multi-enrollment scenario is impractical, alternate heuristic approaches are widely used in both i-vector and x-vector based speaker verification systems. In this paper, we describe an expected vector approach to obtain a single vector from multiple enrollment utterances. It uses a trained PLDA model to compute the expected class center given a set of vectors, yielding a more meaningful class-center representation. This vector can then be used to score a trial with the two-vector scoring formula of the PLDA model. We compare the proposed approach with various heuristic approaches and show that it provides significant improvements in Equal Error Rate (EER) and minimum Detection Cost Function (minDCF). We report results on an x-vector system trained on the VoxCeleb dataset with various implementations of PLDA, using trials designed on the VoxCeleb and LibriSpeech datasets.
Citations: 1
Production of Tone 3 Sandhi by Advanced Korean Learners of Mandarin
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362094
Xin Li, Yin Huang, Yunheng Xu, Linxin Yi, Yuming Yuan, Min Xiang
Abstract: Many studies have found that native Mandarin speakers produce T3 sandhi in a non-neutralized way, with the sandhi tone not fully merged into its sandhi target. This makes it interesting to investigate whether advanced learners of Mandarin produce T3 sandhi similarly or differently, and why. Results from this carefully controlled experimental study show that advanced Korean learners of Mandarin produce T3 sandhi in a neutralized way, in contrast to the non-neutralized production of native Mandarin speakers. This is probably due to Korean speakers' explicit learning of the T3 sandhi rule in their L2 learning program, as compared to the implicit learning of the rule by Mandarin speakers. Although Korean speakers produce T3 sandhi in a neutralized way, their sandhi T3 is overall flatter in f0 curve than that of native Mandarin speakers; this is consistent with their production of Mandarin lexical T2, the base tone of the sandhi target T2. This finding suggests that these advanced L2 learners' sandhi production is constrained by the way they produce Mandarin lexical tones.
Citations: 0
An Investigation of Positional Encoding in Transformer-based End-to-end Speech Recognition
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362093
Fengpeng Yue, Tom Ko
Abstract: In the Transformer architecture, the model does not intrinsically learn the ordering information of the input frames and tokens, owing to its self-attention mechanism. In sequence-to-sequence learning tasks, the missing ordering information is explicitly supplied through positional representations. Currently, there are two major ways of using positional representations: absolute and relative; in both, the positional information is represented by a positional vector. In this paper, we propose using a positional matrix in the context of relative positional representation. Instead of adding vectors to the key vectors in the self-attention layer, our method transforms the key vectors according to their positions. Experiments on the LibriSpeech dataset show that our approach outperforms the positional vector approach.
Citations: 2
Dialogue Act Recognition using Branch Architecture with Attention Mechanism for Imbalanced Data
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362103
Mengfei Wu, Longbiao Wang, Yuke Si, J. Dang
Abstract: Dialogue act recognition is a sequence labeling task that assigns a dialogue act tag to each utterance in a conversation. Previous work has investigated many methods, such as the Bi-LSTM-CRF model, to improve accuracy, but these methods ignore the problem caused by the imbalanced distribution of the data. In this paper, we address the class imbalance problem in dialogue act recognition and propose a branch architecture to predict data at different levels; the whole framework reflects a hierarchical pattern. The branches induce global regularization, which benefits the utterance layer and helps the LSTM model capture features of minority classes. We also exploit a self-attention mechanism after the utterance layer to capture dependencies among words. Experimental results on a Mandarin dialogue corpus, the CASIA-CASSIL corpus, show that our framework significantly outperforms other methods. The results also indicate the effectiveness of punctuation in the branch model and of the interaction between the two branches.
Citations: 1
Exploring Cross-lingual Singing Voice Synthesis Using Speech Data
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362077
Yuewen Cao, Songxiang Liu, Shiyin Kang, Na Hu, Peng Liu, Xunying Liu, Dan Su, Dong Yu, H. Meng
Abstract: State-of-the-art singing voice synthesis (SVS) models can generate the natural singing voice of a target speaker, given his/her speaking/singing data in the same language. However, there may be challenging conditions where only speech data in a non-target language of the target speaker is available. In this paper, we present a cross-lingual SVS system that can synthesize an English speaker's singing voice in Mandarin from musical scores using only her speech data in English. The proposed cross-lingual SVS system contains four parts: a BLSTM-based duration model, a pitch model, a cross-lingual acoustic model, and a neural vocoder. The acoustic model employs an encoder-decoder architecture conditioned on pitch, phoneme duration, speaker information, and language information. An adversarially trained speaker classifier is employed to discourage the text encodings from capturing speaker information. Objective evaluation and subjective listening tests demonstrate that the proposed cross-lingual SVS system can generate singing voice with decent naturalness and fair speaker similarity. We also find that adding singing data or multi-speaker monolingual speech data further improves generalization in pronunciation and pitch accuracy.
Citations: 2
Estimating Mutual Information in Prosody Representation for Emotional Prosody Transfer in Speech Synthesis
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362098
Guangyan Zhang, Shirong Qiu, Ying Qin, Tan Lee
Abstract: An end-to-end prosody transfer system aims to transfer speech prosody from one speaker to another; one major application is the generation of emotional speech in a new speaker's voice. The end-to-end system uses an intermediate representation of prosody that encompasses both speaker- and emotion-related information. The present study tackles the problem of estimating the mutual information between emotion- and speaker-related factors in the prosody representation. A mutual information neural estimator (MINE), which can measure the mutual information between a high-dimensional continuous prosody embedding and a discrete speaker/emotion label, is applied. The experimental results show that: 1) the prosody representation generated by the end-to-end system indeed contains both emotion and speaker information; 2) the mutual information depends on the type of acoustic features input to the reference encoder; 3) normalization of the log F0 feature is very effective in increasing emotion-related information in the prosody representation; and 4) adversarial learning can be applied to reduce speaker information in the prosody representation. These results are useful for the further development of optimal and practical emotional prosody transfer systems.
Citations: 8
Transformer-based Empathetic Response Generation Using Dialogue Situation and Advanced-Level Definition of Empathy
2021 12th International Symposium on Chinese Spoken Language Processing (ISCSLP) | Pub Date: 2021-01-24 | DOI: 10.1109/ISCSLP49672.2021.9362067
Yi-Hsuan Wang, Jia-Hao Hsu, Chung-Hsien Wu, Tsung-Hsien Yang
Abstract: This study proposes an approach to transformer-based empathetic response generation using the dialogue situation and an advanced-level definition of empathy. First, SBERT is adopted to extract a dialogue situation vector from the user's historical sentences. A BERT-based emotion detector, a topic detector, and an information estimator are constructed for empathy-related feature extraction. The change in emotional valence and the textual information gain, obtained from the emotion detector and the information estimator, are used for adversarial training of the transformer-based empathetic response generator. The loss function of the transformer is defined to measure how good the expected response is in terms of fluency and empathy. The EmpatheticDialogues corpus was adopted for system training and for evaluation of empathetic response generation. According to the experimental results, the BLEU score increased to 7.821 after considering the dialogue situation feature and the empathy definition, outperforming the comparison models. In human subjective evaluation, the proposed system scores better than the baseline model on empathy, relevance, and fluency.
Citations: 5