Proceedings of the 30th ACM International Conference on Multimedia: Latest Publications

Generalized Inter-class Loss for Gait Recognition
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3548311
Weichen Yu, Hongyuan Yu, Yan Huang, Liang Wang
Abstract: Gait recognition is a unique biometric technique that can be performed at a long distance non-cooperatively and has broad applications in public safety and intelligent traffic systems. Previous gait works focus more on minimizing intra-class variance while ignoring the significance of constraining inter-class variance. To this end, we propose a generalized inter-class loss that resolves inter-class variance at both the sample-level and the class-level feature distribution. Instead of applying equal penalty strength to pair scores, the proposed loss optimizes the sample-level inter-class feature distribution by dynamically adjusting the pairwise weights. Further, at the class level, the proposed loss adds a constraint on the uniformity of the inter-class feature distribution, which forces the feature representations to approximate a hypersphere and keep maximal inter-class variance. In addition, the proposed method automatically adjusts the margin between classes, which makes the inter-class feature distribution more flexible. The proposed method can be generalized to different gait recognition networks and achieves significant improvements. We conduct a series of experiments on CASIA-B and OUMVLP, and the results show that the proposed loss significantly improves performance and achieves state-of-the-art results.
Citations: 5
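The loss described in the abstract above combines dynamically weighted inter-class pairs with a uniformity constraint on hypersphere-normalized features. Below is a minimal PyTorch sketch of that combination; the softmax-based pair weighting, the temperature, the class-center uniformity term, and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def inter_class_loss_sketch(features, labels, temperature=2.0, lam=0.5):
    """Toy inter-class loss: weighted separation of negative pairs plus a
    uniformity term on class centers (features assumed to lie on a hypersphere)."""
    z = F.normalize(features, dim=1)                       # project onto the unit sphere

    # Sample level: cosine similarities of inter-class (negative) pairs.
    sim = z @ z.t()
    neg_mask = labels.unsqueeze(0) != labels.unsqueeze(1)
    neg_sim = sim[neg_mask]
    # Harder (more similar) negative pairs receive larger weights,
    # instead of an equal penalty on every pair score.
    weights = torch.softmax(neg_sim * temperature, dim=0).detach()
    sample_term = (weights * neg_sim).sum()

    # Class level: encourage class centers to spread uniformly on the sphere.
    centers = torch.stack([z[labels == c].mean(0) for c in labels.unique()])
    centers = F.normalize(centers, dim=1)
    d2 = torch.cdist(centers, centers).pow(2)
    off_diag = d2[~torch.eye(len(centers), dtype=torch.bool)]
    uniformity_term = torch.log(torch.exp(-2.0 * off_diag).mean())

    return sample_term + lam * uniformity_term

# Toy usage: 8 embeddings of dimension 128 from 4 subjects.
feats = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
inter_class_loss_sketch(feats, labels).backward()
```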
Towards High-Fidelity Face Normal Estimation
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3547959
M. Wang, Chaoyue Wang, Xiaojie Guo, Jiawan Zhang
Abstract: While existing face normal estimation methods have produced promising results on small datasets, they often suffer from severe performance degradation on diverse in-the-wild face images, especially for high-fidelity face normal estimation. Training a high-fidelity face normal estimation model with generalization capability requires a large amount of training data with face normal ground truth. Collecting such a high-fidelity database is difficult in practice, which prevents current methods from recovering face normals with fine-grained geometric details. To mitigate this issue, we propose a coarse-to-fine framework to estimate face normals from an in-the-wild image with only a coarse exemplar reference. Specifically, we first train a model using limited training data to obtain the coarse normal of a real face image. Then, we leverage the estimated coarse normal as an exemplar and devise an exemplar-based normal estimation network to explore a robust mapping from the input face image to the fine-grained normal. In this manner, our method largely alleviates the negative impact of lacking training data and focuses on exploring the high-fidelity normal contained in natural images. Extensive experiments and ablation studies are conducted to demonstrate the efficacy of our design and reveal its superiority over state-of-the-art methods in terms of both training data requirements and recovery quality of fine-grained face normals. Our code is available at https://github.com/AutoHDR/HFFNE.
Citations: 0
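The method above is a two-stage, coarse-to-fine pipeline in which the coarse normal is fed back as an exemplar for refinement. A minimal PyTorch sketch of that data flow is given below; the layer choices and channel counts are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineNormalSketch(nn.Module):
    """Stage 1 predicts a coarse normal map from the image; stage 2 refines it
    using the image plus the coarse prediction as an exemplar."""

    def __init__(self):
        super().__init__()
        self.coarse = nn.Sequential(          # image (3ch) -> coarse normal (3ch)
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
        self.refine = nn.Sequential(          # image + exemplar (6ch) -> fine normal (3ch)
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image):
        coarse = self.coarse(image)
        fine = self.refine(torch.cat([image, coarse], dim=1))
        return F.normalize(fine, dim=1), coarse    # unit-length normals per pixel

model = CoarseToFineNormalSketch()
fine_normal, coarse_normal = model(torch.randn(1, 3, 128, 128))
```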
Rethinking the Mechanism of the Pattern Pruning and the Circle Importance Hypothesis
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3548290
Hengyi Zhou, Longjun Liu, Haonan Zhang, N. Zheng
Abstract: Network pruning is an effective and widely used model compression technique. Pattern pruning is a new sparsity-dimension pruning approach whose compression ability has been proven in some prior works. However, a detailed study of "pattern" and pattern pruning is still lacking. In this paper, we analyze the mechanism behind pattern pruning. Our analysis reveals that the effectiveness of pattern pruning should be attributed to finding the less important weights even before training. Then, motivated by the fact that the retinal ganglion cells in the biological visual system have approximately concentric receptive fields, we further investigate and propose the Circle Importance Hypothesis to guide the design of efficient patterns. We also design two series of special efficient patterns: circle patterns and semicircle patterns. Moreover, inspired by the neural architecture search technique, we propose a novel one-shot gradient-based pattern pruning algorithm. In addition, we expand depthwise convolutions with our circle patterns, which improves the accuracy of networks at little extra memory cost. Extensive experiments are performed to validate our hypotheses and the effectiveness of the proposed methods. For example, we reduce the FLOPs of ResNet-56 by 44.0% while improving its accuracy to 94.38% on CIFAR-10, and we reduce the FLOPs of ResNet-18 by 41.0% with only a 1.11% accuracy drop on ImageNet.
Citations: 1
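Pattern pruning keeps only a fixed set of positions inside each convolution kernel. The sketch below builds an illustrative "circle pattern" mask (positions within a radius of the kernel center) and applies it to a convolution layer; the radius and the apply-by-masking step are assumptions for illustration, not the paper's exact patterns or pruning algorithm.

```python
import torch
import torch.nn as nn

def circle_pattern_mask(kernel_size, radius):
    """Binary mask keeping kernel positions within `radius` of the kernel center."""
    c = (kernel_size - 1) / 2.0
    ys, xs = torch.meshgrid(torch.arange(kernel_size), torch.arange(kernel_size), indexing="ij")
    dist = ((ys - c) ** 2 + (xs - c) ** 2).float().sqrt()
    return (dist <= radius).float()

def apply_pattern(conv, mask):
    """Zero out kernel weights outside the pattern, in place."""
    with torch.no_grad():
        conv.weight.mul_(mask.to(conv.weight.dtype))

conv = nn.Conv2d(16, 16, kernel_size=5, padding=2)
mask = circle_pattern_mask(5, radius=1.5)   # keeps the central 3x3 disc (9 of 25 entries)
apply_pattern(conv, mask)
```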
Multimodal Analysis for Deep Video Understanding with Video Language Transformer
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3551600
Beibei Zhang, Yaqun Fang, Tongwei Ren, Gangshan Wu
Abstract: The Deep Video Understanding Challenge (DVUC) aims to use multi-modal information to build a high-level understanding of video, involving tasks such as relationship recognition and interaction detection. In this paper, we use a joint learning framework to simultaneously predict multiple tasks with visual, text, audio, and pose features. In addition, to answer the queries of DVUC, we design multiple answering strategies and use a video language transformer that learns cross-modal information for matching videos with text choices. The final DVUC results show that our method ranks first for group one of the movie-level queries, and third for both group one and group two of the scene-level queries.
Citations: 1
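Answering a DVUC multiple-choice query with a video-language model reduces to matching a video embedding against embeddings of the text choices. The sketch below shows that matching step only, with random tensors standing in for the transformer's learned cross-modal embeddings; the shapes and the cosine-similarity scoring are assumptions.

```python
import torch
import torch.nn.functional as F

def answer_multiple_choice(video_emb, choice_embs):
    """Score each text choice against the video embedding and pick the best one."""
    scores = F.cosine_similarity(choice_embs, video_emb.unsqueeze(0), dim=1)
    return scores.argmax().item(), scores.softmax(dim=0)

video_emb = torch.randn(256)        # embedding of the query video segment (placeholder)
choice_embs = torch.randn(4, 256)   # embeddings of the four candidate answers (placeholder)
best_choice, probs = answer_multiple_choice(video_emb, choice_embs)
```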
CALM: Commen-Sense Knowledge Augmentation for Document Image Understanding
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3548321
Qinyi Du, Qingqing Wang, Keqian Li, Jidong Tian, Liqiang Xiao, Yaohui Jin
Abstract: The performance of document image understanding has been significantly fueled by encoding multi-modal information in recent years. However, existing works rely heavily on the superficial appearance of the observed data, resulting in counter-intuitive model behavior in many critical cases. To overcome this issue, this paper proposes a common-sense knowledge augmented model, CALM, for document image understanding tasks. It first produces purified representations of document contents to extract key information and learn common-sense-augmented representations of the inputs. Then, relevant common-sense knowledge is extracted from the external ConceptNet knowledge base, and a derived knowledge graph is built to jointly enhance the common-sense reasoning capability of CALM. To further highlight the importance of common-sense knowledge in document image understanding, we propose the first question-answering dataset, CS-DVQA, focused on common-sense reasoning for document images, in which questions are answered by taking both document contents and common-sense knowledge into consideration. Through extensive evaluation, the proposed CALM approach outperforms state-of-the-art models in three document image understanding tasks: key information extraction (from 85.37 to 86.52), document image classification (from 96.08 to 96.17), and document visual question answering (from 86.72 to 88.03).
Citations: 1
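A core step above is building a derived knowledge graph from common-sense triples retrieved from ConceptNet for a document's key terms. The sketch below shows only that graph-building step with networkx; the triples are made-up examples labeled as such, not actual ConceptNet query results.

```python
import networkx as nx

# Hypothetical ConceptNet-style (relation, head, tail) triples for key terms
# extracted from a document; in CALM these would come from the external
# ConceptNet knowledge base.
triples = [
    ("RelatedTo", "invoice", "payment"),
    ("UsedFor", "invoice", "billing"),
    ("IsA", "receipt", "document"),
    ("RelatedTo", "payment", "money"),
]

kg = nx.DiGraph()
for rel, head, tail in triples:
    kg.add_edge(head, tail, relation=rel)

# Gather neighbors of a detected key term as extra common-sense context.
context = [(n, kg.edges["invoice", n]["relation"]) for n in kg.successors("invoice")]
print(context)   # [('payment', 'RelatedTo'), ('billing', 'UsedFor')]
```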
NarSUM '22: 1st Workshop on User-centric Narrative Summarization of Long Videos
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3554776
Mohan S. Kankanhalli, Jianquan Liu, Yongkang Wong, Karen Stephen, Rishabh Sheoran, Anusha Bhamidipati
Abstract: With video capture devices becoming widely popular, the amount of video data generated per day has increased rapidly over the past few years. Browsing through hours of video data to retrieve useful information is a tedious and boring task. Video summarization technology has played a crucial role in addressing this issue and is a well-researched topic in the multimedia community. However, the focus so far has been limited to creating summaries of short videos (only a few minutes long). This workshop calls for researchers with relevant backgrounds to focus on novel solutions for user-centric narrative summarization of long videos. The workshop will also cover important aspects of video summarization research, such as what is "important" in a video, how to evaluate the quality of a created summary, and open challenges in video summarization.
Citations: 0
An AI Powered Re-Identification System for Real-time Contextual Multimedia Applications
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3547739
Giuseppe Becchi, Andrea Ferracani, F. Principi, A. Bimbo
Abstract: In this demo we present an AI-powered person re-identification system, based on cameras installed in an environment, that can be used in the design and development of human-centered multimedia applications intended to provide improved situational awareness and context-sensitive user interfaces. Possible applications include, but are not limited to, user profiling and personalization systems, multimedia recommendation, social context understanding and gamification, and security, with objectives spanning environment monitoring, enhanced enjoyment of cultural heritage, retail trade promotion, and the provision of assistive technologies in industry. For the demonstration, we set up a system with several workstations equipped with cameras and placed next to reproductions of paintings, simulating a museum exhibition in which users are tracked and re-identified at different locations. The collected data is used to enrich the user experience through a reactive voice interface that considers the user's visit, the artworks that most attracted the visitor's attention, and the social context.
Citations: 0
Dynamic Transformer for Few-shot Instance Segmentation
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3548227
Haochen Wang, Jie Liu, Yongtuo Liu, Subhransu Maji, J. Sonke, E. Gavves
Abstract: Few-shot instance segmentation aims to train an instance segmentation model that can quickly adapt to novel classes with only a few reference images. Existing methods are usually derived from standard detection models and tackle few-shot instance segmentation indirectly by conducting classification, box regression, and mask prediction on a large set of redundant proposals, followed by indispensable post-processing such as Non-Maximum Suppression. Such complicated hand-crafted procedures and hyperparameters lead to degraded optimization and insufficient generalization ability. In this work, we propose an end-to-end Dynamic Transformer Network (DTN) to directly segment all target object instances of arbitrary categories given by reference images, relieving the requirement for dense proposal generation and post-processing. Specifically, a small set of Dynamic Queries, conditioned on the reference images, are exclusively assigned to target object instances and generate all the instance segmentation masks of the reference categories simultaneously. Moreover, a Semantic-induced Transformer Decoder is introduced to constrain the cross-attention between dynamic queries and target images to the pixels of the reference category, which suppresses noisy interaction with the background and irrelevant categories. Extensive experiments are conducted on the COCO-20 dataset, and the results demonstrate that our proposed Dynamic Transformer Network significantly outperforms the state of the art.
Citations: 3
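The Semantic-induced Transformer Decoder above constrains cross-attention between dynamic queries and target-image pixels to the reference-category region. Below is a minimal single-head sketch of such masked cross-attention; the shapes, scaling, and mask convention are simplifying assumptions, not the paper's decoder.

```python
import torch
import torch.nn.functional as F

def masked_cross_attention(queries, pixel_feats, category_mask):
    """Cross-attention of dynamic queries over flattened pixel features, with
    pixels outside the reference-category region suppressed.
    queries:       (Q, D)   dynamic queries conditioned on reference images
    pixel_feats:   (N, D)   flattened target-image pixel features
    category_mask: (N,)     1 for reference-category pixels, 0 otherwise
    """
    scores = queries @ pixel_feats.t() / pixel_feats.shape[1] ** 0.5    # (Q, N)
    scores = scores.masked_fill(category_mask.unsqueeze(0) == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ pixel_feats                      # (Q, D)

queries = torch.randn(5, 64)                 # 5 dynamic queries
pixels = torch.randn(32 * 32, 64)            # a 32x32 feature map, flattened
mask = (torch.rand(32 * 32) > 0.7).long()    # toy reference-category mask
attended = masked_cross_attention(queries, pixels, mask)
```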
Dynamic Spatio-Temporal Modular Network for Video Question Answering
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3548061
Zi Qian, Xin Wang, Xuguang Duan, Hong Chen, Wenwu Zhu
Abstract: Video Question Answering (VideoQA) aims to comprehensively understand given videos and questions by generating correct answers. However, existing methods usually rely on end-to-end black-box deep neural networks to infer the answers, which differs significantly from human logical reasoning and thus lacks explainability. In addition, the performance of existing methods tends to drop when answering compositional questions involving realistic scenarios. To tackle these challenges, we propose a Dynamic Spatio-Temporal Modular Network (DSTN) model, which utilizes a spatio-temporal modular network to simulate the compositional reasoning procedure of human beings. Concretely, we divide the task of answering a given question into a set of sub-tasks focusing on key concepts in questions and videos, such as objects, actions, and temporal orders. Each sub-task can be solved with a separately designed module, e.g., a spatial attention module, a temporal attention module, a logic module, and an answer module. We then dynamically assemble the modules assigned to different sub-tasks into a tree-structured spatio-temporal modular neural network for human-like reasoning before producing the final answer to the question. We carry out extensive experiments on the AGQA dataset to demonstrate that our proposed DSTN model significantly outperforms several baseline methods in various settings. Moreover, we evaluate intermediate results and visualize each reasoning step to verify the rationality of the different modules and the explainability of the proposed DSTN model.
Citations: 4
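DSTN assembles per-sub-task modules into a tree and executes the tree to answer a question. The sketch below shows only that assembly-and-execution mechanism on toy modules and toy features; the module behaviors, the program layout, and the question are invented for illustration and are not the paper's modules.

```python
import torch

# Toy stand-ins for DSTN-style modules operating on a (T, D) video feature map.
MODULES = {
    "find":   lambda state, concept: state * concept,                         # attend to a concept
    "before": lambda state, concept: torch.roll(state, 1, dims=0) * concept,  # toy temporal shift
    "and":    lambda a, b: torch.minimum(a, b),                               # logical conjunction
    "answer": lambda state: state.mean(),                                     # reduce to a score
}

def run_program(node, video_feats):
    """Recursively execute a tree-structured module program.
    node = (module_name, children, concept_embedding_or_None)."""
    name, children, concept = node
    outputs = [run_program(child, video_feats) for child in children]
    if name == "answer":
        return MODULES["answer"](outputs[0])
    if name == "and":
        return MODULES["and"](*outputs)
    base = outputs[0] if outputs else video_feats
    return MODULES[name](base, concept)

# Toy program for a "did X happen before Y?" style question, with random concept embeddings.
T, D = 8, 16
video = torch.rand(T, D)
cup, door = torch.rand(D), torch.rand(D)
program = ("answer",
           [("and",
             [("find", [], cup),
              ("before", [("find", [], door)], door)],
             None)],
           None)
score = run_program(program, video)
```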
Hierarchical Scene Normality-Binding Modeling for Anomaly Detection in Surveillance Videos
Proceedings of the 30th ACM International Conference on Multimedia. Pub Date: 2022-10-10. DOI: 10.1145/3503161.3548199
Qianyue Bao, F. Liu, Yang Liu, Licheng Jiao, Xu Liu, Lingling Li
Abstract: Anomaly detection in surveillance videos is an important topic in the multimedia community, which requires efficient scene context extraction and the capture of temporal information as a basis for decisions. From the perspective of hierarchical modeling, we parse the surveillance scene from global to local and propose a Hierarchical Scene Normality-Binding Modeling framework (HSNBM) to handle anomaly detection. For the static background hierarchy, we design a Region Clustering-driven Multi-task Memory Autoencoder (RCM-MemAE), which can simultaneously perform region segmentation and scene reconstruction. The normal prototypes of each local region are stored, and the frame reconstruction error is subsequently amplified by global memory augmentation. For the dynamic foreground object hierarchy, we employ a Scene-Object Binding Frame Prediction module (SOB-FP) to bind all foreground objects in the frame with the prototypes stored in the background hierarchy according to their positions, thus fully exploiting the normality relationship between foreground and background. The bound features are then fed into the decoder to predict the future movement of the objects. With the binding mechanism between foreground and background, HSNBM effectively integrates the "reconstruction" and "prediction" tasks and builds a semantic bridge between the two hierarchies. Finally, HSNBM fuses the anomaly scores of the two hierarchies to make a comprehensive decision. Extensive empirical studies on three standard video anomaly detection datasets demonstrate the effectiveness of the proposed HSNBM framework.
Citations: 9
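HSNBM fuses anomaly scores from the two hierarchies (frame reconstruction for the background, object-level future prediction for the foreground) into a final decision. The sketch below shows one simple way to fuse such scores; the error definitions, the min-max normalization, and the weight `alpha` are assumptions, not the paper's fusion rule.

```python
import torch

def fused_anomaly_score(frame, recon_frame, obj_future, obj_pred, alpha=0.5):
    """Combine a frame-level reconstruction error with an object-level
    future-frame prediction error into a single per-sample anomaly score."""
    recon_err = ((frame - recon_frame) ** 2).mean(dim=(1, 2, 3))     # (B,)
    pred_err = ((obj_future - obj_pred) ** 2).mean(dim=(1, 2, 3))    # (B,)

    def minmax(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    return alpha * minmax(recon_err) + (1 - alpha) * minmax(pred_err)

frames = torch.rand(4, 3, 64, 64)                  # a small batch of frames
scores = fused_anomaly_score(frames, torch.rand_like(frames),
                             torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32))
```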