Proceedings of the 2020 International Conference on Multimedia Retrieval: Latest Publications

Medical Image Retrieval: Applications and Resources
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390668
H. Müller
Abstract: Motivation: Medical imaging is one of the largest data producers in the world, and over the last 30 years this production has increased exponentially through a larger number of images, higher resolution, and entirely new types of images. Most images are used only in the context of a single patient and a single time point, aside from a few images used in publications or teaching. Data are usually scattered across many institutions and cannot be combined, even for the treatment of a single patient. Much knowledge is stored in these medical archives of images and other clinical information, and content-based medical image retrieval has from the start aimed at making such knowledge accessible using visual information in combination with text or structured data. The digitization of radiology, which started in the mid-1990s, laid the foundation for broader use. Problem statement: This keynote presentation aims to give a historical perspective on how medical image retrieval has evolved from a few prototypes using first only text, then global visual features, to the current multimodal systems that can index many types of images in large quantities and use deep learning as a basis for the tools [1,2,3,4]. It also examines the place of image retrieval in medicine, where it is currently still only sparsely used in clinical practice; it seems to be mainly a tool for teaching and research, while certified medical decision-support tools instead rely on specific approaches for detection and classification. Approach: The presentation follows a systematic review of the domain that includes many examples of systems and approaches that changed over time as better-performing tools became available. Medical image retrieval has evolved strongly, and many tools linked to image retrieval are now employed for clinical decision support, but mainly for detection and classification. Retrieval remains useful but is often integrated into other tools and has thus become almost invisible. A second aspect of the presentation is an overview of existing data sets and other resources that were difficult to obtain even ten years ago but have since been shared via repositories such as TCGA (The Cancer Genome Atlas, https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) and TCIA (The Cancer Imaging Archive, https://www.cancerimagingarchive.net), via scientific challenges such as ImageCLEF [5], or listed on the Grand Challenges web page (https://grand-challenge.org). Medical data are now easily accessible in many fields, often even in large quantities. Discussion: Medical retrieval has gone from single text or image retrieval to multimodal approaches [6], aiming to use all data available for a case, similar to what a physician would do by looking at a patient holistically. The limiting factor in terms of data access is now rather linked to limited manual annotations, as the time of clinicians for annotations …
Citations: 4
Are You Watching Closely? Content-based Retrieval of Hand Gestures
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390723
Mahnaz Parian, Luca Rossetto, H. Schuldt, S. Dupont
Abstract: Gestures play an important role in our daily communications. However, recognizing and retrieving gestures in the wild is a challenging task that has not been explored thoroughly in the literature. In this paper, we explore the problem of identifying and retrieving gestures in a large-scale video dataset provided by the computer vision community, based on queries recorded in the wild. Our proposed pipeline, I3DEF, is based on the extraction of spatio-temporal features from intermediate layers of an I3D network, a state-of-the-art network for action recognition, and on the fusion of feature maps from the RGB and optical-flow inputs. The obtained embeddings are used to train a triplet network to capture the similarity between gestures. We further explore the effect of a person- and body-part-masking step on both retrieval performance and recognition rate. Our experiments show the ability of I3DEF to recognize and retrieve gestures similar to the queries, independently of the depth modality. This performance holds both for queries taken from the test data and for queries using recordings of different people performing relevant gestures in a different setting.
Citations: 3
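As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch of triplet training over fused two-stream embeddings followed by cosine-similarity retrieval. The I3D backbone is replaced by random placeholder tensors, and `embed_dim`, the concatenation fusion, and the projection head are assumptions for illustration, not the authors' code.

```python
# Sketch of I3DEF-style triplet training and retrieval (PyTorch, assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 1024  # assumed size of pooled intermediate I3D feature maps

def fuse(rgb_feat: torch.Tensor, flow_feat: torch.Tensor) -> torch.Tensor:
    # Late fusion of the RGB and optical-flow streams by concatenation
    # (one plausible choice; the paper's exact fusion may differ).
    return torch.cat([rgb_feat, flow_feat], dim=-1)

projection = nn.Sequential(nn.Linear(2 * embed_dim, 256), nn.ReLU(), nn.Linear(256, 128))
triplet_loss = nn.TripletMarginLoss(margin=0.2)

# Placeholder batches standing in for pooled I3D features of anchor/positive/negative clips.
anchor = projection(fuse(torch.randn(8, embed_dim), torch.randn(8, embed_dim)))
positive = projection(fuse(torch.randn(8, embed_dim), torch.randn(8, embed_dim)))
negative = projection(fuse(torch.randn(8, embed_dim), torch.randn(8, embed_dim)))
loss = triplet_loss(anchor, positive, negative)
loss.backward()

# Retrieval: rank database clips by cosine similarity to a query embedding.
query = F.normalize(anchor[:1], dim=-1)
database = F.normalize(positive.detach(), dim=-1)
ranking = (database @ query.T).squeeze(1).argsort(descending=True)
print(ranking)
```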
A Framework for Paper Submission Recommendation System
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3391929
D. V. Cuong, Dac H. Nguyen, Son Huynh, Phong Huynh, C. Gurrin, Minh-Son Dao, Duc-Tien Dang-Nguyen, Binh T. Nguyen
Abstract: Nowadays, recommendation systems play an indispensable role in many fields, including e-commerce, finance, economics, and gaming. There is emerging research on publication venue recommendation systems to support researchers when submitting their scientific work. Several publishers, such as IEEE, Springer, and Elsevier, have implemented their own submission recommendation systems to help researchers choose appropriate conferences or journals for submission. In this work, we present a demo framework for constructing an effective paper submission recommendation system. Given the input data of a manuscript (the title, the abstract, and the list of possible keywords), the system recommends a list of the most relevant journals or conferences to the authors. Using state-of-the-art techniques in natural language understanding, we combine the extracted features with other useful handcrafted features, and we utilize deep learning models to build an efficient recommendation engine for the proposed system. Finally, we present the User Interface (UI) and the architecture of our paper submission recommendation system for later use by researchers.
Citations: 6
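To make the input/output contract concrete, here is a minimal sketch of venue recommendation from concatenated title/abstract/keyword text. It assumes a plain TF-IDF plus logistic-regression baseline instead of the paper's deep-learning engine, and the tiny corpus and venue labels are invented for illustration.

```python
# Sketch of a text-based venue recommender (assumed baseline, not the paper's model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training example concatenates title, abstract, and keywords of a past paper.
papers = [
    "image retrieval deep learning multimedia search",
    "convolutional networks object detection vision",
    "query optimization transactions database index",
    "distributed storage consistency database replication",
]
venues = ["ICMR", "ICMR", "SIGMOD", "SIGMOD"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(papers, venues)

manuscript = "cross-modal retrieval of video with deep embeddings"
# Rank candidate venues by predicted probability, highest first.
probs = model.predict_proba([manuscript])[0]
ranked = sorted(zip(model.classes_, probs), key=lambda p: -p[1])
print(ranked)
```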
A Coordinated Representation Learning Enhanced Multimodal Machine Translation Approach with Multi-Attention
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390717
Yifeng Han, Lin Li, Jianwei Zhang
Abstract: In recent years, machine translation has been applied more and more widely. Neural multimodal translation models, which incorporate images into deep learning networks such as the Transformer and RNNs, have made attractive progress. When considering images, existing translation models directly apply gate structures or image attention to introduce image features and enhance translation. We argue that this may mismatch the text and image features, since they lie in different semantic spaces. In this paper, we propose a coordinated representation learning enhanced multimodal machine translation approach with multimodal attention. Our approach accepts text data and its relevant image data as input. The image features are fed into the decoder side of the basic Transformer model. Moreover, coordinated representation learning is used to map the different text and image modal features into semantic representations that are linearly related in a shared semantic space. Finally, the sum of the image and text representations, called the Coordinated Visual-Semantic Representation (CVSR), is sent to a Multimodal Attention Layer (MAL) in our Transformer-based translation approach. Experimental results show that our approach achieves state-of-the-art performance on the public Multi30k dataset.
Citations: 4
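The core idea, projecting both modalities into one shared space, summing them into a joint representation, and attending over it, can be sketched compactly. The dimensions, the cosine-based alignment loss, and the use of `nn.MultiheadAttention` below are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch of coordinated text/image representations and multimodal attention (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

text_dim, image_dim, shared_dim = 512, 2048, 512
text_proj = nn.Linear(text_dim, shared_dim)
image_proj = nn.Linear(image_dim, shared_dim)

text_feat = torch.randn(4, text_dim)    # e.g. pooled Transformer encoder states
image_feat = torch.randn(4, image_dim)  # e.g. CNN image features

t, v = text_proj(text_feat), image_proj(image_feat)
# Coordination loss: push paired text/image projections to be closely related
# (here: high cosine similarity) in the shared semantic space.
align_loss = (1 - F.cosine_similarity(t, v, dim=-1)).mean()

cvsr = t + v  # coordinated visual-semantic representation (the paper's CVSR)
# Feed CVSR as key/value to an attention layer over decoder states, standing in for the MAL.
attn = nn.MultiheadAttention(shared_dim, num_heads=8, batch_first=True)
decoder_states = torch.randn(4, 10, shared_dim)
out, _ = attn(decoder_states, cvsr.unsqueeze(1), cvsr.unsqueeze(1))
print(align_loss.item(), out.shape)
```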
Learning Fine-Grained Similarity Matching Networks for Visual Tracking
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390729
Dawei Zhang, Zhonglong Zheng, Xiaowei He, Liu Su, Liyuan Chen
Abstract: Siamese trackers have recently become increasingly popular in the visual tracking community. Despite their great success, robust tracking in various challenging scenarios remains difficult. In this paper, we propose a novel similarity matching network that effectively extracts fine-grained semantic features by adding a classification branch and a category-aware module to the classical Siamese framework (CCASiam). More specifically, the supervision module fully utilizes class information to compute a classification loss while the whole network is trained with a tracking loss, so that the network can extract more discriminative features for each specific target. During online tracking, the classification branch is removed, and the category-aware module guides the selection of target-active features using a ridge regression network, which avoids unnecessary computation and over-fitting. Furthermore, we introduce different types of attention mechanisms to selectively emphasize important semantic information. Owing to the fine-grained and category-aware features, CCASiam can track with high performance efficiently. Extensive experimental results on several tracking benchmarks show that the proposed tracker achieves state-of-the-art performance at real-time speed.
Citations: 7
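For readers unfamiliar with the "classical Siamese framework" the abstract builds on, here is a minimal sketch of its similarity matching step: the template embedding is cross-correlated with the search-region embedding to produce a response map. The tiny convolutional backbone is a placeholder, not CCASiam's network.

```python
# Sketch of Siamese cross-correlation matching (generic SiamFC-style operation).
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
)

template = torch.randn(1, 3, 64, 64)   # exemplar crop of the target
search = torch.randn(1, 3, 128, 128)   # larger search region in the next frame

z, x = backbone(template), backbone(search)
# Cross-correlation: slide the template embedding over the search embedding
# by using it as a convolution kernel; peaks mark likely target locations.
response = F.conv2d(x, z)  # (1, 1, H', W') response map
peak = response.flatten().argmax()
print(response.shape, peak)
```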
Visible-infrared Person Re-identification via Colorization-based Siamese Generative Adversarial Network
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390696
X. Zhong, Tianyou Lu, Wenxin Huang, Jingling Yuan, Wenxuan Liu, Chia-Wen Lin
Abstract: With the explosive growth of surveillance data captured day and night, visible-infrared person re-identification (VI-ReID) is an emerging challenge due to the apparent cross-modality discrepancy between visible and infrared images. Existing VI-ReID work mainly focuses on learning a robust feature to represent a person in both modalities, although the modality gap cannot be effectively eliminated this way. Recent research has proposed various generative adversarial network (GAN) models that transfer the visible modality to another, unified modality, aiming to bridge the cross-modality gap. However, these models neglect the information loss caused by transferring the domain of visible images, which is significant for identification. To address these problems effectively, we observe that key information such as textures and semantics in an infrared image can help to colorize the image itself, and that the colorized infrared image retains rich information from the infrared image while reducing the discrepancy with the visible image. We therefore propose a colorization-based Siamese generative adversarial network (CoSiGAN) for VI-ReID that bridges the cross-modality gap by retaining the identity of the colorized infrared image. Furthermore, we propose a feature-level fusion model to compensate for the transfer loss of colorization. Experiments conducted on two cross-modality person re-identification datasets demonstrate the superiority of the proposed method compared with the state of the art.
Citations: 16
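A minimal sketch of an identity-preserving colorization objective in this spirit: a generator colorizes a one-channel infrared image, a discriminator judges realism, and an embedding loss keeps the person's identity close to the paired visible image. All networks are tiny placeholders and the losses and weights are assumptions, not CoSiGAN's architecture.

```python
# Sketch of an identity-preserving colorization GAN objective (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())   # IR -> RGB generator
D = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))              # real/fake logit
embed = nn.Sequential(nn.Flatten(), nn.LazyLinear(64))         # toy ID embedding

ir = torch.randn(2, 1, 64, 64)        # infrared input
rgb_real = torch.randn(2, 3, 64, 64)  # visible image of the same identity

fake = G(ir)
adv = F.binary_cross_entropy_with_logits(D(fake), torch.ones(2, 1))  # fool D
ident = F.mse_loss(embed(fake), embed(rgb_real))  # keep the identity consistent
loss_G = adv + 0.5 * ident                        # 0.5 is an assumed weight
loss_G.backward()
print(loss_G.item())
```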
Automatic Evaluation of Iconic Image Retrieval based on Colour, Shape, and Texture
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390741
Riku Togashi, Sumio Fujita, T. Sakai
Abstract: Product image search must deal with large target image datasets that are frequently updated, so it is not always practical to maintain exhaustive and up-to-date relevance assessments for tuning and evaluating the search engine. Moreover, in similar product image search where the query is also an image, it is difficult to identify the possible search intents behind it and thereby verbalise the relevance criteria for the assessors, especially if graded relevance assessments are required. In this study, we focus on similar product image search within a given product category (e.g., shoes), wherein each image is iconic (i.e., the image clearly shows what the product looks like and basically nothing else), and propose an initial approach to evaluating the task without relying on manual relevance assessments. More specifically, we build a simple probabilistic model that assumes an image is generated from latent intents representing shape, texture, and colour, which enables us to estimate the relevance score of each image and thereby compute graded relevance measures for any image search engine result page. Through large-scale crowdsourcing experiments, we demonstrate that our proposed measures, InDCG (based on per-intent binary relevance) and D-InDCG (based on per-intent graded relevance), align reasonably well with human SERP preferences and with human image preferences. Hence, our automatic measures may be useful at least for rough tuning and evaluation of similar product image search.
Citations: 0
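To show what an intent-aware graded-relevance measure looks like, here is a minimal sketch: each image carries a relevance score per latent intent (shape, texture, colour), and a ranked list is scored by a DCG over intent-averaged relevance. This is a generic construction under assumed uniform intent weights and toy relevance values, not the paper's exact InDCG/D-InDCG definitions.

```python
# Sketch of an intent-aware DCG over per-intent graded relevance (assumed form).
import math

intent_weights = {"shape": 1 / 3, "texture": 1 / 3, "colour": 1 / 3}  # assumed uniform

# Per-intent graded relevance of each image on a ranked result page (toy values).
ranked_relevance = [
    {"shape": 3, "texture": 2, "colour": 3},
    {"shape": 1, "texture": 0, "colour": 2},
    {"shape": 0, "texture": 1, "colour": 0},
]

def intent_aware_dcg(ranking):
    score = 0.0
    for rank, rels in enumerate(ranking, start=1):
        # Expected relevance over latent intents at this rank position.
        expected_rel = sum(intent_weights[i] * r for i, r in rels.items())
        score += expected_rel / math.log2(rank + 1)  # standard DCG discount
    return score

print(intent_aware_dcg(ranked_relevance))
```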
An Interactive Learning System for Large-Scale Multimedia Analytics
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3391935
O. Khan
Abstract: Analyzing multimedia collections to gain insight is a common goal in both industry and society. Recent research has shown that while machines are getting better at analyzing multimedia data, they still lack the understanding and flexibility of humans. A central conjecture in multimedia analytics is that interactive learning is a key method for bridging the gap between human and machine. We investigate the requirements and design of Exquisitor, a very large-scale interactive learning system that aims to verify the validity of this conjecture. We describe the architecture and initial scalability results for Exquisitor, and propose research directions related to both performance and result quality.
Citations: 0
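For readers new to interactive learning, here is a minimal sketch of the generic relevance-feedback loop such systems are built around: a model is retrained on the user's judgments each round and used to surface the next candidates. The linear SVM, the random features, and the simulated user are all assumptions for illustration, not Exquisitor's implementation.

```python
# Sketch of a generic interactive-learning (relevance feedback) loop (assumed setup).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
collection = rng.normal(size=(10_000, 128))  # feature vectors of media items

labeled_idx, labels = [0, 1, 2, 3], [1, 0, 1, 0]  # initial user judgments
for round_ in range(3):
    # Retrain on all judgments gathered so far.
    model = LinearSVC().fit(collection[labeled_idx], labels)
    scores = model.decision_function(collection)
    # Surface the top-scoring unseen items; here the user's feedback is simulated.
    candidates = [i for i in np.argsort(-scores) if i not in labeled_idx][:5]
    for i in candidates:
        labeled_idx.append(i)
        labels.append(int(rng.random() < 0.5))  # simulated relevance judgment
    print(f"round {round_}: labeled {len(labeled_idx)} items")
```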
Automatic Color Scheme Extraction from Movies
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390685
Suzi Kim, Sunghee Choi
Abstract: A color scheme is an association of colors, i.e., a subset of all possible colors, that represents a visual identity. We propose an automated method to extract a color scheme from a movie. Since a movie is a carefully edited video with different objects and heterogeneous content embodying the director's messages and values, extracting a color scheme from it is challenging, as opposed to doing so for a general video filmed in one take without distinct shots or scenes. Despite these challenges, color scheme extraction plays a very important role in film production and application. The color scheme is an interpretation of the scenario by the cinematographer, and it can convey a mood or feeling that stays with the viewer after the movie has ended. It also serves as a descriptive attribute of a film, like metadata fields such as genre, director, and cast. Moreover, unlike metadata, it can be tagged automatically, so it can be applied directly to existing movie databases without much effort. Our method produces a color scheme from a movie in a bottom-up manner from segmented shots. We formulate color extraction as a selection problem in which perceptually important colors are selected using saliency. We introduce the semi-master-shot, an alternative unit defined as a combination of contiguous shots taken in the same place with similar colors. Using real movie videos, we demonstrate and validate the plausibility of the proposed technique.
Citations: 7
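The saliency-driven selection idea can be illustrated with a minimal sketch: cluster a frame's pixel colours, weighting each pixel by its saliency so perceptually important regions dominate the palette. The k-means choice and the random `frame`/`saliency` arrays are illustrative assumptions, not the paper's semi-master-shot pipeline.

```python
# Sketch of saliency-weighted palette extraction from one frame (assumed setup).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
frame = rng.random((90, 160, 3))   # one video frame, RGB values in [0, 1]
saliency = rng.random((90, 160))   # per-pixel saliency in [0, 1]

pixels = frame.reshape(-1, 3)
weights = saliency.reshape(-1)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
kmeans.fit(pixels, sample_weight=weights)  # salient pixels pull the centroids
palette = kmeans.cluster_centers_          # 5 RGB colours forming the scheme
print((palette * 255).astype(int))
```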
Analysis of the Effect of Dataset Construction Methodology on Transferability of Music Emotion Recognition Models
Pub Date: 2020-06-08. DOI: 10.1145/3372278.3390733
Sabina Hult, Line Bay Kreiberg, Sami Sebastian Brandt, B. Jónsson
Abstract: Indexing and retrieving music based on emotion is a powerful retrieval paradigm with many applications. Traditionally, studies in the field of music emotion recognition have focused on training and testing supervised machine learning models using a single music dataset. To be useful for today's vast music libraries, however, such machine learning models must be widely applicable beyond the dataset for which they were created. In this work, we analyze to what extent models trained on one music dataset can predict emotion in another dataset constructed with a different methodology, by conducting cross-dataset experiments with three publicly available datasets. Our results suggest that training a prediction model on a homogeneous dataset with carefully collected emotion annotations yields a better foundation than a prediction model learned on a larger, more varied dataset with less reliable annotations.
Citations: 4
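The cross-dataset protocol the abstract describes is easy to make concrete: train an emotion regressor on each dataset's features and test it on every other dataset. The random audio features, valence targets, and Ridge regressor below are illustrative stand-ins for the paper's datasets and models.

```python
# Sketch of a cross-dataset transferability experiment (assumed data and model).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
datasets = {name: (rng.normal(size=(300, 40)),  # audio feature vectors
                   rng.normal(size=300))        # e.g. valence annotations
            for name in ["A", "B", "C"]}

# Train on each dataset, evaluate on every other one.
for train_name, (X_tr, y_tr) in datasets.items():
    model = Ridge().fit(X_tr, y_tr)
    for test_name, (X_te, y_te) in datasets.items():
        if test_name != train_name:
            print(train_name, "->", test_name, r2_score(y_te, model.predict(X_te)))
```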