Machine learning in medical imaging. MLMI (Workshop): Latest Publications

SkullEngine: A Multi-Stage CNN Framework for Collaborative CBCT Image Segmentation and Landmark Detection.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2021-09-01 Epub Date: 2021-09-21 DOI: 10.1007/978-3-030-87589-3_62
Qin Liu, Han Deng, Chunfeng Lian, Xiaoyang Chen, Deqiang Xiao, Lei Ma, Xu Chen, Tianshu Kuang, Jaime Gateno, Pew-Thian Yap, James J Xia
{"title":"SkullEngine: A Multi-Stage CNN Framework for Collaborative CBCT Image Segmentation and Landmark Detection.","authors":"Qin Liu,&nbsp;Han Deng,&nbsp;Chunfeng Lian,&nbsp;Xiaoyang Chen,&nbsp;Deqiang Xiao,&nbsp;Lei Ma,&nbsp;Xu Chen,&nbsp;Tianshu Kuang,&nbsp;Jaime Gateno,&nbsp;Pew-Thian Yap,&nbsp;James J Xia","doi":"10.1007/978-3-030-87589-3_62","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_62","url":null,"abstract":"<p><p>Accurate bone segmentation and landmark detection are two essential preparation tasks in computer-aided surgical planning for patients with craniomaxillofacial (CMF) deformities. Surgeons typically have to complete the two tasks manually, spending ~12 hours for each set of CBCT or ~5 hours for CT. To tackle these problems, we propose a multi-stage coarse-to-fine CNN-based framework, called SkullEngine, for high-resolution segmentation and large-scale landmark detection through a collaborative, integrated, and scalable JSD model and three segmentation and landmark detection refinement models. We evaluated our framework on a clinical dataset consisting of 170 CBCT/CT images for the task of segmenting 2 bones (midface and mandible) and detecting 175 clinically common landmarks on bones, teeth, and soft tissues. Experimental results show that SkullEngine significantly improves segmentation quality, especially in regions where the bone is thin. In addition, SkullEngine also efficiently and accurately detect all of the 175 landmarks. Both tasks were completed simultaneously within 3 minutes regardless of CBCT or CT with high segmentation quality. Currently, SkullEngine has been integrated into a clinical workflow to further evaluate its clinical efficiency.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":" ","pages":"606-614"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8712093/pdf/nihms-1762341.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39631757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
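The SkullEngine abstract above outlines a generic coarse-to-fine pattern: a coarse joint model proposes bone masks and landmark locations on a downsampled volume, then refinement models re-run locally at full resolution. The sketch below illustrates only that orchestration pattern; the model callables, the downsampling factor, and the ROI size are placeholder assumptions rather than the authors' implementation.

```python
# A minimal sketch of a coarse-to-fine segmentation + landmark pipeline.
# All model callables here are placeholders supplied by the caller.
import numpy as np
from scipy.ndimage import zoom

def coarse_to_fine(volume, coarse_model, refine_seg, refine_lmk, scale=0.25, roi=64):
    """volume: 3D numpy array; coarse_model returns (mask, landmarks) at low resolution."""
    low = zoom(volume, scale, order=1)                    # downsample for the coarse stage
    coarse_mask, coarse_lmks = coarse_model(low)
    full_mask = zoom(coarse_mask.astype(float), 1 / scale, order=0) > 0.5
    refined_lmks = []
    for p in coarse_lmks:                                 # landmarks in low-res voxel coords
        c = np.round(np.asarray(p) / scale).astype(int)   # map to full resolution
        lo = np.maximum(c - roi // 2, 0)
        hi = np.minimum(lo + roi, volume.shape)
        patch = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        refined_lmks.append(lo + refine_lmk(patch))       # local offset -> global position
    return refine_seg(volume, full_mask), np.array(refined_lmks)

if __name__ == "__main__":
    vol = np.random.rand(128, 128, 128).astype(np.float32)
    dummy_coarse = lambda v: (v > 0.5, [np.array(v.shape) // 2])
    dummy_seg = lambda v, m: m
    dummy_lmk = lambda patch: np.array(patch.shape) // 2
    seg, lmks = coarse_to_fine(vol, dummy_coarse, dummy_seg, dummy_lmk)
    print(seg.shape, lmks)
```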
Multi-scale Self-supervised Learning for Multi-site Pediatric Brain MR Image Segmentation with Motion/Gibbs Artifacts
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2021-09-01 DOI: 10.1007/978-3-030-87589-3_18
Yue Sun, Kun Gao, W. Lin, Gang Li, Sijie Niu, Li Wang
{"title":"Multi-scale Self-supervised Learning for Multi-site Pediatric Brain MR Image Segmentation with Motion/Gibbs Artifacts","authors":"Yue Sun, Kun Gao, W. Lin, Gang Li, Sijie Niu, Li Wang","doi":"10.1007/978-3-030-87589-3_18","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_18","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"34 1","pages":"171-179"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78010116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Improving Joint Learning of Chest X-Ray and Radiology Report by Word Region Alignment
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2021-09-01 DOI: 10.1007/978-3-030-87589-3_12
Zhanghexuan Ji, Mohammad Abuzar Shaikh, Dana Moukheiber, S. Srihari, Yifan Peng, Mingchen Gao
{"title":"Improving Joint Learning of Chest X-Ray and Radiology Report by Word Region Alignment","authors":"Zhanghexuan Ji, Mohammad Abuzar Shaikh, Dana Moukheiber, S. Srihari, Yifan Peng, Mingchen Gao","doi":"10.1007/978-3-030-87589-3_12","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_12","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"1 1","pages":"110-119"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80401668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2021-09-01 DOI: 10.1007/978-3-030-87589-3_41
Ugur Demir, Ismail Irmakci, Elif Keles, Ahmet Topcu, Ziyue Xu, Concetto Spampinato, Sachin Jambawalikar, Evrim Turkbey, Baris Turkbey, Ulas Bagci
{"title":"Information Bottleneck Attribution for Visual Explanations of Diagnosis and Prognosis.","authors":"Ugur Demir,&nbsp;Ismail Irmakci,&nbsp;Elif Keles,&nbsp;Ahmet Topcu,&nbsp;Ziyue Xu,&nbsp;Concetto Spampinato,&nbsp;Sachin Jambawalikar,&nbsp;Evrim Turkbey,&nbsp;Baris Turkbey,&nbsp;Ulas Bagci","doi":"10.1007/978-3-030-87589-3_41","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_41","url":null,"abstract":"<p><p>Visual explanation methods have an important role in the prognosis of the patients where the annotated data is limited or unavailable. There have been several attempts to use gradient-based attribution methods to localize pathology from medical scans without using segmentation labels. This research direction has been impeded by the lack of robustness and reliability. These methods are highly sensitive to the network parameters. In this study, we introduce a robust visual explanation method to address this problem for medical applications. We provide an innovative visual explanation algorithm for general purpose and as an example application we demonstrate its effectiveness for quantifying lesions in the lungs caused by the Covid-19 with high accuracy and robustness without using dense segmentation labels. This approach overcomes the drawbacks of commonly used Grad-CAM and its extended versions. The premise behind our proposed strategy is that the information flow is minimized while ensuring the classifier prediction stays similar. Our findings indicate that the bottleneck condition provides a more stable severity estimation than the similar attribution methods. The source code will be publicly available upon publication.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12966 ","pages":"396-405"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921297/pdf/nihms-1871448.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10721276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
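As described in the abstract above, the core idea is to inject noise into an intermediate feature map through a learned mask so that information flow is minimized while the classifier's prediction stays similar. The per-sample sketch below is a hedged illustration of that principle in the spirit of information-bottleneck attribution, not the authors' released code; the feature statistics `mu`/`sigma`, the layer choice, and the weight `beta` are assumptions.

```python
# Per-sample information-bottleneck attribution sketch: optimize a mask that
# trades off information flow against keeping the prediction unchanged.
import torch
import torch.nn.functional as F

def iba_attribution(feat, head, target, mu, sigma, beta=10.0, steps=300, lr=1.0):
    """feat: (C, H, W) features of one sample; head: maps (1, C, H, W) -> logits;
    mu/sigma: per-channel feature statistics estimated on a data batch."""
    alpha = torch.zeros_like(feat, requires_grad=True)        # pre-sigmoid mask
    opt = torch.optim.Adam([alpha], lr=lr)
    mu, sigma = mu.view(-1, 1, 1), sigma.view(-1, 1, 1).clamp_min(1e-6)
    for _ in range(steps):
        lam = torch.sigmoid(alpha)                            # 1 = keep signal, 0 = replace by noise
        eps = mu + sigma * torch.randn_like(feat)
        z = lam * feat + (1 - lam) * eps                      # noisy bottleneck features
        # closed-form KL of the masked Gaussian against the feature prior N(mu, sigma^2)
        capacity = (-torch.log1p(-lam + 1e-6)
                    + 0.5 * ((1 - lam) ** 2 + (lam * (feat - mu) / sigma) ** 2) - 0.5)
        ce = F.cross_entropy(head(z.unsqueeze(0)), target.view(1))
        loss = ce + beta * capacity.mean()                    # keep prediction, cut information
        opt.zero_grad()
        loss.backward()
        opt.step()
    # per-pixel attribution: how much information each location transmits
    return capacity.detach().sum(dim=0)
```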
Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2021-01-01 DOI: 10.1007/978-3-030-87589-3
{"title":"Machine Learning in Medical Imaging: 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings","authors":"","doi":"10.1007/978-3-030-87589-3","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77936797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Robust Multiple Sclerosis Lesion Inpainting with Edge Prior.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_13
Huahong Zhang, Rohit Bakshi, Francesca Bagnato, Ipek Oguz
{"title":"Robust Multiple Sclerosis Lesion Inpainting with Edge Prior.","authors":"Huahong Zhang,&nbsp;Rohit Bakshi,&nbsp;Francesca Bagnato,&nbsp;Ipek Oguz","doi":"10.1007/978-3-030-59861-7_13","DOIUrl":"10.1007/978-3-030-59861-7_13","url":null,"abstract":"<p><p>Inpainting lesions is an important preprocessing task for algorithms analyzing brain MRIs of multiple sclerosis (MS) patients, such as tissue segmentation and cortical surface reconstruction. We propose a new deep learning approach for this task. Unlike existing inpainting approaches which ignore the lesion areas of the input image, we leverage the edge information around the lesions as a prior to help the inpainting process. Thus, the input of this network includes the T1-w image, lesion mask and the edge map computed from the T1-w image, and the output is the lesion-free image. The introduction of the edge prior is based on our observation that the edge detection results of the MRI scans will usually contain the contour of white matter (WM) and grey matter (GM), even though some undesired edges appear near the lesions. Instead of losing all the information around the neighborhood of lesions, our approach preserves the local tissue shape (brain/WM/GM) with the guidance of the input edges. The qualitative results show that our pipeline inpaints the lesion areas in a realistic and shape-consistent way. Our quantitative evaluation shows that our approach outperforms the existing state-of-the-art inpainting methods in both image-based metrics and in FreeSurfer segmentation accuracy. Furthermore, our approach demonstrates robustness to inaccurate lesion mask inputs. This is important for practical usability, because it allows for a generous over-segmentation of lesions instead of requiring precise boundaries, while still yielding accurate results.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12436 ","pages":"120-129"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8692168/pdf/nihms-1752653.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39847994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
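The abstract above specifies the network input as the T1-w image, the lesion mask, and an edge map computed from the T1-w image, with a lesion-free image as output. The snippet below sketches only that input construction; the Sobel-magnitude edge detector, the threshold, and the downstream network are illustrative stand-ins, not the authors' exact choices.

```python
# Build the three-channel input (T1-w, lesion mask, edge map) for an inpainting CNN.
import numpy as np
from scipy import ndimage

def build_inpainting_input(t1w, lesion_mask, edge_threshold=0.1):
    """t1w: 2D/3D float array in [0, 1]; lesion_mask: binary array, 1 inside lesions."""
    grad = np.sqrt(sum(ndimage.sobel(t1w, axis=a) ** 2 for a in range(t1w.ndim)))
    edges = (grad > edge_threshold * grad.max()).astype(np.float32)   # binary edge prior
    # Stack channels in the order named in the abstract: T1-w image, lesion mask, edge map.
    # Whether the lesion region of the T1-w channel is zeroed out is not specified here.
    return np.stack([t1w, lesion_mask, edges], axis=0)

if __name__ == "__main__":
    t1w = np.random.rand(64, 64).astype(np.float32)
    mask = np.zeros_like(t1w)
    mask[20:30, 20:30] = 1
    x = build_inpainting_input(t1w, mask)
    print(x.shape)   # (3, 64, 64) -> fed to the inpainting network
```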
Temporal-Adaptive Graph Convolutional Network for Automated Identification of Major Depressive Disorder Using Resting-State fMRI.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_1
Dongren Yao, Jing Sui, Erkun Yang, Pew-Thian Yap, Dinggang Shen, Mingxia Liu
{"title":"Temporal-Adaptive Graph Convolutional Network for Automated Identification of Major Depressive Disorder Using Resting-State fMRI.","authors":"Dongren Yao,&nbsp;Jing Sui,&nbsp;Erkun Yang,&nbsp;Pew-Thian Yap,&nbsp;Dinggang Shen,&nbsp;Mingxia Liu","doi":"10.1007/978-3-030-59861-7_1","DOIUrl":"https://doi.org/10.1007/978-3-030-59861-7_1","url":null,"abstract":"<p><p>Extensive studies focus on analyzing human brain functional connectivity from a network perspective, in which each network contains complex graph structures. Based on resting-state functional MRI (rs-fMRI) data, graph convolutional networks (GCNs) enable comprehensive mapping of brain functional connectivity (FC) patterns to depict brain activities. However, existing studies usually characterize static properties of the FC patterns, ignoring the time-varying dynamic information. In addition, previous GCN methods generally use fixed group-level (e.g., patients or controls) representation of FC networks, and thus, cannot capture subject-level FC specificity. To this end, we propose a Temporal-Adaptive GCN (TAGCN) framework that can not only take advantage of both spatial and temporal information using resting-state FC patterns and time-series but also explicitly characterize subject-level specificity of FC patterns. Specifically, we first segment each ROI-based time-series into multiple overlapping windows, then employ an adaptive GCN to mine topological information. We further model the temporal patterns for each ROI along time to learn the periodic brain status changes. Experimental results on 533 major depressive disorder (MDD) and health control (HC) subjects demonstrate that the proposed TAGCN outperforms several state-of-the-art methods in MDD vs. HC classification, and also can be used to capture dynamic FC alterations and learn valid graph representations.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":" ","pages":"1-10"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9645786/pdf/nihms-1822329.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40687357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
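The TAGCN abstract above first segments each ROI time series into overlapping windows before graph learning. The sketch below shows a plausible version of that windowing step, producing one correlation-based functional-connectivity matrix per window; the window length, stride, and atlas size are illustrative assumptions.

```python
# Sliding-window functional connectivity: one correlation graph per time window.
import numpy as np

def windowed_connectivity(ts, win_len=30, stride=10):
    """ts: (T, R) array of T time points for R ROIs -> (n_windows, R, R) FC matrices."""
    T, _ = ts.shape
    starts = range(0, T - win_len + 1, stride)
    fc = [np.corrcoef(ts[s:s + win_len].T) for s in starts]   # Pearson FC per window
    return np.nan_to_num(np.stack(fc))                        # guard against flat signals

if __name__ == "__main__":
    rsfmri = np.random.randn(170, 116)         # e.g. 170 volumes, 116 ROIs (AAL-like atlas)
    graphs = windowed_connectivity(rsfmri)
    print(graphs.shape)                        # (15, 116, 116) with these settings
```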
Informative Feature-Guided Siamese Network for Early Diagnosis of Autism
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2020-10-01 DOI: 10.1007/978-3-030-59861-7_68
Kun Gao, Yue Sun, Sijie Niu, Li Wang
{"title":"Informative Feature-Guided Siamese Network for Early Diagnosis of Autism","authors":"Kun Gao, Yue Sun, Sijie Niu, Li Wang","doi":"10.1007/978-3-030-59861-7_68","DOIUrl":"https://doi.org/10.1007/978-3-030-59861-7_68","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"26 1","pages":"674-682"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74366783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-supervised Transfer Learning for Infant Cerebellum Tissue Segmentation.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_67
Yue Sun, Kun Gao, Sijie Niu, Weili Lin, Gang Li, Li Wang
{"title":"Semi-supervised Transfer Learning for Infant Cerebellum Tissue Segmentation.","authors":"Yue Sun, Kun Gao, Sijie Niu, Weili Lin, Gang Li, Li Wang","doi":"10.1007/978-3-030-59861-7_67","DOIUrl":"10.1007/978-3-030-59861-7_67","url":null,"abstract":"<p><p>To characterize early cerebellum development, accurate segmentation of the cerebellum into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissues is one of the most pivotal steps. However, due to the weak tissue contrast, extremely folded tiny structures, and severe partial volume effect, infant cerebellum tissue segmentation is especially challenging, and the manual labels are hard to obtain and correct for learning-based methods. To the best of our knowledge, there is no work on the cerebellum segmentation for infant subjects less than 24 months of age. In this work, we develop a semi-supervised transfer learning framework guided by a confidence map for tissue segmentation of cerebellum MR images from 24-month-old to 6-month-old infants. Note that only 24-month-old subjects have reliable manual labels for training, due to their high tissue contrast. Through the proposed semi-supervised transfer learning, the labels from 24-month-old subjects are gradually propagated to the 18-, 12-, and 6-month-old subjects, which have a low tissue contrast. Comparison with the state-of-the-art methods demonstrates the superior performance of the proposed method, especially for 6-month-old subjects.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12436 ","pages":"663-673"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7885085/pdf/nihms-1666988.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25378350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
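The abstract above propagates labels from 24-month-old subjects, which have reliable manual labels, down to 18-, 12-, and 6-month-old subjects under the guidance of a confidence map. The sketch below is a schematic of such confidence-guided pseudo-label propagation; `train`, `fine_tune`, and `predict_probs` are hypothetical stand-ins for the actual segmentation network and training loop, and the confidence threshold is an assumption.

```python
# Schematic confidence-guided pseudo-label propagation across age groups.
import numpy as np

def propagate_labels(groups, train, fine_tune, predict_probs, conf_thresh=0.9):
    """groups: dict age -> dict with 'images' (and 'labels' for the 24-month group)."""
    model = train(groups[24]["images"], groups[24]["labels"])     # supervised start
    for age in (18, 12, 6):                                       # decreasing tissue contrast
        pseudo_labels, confidence_maps = [], []
        for img in groups[age]["images"]:
            probs = predict_probs(model, img)                     # (C, ...) class probabilities
            conf = probs.max(axis=0)                              # per-voxel confidence map
            label = probs.argmax(axis=0)
            label[conf < conf_thresh] = -1                        # ignore low-confidence voxels
            pseudo_labels.append(label)
            confidence_maps.append(conf)
        # fine-tune with confidence-weighted pseudo-labels (semi-supervised step)
        model = fine_tune(model, groups[age]["images"], pseudo_labels, confidence_maps)
    return model

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    groups = {a: {"images": [rng.random((24, 24, 24))]} for a in (24, 18, 12, 6)}
    groups[24]["labels"] = [rng.integers(0, 4, (24, 24, 24))]
    dummy_train = lambda imgs, labels: "model-24m"
    dummy_tune = lambda m, imgs, pl, cm: m + "+tuned"
    dummy_probs = lambda m, img: rng.dirichlet(np.ones(4), size=(24, 24, 24)).transpose(3, 0, 1, 2)
    print(propagate_labels(groups, dummy_train, dummy_tune, dummy_probs))
```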
Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI.
Machine learning in medical imaging. MLMI (Workshop) Pub Date: 2020-10-01 Epub Date: 2020-09-29 DOI: 10.1007/978-3-030-59861-7_39
Yuchen Pei, Lisheng Wang, Fenqiang Zhao, Tao Zhong, Lufan Liao, Dinggang Shen, Gang Li
{"title":"Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI.","authors":"Yuchen Pei, Lisheng Wang, Fenqiang Zhao, Tao Zhong, Lufan Liao, Dinggang Shen, Gang Li","doi":"10.1007/978-3-030-59861-7_39","DOIUrl":"10.1007/978-3-030-59861-7_39","url":null,"abstract":"<p><p>Fetal Magnetic Resonance Imaging (MRI) is challenged by the fetal movements and maternal breathing. Although fast MRI sequences allow artifact free acquisition of individual 2D slices, motion commonly occurs in between slices acquisitions. Motion correction for each slice is thus very important for reconstruction of 3D fetal brain MRI, but is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance on prediction of 3D motion parameters of arbitrarily oriented 2D slices, which, however, does not capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework to jointly learn the transformation parameters and tissue segmentation map of each slice, for providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse to fine manner. In the coarse stage, the first network learns the features shared for both regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed based on the coarse segmentation are introduced to the second network. Finally, incorporation of the signed distance maps to guide the regression and segmentation together improves the performance in both tasks. Experimental results indicate that the proposed method achieves superior performance in reducing the motion prediction error and obtaining satisfactory tissue segmentation results simultaneously, compared with state-of-the-art methods.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12436 ","pages":"384-393"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7912521/pdf/nihms-1666981.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"25414975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
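The abstract above jointly regresses slice transformation parameters and predicts a tissue segmentation, and feeds signed distance maps built from the coarse segmentation into the refinement stage. The sketch below is a hedged illustration of that multi-task objective and of one way to build a signed distance map; the loss weights and the exact distance-map construction are assumptions, not the authors' recipe.

```python
# Multi-task loss (rigid-parameter regression + tissue segmentation) and a
# signed distance map derived from a coarse segmentation mask.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def signed_distance_map(binary_mask):
    """Positive outside the structure, negative inside; built from a coarse mask."""
    inside = distance_transform_edt(binary_mask)
    outside = distance_transform_edt(1 - binary_mask)
    return (outside - inside).astype(np.float32)

def multitask_loss(pred_params, gt_params, seg_logits, gt_seg, w_seg=1.0):
    """pred_params/gt_params: (B, 6) rigid parameters (3 rotations + 3 translations);
    seg_logits: (B, C, H, W); gt_seg: (B, H, W) integer tissue labels."""
    reg_loss = F.mse_loss(pred_params, gt_params)          # slice-to-volume transform regression
    seg_loss = F.cross_entropy(seg_logits, gt_seg)         # tissue segmentation
    return reg_loss + w_seg * seg_loss

if __name__ == "__main__":
    coarse = np.zeros((64, 64), dtype=np.uint8)
    coarse[20:40, 20:40] = 1
    sdm = signed_distance_map(coarse)                       # extra input channel for refinement
    loss = multitask_loss(torch.randn(2, 6), torch.randn(2, 6),
                          torch.randn(2, 4, 64, 64), torch.randint(0, 4, (2, 64, 64)))
    print(sdm.shape, float(loss))
```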