Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

PoseSonic
Saif Mahmud, Ke Li, Guilin Hu, Hao Chen, Richard Jin, Ruidong Zhang, François Guimbretière, Cheng Zhang
{"title":"PoseSonic","authors":"Saif Mahmud, Ke Li, Guilin Hu, Hao Chen, Richard Jin, Ruidong Zhang, François Guimbretière, Cheng Zhang","doi":"10.1145/3610895","DOIUrl":"https://doi.org/10.1145/3610895","url":null,"abstract":"In this paper, we introduce PoseSonic, an intelligent acoustic sensing solution for smartglasses that estimates upper body poses. Our system only requires two pairs of microphones and speakers on the hinges of the eyeglasses to emit FMCW-encoded inaudible acoustic signals and receive reflected signals for body pose estimation. Using a customized deep learning model, PoseSonic estimates the 3D positions of 9 body joints including the shoulders, elbows, wrists, hips, and nose. We adopt a cross-modal supervision strategy to train our model using synchronized RGB video frames as ground truth. We conducted in-lab and semi-in-the-wild user studies with 22 participants to evaluate PoseSonic, and our user-independent model achieved a mean per joint position error of 6.17 cm in the lab setting and 14.12 cm in semi-in-the-wild setting when predicting the 9 body joint positions in 3D. Our further studies show that the performance was not significantly impacted by different surroundings or when the devices were remounted or by real-world environmental noise. Finally, we discuss the opportunities, challenges, and limitations of deploying PoseSonic in real-world applications.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
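The core sensing primitive here is FMCW echo profiling: emit an inaudible linear chirp and cross-correlate the microphone signal with it, so that reflection distance appears as correlation lag. Below is a minimal Python sketch of that primitive; the sample rate, sweep band, and chirp duration are illustrative assumptions rather than the paper's parameters, and the full system feeds such profiles into a deep model instead of reading peaks directly.

```python
import numpy as np

# Illustrative chirp parameters (assumed, not the paper's values).
FS = 50_000              # sample rate (Hz)
F0, F1 = 18_000, 21_000  # inaudible linear sweep band (Hz)
DUR = 0.012              # chirp duration (s)

t = np.arange(int(FS * DUR)) / FS
# Linear up-chirp: phase is 2*pi times the integral of the
# instantaneous frequency F0 + (F1 - F0) * t / DUR.
tx_chirp = np.cos(2 * np.pi * (F0 * t + (F1 - F0) / (2 * DUR) * t**2))

def echo_profile(rx: np.ndarray) -> np.ndarray:
    """Cross-correlate a received audio frame with the transmitted chirp.

    Peaks in the profile mark reflections; a peak at lag k corresponds
    to a reflector roughly k * 343.0 / (2 * FS) meters away.
    """
    return np.abs(np.correlate(rx, tx_chirp, mode="valid"))
```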
InfoPrint
Weiwei Jiang, Chaofan Wang, Zhanna Sarsenbayeva, Andrew Irlitti, Jing Wei, Jarrod Knibbe, Tilman Dingler, Jorge Goncalves, Vassilis Kostakos
{"title":"InfoPrint","authors":"Weiwei Jiang, Chaofan Wang, Zhanna Sarsenbayeva, Andrew Irlitti, Jing Wei, Jarrod Knibbe, Tilman Dingler, Jorge Goncalves, Vassilis Kostakos","doi":"10.1145/3610933","DOIUrl":"https://doi.org/10.1145/3610933","url":null,"abstract":"We present a fully-printable method to embed interactive information inside 3D printed objects. The information is invisible to the human eye and can be read using thermal imaging after temperature transfer through interaction with the objects. Prior methods either modify the surface appearance, require customized devices or not commonly used materials, or embed components that are not fully 3D printable. Such limitations restrict the design space for 3D prints, or cannot be readily applied to the already deployed 3D printing setups. In this paper, we present an information embedding technique using low-cost off-the-shelf dual extruder FDM (Fused Deposition Modeling) 3D printers, common materials (e.g., generic PLA), and a mobile thermal device (e.g., a thermal smartphone), by leveraging the thermal properties of common 3D print materials. In addition, we show our method can also be generalized to conventional near-infrared imaging scenarios. We evaluate our technique against multiple design and fabrication parameters and propose a design guideline for different use cases. Finally, we demonstrate various everyday applications enabled by our method, such as interactive thermal displays, user-activated augmented reality, automating thermal triggered events, and hidden tokens for social activities.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
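Reading an embedded tag amounts to block-averaging a thermal frame and thresholding which regions retain more heat. The sketch below illustrates that decoding idea under the assumption of a square grid tag; the grid layout, cell size, and global-mean threshold are our simplifications, not the paper's design.

```python
import numpy as np

def recover_tag(thermal: np.ndarray, cell_px: int = 16) -> np.ndarray:
    """Toy decoder for a thermal frame of a print after heat transfer.

    Assumes (illustration only) a square grid tag whose cells use
    different infill, so they cool at different rates and read as
    hotter/cooler blocks in the thermal image.
    """
    lo, hi = float(thermal.min()), float(thermal.max())
    norm = (thermal - lo) / (hi - lo + 1e-9)          # normalize to [0, 1]
    rows = thermal.shape[0] // cell_px
    cols = thermal.shape[1] // cell_px
    cells = norm[:rows * cell_px, :cols * cell_px]
    # Average temperature per cell via block reshaping.
    means = cells.reshape(rows, cell_px, cols, cell_px).mean(axis=(1, 3))
    return (means > means.mean()).astype(np.uint8)    # 1 = hotter cell
```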
AttFL
JaeYeon Park, Kichang Lee, Sungmin Lee, Mi Zhang, JeongGil Ko
{"title":"AttFL","authors":"JaeYeon Park, Kichang Lee, Sungmin Lee, Mi Zhang, JeongGil Ko","doi":"10.1145/3610917","DOIUrl":"https://doi.org/10.1145/3610917","url":null,"abstract":"This work presents AttFL, a federated learning framework designed to continuously improve a personalized deep neural network for efficiently analyzing time-series data generated from mobile and embedded sensing applications. To better characterize time-series data features and efficiently abstract model parameters, AttFL appends a set of attention modules to the baseline deep learning model and exchanges their feature map information to gather collective knowledge across distributed local devices at the server. The server groups devices with similar contextual goals using cosine similarity, and redistributes updated model parameters for improved inference performance at each local device. Specifically, unlike previously proposed federated learning frameworks, AttFL is designed specifically to perform well for various recurrent neural network (RNN) baseline models, making it suitable for many mobile and embedded sensing applications producing time-series sensing data. We evaluate the performance of AttFL and compare with five state-of-the-art federated learning frameworks using three popular mobile/embedded sensing applications (e.g., physiological signal analysis, human activity recognition, and audio processing). Our results obtained from CPU core-based emulations and a 12-node embedded platform testbed shows that AttFL outperforms all alternative approaches in terms of model accuracy and communication/computational overhead, and is flexible enough to be applied in various application scenarios exploiting different baseline deep learning model architectures.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
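The server-side grouping step can be pictured as clustering clients by the cosine similarity of their exchanged attention feature maps. The following sketch shows a greedy version of that idea; the flattened feature vectors, greedy assignment, and 0.8 threshold are assumptions for illustration, not AttFL's actual procedure.

```python
import numpy as np

def group_clients(feats: dict, thresh: float = 0.8) -> list:
    """Greedily group clients whose attention feature maps are similar.

    `feats` maps client id -> attention feature map (any shape); each
    map is flattened and L2-normalized so the dot product is cosine
    similarity. Threshold and greedy order are illustrative choices.
    """
    ids = list(feats)
    vecs = np.stack([feats[i].ravel() for i in ids]).astype(float)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-9
    sim = vecs @ vecs.T                   # pairwise cosine similarity
    groups, assigned = [], set()
    for a, cid in enumerate(ids):
        if cid in assigned:
            continue
        members = [ids[b] for b in range(len(ids))
                   if ids[b] not in assigned and sim[a, b] >= thresh]
        assigned.update(members)
        groups.append(members)
    return groups
```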
Cross-technology Communication between Visible Light and Battery-free RFIDs
Ge Wang, Lubing Han, Yuance Chang, Yuting Shi, Chen Qian, Cong Zhao, Han Ding, Wei Xi, Cui Zhao, Jizhong Zhao
{"title":"Cross-technology Communication between Visible Light and Battery-free RFIDs","authors":"Ge Wang, Lubing Han, Yuance Chang, Yuting Shi, Chen Qian, Cong Zhao, Han Ding, Wei Xi, Cui Zhao, Jizhong Zhao","doi":"10.1145/3610883","DOIUrl":"https://doi.org/10.1145/3610883","url":null,"abstract":"The ubiquity of illumination facilities enables the versatile development of Visible Light Communication (VLC). VLC-based research achieved high-speed wireless access and decimeter-level indoor localization with complex equipment. However, it is still unclear whether the VLC is applicable for widely-used battery-free Internet-of-Things nodes, e.g., passive RFIDs. This paper proposes LightSign, the first cross-technology system that enables passive RFID tags to receive visible light messages. LightSign is compatible with commercial protocols, transparent to routine RFID communications, and invisible to human eyes. We propose a pseudo-timing instruction to achieve microsecond-level light switching to modulate the VLC message. To make it perceptible to passive RFIDs, we design an augmented RFID tag and prove its effectiveness theoretically and experimentally. With only one reply from an augmented tag, LightSign can decode 100-bit-long VLC messages. We evaluate LightSign in real industry environments and test its performance with two use cases. The results show that LightSign achieves up to 99.2% decoding accuracy in varying scenarios.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
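At the transmitter side, the message must become a precisely timed sequence of light states. As a rough illustration, the sketch below turns a bit string into an on-off-keying schedule that a light driver could execute; the 50 µs bit duration and plain OOK are placeholders for the paper's microsecond-level pseudo-timing instruction and actual modulation scheme.

```python
# Illustrative on-off keying only; LightSign's pseudo-timing instruction
# and its real modulation/coding are more sophisticated than this sketch.
BIT_US = 50  # assumed per-bit light-state duration in microseconds

def ook_schedule(bits: str) -> list:
    """Turn a bit string into (light_on, start_us, end_us) commands."""
    schedule, t = [], 0
    for b in bits:
        schedule.append((b == "1", t, t + BIT_US))
        t += BIT_US
    return schedule

# Example: schedule an 8-bit pattern for the light driver.
print(ook_schedule("10110010")[:3])
# [(True, 0, 50), (False, 50, 100), (True, 100, 150)]
```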
MI-Poser
Riku Arakawa, Bing Zhou, Gurunandan Krishnan, Mayank Goel, Shree K. Nayar
{"title":"MI-Poser","authors":"Riku Arakawa, Bing Zhou, Gurunandan Krishnan, Mayank Goel, Shree K. Nayar","doi":"10.1145/3610891","DOIUrl":"https://doi.org/10.1145/3610891","url":null,"abstract":"Inside-out tracking of human body poses using wearable sensors holds significant potential for AR/VR applications, such as remote communication through 3D avatars with expressive body language. Current inside-out systems often rely on vision-based methods utilizing handheld controllers or incorporating densely distributed body-worn IMU sensors. The former limits hands-free and occlusion-robust interactions, while the latter is plagued by inadequate accuracy and jittering. We introduce a novel body tracking system, MI-Poser, which employs AR glasses and two wrist-worn electromagnetic field (EMF) sensors to achieve high-fidelity upper-body pose estimation while mitigating metal interference. Our lightweight system demonstrates a minimal error (6.6 cm mean joint position error) with real-world data collected from 10 participants. It remains robust against various upper-body movements and operates efficiently at 60 Hz. Furthermore, by incorporating an IMU sensor co-located with the EMF sensor, MI-Poser presents solutions to counteract the effects of metal interference, which inherently disrupts the EMF signal during tracking. Our evaluation effectively showcases the successful detection and correction of interference using our EMF-IMU fusion approach across environments with diverse metal profiles. Ultimately, MI-Poser offers a practical pose tracking system, particularly suited for body-centric AR applications.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
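One way to picture the EMF-IMU fusion is as a disagreement check: if the orientation implied by the EMF sensor drifts away from what the co-located IMU reports, metal interference is likely. The sketch below computes that disagreement angle from unit quaternions; the quaternion inputs and any thresholding policy are our assumptions, not the paper's correction algorithm.

```python
import numpy as np

def interference_angle(q_emf: np.ndarray, q_imu: np.ndarray) -> float:
    """Angle (radians) between the orientations reported by the
    co-located EMF and IMU sensors, given as unit quaternions.

    For unit quaternions q1, q2 the rotation angle between them is
    2 * arccos(|q1 . q2|). A large angle suggests metal is distorting
    the EMF field, so the tracker should discount the EMF reading.
    """
    d = abs(float(np.dot(q_emf, q_imu)))   # |cos(theta / 2)|
    return 2.0 * np.arccos(np.clip(d, 0.0, 1.0))
```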
SignRing
Jiyang Li, Lin Huang, Siddharth Shah, Sean J. Jones, Yincheng Jin, Dingran Wang, Adam Russell, Seokmin Choi, Yang Gao, Junsong Yuan, Zhanpeng Jin
{"title":"SignRing","authors":"Jiyang Li, Lin Huang, Siddharth Shah, Sean J. Jones, Yincheng Jin, Dingran Wang, Adam Russell, Seokmin Choi, Yang Gao, Junsong Yuan, Zhanpeng Jin","doi":"10.1145/3610881","DOIUrl":"https://doi.org/10.1145/3610881","url":null,"abstract":"Sign language is a natural language widely used by Deaf and hard of hearing (DHH) individuals. Advanced wearables are developed to recognize sign language automatically. However, they are limited by the lack of labeled data, which leads to a small vocabulary and unsatisfactory performance even though laborious efforts are put into data collection. Here we propose SignRing, an IMU-based system that breaks through the traditional data augmentation method, makes use of online videos to generate the virtual IMU (v-IMU) data, and pushes the boundary of wearable-based systems by reaching the vocabulary size of 934 with sentences up to 16 glosses. The v-IMU data is generated by reconstructing 3D hand movements from two-view videos and calculating 3-axis acceleration data, by which we are able to achieve a word error rate (WER) of 6.3% with a mix of half v-IMU and half IMU training data (2339 samples for each), and a WER of 14.7% with 100% v-IMU training data (6048 samples), compared with the baseline performance of the 8.3% WER (trained with 2339 samples of IMU data). We have conducted comparisons between v-IMU and IMU data to demonstrate the reliability and generalizability of the v-IMU data. This interdisciplinary work covers various areas such as wearable sensor development, computer vision techniques, deep learning, and linguistics, which can provide valuable insights to researchers with similar research objectives.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
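The v-IMU idea reduces to differentiating reconstructed 3D hand trajectories twice with respect to time, since acceleration is the second derivative of position. A minimal sketch follows; gravity compensation and rotation into the ring's body frame, which a faithful v-IMU pipeline would also need, are omitted here as simplifications.

```python
import numpy as np

def virtual_imu(pos: np.ndarray, fps: float) -> np.ndarray:
    """Derive virtual 3-axis accelerometer samples from reconstructed
    3D hand positions.

    `pos` has shape (T, 3): one xyz position per video frame at `fps`
    frames per second. Returns (T, 3) accelerations in m/s^2.
    """
    dt = 1.0 / fps
    vel = np.gradient(pos, dt, axis=0)   # first derivative: velocity
    return np.gradient(vel, dt, axis=0)  # second derivative: acceleration
```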
PulmoListener
Sejal Bhalla, Salaar Liaqat, Robert Wu, Andrea S. Gershon, Eyal de Lara, Alex Mariakakis
{"title":"PulmoListener","authors":"Sejal Bhalla, Salaar Liaqat, Robert Wu, Andrea S. Gershon, Eyal de Lara, Alex Mariakakis","doi":"10.1145/3610889","DOIUrl":"https://doi.org/10.1145/3610889","url":null,"abstract":"Prior work has shown the utility of acoustic analysis in controlled settings for assessing chronic obstructive pulmonary disease (COPD) --- one of the most common respiratory diseases that impacts millions of people worldwide. However, such assessments require active user input and may not represent the true characteristics of a patient's voice. We propose PulmoListener, an end-to-end speech processing pipeline that identifies segments of the patient's speech from smartwatch audio collected during daily living and analyzes them to classify COPD symptom severity. To evaluate our approach, we conducted a study with 8 COPD patients over 164 ± 92 days on average. We found that PulmoListener achieved an average sensitivity of 0.79 ± 0.03 and a specificity of 0.83 ± 0.05 per patient when classifying their symptom severity on the same day. PulmoListener can also predict the severity level up to 4 days in advance with an average sensitivity of 0.75 ± 0.02 and a specificity of 0.74 ± 0.07. The results of our study demonstrate the feasibility of leveraging natural speech for monitoring COPD in real-world settings, offering a promising solution for disease management and even diagnosis.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
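The reported numbers are per-patient sensitivity and specificity, which follow directly from the confusion-matrix definitions. A small helper for computing them is below; taking label 1 to mean a severe-symptom day is our assumption about the label encoding.

```python
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray):
    """Compute sensitivity = TP / (TP + FN) and
    specificity = TN / (TN + FP) for binary day-level labels,
    where 1 marks a severe-symptom day (assumed encoding).
    """
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    return tp / (tp + fn), tn / (tn + fp)

# Example: 5 days of ground truth vs. predictions for one patient.
print(sensitivity_specificity(np.array([1, 0, 1, 0, 0]),
                              np.array([1, 0, 0, 0, 1])))
```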
Fast Radio Map Construction with Domain Disentangled Learning for Wireless Localization
Weina Jiang, Lin Shi, Qun Niu, Ning Liu
{"title":"Fast Radio Map Construction with Domain Disentangled Learning for Wireless Localization","authors":"Weina Jiang, Lin Shi, Qun Niu, Ning Liu","doi":"10.1145/3610922","DOIUrl":"https://doi.org/10.1145/3610922","url":null,"abstract":"The accuracy of wireless fingerprint-based indoor localization largely depends on the precision and density of radio maps. Although many research efforts have been devoted to incremental updating of radio maps, few consider the laborious initial construction of a new site. In this work, we propose an accurate and generalizable framework for efficient radio map construction, which takes advantage of readily-available fine-grained radio maps and constructs fine-grained radio maps of a new site with a small proportion of measurements in it. Specifically, we regard radio maps as domains and propose a Radio Map construction approach based on Domain Adaptation (RMDA). We first employ the domain disentanglement feature extractor to learn domain-invariant features for aligning the source domains (available radio maps) with the target domain (initial radio map) in the domain-invariant latent space. Furthermore, we propose a dynamic weighting strategy, which learns the relevancy of the source and target domain in the domain adaptation. Then, we extract the domain-specific features based on the site's floorplan and use them to constrain the super-resolution of the domain-invariant features. Experimental results demonstrate that RMDA constructs a fine-grained initial radio map of a target site efficiently with a limited number of measurements. Meanwhile, the localization accuracy of the refined radio map with RMDA significantly improved by about 41.35% after construction and is comparable with the dense surveyed radio map (the reduction is less than 8%).","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
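The dynamic weighting strategy can be approximated as a softmax over similarities between each source site's domain-invariant features and the target site's. The sketch below is a static stand-in for that learned weighting; the cosine similarity measure and the temperature value are assumptions, not RMDA's training procedure.

```python
import numpy as np

def source_weights(src_feats: list, tgt_feat: np.ndarray,
                   temp: float = 0.1) -> np.ndarray:
    """Softmax relevancy weights over source radio maps.

    Each source site contributes a domain-invariant feature vector;
    sites more similar to the target get larger weights. The
    temperature `temp` sharpens or flattens the distribution.
    """
    t = tgt_feat / (np.linalg.norm(tgt_feat) + 1e-9)
    sims = np.array([(s / (np.linalg.norm(s) + 1e-9)) @ t
                     for s in src_feats])       # cosine similarities
    e = np.exp((sims - sims.max()) / temp)      # stable softmax
    return e / e.sum()
```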
Interaction Harvesting
John Mamish, Amy Guo, Thomas Cohen, Julian Richey, Yang Zhang, Josiah Hester
{"title":"Interaction Harvesting","authors":"John Mamish, Amy Guo, Thomas Cohen, Julian Richey, Yang Zhang, Josiah Hester","doi":"10.1145/3610880","DOIUrl":"https://doi.org/10.1145/3610880","url":null,"abstract":"Whenever a user interacts with a device, mechanical work is performed to actuate the user interface elements; the resulting energy is typically wasted, dissipated as sound and heat. Previous work has shown that many devices can be powered entirely from this otherwise wasted user interface energy. For these devices, wires and batteries, along with the related hassles of replacement and charging, become unnecessary and onerous. So far, these works have been restricted to proof-of-concept demonstrations; a specific bespoke harvesting and sensing circuit is constructed for the application at hand. The challenge of harvesting energy while simultaneously sensing fine-grained input signals from diverse modalities makes prototyping new devices difficult. To fill this gap, we present a hardware toolkit which provides a common electrical interface for harvesting energy from user interface elements. This facilitates exploring the composability, utility, and breadth of enabled applications of interaction-powered smart devices. We design a set of \"energy as input\" harvesting circuits, a standard connective interface with 3D printed enclosures, and software libraries to enable the exploration of devices where the user action generates the energy needed to perform the device's primary function. This exploration culminated in a demonstration campaign where we prototype several exemplar popular toys and gadgets, including battery-free Bop-It--- a popular 90s rhythm game, an electronic Etch-a-sketch, a \"Simon-Says\"-style memory game, and a service rating device. We run exploratory user studies to understand how generativity, creativity, and composability are hampered or facilitated by these devices. These demonstrations, user study takeaways, and the toolkit itself provide a foundation for building interactive and user-focused gadgets whose usability is not affected by battery charge and whose service lifetime is not limited by battery wear.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
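An "energy as input" circuit makes the interaction event and its power supply the same signal: each user actuation charges a storage element, and crossing a voltage threshold both wakes the device and marks one input. The sketch below detects such events in a sampled capacitor-voltage trace; the capacitor model and the 1.8 V threshold are illustrative assumptions, not the toolkit's specification.

```python
import numpy as np

def detect_presses(v_cap: np.ndarray, fs: float,
                   v_th: float = 1.8) -> np.ndarray:
    """Find interaction events in a harvested-energy voltage trace.

    `v_cap` holds storage-capacitor voltage samples at rate `fs` (Hz);
    each rising edge through `v_th` is counted as one user actuation.
    Returns event timestamps in seconds.
    """
    above = v_cap >= v_th
    # Rising edges: sample is above threshold, previous sample was not.
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return edges / fs
```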
I Know Your Intent
Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Zixuan Weng, Ruoyu Li, Yong Jiang
{"title":"I Know Your Intent","authors":"Jingyu Xiao, Qingsong Zou, Qing Li, Dan Zhao, Kang Li, Zixuan Weng, Ruoyu Li, Yong Jiang","doi":"10.1145/3610906","DOIUrl":"https://doi.org/10.1145/3610906","url":null,"abstract":"With the booming of smart home market, intelligent Internet of Things (IoT) devices have been increasingly involved in home life. To improve the user experience of smart homes, some prior works have explored how to use machine learning for predicting interactions between users and devices. However, the existing solutions have inferior User Device Interaction (UDI) prediction accuracy, as they ignore three key factors: routine, intent and multi-level periodicity of human behaviors. In this paper, we present SmartUDI, a novel accurate UDI prediction approach for smart homes. First, we propose a Message-Passing-based Routine Extraction (MPRE) algorithm to mine routine behaviors, then the contrastive loss is applied to narrow representations among behaviors from the same routines and alienate representations among behaviors from different routines. Second, we propose an Intent-aware Capsule Graph Attention Network (ICGAT) to encode multiple intents of users while considering complex transitions between different behaviors. Third, we design a Cluster-based Historical Attention Mechanism (CHAM) to capture the multi-level periodicity by aggregating the current sequence and the semantically nearest historical sequence representations through the attention mechanism. SmartUDI can be seamlessly deployed on cloud infrastructures of IoT device vendors and edge nodes, enabling the delivery of personalized device service recommendations to users. Comprehensive experiments on four real-world datasets show that SmartUDI consistently outperforms the state-of-the-art baselines with more accurate and highly interpretable results.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
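CHAM's aggregation step is essentially attention over historical sequence representations, weighted by similarity to the current sequence. A single-head dot-product sketch follows; the clustering that first selects semantically near sequences is omitted, and scaled dot-product scoring is our assumption rather than the paper's exact formulation.

```python
import numpy as np

def historical_attention(current: np.ndarray,
                         history: list) -> np.ndarray:
    """Attention-weighted aggregate of historical sequence embeddings.

    `current` is the current sequence representation (d,); `history`
    is a list of past representations, each (d,). Scores are scaled
    dot products, normalized with a softmax.
    """
    h = np.stack(history)                         # (N, d)
    scores = h @ current / np.sqrt(current.size)  # (N,) scaled scores
    w = np.exp(scores - scores.max())             # stable softmax
    w /= w.sum()
    return w @ h                                  # (d,) context vector
```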