Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies: Latest Publications

Contact Tracing for Healthcare Workers in an Intensive Care Unit
Jingwen Zhang, Ruixuan Dai, Ashraf Rjob, Ruiqi Wang, Reshad Hamauon, Jeffrey Candell, Thomas Bailey, Victoria J. Fraser, Maria Cristina Vazquez Guillamet, Chenyang Lu
{"title":"Contact Tracing for Healthcare Workers in an Intensive Care Unit","authors":"Jingwen Zhang, Ruixuan Dai, Ashraf Rjob, Ruiqi Wang, Reshad Hamauon, Jeffrey Candell, Thomas Bailey, Victoria J. Fraser, Maria Cristina Vazquez Guillamet, Chenyang Lu","doi":"10.1145/3610924","DOIUrl":"https://doi.org/10.1145/3610924","url":null,"abstract":"Contact tracing is a powerful tool for mitigating the spread of COVID-19 during the pandemic. Front-line healthcare workers are particularly at high risk of infection in hospital units. This paper presents ContAct TraCing for Hospitals (CATCH), an automated contact tracing system designed specifically for healthcare workers in hospital environments. CATCH employs distributed embedded devices placed throughout a hospital unit to detect close contacts among healthcare workers wearing Bluetooth Low Energy (BLE) beacons. We first identify a set of distinct contact tracing scenarios based on the diverse environmental characteristics of a real-world intensive care unit (ICU) and the different working patterns of healthcare workers in different spaces within the unit. We then develop a suite of novel contact tracing methods tailored for each scenario. CATCH has been deployed and evaluated in the ICU of a major medical center, demonstrating superior accuracy in contact tracing over existing approaches through a wide range of experiments. Furthermore, the real-world case study highlights the effectiveness and efficiency of CATCH compared to standard contact tracing practices.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Meta-Synthesis of the Barriers and Facilitators for Personal Informatics Systems
Kazi Sinthia Kabir, Jason Wiese
{"title":"A Meta-Synthesis of the Barriers and Facilitators for Personal Informatics Systems","authors":"Kazi Sinthia Kabir, Jason Wiese","doi":"10.1145/3610893","DOIUrl":"https://doi.org/10.1145/3610893","url":null,"abstract":"Personal informatics (PI) systems are designed for diverse users in the real world. Even when these systems are usable, people encounter barriers while engaging with them in ways designers cannot anticipate, which impacts the system's effectiveness. Although PI literature extensively reports such barriers, the volume of this information can be overwhelming. Researchers and practitioners often find themselves repeatedly addressing the same challenges since sifting through this enormous volume of knowledge looking for relevant insights is often infeasible. We contribute to alleviating this issue by conducting a meta-synthesis of the PI literature and categorizing people's barriers and facilitators to engagement with PI systems into eight themes. Based on the synthesized knowledge, we discuss specific generalizable barriers and paths for further investigations. This synthesis can serve as an index to identify barriers pertinent to each application domain and possibly to identify barriers from one domain that might apply to a different domain. Finally, to ensure the sustainability of the syntheses, we propose a Design Statements (DS) block for research articles.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale
Steeven Villa, Jasmin Niess, Albrecht Schmidt, Robin Welsch
{"title":"Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale","authors":"Steeven Villa, Jasmin Niess, Albrecht Schmidt, Robin Welsch","doi":"10.1145/3610915","DOIUrl":"https://doi.org/10.1145/3610915","url":null,"abstract":"Human augmentation technologies (ATs) are a subset of ubiquitous on-body devices designed to improve cognitive, sensory, and motor capacities. Although there is a large corpus of knowledge concerning ATs, less is known about societal attitudes towards them and how they shift over time. To that end, we developed The Society's Attitudes Towards Human Augmentation and Performance Enhancement Technologies (SHAPE) Scale, which measures how users of ATs are perceived. To develop the scale, we first created a list of possible scale items based on past work on how people respond to new technologies. The items were then reviewed by experts. Next, we performed exploratory factor analysis to reduce the scale to its final length of thirteen items. Subsequently, we confirmed test-retest validity of our instrument, as well as its construct validity. The SHAPE scale enables researchers and practitioners to understand elements contributing to attitudes toward augmentation technology users. The SHAPE scale assists designers of ATs in designing artifacts that will be more universally accepted.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos
Daniel Medeiros, Romane Dubus, Julie Williamson, Graham Wilson, Katharina Pöhlmann, Mark McGill
{"title":"Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos","authors":"Daniel Medeiros, Romane Dubus, Julie Williamson, Graham Wilson, Katharina Pöhlmann, Mark McGill","doi":"10.1145/3610923","DOIUrl":"https://doi.org/10.1145/3610923","url":null,"abstract":"Augmented Reality (AR) headsets could significantly improve the passenger experience, freeing users from the restrictions of physical smartphones, tablets and seatback displays. However, the confined space of public transport and the varying proximity to other passengers may restrict what interaction techniques are deemed socially acceptable for AR users - particularly considering current reliance on mid-air interactions in consumer headsets. We contribute and utilize a novel approach to social acceptability video surveys, employing mixed reality composited videos to present a real user performing interactions across different virtual transport environments. This approach allows for controlled evaluation of perceived social acceptability whilst freeing researchers to present interactions in any simulated context. Our resulting survey (N=131) explores the social comfort of body, device, and environment-based interactions across seven transit seating arrangements. We reflect on the advantages of discreet inputs over mid-air and the unique challenges of face-to-face seating for passenger AR.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
What and When to Explain?
Gwangbin Kim, Dohyeon Yeo, Taewoo Jo, Daniela Rus, SeungJun Kim
{"title":"What and When to Explain?","authors":"Gwangbin Kim, Dohyeon Yeo, Taewoo Jo, Daniela Rus, SeungJun Kim","doi":"10.1145/3610886","DOIUrl":"https://doi.org/10.1145/3610886","url":null,"abstract":"Explanations in automated vehicles help passengers understand the vehicle's state and capabilities, leading to increased trust in the technology. Specifically, for passengers of SAE Level 4 and 5 vehicles who are not engaged in the driving process, the enhanced sense of control provided by explanations reduces potential anxieties, enabling them to fully leverage the benefits of automation. To construct explanations that enhance trust and situational awareness without disturbing passengers, we suggest testing with people who ultimately employ such explanations, ideally under real-world driving conditions. In this study, we examined the impact of various visual explanation types (perception, attention, perception+attention) and timing mechanisms (constantly provided or only under risky scenarios) on passenger experience under naturalistic driving scenarios using actual vehicles with mixed-reality support. Our findings indicate that visualizing the vehicle's perception state improves the perceived usability, trust, safety, and situational awareness without adding cognitive burden, even without explaining the underlying causes. We also demonstrate that the traffic risk probability could be used to control the timing of an explanation delivery, particularly when passengers are overwhelmed with information. Our study's on-road evaluation method offers a safe and reliable testing environment and can be easily customized for other AI models and explanation modalities.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GlassMessaging
Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, Yanch Ong
{"title":"GlassMessaging","authors":"Nuwan Janaka, Jie Gao, Lin Zhu, Shengdong Zhao, Lan Lyu, Peisen Xu, Maximilian Nabokow, Silang Wang, Yanch Ong","doi":"10.1145/3610931","DOIUrl":"https://doi.org/10.1145/3610931","url":null,"abstract":"Communicating with others while engaging in simple daily activities is both common and natural for people. However, due to the hands- and eyes-busy nature of existing digital messaging applications, it is challenging to message someone while performing simple daily activities. We present GlassMessaging, a messaging application on Optical See-Through Head-Mounted Displays (OHMDs), to support messaging with voice and manual inputs in hands- and eyes-busy scenarios. GlassMessaging is iteratively developed through a formative study identifying current messaging behaviors and challenges in common multitasking with messaging scenarios. We then evaluated this application against the mobile phone platform on varying texting complexities in eating and walking scenarios. Our results showed that, compared to phone-based messaging, GlassMessaging increased messaging opportunities during multitasking due to its hands-free, wearable nature, and multimodal input capabilities. The affordance of GlassMessaging also allows users easier access to voice input than the phone, which thus reduces the response time by 33.1% and increases the texting speed by 40.3%, with a cost in texting accuracy of 2.5%, particularly when the texting complexity increases. Lastly, we discuss trade-offs and insights to lay a foundation for future OHMD-based messaging applications.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135471528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
ProxiFit
Jiha Kim, Younho Nam, Jungeun Lee, Young-Joo Suh, Inseok Hwang
{"title":"ProxiFit","authors":"Jiha Kim, Younho Nam, Jungeun Lee, Young-Joo Suh, Inseok Hwang","doi":"10.1145/3610920","DOIUrl":"https://doi.org/10.1145/3610920","url":null,"abstract":"Although many works bring exercise monitoring to smartphone and smartwatch, inertial sensors used in such systems require device to be in motion to detect exercises. We introduce ProxiFit, a highly practical on-device exercise monitoring system capable of classifying and counting exercises even if the device stays still. Utilizing novel proximity sensing of natural magnetism in exercise equipment, ProxiFit brings (1) a new category of exercise not involving device motion such as lower-body machine exercise, and (2) a new off-body exercise monitoring mode where a smartphone can be conveniently viewed in front of the user during workouts. ProxiFit addresses common issues of faint magnetic sensing by choosing appropriate preprocessing, negating adversarial motion artifacts, and designing a lightweight yet noise-tolerant classifier. Also, application-specific challenges such as a wide variety of equipment and the impracticality of obtaining large datasets are overcome by devising a unique yet challenging training policy. We evaluate ProxiFit on up to 10 weight machines (5 lower- and 5 upper-body) and 4 free-weight exercises, on both wearable and signage mode, with 19 users, at 3 gyms, over 14 months, and verify robustness against user and weather variations, spatial and rotational device location deviations, and neighboring machine interference.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Detecting Social Contexts from Mobile Sensing Indicators in Virtual Interactions with Socially Anxious Individuals
Zhiyuan Wang, Maria A. Larrazabal, Mark Rucker, Emma R. Toner, Katharine E. Daniel, Shashwat Kumar, Mehdi Boukhechba, Bethany A. Teachman, Laura E. Barnes
{"title":"Detecting Social Contexts from Mobile Sensing Indicators in Virtual Interactions with Socially Anxious Individuals","authors":"Zhiyuan Wang, Maria A. Larrazabal, Mark Rucker, Emma R. Toner, Katharine E. Daniel, Shashwat Kumar, Mehdi Boukhechba, Bethany A. Teachman, Laura E. Barnes","doi":"10.1145/3610916","DOIUrl":"https://doi.org/10.1145/3610916","url":null,"abstract":"Mobile sensing is a ubiquitous and useful tool to make inferences about individuals' mental health based on physiology and behavior patterns. Along with sensing features directly associated with mental health, it can be valuable to detect different features of social contexts to learn about social interaction patterns over time and across different environments. This can provide insight into diverse communities' academic, work and social lives, and their social networks. We posit that passively detecting social contexts can be particularly useful for social anxiety research, as it may ultimately help identify changes in social anxiety status and patterns of social avoidance and withdrawal. To this end, we recruited a sample of highly socially anxious undergraduate students (N=46) to examine whether we could detect the presence of experimentally manipulated virtual social contexts via wristband sensors. Using a multitask machine learning pipeline, we leveraged passively sensed biobehavioral streams to detect contexts relevant to social anxiety, including (1) whether people were in a social situation, (2) size of the social group, (3) degree of social evaluation, and (4) phase of social situation (anticipating, actively experiencing, or had just participated in an experience). Results demonstrated the feasibility of detecting most virtual social contexts, with stronger predictive accuracy when detecting whether individuals were in a social situation or not and the phase of the situation, and weaker predictive accuracy when detecting the level of social evaluation. They also indicated that sensing streams are differentially important to prediction based on the context being predicted. Our findings also provide useful information regarding design elements relevant to passive context detection, including optimal sensing duration, the utility of different sensing modalities, and the need for personalization. We discuss implications of these findings for future work on context detection (e.g., just-in-time adaptive intervention development).","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135535744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting Symptom Improvement During Depression Treatment Using Sleep Sensory Data
Chinmaey Shende, Soumyashree Sahoo, Stephen Sam, Parit Patel, Reynaldo Morillo, Xinyu Wang, Shweta Ware, Jinbo Bi, Jayesh Kamath, Alexander Russell, Dongjin Song, Bing Wang
{"title":"Predicting Symptom Improvement During Depression Treatment Using Sleep Sensory Data","authors":"Chinmaey Shende, Soumyashree Sahoo, Stephen Sam, Parit Patel, Reynaldo Morillo, Xinyu Wang, Shweta Ware, Jinbo Bi, Jayesh Kamath, Alexander Russell, Dongjin Song, Bing Wang","doi":"10.1145/3610932","DOIUrl":"https://doi.org/10.1145/3610932","url":null,"abstract":"Depression is a serious mental illness. The current best guideline in depression treatment is closely monitoring patients and adjusting treatment as needed. Close monitoring of patients through physician-administered follow-ups or self-administered questionnaires, however, is difficult in clinical settings due to high cost, lack of trained professionals, and burden to the patients. Sensory data collected from mobile devices has been shown to provide a promising direction for long-term monitoring of depression symptoms. Most existing studies in this direction, however, focus on depression detection; the few studies that are on predicting changes in depression are not in clinical settings. In this paper, we investigate using one type of sensory data, sleep data, collected from wearables to predict improvement of depression symptoms over time after a patient initiates a new pharmacological treatment. We apply sleep trend filtering to noisy sleep sensory data to extract high-level sleep characteristics and develop a family of machine learning models that use simple sleep features (mean and variation of sleep duration) to predict symptom improvement. Our results show that using such simple sleep features can already lead to validation F1 score up to 0.68, indicating that using sensory data for predicting depression improvement during treatment is a promising direction.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536096","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating Passive Haptic Learning of Piano Songs Using Three Tactile Sensations of Vibration, Stroking and Tapping
Likun Fang, Timo Müller, Erik Pescara, Nikola Fischer, Yiran Huang, Michael Beigl
{"title":"Investigating Passive Haptic Learning of Piano Songs Using Three Tactile Sensations of Vibration, Stroking and Tapping","authors":"Likun Fang, Timo Müller, Erik Pescara, Nikola Fischer, Yiran Huang, Michael Beigl","doi":"10.1145/3610899","DOIUrl":"https://doi.org/10.1145/3610899","url":null,"abstract":"Passive Haptic Learning (PHL) is a method by which users are able to learn motor skills without paying active attention. In past research, vibration is widely applied in PHL as the signal delivered on the participant's skin. The human somatosensory system provides not only discriminative input (the perception of pressure, vibration, slip, and texture, etc.) to the brain but also an affective input (sliding, tapping and stroking, etc.). The former is often described as being mediated by low-threshold mechanosensitive (LTM) units with rapidly conducting large myelinated (Aᵬ) afferents, while the latter is mediated by a class of LTM afferents called C-tactile afferents (CTs). We investigated whether different tactile sensations (tapping, light stroking, and vibration) influence the learning effect of PHL in this work. We built three wearable systems corresponding to the three sensations respectively. 17 participants were invited to learn to play three different note sequences passively via three different systems. The subjects were then tested on their remembered note sequences after each learning session. Our results indicate that the sensations of tapping or stroking are as effective as the vibration system in passive haptic learning of piano songs, providing viable alternatives to the vibration sensations that have been used so far. We also found that participants on average made up to 1.06 errors less when using affective inputs, namely tapping or stroking. As the first work exploring the differences in multiple types of tactile sensations in PHL, we offer our design to the readers and hope they may employ our works for further research of PHL.","PeriodicalId":20553,"journal":{"name":"Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135536099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0