Proceedings of the 22nd International Conference on Intelligent User Interfaces: Latest Publications

Supporting Trust in Autonomous Driving
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025198
Renate Häuslschmid, Max von Bülow, Bastian Pfleging, A. Butz
{"title":"SupportingTrust in Autonomous Driving","authors":"Renate Häuslschmid, Max von Bülow, Bastian Pfleging, A. Butz","doi":"10.1145/3025171.3025198","DOIUrl":"https://doi.org/10.1145/3025171.3025198","url":null,"abstract":"Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car's interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car's indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124435966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 96
CogniLearn
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025213
Srujana Gattupalli, Dylan Ebert, Michalis Papakostas, F. Makedon, V. Athitsos
{"title":"CogniLearn","authors":"Srujana Gattupalli, Dylan Ebert, Michalis Papakostas, F. Makedon, V. Athitsos","doi":"10.1145/3025171.3025213","DOIUrl":"https://doi.org/10.1145/3025171.3025213","url":null,"abstract":"This paper proposes a novel system for assessing physical exercises specifically designed for cognitive behavior monitoring. The proposed system provides decision support to experts for helping with early childhood development. Our work is based on the well-established framework of Head-Toes-Knees-Shoulders (HTKS) that is known for its sufficient psychometric properties and its ability to assess cognitive dysfunctions. HTKS serves as a useful measure for behavioral self-regulation. Our system, CogniLearn, automates capturing and motion analysis of users performing the HTKS game and provides detailed evaluations using state-of-the-art computer vision and deep learning based techniques for activity recognition and evaluation. The proposed system is supported by an intuitive and specifically designed user interface that can help human experts to cross-validate and/or refine their diagnosis. To evaluate our system, we created a novel dataset, that we made open to the public to encourage further experimentation. The dataset consists of 15 subjects performing 4 different variations of the HTKS task and contains in total more than 60,000 RGB frames, of which 4,443 are fully annotated.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117120272","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 21
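The abstract above describes deep-learning-based recognition of the HTKS moves; as a minimal sketch of the underlying scoring problem, the toy rule below labels a frame by which HTKS target the hands are closest to, given 2D pose keypoints. All names, coordinates, and the nearest-joint rule are illustrative assumptions, not CogniLearn's actual pipeline.

```python
import numpy as np

# Toy stand-in for HTKS frame labeling: pick the target body part
# nearest to the hands. CogniLearn itself uses deep-learning-based
# activity recognition; this rule is only an illustration.
TARGETS = ["head", "toes", "knees", "shoulders"]

def htks_touch_label(keypoints: dict) -> str:
    """Label a frame by which HTKS target the hands are closest to."""
    hands = np.array([keypoints["left_hand"], keypoints["right_hand"]])
    best_label, best_dist = None, np.inf
    for label in TARGETS:
        target = np.array(keypoints[label])
        # Average distance of both hands to this target.
        dist = np.linalg.norm(hands - target, axis=1).mean()
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical normalized 2D keypoints for one frame, e.g. from a pose estimator.
frame = {
    "left_hand": (0.48, 0.22), "right_hand": (0.54, 0.20),
    "head": (0.50, 0.15), "shoulders": (0.50, 0.30),
    "knees": (0.50, 0.75), "toes": (0.50, 0.95),
}
print(htks_touch_label(frame))  # -> "head"
```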
Deep Sequential Recommendation for Personalized Adaptive User Interfaces
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025207
Harold Soh, S. Sanner, Madeleine White, G. Jamieson
{"title":"Deep Sequential Recommendation for Personalized Adaptive User Interfaces","authors":"Harold Soh, S. Sanner, Madeleine White, G. Jamieson","doi":"10.1145/3025171.3025207","DOIUrl":"https://doi.org/10.1145/3025171.3025207","url":null,"abstract":"Adaptive user-interfaces (AUIs) can enhance the usability of complex software by providing real-time contextual adaptation and assistance. Ideally, AUIs should be personalized and versatile, i.e., able to adapt to each user who may perform a variety of complex tasks. But this is difficult to achieve with many interaction elements when data-per-user is sparse. In this paper, we propose an architecture for personalized AUIs that leverages upon developments in (1) deep learning, particularly gated recurrent units, to efficiently learn user interaction patterns, (2) collaborative filtering techniques that enable sharing of data among users, and (3) fast approximate nearest-neighbor methods in Euclidean spaces for quick UI control and/or content recommendations. Specifically, interaction histories are embedded in a learned space along with users and interaction elements; this allows the AUI to query and recommend likely next actions based on similar usage patterns across the user base. In a comparative evaluation on user-interface, web-browsing and e-learning datasets, the deep recurrent neural-network (DRNN) outperforms state-of-the-art tensor-factorization and metric embedding methods.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121392088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 52
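As a rough sketch of the architecture the abstract above outlines, the PyTorch snippet below embeds an interaction history with a GRU and scores candidate next actions against that embedding. Layer sizes, the action vocabulary, and the dot-product scoring are assumptions; the paper's full model also embeds users for collaborative filtering and uses fast approximate (rather than exact) nearest-neighbor search.

```python
import torch
import torch.nn as nn

# Illustrative sizes, not the paper's configuration.
NUM_ACTIONS, EMB_DIM = 50, 32

class HistoryEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_ACTIONS, EMB_DIM)  # one vector per UI action
        self.gru = nn.GRU(EMB_DIM, EMB_DIM, batch_first=True)

    def forward(self, history):              # history: (batch, seq_len) action ids
        _, h = self.gru(self.embed(history))
        return h.squeeze(0)                  # (batch, EMB_DIM) history embedding

encoder = HistoryEncoder()
history = torch.tensor([[3, 17, 42, 8]])     # one user's recent action ids
query = encoder(history)                     # embed the interaction history
# Score every action by similarity to the history embedding and take the top 5.
scores = query @ encoder.embed.weight.T      # (1, NUM_ACTIONS)
print(scores.topk(5).indices)                # recommended next-action candidates
```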
Adaptive View Management for Drone Teleoperation in Complex 3D Structures
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025179
J. Thomason, P. Ratsamee, K. Kiyokawa, Pakpoom Kriangkomol, J. Orlosky, T. Mashita, Yuuki Uranishi, H. Takemura
{"title":"Adaptive View Management for Drone Teleoperation in Complex 3D Structures","authors":"J. Thomason, P. Ratsamee, K. Kiyokawa, Pakpoom Kriangkomol, J. Orlosky, T. Mashita, Yuuki Uranishi, H. Takemura","doi":"10.1145/3025171.3025179","DOIUrl":"https://doi.org/10.1145/3025171.3025179","url":null,"abstract":"Drone navigation in complex environments poses many problems to teleoperators. Especially in 3D structures like buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel interface for teleoperation that provides a user with environment-adaptive viewpoints that are automatically configured to improve safety and smooth user operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D pointcloud information into account to modify user-viewpoint to maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera and we use resulting models as well as simulations in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121659227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
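The abstract above mentions combining robot pose with 3D point-cloud information to choose a viewpoint; a minimal sketch of one ingredient, scoring candidate camera positions by clearance from obstacles, follows. The candidate ring, radius, and clearance-only scoring are assumptions and omit the paper's visibility and occlusion reasoning.

```python
import numpy as np

def pick_viewpoint(drone_pos, pointcloud, radius=2.0, n_candidates=16):
    """Pick the candidate camera position with the most obstacle clearance."""
    # Candidate viewpoints on a ring around the drone, slightly above it.
    angles = np.linspace(0, 2 * np.pi, n_candidates, endpoint=False)
    candidates = drone_pos + radius * np.stack(
        [np.cos(angles), np.sin(angles), np.full_like(angles, 0.5)], axis=1)
    # Clearance = distance from each candidate to its nearest obstacle point.
    dists = np.linalg.norm(candidates[:, None, :] - pointcloud[None, :, :], axis=2)
    clearance = dists.min(axis=1)
    return candidates[clearance.argmax()]

drone = np.array([0.0, 0.0, 1.0])
# Synthetic obstacle point cloud standing in for a SLAM reconstruction.
cloud = np.random.default_rng(0).uniform(-3, 3, size=(500, 3))
print(pick_viewpoint(drone, cloud))
```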
DyFAV: Dynamic Feature Selection and Voting for Real-time Recognition of Fingerspelled Alphabet using Wearables
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025216
Prajwal Paudyal, Junghyo Lee, Ayan Banerjee, S. Gupta
{"title":"DyFAV: Dynamic Feature Selection and Voting for Real-time Recognition of Fingerspelled Alphabet using Wearables","authors":"Prajwal Paudyal, Junghyo Lee, Ayan Banerjee, S. Gupta","doi":"10.1145/3025171.3025216","DOIUrl":"https://doi.org/10.1145/3025171.3025216","url":null,"abstract":"Recent research has shown that reliable recognition of sign language words and phrases using user-friendly and non-invasive armbands is feasible and desirable. This work provides an analysis and implementation of including fingerspelling recognition (FR) in such systems, which is a much harder problem due to lack of distinctive hand movements. A novel algorithm called DyFAV (Dynamic Feature Selection and Voting) is proposed for this purpose that exploits the fact that fingerspelling has a finite corpus (26 letters for ASL). The system uses an independent multiple agent voting approach to identify letters with high accuracy. The independent voting of the agents ensures that the algorithm is highly parallelizable and thus recognition times can be kept low to suit real-time mobile applications. The results are demonstrated on the entire ASL alphabet corpus for nine people with limited training and average recognition accuracy of 95.36% is achieved which is better than the state-of-art for armband sensors. The mobile, non-invasive, and real time nature of the technology is demonstrated by evaluating performance on various types of Android phones and remote server configurations.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126326955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
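To make the multi-agent voting idea from the abstract concrete, the sketch below has independent agents classify a letter from disjoint feature slices and combines their votes by majority. The nearest-centroid agents, feature splits, and synthetic data are illustrative assumptions, not DyFAV's actual feature selection or classifiers.

```python
import numpy as np
from collections import Counter

# Synthetic setup: 3 letters, 12 armband features, one centroid per letter.
rng = np.random.default_rng(1)
LETTERS = list("ABC")
centroids = {l: rng.normal(i, 0.1, size=12) for i, l in enumerate(LETTERS)}
feature_subsets = [slice(0, 4), slice(4, 8), slice(8, 12)]  # one slice per agent

def agent_vote(sample, subset):
    # Each agent sees only its feature slice and picks the nearest centroid.
    return min(LETTERS,
               key=lambda l: np.linalg.norm(sample[subset] - centroids[l][subset]))

def classify(sample):
    # Agents vote independently (hence parallelizable); majority wins.
    votes = [agent_vote(sample, s) for s in feature_subsets]
    return Counter(votes).most_common(1)[0][0]

sample = centroids["B"] + rng.normal(0, 0.05, size=12)  # noisy "B" reading
print(classify(sample))  # -> "B"
```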
"How May I Help You?": Modeling Twitter Customer ServiceConversations Using Fine-Grained Dialogue Acts “我能为您做些什么?”:使用细粒度对话行为建模Twitter客户服务会话
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025191
Shereen Oraby, Pritam Gundecha, J. Mahmud, Mansurul Bhuiyan, R. Akkiraju
{"title":"\"How May I Help You?\": Modeling Twitter Customer ServiceConversations Using Fine-Grained Dialogue Acts","authors":"Shereen Oraby, Pritam Gundecha, J. Mahmud, Mansurul Bhuiyan, R. Akkiraju","doi":"10.1145/3025171.3025191","DOIUrl":"https://doi.org/10.1145/3025171.3025191","url":null,"abstract":"Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understand trends in customer and agent behavior for the purpose of automating customer service interactions. In this work, we develop a novel taxonomy of fine-grained \"dialogue acts\" frequently observed in customer service, showcasing acts that are more suited to the domain than the more generic existing taxonomies. Using a sequential SVM-HMM model, we model conversation flow, predicting the dialogue act of a given turn in real-time. We characterize differences between customer and agent behavior in Twitter customer service conversations, and investigate the effect of testing our system on different customer service industries. Finally, we use a data-driven approach to predict important conversation outcomes: customer satisfaction, customer frustration, and overall problem resolution. We show that the type and location of certain dialogue acts in a conversation have a significant effect on the probability of desirable and undesirable outcomes, and present actionable rules based on our findings. The patterns and rules we derive can be used as guidelines for outcome-driven automated customer service platforms.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126017526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 33
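A minimal sketch of sequential dialogue-act tagging in the spirit of the SVM-HMM mentioned above: per-turn classifier scores (emissions) are combined with act-to-act transition scores by Viterbi decoding. The act set, transition matrix, and emission scores are made up for illustration rather than taken from the paper's trained model.

```python
import numpy as np

ACTS = ["greeting", "complaint", "answer", "thanks"]
# Log transition scores: row = previous act, column = next act (illustrative).
trans = np.log(np.array([
    [0.10, 0.60, 0.20, 0.10],
    [0.05, 0.20, 0.70, 0.05],
    [0.05, 0.20, 0.25, 0.50],
    [0.25, 0.25, 0.25, 0.25],
]))

def viterbi(emission_log_scores):
    """Best act sequence under per-turn scores plus transition scores."""
    T, K = emission_log_scores.shape
    score = emission_log_scores[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans + emission_log_scores[t][None, :]
        back[t] = cand.argmax(axis=0)   # best previous act for each current act
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):       # walk the backpointers
        path.append(int(back[t][path[-1]]))
    return [ACTS[i] for i in reversed(path)]

# Fake per-turn scores for a 3-turn conversation (e.g., from per-turn classifiers).
emissions = np.log(np.array([
    [0.70, 0.20, 0.05, 0.05],   # turn 1: likely a greeting
    [0.10, 0.70, 0.10, 0.10],   # turn 2: likely a complaint
    [0.05, 0.10, 0.75, 0.10],   # turn 3: likely an answer
]))
print(viterbi(emissions))  # -> ['greeting', 'complaint', 'answer']
```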
Interaction Design for Rehabilitation
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3026365
P. Markopoulos
{"title":"Interaction Design for Rehabiliation","authors":"P. Markopoulos","doi":"10.1145/3025171.3026365","DOIUrl":"https://doi.org/10.1145/3025171.3026365","url":null,"abstract":"Well-known trends pertaining to the aging of population and the rising costs of healthcare motivate the development of rehabilitation technology. There is a considerable body of work in this area including efforts to make serious games, virtual reality and robotic applications. While innovative technologies have been introduced over the years, and often researchers produce promising experimental results, these technologies have not yet delivered the anticipated benefits. The causes for this apparent failure are evident when looking a closer look at the case of stroke rehabilitation, which is one of the heaviest researched topics for developing rehabilitation technologies. It is argued that improvements should be sought by centering the design on an understanding of patient needs, allowing patients, therapists and care givers in general to personalize solutions to the need of patients, effective feedback and motivation strategies to be implemented, and an in depth understanding of the socio-technical system in which the rehabilitation technology will be embedded. These are classic challenges that human computer interaction (HCI) researchers have been dealing with for years, which is why the field of rehabilitation technology requires considerable input from HCI researchers, and which explains the growing number of relevant HCI publications pertaining to rehabilitation. The talk reviews related research carried out at the Eindhoven University of Technology together with collaborating institutes, which has examined the value of tangible user interfaces and embodied interaction in rehabilitation, how designing playful interactions or games with a functional purpose., feedback design. I shall discuss the work we have done to develop rehabilitation technologies for the TagTrrainer system in the doctoral research of Daniel Tetteroo [2,3,4] and the explorations on wearable solutions in the doctoral research of Wang Qi.[5,6]. With our research being design driven and explorative, I will discuss also the current state of the art for the field and the challenges that need to be addressed for human computer interaction research to make a larger impact in the domain of rehabilitation technology.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"459 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128200788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
CQAVis: Visual Text Analytics for Community Question Answering
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025210
Enamul Hoque, Shafiq R. Joty, Luis Marquez, G. Carenini
{"title":"CQAVis: Visual Text Analytics for Community Question Answering","authors":"Enamul Hoque, Shafiq R. Joty, Luis Marquez, G. Carenini","doi":"10.1145/3025171.3025210","DOIUrl":"https://doi.org/10.1145/3025171.3025210","url":null,"abstract":"Community question answering (CQA) forums can provide effective means for sharing information and addressing a user's information needs about particular topics. However, many such online forums are not moderated, resulting in many low quality and redundant comments, which makes it very challenging for users to find the appropriate answers to their questions. In this paper, we apply a user-centered design approach to develop a system, CQAVis, which supports users in identifying high quality comments and get their questions answered. Informed by the user's requirements, the system combines both text analytics and interactive visualization techniques together in a synergistic way. Given a new question posed by the user, the text analytic module automatically finds relevant answers by exploring existing related questions and the comments within their threads. Then the visualization module presents the search results to the user and supports the exploration of related comments. We have evaluated the system in the wild by deploying it within a CQA forum among thousands of real users. Through the online study, we gained deeper insights about the potential utility of the system, as well as learned generalizable lessons for designing visual text analytics systems for the domain of CQA forums.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133153075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
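As an illustration of the retrieval step the abstract attributes to the text analytics module, the sketch below ranks existing forum questions against a new question by TF-IDF cosine similarity. The toy corpus is an assumption, and the real system layers comment-quality analysis and interactive visualization on top of retrieval.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy forum corpus standing in for the existing question threads.
existing_questions = [
    "How do I renew my work visa?",
    "What documents are needed for a family visa?",
    "Best restaurants near the city center?",
]
new_question = "Which papers do I need to renew a work visa?"

# Fit TF-IDF on all text, then rank existing questions by cosine similarity.
vectorizer = TfidfVectorizer().fit(existing_questions + [new_question])
sims = cosine_similarity(
    vectorizer.transform([new_question]),
    vectorizer.transform(existing_questions),
)[0]
for score, q in sorted(zip(sims, existing_questions), reverse=True):
    print(f"{score:.2f}  {q}")
```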
UI X-Ray: Interactive Mobile UI Testing Based on Computer Vision
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3025190
Chun-Fu Chen, Marco Pistoia, Conglei Shi, Paolo Girolami, Joe W. Ligman, Y. Wang
{"title":"UI X-Ray: Interactive Mobile UI Testing Based on Computer Vision","authors":"Chun-Fu Chen, Marco Pistoia, Conglei Shi, Paolo Girolami, Joe W. Ligman, Y. Wang","doi":"10.1145/3025171.3025190","DOIUrl":"https://doi.org/10.1145/3025171.3025190","url":null,"abstract":"User Interface/eXperience (UI/UX) significantly affects the lifetime of any software program, particularly mobile apps. A bad UX can undermine the success of a mobile app even if that app enables sophisticated capabilities. A good UX, however, needs to be supported of a highly functional and user friendly UI design. In spite of the importance of building mobile apps based on solid UI designs, UI discrepancies---inconsistencies between UI design and implementation---are among the most numerous and expensive defects encountered during testing. This paper presents UI X-Ray, an interactive UI testing system that integrates computer-vision methods to facilitate the correction of UI discrepancies---such as inconsistent positions, sizes and colors of objects and fonts. Using UI X-Ray does not require any programming experience; therefore, UI X-Ray can be used even by non-programmers---particularly designers---which significantly reduces the overhead involved in writing tests. With the feature of interactive interface, UI testers can quickly generate defect reports and revision instructions---which would otherwise be done manually. We verified our UI X-Ray on 4 developed mobile apps of which the entire development history was saved. UI X-Ray achieved a 99.03% true-positive rate, which significantly surpassed the 20.92% true-positive rate obtained via manual analysis. Furthermore, evaluating the results of our automated analysis can be completed quickly (< 1 minute per view on average) compared to hours of manual work required by UI testers. On the other hand, UI X-Ray received the appreciations from skilled designers and UI X-Ray improves their current work flow to generate UI defect reports and revision instructions. The proposed system, UI X-Ray, presented in this paper has recently become part of a commercial product.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130554670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
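A simplified stand-in for the vision-based comparison the abstract describes: diff a design mockup against an implementation screenshot and report the region that differs (here, a button shifted by two pixels). The synthetic images and pixel-level diff are assumptions; UI X-Ray's actual pipeline detects per-object position, size, color, and font discrepancies.

```python
import numpy as np

# Synthetic "design" and "implementation" screens, 100x100 RGB.
H, W = 100, 100
design = np.zeros((H, W, 3), dtype=np.uint8)
design[20:40, 10:60] = (0, 120, 255)          # intended button (blue)
implementation = np.zeros_like(design)
implementation[22:42, 10:60] = (0, 120, 255)  # same button, shifted 2 px down

# Flag every pixel where any channel disagrees, then report the bounding box.
diff = np.any(design != implementation, axis=2)
if diff.any():
    ys, xs = np.nonzero(diff)
    print(f"discrepancy region: rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
    print(f"differing pixels: {diff.sum()}")
else:
    print("no discrepancies found")
```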
Modern Touchscreen Keyboards as Intelligent User Interfaces: A Research Review
Proceedings of the 22nd International Conference on Intelligent User Interfaces Pub Date: 2017-03-07 DOI: 10.1145/3025171.3026367
Shumin Zhai
{"title":"Modern Touchscreen Keyboards as Intelligent User Interfaces: A Research Review","authors":"Shumin Zhai","doi":"10.1145/3025171.3026367","DOIUrl":"https://doi.org/10.1145/3025171.3026367","url":null,"abstract":"Essential to mobile communication, the touchscreen keyboard is the most ubiquitous intelligent user interface on modern mobile phones. Developing smarter, more efficient, easy to learn, and fun to use keyboards has presented many fascinating IUI research and design questions. Some have been addressed by academic research and practitioners in industry, while others remain significant ongoing research challenges. In this IUI 2017 keynote address I will review and synthesize the progress and open research questions of the past 15 years in text input, focusing on those my co-authors and I have directly dealt with through publications, such as the cost-benefit equations of automation and prediction [9], the power of machine/statistical intelligence [4, 7, 12], the human performance models fundamental to the design of error-correction algorithms [1, 2, 8], spatial scaling from a phone to a watch and the implications on human-machine labor division [5], user behavior and learning innovation [7, 11, 12, 13], and the challenges of evaluating the longitudinal effects of personalization and adaptation [4]. Through this research program review, I will illustrate why intelligent user interfaces, or the combination of machine intelligence and human factors, holds the future of human-computer interaction, and information technology at large.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126501607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4