arXiv - CS - Human-Computer Interaction: Latest Articles
Emerging Reliance Behaviors in Human-AI Text Generation: Hallucinations, Data Quality Assessment, and Cognitive Forcing Functions
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.08937
Zahra Ashktorab, Qian Pan, Werner Geyer, Michael Desmond, Marina Danilevsky, James M. Johnson, Casey Dugan, Michelle Bachman
Abstract: In this paper, we investigate the impact of hallucinations and cognitive forcing functions in human-AI collaborative text generation tasks, focusing on the use of Large Language Models (LLMs) to assist in generating high-quality conversational data. LLMs require data for fine-tuning, a crucial step in enhancing their performance. In the context of conversational customer support, the data takes the form of a conversation between a human customer and an agent and can be generated with an AI assistant. In our inquiry, involving 11 users who each completed 8 tasks, resulting in a total of 88 tasks, we found that the presence of hallucinations negatively impacts the quality of data. We also find that, although the cognitive forcing function does not always mitigate the detrimental effects of hallucinations on data quality, the presence of cognitive forcing functions and hallucinations together impacts data quality and influences how users leverage the AI responses presented to them. Our analysis of user behavior reveals distinct patterns of reliance on AI-generated responses, highlighting the importance of managing hallucinations in AI-generated content within conversational AI contexts.
Citations: 0
VAE Explainer: Supplement Learning Variational Autoencoders with Interactive Visualization
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.09011
Donald Bertucci, Alex Endert
Abstract: Variational Autoencoders are widespread in Machine Learning, but are typically explained with dense math notation or static code examples. This paper presents VAE Explainer, an interactive Variational Autoencoder running in the browser to supplement existing static documentation (e.g., Keras Code Examples). VAE Explainer adds interactions to the VAE summary with interactive model inputs, latent space, and output. VAE Explainer connects the high-level understanding with the implementation: annotated code and a live computational graph. The VAE Explainer interactive visualization is live at https://xnought.github.io/vae-explainer and the code is open source at https://github.com/xnought/vae-explainer.
Citations: 0
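An interactive explainer like the one above animates the mechanics that static tutorials only describe. As a rough, self-contained sketch of two core pieces any VAE visualization must expose, the reparameterization trick and the KL regularizer, here is a toy NumPy version (this is not the VAE Explainer code; the linear "encoder" and all dimensions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_mu, w_logvar):
    """Toy linear 'encoder': map inputs to a latent mean and log-variance."""
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps, so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL(N(mu, sigma^2) || N(0, 1)), summed over latent dimensions."""
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)

x = rng.standard_normal((4, 8))        # batch of 4 inputs, input dim 8
w_mu = rng.standard_normal((8, 2))     # latent dim 2
w_logvar = rng.standard_normal((8, 2))

mu, logvar = encode(x, w_mu, w_logvar)
z = reparameterize(mu, logvar, rng)
print(z.shape)  # (4, 2)
# The KL term vanishes exactly when the encoder outputs the prior N(0, 1):
print(kl_to_standard_normal(np.zeros((1, 2)), np.zeros((1, 2))))
```

The reparameterization step is the part an interactive latent-space view makes tangible: dragging mu or sigma moves the sampled z, which a static equation cannot show.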
Modeling Rational Adaptation of Visual Search to Hierarchical Structures
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.08967
Saku Sourulahti, Christian P Janssen, Jussi PP Jokinen
Abstract: Efficient attention deployment in visual search is limited by human visual memory, yet this limitation can be offset by exploiting the environment's structure. This paper introduces a computational cognitive model that simulates how the human visual system uses visual hierarchies to prevent refixations in sequential attention deployment. The model adopts computational rationality, positing behaviors as adaptations to cognitive constraints and environmental structures. In contrast to earlier models that predict search performance for hierarchical information, our model does not include predefined assumptions about particular search strategies. Instead, our model's search strategy emerges as a result of adapting to the environment through reinforcement learning algorithms. In an experiment with human participants we test the model's prediction that structured environments reduce visual search times compared to random tasks. Our model's predictions correspond well with human search performance across various set sizes for both structured and unstructured visual layouts. Our work improves understanding of the adaptive nature of visual search in hierarchically structured environments and informs the design of optimized search spaces.
Citations: 0
AI as Extraherics: Fostering Higher-order Thinking Skills in Human-AI Interaction
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.09218
Koji Yatani, Zefan Sramek, Chi-lan Yang
Abstract: As artificial intelligence (AI) technologies, including generative AI, continue to evolve, concerns have arisen about over-reliance on AI, which may lead to human deskilling and diminished cognitive engagement. Over-reliance on AI can also lead users to accept information given by AI without performing critical examinations, causing negative consequences, such as misleading users with hallucinated contents. This paper introduces extraheric AI, a human-AI interaction conceptual framework that fosters users' higher-order thinking skills, such as creativity, critical thinking, and problem-solving, during task completion. Unlike existing human-AI interaction designs, which replace or augment human cognition, extraheric AI fosters cognitive engagement by posing questions or providing alternative perspectives to users, rather than direct answers. We discuss interaction strategies, evaluation methods aligned with cognitive load theory and Bloom's taxonomy, and future research directions to ensure that human cognitive skills remain a crucial element in AI-integrated environments, promoting a balanced partnership between humans and AI.
Citations: 0
Improving Governance Outcomes Through AI Documentation: Bridging Theory and Practice
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.08960
Amy A. Winecoff, Miranda Bogen
Abstract: Documentation plays a crucial role in both external accountability and internal governance of AI systems. Although there are many proposals for documenting AI data, models, systems, and methods, the ways these practices enhance governance as well as the challenges practitioners and organizations face with documentation remain underexplored. In this paper, we analyze 37 proposed documentation frameworks and 21 empirical studies evaluating their use. We identify potential hypotheses about how documentation can strengthen governance, such as informing stakeholders about AI risks and usage, fostering collaboration, encouraging ethical reflection, and reinforcing best practices. However, empirical evidence shows that practitioners often encounter obstacles that prevent documentation from achieving these goals. We also highlight key considerations for organizations when designing documentation, such as determining the appropriate level of detail and balancing automation in the process. Finally, we offer recommendations for further research and for implementing effective documentation practices in real-world contexts.
Citations: 0
Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.08895
Pat Pataranutaporn, Chayapatr Archiwaranguprok, Samantha W. T. Chan, Elizabeth Loftus, Pattie Maes
Abstract: AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories, recollections of events that didn't occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
Citations: 0
To Shelter or Not To Shelter: Exploring the Influence of Different Modalities in Virtual Reality on Individuals' Tornado Mitigation Behaviors
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.09205
Jiuyi Xu, Tolulope Sanni, Ziming Liu, Ye Yang, Jiyoung Lee, Wei Song, Yangming Shi
Abstract: Timely and adequate risk communication before natural hazards can reduce losses from extreme weather events and provide more resilient disaster preparedness. However, existing natural hazard risk communications have been abstract, ineffective, not immersive, and sometimes counterproductive. The implementation of virtual reality (VR) for natural hazard risk communication presents a promising alternative to the existing risk communication system by offering immersive and engaging experiences. However, it is still unknown how different modalities in VR could affect individuals' mitigation behaviors related to incoming natural hazards. In addition, it is also not clear how repetitive risk communication across different modalities in the VR system leads to risk habituation. To fill this knowledge gap, we developed a VR system with a tornado risk communication scenario and conducted a mixed-design human subject experiment (N = 24). We investigated these questions using both quantitative and qualitative results.
Citations: 0
Dark Patterns in the Opt-Out Process and Compliance with the California Consumer Privacy Act (CCPA)
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.09222
Van Hong Tran, Aarushi Mehrotra, Ranya Sharma, Marshini Chetty, Nick Feamster, Jens Frankenreiter, Lior Strahilevitz
Abstract: To protect consumer privacy, the California Consumer Privacy Act (CCPA) mandates that businesses provide consumers with a straightforward way to opt out of the sale and sharing of their personal information. However, the control that businesses enjoy over the opt-out process allows them to impose hurdles on consumers aiming to opt out, including by employing dark patterns. Motivated by the enactment of the California Privacy Rights Act (CPRA), which strengthens the CCPA and explicitly forbids certain dark patterns in the opt-out process, we investigate how dark patterns are used in opt-out processes and assess their compliance with CCPA regulations. Our research reveals that websites employ a variety of dark patterns. Some of these patterns are explicitly prohibited under the CCPA; others evidently take advantage of legal loopholes. Despite the initial efforts to restrict dark patterns by policymakers, there is more work to be done.
Citations: 0
Management and Visualization Tools for Emergency Medical Services
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.09154
Vincent Guigues, Anton Kleywegt, Victor Hugo Nascimento, Victor Salles Rodrigues, Thais Viana, Edson Medeiros
Abstract: This paper describes an online tool for the visualization of medical emergency locations, randomly generated sample paths of medical emergencies, and the animation of ambulance movements under the control of various dispatch methods in response to these emergencies. The tool incorporates statistical models for forecasting emergency locations and call arrival times, the simulation of emergency arrivals and ambulance movement trajectories, and the computation and visualization of performance metrics such as ambulance response time distributions. Data for the Rio de Janeiro Emergency Medical Service are available on the website. A user can upload emergency data for any Emergency Medical Service, and can then use the visualization tool to explore the uploaded data. A user can also use the statistical tools and/or the simulation tool with any of the dispatch methods provided, and can then use the visualization tool to explore the computational output. Future enhancements include the ability of a user to embed additional dispatch algorithms into the simulation; the tool can then be used to visualize the simulation results obtained with the newly embedded algorithms.
Citations: 0
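The simulator described above accepts pluggable dispatch methods and reports response-time metrics. Purely as an illustration of what a dispatch method and its metric look like (this is not the authors' tool; the nearest-ambulance rule, coordinates, and travel speed are invented assumptions), here is a minimal sketch:

```python
import math

def nearest_dispatch(ambulances, emergency, speed_kmh=40.0):
    """Pick the closest available ambulance (straight-line distance on a km grid)
    and return (ambulance_id, response_time_minutes)."""
    best_id, best_dist = None, math.inf
    for amb_id, (ax, ay, available) in ambulances.items():
        if not available:
            continue
        d = math.hypot(ax - emergency[0], ay - emergency[1])
        if d < best_dist:
            best_id, best_dist = amb_id, d
    if best_id is None:
        return None, math.inf
    return best_id, best_dist / speed_kmh * 60.0

ambulances = {
    "A1": (0.0, 0.0, True),
    "A2": (5.0, 5.0, True),
    "A3": (1.0, 1.0, False),  # busy, so the dispatcher skips it
}
amb, minutes = nearest_dispatch(ambulances, (1.0, 2.0))
print(amb, round(minutes, 1))  # A1 3.4
```

A real dispatch policy would work on road-network travel times rather than straight-line distance; collecting `minutes` over many simulated calls yields the response-time distribution the tool visualizes.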
Predicting Trust in Autonomous Vehicles: Modeling Young Adult Psychosocial Traits, Risk-Benefit Attitudes, and Driving Factors with Machine Learning
arXiv - CS - Human-Computer Interaction | Pub Date: 2024-09-13 | DOI: arXiv:2409.08980
Robert Kaufman, Emi Lee, Manas Satish Bedmutha, David Kirsh, Nadir Weibel
Abstract: Low trust remains a significant barrier to Autonomous Vehicle (AV) adoption. To design trustworthy AVs, we need to better understand the individual traits, attitudes, and experiences that impact people's trust judgements. We use machine learning to understand the most important factors that contribute to young adult trust based on a comprehensive set of personal factors gathered via survey (n = 1457). Factors ranged from psychosocial and cognitive attributes to driving style, experiences, and perceived AV risks and benefits. Using the explainable AI technique SHAP, we found that perceptions of AV risks and benefits, attitudes toward feasibility and usability, institutional trust, prior experience, and a person's mental model are the most important predictors. Surprisingly, psychosocial and many technology- and driving-specific factors were not strong predictors. Results highlight the importance of individual differences for designing trustworthy AVs for diverse groups and lead to key implications for future design and research.
Citations: 0
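SHAP attributes a model's prediction to its input features via Shapley values from cooperative game theory. As a concept sketch only (not the study's code and not the shap library), exact Shapley values for a toy additive "trust model" with three hypothetical features can be computed by brute force over feature coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, n):
    """Exact Shapley values for a set function f over n features.
    f maps a frozenset of feature indices to a model output."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for s in combinations(others, size):
                s = frozenset(s)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += weight * (f(s | {i}) - f(s))  # weighted marginal contribution
    return phi

# Hypothetical additive trust model over three invented features:
# 0 = perceived risk/benefit, 1 = prior experience, 2 = institutional trust.
contrib = {0: 0.5, 1: 0.3, 2: 0.2}
f = lambda s: sum(contrib[j] for j in s)

phi = shapley_values(f, 3)
print([round(p, 3) for p in phi])  # [0.5, 0.3, 0.2]
```

For an additive model the attributions recover each feature's contribution exactly, and they always satisfy the efficiency property (they sum to the difference between the full prediction and the empty baseline); libraries like shap approximate the same quantity efficiently for real models, since this brute force is exponential in the number of features.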