arXiv - CS - Human-Computer Interaction: Latest Publications

Exploring Gaze Pattern in Autistic Children: Clustering, Visualization, and Prediction
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-18 DOI: arxiv-2409.11744
Weiyan Shi, Haihong Zhang, Jin Yang, Ruiqing Ding, YongWei Zhu, Kenny Tsu Wei Choo
Abstract: Autism Spectrum Disorder (ASD) significantly affects the social and communication abilities of children, and eye-tracking is commonly used as a diagnostic tool by identifying associated atypical gaze patterns. Traditional methods demand manual identification of Areas of Interest in gaze patterns, lowering the performance of gaze behavior analysis in ASD subjects. To tackle this limitation, we propose a novel method to automatically analyze gaze behaviors in ASD children with superior accuracy. Specifically, we first apply and optimize seven clustering algorithms to automatically group gaze points to compare ASD subjects with typically developing peers. Subsequently, we extract 63 significant features to fully describe the patterns. These features can describe correlations between ASD diagnosis and gaze patterns. Lastly, using these features as prior knowledge, we train multiple predictive machine learning models to predict and diagnose ASD based on gaze behaviors. To evaluate our method, we apply it to three ASD datasets. The experimental and visualization results demonstrate the improvements of clustering algorithms in the analysis of unique gaze patterns in ASD children. Additionally, these predictive machine learning models achieved state-of-the-art prediction performance (81% AUC) in the field of automatically constructed gaze point features for ASD diagnosis. Our code is available at https://github.com/username/projectname.
Citations: 0
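The pipeline this abstract outlines — cluster gaze points, summarize the clusters as features, then train a supervised model — can be sketched as follows. This is a minimal illustration, not the authors' code: the choice of KMeans, the two cluster statistics, and the random-forest classifier are all assumptions, and the data is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def gaze_features(points, n_clusters=5):
    """Cluster (x, y) gaze points and summarize each cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(points)
    feats = []
    for k in range(n_clusters):
        cluster = points[km.labels_ == k]
        feats += [len(cluster) / len(points),   # share of gaze points in cluster
                  cluster.std(axis=0).mean()]   # spatial dispersion of cluster
    return np.array(feats)

# Synthetic stand-in data: one feature vector per child, binary label.
rng = np.random.default_rng(0)
X = np.stack([gaze_features(rng.random((200, 2))) for _ in range(60)])
y = rng.integers(0, 2, 60)  # 1 = ASD, 0 = typically developing (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```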
Revealing the Challenge of Detecting Character Knowledge Errors in LLM Role-Playing
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-18 DOI: arxiv-2409.11726
Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu
Abstract: Large language model (LLM) role-playing has gained widespread attention, where authentic character knowledge is crucial for constructing realistic LLM role-playing agents. However, existing works usually overlook the exploration of LLMs' ability to detect characters' known knowledge errors (KKE) and unknown knowledge errors (UKE) while playing roles, which would lead to low-quality automatic construction of character-trainable corpora. In this paper, we propose a probing dataset to evaluate LLMs' ability to detect errors in KKE and UKE. The results indicate that even the latest LLMs struggle to effectively detect these two types of errors, especially when it comes to familiar knowledge. We experimented with various reasoning strategies and propose an agent-based reasoning method, Self-Recollection and Self-Doubt (S2RD), to further explore the potential for improving error-detection capabilities. Experiments show that our method effectively improves the LLMs' ability to detect erroneous character knowledge, but it remains an issue that requires ongoing attention.
Citations: 0
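To make the probing setup concrete, here is a hypothetical sketch of how one might test whether a role-playing LLM flags an erroneous claim. `query_llm` is a stand-in for any chat-completion call, and the prompt wording is an assumption, not the paper's dataset format.

```python
def detects_error(query_llm, character, claim):
    """Ask the model, in character, whether a claim conflicts with what
    the character could plausibly know. Prompt wording is illustrative."""
    prompt = (f"You are role-playing {character}. A user asserts: '{claim}'. "
              "Does this assertion contradict or exceed the character's "
              "knowledge? Answer YES or NO.")
    return query_llm(prompt).strip().upper().startswith("YES")

def detection_rate(query_llm, probes):
    """probes: iterable of (character, erroneous_claim) pairs."""
    probes = list(probes)
    hits = sum(detects_error(query_llm, c, claim) for c, claim in probes)
    return hits / len(probes)
```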
AI paintings vs. Human Paintings? Deciphering Public Interactions and Perceptions towards AI-Generated Paintings on TikTok
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-18 DOI: arxiv-2409.11911
Jiajun Wang, Xiangzhe Yuan, Siying Hu, Zhicong Lu
Abstract: With the development of generative AI technology, a vast array of AI-generated paintings (AIGP) have gone viral on social media like TikTok. However, some negative news about AIGP has also emerged. For example, in 2022, numerous painters worldwide organized a large-scale anti-AI movement because of the infringement in generative AI model training. This event reflected a social issue: with the development and application of generative AI, public feedback and feelings towards it may have been overlooked. Therefore, to investigate public interactions and perceptions towards AIGP on social media, we analyzed user engagement levels and comment sentiment scores of AIGP, using human painting videos as a baseline. In analyzing user engagement, we also considered the possible moderating effect of the aesthetic quality of paintings. Utilizing topic modeling, we identified seven reasons, including "looks too real," "looks too scary," ambivalence, etc., leading to negative public perceptions of AIGP. Our work may provide instructive suggestions for future generative AI technology development and help avoid potential crises in human-AI collaboration.
Citations: 0
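A rough sketch of the two comment-analysis steps the abstract names, sentiment scoring and topic modeling, is below. VADER and LDA are assumptions standing in for whatever tooling the authors actually used, and the comments are invented placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")

comments = [  # illustrative stand-ins for scraped TikTok comments
    "This AI painting looks too real, it honestly scares me",
    "Beautiful colors, I love this work",
    "Artists deserve credit, not image generators",
]

# Per-comment sentiment in [-1, 1], then the corpus mean.
sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(c)["compound"] for c in comments]
print("mean sentiment:", sum(scores) / len(scores))

# Topic modeling over a bag-of-words representation.
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in topic.argsort()[-3:]])
```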
From Data Stories to Dialogues: A Randomised Controlled Trial of Generative AI Agents and Data Storytelling in Enhancing Data Visualisation Comprehension
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-18 DOI: arxiv-2409.11645
Lixiang Yan, Roberto Martinez-Maldonado, Yueqiao Jin, Vanessa Echeverria, Mikaela Milesi, Jie Fan, Linxuan Zhao, Riordan Alfredo, Xinyu Li, Dragan Gašević
Abstract: Generative AI (GenAI) agents offer a potentially scalable approach to support comprehension of complex data visualisations, a skill many individuals struggle with. While data storytelling has proven effective, there is little evidence regarding the comparative effectiveness of GenAI agents. To address this gap, we conducted a randomised controlled study with 141 participants to compare the effectiveness and efficiency of data dialogues facilitated by both passive (which simply answer participants' questions about visualisations) and proactive (infused with scaffolding questions to guide participants through visualisations) GenAI agents against data storytelling in enhancing their comprehension of data visualisations. Comprehension was measured before, during, and after the intervention. Results suggest that passive GenAI agents improve comprehension similarly to data storytelling both during and after the intervention. Notably, proactive GenAI agents significantly enhance comprehension after the intervention compared to both passive GenAI agents and standalone data storytelling, regardless of participants' visualisation literacy, indicating sustained improvements and learning.
Citations: 0
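The abstract does not state which statistical tests were used, so the following is only a generic sketch of how post-intervention comprehension might be compared across the three conditions: a one-way ANOVA on illustrative, synthetic scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
storytelling = rng.normal(0.60, 0.10, 47)  # synthetic comprehension scores;
passive      = rng.normal(0.62, 0.10, 47)  # 141 participants ~ 47 per arm
proactive    = rng.normal(0.70, 0.10, 47)

f, p = stats.f_oneway(storytelling, passive, proactive)
print(f"F = {f:.2f}, p = {p:.4f}")
```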
OSINT Clinic: Co-designing AI-Augmented Collaborative OSINT Investigations for Vulnerability Assessment
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-18 DOI: arxiv-2409.11672
Anirban Mukhopadhyay, Kurt Luther
Abstract: Small businesses need vulnerability assessments to identify and mitigate cyber risks. Cybersecurity clinics provide a solution by offering students hands-on experience while delivering free vulnerability assessments to local organizations. To scale this model, we propose an Open Source Intelligence (OSINT) clinic where students conduct assessments using only publicly available data. We enhance the quality of investigations in the OSINT clinic by addressing its technical and collaborative challenges. Over the 2023-24 academic year, we conducted a three-phase co-design study with six students. Our study identified key challenges in OSINT investigations and explored how generative AI could address these performance gaps. We developed design ideas for effective AI integration based on the use of AI probes and collaboration platform features. A pilot with three small businesses highlighted both the practical benefits of AI in streamlining investigations and its limitations, including privacy concerns and difficulty in monitoring progress.
Citations: 0
Equimetrics -- Applying HAR principles to equestrian activities
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-18 DOI: arxiv-2409.11989
Jonas Pöhler, Kristof Van Laerhoven
Abstract: This paper presents the Equimetrics data capture system. The primary objective is to apply human activity recognition (HAR) principles to enhance the understanding and optimization of equestrian performance. By integrating data from strategically placed sensors on the rider's body and the horse's limbs, the system provides a comprehensive view of their interactions. Preliminary data collection has demonstrated the system's ability to accurately classify various equestrian activities, such as walking, trotting, cantering, and jumping, while also detecting subtle changes in rider posture and horse movement. The system leverages open-source hardware and software to offer a cost-effective alternative to traditional motion capture technologies, making it accessible to researchers and trainers. The Equimetrics system represents a significant advancement in equestrian performance analysis, providing objective, data-driven insights that can be used to enhance training and competition outcomes.
Citations: 0
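A conventional HAR pipeline of the kind the abstract implies — sliding windows over IMU streams, simple per-window statistics, and a gait classifier — might look like the sketch below. Window size, features, and model are assumptions, not the Equimetrics implementation, and the signals are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, width=100, step=50):
    """signal: (timesteps, channels) accelerometer/gyro data.
    Returns one feature row (mean, std, mean |jerk|) per sliding window."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        feats.append(np.concatenate([w.mean(0), w.std(0),
                                     np.abs(np.diff(w, axis=0)).mean(0)]))
    return np.array(feats)

rng = np.random.default_rng(0)
X = window_features(rng.random((1000, 6)))  # synthetic 6-channel IMU stream
y = rng.choice(["walk", "trot", "canter", "jump"], len(X))  # synthetic labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```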
A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-17 DOI: arxiv-2409.11224
Tetsushi Ohki, Narishige Abe, Hidetsugu Uchida, Shigefumi Yamada
Abstract: Biometric recognition systems, known for their convenience, are widely adopted across various fields. However, their security faces risks depending on the authentication algorithm and deployment environment. Current risk assessment methods face significant challenges in incorporating the crucial factor of attacker motivation, leading to incomplete evaluations. This paper presents a novel human-centered risk evaluation framework that uses conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on attacker motivation. Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases. A survey of 600 Japanese participants demonstrates our method's effectiveness, showing how security measures influence attacker motivation. This approach helps decision-makers customize biometric systems to enhance security while maintaining usability.
Citations: 0
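As a worked illustration of the risk calculation described — a risk value combining the False Acceptance Rate with an attack probability scaled by attacker motivation — consider the following sketch. The multiplicative form and the motivation weight (e.g., a normalized conjoint part-worth utility) are assumptions, not the authors' exact model.

```python
def risk_value(far, base_attack_prob, motivation_weight):
    """far: False Acceptance Rate of the biometric system.
    motivation_weight in [0, 1], e.g. derived from normalized
    conjoint part-worth utilities for a given deployment context."""
    attack_prob = base_attack_prob * motivation_weight
    return far * attack_prob

# e.g. surveillance cameras lower attacker motivation, and thus risk:
print(risk_value(far=0.001, base_attack_prob=0.3, motivation_weight=0.4))
print(risk_value(far=0.001, base_attack_prob=0.3, motivation_weight=0.9))
```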
Exploring Dimensions of Expertise in AR-Guided Psychomotor Tasks
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-17 DOI: arxiv-2409.11599
Steven Yoo, Casper Harteveld, Nicholas Wilson, Kemi Jona, Mohsen Moghaddam
Abstract: This study aimed to explore how novices and experts differ in performing complex psychomotor tasks guided by augmented reality (AR), focusing on decision-making and technical proficiency. Participants were divided into novice and expert groups based on a pre-questionnaire assessing their technical skills and theoretical knowledge of precision inspection. Participants completed a post-study questionnaire that evaluated cognitive load (NASA-TLX), self-efficacy, and experience with the HoloLens 2 and the AR app, along with general feedback. We used multimodal data from AR devices and wearables, including hand tracking, galvanic skin response, and gaze tracking, to measure key performance metrics. We found that experts significantly outperformed novices in decision-making speed, efficiency, accuracy, and dexterity in the execution of technical tasks. Novices exhibited a positive correlation between perceived performance on the NASA-TLX and GSR amplitude, indicating that higher perceived performance is associated with increased physiological stress responses. This study provides a foundation for designing multidimensional expertise-estimation models to enable personalized industrial AR training systems.
Citations: 0
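The reported novice effect is a correlation between NASA-TLX perceived-performance ratings and GSR amplitude; computing such a correlation is straightforward, as in this sketch. The values are illustrative placeholders, and Pearson's r is an assumption about which statistic was used.

```python
from scipy.stats import pearsonr

tlx_performance = [55, 60, 70, 75, 80, 85]        # illustrative TLX ratings
gsr_amplitude   = [0.8, 1.0, 1.3, 1.4, 1.7, 1.9]  # illustrative amplitudes

r, p = pearsonr(tlx_performance, gsr_amplitude)
print(f"r = {r:.2f}, p = {p:.3f}")
```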
Dark Mode or Light Mode? Exploring the Impact of Contrast Polarity on Visualization Performance Between Age Groups
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-17 DOI: arxiv-2409.10841
Zack While, Ali Sarvghad
Abstract: This study examines the impact of positive and negative contrast polarities (i.e., light and dark modes) on the performance of younger adults and people in their late adulthood (PLA). In a crowdsourced study with 134 participants (69 below age 60, 66 aged 60 and above), we assessed their accuracy and time performing analysis tasks across three common visualization types (Bar, Line, Scatterplot) and two contrast polarities (positive and negative). We observed that, across both age groups, the polarity that led to better performance, and the resulting amount of improvement, varied on an individual basis, with each polarity benefiting comparable proportions of participants. However, the contrast polarity that led to better performance did not always match their preferred polarity. Additionally, we observed that the choice of contrast polarity can have an impact on time similar to that of the choice of visualization type, resulting in an average percent difference of around 36%. These findings indicate that, overall, the effects of contrast polarity on visual analysis performance do not noticeably change with age. Furthermore, they underscore the importance of making visualizations available in both contrast polarities to better support a broad audience with differing needs. Supplementary materials for this work can be found at https://osf.io/539a4/.
Citations: 0
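The ~36% figure is an average percent difference in time between polarities. Assuming the symmetric percent-difference formula (an assumption — the paper may compute it differently), the arithmetic looks like this:

```python
def percent_difference(a, b):
    """Symmetric percent difference between two values."""
    return abs(a - b) / ((a + b) / 2) * 100

# e.g. mean task times of 10.0 s (faster polarity) vs 14.4 s (slower):
print(f"{percent_difference(10.0, 14.4):.1f}%")  # ~36.1%
```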
ArticulatePro: A Comparative Study on a Proactive and Non-Proactive Assistant in a Climate Data Exploration Task
arXiv - CS - Human-Computer Interaction Pub Date: 2024-09-17 DOI: arxiv-2409.10797
Roderick Tabalba, Christopher J. Lee, Giorgio Tran, Nurit Kirshenbaum, Jason Leigh
Abstract: Recent advances in Natural Language Interfaces (NLIs) and Large Language Models (LLMs) have transformed our approach to NLP tasks, allowing us to focus more on a pragmatics-based approach. This shift enables more natural interactions between humans and voice assistants, which have previously been challenging to achieve. Pragmatics describes how users often talk out of turn, interrupt each other, or provide relevant information without being explicitly asked (the maxim of quantity). To explore this, we developed a digital assistant that constantly listens to conversations and proactively generates relevant visualizations during data exploration tasks. In a within-subject study, participants interacted with both proactive and non-proactive versions of a voice assistant while exploring the Hawaii Climate Data Portal (HCDP). Results suggest that the proactive assistant enhanced user engagement and facilitated quicker insights. Our study highlights the potential of pragmatic, proactive AI in NLIs and identifies key challenges in its implementation, offering insights for future research.
Citations: 0
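The described architecture boils down to a listen-detect-render loop: monitor the transcript stream, detect a data-related utterance, and emit a visualization without waiting for an explicit command. The skeleton below is purely illustrative — every helper is a hypothetical stand-in, not ArticulatePro's API.

```python
def proactive_loop(transcript_stream, detect_data_intent, make_chart_spec, render):
    """Emit a visualization whenever an utterance implies a data need,
    without an explicit request (the proactive condition)."""
    for utterance in transcript_stream:
        intent = detect_data_intent(utterance)
        if intent is not None:
            render(make_chart_spec(intent))

# Toy demo with stand-in helpers:
proactive_loop(
    ["How was rainfall last winter?", "Anyway, lunch?"],
    detect_data_intent=lambda u: ("rainfall", "monthly") if "rainfall" in u else None,
    make_chart_spec=lambda i: {"mark": "line", "measure": i[0], "bin": i[1]},
    render=print,
)
```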