LAK23: 13th International Learning Analytics and Knowledge Conference (Latest Publications)

METS: Multimodal Learning Analytics of Embodied Teamwork Learning
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2023-03-13 | DOI: 10.1145/3576050.3576076
Authors: Linxuan Zhao, Z. Swiecki, D. Gašević, Lixiang Yan, S. Dix, Hollie Jaggard, Rosie Wotherspoon, Abra Osborne, Xinyu Li, Riordan Alfredo, Roberto Martínez-Maldonado
Abstract: Embodied team learning is a form of group learning that occurs in co-located settings where students must interact with others while actively using resources in the physical learning space to achieve a common goal. In such situations, communication dynamics can be complex, as team discourse segments can happen in parallel at different locations of the physical space with varied team member configurations. This can make it hard for teachers to assess the effectiveness of teamwork and for students to reflect on their own experiences. To address this problem, we propose METS (Multimodal Embodied Teamwork Signature), a method that models team dialogue content in combination with spatial and temporal data to generate a signature of embodied teamwork. We present a study in the context of a highly dynamic healthcare team simulation space where students can move freely. We illustrate how signatures of embodied teamwork can help identify key differences between high- and low-performing teams: i) across the whole learning session; ii) at different phases of learning sessions; and iii) at particular spaces of interest in the learning space.
Citations: 6
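As a rough illustration of the kind of modelling the abstract describes, the sketch below (hypothetical, not the authors' METS pipeline) bins utterances into named spaces of interest and tallies discourse codes per session phase; all column names, zones, and codes are invented.

```python
# Illustrative sketch only: combine coded dialogue segments with positional data
# to tally which discourse codes occur where, per session phase.
import pandas as pd

# Hypothetical input: one row per utterance with speaker, phase, position, and code.
utterances = pd.DataFrame({
    "speaker": ["nurse_1", "nurse_2", "doctor", "nurse_1"],
    "phase":   ["handover", "handover", "treatment", "treatment"],
    "x":       [1.2, 1.4, 4.8, 5.1],
    "y":       [0.9, 1.1, 3.2, 3.0],
    "code":    ["task_allocation", "acknowledgement", "escalation", "escalation"],
})

def spatial_zone(x, y):
    """Map raw coordinates to a named space of interest (zone boundary is made up)."""
    return "bedside" if x > 3 else "handover_area"

utterances["zone"] = [spatial_zone(x, y) for x, y in zip(utterances.x, utterances.y)]

# A crude "signature": counts of discourse codes per (phase, zone) cell, which could
# then be contrasted between high- and low-performing teams.
signature = utterances.groupby(["phase", "zone", "code"]).size().unstack(fill_value=0)
print(signature)
```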
Impact of Non-Cognitive Interventions on Student Learning Behaviors and Outcomes: An analysis of seven large-scale experimental inventions
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2023-03-13 | DOI: 10.1145/3576050.3576073
Authors: Kirk P. Vanacore, Ashish Gurung, Andrew Mcreynolds, Allison S. Liu, S. Shaw, N. Heffernan
Abstract: As evidence grows supporting the importance of non-cognitive factors in learning, computer-assisted learning platforms increasingly incorporate non-academic interventions to influence student learning and learning-related behaviors. Non-cognitive interventions often attempt to influence students’ mindset, motivation, or metacognitive reflection to impact learning behaviors and outcomes. In the current paper, we analyze data from five experiments, involving seven treatment conditions embedded in mastery-based learning activities hosted on a computer-assisted learning platform focused on middle school mathematics. Each treatment condition embodied a specific non-cognitive theoretical perspective. Over seven school years, 20,472 students participated in the experiments. We estimated the effects of each treatment condition on students’ response time, hint usage, likelihood of mastering knowledge components, learning efficiency, and post-test performance. Our analyses reveal a mix of both positive and negative treatment effects on student learning behaviors and performance. Few interventions impacted learning as assessed by the post-tests. These findings highlight the difficulty of positively influencing student learning behaviors and outcomes using non-cognitive interventions.
Citations: 1
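For readers who want a concrete picture of the kind of analysis summarized above, here is a minimal, hypothetical sketch of estimating one treatment condition's effect on one outcome with an OLS regression; the variable names, covariate, and simulated data are assumptions, not the paper's actual pipeline or dataset.

```python
# Minimal sketch: estimate the effect of one intervention condition on one outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),        # 1 = assigned the non-cognitive intervention
    "prior_knowledge": rng.normal(0, 1, n),    # covariate, e.g., a pre-test z-score
})
# Simulated post-test with a small treatment effect buried in noise.
df["post_test"] = 0.5 * df.prior_knowledge + 0.05 * df.treatment + rng.normal(0, 1, n)

# The coefficient on `treatment` is the estimated treatment effect.
model = smf.ols("post_test ~ treatment + prior_knowledge", data=df).fit()
print(model.summary().tables[1])
```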
Towards Automated Analysis of Rhetorical Categories in Students Essay Writings using Bloom’s Taxonomy
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2023-03-13 | DOI: 10.1145/3576050.3576112
Authors: Sehrish Iqbal, Mladen Raković, Guanliang Chen, Tongguang Li, Rafael Ferreira Mello, Yizhou Fan, G. Fiorentino, Naif Radi Aljohani, D. Gašević
Abstract: Essay writing has become one of the most common learning tasks assigned to students enrolled in various courses at different educational levels, owing to the growing demand for future professionals to effectively communicate information to an audience and develop a written product (i.e., an essay). Evaluating a written product requires scorers who manually examine the presence of rhetorical categories, which is a time-consuming task. Machine Learning (ML) approaches have the potential to alleviate this challenge. As a result, several attempts have been made in the literature to automate the identification of rhetorical categories using Rhetorical Structure Theory (RST). However, RST does not provide information regarding students’ cognitive level, which motivates the use of Bloom’s Taxonomy. Therefore, in this research we propose to: i) investigate the extent to which classification of rhetorical categories can be automated based on Bloom’s taxonomy by comparing traditional ML classifiers with the pre-trained language model BERT, and ii) explore the associations between rhetorical categories and writing performance. Our results showed that the BERT model outperformed the traditional ML-based classifiers by 18% in accuracy, indicating it can be used in future analytics tools. Moreover, we found a statistically significant difference between the associations of rhetorical categories in the low-achiever, medium-achiever, and high-achiever groups, which implies that rhetorical categories can be predictive of writing performance.
Citations: 1
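A hedged sketch of one of the "traditional ML classifiers" such a comparison might include, here TF-IDF features with logistic regression; the example sentences and Bloom-inspired labels are invented, and the paper's stronger results come from a fine-tuned BERT model rather than a baseline like this one.

```python
# Sketch of a traditional ML baseline for rhetorical-category classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: sentences paired with Bloom-inspired category labels.
sentences = [
    "In this essay I will describe the main causes of inflation.",
    "Comparing the two models shows that demand-side factors dominate.",
    "Therefore, policy makers should prioritise wage stability.",
    "The evidence above can be combined into a single framework.",
]
labels = ["remember", "analyze", "evaluate", "create"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(sentences, labels)
print(clf.predict(["These findings suggest the argument is well supported."]))
```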
Recurrence Quantification Analysis of Eye Gaze Dynamics During Team Collaboration
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2023-03-13 | DOI: 10.1145/3576050.3576113
Authors: R. Moulder, Brandon M. Booth, Angelina Abitino, Sidney K. D’Mello
Abstract: Shared visual attention between team members facilitates collaborative problem solving (CPS), but little is known about how team-level eye gaze dynamics influence the quality and success of CPS. To better understand the role of shared visual attention during CPS, we collected eye gaze data from 279 individuals solving computer-based physics puzzles in teams of three. We converted eye gaze into discrete screen locations and quantified team-level gaze dynamics using recurrence quantification analysis (RQA). Specifically, we used a centroid-based auto-RQA approach, a pairwise team-member cross-RQA approach, and a multi-dimensional RQA approach to quantify team-level eye gaze dynamics from the eye gaze data of team members. We find that teams differing in composition based on prior task knowledge, gender, and race show few differences in team-level eye gaze dynamics. We also find that RQA metrics of team-level eye gaze dynamics were predictive of task success (all ps < .001). However, the same metrics showed different patterns of feature importance depending on predictive model and RQA type, suggesting some redundancy in task-relevant information. These findings signify that team-level eye gaze dynamics play an important role in CPS and that different forms of RQA pick up on unique aspects of shared attention between team members.
Citations: 1
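To make the RQA terminology concrete, the sketch below computes a recurrence matrix and recurrence rate for categorical gaze-location sequences; the sequences are invented, and a full RQA would also report metrics such as determinism and laminarity.

```python
# Sketch of categorical (cross-)recurrence on discretized gaze locations.
import numpy as np

def recurrence_matrix(seq_a, seq_b):
    """1 where the two sequences visit the same discrete screen region."""
    a = np.asarray(seq_a)[:, None]
    b = np.asarray(seq_b)[None, :]
    return (a == b).astype(int)

member_1 = ["puzzle", "puzzle", "chat", "toolbar", "puzzle"]
member_2 = ["puzzle", "chat", "chat", "puzzle", "puzzle"]

auto_rm  = recurrence_matrix(member_1, member_1)   # auto-recurrence of one member
cross_rm = recurrence_matrix(member_1, member_2)   # cross-recurrence between two members

# Recurrence rate: proportion of recurrent points (excluding the diagonal for auto-RQA).
rr_auto = (auto_rm.sum() - len(member_1)) / (auto_rm.size - len(member_1))
rr_cross = cross_rm.mean()
print(f"auto RR = {rr_auto:.2f}, cross RR = {rr_cross:.2f}")
```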
Using Transformer Language Models to Validate Peer-Assigned Essay Scores in Massive Open Online Courses (MOOCs)
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2023-03-13 | DOI: 10.1145/3576050.3576098
Authors: Wesley Morris, S. Crossley, Langdon Holmes, Anne Trumbore
Abstract: Massive Open Online Courses (MOOCs) such as those offered by Coursera are popular ways for adults to gain important skills, advance their careers, and pursue their interests. Within these courses, students are often required to compose, submit, and peer review written essays, providing a valuable pedagogical experience for the student and a wealth of natural language data for the educational researcher. However, the scores provided by peers do not always reflect the actual quality of the text, raising questions about the reliability and validity of the scores. This study evaluates methods to increase the reliability of MOOC peer-review ratings through a series of validation tests on peer-reviewed essays. Reliability of reviewers was based on correlations between text length and essay quality. Raters were pruned based on score variance and the lexical diversity observed in their comments to create subsets of raters. Each subset was then used as training data to fine-tune distilBERT large language models to automatically score essay quality as a measure of validation. The accuracy of each language model for each subset was evaluated. We find that training language models on data subsets produced by more reliable raters, selected on a combination of score variance and lexical diversity, produces more accurate essay-scoring models. The approach developed in this study should allow for more reliable peer-review scoring in MOOCs, affording greater credibility within these systems.
Citations: 2
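The rater-pruning step described above can be pictured with the hypothetical sketch below: raters with zero score variance or low lexical diversity in their comments are dropped before the surviving reviews are used as fine-tuning data. The fine-tuning itself is omitted, and the thresholds and data are made up.

```python
# Sketch of pruning peer raters by score variance and comment lexical diversity.
import pandas as pd

reviews = pd.DataFrame({
    "rater":   ["r1", "r1", "r1", "r2", "r2", "r2", "r3", "r3", "r3"],
    "score":   [5, 5, 5, 2, 4, 5, 3, 3, 4],
    "comment": ["good", "good", "good",
                "clear thesis but weak evidence", "well structured", "strong conclusion",
                "ok", "fine essay", "needs citations"],
})

def lexical_diversity(comments):
    tokens = " ".join(comments).lower().split()
    return len(set(tokens)) / max(len(tokens), 1)

stats = reviews.groupby("rater").agg(
    score_var=("score", "var"),
    diversity=("comment", lexical_diversity),
)
# Keep only raters whose scores vary and whose comments are lexically diverse.
reliable = stats[(stats.score_var > 0) & (stats.diversity > 0.5)].index
training_subset = reviews[reviews.rater.isin(reliable)]
print(training_subset)
```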
TikTok as Learning Analytics Data: Framing Climate Change and Data Practices
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2023-03-13 | DOI: 10.1145/3576050.3576055
Authors: Ha Nguyen
Abstract: Climate change has far-reaching impacts on communities around the world. However, climate change education has more often focused on scientific facts and statistics at a global scale than on experiences at personal and local scales. To understand how to frame climate change education, I turn to youth-created videos on TikTok, a video-sharing social media platform. Semantic network analysis of hashtags related to climate change reveals multifaceted, intertwining discourse around awareness of climate change consequences, calls for action to reduce human impacts on natural systems, and environmental activism. I further explore how youth integrate data from personal, lived experiences into climate change discussions. A higher usage of the second-person perspective ("you", i.e., addressing the audience), prosocial and agency words, and a negative messaging tone are associated with higher odds of a video integrating lived experiences. These findings illustrate the platform's affordances: in communicating to a broad audience, youth take on agency and prosocial stances and express emotions to relate to viewers and situate their content. Findings suggest the utility of learning analytics for exploring youth's perspectives and providing insights to frame climate change education in ways that elevate lived experiences.
Citations: 2
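A minimal sketch of the hashtag co-occurrence side of a semantic network analysis, assuming hashtags have already been extracted per video; the videos, tags, and use of degree centrality are illustrative only.

```python
# Sketch: build a hashtag co-occurrence network and rank hashtags by centrality.
from itertools import combinations
import networkx as nx

videos = [
    ["climatechange", "climateaction", "fyp"],
    ["climatechange", "wildfire", "myhometown"],
    ["climateaction", "wildfire", "activism"],
]

G = nx.Graph()
for tags in videos:
    for a, b in combinations(sorted(set(tags)), 2):
        # Increment edge weight each time two hashtags appear on the same video.
        weight = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)

# Degree centrality as a rough indicator of which hashtags anchor the discourse.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```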
Protected Attributes Tell Us Who, Behavior Tells Us How: A Comparison of Demographic and Behavioral Oversampling for Fair Student Success Modeling
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2022-12-20 | DOI: 10.1145/3576050.3576149
Authors: J. Cock, Muhammad Bilal, Richard Davis, M. Marras, Tanja Kaser
Abstract: Algorithms deployed in education can shape the learning experience and success of a student. It is therefore important to understand whether and how such algorithms might create inequalities or amplify existing biases. In this paper, we analyze the fairness of models that use behavioral data to identify at-risk students and suggest two novel pre-processing approaches for bias mitigation. Based on the concept of intersectionality, the first approach involves intelligent oversampling on combinations of demographic attributes. The second approach does not require any knowledge of demographic attributes and is based on the assumption that such attributes are a (noisy) proxy for student behavior. We hence propose to directly oversample different types of behaviors identified in a cluster analysis. We evaluate our approaches on data from (i) an open-ended learning environment and (ii) a flipped classroom course. Our results show that both approaches can mitigate model bias. Directly oversampling on behavior is a valuable alternative when demographic metadata is not available. Source code and extended results are provided at https://github.com/epfl-ml4ed/behavioral-oversampling.
Citations: 0
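The second (behavioral) approach can be sketched roughly as below, under the assumption that behavior clusters stand in for demographic groups: students are clustered on behavioral features and each cluster is bootstrapped up to the size of the largest one before model training. The features, cluster count, and data are illustrative, not the authors' released implementation.

```python
# Sketch of behavioral oversampling: balance behavior clusters before training.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.utils import resample

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))      # behavioral features (e.g., regularity, effort)
y = rng.integers(0, 2, 300)        # pass/fail label

clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
target = np.bincount(clusters).max()

X_parts, y_parts = [], []
for c in np.unique(clusters):
    idx = np.where(clusters == c)[0]
    boot = resample(idx, replace=True, n_samples=int(target), random_state=42)
    X_parts.append(X[boot])
    y_parts.append(y[boot])

X_bal, y_bal = np.vstack(X_parts), np.concatenate(y_parts)
print(X.shape, "->", X_bal.shape)   # every behavior cluster now has equal weight
```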
Insights into undergraduate pathways using course load analytics
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2022-12-20 | DOI: 10.1145/3576050.3576081
Authors: Conrad Borchers, Z. Pardos
Abstract: Course load analytics (CLA) inferred from LMS and enrollment features can offer students a more accurate representation of course workload than credit hours and potentially aid in their course selection decisions. In this study, we produce and evaluate the first machine-learned predictions of student course load ratings and generalize our model to the full 10,000-course catalog of a large public university. We then retrospectively analyze longitudinal differences in the semester load of students' course selections throughout their degrees. CLA by semester shows that a student's first semester at the university is among their highest-load semesters, whereas a credit hour-based analysis would indicate it is among their lowest. Investigating what role predicted course load may play in program retention, we find that students who maintain a semester load that is low as measured by credit hours but high as measured by CLA are more likely to leave their program of study. This discrepancy in course load is particularly pertinent in STEM and associated with high prerequisite courses. Our findings have implications for academic advising, institutional handling of the freshman experience, and student-facing analytics to help students better plan, anticipate, and prepare for their selected courses.
Citations: 1
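As a rough illustration of the general idea (not the paper's model or feature set), the sketch below fits a regressor mapping hypothetical LMS/enrollment features to a survey-style load rating and then compares a schedule's total credit hours with its total predicted load.

```python
# Sketch: learn a course load rating from course features, then sum it per schedule.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
courses = pd.DataFrame({
    "credit_hours":   rng.integers(1, 5, 200),
    "weekly_lms_min": rng.normal(120, 40, 200),   # average weekly time on the LMS
    "assignments":    rng.integers(2, 15, 200),
    "enrollment":     rng.integers(20, 400, 200),
})
# Hypothetical survey-based load ratings used as the training target.
courses["load_rating"] = (
    0.4 * courses.assignments + 0.01 * courses.weekly_lms_min + rng.normal(0, 1, 200)
)

features = courses.drop(columns="load_rating")
model = GradientBoostingRegressor().fit(features, courses.load_rating)
courses["predicted_load"] = model.predict(features)

# The two load measures can diverge for the same four-course schedule.
schedule = courses.sample(4, random_state=7)
print("credit hours:", schedule.credit_hours.sum(),
      "| predicted CLA:", round(schedule.predicted_load.sum(), 1))
```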
Trusting the Explainers: Teacher Validation of Explainable Artificial Intelligence for Course Design
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2022-12-17 | DOI: 10.1145/3576050.3576147
Authors: Vinitra Swamy, Sijia Du, M. Marras, Tanja Kaser
Abstract: Deep learning models for learning analytics have become increasingly popular over the last few years; however, these approaches are still not widely adopted in real-world settings, likely due to a lack of trust and transparency. In this paper, we tackle this issue by implementing explainable AI methods for black-box neural networks. This work focuses on the context of online and blended learning and the use case of student success prediction models. We use a pairwise study design, enabling us to investigate controlled differences between pairs of courses. Our analyses cover five course pairs that differ in one educationally relevant aspect and two popular instance-based explainable AI methods (LIME and SHAP). We quantitatively compare the distances between the explanations across courses and methods. We then validate the explanations of LIME and SHAP with 26 semi-structured interviews of university-level educators regarding which features they believe contribute most to student success, which explanations they trust most, and how they could transform these insights into actionable course design decisions. Our results show that, quantitatively, explainers significantly disagree with each other about what is important, and, qualitatively, experts themselves do not agree on which explanations are most trustworthy. All code, extended results, and the interview protocol are provided at https://github.com/epfl-ml4ed/trusting-explainers.
Citations: 2
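One simple way to quantify the kind of explainer disagreement reported above is to normalize each method's feature attributions for the same student and compare them with a distance measure and a rank correlation; the feature names and attribution values below are invented stand-ins for LIME and SHAP outputs, not the paper's data.

```python
# Sketch: measure disagreement between two explainers' feature attributions.
import numpy as np
from scipy.spatial.distance import cosine
from scipy.stats import spearmanr

features = ["video_time", "forum_posts", "quiz_regularity", "late_submissions"]
lime_attr = np.array([0.42, 0.10, 0.35, -0.20])   # made-up LIME attributions
shap_attr = np.array([0.15, 0.05, 0.55, -0.30])   # made-up SHAP attributions

def normalize(v):
    return v / np.abs(v).sum()

dist = cosine(normalize(lime_attr), normalize(shap_attr))
rho, _ = spearmanr(lime_attr, shap_attr)
print(f"cosine distance = {dist:.2f}, rank agreement rho = {rho:.2f}")
```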
Do Not Trust a Model Because It is Confident: Uncovering and Characterizing Unknown Unknowns to Student Success Predictors in Online-Based Learning
LAK23: 13th International Learning Analytics and Knowledge Conference | Pub Date: 2022-12-16 | DOI: 10.1145/3576050.3576148
Authors: Roberta Galici, Tanja Kaser, G. Fenu, M. Marras
Abstract: Student success models may be prone to developing weak spots, i.e., examples that are hard to classify accurately due to insufficient representation during model creation. This weakness is one of the main factors undermining users’ trust, since model predictions could, for instance, lead an instructor not to intervene with a student in need. In this paper, we unveil the need to detect and characterize unknown unknowns in student success prediction in order to better understand when models may fail. Unknown unknowns include the students for which the model is highly confident in its predictions but is actually wrong. Therefore, we cannot rely solely on the model’s confidence when evaluating prediction quality. We first introduce a framework for the identification and characterization of unknown unknowns. We then assess its informativeness on log data collected from flipped courses and online courses using quantitative analyses and interviews with instructors. Our results show that unknown unknowns are a critical issue in this domain and that our framework can be applied to support their detection. The source code is available at https://github.com/epfl-ml4ed/unknown-unknowns.
Citations: 2
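The core notion of an "unknown unknown" (a confident but wrong prediction) can be sketched as below on synthetic data; the model, threshold, and dataset are illustrative, not the released framework.

```python
# Sketch: flag held-out examples the model gets wrong while being highly confident.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, flip_y=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)
pred = proba.argmax(axis=1)
confidence = proba.max(axis=1)

# "Unknown unknowns": misclassified cases where confidence exceeds a high threshold.
unknown_unknowns = (pred != y_te) & (confidence >= 0.9)
print(f"{unknown_unknowns.sum()} high-confidence errors out of {len(y_te)} test points")
```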