Latest publications in Assessment in Education-Principles Policy & Practice

Complementary strengths? Evaluation of a hybrid human-machine scoring approach for a test of oral academic English
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-07-04 DOI: 10.1080/0969594X.2021.1979466
Larry Davis, S. Papageorgiou
{"title":"Complementary strengths? Evaluation of a hybrid human-machine scoring approach for a test of oral academic English","authors":"Larry Davis, S. Papageorgiou","doi":"10.1080/0969594X.2021.1979466","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1979466","url":null,"abstract":"ABSTRACT Human raters and machine scoring systems potentially have complementary strengths in evaluating language ability; specifically, it has been suggested that automated systems might be used to make consistent measurements of specific linguistic phenomena, whilst humans evaluate more global aspects of performance. We report on an empirical study that explored the possibility of combining human and machine scores using responses from the speaking section of the TOEFL iBT® test. Human raters awarded scores for three sub-constructs: delivery, language use and topic development. The SpeechRaterSM automated scoring system produced scores for delivery and language use. Composite scores computed from three different combinations of human and automated analytic scores were equally or more reliable than human holistic scores, probably due to the inclusion of multiple observations in composite scores. However, composite scores calculated solely from human analytic scores showed the highest reliability and reliability steadily decreased as more machine scores replaced human scores.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83110288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
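The composite-score reliability comparison described in this abstract can be illustrated with a minimal sketch. The Python example below is not taken from the study; it shows one plausible way to estimate the internal consistency (Cronbach's alpha) of an all-human analytic composite versus a hybrid composite in which machine scores replace some human scores. The column names, score scale, and data are invented for illustration.

```python
# Hedged sketch (not from the paper): comparing the reliability of an all-human
# analytic composite with a hybrid human-machine composite via Cronbach's alpha.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha; rows are examinees, columns are analytic scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Invented analytic scores for five examinees on a 0-4 scale (illustrative only).
scores = pd.DataFrame({
    "human_delivery":       [3.0, 2.0, 4.0, 1.0, 3.0],
    "human_language_use":   [3.0, 2.5, 3.5, 1.5, 2.5],
    "human_topic_dev":      [2.5, 2.0, 4.0, 1.0, 3.0],
    "machine_delivery":     [2.8, 2.2, 3.7, 1.3, 2.9],
    "machine_language_use": [3.1, 2.4, 3.6, 1.2, 2.6],
})

all_human = scores[["human_delivery", "human_language_use", "human_topic_dev"]]
hybrid = scores[["human_topic_dev", "machine_delivery", "machine_language_use"]]

print(f"alpha, all-human composite: {cronbach_alpha(all_human):.2f}")
print(f"alpha, hybrid composite:    {cronbach_alpha(hybrid):.2f}")
```

The sketch only illustrates the mechanics of comparing composites built from different score sources; the study itself also benchmarked composites against human holistic scores.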
Assessing L2 English speaking using automated scoring technology: examining automarker reliability
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-07-04 DOI: 10.1080/0969594X.2021.1979467
Jing Xu, Edmund Jones, V. Laxton, E. Galaczi
{"title":"Assessing L2 English speaking using automated scoring technology: examining automarker reliability","authors":"Jing Xu, Edmund Jones, V. Laxton, E. Galaczi","doi":"10.1080/0969594X.2021.1979467","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1979467","url":null,"abstract":"ABSTRACT Recent advances in machine learning have made automated scoring of learner speech widespread, and yet validation research that provides support for applying automated scoring technology to assessment is still in its infancy. Both the educational measurement and language assessment communities have called for greater transparency in describing scoring algorithms and research evidence about the reliability of automated scoring. This paper reports on a study that investigated the reliability of an automarker using candidate responses produced in an online oral English test. Based on ‘limits of agreement’ and multi-faceted Rasch analyses on automarker scores and individual examiner scores, the study found that the automarker, while exhibiting excellent internal consistency, was slightly more lenient than examiner fair average scores, particularly for low-proficiency speakers. Additionally, it was found that an automarker uncertainty measure termed Language Quality, which indicates the confidence of speech recognition, was useful for predicting automarker reliability and flagging abnormal speech.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74694089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
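The ‘limits of agreement’ analysis mentioned in this abstract is easy to outline. The Python sketch below is a generic Bland-Altman computation on made-up paired scores, not the study's actual procedure or data; a positive mean difference would indicate automarker leniency relative to the examiner scores.

```python
# Hedged sketch (not the study's code): Bland-Altman limits of agreement between
# automarker scores and examiner fair-average scores for the same candidates.
import numpy as np

def limits_of_agreement(scores_a: np.ndarray, scores_b: np.ndarray):
    """Mean difference (bias) and 95% limits of agreement for paired ratings."""
    diffs = scores_a - scores_b
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired scores on an illustrative 0-9 scale.
automarker = np.array([5.5, 4.0, 6.5, 3.5, 5.0, 4.5])
examiner   = np.array([5.0, 4.0, 6.0, 3.0, 5.0, 4.0])

bias, lower, upper = limits_of_agreement(automarker, examiner)
print(f"bias = {bias:.2f}, 95% limits of agreement = ({lower:.2f}, {upper:.2f})")
```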
Use of innovative technology in oral language assessment
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-07-04 DOI: 10.1080/0969594X.2021.2004530
Fumiyo Nakatsuhara, Vivien Berry
{"title":"Use of innovative technology in oral language assessment","authors":"Fumiyo Nakatsuhara, Vivien Berry","doi":"10.1080/0969594X.2021.2004530","DOIUrl":"https://doi.org/10.1080/0969594X.2021.2004530","url":null,"abstract":"The theme of the very first Special Issue of Assessment in Education: Principles, Policy and Practice (Volume 10, Issue 3, published in 2003) was ‘Assessment for the Digital Age’. The editorial of that Special Issue notes that the aim of the volume was to ‘draw the attention of the international assessment community to a range of potential and actual relationships between digital technologies and assessment’ (McFarlane, 2003, p. 261). Since then, there is no doubt that the role of digital technologies in assessment has evolved even more dynamically than any assessment researchers and practitioners had expected. In particular, exponential advances in technology and the increased availability of high-speed internet in recent years have not only changed the way we communicate orally in social, professional, and educational contexts, but also the ways in which we assess oral language. Revisiting the same theme after almost two decades, but specifically from an oral language assessment perspective, this Special Issue presents conceptual and empirical papers that discuss the opportunities and challenges that the latest innovative affordances offer. The current landscape of oral language assessment can be characterised by numerous examples of the development and use of digital technology (Sawaki, 2022; Xi, 2022). While these innovations have opened the door to types of speaking test tasks which were previously not possible and have provided language test practitioners with more efficient ways of delivering and scoring tests, it should be kept in mind that ‘each of the affordances offered by technology also raises a new set of issues to be tackled’ (Chapelle, 2018). This does not mean that we should be excessively concerned or sceptical about technology-mediated assessments; it simply means that greater transparency is needed. Up-to-date information and appropriate guidance about the use of innovative technology in language testing and, more importantly, what language skills are elicited from test-takers and how they are measured, should be available to test users so that they can both embrace and critically engage with the fast-moving developments in the field (see also Khabbazbashi et al., 2021; Litman et al., 2018). This current Special Issue therefore aims to contribute to and to encourage transparent dialogues by test researchers, practitioners, and users within the international testing community on recent research which investigates both methods of delivery and methods of scoring in technology-mediated oral language assessments. Of the seven articles in this volume, the first three are on the application of technologies for speaking test delivery. In the opening article, Ockey and Neiriz offer a conceptual paper examining five models of technology-delivered assessments of oral communication that have been utilised over the past three decades. 
Drawing on Bachman and Palmer's (1996) qualities of test usefulness, Ockey and Hirch's (2020) assessment o","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84005088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Evaluating technology-mediated second language oral communication assessment delivery models
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-07-04 DOI: 10.1080/0969594X.2021.1976106
G. Ockey, Reza Neiriz
{"title":"Evaluating technology-mediated second language oral communication assessment delivery models","authors":"G. Ockey, Reza Neiriz","doi":"10.1080/0969594X.2021.1976106","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1976106","url":null,"abstract":"ABSTRACT As our understanding of the construct of oral communication (OC) has evolved, so have the possibilities of computer technology undertaking the delivery of tests that measure this ability. It is paramount to understand to what extent such developments lead to accurate, comprehensive, and useful assessment of OC. In this paper, we discuss five models of technology-delivered OC assessment that have appeared in the past three decades. We evaluate these models in terms of how well their respective methods aid in assessing OC. To achieve this aim, we use a framework which takes into account a contemporary view of OC ability, including the call for incorporating English as a lingua franca (ELF) considerations into English language assessment. The evaluation of the five models suggests strengths and weaknesses of each that should be considered when determining which is used for a particular purpose.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89507154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Teacher use of digital technologies for school-based assessment: a scoping review
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-05-04 DOI: 10.1080/0969594X.2021.1929828
Christopher N. Blundell
{"title":"Teacher use of digital technologies for school-based assessment: a scoping review","authors":"Christopher N. Blundell","doi":"10.1080/0969594X.2021.1929828","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1929828","url":null,"abstract":"ABSTRACT This paper presents a scoping review of, firstly, how teachers use digital technologies for school-based assessment, and secondly, how these assessment-purposed digital technologies are used in teacher- and student-centred pedagogies. It draws on research about the use of assessment-purposed digital technologies in school settings, published from 2009 to 2019 in peer-reviewed journals and conference proceedings. The findings indicate automated marking and computer- and web-based assessment technologies support established school-based assessment practices, and that game-based and virtual/augmented environments and ePortfolios diversify the modes of assessment and the evidence of learning collected. These technologies improve the efficiency of assessment practices in teacher-centred pedagogies and provide latitude to assess evidence of learning from more diverse modes of engagement in student-centred pedagogies. Current research commonly focuses on validating specific technologies and most commonly relates to automated assessment of closed outcomes within a narrow range of learning areas; these limits indicate opportunities for future research.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74148434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Conceptualising a Fairness Framework for Assessment Adjusted Practices for Students with Disability: An Empirical Study
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-05-04 DOI: 10.1080/0969594X.2021.1932736
A. Rasooli, Maryam Razmjoee, J. Cumming, E. Dickson, A. Webster
{"title":"Conceptualising a Fairness Framework for Assessment Adjusted Practices for Students with Disability: An Empirical Study","authors":"A. Rasooli, Maryam Razmjoee, J. Cumming, E. Dickson, A. Webster","doi":"10.1080/0969594X.2021.1932736","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1932736","url":null,"abstract":"ABSTRACT Given the increasing diversity of teachers and students in 21st century classrooms, fairness is a key consideration in classroom adjusted assessment and instructional practices for students with disability. Despite its significance, little research has attempted to explicitly conceptualise fairness for classroom assessment adjusted practices. The purpose of this study is to leverage the multiple perspectives of secondary school students with disability, their teachers, and parents to build a multi-dimensional framework of fairness for assessment adjusted practices. Open-ended survey data were collected from 60 students with disability, 45 teachers, and 58 parents in four states in Australia and were analyzed using qualitative inductive analysis. The findings present a multidimensional framework for assessment adjusted practices that include interactions across elements of assessment practices, socio-emotional environment, overall conceptions of fairness, and contextual barriers and facilitators. The interactions across these elements influence the learning opportunities and academic outcomes for students with disability.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76263162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Who is feedback for? The influence of accountability and quality assurance agendas on the enactment of feedback processes
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-05-04 DOI: 10.1080/0969594X.2021.1926221
N. Winstone, D. Carless
{"title":"Who is feedback for? The influence of accountability and quality assurance agendas on the enactment of feedback processes","authors":"N. Winstone, D. Carless","doi":"10.1080/0969594X.2021.1926221","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1926221","url":null,"abstract":"ABSTRACT In education systems across the world, teachers are under increasing quality assurance scrutiny in relation to the provision of feedback comments to students. This is particularly pertinent in higher education, where accountability arising from student dissatisfaction with feedback causes concern for institutions. Through semi-structured interviews with twenty-eight educators from a range of institution types, we investigated how educators perceive, interpret, and enact competing functions of feedback. The data demonstrate that educators often experienced professional dissonance where perceived quality assurance requirements conflicted with their own beliefs about the centrality of student learning in feedback processes. Such dissonance arose from the pressure to secure student satisfaction, and avoid complaints. The data also demonstrate that feedback does ‘double duty’ through the requirement to manage competing audiences for feedback comments. Quality enhancement of feedback processes could profitably focus less on teacher inputs and more on evidence of student response to feedback.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73363032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Who is feedback for?
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-05-04 DOI: 10.1080/0969594X.2021.1975996
Therese N. Hopfenbeck
{"title":"Who is feedback for?","authors":"Therese N. Hopfenbeck","doi":"10.1080/0969594X.2021.1975996","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1975996","url":null,"abstract":"The articles in this regular issue look at different forms of assessment practices such as grading and feedback and how stakeholders interact with the outcomes of these practices. The first article presents a research study from Sweden on holistic and analytic grading. As grades are the main criteria for selecting schools for higher education, and they are based upon teachers’ judgement, grading is rather high stakes for students in Sweden. Johnson et al. (this issue) set up an experimental study where Swedish teachers were randomly assigned to two different conditions (i.e. analytic or holistic grading), in either English as a foreign language (EFL) or mathematics. The research study was conducted online, with only grades and written justification from the teachers collected by the research team. In the analytic condition, teachers received authentic student responses from four students four times, and were asked to grade these through an Internet-based form. At the end of the semester, teachers were asked to provide an overall grade. In the holistic condition, teachers received all material at one time, and would therefore not be influenced by previous experiences. Findings indicate that analytic grading was preferable to holistic grading in terms of agreement among teachers, with stronger effects found in EFL. Teachers in the analytic conditions made more references to grade levels without specifying criteria, while teachers in the holistic conditions provided more references to criteria in their justifications. Although the participants volunteered for the experiment and it was relatively small, the study offers important empirical results in an area where there are still more questions than solutions. The authors propose further investigations into how to increase agreement between teachers’ grading, including using moderation procedures where teachers could review each other’s grading. In the second article, Yan et al. (this issue) present a systematic review on factors influencing teacher’s intentions and implementations regarding formative assessment. The 52 studies included in the qualitative synthesis discuss issues such as how teachers’ selfefficacy and education and training, influence their intention to conduct formative assessment, and add to previous reviews on implementation of formative assessment. More specifically, it demonstrates how not only contextual but also personal factors need to be taken into consideration when designing school-based support measures or teacher professional development programmes with the aim to promote formative assessment practices. In the article Who is feedback for? The influence of accountability and quality assurance agendas on the enactment of feedback processes, Winstone & Cardiff (this issue) explore the consequences of the evaluation and accountability measures in higher education in UK, and how it influences and interacts with feedback processes from teachers to students. 
The study is of imp","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84655704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
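The editorial's point about agreement between teachers' grades under analytic versus holistic conditions suggests a small illustration. The Python sketch below computes exact agreement and Cohen's kappa for two hypothetical teachers grading the same ten responses; the letter grades are invented and the calculation is generic, not the procedure used in the study discussed.

```python
# Hedged sketch (not from the study): exact agreement and Cohen's kappa
# between two hypothetical teachers grading the same set of responses.
from collections import Counter

def exact_agreement(grades_a, grades_b):
    """Proportion of responses awarded the same grade by both teachers."""
    return sum(a == b for a, b in zip(grades_a, grades_b)) / len(grades_a)

def cohens_kappa(grades_a, grades_b):
    """Chance-corrected agreement between two raters on the same responses."""
    n = len(grades_a)
    p_observed = exact_agreement(grades_a, grades_b)
    freq_a, freq_b = Counter(grades_a), Counter(grades_b)
    p_chance = sum(freq_a[g] * freq_b[g] for g in set(grades_a) | set(grades_b)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Invented letter grades for ten student responses (illustrative only).
teacher_1 = ["A", "B", "B", "C", "E", "D", "B", "C", "A", "D"]
teacher_2 = ["A", "B", "C", "C", "E", "D", "B", "B", "A", "C"]

print(f"exact agreement: {exact_agreement(teacher_1, teacher_2):.2f}")
print(f"Cohen's kappa:   {cohens_kappa(teacher_1, teacher_2):.2f}")
```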
The uses and misuses of centralised high stakes examinations-Assessment Policy and Practice in Georgia
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-04-04 DOI: 10.1080/0969594X.2021.1900775
Sophia Gorgodze, Lela Chakhaia
{"title":"The uses and misuses of centralised high stakes examinations-Assessment Policy and Practice in Georgia","authors":"Sophia Gorgodze, Lela Chakhaia","doi":"10.1080/0969594X.2021.1900775","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1900775","url":null,"abstract":"ABSTRACT Trust in centralised high-stakes exams in Georgia has grown since 2005, when the introduction of nationwide standardised tests for university entry successfully eradicated the deep-rooted corruption in the admissions system. In 2011, another set of high-stakes exams were introduced for school graduation, resulting in a minimum of 12 exams for secondary school graduation and university entry. The examination system reform in 2019 was limited to abolishing the school graduation exams and reducing the number of university admission exams. Fewer exams instigated the fear of decrease in student motivation and the deterioration of learning outcomes among some stakeholders. This article describes how centralised high-stakes assessments have become an integral part of the education system, cites available evidence on their impact, accounts for recent changes, and argues that overreliance on centralised high-stakes exams is due to complex educational, political and social processes that make it difficult to transform the system.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89261196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Signature assessment and feedback practices in the disciplines
IF 3.2 · Tier 3 · Education
Assessment in Education-Principles Policy & Practice Pub Date : 2021-03-04 DOI: 10.1080/0969594X.2021.1930444
Edd Pitt, Kathleen M. Quinlan
{"title":"Signature assessment and feedback practices in the disciplines","authors":"Edd Pitt, Kathleen M. Quinlan","doi":"10.1080/0969594X.2021.1930444","DOIUrl":"https://doi.org/10.1080/0969594X.2021.1930444","url":null,"abstract":"In the main, attention to disciplinary practices has been neglected in assessment and feedback research (Coffey et al., 2011; Cowie & Moreland, 2015). Only recently, the longstanding interest in authentic assessment (e.g. Wiggins, 1989) has re-surfaced in higher education literature on authentic assessment design (Ashford-Rowe et al., 2014; Villarroel et al., 2018) and authentic feedback (Dawson et al., 2020). To address this gap, in our 2019 call for papers for this special issue, we sought articles that would explore the potential of what we called ‘signature’ assessment and feedback practices. Just as signature pedagogies (Shulman, 2005) have directed attention to disciplineand profession-specific teaching practices in higher education, we used the term ‘signature’ to invite researchers and educators to consider discipline-specific assessment and feedback practices. While these signatures will be authentic to a discipline, the term implies that they will be uniquely characteristic of a particular discipline. Thus, we invited researchers and educators to dig deeply into what makes a discipline or profession special and distinct from other fields. Because attention to disciplines has the potential to connect primary and secondary with tertiary education, which is often siloed in its own journals, the call for papers also explicitly sought examples from different levels of education. Two years later, this special issue contains five theoretically framed and grounded empirical papers that: a) situate particular assessment and feedback practices within a discipline; b) analyse how engagement with those assessment and feedback activities allows students to participate more fully or effectively within the disciplinary or professional community, and c) illuminate new aspects of assessment and feedback. We (Quinlan and Pitt, this issue) conclude this special issue with an article that draws on the five empirical papers to construct a taxonomy for advancing research on signature assessment and feedback practices.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":3.2,"publicationDate":"2021-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75536229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2