Assessing Writing: Latest Publications

Effects of a genre and topic knowledge activation device on a standardized writing test performance
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-10-01 · DOI: 10.1016/j.asw.2024.100898
Natalia Ávila Reyes, Diego Carrasco, Rosario Escribano, María Jesús Espinosa, Javiera Figueroa, Carolina Castillo
{"title":"Effects of a genre and topic knowledge activation device on a standardized writing test performance","authors":"Natalia Ávila Reyes ,&nbsp;Diego Carrasco ,&nbsp;Rosario Escribano ,&nbsp;María Jesús Espinosa ,&nbsp;Javiera Figueroa ,&nbsp;Carolina Castillo","doi":"10.1016/j.asw.2024.100898","DOIUrl":"10.1016/j.asw.2024.100898","url":null,"abstract":"<div><div>The aim of this article was twofold: first, to introduce a design for a writing test intended for application in large-scale assessments of writing, and second, to experimentally examine the effects of employing a device for activating prior knowledge of topic and genre as a means of controlling construct-irrelevant variance and enhancing validity. An authentic, situated writing task was devised, offering students a communicative purpose and a defined audience. Two devices were utilized for the cognitive activation of topic and genre knowledge: an infographic and a genre model. The participants in this study were 162 fifth-grade students from Santiago de Chile, with 78 students assigned to the experimental condition (with activation device) and 84 students assigned to the control condition (without activation device). The results demonstrate that the odds of presenting good writing ability are higher for students who were part of the experimental group, even when controlling for text transcription ability, considered a predictor of writing. These findings hold implications for the development of large-scale tests of writing guided by principles of educational and social justice.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100898"},"PeriodicalIF":4.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142702966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
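Since the abstract above reports results as odds of good writing while controlling for transcription ability, the sketch below shows the general shape of such a logistic regression in Python. It is an illustration only, under assumed variable names (good_writing, condition, transcription) and invented toy data; the authors' actual model specification and software may differ.

```python
# Hypothetical sketch: odds of "good writing" by experimental condition,
# controlling for transcription ability. Toy data, invented variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    # 1 = essay rated as showing good writing ability, 0 = not
    "good_writing":  [1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1],
    # 1 = experimental group (with activation device), 0 = control
    "condition":     [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    # transcription-ability covariate (arbitrary scale)
    "transcription": [4.2, 3.0, 4.5, 3.1, 4.0, 3.8, 4.1, 2.9, 3.9, 3.7, 2.8, 3.3],
})

model = smf.logit("good_writing ~ condition + transcription", data=df).fit(disp=False)
# Exponentiated coefficients are odds ratios, e.g. how the odds of good writing
# change for students in the activation-device condition.
print(np.round(np.exp(model.params), 2))
```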
Detecting and assessing AI-generated and human-produced texts: The case of second language writing teachers
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-10-01 · DOI: 10.1016/j.asw.2024.100899
Loc Nguyen, Jessie S. Barrot
{"title":"Detecting and assessing AI-generated and human-produced texts: The case of second language writing teachers","authors":"Loc Nguyen ,&nbsp;Jessie S. Barrot","doi":"10.1016/j.asw.2024.100899","DOIUrl":"10.1016/j.asw.2024.100899","url":null,"abstract":"<div><div>Artificial intelligence (AI) technologies have recently attracted the attention of second language (L2) writing scholars and practitioners. While they recognize the tool’s viability, they also raised the potential adverse effects of these tools on accurately reflecting students’ actual level of writing performance. It is, therefore, crucial for teachers to discern AI-generated essays from human-produced work for more accurate assessment. However, limited information is available about how they assess and distinguish between essays produced by AI and human authors. Thus, this study analyzed the scores and comments teachers gave and looked into their strategies for identifying the source of the essays. Findings showed that essays by a native English-speaking (NS) lecturer and ChatGPT were rated highly. Meanwhile, essays by an NS college student, non-native English-speaking (NNS) college student, and NNS lecturer scored lower, which made them distinguishable from an AI-generated text. The study also revealed that teachers could not consistently identify the AI-generated text, particularly those written by an NS professional. These findings were attributed to teachers’ past engagement with AI writing tools, familiarity with common L2 learner errors, and exposure to native and non-native English writing. From these results, implications for L2 writing instruction and future research are discussed.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100899"},"PeriodicalIF":4.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142656111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A comparative study of voice in Chinese English-major undergraduates’ timed and untimed argument writing
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-10-01 · DOI: 10.1016/j.asw.2024.100896
Xiangmin Zeng, Jie Liu, Neil Evan Jon Anthony Bowen
{"title":"A comparative study of voice in Chinese English-major undergraduates’ timed and untimed argument writing","authors":"Xiangmin Zeng ,&nbsp;Jie Liu ,&nbsp;Neil Evan Jon Anthony Bowen","doi":"10.1016/j.asw.2024.100896","DOIUrl":"10.1016/j.asw.2024.100896","url":null,"abstract":"<div><div>As a somewhat elusive and occlusive concept, voice can be a challenging and formidable hurdle for second language (L2) writers. One area that exemplifies this struggle is timed argument writing, where authors must position claims, ideas, and individual perspectives to an existing knowledge base and scholarly community under the confines of time. To enrich our understandings of voice construction in L2 English writers’ timed writing, we explored how 41 Chinese English-major undergraduates deployed authorial voice in two prompt-based argument writing tasks (timed versus untimed). We also sampled their self-reported knowledge, use, and understanding of voice through a survey-based instrument. To compare the quantity and quality of voice construction between the two samples, we measured 10 voice categories, three voice dimensions, and overall voice strength. Results showed that only two categories displayed statistically significant differences in terms of frequencies, but all three voice dimensions and overall voice strength scored significantly higher in untimed writing samples. Based on the results of our text analysis and survey, we further highlight the complexities of voice in L2 writing, provide evidence in support of existing voice rubrics, and make practical suggestions for teaching and assessing voice in writing.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100896"},"PeriodicalIF":4.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The impact of task duration on the scoring of independent writing responses of adult L2-English writers
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-10-01 · DOI: 10.1016/j.asw.2024.100895
Ben Naismith, Yigal Attali, Geoffrey T. LaFlair
{"title":"The impact of task duration on the scoring of independent writing responses of adult L2-English writers","authors":"Ben Naismith ,&nbsp;Yigal Attali ,&nbsp;Geoffrey T. LaFlair","doi":"10.1016/j.asw.2024.100895","DOIUrl":"10.1016/j.asw.2024.100895","url":null,"abstract":"<div><div>In writing assessment, there is inherently a tension between authenticity and practicality: tasks with longer durations may more closely reflect real-life writing processes but are less feasible to administer and score. What is more, given total testing time, there is necessarily a trade-off between task duration and number of tasks. Traditionally, high-stakes assessments have managed this trade-off by administering one or two writing tasks each test, allowing 20–40 minutes per task. However, research on second language (L2) English writing has not found longer task durations to significantly improve score validity or reliability. Importantly, very few studies have compared much shorter durations for writing tasks to more traditional allotments. To explore this issue, we asked adult L2-English test takers to respond to two writing prompts with either 5-minute or 20-minute time limits. Responses were then evaluated by expert human raters and an automated writing evaluation tool. Regardless of scoring method, short duration scores evidenced equally high test-retest reliability and criterion validity as long duration scores. As expected, longer task duration yielded higher scores, but regardless of duration, test takers demonstrated the entire spectrum of writing proficiency. Implications for writing assessment are discussed in relation to scoring practices and task design.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100895"},"PeriodicalIF":4.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
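As a rough illustration of the test-retest reliability and criterion validity comparisons described in the abstract above, the snippet below computes Pearson correlations between scores on two prompts and against an external criterion measure. The numbers and names are invented for the sketch; the study itself used expert human raters and an automated writing evaluation tool on real test-taker responses.

```python
# Toy sketch: test-retest reliability as the correlation between two prompt
# scores, criterion validity as the correlation with an external criterion.
from scipy.stats import pearsonr

short_prompt_1 = [3.0, 4.5, 2.5, 5.0, 3.5, 4.0, 2.0, 4.5]
short_prompt_2 = [3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 2.5, 4.0]
criterion      = [55, 72, 48, 85, 60, 70, 42, 74]   # e.g., scores from another test

retest_r, _ = pearsonr(short_prompt_1, short_prompt_2)
criterion_r, _ = pearsonr(
    [(a + b) / 2 for a, b in zip(short_prompt_1, short_prompt_2)],
    criterion,
)
print(f"test-retest r = {retest_r:.2f}, criterion r = {criterion_r:.2f}")
```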
A structural equation investigation of linguistic features as indices of writing quality in assessed secondary-level EMI learners’ scientific reports
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-10-01 · DOI: 10.1016/j.asw.2024.100897
Jack Pun, Wangyin Kenneth Li
{"title":"A structural equation investigation of linguistic features as indices of writing quality in assessed secondary-level EMI learners’ scientific reports","authors":"Jack Pun ,&nbsp;Wangyin Kenneth Li","doi":"10.1016/j.asw.2024.100897","DOIUrl":"10.1016/j.asw.2024.100897","url":null,"abstract":"<div><div>While inquiry into the relationship between linguistic features and L2 writing quality has been a long-standing line of research, little scholarly attention has been drawn to the predictive value of linguistic features in assessing the writing quality of English-medium scientific report writing. This study adds to the existing literature by examining the relation of lexical and syntactic complexity to writing quality, based on 106 scientific reports composed by Hong Kong Chinese learners of English in EMI secondary schools. Natural language processing tools were employed to extract computational indices of linguistic complexity features, followed by the use of a structural equation modeling (SEM) approach to investigate their predictive power. The validity of the anticipated construct was confirmed based upon several goodness-of-fit criteria. The SEM analysis indicated that writing quality was predicted by lexical sophistication (i.e., text-based complexity: word range and academic words; psycholinguistic complexity: word familiarity and age-of-acquisition ratings), lexical diversity (i.e., MTLD and VocD), and syntactic complexity (i.e., mean length of sentence and dependent clauses per T-unit). However, the relation of lexical diversity and syntactic complexity to writing quality was mediated by lexical sophistication. Implications for scientific report writing assessment and pedagogy in EMI educational contexts are discussed.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100897"},"PeriodicalIF":4.2,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
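One of the lexical diversity indices named in the abstract above, MTLD, has a simple enough definition to sketch directly. The function below follows the standard McCarthy and Jarvis (2010) procedure (sequential type-token ratio with the conventional 0.72 threshold, averaged over forward and backward passes). The whitespace tokenizer is a deliberate simplification; this is not the authors' actual extraction pipeline.

```python
# Minimal MTLD (Measure of Textual Lexical Diversity) sketch.
def mtld_one_direction(tokens, threshold=0.72):
    """Count MTLD 'factors' scanning the token sequence in one direction."""
    factors = 0.0
    types = set()
    token_count = 0
    for tok in tokens:
        token_count += 1
        types.add(tok)
        ttr = len(types) / token_count
        if ttr <= threshold:          # factor complete; reset the running window
            factors += 1
            types.clear()
            token_count = 0
    if token_count > 0:               # partial factor for the leftover segment
        ttr = len(types) / token_count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors > 0 else float(len(tokens))

def mtld(text, threshold=0.72):
    """Average of forward and backward MTLD, per McCarthy & Jarvis (2010)."""
    tokens = [t.lower() for t in text.split() if t.isalpha()]
    forward = mtld_one_direction(tokens, threshold)
    backward = mtld_one_direction(list(reversed(tokens)), threshold)
    return (forward + backward) / 2

print(mtld("The quick brown fox jumps over the lazy dog " * 10))
```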
Validating an integrated reading-into-writing scale with trained university students
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-09-24 · DOI: 10.1016/j.asw.2024.100894
Claudia Harsch, Valeriia Koval, Paraskevi (Voula) Kanistra, Ximena Delgado-Osorio
{"title":"Validating an integrated reading-into-writing scale with trained university students","authors":"Claudia Harsch ,&nbsp;Valeriia Koval ,&nbsp;Paraskevi (Voula) Kanistra ,&nbsp;Ximena Delgado-Osorio","doi":"10.1016/j.asw.2024.100894","DOIUrl":"10.1016/j.asw.2024.100894","url":null,"abstract":"<div><div>Integrated tasks are often used in higher education (HE) for diagnostic purposes, with increasing popularity in lingua franca contexts, such as German HE, where English-medium courses are gaining ground. In this context, we report the validation of a new rating scale for assessing reading-into-writing tasks. To examine scoring validity, we employed Weir’s (2005) socio-cognitive framework in an explanatory mixed-methods design. We collected 679 integrated performances in four summary and opinion tasks, which were rated by six trained student raters. They are to become writing tutors for first-year students. We utilized a many-facet Rasch model to investigate rater severity, reliability, consistency, and scale functioning. Using thematic analysis, we analyzed think-aloud protocols, retrospective and focus group interviews with the raters. Findings showed that the rating scale overall functions as intended and is perceived by the raters as valid operationalization of the integrated construct. FACETS analyses revealed reasonable reliabilities, yet exposed local issues with certain criteria and band levels. This is corroborated by the challenges reported by the raters, which they mainly attributed to the complexities inherent in such an assessment. Applying Weir’s (2005) framework in a mixed-methods approach facilitated the interpretation of the quantitative findings and yielded insights into potential validity threads.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100894"},"PeriodicalIF":4.2,"publicationDate":"2024-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1075293524000874/pdfft?md5=73c505eab3803fbf3a3edfd0612d454a&pid=1-s2.0-S1075293524000874-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142314882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
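For readers unfamiliar with FACETS analyses, the many-facet Rasch model referred to in the abstract above is commonly written (following Linacre) with separate facets for examinee ability, task difficulty, rater severity, and rating-scale thresholds. A sketch of the generic rating-scale form follows; the authors' actual facet structure may differ.

```latex
% Many-facet Rasch model, rating-scale form (generic notation):
% log-odds that examinee n receives category k rather than k-1
% on task i from rater j.
\[
  \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]
% B_n: examinee ability,  D_i: task difficulty,
% C_j: rater severity,    F_k: threshold between categories k-1 and k.
```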
Understanding the SSARC model of task sequencing: Assessing L2 writing development
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-09-23 · DOI: 10.1016/j.asw.2024.100893
Mahmoud Abdi Tabari, Yizhou Wang, Michol Miller
{"title":"Understanding the SSARC model of task sequencing: Assessing L2 writing development","authors":"Mahmoud Abdi Tabari ,&nbsp;Yizhou Wang ,&nbsp;Michol Miller","doi":"10.1016/j.asw.2024.100893","DOIUrl":"10.1016/j.asw.2024.100893","url":null,"abstract":"<div><div>This study aimed to explore the impact of task sequencing on the development of second language (L2) writing and investigate how L2 learners performed on three decision-making writing tasks completed in different orders over nine weeks. 120 advanced-high EFL students were randomly assigned to one of three groups and given different task sequences: 1) a simple-medium-complex (SMC) sequence, 2) a complex-medium-simple sequence (CMS), or 3) a random sequence (RDM). Essays were analyzed using measures of syntactic complexity, accuracy, lexical complexity, and fluency (CALF). Results showed that the CALF of L2 writing demonstrated longitudinal development over time in all three task sequencing groups. CALF development was not immediately apparent in the first six weeks, with most measures displaying a significant increase by the end of the ninth week. Furthermore, different task sequences resulted in varying patterns and magnitudes of CALF growth, but no specific sequence was found to be superior overall.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100893"},"PeriodicalIF":4.2,"publicationDate":"2024-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142311405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
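To make the CALF construct mentioned above concrete, the sketch below computes naive stand-ins for three of its dimensions from raw text: fluency as total words, lexical diversity as type-token ratio, and syntactic complexity as mean sentence length. Real CALF analyses rely on more robust indices and proper syntactic parsing; this split-based version is illustration only.

```python
# Naive CALF-style indices from plain text (illustrative only).
import re

def calf_sketch(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "fluency_total_words": len(words),
        "lexical_diversity_ttr": len(set(words)) / len(words) if words else 0.0,
        "mean_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }

print(calf_sketch("The task was simple. Later tasks grew harder, and students adapted."))
```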
Exploring the use of model texts as a feedback instrument in expository writing: EFL learners’ noticing, incorporations, and text quality
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-09-20 · DOI: 10.1016/j.asw.2024.100890
Long Quoc Nguyen, Bao Trang Thi Nguyen, Hoang Yen Phuong
{"title":"Exploring the use of model texts as a feedback instrument in expository writing: EFL learners’ noticing, incorporations, and text quality","authors":"Long Quoc Nguyen ,&nbsp;Bao Trang Thi Nguyen ,&nbsp;Hoang Yen Phuong","doi":"10.1016/j.asw.2024.100890","DOIUrl":"10.1016/j.asw.2024.100890","url":null,"abstract":"<div><div>Model texts as a feedback instrument (MTFI) have proven effective in enhancing L2 writing, yet research on this domain mainly focused on narrative compositions over a three-stage task: i) composing, ii) comparing, and iii) rewriting. The impact of MTFI on learners’ noticing, incorporations, and text quality in expository writing, especially in the Vietnamese context, remains underexplored. To address these gaps, this study aims to investigate the effect of MTFI on 68 Vietnamese EFL undergraduates’ expository writing following a process-product approach. The participants were divided into a control group (CG, <em>N</em> = 33) and an experimental group (EG, <em>N</em> = 35). Both groups attended stages one and three, but only the EG compared their initial writing with a model text in stage two. The results, derived from learners’ note-taking sheets, written paragraphs, and semi-structured interviews, revealed that despite the two groups’ comparability in stage one, the EG demonstrated significantly better text quality than the CG in stage three, particularly in content, lexis, and organization. Furthermore, while the EG largely encountered lexical issues at the outset, they primarily concentrated on content-related and organizational features in the subsequent stages. Based on the findings, recommendations for future research and implications for pedagogy were deliberated.</div></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100890"},"PeriodicalIF":4.2,"publicationDate":"2024-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142311404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Exploring the development of noun phrase complexity in L2 English writings across two genres
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-09-16 · DOI: 10.1016/j.asw.2024.100892
Yixin Wang, Jingyang Jiang
{"title":"Exploring the development of noun phrase complexity in L2 English writings across two genres","authors":"Yixin Wang,&nbsp;Jingyang Jiang","doi":"10.1016/j.asw.2024.100892","DOIUrl":"10.1016/j.asw.2024.100892","url":null,"abstract":"<div><p>Researchers in second language (L2) writing studies are increasingly focusing on examining complex noun phrases (NPs). However, recent studies on NP complexity show a preference for examining advanced learners’ writings, despite the fact that English writings of early L2 learners already contain many NPs. In the present study, we used a corpus-based approach to investigate the development of NP complexity in argumentative and narrative compositions written by English as a foreign language (EFL) learners with different proficiency levels. The results show that eight NP complexity features presented patterns of growth at different proficiency levels. Among the eight features, attributive adjectives and -ing participles as post-modifiers can both reflect the development and characteristics of Chinese EFL learners’ writings. We also found that genre effect on NP complexity growth was the result of both task-related factors of genres and learners’ genre exposure. Our results largely corroborate the developmental index proposed by Biber et al. (2011), and confirm that NP complexity starts to grow from early stages of learning among L2 English learners with genre-specific features.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100892"},"PeriodicalIF":4.2,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142241243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
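The two noun-phrase features highlighted in the abstract above (attributive adjectives and -ing participles as post-modifiers) can be approximated with an off-the-shelf dependency parser. The sketch below uses spaCy's English labels ("amod", "acl", "VBG") and assumes the en_core_web_sm model is installed; it illustrates the general counting idea rather than the authors' actual corpus tooling.

```python
# Approximate counts of two NP-complexity features using spaCy.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def np_complexity_counts(text):
    doc = nlp(text)
    attributive_adj = sum(
        1 for tok in doc
        if tok.dep_ == "amod" and tok.pos_ == "ADJ" and tok.head.pos_ in ("NOUN", "PROPN")
    )
    ing_postmodifiers = sum(
        1 for tok in doc
        if tok.dep_ == "acl" and tok.tag_ == "VBG" and tok.head.pos_ in ("NOUN", "PROPN")
    )
    n_nouns = sum(1 for tok in doc if tok.pos_ in ("NOUN", "PROPN"))
    # Normalize per noun so texts of different lengths are comparable.
    return {
        "attributive_adj_per_noun": attributive_adj / n_nouns if n_nouns else 0.0,
        "ing_postmod_per_noun": ing_postmodifiers / n_nouns if n_nouns else 0.0,
    }

print(np_complexity_counts("The striking results obtained by students writing timed essays vary."))
```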
L2 master’s and doctoral students’ preferences for supervisor written feedback on their theses/dissertations
IF 4.2 · Tier 1 · Literature
Assessing Writing · Pub Date: 2024-09-11 · DOI: 10.1016/j.asw.2024.100891
MohammadHamed Hoomanfard
{"title":"L2 master’s and doctoral students’ preferences for supervisor written feedback on their theses/dissertations","authors":"MohammadHamed Hoomanfard","doi":"10.1016/j.asw.2024.100891","DOIUrl":"10.1016/j.asw.2024.100891","url":null,"abstract":"<div><p>The present study employed a qualitative research design to investigate possible differences between L2 master’s and doctoral students’ preferences for supervisor written feedback. Although the role of learners’ preferences, as a part of attitudinal engagement, has been emphasized in the literature on feedback, there are still niches in the literature that need to be occupied. One of these gaps is the examination of L2 master’s and doctoral students’ preferences for supervisor written feedback on their theses/dissertations. To bridge this research gap, the researcher interviewed 52 master’s and 21 doctoral Iranian English Language Teaching students. Thematic analysis of the interview data identified five main preferences: feedback that is clear, specific, encouraging, dialogic, and non-appropriative. The examination of interview data showed that both master’s and doctoral students expressed high levels of preference for receiving clear and encouraging feedback. A significantly higher percentage of master’s students expressed their preference for specific comments. In contrast, doctoral students exhibited heightened preferences for non-appropriative and dialogic feedback. The findings also provided insights into the underlying factors that can shape master’s and doctoral students’ preferences. Several practical implications and suggestions for further research are also discussed in this study.</p></div>","PeriodicalId":46865,"journal":{"name":"Assessing Writing","volume":"62 ","pages":"Article 100891"},"PeriodicalIF":4.2,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142168122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0