Assessing Writing: Latest Articles

Navigating innovation and equity in writing assessment
IF 4.2 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-07-01 · DOI: 10.1016/j.asw.2024.100873
Kelly Hartwell, Laura Aull
Abstract: The 2024 Tools & Technology forum underscores the significant role of emerging writing technologies in shaping writing assessment practices post-COVID-19, emphasizing the necessity of ensuring that these innovations uphold core principles of validity, fairness, and equity. AI-driven tools offer promising improvements but also require careful consideration to ensure that they reflect writing constructs, align with educational goals, and promote equitable assessment practices. Validity is explored through dimensions such as construct, content, and consequential validity, raising questions about how assessment tools may capture the complexity of writing and about their broader impacts on educational stakeholders. Fairness is examined with regard to cultural responsiveness and accessibility, and how assessment tools may be designed to accommodate various student needs. Equity extends these considerations by addressing systemic inequities and promoting assessment practices that support diverse learning styles and reduce barriers for marginalized students. Reviews of three assessment tools (PERSUADE 2.0, EvaluMate, and a web application for systematic review writing) illustrate how innovations can support valid, fair, and equitable writing assessments across educational contexts. The forum emphasizes the importance of ongoing dialogue and adaptation to create inclusive and just educational experiences.
Citations: 0

Effects of peer feedback in English writing classes on EFL students' writing feedback literacy
IF 4.2 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-07-01 · DOI: 10.1016/j.asw.2024.100874
Fanrong Weng, Cecilia Guanfang Zhao, Shangwen Chen
Abstract: Despite the increasing scholarly attention towards students' writing feedback literacy in recent years, empirical explorations of effective approaches to enhancing this capacity remain scarce. While peer feedback often plays an important role in English as a Foreign Language (EFL) writing development, few studies seem to have addressed the potential impacts of peer feedback activities on students' overall writing feedback literacy. To fill this gap, a mixed-methods study was designed to investigate the effect of peer feedback activities on students' writing feedback literacy development across such dimensions as appreciating feedback, making judgements, acknowledging different sources of feedback, managing affect, and taking actions with feedback. Two intact classes participated in the study, one as the experimental group and the other as the control group. The experimental group engaged in peer feedback activities during the 12-week semester, whereas the control group received conventional teacher feedback only. The pre- and post-intervention results based on a writing feedback literacy scale were compared between the two groups, in addition to the analysis of interviews with the teacher and focal students from the experimental group, as well as students' written assignments and revisions after receiving peer feedback. Results showed that peer feedback activities could significantly improve students' appreciation of feedback and their ability to make judgements; however, no significant changes in the other dimensions were identified. These findings extend the current understanding of EFL students' writing feedback literacy and hold valuable pedagogical implications.
Citations: 0

Matches and mismatches between Saudi university students' English writing feedback preferences and teachers' practices
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-06-17 · DOI: 10.1016/j.asw.2024.100863
Muhammad M.M. Abdel Latif, Zainab Alsuhaibani, Asma Alsahil
Abstract: Though much research has dealt with feedback practices in L2 writing classes, few studies have investigated learner and teacher feedback perspectives from a wide angle. Drawing on an 8-dimension framework of feedback in writing classes, this study investigated the potential matches and mismatches between Saudi university students' English writing feedback preferences and their teachers' reported practices. Quantitative and qualitative data were collected using a student questionnaire and a teacher questionnaire. The two surveys assessed students' preferences for, and teachers' use of, 26 writing feedback modes, strategies, and activities. A total of 575 undergraduate English majors at 11 Saudi universities completed the student questionnaire, and 82 writing instructors completed the teacher questionnaire. The data analysis revealed that the differences between the students' English writing feedback preferences and their teachers' practices vary from one feedback dimension to another. The study generally indicates that the mismatches between the students' writing feedback preferences and the teachers' reported practices far exceed the matches. The qualitative data obtained from the answers to a set of open-ended questions in both questionnaires provided information about the students' and teachers' feedback-related beliefs and reasons. The paper ends by discussing the results and their implications.
Citations: 0

Does "more complexity" equal "better writing"? Investigating the relationship between form-based complexity and meaning-based complexity in high school EFL learners' argumentative writing
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-06-13 · DOI: 10.1016/j.asw.2024.100867
Sachiko Yasuda
Abstract: The study examines the relationship between form-based complexity and meaning-based complexity in argumentative essays written by high school students learning English as a foreign language (EFL), in relation to writing quality. The data comprise argumentative essays written by 102 Japanese high school learners at different proficiency levels. The students' proficiency levels were determined based on the evaluation of their argumentative essays by human raters using the GTEC rubric. The essays were analyzed along multiple dimensions, focusing on both form-based complexity (lexical complexity, large-grained syntactic complexity, and fine-grained syntactic complexity features) and meaning-based complexity (argument quality). The results of the multidimensional analysis revealed that the most influential factor in determining overall essay scores was not form-based complexity but meaning-based complexity achieved through argument quality. Moreover, the results indicated that meaning-based complexity was strongly correlated with the use of complex nominals rather than clausal complexity. These insights have significant implications for both the teaching and assessment of argumentative essays among high school EFL learners, underscoring the importance of understanding which aspects of writing to prioritize and how best to assess student writing.
Citations: 0

Thirty years of writing assessment: A bibliometric analysis of research trends and future directions
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-06-07 · DOI: 10.1016/j.asw.2024.100862
Jihua Dong, Yanan Zhao, Louisa Buckingham
Abstract: This study employs a bibliometric analysis to identify research trends in the field of writing assessment over the last 30 years (1993–2022). Drawing on a dataset of 1,712 articles and 52,092 unique references, keyword co-occurrence analyses were used to identify prominent research topics, co-citation analyses were conducted to identify influential publications and journals, and a structural variation analysis was employed to identify transformative research in recent years. The results revealed the growing popularity of the writing assessment field and the increasing diversity of its research topics. Research trends have become more associated with technology and with cognitive and metacognitive processes. The influential publications indicate a shift in research interest towards cross-disciplinary publications, and the journals identified as key venues for writing assessment research also changed across the three decades. The latest transformative research points to possible future directions, including the integration of computational methods in writing assessment and investigations into relationships between writing quality and various factors. This study contributes to our understanding of the development and future directions of writing assessment research, and has implications for researchers and practitioners.
Citations: 0

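The keyword co-occurrence analysis mentioned in the abstract above can be sketched in a few lines of Python. This is an illustrative assumption about the general technique, not the authors' actual pipeline: the `cooccurrence_counts` helper and the three-article toy corpus below stand in for the study's 1,712-article dataset.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each pair of keywords appears together in one article.

    keyword_lists: iterable of per-article keyword lists.
    Returns a Counter keyed by alphabetically ordered keyword pairs.
    """
    pairs = Counter()
    for kws in keyword_lists:
        # normalise case and deduplicate within a single article
        unique = sorted(set(k.lower() for k in kws))
        pairs.update(combinations(unique, 2))
    return pairs

# toy corpus standing in for the real bibliometric dataset
corpus = [
    ["writing assessment", "rater reliability", "EFL"],
    ["writing assessment", "automated scoring"],
    ["automated scoring", "writing assessment", "EFL"],
]
counts = cooccurrence_counts(corpus)
```

In a real bibliometric workflow, the resulting pair counts would feed a co-occurrence network (nodes = keywords, edge weights = counts) for clustering and visualisation.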
EvaluMate: Using AI to support students' feedback provision in peer assessment for writing
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-05-31 · DOI: 10.1016/j.asw.2024.100864
Kai Guo
Abstract: Peer feedback plays an important role in promoting learning in the writing classroom. However, providing high-quality feedback can be demanding for student reviewers. To address this challenge, this article proposes an AI-enhanced approach to peer feedback provision. I introduce EvaluMate, a newly developed online peer review system that leverages ChatGPT, a large language model (LLM), to scaffold student reviewers' feedback generation. I discuss the design and functionality of EvaluMate, highlighting its affordances in supporting student reviewers' provision of comments on peers' essays. I also address the system's limitations and propose potential solutions. Furthermore, I recommend future research on students' engagement with this learning approach and its impact on learning outcomes. By presenting EvaluMate, I aim to inspire researchers and practitioners to explore the potential of AI technology in the teaching, learning, and assessment of writing.
Citations: 0

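The article does not publish EvaluMate's internals, but the general idea of using an LLM to scaffold, rather than replace, a student reviewer's comments can be sketched as prompt construction. The function name `build_review_scaffold`, the rubric criteria, and the essay excerpt below are all hypothetical illustrations; the string it returns would be sent to an LLM API in a real system.

```python
def build_review_scaffold(essay_excerpt, rubric_criteria):
    """Assemble a prompt asking an LLM for draft comments a student
    reviewer can edit, rather than a finished evaluation."""
    criteria = "\n".join(f"- {c}" for c in rubric_criteria)
    return (
        "You are assisting a student peer reviewer. For the essay excerpt "
        "below, propose one draft comment per rubric criterion. Phrase each "
        "comment as a suggestion the reviewer can revise, not as a final "
        "verdict, so the student remains the author of the feedback.\n\n"
        f"Rubric criteria:\n{criteria}\n\n"
        f"Essay excerpt:\n{essay_excerpt}\n"
    )

prompt = build_review_scaffold(
    "Social media has changed how teenagers communicate...",
    ["thesis clarity", "evidence and support", "organization"],
)
```

The design point this sketch illustrates is the scaffolding stance: the LLM produces editable drafts keyed to the rubric, keeping the student reviewer in the loop.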
Comparing Chinese L2 writing performance in paper-based and computer-based modes: Perspectives from the writing product and process
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-05-31 · DOI: 10.1016/j.asw.2024.100849
Xiaozhu Wang, Jimin Wang
Abstract: As writing is a complex language-producing process dependent on the writing environment and medium, the comparability of computer-based (CB) and paper-based (PB) writing assessments has been studied extensively since the emergence of computer-based language writing assessment. This study investigated the differences in the writing product and process between CB and PB modes of writing assessment in Chinese as a second language, whose character writing system is considered challenging for learners. The many-facet Rasch model (MFRM) was adopted to reveal text quality differences, while keystroke and handwriting trace data were utilized to unveil insights into the writing process. The results showed that Chinese L2 learners generated higher-quality texts with fewer character mistakes in the CB mode. They revised much more, and paused for shorter periods and less frequently between lower-level linguistic units, in the CB mode. The quality of CB text is associated with revision behavior, whereas pause duration serves as a stronger predictor of PB text quality. The findings suggest that the act of handwriting Chinese characters makes the PB construct distinct from the CB writing assessment in L2 Chinese. Thus, the choice of assessment mode should consider the target language use and the test taker's characteristics.
Citations: 0

A teacher's inquiry into diagnostic assessment in an EAP writing course
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-05-30 · DOI: 10.1016/j.asw.2024.100848
Rabail Qayyum
Abstract: Research into diagnostic assessment of writing has largely ignored how diagnostic feedback information leads to differentiated instruction and learning. This case study research presents a teacher's account of validating an in-house diagnostic assessment procedure in an English for Academic Purposes writing course with a view to refining it. I developed a validity argument and gathered and interpreted related evidence, focusing on one student's performance in and perception of the assessment. The analysis revealed that, to an extent, the absence of proper feedback mechanisms limited the use of the test, somewhat weakened its impact, and reduced the potential for learning. I propose a modification to the assessment procedure involving a sample student feedback report.
Citations: 0

Construct representation and predictive validity of integrated writing tasks: A study on the writing component of the Duolingo English Test
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-05-28 · DOI: 10.1016/j.asw.2024.100846 (Open Access)
Qin Xie
Abstract: This study examined whether two integrated reading-to-write tasks could broaden the construct representation of the writing component of the Duolingo English Test (DET). It also verified whether they could enhance DET's power to predict English academic writing in universities. The tasks were (1) writing a summary based on two source texts and (2) writing a reading-to-write essay based on five texts. Both were given to a sample (N = 204) of undergraduates from Hong Kong. Each participant also submitted an academic assignment written for the assessment of a disciplinary course. Three professional raters double-marked all writing samples against detailed analytical rubrics. Raw scores were first processed using Multi-Faceted Rasch Measurement to estimate inter- and intra-rater consistency and generate adjusted (fair) measures. Based on these measures, descriptive analyses, sequential multiple regression, and Structural Equation Modeling were conducted, in that order. The analyses verified the writing tasks' underlying component constructs and assessed their relative contributions to the overall integrated writing scores. Both tasks were found to contribute to DET's construct representation and to add moderate predictive power for domain performance. The findings, along with their practical implications, are discussed, especially regarding the complex relations between construct representation and predictive validity.
Citations: 0

How syntactic complexity indices predict Chinese L2 writing quality: An analysis of a unified dependency syntactically-annotated corpus
IF 3.9 · Q1 (Literature)
Assessing Writing · Pub Date: 2024-05-16 · DOI: 10.1016/j.asw.2024.100847
Yuxin Hao, Xuelin Wang, Shuai Bin, Qihao Yang, Haitao Liu
Abstract: Previous syntactic complexity (SC) research on L2 Chinese has overlooked a range of Chinese-specific structures and fine-grained indices. This study, utilizing a syntactically annotated Chinese L2 writing corpus, simultaneously employs both large-grained and fine-grained syntactic complexity indices to investigate the relationship between syntactic complexity and the writing quality produced by English-speaking Chinese second language (ECSL) learners from macro and micro perspectives. Our findings reveal the following: (a) at the large-grained level of analysis, the generic syntactic complexity (GSC) index number of T-units per sentence and the Chinese-specific syntactic complexity (CSC) index number of clauses per topic chain unit together account for 14.5% of the variance in writing scores among ECSL learners; (b) the syntactic diversity model alone accounts for 24.7% of the variance in Chinese writing scores; (c) a stepwise regression model that integrates fine-grained SC indices extracted from the syntactically annotated corpus explains 43.7% of the variance in Chinese writing quality. This model incorporates CSC indices such as the average ratio of dependency types per 30 dependency segments, the ratio of adjuncts to sentence end, the ratio of predicate complements, the ratio of numeral adjuncts, and the mean length of Topic-Comment-Unit dependency distance, as well as GSC indices such as the ratio of main governors, the ratio of attributers, the ratio of coordinating adjuncts, and the ratio of sentential objects. These findings highlight the valuable insights that syntactically annotated fine-grained SC indices offer regarding the writing characteristics of ECSL learners.
Citations: 0

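The variance-explained figures reported in the abstract above (e.g., a model accounting for 43.7% of score variance) come from regressing writing scores on complexity indices. A minimal sketch of that kind of analysis follows; the two indices, the eight essays, and the scores below are invented toy data, not the study's corpus, and the study's stepwise selection step is omitted.

```python
import numpy as np

# toy data: 8 essays, two illustrative syntactic complexity indices
# (mean T-unit length, clauses per topic-chain unit)
X = np.array([
    [11.2, 1.4], [12.5, 1.9], [ 9.8, 1.1], [14.1, 2.3],
    [10.6, 1.6], [13.3, 2.0], [ 8.9, 1.0], [15.0, 2.5],
])
# writing scores: mostly linear in the indices, plus small "rater noise"
noise = np.array([0.1, -0.2, 0.15, 0.05, -0.1, 0.2, -0.05, 0.0])
y = 2.0 + 0.3 * X[:, 0] + 1.5 * X[:, 1] + noise

# ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2: proportion of score variance the indices explain
resid = y - A @ coef
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

Because the toy scores are nearly linear in the indices, `r2` here comes out close to 1; on real learner corpora, as the abstract reports, even a well-chosen index set typically explains well under half the variance.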