Assessing Writing: Latest Articles

Assessing GenAI-assisted digital multimodal composing: Reconceptualizing a genre-based framework through self-assessment and peer assessment
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-02-10) | DOI: 10.1016/j.asw.2026.101017
Yunan Zhang, Zixuan Li
Abstract: Generative AI (GenAI) has shown promising potential in scaffolding digital multimodal composing (DMC), while also introducing challenges for assessing GenAI-assisted DMC. The existing genre-based framework was developed for non-GenAI contexts and may not capture the complexity of GenAI-assisted DMC. To address this gap, this study first proposed a preliminary reconceptualized genre-based framework, which was then tested and refined through student self- and peer-assessment. Data were collected from assessment sheets, AI use reports, focus groups, and a teacher interview, and analyzed through thematic analysis. Based on students’ assessment practices, new descriptors emerged to capture high-quality GenAI-assisted DMC, resulting in a refined genre-based model. Revisions were made to extend purpose (to include audience resonance), base units (to emphasize accuracy and authenticity), layout (to ensure stylistic alignment), navigation (to integrate GenAI-enhanced signposting and organization), and rhetoric (to use GenAI for brainstorming rhetorical devices). In addition, a new “Human–AI Co-Composing Process” dimension was added to highlight criticality and prompt literacy. This research contributes to the effective and ethical assessment of GenAI-assisted DMC and has implications for (re)conceptualizing DMC competence in the GenAI age.
Assessing Writing, Vol. 68, Article 101017 | Citations: 0
Exploring the roles of gender, linguistic, and cognitive variables in continuation writing task performance among learners of English
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-03-30) | DOI: 10.1016/j.asw.2026.101043
Fangzhu Chen, Aiping Zhao
Abstract: Despite the importance of the continuation task, an innovative reading-writing integrated task, few studies have examined its contributing factors. This study investigated the influence of various linguistic and cognitive variables (i.e., reading comprehension, vocabulary, grammatical knowledge, morphological awareness, and inference making) on specific dimensions (i.e., ideas, organization, word choice, sentence fluency, and conventions) of second language (L2) continuation writing, while statistically controlling for the influence of gender. The participants were 162 high school English learners in China. Hierarchical regression analyses revealed that gender predicted all writing dimensions: female students outperformed male students in all dimensions of continuation writing. Linguistic variables contributed to different dimensions of continuation writing. More specifically, reading comprehension significantly contributed to all dimensions. Vocabulary and morphological awareness significantly predicted word choice. Grammatical knowledge and morphological awareness significantly predicted organization, sentence fluency, and conventions. Inference making significantly predicted one dimension, ideas, even after controlling for gender and linguistic variables. The findings deepen our understanding of the contributing factors of continuation writing, reveal gender differences, and highlight the value of analytic assessment and targeted instruction in linguistic and cognitive skills for supporting students’ development in each dimension of continuation writing.
Assessing Writing, Vol. 68, Article 101043 | Citations: 0
Can algorithm-based feedback help students to write better? A meta-analysis exploring surface- and deep-level outcomes
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-04-12) | DOI: 10.1016/j.asw.2026.101034
Sina Scherer, Steve Graham, Vera Busse
Abstract: Against the backdrop of rapid developments in algorithm-based feedback tools, from older tools mainly providing feedback on grammar and spelling to advanced tools based on generative artificial intelligence offering more comprehensive writing support, our meta-analysis examines to what extent algorithm-based feedback improves not only surface-level (e.g., grammar and spelling) but also deep-level (e.g., structure, content, coherence) writing outcomes for different learners at secondary school and university. We reviewed experimental and quasi-experimental studies published between 2011 and the end of 2024, covering five European languages. Results from the 33 included studies indicated that algorithm-based feedback was beneficial for improving writing in general (g = 0.36). Specifically, positive effects were observed for surface-level outcomes at posttest (g = 0.31), though no lasting effects were found at maintenance (g = -0.02). In contrast, deep-level writing outcomes showed sustained improvement, with positive effects both at posttest (g = 0.31) and maintenance (g = 0.54). No significant differences between secondary and university students were observed. However, L2 learners in general seemed to profit most from algorithm-based feedback, showing gains in surface-level (g = 0.77, bordering on significance) and deep-level outcomes (g = 0.46). While no significant differences were found between the effects of specific types of algorithm-based feedback tools, feedback from Grammarly and Pigai significantly enhanced students’ writing, whereas effects of ChatGPT feedback were non-significant. We discuss implications for future research and educational practice, also in light of the small transfer of learning to new writing tasks.
Assessing Writing, Vol. 68, Article 101034 | Citations: 0
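The pooled effect sizes reported above are Hedges’ g values. As a quick illustration of how a single study’s g is computed (a sketch only, not the authors’ actual pipeline, and with invented group statistics), g is Cohen’s d scaled by a small-sample correction factor:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g for two independent groups (treatment vs. control)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp  # Cohen's d
    # Small-sample bias correction (approximation of Hedges' J)
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Hypothetical study: treatment mean 14.0 (SD 3.0), control 13.0 (SD 3.0), n = 30 each
g = hedges_g(14.0, 3.0, 30, 13.0, 3.0, 30)
```

With these invented numbers, d = 0.33 and the correction shrinks it slightly; effects around 0.3, like several reported above, are conventionally read as small-to-moderate.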
An ecological approach to L2 learners’ engagement with written feedback
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-02-12) | DOI: 10.1016/j.asw.2026.101028
Zaibo Long, Jinfen Xu
Abstract: L2 learners’ engagement with written feedback is pivotal to realizing feedback’s learning potential. Over the past decade, this construct has attracted growing scholarly attention; however, relatively little work has been done to theorize feedback engagement, particularly how individual and contextual factors interact to shape it. To address this gap, this article conceptualizes L2 written feedback engagement from an ecological perspective. Specifically, it synthesizes how engagement has been defined and operationalized in the literature, noting how the increasing use of generative AI has prompted an expanded conceptualization of feedback engagement. Drawing on affordance theory and the nested ecosystems model, the article characterizes engagement as multifaceted, dynamic, and context-sensitive, especially where teacher, peer, automated, and generative-AI feedback coexist. It further sketches future directions for ecological work, including longitudinal and multi-source designs, explicit modelling of person–context interactions, and methodological innovations to reveal underlying mechanisms. Overall, this article aims to deepen researchers’ and teachers’ understanding of feedback engagement and to inform context-responsive interventions that support learners’ engagement with written feedback.
Assessing Writing, Vol. 68, Article 101028 | Citations: 0
ChatGPT feedback and emotional engagement in L2 writing: A control-value theory perspective using Q-methodology
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-04-02) | DOI: 10.1016/j.asw.2026.101045
Guangyuan Yao, Zhaoxia Liu
Abstract: As generative artificial intelligence (GAI) tools like ChatGPT become increasingly integrated into second language (L2) writing instruction, their emotional impact on student writers remains underexplored. Drawing on the Control-Value Theory of Achievement Emotions, this study investigates the affective experiences of Chinese university students using ChatGPT for feedback on English academic writing. Using Q-methodology, the research identifies typologies of subjective experiences regarding emotional engagement among 35 Chinese participants through Q-sorts and post-sort interviews. Four distinct emotional profiles emerged: Pragmatic Experimenters, Ambivalent Navigators, Relational Seekers, and Strategic Optimizers. These profiles reflect divergent combinations of perceived control and value appraisals, which in turn shape emotional responses ranging from empowerment and curiosity to anxiety, detachment, and frustration. While some students experienced ChatGPT as a supportive and efficient learning tool, others felt emotionally overwhelmed or alienated due to limited control or relational disconnect. The findings reveal that emotional engagement with AI-mediated feedback is not uniform but structured by underlying psychological mechanisms. This study underscores the importance of designing pedagogical interventions and GAI tools that are both cognitively and affectively responsive to students’ diverse needs. Implications are offered for EFL educators, developers, and researchers aiming to foster emotionally sustainable and learner-centered approaches to AI-assisted academic writing.
Assessing Writing, Vol. 68, Article 101045 | Citations: 0
Cubic effects of autonomous and controlled motivation on L2 self-regulated writing strategies: A polynomial regression analysis
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-04-02) | DOI: 10.1016/j.asw.2026.101046
Yabing Wang, Jian Xu
Abstract: There has been controversy regarding the co-existence of autonomous motivation (AM) and controlled motivation (CM) and how their match or mismatch influences motivated learning behaviors. However, such inquiries have seldom been conducted in the L2 writing field, although learners’ L2 writing motivation is often multifaceted. To address this gap, we employed a quantitative, cross-sectional survey design and analyzed data using cubic regression and response surface analysis (RSA). Unlike traditional linear models, these methods allow us to test whether congruence (both high or both low) and incongruence (one high, the other low) between AM and CM predict self-regulation, thereby overcoming the limitations of approaches that assume only additive or linear effects. A total of 583 Chinese EFL undergraduates participated in the study by answering a battery of questionnaires measuring their AM, CM, and emotional, behavioral, cognitive, and metacognitive self-regulation strategies in writing. Findings revealed that: (1) AM and CM were compatible and co-existent; (2) their congruence showed both linear and nonlinear associations with strategy use; and (3) learners used more cognitive strategies when AM was high, but fewer when CM exceeded AM. Theoretical, methodological, and pedagogical implications are discussed.
Assessing Writing, Vol. 68, Article 101046 | Citations: 0
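Polynomial regression with response surface analysis, as used in this study, fits the outcome on the two predictors plus their higher-order terms, then evaluates the surface along the line of congruence (AM = CM). A minimal sketch with simulated data (the coefficients, seed, and data-generating model below are invented for illustration; the study’s actual variables and estimates differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 583  # matches the study's sample size; the data themselves are simulated
am = rng.normal(0, 1, n)  # autonomous motivation (standardized)
cm = rng.normal(0, 1, n)  # controlled motivation (standardized)
# Toy outcome: strategy use rises with AM and falls as AM-CM mismatch grows
strat = 0.5 * am - 0.2 * cm - 0.1 * (cm - am) ** 2 + rng.normal(0, 0.5, n)

# Cubic design matrix: 1, X, Y, X^2, XY, Y^2, X^3, X^2Y, XY^2, Y^3
X = np.column_stack([
    np.ones(n), am, cm,
    am**2, am * cm, cm**2,
    am**3, am**2 * cm, am * cm**2, cm**3,
])
beta, *_ = np.linalg.lstsq(X, strat, rcond=None)

# RSA surface parameters along the line of congruence (AM = CM):
a1 = beta[1] + beta[2]           # linear slope
a2 = beta[3] + beta[4] + beta[5]  # quadratic curvature
```

A significant a2, for instance, would indicate that strategy use changes nonlinearly as matched AM/CM levels rise, which is the kind of pattern a purely linear model cannot detect.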
How writing prompts influence analytic trait scores: A differential feature functioning analysis for English language learners
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-01-26) | DOI: 10.1016/j.asw.2026.101018
İdil Sayın, Hacer Hande Uysal
Abstract: This study examines whether analytic writing traits function equivalently across diverse thematic prompt categories for English language learners (ELLs). We utilized the ELLIPSE Corpus, which contains 6482 essays written by ELLs in response to 44 prompts during standardized annual testing in the United States. These essays were organized into six thematic prompt categories. To estimate the students’ underlying writing ability, we employed a unidimensional Item Response Theory (IRT) model. Subsequently, we conducted Differential Feature Functioning (DFF) analysis using a step-by-step ordinal logistic regression framework based on IRT. DFF analysis revealed that while Vocabulary and Grammar showed statistically detectable category-related variation, effect sizes were negligible, indicating no practical impact on score interpretation. A more focused, category-by-category DFF analysis identified minor DFF in Cohesion, Vocabulary, and Grammar across the Education, Personal Development, and Society and Social Life categories, yet effects remained practically negligible. Diagnostic plots further confirmed the stability of trait functioning across prompt categories. Comprehensive sensitivity analyses supported the robustness of these findings. These results support the fairness and comparability of analytic trait-based scoring for ELL writing assessments. The study contributes to equitable writing assessment practice by offering evidence-based guidance for fair prompt design, targeted rater training, and rubric refinement.
Assessing Writing, Vol. 68, Article 101018 | Citations: 0
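The step-up logistic regression behind DFF (and DIF) detection compares nested models with and without a group term, here the prompt category, while conditioning on ability. A stdlib-only sketch, simplified from the ordinal case to binary scores and run on simulated data (variable names and the data-generating setup are invented; only the general likelihood-ratio logic is standard):

```python
import math, random

random.seed(1)

def fit_logistic(X, y, lr=0.1, steps=1500):
    """Plain gradient-ascent logistic regression; returns weights and log-likelihood."""
    w = [0.0] * len(X[0])
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j, xj in enumerate(xi):
                grad[j] += (yi - p) * xj
        w = [wj + lr * gj / len(y) for wj, gj in zip(w, grad)]
    ll = 0.0
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return w, ll

n = 300
theta = [random.gauss(0, 1) for _ in range(n)]   # IRT ability estimate
group = [i % 2 for i in range(n)]                # prompt category (0/1)
# Trait score depends on ability only, so no DFF is built into the data
y = [1 if random.random() < 1 / (1 + math.exp(-t)) else 0 for t in theta]

# Step 1: ability only; Step 2: ability + category (uniform DFF term)
_, ll1 = fit_logistic([[1.0, t] for t in theta], y)
_, ll2 = fit_logistic([[1.0, t, float(g)] for t, g in zip(theta, group)], y)
lr_stat = 2 * (ll2 - ll1)  # compare to the chi-square(1) cutoff of 3.84
```

Because the simulated scores ignore the category, the likelihood-ratio statistic should stay small; a large value, plus a meaningful effect size such as a pseudo-R² change, is what would flag practical DFF.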
Developing students’ feedback literacy in disciplinary academic writing through generative artificial intelligence
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-02-14) | DOI: 10.1016/j.asw.2026.101030
Jianda Liu, Zihao Shi, Wanqing Li
Abstract: Recent research has explored how generative artificial intelligence (GenAI) may support peer feedback practices and foster students’ feedback literacy (SFL) in L2 writing, yet its distinctive role remains under-examined, particularly in disciplinary academic writing. Addressing this gap, this mixed-methods study investigated the effects of GenAI feedback on Chinese EFL students’ feedback literacy in disciplinary writing contexts. Changes in SFL were compared between a control group receiving peer feedback and an experimental group receiving GenAI feedback before and after an intervention, with qualitative and text data used to interpret the quantitative results. The experimental group showed significant gains across all dimensions of SFL. Qualitative findings further indicated improved ability to interpret, evaluate, and act on feedback. However, learners reported sustained cognitive load when engaging with GenAI feedback, and limited disciplinary knowledge and unfamiliarity with academic conventions continued to generate negative emotions and constrain help-seeking. Pedagogical implications for integrating GenAI feedback into disciplinary writing instruction are discussed.
Assessing Writing, Vol. 68, Article 101030 | Citations: 0
Evaluating GPT ratings of EFL writing: A scoping review
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-04-06) | DOI: 10.1016/j.asw.2026.101044
Yi Chen
Abstract: The rise of large language models (LLMs), exemplified by GPT, has opened new possibilities for automated essay scoring (AES) in L2 education. Over the past three years, a growing number of studies have investigated GPT’s potential as a rater of English as a foreign language (EFL) writing. However, in the absence of a synthesis, the literature remains relatively fragmented. To address this gap, this scoping review analyzed 26 identified studies in terms of their research designs, evaluation foci, reported findings, and summative evaluations. Collectively, these studies addressed three core aspects of the evaluation inference in a full validity argument (accuracy, consistency, and fairness) and presented a cautiously positive view of GPT’s performance in rating EFL essays. Preliminary insights include GPT-4 and GPT-4o’s superiority over standard GPT-3.5 in accuracy and consistency, the promise of few-shot learning prompts, and GPT’s tendency to score more severely on language-related dimensions. The review also identified some methodological limitations across the literature and highlighted key areas for further investigation. By providing the first structured overview of this emerging field, this scoping review offers guidance for future research and for L2 educators considering the use of GPT models in EFL writing assessment.
Assessing Writing, Vol. 68, Article 101044 | Citations: 0
L2 learners’ engagement with AI-generated feedback on writing
IF 5.5 | Zone 1 (Literature)
Assessing Writing | Pub Date: 2026-04-01 (Epub: 2026-02-12) | DOI: 10.1016/j.asw.2026.101020
Xinyu Ma, Cong Zhang, Icy Lee
Abstract: While feedback has gained considerable attention in second language (L2) writing, little is known about generative AI feedback, especially how students engage with it during the revision process. Drawing on qualitative data from multiple textual sources and in-depth interviews, the study explored the cognitive, behavioral, and affective engagement of six Chinese EFL students with AI feedback generated by ChatGPT. Cognitively, students demonstrated an improved understanding of AI feedback through sustained interactions with AI and engaged in cognitive and metacognitive operations including setting goals, retrieving prior knowledge, and evaluating. Behaviorally, students were more receptive to local feedback than global feedback, and they employed revision strategies such as planning the revision sequence and consulting external resources. Affectively, most students displayed critical attitudes toward AI feedback, except for one student who expressed skeptical attitudes, and they experienced a range of emotional responses and fluctuations when dealing with feedback. Additionally, the three dimensions of engagement were found to be interconnected, with learner agency playing a vital role in the tripartite framework of engagement. The study contributes to the understanding of applying AI feedback to L2 writing, highlighting its potential to foster student engagement and support writing development.
Assessing Writing, Vol. 68, Article 101020 | Citations: 0