Deliberations on the contemporary assessment system

U. Mahboob
{"title":"关于当代考核制度的讨论","authors":"U. Mahboob","doi":"10.53708/hpej.v2i2.235","DOIUrl":null,"url":null,"abstract":"There are different apprehensions regarding the contemporary assessment system. Often, I listen to my colleagues saying that multiple-choice questions are seen as easier to score. Why can’t all assessments be multiple-choice tests? Some others would say, whether the tests given reflect what students will need to know as competent professionals? What evidence can be collected to make sure that test content is relevant? Others come up with concerns that there is a perception amongst students that some examiners are harsher than others and some tasks are easier than others. What can be done to evaluate whether this is the case? Sometimes, the students come up with queries that they are concerned about being observed when interacting with patients. They are not sure why this is needed. What rationale is there for using workplace-based assessment? Some of the students worry if the pass marks for the assessments are ‘correct’, and what is the evidence for the cut-off scores? All these questions are important, and I would deliberate upon them with evidence from the literature. Deliberating on the first query of using multiple-choice questions for everything, we know that assessment of a medical student is a complex process as there are multiple domains of learning such as cognition, skills, and behaviors (Norcini and McKinley, 2007)(Boulet and Raymond, 2018). Each of the domains further has multiple levels from simple to complex tasks (Norcini and McKinley, 2007). For example, the cognition is further divided into six levels, starting from recall (Cognition level 1 or C1) up to creativity (Cognition level 6 or C6) (Norcini and McKinley, 2007). Similarly, the skills and behaviors also have levels starting from observation up to performance and practice (Norcini and McKinley, 2007). Moreover, there are different competencies within each domain that further complicates our task as an assessor to appropriately assess a student (Boulet and Raymond, 2018). For instance, within the cognitive domain, it is not just making the learning objectives based on Bloom’s Taxonomy that would simplify our task because the literature suggests that individuals have different thinking mechanisms, such as fast and slow thinking to perform a task (Kahneman, 2011). We as educationalists do not know what sort of cognitive mechanism have we triggered through our exam items (Swanson and Case, 1998). Multiple Choice Questions is one of the assessment instruments to measure competencies related to the cognitive domain. This means that we cannot use multiple-choice questions to measure the skills and behaviors domains, so clearly multiple-choice questions cannot assess all domains of learning (Vleuten et al, 2010). Within the cognitive domain, there are multiple levels and different ways of thinking mechanisms (Kahneman, 2011). Each assessment instrument has its strength and limitations. Multiple-choice questions may be able to assess a few of the competencies, also with some added benefits in terms of marking but there always are limitations. The multiple-choice question is no different when it comes to the strengths and limitations profile of an assessment instrument (Swanson and Case, 1998). There are certain competencies that can be easily assessed using multiple-choice questions (Swanson and Case, 1998). 
For example, content that requires recall, application, and analysis can be assessed with the help of multiple-choice questions. However, creativity or synthesis which is cognition level six (C6) as per Blooms’ Taxonomy, cannot be assessed with closed-ended questions such as a multiple-choice question. This means that we need some additional assessment instruments to measure the higher levels of cognition within the cognitive domain. For example, asking students to explore an open-ended question as a research project can assess the higher levels of cognition because the students would be gathering information from different sources of literature, and then synthesizing it to answer the question. It is reported that marking and reading the essay questions would be time-consuming for the teachers (McLean and Gale, 2018). Hence, the teacher to student’s ratio in assessing the higher levels of cognition needs to be monitored so that teachers or assessors can give appropriate time to assess the higher levels of cognition of their students. Hence, we have to use other forms of assessment instruments along with multiple-choice questions to assess the cognitive domain. This will help to assess the different levels of cognition and will also incite the different thinking mechanisms. Regarding the concerns, whether the tests given reflect what students will need to know as competent professionals? What evidence can be collected to make sure that test content is relevant? It is one of an important issue for medical education and assessment directors whether the tests that they are taking are reflective of the students being competent practitioners? It is also quite challenging as some of the competencies such as professionalism or professional identity formation are difficult to be measured quantitatively with the traditional assessment instruments (Cruess, Cruess, & Steinert, 2016). Moreover, there is also a question if all the competencies that are required for a medical graduate can be assessed with the assessment instruments presently available? Hence, we as educationalists have to provide evidence for the assessment of required competencies and relevant content. One of the ways that we can opt is to carefully align the required content with their relevant assessment instruments. This can be done with the help of assessment blueprints, or also known as the table of specifications in some of the literature (Norcini and McKinley, 2013). An assessment blueprint enables us to demonstrate our planned curriculum, that is, what are our planned objectives, and how are we going to teach and assess them (Boulet and Raymond, 2018). We can also use the validity construct in addition to the assessment blueprints to provide evidence for testing the relevant content. Validity means that the test is able to measure what it is supposed to measure (Boulet and Raymond, 2018). There are different types of validity but one of the validity that is required in this situation to establish the appropriateness of the content is the Content Validity. Content validity is established by a number of subject experts who comment on the appropriateness and relevance of the content (Lawshe, 1975). The third method by which the relevance of content can be established is through standard-setting. A standard is a single cut-off score to qualitatively declare a student competent or incompetent based on the judgment of subject experts (Norcini and McKinley, 2013). 
There are different ways of standard-setting for example Angoff, Ebel, Borderline method, etc. (Norcini and McKinley, 2013). Although the main purpose is the establishment and decides the cut-off score during the process, the experts also debate on the appropriateness and relevance of the content. \nThis means that the standard-setting methods also have validity procedures that are in-built in their process of establishing the cut-off score. These are some of the methods by which we can provide evidence of the relevance of the content that is required to produce a competent practitioner. The next issue is the perception amongst students that some examiners are harsher than others and some tasks are easier than others. Both these observations have quite a lot of truth in them and can be evaluated following the contemporary medical education evaluation techniques. The first issue reported is that some examiners are harsher than others. In terms of assessment, it has been reported in the literature as ‘hawk dove effect’ (McManus et al, 2006, Murphy et al, 2009). There are different reasons identified in the literature for some of the examiners to be more stringent than others such as age, ethnic background, behavioral reasons, educational background, and experience in a number of years (McManus et al, 2006). Specifically, those examiners who are from ethnic minorities and have more experience show more stringency (McManus et al, 2006). Interestingly, it has been reported elsewhere how the glucose levels affect the decision making of the pass-fail judgments (Kahneman, 2011). There are psychometric methods reported in the literature, such as Rasch modeling that can help determine the ‘hawk dove effect’ of different examiners, and whether it is too extreme or within a zone of normal deviation (McManus et al, 2006, Murphy, et al, 2009). Moreover, the literature also suggests ways to minimize the hawk-dove effect by identifying and paring such examiners so the strictness of one can be compensated by the leniency of the other examiner (McManus et al, 2006). The other issue in this situation is that the students find some tasks easier than others. This is dependent on the complexity of tasks and also on the competence level of students. For example, a medical student may achieve independent measuring of blood pressure in his/her first year but even a consultant surgeon may not be able to perform complex surgery such as a Whipple procedure. This means that while developing tasks we as educationalists have to consider both the competence level of our students and the complexity of the tasks. One way to theoretically understand it is by taking help from the cognitive load theory (Merrienboer 2013). The cognitive load theory suggests that there are three types of cognitive loads; namely, the Intrinsic, Extraneous, and Germane loads (Merrienboer 2013). The intrinsic load is associated with the complexity of the task. The extraneous load is added to the working memory of students due to a teacher who does not plan his/her teaching session as per students' needs (Merrienboer 2013). 
The third load is the germane or the good load that helps the st","PeriodicalId":338468,"journal":{"name":"Health Professions Educator Journal","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Deliberations on the contemporary assessment system\",\"authors\":\"U. Mahboob\",\"doi\":\"10.53708/hpej.v2i2.235\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are different apprehensions regarding the contemporary assessment system. Often, I listen to my colleagues saying that multiple-choice questions are seen as easier to score. Why can’t all assessments be multiple-choice tests? Some others would say, whether the tests given reflect what students will need to know as competent professionals? What evidence can be collected to make sure that test content is relevant? Others come up with concerns that there is a perception amongst students that some examiners are harsher than others and some tasks are easier than others. What can be done to evaluate whether this is the case? Sometimes, the students come up with queries that they are concerned about being observed when interacting with patients. They are not sure why this is needed. What rationale is there for using workplace-based assessment? Some of the students worry if the pass marks for the assessments are ‘correct’, and what is the evidence for the cut-off scores? All these questions are important, and I would deliberate upon them with evidence from the literature. Deliberating on the first query of using multiple-choice questions for everything, we know that assessment of a medical student is a complex process as there are multiple domains of learning such as cognition, skills, and behaviors (Norcini and McKinley, 2007)(Boulet and Raymond, 2018). Each of the domains further has multiple levels from simple to complex tasks (Norcini and McKinley, 2007). For example, the cognition is further divided into six levels, starting from recall (Cognition level 1 or C1) up to creativity (Cognition level 6 or C6) (Norcini and McKinley, 2007). Similarly, the skills and behaviors also have levels starting from observation up to performance and practice (Norcini and McKinley, 2007). Moreover, there are different competencies within each domain that further complicates our task as an assessor to appropriately assess a student (Boulet and Raymond, 2018). For instance, within the cognitive domain, it is not just making the learning objectives based on Bloom’s Taxonomy that would simplify our task because the literature suggests that individuals have different thinking mechanisms, such as fast and slow thinking to perform a task (Kahneman, 2011). We as educationalists do not know what sort of cognitive mechanism have we triggered through our exam items (Swanson and Case, 1998). Multiple Choice Questions is one of the assessment instruments to measure competencies related to the cognitive domain. This means that we cannot use multiple-choice questions to measure the skills and behaviors domains, so clearly multiple-choice questions cannot assess all domains of learning (Vleuten et al, 2010). Within the cognitive domain, there are multiple levels and different ways of thinking mechanisms (Kahneman, 2011). Each assessment instrument has its strength and limitations. 
Multiple-choice questions may be able to assess a few of the competencies, also with some added benefits in terms of marking but there always are limitations. The multiple-choice question is no different when it comes to the strengths and limitations profile of an assessment instrument (Swanson and Case, 1998). There are certain competencies that can be easily assessed using multiple-choice questions (Swanson and Case, 1998). For example, content that requires recall, application, and analysis can be assessed with the help of multiple-choice questions. However, creativity or synthesis which is cognition level six (C6) as per Blooms’ Taxonomy, cannot be assessed with closed-ended questions such as a multiple-choice question. This means that we need some additional assessment instruments to measure the higher levels of cognition within the cognitive domain. For example, asking students to explore an open-ended question as a research project can assess the higher levels of cognition because the students would be gathering information from different sources of literature, and then synthesizing it to answer the question. It is reported that marking and reading the essay questions would be time-consuming for the teachers (McLean and Gale, 2018). Hence, the teacher to student’s ratio in assessing the higher levels of cognition needs to be monitored so that teachers or assessors can give appropriate time to assess the higher levels of cognition of their students. Hence, we have to use other forms of assessment instruments along with multiple-choice questions to assess the cognitive domain. This will help to assess the different levels of cognition and will also incite the different thinking mechanisms. Regarding the concerns, whether the tests given reflect what students will need to know as competent professionals? What evidence can be collected to make sure that test content is relevant? It is one of an important issue for medical education and assessment directors whether the tests that they are taking are reflective of the students being competent practitioners? It is also quite challenging as some of the competencies such as professionalism or professional identity formation are difficult to be measured quantitatively with the traditional assessment instruments (Cruess, Cruess, & Steinert, 2016). Moreover, there is also a question if all the competencies that are required for a medical graduate can be assessed with the assessment instruments presently available? Hence, we as educationalists have to provide evidence for the assessment of required competencies and relevant content. One of the ways that we can opt is to carefully align the required content with their relevant assessment instruments. This can be done with the help of assessment blueprints, or also known as the table of specifications in some of the literature (Norcini and McKinley, 2013). An assessment blueprint enables us to demonstrate our planned curriculum, that is, what are our planned objectives, and how are we going to teach and assess them (Boulet and Raymond, 2018). We can also use the validity construct in addition to the assessment blueprints to provide evidence for testing the relevant content. Validity means that the test is able to measure what it is supposed to measure (Boulet and Raymond, 2018). There are different types of validity but one of the validity that is required in this situation to establish the appropriateness of the content is the Content Validity. 
Content validity is established by a number of subject experts who comment on the appropriateness and relevance of the content (Lawshe, 1975). The third method by which the relevance of content can be established is through standard-setting. A standard is a single cut-off score to qualitatively declare a student competent or incompetent based on the judgment of subject experts (Norcini and McKinley, 2013). There are different ways of standard-setting for example Angoff, Ebel, Borderline method, etc. (Norcini and McKinley, 2013). Although the main purpose is the establishment and decides the cut-off score during the process, the experts also debate on the appropriateness and relevance of the content. \\nThis means that the standard-setting methods also have validity procedures that are in-built in their process of establishing the cut-off score. These are some of the methods by which we can provide evidence of the relevance of the content that is required to produce a competent practitioner. The next issue is the perception amongst students that some examiners are harsher than others and some tasks are easier than others. Both these observations have quite a lot of truth in them and can be evaluated following the contemporary medical education evaluation techniques. The first issue reported is that some examiners are harsher than others. In terms of assessment, it has been reported in the literature as ‘hawk dove effect’ (McManus et al, 2006, Murphy et al, 2009). There are different reasons identified in the literature for some of the examiners to be more stringent than others such as age, ethnic background, behavioral reasons, educational background, and experience in a number of years (McManus et al, 2006). Specifically, those examiners who are from ethnic minorities and have more experience show more stringency (McManus et al, 2006). Interestingly, it has been reported elsewhere how the glucose levels affect the decision making of the pass-fail judgments (Kahneman, 2011). There are psychometric methods reported in the literature, such as Rasch modeling that can help determine the ‘hawk dove effect’ of different examiners, and whether it is too extreme or within a zone of normal deviation (McManus et al, 2006, Murphy, et al, 2009). Moreover, the literature also suggests ways to minimize the hawk-dove effect by identifying and paring such examiners so the strictness of one can be compensated by the leniency of the other examiner (McManus et al, 2006). The other issue in this situation is that the students find some tasks easier than others. This is dependent on the complexity of tasks and also on the competence level of students. For example, a medical student may achieve independent measuring of blood pressure in his/her first year but even a consultant surgeon may not be able to perform complex surgery such as a Whipple procedure. This means that while developing tasks we as educationalists have to consider both the competence level of our students and the complexity of the tasks. One way to theoretically understand it is by taking help from the cognitive load theory (Merrienboer 2013). The cognitive load theory suggests that there are three types of cognitive loads; namely, the Intrinsic, Extraneous, and Germane loads (Merrienboer 2013). The intrinsic load is associated with the complexity of the task. The extraneous load is added to the working memory of students due to a teacher who does not plan his/her teaching session as per students' needs (Merrienboer 2013). 
The third load is the germane or the good load that helps the st\",\"PeriodicalId\":338468,\"journal\":{\"name\":\"Health Professions Educator Journal\",\"volume\":\"66 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-06-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Health Professions Educator Journal\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.53708/hpej.v2i2.235\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Health Professions Educator Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.53708/hpej.v2i2.235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

There are different apprehensions regarding the contemporary assessment system. Often, I hear colleagues say that multiple-choice questions are easier to score, so why can't all assessments be multiple-choice tests? Others ask whether the tests we give reflect what students will need to know as competent professionals, and what evidence can be collected to make sure that test content is relevant. Still others raise the concern that students perceive some examiners as harsher than others and some tasks as easier than others: what can be done to evaluate whether this is the case? Sometimes students query why they need to be observed when interacting with patients: what is the rationale for workplace-based assessment? Some students also worry whether the pass marks for the assessments are 'correct', and what the evidence is for the cut-off scores. All these questions are important, and I deliberate upon them here with evidence from the literature.

Deliberating on the first query, of using multiple-choice questions for everything: assessment of a medical student is a complex process, as there are multiple domains of learning, such as cognition, skills, and behaviors (Norcini and McKinley, 2007; Boulet and Raymond, 2018). Each domain has multiple levels, from simple to complex tasks (Norcini and McKinley, 2007). For example, cognition is divided into six levels, starting from recall (cognition level 1, or C1) up to creativity (cognition level 6, or C6) (Norcini and McKinley, 2007). Similarly, skills and behaviors have levels starting from observation up to performance and practice (Norcini and McKinley, 2007). Moreover, there are different competencies within each domain, which further complicates our task as assessors of appropriately assessing a student (Boulet and Raymond, 2018). For instance, within the cognitive domain, writing learning objectives based on Bloom's Taxonomy does not by itself simplify our task, because the literature suggests that individuals use different thinking mechanisms, such as fast and slow thinking, to perform a task (Kahneman, 2011). We as educationalists do not know what sort of cognitive mechanism we have triggered through our exam items (Swanson and Case, 1998). Multiple-choice questions are one of the assessment instruments for measuring competencies related to the cognitive domain; they cannot measure the skills and behaviors domains, so clearly multiple-choice questions cannot assess all domains of learning (Vleuten et al., 2010). Within the cognitive domain, there are multiple levels and different thinking mechanisms (Kahneman, 2011). Each assessment instrument has its strengths and limitations, and the multiple-choice question is no different in this respect (Swanson and Case, 1998). Certain competencies can be easily assessed using multiple-choice questions (Swanson and Case, 1998); for example, content that requires recall, application, and analysis. However, creativity or synthesis, which is cognition level six (C6) in Bloom's Taxonomy, cannot be assessed with closed-ended questions such as multiple-choice questions. This means that we need additional assessment instruments to measure the higher levels of cognition within the cognitive domain. For example, asking students to explore an open-ended question as a research project can assess the higher levels of cognition, because the students gather information from different sources of literature and then synthesize it to answer the question. Marking and reading such essay-type questions, however, is time-consuming for teachers (McLean and Gale, 2018), so the teacher-to-student ratio in assessing the higher levels of cognition needs to be monitored, so that teachers or assessors can give appropriate time to assessing their students' higher levels of cognition. Hence, we have to use other forms of assessment instruments along with multiple-choice questions to assess the cognitive domain; this helps to assess the different levels of cognition and also engages the different thinking mechanisms.

Regarding the second concern, whether the tests given reflect what students will need to know as competent professionals and what evidence can be collected to make sure that test content is relevant: it is an important issue for medical education and assessment directors whether the tests they administer reflect whether students will be competent practitioners. It is also quite challenging, as some competencies, such as professionalism or professional identity formation, are difficult to measure quantitatively with traditional assessment instruments (Cruess, Cruess, and Steinert, 2016). Moreover, it is questionable whether all the competencies required of a medical graduate can be assessed with the assessment instruments presently available. Hence, we as educationalists have to provide evidence for the assessment of required competencies and relevant content. One option is to carefully align the required content with the relevant assessment instruments. This can be done with the help of assessment blueprints, also known in some of the literature as tables of specifications (Norcini and McKinley, 2013). An assessment blueprint enables us to demonstrate our planned curriculum, that is, what our planned objectives are and how we are going to teach and assess them (Boulet and Raymond, 2018). In addition to assessment blueprints, we can use the validity construct to provide evidence that we are testing the relevant content. Validity means that a test measures what it is supposed to measure (Boulet and Raymond, 2018). There are different types of validity, but the one required in this situation to establish the appropriateness of the content is content validity. Content validity is established by a number of subject experts who comment on the appropriateness and relevance of the content (Lawshe, 1975).
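Lawshe's approach can also be summarized quantitatively. As a brief sketch (following the usual statement of Lawshe's content validity ratio), suppose each of N panelists rates an item as 'essential' or not; the content validity ratio for that item is

\[ \mathrm{CVR} = \frac{n_e - N/2}{N/2} \]

where n_e is the number of panelists rating the item as essential. CVR ranges from -1 to +1; for example, with N = 10 panelists of whom n_e = 8 rate an item essential, CVR = (8 - 5)/5 = 0.6. Items whose CVR falls below the critical value for the given panel size are candidates for revision or removal.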
The third method by which the relevance of content can be established is standard-setting. A standard is a single cut-off score used to qualitatively declare a student competent or incompetent, based on the judgment of subject experts (Norcini and McKinley, 2013). There are different ways of standard-setting, for example the Angoff, Ebel, and borderline methods (Norcini and McKinley, 2013). Although the main purpose of the process is to decide the cut-off score, the experts also debate the appropriateness and relevance of the content along the way, which means that the standard-setting methods have validity procedures built into the process of establishing the cut-off score. These are some of the methods by which we can provide evidence that the content required to produce a competent practitioner is relevant.
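To make the idea concrete, the sketch below shows an Angoff-style calculation of a cut-off score; the judges, items, and ratings are hypothetical, and real standard-setting exercises add discussion rounds, performance data, and checks that are omitted here.

```python
# Angoff-style standard-setting sketch (illustrative only).
# Each judge estimates, for every item, the probability that a borderline
# (minimally competent) candidate would answer the item correctly.

# Hypothetical ratings: one row per judge, one column per item.
judge_ratings = [
    [0.60, 0.75, 0.40, 0.85, 0.55],  # Judge A
    [0.65, 0.70, 0.45, 0.80, 0.50],  # Judge B
    [0.55, 0.80, 0.35, 0.90, 0.60],  # Judge C
]

n_judges = len(judge_ratings)
n_items = len(judge_ratings[0])

# Average the judges' estimates for each item.
item_means = [
    sum(judge[i] for judge in judge_ratings) / n_judges
    for i in range(n_items)
]

# The cut-off score is the expected raw score of a borderline candidate,
# i.e. the sum of the per-item averages.
cut_score = sum(item_means)

print(f"Expected borderline score: {cut_score:.2f} out of {n_items}")
print(f"Suggested pass mark: {100 * cut_score / n_items:.1f}%")
```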
The next issue is the perception amongst students that some examiners are harsher than others and that some tasks are easier than others. Both observations have quite a lot of truth in them and can be evaluated using contemporary medical education evaluation techniques. The first issue, that some examiners are harsher than others, has been reported in the assessment literature as the 'hawk-dove effect' (McManus et al., 2006; Murphy et al., 2009). The literature identifies different reasons why some examiners are more stringent than others, such as age, ethnic background, behavioral factors, educational background, and years of experience (McManus et al., 2006); specifically, examiners who are from ethnic minorities and who have more experience show more stringency (McManus et al., 2006). Interestingly, it has been reported elsewhere how glucose levels affect pass-fail judgments (Kahneman, 2011). Psychometric methods reported in the literature, such as Rasch modeling (sketched below), can help determine the hawk-dove effect of different examiners and whether it is too extreme or within a zone of normal deviation (McManus et al., 2006; Murphy et al., 2009). The literature also suggests ways to minimize the hawk-dove effect by identifying and pairing such examiners, so that the strictness of one can be compensated by the leniency of the other (McManus et al., 2006).

The other issue in this situation is that students find some tasks easier than others. This depends on the complexity of the tasks and also on the competence level of the students. For example, a medical student may achieve independent measurement of blood pressure in his or her first year, yet even a consultant surgeon may not be able to perform complex surgery such as a Whipple procedure. This means that, while developing tasks, we as educationalists have to consider both the competence level of our students and the complexity of the tasks. One way to understand this theoretically is with the help of cognitive load theory (Merrienboer, 2013). Cognitive load theory suggests that there are three types of cognitive load, namely the intrinsic, extraneous, and germane loads (Merrienboer, 2013). The intrinsic load is associated with the complexity of the task. The extraneous load is added to the working memory of students by a teacher who does not plan his or her teaching session according to students' needs (Merrienboer, 2013). The third load is the germane or the good load that helps the st
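As a sketch of how examiner stringency can be modeled (the notation here is illustrative; published hawk-dove analyses typically use fuller many-facet Rasch formulations), a common dichotomous form adds an examiner-severity term to the usual ability and difficulty parameters:

\[ \ln\frac{P_{nij}}{1 - P_{nij}} = \theta_n - \delta_i - \gamma_j \]

where P_nij is the probability that candidate n succeeds on item or station i when judged by examiner j, theta_n is the candidate's ability, delta_i is the item's difficulty, and gamma_j is the examiner's severity. Comparing the estimated gamma_j values indicates which examiners behave as hawks (high severity) or doves (low severity), and whether that spread lies within an acceptable range.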