Examining Accessibility, Credibility, and Accountability in Digital Assessment: A Systematic Literature Review

Richard Mulenga, Allan Mwenya
American Journal of Interdisciplinary Research and Innovation, published 2024-08-08. DOI: 10.54536/ajiri.v3i3.2908
This paper examines the accessibility, credibility, and accountability of digital assessments through a systematic literature review. Although the extant literature is replete with studies on educational digital assessments, no systematic review to date has re-examined the accessibility, credibility, and accountability of digital assessments across the pre- and post-COVID-19 periods. The growing ubiquity of digital assessments in academic and professional contexts, especially in the post-COVID-19 era, makes this systematic review necessary. The main finding of this study is that, despite the growing ubiquity of digital assessments after the COVID-19 crisis, digital assessments appear deficient in employing assistive technologies. Additionally, the sudden migration to digital assessments poses challenges in maintaining standards of validity and reliability commensurate with those of traditional in-person assessments. Therefore, going forward, we recommend extensive integration of assistive technologies into academic and professional digital assessments to enhance accessibility. Digital assessing authorities should also establish mechanisms for detecting cheating and plagiarism in digital assessments. This study underscores the overarching need for holistic approaches that balance technological innovation with ethical imperatives in digital assessments, and it contributes to a 21st-century understanding of the complex, dynamic digital assessment landscape.