{"title":"结合YOLOv5模型的面部标志对在线考试学生异常行为的检测","authors":"Muhanad Abdul Elah Alkhalisy, Saad Hameed Abid","doi":"10.25195/ijci.v49i1.380","DOIUrl":null,"url":null,"abstract":"The popularity of massive open online courses (MOOCs) and other forms of distance learning has increased recently. Schools and institutions are going online to serve their students better. Exam integrity depends on the effectiveness of proctoring remote online exams. Proctoring services powered by computer vision and artificial intelligence have also gained popularity. Such systems should employ methods to guarantee an impartial examination. This research demonstrates how to create a multi-model computer vision system to identify and prevent abnormal student behaviour during exams. The system uses You only look once (YOLO) models and Dlib facial landmarks to recognize faces, objects, eye, hand, and mouth opening movement, gaze sideways, and use a mobile phone. Our approach offered a model that analyzes student behaviour using a deep neural network model learned from our newly produced dataset\" StudentBehavioralDS.\" On the generated dataset, the \"Behavioral Detection Model\" had a mean Average Precision (mAP) of 0.87, while the \"Mouth Opening Detection Model\" and \"Person and Objects Detection Model\" had accuracies of 0.95 and 0.96, respectively. This work demonstrates good detection accuracy. We conclude that using computer vision and deep learning models trained on a private dataset, our idea provides a range of techniques to spot odd student behaviour during online tests.","PeriodicalId":53384,"journal":{"name":"Iraqi Journal for Computers and Informatics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Detection of Students' Abnormal Behavior in Online Exams Using Facial Landmarks in Conjunction with the YOLOv5 Models\",\"authors\":\"Muhanad Abdul Elah Alkhalisy, Saad Hameed Abid\",\"doi\":\"10.25195/ijci.v49i1.380\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The popularity of massive open online courses (MOOCs) and other forms of distance learning has increased recently. Schools and institutions are going online to serve their students better. Exam integrity depends on the effectiveness of proctoring remote online exams. Proctoring services powered by computer vision and artificial intelligence have also gained popularity. Such systems should employ methods to guarantee an impartial examination. This research demonstrates how to create a multi-model computer vision system to identify and prevent abnormal student behaviour during exams. The system uses You only look once (YOLO) models and Dlib facial landmarks to recognize faces, objects, eye, hand, and mouth opening movement, gaze sideways, and use a mobile phone. Our approach offered a model that analyzes student behaviour using a deep neural network model learned from our newly produced dataset\\\" StudentBehavioralDS.\\\" On the generated dataset, the \\\"Behavioral Detection Model\\\" had a mean Average Precision (mAP) of 0.87, while the \\\"Mouth Opening Detection Model\\\" and \\\"Person and Objects Detection Model\\\" had accuracies of 0.95 and 0.96, respectively. This work demonstrates good detection accuracy. 
We conclude that using computer vision and deep learning models trained on a private dataset, our idea provides a range of techniques to spot odd student behaviour during online tests.\",\"PeriodicalId\":53384,\"journal\":{\"name\":\"Iraqi Journal for Computers and Informatics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Iraqi Journal for Computers and Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.25195/ijci.v49i1.380\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Iraqi Journal for Computers and Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25195/ijci.v49i1.380","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The popularity of massive open online courses (MOOCs) and other forms of distance learning has increased recently, and schools and institutions are moving online to serve their students better. Exam integrity depends on the effectiveness of proctoring remote online exams, and proctoring services powered by computer vision and artificial intelligence have gained popularity. Such systems should employ methods that guarantee an impartial examination. This research demonstrates how to build a multi-model computer vision system that identifies and helps prevent abnormal student behaviour during exams. The system uses You Only Look Once (YOLO) models and Dlib facial landmarks to recognize faces and objects and to detect eye, hand, and mouth-opening movements, sideways gaze, and mobile phone use. Our approach provides a model that analyzes student behaviour using a deep neural network trained on our newly produced dataset, "StudentBehavioralDS". On the generated dataset, the "Behavioral Detection Model" achieved a mean Average Precision (mAP) of 0.87, while the "Mouth Opening Detection Model" and the "Person and Objects Detection Model" achieved accuracies of 0.95 and 0.96, respectively. This work demonstrates good detection accuracy. We conclude that, using computer vision and deep learning models trained on a private dataset, our approach provides a range of techniques for spotting abnormal student behaviour during online exams.
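
The abstract does not include implementation details, so the following is a minimal sketch, not the authors' code, of how a YOLOv5 object detector and Dlib's 68-point facial landmarks could be combined to flag a few of the cues mentioned (an extra person in frame, a visible phone, mouth opening). The pretrained weights, class names, and the 0.35 mouth-opening threshold are illustrative assumptions; the paper's own models are trained on its private "StudentBehavioralDS" dataset.

```python
# Sketch only: YOLOv5 object cues + Dlib landmark cues for one webcam frame.
import cv2
import dlib
import torch

# Generic pretrained YOLOv5 model from the Ultralytics hub; the paper's
# "Person and Objects Detection Model" would be a fine-tuned variant.
yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Dlib frontal face detector plus the standard 68-landmark predictor
# (shape_predictor_68_face_landmarks.dat must be downloaded separately).
face_detector = dlib.get_frontal_face_detector()
landmark_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")


def mouth_open_ratio(landmarks) -> float:
    """Inner-lip gap divided by inner-mouth width (points 60-67 are the inner mouth)."""
    top, bottom = landmarks.part(62), landmarks.part(66)
    left, right = landmarks.part(60), landmarks.part(64)
    width = max(abs(right.x - left.x), 1)
    return abs(bottom.y - top.y) / width


def analyse_frame(frame_bgr) -> list:
    """Return a list of flagged events for a single BGR webcam frame."""
    events = []

    # Object-level cues from YOLOv5: extra people or a visible phone.
    results = yolo(frame_bgr[..., ::-1])  # AutoShape expects RGB input
    labels = [yolo.names[int(cls)] for cls in results.xyxy[0][:, 5]]
    if labels.count("person") > 1:
        events.append("more than one person in frame")
    if "cell phone" in labels:
        events.append("mobile phone detected")

    # Landmark-level cue: mouth opening (e.g. talking during the exam).
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for face in face_detector(gray):
        landmarks = landmark_predictor(gray, face)
        if mouth_open_ratio(landmarks) > 0.35:  # assumed threshold
            events.append("mouth open")

    return events
```

According to the abstract, the full system also covers eye and hand movement and sideways gaze; in practice the per-frame event list produced by a sketch like this would feed a rule that flags a student only after repeated detections, rather than on a single frame.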