CHEAT DETECTION IN ONLINE EXAMINATIONS USING ARTIFICIAL INTELLIGENCE

Authors

  • Santosh Gopane K. J. Somaiya Institute of Technology, University of Mumbai, India
  • Radhika Kotecha K. J. Somaiya Institute of Technology, University of Mumbai, India
  • Janhavi Obhan K. J. Somaiya Institute of Technology, University of Mumbai, India
  • Ritesh Kumar Pandey K. J. Somaiya Institute of Technology, University of Mumbai, India

DOI:

https://doi.org/10.11113/aej.v14.20188

Keywords:

Artificial Intelligence, Computer Vision, Cheat Detection, Image Processing, Online Examinations

Abstract

With the increasing use of ICT and ongoing technical advances in the education sector, distance and online education, as well as online examinations, are now carried out frequently. However, an online examination, as a method of assessment, carries the risk of an unmonitored setting in which students have full access to external resources. To counteract this, online-proctored exams are the most effective way for educational institutions to ensure academic honesty and ethics. Typically, proctoring requires human assistance in the form of online proctors who remotely monitor students during the exam. Yet, owing to the rising demand for personnel and the intrusive nature of human proctoring, it is imperative to explore alternative approaches. To tackle this pressing issue, this research work devises a novel architecture that, through a robust and automated Artificial Intelligence system, enables students to take exams remotely while reducing proctor involvement. The method overcomes the shortcomings of previous automated proctoring systems by combining the key components of online exam cheat detection with cost-effective and efficient hardware. By proposing a hybrid of the FaceNet model, the Lucas-Kanade algorithm, and the Active Appearance Model for face detection and activity monitoring of the student, the proposed system extracts semantic indicators to evaluate whether a candidate is cheating in an online examination. Experimental results, with an F-score of 0.94, demonstrate the proposed cheat detection system's efficacy and its promising performance compared with standard baseline techniques.
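
As a rough illustration of the activity-monitoring idea described above, the sketch below tracks facial feature points across webcam frames with OpenCV's pyramidal Lucas-Kanade optical flow and logs a warning when the mean point displacement exceeds a threshold. It is a minimal sketch, not the paper's implementation: the Haar-cascade face detector stands in for the FaceNet/Active Appearance Model pipeline, and the point budget and motion threshold are illustrative assumptions.

# Minimal sketch of Lucas-Kanade-based activity monitoring (assumptions noted above).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
MOTION_THRESHOLD = 8.0  # mean per-point displacement (pixels) treated as suspicious -- assumed value

cap = cv2.VideoCapture(0)  # default webcam
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Seed tracking points inside the first detected face region.
faces = face_cascade.detectMultiScale(prev_gray, scaleFactor=1.3, minNeighbors=5)
if len(faces) == 0:
    raise SystemExit("No face found in the first frame")
x, y, w, h = faces[0]
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01,
                              minDistance=7, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Lucas-Kanade tracks the seeded points from the previous frame to this one.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    good_new = new_pts[status == 1]  # (M, 2) successfully tracked points
    good_old = pts[status == 1]

    if len(good_new) > 0:
        displacement = float(np.linalg.norm(good_new - good_old, axis=1).mean())
        if displacement > MOTION_THRESHOLD:
            print("Possible off-screen movement detected")  # candidate semantic indicator for the proctor log

    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)

cap.release()

In the paper's architecture, a flagged displacement of this kind would be only one of several semantic indicators combined before deciding whether a candidate's behaviour is suspicious.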

Published

2024-02-29

Issue

Vol. 14 No. 1 (2024)

Section

Articles

How to Cite

CHEAT DETECTION IN ONLINE EXAMINATIONS USING ARTIFICIAL INTELLIGENCE. (2024). ASEAN Engineering Journal, 14(1), 121-128. https://doi.org/10.11113/aej.v14.20188