• Santosh Gopane K. J. Somaiya Institute of Technology, University of Mumbai, India
  • Radhika Kotecha K. J. Somaiya Institute of Technology, University of Mumbai, India
  • Janhavi Obhan K. J. Somaiya Institute of Technology, University of Mumbai, India
  • Ritesh Kumar Pandey K. J. Somaiya Institute of Technology, University of Mumbai, India



Artificial Intelligence, Computer Vision, Cheat Detection, Image Processing, Online Examinations


With the increasing use of ICT and technological advances in the education sector, distance and online education, as well as examinations, are carried out frequently. However, online examination as a method of assessment carries the risk of an unmonitored setting in which students have full access to external resources. Online-proctored exams are the most effective way for educational institutions to ensure academic honesty and ethics in this setting. Typically, proctoring requires human assistance in the form of online proctors who remotely monitor students' activity. Yet, owing to the rising demand for personnel and the intrusive nature of human proctoring, it is imperative to explore alternatives. To tackle this pressing issue, this research work devises a novel architecture that, through a robust and automated Artificial Intelligence system, enables students to take exams remotely while reducing proctor involvement. The method overcomes the shortcomings of previous automated proctoring systems by combining the key components of online exam cheating detection with cost-effective and efficient hardware. By proposing a hybrid of the FaceNet model, the Lucas-Kanade algorithm, and the Active Appearance Model for face detection and activity monitoring, the proposed system extracts semantic indicators to evaluate whether a candidate is cheating in an online examination. Experimental results, measured via an F-score of 0.94, demonstrate the efficacy of the proposed cheat detection system and its promising performance compared with standard baseline techniques.
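The identity-verification step named in the abstract can be illustrated with a minimal sketch: FaceNet maps each face image to a fixed-length embedding, and two embeddings belonging to the same person lie close together in Euclidean distance. The function below assumes such embeddings are already available; the `threshold` value and the helper name `verify_identity` are illustrative, not taken from the paper.

```python
import numpy as np

def verify_identity(ref_embedding, live_embedding, threshold=1.1):
    """Compare a reference (enrollment) embedding against a live webcam
    embedding. FaceNet-style embeddings of the same identity have a small
    L2 distance; the threshold here is an illustrative value only."""
    dist = float(np.linalg.norm(ref_embedding - live_embedding))
    return dist < threshold, dist

# Toy 128-d vectors standing in for real FaceNet outputs.
ref = np.zeros(128); ref[0] = 1.0
same_person = ref + 0.05          # small perturbation: should pass
other_person = np.zeros(128); other_person[1] = 1.0  # far away: should fail

ok, d_same = verify_identity(ref, same_person)
flagged, d_other = verify_identity(ref, other_person)
```

In the full system described by the abstract, a failed check like `flagged` above would be one semantic indicator fed into the cheat-detection decision, alongside the Lucas-Kanade and Active Appearance Model activity signals.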


