
RANDOM INTERVAL QUERY AND FACE RECOGNITION

ATTENDANCE SYSTEM FOR VIRTUAL CLASSROOM


USING DEEP LEARNING
A PROJECT REPORT

Submitted by

KARTHIKEYAN S (621318104024)
MOHAMEDYOUSUF I (621318104032)
RAMANAN N (621318104042)

in partial fulfillment for the award of the degree

of

BACHELOR OF ENGINEERING

IN

COMPUTER SCIENCE AND ENGINEERING

KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY

(AUTONOMOUS)

THOTTIAM, TRICHY

ANNA UNIVERSITY :: CHENNAI 600 025

MAY 2022

KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY,
(AUTONOMOUS)
Tholurpatti (Po), Thottiam (Tk), Trichy (Dt) – 621 215

COLLEGE VISION & MISSION STATEMENT

VISION
“To become an Internationally Renowned Institution in Technical Education, Research and
Development by Transforming the Students into Competent Professionals with Leadership Skills and
Ethical Values.”

MISSION
● Providing the Best Resources and Infrastructure.
● Creating a Learner-centric Environment and Continuous Learning.
● Promoting Effective Links with Intellectuals and Industries.
● Enriching Employability and Entrepreneurial Skills.
● Adapting to Changes for Sustainable Development.

COMPUTER SCIENCE AND ENGINEERING

VISION
To produce competent software professionals, academicians, researchers and entrepreneurs
with moral values through quality education in the field of Computer Science and Engineering.

MISSION
● Enrich the students' knowledge and computing skills through an innovative teaching-learning
process with state-of-the-art infrastructure facilities.

● Enable the students to become entrepreneurs and employable through adequate
industry-institute interaction.

● Inculcate leadership skills and professional communication skills with moral and ethical
values to serve society, and focus on students' overall development.

PROGRAM EDUCATIONAL OBJECTIVES (PEO’s)

PEO I: Graduates shall be professionals with expertise in the fields of Software Engineering,
Networking, Data Mining and Cloud computing and shall undertake Software Development,
Teaching and Research.

PEO II: Graduates will analyze problems, design solutions and develop programs with sound
Domain Knowledge.

PEO III: Graduates shall have professional ethics, team spirit, life-long learning, good oral and
written communication skills and adopt corporate culture, core values and leadership skills.

PROGRAM SPECIFIC OUTCOMES (PSO's)

PSO1: Professional skills: Students shall understand, analyze and develop computer applications
in the field of Data Mining/Analytics, Cloud Computing, Networking etc., to meet the requirements
of industry and society.

PSO2: Competency: Students shall qualify at the State, National and International level competitive
examination for employment, higher studies and research.

PROGRAM OUTCOMES (PO’s)
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and
an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the
public health and safety, and the cultural, societal, and environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering
and IT tools including prediction and modeling to complex engineering activities with an understanding
of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.

7. Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.

9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in
independent and life-long learning in the broadest context of technological change.

KONGUNADU COLLEGE OF ENGINEERING AND
TECHNOLOGY
(AUTONOMOUS)
ANNA UNIVERSITY :: CHENNAI 600 025
BONAFIDE CERTIFICATE

Certified that this project report “RANDOM INTERVAL QUERY AND FACE
RECOGNITION ATTENDANCE SYSTEM FOR VIRTUAL CLASSROOM
USING DEEP LEARNING” is the bonafide work of KARTHIKEYAN S
(621318104024), MOHAMEDYOUSUF I (621318104032), RAMANAN N
(621318104042), who carried out the project work under my supervision.

SIGNATURE SIGNATURE

Dr.C.Saravanabhavan, M.Tech., Ph.D., Mrs.L.Nivetha, M.E.,

HEAD OF THE DEPARTMENT SUPERVISOR
Professor, Assistant Professor,
Department of Computer Science and Department of Computer Science and
Engineering, Engineering,
Kongunadu College of Engineering and Kongunadu College of Engineering and
Technology, Thottiam, Trichy. Technology, Thottiam, Trichy.

Submitted for the Project Viva-Voce examination held on ___________

Internal Examiner External Examiner

ACKNOWLEDGEMENT

We wish to express our sincere thanks to our beloved Chairman
Dr.PSK.R.PERIASWAMY for providing immense facilities in our institution.

We proudly render our thanks to our Principal Dr.R.ASOKAN, M.S.,
M.Tech., Ph.D., for the facilities and the encouragement given by him towards the
progress and completion of our project.

We proudly render our immense gratitude and the sincere thanks to our Head
of the Department of Computer Science and Engineering
Dr.C.SARAVANABHAVAN, M.Tech., Ph.D., for his effective leadership,
encouragement and guidance in the project.

We are highly indebted and extend our heartfelt thanks to our supervisor
Mrs.L.NIVETHA, M.E., for her valuable suggestions during the execution of our
project work, and for her continued encouragement and many constructive
comments that improved this project report.

We are highly indebted and extend our heartfelt thanks to our project coordinator
Mr.K.KARTHICK, M.E.,(Ph.D)., for his valuable ideas, constant encouragement
and supportive guidance throughout the project.

We wish to extend our sincere thanks to all teaching and non-teaching staff
of Computer Science and Engineering department for their valuable suggestions,
co-operation and encouragement on successful completion of this project.

We wish to acknowledge the help received from various departments and
various individuals during the preparation and editing stages of the manuscript.
ABSTRACT

The COVID-19 pandemic outbreak has resulted in an unprecedented crisis across
the globe. The pandemic created an enormous demand for innovative technologies to
solve crisis-specific problems in different sectors of society. In the case of the education
sector and allied learning technologies, significant issues have emerged while substituting
face-to-face learning with online virtual learning. Several countries have closed
educational institutions temporarily to alleviate the COVID-19 spread. The closure of
educational institutions compelled teachers across the globe to use online meeting
platforms extensively. The virtual classrooms created by online meeting platforms are
adopted as the only alternative for face-to-face interaction in physical classrooms. In this
regard, students' attendance management in virtual classes is a major challenge
encountered by teachers. Student attendance is a measure of their engagement in a
course, which has a direct relationship with their active learning. However, during virtual
learning, it is exceptionally challenging to keep track of the attendance of students.
Calling out students' names in virtual classrooms to take attendance is both trivial and
time-consuming. Thus, in the backdrop of the COVID-19 pandemic and the extensive
usage of virtual meeting platforms, there is a crisis-specific immediate necessity to develop
a proper tracking system to monitor students' attendance and engagement during virtual
learning. In this project, we address this pandemic-induced crucial necessity by
introducing a novel approach. In order to realize a highly efficient and robust attendance
management system for virtual learning, we introduce the Random Interval Query and
Face Recognition Attendance Management System.

TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT vii
LIST OF FIGURES x
LIST OF ABBREVIATIONS xi
1. INTRODUCTION 1
   1.1 OVERVIEW 1
2. LITERATURE SURVEY 4
3. SYSTEM ANALYSIS 9
   3.1 EXISTING SYSTEM 9
       3.1.1 DISADVANTAGES 9
   3.2 PROPOSED SYSTEM 10
       3.2.1 ADVANTAGES 10
4. SYSTEM REQUIREMENTS 11
   4.1 HARDWARE REQUIREMENTS 11
   4.2 SOFTWARE REQUIREMENTS 11
5. SYSTEM DESIGN 12
   5.1 ARCHITECTURE DIAGRAM 12
   5.2 DATA FLOW DIAGRAM 16
6. SYSTEM IMPLEMENTATION 19
   6.1 MODULES 19
   6.2 MODULES DESCRIPTION 19
       6.2.1 VIRTUAL MEET DASHBOARD WEB API 19
       6.2.2 AI PRESENT MODULE 19
       6.2.3 LEARNING ATTENTIVE PREDICTION 21
       6.2.4 USER CONTROL PANEL 23
       6.2.5 NOTIFICATION MODULE 25
       6.2.6 PERFORMANCE ANALYSIS 25
7. SIMULATION RESULTS 27
   7.1 ANALYSIS OF RIAMS FACE RECOGNITION MODULE OUTPUT 28
   7.2 ANALYSIS OF ANCILLARY MODULE OUTPUTS 30
   7.3 COMBINING MODALITIES AND THE FINAL ATTENDANCE DECISION MAKING 31
8. APPENDICES 35
   8.1 SAMPLE OUTPUT 35
   8.2 SCREENSHOT 43
9. CONCLUSION AND FUTURE ENHANCEMENT 47
   9.1 CONCLUSION 47
   9.2 FUTURE ENHANCEMENT 47
REFERENCES 48
PUBLICATION 50

LIST OF FIGURES

FIGURE NO NAME OF THE FIGURE PAGE NO

5.1 ARCHITECTURE DIAGRAM 12
5.2 DATA FLOW DIAGRAM 16
7.1 ANALYSIS CHART OF RIAMS FACE RECOGNITION MODULE AND ANCILLARY MODALITIES 33

LIST OF ABBREVIATIONS

CNN Convolution Neural Network

WSGI Web Server Gateway Interface

FCL Fully Connected Layer

GPL General Public License

VFR Video based Face Recognition

DDL Data Definition Language

DML Data Manipulation Language

DCL Data Control Language

TCL Transaction Control Language

WAMP Windows Apache MySQL PHP

LAMP Linux Apache MySQL PHP

MAMP Mac Apache MySQL PHP

GD Graphics Draw

RSS Really Simple Syndication

UIN Unique Identification Number

CHAPTER 1
INTRODUCTION

1.1 OVERVIEW

A virtual classroom is an online teaching and learning environment where
teachers and students can present course materials, engage and interact with other
members of the virtual class, and work in groups together. The key distinction of a
virtual classroom is that it takes place in a live, synchronous setting. Online coursework
can involve the viewing of pre-recorded, asynchronous material, but virtual classroom
settings involve live interaction between instructors and participants.

Virtual classrooms and distance learning, as alternate technology-driven learning
methods, have been growing at a reasonable pace. The increasing popularity of social
and microlearning strategies, fostered by general social media platforms like YouTube
and Twitter, and major educational technology disruptions like edX, have added to the
increasing acceptance of virtual modes of learning. It is expected that the predominant
use of virtual classrooms would increase at a whopping 16.2% compound annual
growth rate by 2023. Things have started to look different, however, in the wake of the
novel coronavirus COVID-19 pandemic, since the entire world is under lockdown. It is
the time of the year when academic and teaching activities are in full swing in most
parts of the world. The current pandemic situation has paved the way for a ground test
of virtual classrooms as a prominent tool of learning in the current times. Schools,
colleges, universities, corporates, and even world bodies and multilateral organizations
like the UNO, WHO, and G20 have had to switch to the lesser-used virtual mode of
learning and communication.

These emergent circumstances stand as a conducive test for companies offering
virtual classroom platforms and services like Blackboard, Cisco, Microsoft, etc. The test
parameters are varied, some predominant ones being bandwidth management, network
traffic, server response time, and the number of concurrent users.

1.1.1. Problems Identified

A cloud auditor is a party that can perform an independent examination of
cloud service controls with the intent to express an opinion thereon. Audits are
performed to verify conformance to standards through review of objective
evidence. In Cloud computing, the term cloud is a metaphor for the Internet, so
the phrase Cloud computing is defined as a type of Internet-based computing,
where different services are delivered to an organization's computers and devices
through the Internet. Cloud computing is very promising for Information
Technology (IT) applications; however, there are still some issues to be solved
before personal users and enterprises can store data and deploy applications in the
Cloud computing environment. Data security is one of the most significant
barriers to its adoption, followed by issues including compliance, privacy,
trust and legal matters.

1.1.2. Artificial Intelligence

Artificial Intelligence (AI) is the field of computer science dedicated to
solving cognitive problems commonly associated with human intelligence, such
as learning, problem solving, and pattern recognition. Although Artificial
Intelligence, often abbreviated as "AI", may connote robotics or futuristic
scenes, it goes well beyond the automatons of science fiction, into the
non-fiction of modern-day advanced computer science. Professor Pedro
Domingos, a prominent researcher in this field, describes "Five Tribes" of
machine learning: symbolists, with origins in logic and philosophy;
connectionists, stemming from neuroscience; evolutionaries, relating to
evolutionary biology; Bayesians, engaged with statistics and probability; and
analogizers, with origins in psychology. Recently, advances in the efficiency of
statistical computation have led to Bayesians being successful at furthering the
field in a number of areas, under the name "machine learning".

1.1.3. Scope of the project

In order to realize a highly efficient and robust attendance management
system for virtual learning, this project introduces the Random Interval
Attendance Management System (RIAMS). To the best of our knowledge, no such
automated system has been proposed so far for tracking students' attendance and
ensuring their engagement during virtual learning. The proposed method is the
simplest and best approach to automatically capture attendance during
virtual learning. Moreover, the novel random attendance tracking approach can
also prevent participants from dropping out of the virtual classroom.
Randomness ensures that students cannot predict at which instant of time the
attendance is registered. Another added advantage of the RIAMS approach is that
it requires only nominal internet bandwidth in comparison with the existing face
recognition-based attendance tracking systems. Neither the students nor the
teachers will have to face any difficulties in virtual classrooms with the AIPresent
design. As the random intervals required for executing AIPresent attendance
tracking modalities are very short (30 seconds or less), the teaching-learning process
is not affected. The proposed model can be easily scaled and integrated into a
wide variety of virtual meetings, including business meetings.

1.1.4. Objective

The key objective of AIPresent is to develop a robust system that monitors
students' attendance and engagement in a virtual classroom at random intervals
of time. It encompasses a novel design using a deep CNN (Convolution
Neural Network) model to capture face biometrics randomly from students' video
streams and record their attendance automatically. Thus, the main component of
the proposed model is a face recognition module built using AI-DL tools.
RIAMS also incorporates ancillary submodules for assessing students' responses
to CAPTCHAs and UIN queries, to ensure active engagement in virtual
classrooms.
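The random-interval idea described above can be sketched in a few lines. This is a minimal illustration only, not the report's actual scheduler; the function name and the minimum-gap constraint are assumptions added for the sketch:

```python
import random

def random_checkpoints(duration_min, n_checks, min_gap_min=5, seed=None):
    """Draw n_checks random attendance-query instants (minutes from
    class start), rejecting draws where two checks fall within
    min_gap_min of each other so queries do not cluster."""
    rng = random.Random(seed)
    while True:
        points = sorted(rng.uniform(0, duration_min) for _ in range(n_checks))
        if all(b - a >= min_gap_min for a, b in zip(points, points[1:])):
            return points

# Example: three random query instants in a 60-minute virtual class
print([round(p, 1) for p in random_checkpoints(60, 3, seed=42)])
```

Because the instants are drawn uniformly at random each session, students cannot anticipate when the face capture or UIN query will fire.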
CHAPTER 2
LITERATURE SURVEY

[1] TITLE: A Student Attendance Management Method Based on Crowdsensing
in Classroom Environment (2021).

AUTHOR: Zhigang Gao; Yucai Huang; Leilei Zheng; Xiaodong Li; Huijuan Lu;
Jianhui Zhang; Qingling Zhao; Wenjie Diao; Qiming.

This paper presents a student attendance management method that combines the
active reporting and sampling check of students' location information, which has the
advantages of high real-time performance and low disturbance. The authors propose an
intelligent attendance management method named AMMoC. AMMoC needs neither to
deploy additional hardware devices in the classroom nor to collect the biological
characteristics of students. AMMoC only needs to install two Android applications on
the mobile devices of teachers and students respectively, and uses mutual verification
between students to complete attendance checking. AMMoC divides the classroom into
several subregions and assigns students to verify the student count of subregions.
After AMMoC obtains the location information of students, it uses an algorithm based
on intelligent search to select several students to complete crowdsensing tasks, which
require them to submit the number of students in a specific subregion, etc. AMMoC then
analyzes the truth of the initial location information based on the results of the
crowdsensing tasks submitted by the students.

Techniques: Student attendance management method.

Merits: The active reporting and sampling check of student location.

Overview: The advantages of high real-time performance and low disturbance.

[2] TITLE: Online Attendance Monitoring System Using QR Code (2021).

AUTHOR: Shubham Mishra; Chandan Kumar; Ahmad Ali; Jeevan Bala.

The QR Code Based Attendance Management System is a mix of two web
applications created for taking and recording the attendance of students on a regular
routine in the school. The framework eliminates the drawn-out task of manually
keeping up the attendance records by automating it. The system contains two interfaces,
one for the student and another for the teacher. The main and most important interface is
the teacher's interface; all the important and additional information about the attendance
monitoring system is available in this interface only. The QR codes are not static but
dynamic: they keep changing every 30 seconds, so that no one can take a
photo and send it to a friend or colleague to get a proxy attendance. After scanning the
QR code successfully, the attendance gets punched automatically, but this is not
the final submission; the teacher still has the option to mark
someone present or absent, for example, if someone's mobile or PC is not
responding, the teacher can give them attendance manually. Face recognition is a
technique for distinguishing or confirming the identity of an individual using their face.
Face recognition frameworks can be utilized to distinguish individuals in photographs,
videos, or in real time. For better accuracy of face-log generation, the authors employed
face tracking techniques like the node face recognition API and libraries. Considering
the current circumstances, they propose utilizing mobile technology to make productive
use of the total time allotted to a lecture.

Techniques: QR Code Based Attendance Management System.

Merits: Taking and recording the attendance of the students.

Overview: Two web applications created for taking and recording the attendance of the
students on a regular routine in the school or college.

[3] TITLE: Face Recognition Attendance System Based on Real-Time Video
Processing (2020).

AUTHOR: Hao Yang; Xiaofeng Han.

This article aims to design a face recognition time and attendance system based on real-
time video processing. Faces in surveillance videos often suffer from serious image blur,
posture changes, and occlusion. In order to overcome the challenges of video-based face
recognition (VFR), Ding C has proposed a comprehensive framework based on
convolutional neural networks (CNN). First, in order to learn a blur-robust face
representation, Ding C artificially blurs the training data composed of clear still images
to make up for the lack of real video training data. Using training data composed of still
images and artificially blurred data, the CNN is encouraged to automatically learn blur-
insensitive features. Second, in order to enhance the robustness of CNN features to pose
changes and occlusion, a trunk-branch CNN model (TBE-CNN) has been proposed,
which extracts complementary information from the overall face image and the patches
around the facial parts. A face recognition attendance system based on real-time video
processing is designed, and two colleges in a province are selected for real-time check-
in and inspection of student attendance. The study examines the accuracy rate of the
face recognition system in actual check-ins, the stability of the system under real-time
video processing, its truancy rate, and its interface settings. Research data shows
that the accuracy of the video face recognition system is about 82%.

Techniques: The CNN is encouraged to automatically learn blur-insensitive features.

Merits: Attendance system based on real-time video processing.

Overview: In order to overcome the challenges of video-based face recognition (VFR).

[4] TITLE: Design of an E-Attendance Checker through Facial Recognition using
Histogram of Oriented Gradients with Support Vector Machine (2020).

AUTHOR: Allan Jason C.Arceo; Renee Ylka N.Borejon; Mia Chantal R.Hortinela;
Alejandro H.Ballado; Arnold C.Paglinawan.

The main purpose of this paper is to design an electronic attendance (E-
Attendance) checker through a facial recognition system using Histogram of Oriented
Gradients with Support Vector Machine by considering its (a) conceptual framework, (b)
process flow, and (c) implementation, while considering camera and class size
optimization. As for the facial recognition algorithms, three of the more common
techniques are assessed in research done by Adouani. These three algorithms are the
Haar-like cascade, Histogram of Oriented Gradients (HOG) with Support Vector
Machine, and Local Binary Pattern cascade (LBP). It was determined that the Histogram
of Oriented Gradients (HOG) with Support Vector Machine algorithm outperforms the
other two algorithms in terms of confidence factors. HOG with SVM is deemed to be the
most accurate and efficient face recognition algorithm available in the OpenCV library.
The process flow starts by taking an image, performs face detection via HOG, and ends
with face recognition using the SVM algorithm.

Techniques: An e-attendance checker was established using HOG and SVM algorithms
for face detection and face recognition, respectively.

Merits: Its output was shown in an online database.

Overview: It designs an electronic attendance (E-Attendance) checker through a facial
recognition system using Histogram of Oriented Gradients with Support Vector Machine.
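The HOG-with-SVM pipeline described in this entry can be sketched as follows. This is a simplified illustration: the descriptor below is a whole-patch orientation histogram (real HOG also uses cells and block normalization), and the "face"/"non-face" patches are synthetic stand-ins, not real images:

```python
import numpy as np
from sklearn.svm import LinearSVC

def hog_descriptor(img, n_bins=9):
    """Whole-patch gradient-orientation histogram, a stripped-down
    stand-in for HOG (real HOG also adds cells and block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientations
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

rng = np.random.default_rng(0)
# Synthetic stand-ins for image patches: "faces" with strong oriented
# structure, "non-faces" as pure noise (a real system trains on photos).
faces = [np.tile(np.sin(np.linspace(0, 6, 24)), (24, 1))
         + 0.1 * rng.normal(size=(24, 24)) for _ in range(20)]
noise = [rng.normal(size=(24, 24)) for _ in range(20)]
X = np.array([hog_descriptor(p) for p in faces + noise])
y = np.array([1] * 20 + [0] * 20)

clf = LinearSVC().fit(X, y)  # linear SVM over the HOG-style descriptors
print("train accuracy:", clf.score(X, y))
```

The detector slides this descriptor-plus-classifier over image windows; recognition then applies the same idea with one class per identity.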

[5] TITLE: Deep Unified Model for Face Recognition Based on Convolution Neural
Network and Edge Computing (2019).

AUTHOR: Muhammad Zeeshan Khan; Saad Harous; Solet Ul Hassan; Muhammad


Usman Ghani Khan; Razi Iqbal; Shahid Mumtaz.

To achieve better results, the proposed algorithm utilizes the Convolution Neural
Network, which is a deep learning approach and state-of-the-art in computer vision. The
proposed methodology is able to recognize people even when a frame has multiple
faces. This system is capable of recognizing people from different positions and
under different lighting conditions, as light does not have much effect on the system.
Moreover, to improve data latency and response time, edge computing has been
utilized for implementing smart classrooms in real time. This paper proposes an
algorithm for face detection and recognition based on convolution neural networks
(CNN), which outperforms the traditional techniques. In order to validate the efficiency
of the proposed algorithm, a smart classroom for students' attendance using face
recognition has been proposed. Although the system achieves high accuracy, its main
limitation is distance: naturally, as distance increases, the picture becomes blurry, so
the system produces false results on blurry faces in some cases.

Techniques: Faster Region Convolution Neural Network along with Edge Computing
techniques are utilized to achieve state-of-the-art results.

Merits: A smart classroom attendance system validates the efficiency of the
proposed algorithm.

Overview: To achieve better results, the proposed algorithm utilizes the Convolution
Neural Network, which is a deep learning approach and state-of-the-art in computer
vision.

CHAPTER 3

SYSTEM ANALYSIS
3.1 EXISTING SYSTEM

Zoom, Google Meet, Microsoft Teams, and Cisco Webex Meetings are used to
create virtual classrooms. Current attendance practices in virtual classrooms include
manual attendance calling, self-reporting attendance systems (using tools like Google
Forms), video calling students, short quizzes or polls, questions and discussions by
selecting random students, and timed assignments. In the case of physical classrooms,
biometric-based attendance monitoring systems are essentially based on face,
fingerprint, and iris recognition technologies. Facial recognition is a technology that is
capable of recognizing a person based on their face. Early approaches mainly focused
on extracting different types of hand-crafted features with domain experts in computer
vision and training effective classifiers for detection with traditional machine learning
algorithms. Such methods are limited in that they often require computer vision experts
to craft effective features, and each individual component is optimized separately,
making the whole detection pipeline often sub-optimal. There are many existing FR
methods that achieve good performance.

3.1.1 DISADVANTAGES

● Calling out students' names in virtual classrooms to take attendance is both trivial
and time-consuming.
● Students may resort to unethical activities like not attending the class but still
keeping their status as 'online'.

● Students can go offline at any time without letting the teacher know.

● It is not easy to find out whether the student is really attending the class.
● Students might have turned off the video camera.

3.2 PROPOSED SYSTEM

The proposed system introduces the novel feature of randomness in an
AI-based face recognition system to effectively track and manage students' attendance
and engagement in virtual classrooms. It enhances the efficacy of attendance
management in virtual classrooms by integrating ancillary modalities: students' real-
time responses to CAPTCHAs, concept QA, and UIN (Unique Identification Number)
queries. It monitors students' attendance and engagement during virtual learning without
affecting their focus on learning. The ancillary modalities verify students'
responses to subject and UIN queries at random intervals of time. The system also
provides a user-friendly attendance recording tool for teachers that can
automatically record students' attendance and generate attendance reports for virtual
classrooms. Deep learning, in the form of Convolutional Neural Networks (CNNs), is
used to perform the face recognition.

3.2.1 ADVANTAGES

• Randomness ensures that students cannot predict at which instant of time the
attendance is registered.

• Highly efficient and robust attendance management system for virtual learning.

• Does not affect students' focus on learning.

• Introduces the novel feature of randomness.

• Face-embedding learning approach that yielded a recognition accuracy of 98.95%.

• Provides authorized access.

• Ease of use.
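The face-embedding matching behind the accuracy figure above can be illustrated with a cosine-similarity sketch. The embeddings and threshold below are toy values chosen for the example, not the system's actual CNN outputs:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mark_attendance(query_emb, enrolled, threshold=0.6):
    """Compare a query face embedding against enrolled students'
    embeddings; return the best-matching student ID if the similarity
    clears the threshold, else None (unknown face)."""
    best_id, best_sim = None, -1.0
    for sid, emb in enrolled.items():
        sim = cosine_sim(query_emb, emb)
        if sim > best_sim:
            best_id, best_sim = sid, sim
    return best_id if best_sim >= threshold else None

# Toy enrolled embeddings keyed by register number
enrolled = {
    "621318104024": np.array([0.9, 0.1, 0.2]),
    "621318104032": np.array([0.1, 0.9, 0.3]),
}
query = np.array([0.88, 0.15, 0.18])  # close to the first student
print(mark_attendance(query, enrolled))  # 621318104024
```

A real deployment would use high-dimensional CNN embeddings and tune the threshold on validation data.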

CHAPTER 4

SYSTEM REQUIREMENTS
4.1 HARDWARE REQUIREMENT

• Processor: Intel Core i5 4300M at 2.60 GHz (1 socket, 2 cores, 2 threads per
core), 8 GB of DRAM.

• Disk space: 320 GB.

• Operating systems: Windows 10, macOS, and Linux.

• Web camera.

4.2 SOFTWARE REQUIREMENT


• Server side: Python 3.7.4 (64-bit or 32-bit).
• Client side: HTML, CSS, Bootstrap.

• Framework: Flask 1.1.1.

• Back end: MySQL 5.

• Server: WampServer 2i.

• OS: Windows 10 64-bit or Ubuntu 18.04 LTS "Bionic Beaver".

CHAPTER 5
SYSTEM DESIGN

5.1 ARCHITECTURE DIAGRAM

Figure 5.1. Architecture diagram

Support Vector Machine (SVM)

Support Vector Machines (SVM) are a popular training tool that can be used to
generate a model based on several classes of data, and then distinguish between them.
For the basic two-class classification problem, the goal of an SVM is to separate the
two classes by a function induced from available examples. In the case of facial
recognition, a class represents a unique face, and the SVM attempts to find what best
separates the multiple feature vectors of one unique face from those of another unique
face.
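The two-class separation described above can be sketched with scikit-learn's SVM. The two Gaussian clusters here are synthetic stand-ins for the feature vectors of two unique faces, not data from the actual system:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Toy feature vectors: two well-separated clusters stand in for the
# multiple feature vectors of two unique faces.
face_a = rng.normal(loc=[0, 0], scale=0.3, size=(30, 2))
face_b = rng.normal(loc=[2, 2], scale=0.3, size=(30, 2))
X = np.vstack([face_a, face_b])
y = np.array([0] * 30 + [1] * 30)

# Fit a linear SVM: it finds the maximum-margin separating function
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.1, -0.1], [2.1, 1.9]]))  # [0 1]
```

With more than two faces, scikit-learn extends this automatically via one-vs-one voting.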

Principal Component Analysis (PCA)

One of the most used and cited statistical methods is Principal Component
Analysis. This mathematical procedure performs dimensionality reduction by extracting
the principal components of multi-dimensional data. Principal component analysis is
used in a wide variety of applications such as digital image processing, computer vision
and pattern recognition. Its main principle is reducing the dimensionality of a dataset
consisting of a large number of interrelated features while retaining as much as possible
of the variation present in the data.
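The dimensionality reduction described above can be sketched with scikit-learn's PCA on synthetic correlated features (illustrative data, not face images):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# 100 samples of 64-dimensional features whose variance lives mostly in
# 3 latent directions, mimicking highly correlated pixel features.
latent = rng.normal(size=(100, 3))
mixing = rng.normal(size=(3, 64))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 64))

# Project the 64-D data onto its 3 principal components
pca = PCA(n_components=3).fit(X)
X_reduced = pca.transform(X)
print(X_reduced.shape)  # (100, 3)
print("variance retained:", round(pca.explained_variance_ratio_.sum(), 3))
```

In eigenface-style recognition, each face image is flattened into such a vector and represented by its handful of principal-component coordinates.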

Linear Discriminant Analysis (LDA)

LDA is widely used to find the linear combination of features while preserving
class separability. Unlike PCA, LDA tries to model the differences between classes,
obtaining multiple projection vectors. Linear discriminant analysis is related to Fisher
discriminant analysis. Features are extracted from the pixels of images; these features
are known as shape features, color features and texture features. Linear discriminant
analysis is used to identify the linear separating vectors between the features of the
patterns in the images. This procedure maximizes the between-class scatter while
minimizing the intra-class variance in face identification.
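The class-separating projection described above can be illustrated with scikit-learn's LDA; the three "identities" below are assumed toy clusters, not real face features:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
# Three "identities", each a cluster of 3-D feature vectors; LDA projects
# onto at most (classes - 1) = 2 discriminant directions.
means = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0]])
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(40, 3)) for m in means])
y = np.repeat([0, 1, 2], 40)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
print(lda.transform(X).shape)  # (120, 2)
print("train accuracy:", round(lda.score(X, y), 2))
```

The fitted directions maximize between-class scatter relative to within-class variance, which is exactly the criterion the text describes.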

Neural Network (NN)

Neural Networks continue to be used for pattern recognition and classification.
Kohonen was the first to show that a neural network could be used to recognize aligned
and normalized faces. There are methods which perform feature extraction using neural
networks, and many methods which combine them with tools like PCA or LDA to
make a hybrid classifier for face recognition. Examples include Feed-Forward Neural
Networks with additional bias, Self-Organizing Maps with PCA, and Convolutional
Neural Networks with multi-layer perceptrons. These can increase the efficiency of
the models.

K-Nearest Neighbors

One of the basic classification algorithms in machine learning is the k-NN
algorithm. It is considered a supervised type of learning and is commonly used for
sorting related elements in search applications. The similarity between items is
determined by constructing a vector representation of the objects and then comparing
them using an appropriate distance metric.
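A self-contained sketch of k-NN classification with a Euclidean distance metric (toy feature vectors and hypothetical labels):

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Label a query vector by majority vote among its k nearest neighbours."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Toy feature vectors for two enrolled faces
train = [
    ([23.1, 15.8], "A"), ([23.0, 15.9], "A"), ([22.9, 15.7], "A"),
    ([19.4, 13.2], "B"), ([19.6, 13.0], "B"), ([19.5, 13.1], "B"),
]
print(knn_classify(train, [23.2, 15.8]))   # nearest neighbours all belong to A
```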

Other Applications of Face Recognition

● Face recognition is applied in security systems and smart home automation
systems.

● Face recognition-based voting systems have also been proposed.

Working of Facial Recognition

1. Every Machine Learning algorithm takes a dataset as input and learns from
this data. The algorithm goes through the data and identifies patterns in it. The
challenging part is to convert a particular face into numbers, since Machine Learning
algorithms only understand numbers.

2. This numerical representation of a face (or an element in the training set) is
termed a feature vector. A feature vector comprises various numbers in a specific
order.

3. You can take various attributes to define a face like:

• Height/width of face (cm).

• Color of face (R, G, B).

• Height/width of parts of face like nose and lips (cm).

• We can consider the ratios of these as the feature vector after rescaling.

4. A feature vector can be created by organizing these attributes into a table.
For a certain set of attribute values, the table may look like this:

Height of face (cm) | Width of face (cm) | Average color of face (R,G,B) | Width of lips (cm) | Height of nose (cm)
23.1 | 15.8 | 255,224,189 | 5.2 | 4.4

Now we can add a number of other features, like hair color and spectacles. Keep in mind
that a simple model often gives the best result; adding a greater number of features may
not give a more accurate result (see overfitting and underfitting).
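The attributes above can be organized into a feature vector in code. This sketch (using the hypothetical values from the table) rescales the colour channels to [0, 1] and uses size ratios so the entries are unit-free:

```python
# Hypothetical attribute values, as in the table above
face = {
    "face_height_cm": 23.1,
    "face_width_cm": 15.8,
    "avg_color_rgb": (255, 224, 189),
    "lip_width_cm": 5.2,
    "nose_height_cm": 4.4,
}

# Size ratios plus rescaled colours form a fixed-order feature vector
feature_vector = [
    face["face_height_cm"] / face["face_width_cm"],   # face aspect ratio
    face["lip_width_cm"] / face["face_width_cm"],
    face["nose_height_cm"] / face["face_height_cm"],
] + [c / 255 for c in face["avg_color_rgb"]]

print(len(feature_vector))   # 6 numbers in a specific order
```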

Machine learning helps you with two main things

Deriving the feature vector

As it is difficult to enumerate every feature by name, we convert them into a feature
vector, which is then used by the algorithm. A Machine Learning algorithm can
intelligently label out many such features.

5.2 DATA FLOW DIAGRAM

Figure 5.2. Data Flow Diagram

Convolutional Layer

This layer is the first layer that is used to extract the various features from the
input images. In this layer, the mathematical operation of convolution is performed
between the input image and a filter of a particular size M x M. By sliding the filter over
the input image, the dot product is taken between the filter and the parts of the input
image matching the filter size M x M. The output is termed the feature map, which gives
information about the image such as corners and edges. Later, this feature map is fed
to other layers to learn several other features of the input image.
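The sliding dot product described above can be sketched directly in NumPy. The small vertical-edge filter below (a toy kernel, not a learned one) responds strongly at the edge in the toy image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution: slide the filter and take dot products."""
    m, n = kernel.shape
    H, W = image.shape
    out = np.zeros((H - m + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + m, j:j + n] * kernel)
    return out

# A tiny "image" with a vertical edge, and a vertical-edge filter
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]], float)
edge = np.array([[1, -1], [1, -1]], float)

fmap = conv2d(img, edge)     # the feature map peaks (in magnitude) at the edge
print(fmap)
```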

Pooling Layer

In most cases, a Convolutional Layer is followed by a Pooling Layer. The primary
aim of this layer is to decrease the size of the convolved feature map to reduce
computational costs. This is performed by decreasing the connections between layers,
and the operation is applied independently to each feature map. Depending upon the
method used, there are several types of Pooling operations. In Max Pooling, the largest
element is taken from the feature map. Average Pooling calculates the average of the
elements in a predefined-size image section. The total sum of the elements in the
predefined section is computed in Sum Pooling. The Pooling Layer usually serves as a
bridge between the Convolutional Layer and the FC Layer.
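Max pooling over non-overlapping 2 x 2 windows can be sketched as:

```python
import numpy as np

def max_pool(fmap, size=2):
    """Keep the largest element of each non-overlapping size x size window."""
    H, W = fmap.shape
    H2, W2 = H // size, W // size
    return fmap[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 1],
                 [7, 2, 9, 5],
                 [0, 1, 3, 8]], float)
pooled = max_pool(fmap)
print(pooled)   # a 2x2 map of window maxima
```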

Fully Connected Layer

The Fully Connected (FC) layer consists of the weights and biases along with the
neurons, and is used to connect the neurons between two different layers. These layers
are usually placed before the output layer and form the last few layers of a CNN
architecture. Here, the output from the previous layers is flattened and fed to the
FC layer. The flattened vector then passes through a few more FC layers, where the
mathematical function operations usually take place. At this stage, the classification
process begins.

Dropout

Usually, when all the features are connected to the FC layer, this can cause
overfitting on the training dataset. Overfitting occurs when a particular model fits
the training data so well that it negatively impacts the model's performance on new
data. To overcome this problem, a dropout layer is utilized, wherein a few neurons are
dropped from the neural network during the training process, resulting in a reduced
model size. With a dropout rate of 0.3, 30% of the nodes are dropped out randomly from
the neural network.
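A sketch of "inverted" dropout in NumPy (one common formulation, assumed here): a fraction of the units is zeroed at random and the survivors are rescaled so that the expected activation is unchanged.

```python
import numpy as np

def dropout(activations, rate, rng):
    """Inverted dropout: zero a fraction `rate` of units, rescale the survivors."""
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(42)
acts = np.ones(10_000)
out = dropout(acts, 0.3, rng)

dropped = float(np.mean(out == 0))
print(round(dropped, 2))   # close to 0.3, i.e. ~30% of nodes dropped
```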

Activation Functions

Finally, one of the most important parameters of a CNN model is the activation
function. Activation functions are used to learn and approximate any kind of continuous
and complex relationship between the variables of the network. In simple words, an
activation function decides which information in the model should fire in the forward
direction and which should not at the end of the network; it adds non-linearity to the
network. There are several commonly used activation functions, such as ReLU, SoftMax,
tanh and Sigmoid, each with a specific usage. For a binary classification DCNN model,
the sigmoid function is preferred, while for multi-class classification, SoftMax is
generally used.
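The activation functions mentioned above can be written down directly in NumPy:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.0, -1.0, 0.5])
print(relu(z))                                         # negative inputs are cut off
print(float(sigmoid(np.array([0.0]))[0]))              # 0.5 at the origin
probs = softmax(z)
print(round(float(probs.sum()), 6))                    # class scores sum to 1
```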

CHAPTER 6
SYSTEM IMPLEMENTATION

6.1 MODULES

● Virtual Meet Dashboard Web API

● AI Present Module

● Learning Attentive Prediction

● User Control Panel

● Notification Module

● Performance Analysis

6.2 MODULES DESCRIPTION

6.2.1 Virtual Meet Dashboard Web API

In this module, we develop the virtual meet API. A virtual meet API is a video
conferencing tool through which instructors and participants engage with each other and
with the learning material. The interface between AIPresent and virtual meeting
platforms is facilitated through a web interface that runs on the teachers' and
students' smart devices, in master and slave modes respectively. The faculty as well as
the students should log in to the online learning platform with their smart devices,
and the web interface page should remain active during the entire course of the class.
Here, the web interface on the teacher's smart device facilitates two things.

6.2.2. AI Present Module

6.2.2.1 Enrollment Phase

The student database server maintained in this system consists of a student
information database, a face database and an attendance database. The student
information database consists of the roll number, name and class of each student.
The attendance database includes the attendance status of each student for every day.
The face database consists of face images of students indexed by their roll numbers.

6.2.2.1.1 Face Image Acquisition

This module is the initial part of the system. A Logitech C270 (3MP) camera is used
for image acquisition.

6.2.2.1.2 Preprocessing

The acquired images are converted to grayscale and then resized. After noise removal
using mean and Gaussian filters, all further operations are performed on this image.
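A sketch of this preprocessing pipeline using Pillow (a synthetic frame stands in for the captured image; the 200 x 200 working resolution is an assumption borrowed from the face detection step described below):

```python
from PIL import Image, ImageFilter
import numpy as np

# Synthetic colour frame standing in for a captured camera image
frame = Image.fromarray(np.full((120, 160, 3), [200, 120, 60], dtype=np.uint8), "RGB")

gray = frame.convert("L")                  # grayscale conversion
resized = gray.resize((200, 200))          # resize to the working resolution
smoothed = resized.filter(ImageFilter.BoxBlur(1))               # mean filter
denoised = smoothed.filter(ImageFilter.GaussianBlur(radius=1))  # Gaussian filter

print(denoised.size, denoised.mode)
```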

6.2.2.1.3 Face Detection

After capturing the image, it is given to the face detection module, which detects
the image regions likely to contain human faces. For face detection, a Region Proposal
Network (RPN) draws anchors and outputs the ones most likely to contain objects. The
detected face regions are cropped, scaled to 200x200 resolution and then used for the
subsequent recognition task.

6.2.2.1.4 Feature Extraction

After face detection, the face image is given as input to the feature extraction
module to find the key features that will be used for classification.

6.2.2.1.5 Feature Classification

The module composes a very short feature vector that is sufficient to represent
the face image. Here, this is done with the DCNN method, and the classified result is
stored in the database. With the help of a pattern classifier, the extracted features
of a face image are compared with the ones stored in the face database.

6.2.2.2 Verification Module

After capturing the image, it is given to the face detection module, which detects
the image regions likely to contain human faces. After face detection using the RPN,
the face image is given as input to the feature extraction module to find the key
features used for classification. The module composes a very short feature vector that
is sufficient to represent the face image; this is done with the DCNN. With the help of
a pattern classifier, the extracted features of the face image are compared with the
ones stored in the face database, and the face is classified as either known or
unknown. If the face is known, the corresponding student is identified. Face
recognition is the next step after face detection: the faces cropped from the image are
compared with the stored images in the face database. Here, the Eigenface method is
used for face recognition.

6.2.2.3 Attendance System

After the faces are verified and successfully recognized, the attendance of the
student is marked against his/her roll number. If the face is not recognized, an error
page is displayed. This module also handles attendance report generation: it takes
student information and daily attendance status from the student database, and the
attendance is calculated as required. There are options for calculating day-wise,
student-wise and class-wise attendance. The attendance reports are generated and saved
in a file.

6.2.3 Learning Attentive Prediction

In order to improve classroom learning attentiveness, we introduced two ancillary
modalities verifying students' responses to Concept QA and UIN (Unique Identification
Number) queries.

This distinctive feature of randomness in our design ensures that students'
attention and engagement in virtual learning are enhanced. The efficiency of the
proposed system is improved by introducing students' responses to Concept QA (P2) that
pop up k2 times on the student's device at random intervals. Also, the students have to
enter their UIN k3 times (P3) when they are directed to do so at random. The random
intervals of time are designed in such a way that they follow the attention span
distribution of the students.

6.2.3.1 Enrollment Phase

The student database server maintained in this system consists of a student
information database, a face database and an attendance database. The student
information database consists of the roll number, name and class of each student. The
attendance database includes the attendance status of each student for every day. The
face database consists of face images of students indexed by their roll numbers.

6.2.3.1.1 Face Image Acquisition

This module is the initial part of the system. A Logitech C270 (3MP) camera is used
for image acquisition.

6.2.3.1.2 Preprocessing

The acquired images are converted to grayscale and then resized. After noise removal
using mean and Gaussian filters, all further operations are performed on this image.

6.2.3.1.3 Face Detection

After capturing the image, it is given to the face detection module, which detects
the image regions likely to contain human faces. For face detection, a Region Proposal
Network (RPN) draws anchors and outputs the ones most likely to contain objects.

6.2.3.1.4 Feature Extraction

After face detection, the face image is given as input to the feature extraction
module to find the key features that will be used for classification.

6.2.3.1.5 Feature Classification

The module composes a very short feature vector that is sufficient to represent
the face image. Here, this is done with the DCNN method, and the classified result is
stored in the database.

6.2.4 User Control Panel

6.2.4.1 Student Management

In this module, the institute can enrol students to the college after counselling,
manage personal details, assign class and roll numbers, and generate ID cards for the
students.

● Student profile with photo and documents

● Parents and Guardian details

● Id card and certificates

● Detailed profiles

● Progress tracking and progress reports

6.2.4.2 HR And Staff Management

Manage the HRM activities of the college, including registering technical and
non-technical staff, managing staff designations, personal and professional details,
generating ID cards, and staff performance analysis and appraisals.

• Staff records

• Staff attendance
• Leave management

6.2.4.3 Academic Management

• Time-table

• Lesson progress tracking

• Classwork and Homework

• Exam management
6.2.4.4 Learning Management

• Organized content sharing

• Live classes

• Assignments

6.2.4.5 Communication

• Notes/announcements

• Events and activities

• Schedules

• Achievements
• Birthday greeting

• Academic calendar
• Holiday updates

6.2.4.6 Attendance Management

Student attendance management enables easy tracking of students' attendance
information. Quick attendance reports can be generated with class-wise, monthly and
yearly analysis. There is also provision for Faculty/AP/HODs to take attendance with an
Android-based phone or tablet. The staff attendance module maintains quick and accurate
recording of staff attendance and automatically calculates total leaves, pending leaves
and working days.

6.2.4.7 Reports And Analysis

• Exam and attendance reports

• User data reports

• Fees reports

• Inquiry reports

• Staff and HR reports

6.2.4.8 End User

6.2.4.8.1 Admin

This module is handled by top management to create role-wise user logins for staff
accessing the College Management ERP System. The admin can generate notifications for
students and staff, and send SMS, emails and reminders from time to time.

6.2.4.8.2 Student

Here a student can view his/her profile, tasks, class schedules and exam report
card, and attend Live Class Sessions.

6.2.4.8.3 Teaching Staff

Faculty/AP/HODs can view profiles and add tasks, exam reports and schedules. They
can access each student's profile, detailed fees account, term-wise and daily
attendance, and appraisal report, i.e., the result statement along with a comparative
graphical analysis term-wise and subject-wise. This enables them to evaluate the
student's performance in class and, not least, his/her performance in the various
co-curricular activities organized in the institution.

6.2.5 Notification Module

This module provides the facility to send SMS/Email notifications, such as class
schedules, fees payment, attendance, exams, meetings and seminars, to students,
parents/guardians, employees, management, etc.

6.2.6 Performance Analysis

There are various parameters with which one can measure the performance of any
biometric authentication technique. These factors are described below.

6.2.6.1 False Accept Rate (FAR) or False Match Rate (FMR)

The probability that the system incorrectly declares a successful match between
the input pattern and a non-matching pattern in the database. It measures the
percentage of invalid matches. These systems are critical since they are commonly used
to forbid certain actions by disallowed people.

6.2.6.2 False Reject Rate (FRR) or False Non-Match Rate (FNMR)

The probability that the system incorrectly declares a failure of match between the
input pattern and the matching template in the database. It measures the percentage of
valid inputs being rejected.

6.2.6.3 Equal Error Rate (EER)

The rate at which both accept and reject errors are equal. ROC plotting is used
because it clearly shows how FAR and FRR can be traded off against each other. The EER
is obtained from the ROC plot by taking the point where FAR and FRR have the same
value. The lower the EER, the more accurate the system is considered to be.
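The relationship between FAR, FRR and the EER can be sketched with hypothetical genuine and impostor match scores, sweeping the decision threshold to find the point where the two error rates meet:

```python
# Hypothetical match scores: genuine users and impostors
genuine = [0.9, 0.8, 0.8, 0.7]
impostor = [0.3, 0.4, 0.5, 0.75]

def far(threshold):
    # fraction of impostor attempts wrongly accepted
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    # fraction of genuine attempts wrongly rejected
    return sum(s < threshold for s in genuine) / len(genuine)

# Sweep thresholds; the EER is where FAR and FRR are (closest to) equal
best = min((abs(far(t) - frr(t)), t) for t in [s / 100 for s in range(101)])
t_eer = best[1]
print(t_eer, far(t_eer), frr(t_eer))   # FAR == FRR at the EER point
```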

6.2.6.4 Failure to Enroll Rate (FTE or FER)

The percentage of data input that is considered invalid and fails to be entered
into the system. Failure to enroll happens when the data obtained by the sensor is
considered invalid or of poor quality.

CHAPTER 7
SIMULATION RESULTS

This section provides the results of the experimental procedures carried out
while realising the RIAMS prototype model. Specific results obtained by implementing
the RIAMS face recognition module and the ancillary modalities are critically
discussed. The results from the face recognition module are discussed in accordance
with its training and testing phases. The training and testing phases are implemented
on the Google Colab platform using the Python 3.8 programming language. Our software
code has been tested on Windows and Ubuntu machines, and optimal performance is
observed in both cases. As mentioned earlier, we developed the RIAMS face recognition
module based on the Dlib open-source software library. The face recognition package in
Dlib has an accuracy of 99.38% on the Labelled Faces in the Wild (LFW) benchmark
dataset.
A database containing students' facial images and their UINs was developed
initially for training the RIAMS modules. The face image database was created using
students' passport-size photos and the image frames obtained from the video stream
captured during the first virtual class. For the pilot study, a face image database
comprising ten students was developed, as depicted in part (a) of Fig. 7.1. The
performance evaluation of the RIAMS model is validated with this prototype database.
Even though five images of every student were used for training the face recognition
module of RIAMS, we have shown only one training set in part (a) of Fig. 7.1. In parts
(b), (c) and (d) of Fig. 7.1, three such video frames (frame-1, frame-2, frame-3),
captured in real time, are displayed for illustration. We used the popular Zoom
platform to create a virtual classroom and captured video frames from it at random
intervals of time. As evident from parts (b), (c) and (d) of Fig. 7.1, the face images
of some students were not seen in every frame. This can be attributed to the physical
absence of students or to poor internet connections, because of which some students may
not be able to switch on their video cameras.

For testing and verification, each test sample was compared with all five
images stored in the database. Face recognition is regarded as successful if at least
one of the five training samples matches the test sample. Thus, if the test images
extracted from the video frames of the virtual class match the training samples,
attendance from the face recognition module is recorded. A detailed analysis of the
RIAMS face recognition module output is presented next.

7.1 ANALYSIS OF RIAMS FACE RECOGNITION MODULE OUTPUT

In the RIAMS prototype model, the captured frames 1, 2 and 3 (parts (b), (c) and
(d)) are used for comparison with the images previously stored in the database. In this
regard, Table 7.1 shows the face recognition output of captured frame-1 by comparing
it with the database images.

Table 7.1. Face recognition output of captured frame-1 in comparison with database
images.

Here, 1's represent successful matching with the database images, whereas 0's
represent failed matching. Each test image in captured frame-1 has been compared with
all five training images corresponding to that student stored in the database.

Table 7.2. Output of RIAMS face recognition module (method 1).

For instance, in the case of the student with UIN TKMCE03, the test image from
the captured video frame matched the first and the fifth training images, which is
represented by 1's in the corresponding row of Table 7.1. The output of a face
recognition attempt is obtained by the 'logical OR' (∨) operation,
Ai = I1 ∨ I2 ∨ I3 ∨ I4 ∨ I5. Hence, for extracted frame-1, the face recognition status
of the student with UIN TKMCE03 is recorded successfully and is indicated as 1. The
same process is repeated for each student as shown in Table 7.2. Likewise, such tables
can be constructed for the other extracted frames 2 and 3 in comparison with the
trained images in the database. For illustration purposes, the face recognition status
of only one extracted frame is shown, in Table 7.1.

The final output of the face recognition module is obtained by a linear
combination of the face recognition attempts (A1, A2, ..., Ap) happening k1 times in a
virtual class. We performed three such attempts, which are illustrated in Table 7.3.
Here, the face recognition process in RIAMS is referred to as Method-1 (P1). The first
column of Table 7.3 indicates students' UINs. Subsequent columns give the face
recognition attempts (A1, A2, A3) performed at random intervals, and the last two
columns give the face recognition outputs P1 and their normalized values P1n. Thus, the
face recognition output can be represented as the linear sum
P1 = A1 + A2 + ... + Ak1.
For instance, in the case of UIN TKMCE08, the student's face is not recognized
in the first attempt, whereas it is recognized in the subsequent attempts.
Consequently, the linear sum of the three attempts (0, 1, 1) gives a face recognition
output of 2.
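The combination logic described above can be sketched as follows, using the matching pattern of TKMCE03 from Table 7.1 and the attempt sequence reported for TKMCE08:

```python
# Matching status of one test image against the five stored training images
# (1 = match, 0 = no match), as for TKMCE03 in Table 7.1
matches = [1, 0, 0, 0, 1]

# An attempt succeeds if at least one template matches: Ai = I1 v I2 v ... v I5
attempt = int(any(matches))
print(attempt)   # 1

# The module output P1 is the linear sum of the k1 attempts,
# e.g. (0, 1, 1) reported for TKMCE08
attempts = [0, 1, 1]
p1 = sum(attempts)
p1n = p1 / len(attempts)     # normalized value P1n
print(p1, round(p1n, 2))     # 2 0.67
```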

7.2 ANALYSIS OF ANCILLARY MODULE OUTPUTS

The output of the first ancillary modality, the CAPTCHA verification unit, is
shown in Table 7.3. Here, the CAPTCHA verification process is referred to as Method-2
(P2). In the proposed design, we implemented two such CAPTCHA verifications at random
intervals of time in every virtual class. In Table 7.3, the last two columns give the
CAPTCHA verification outputs P2 and their normalized values P2n, respectively.
Students' responses to CAPTCHAs (C) in a virtual class are taken k2 times, and their
linear sum is represented as P2 = C1 + C2 + ... + Ck2.

Table 7.3. Output of CAPTCHA verification process (method 2).

Table 7.4. Output of UIN verification process (method 3).

The implementation of the second ancillary modality, the response to UIN queries
(Method-3, P3) taken from students at random intervals, is demonstrated in Table 7.4.
In the reported method, we implemented two such UIN query verifications in a virtual
class. The last two columns of Table 7.4 give the students' UIN verification outputs
P3 and their normalized values P3n. Responses to UIN queries (U) are taken k3 times,
and their linear sum is represented as P3 = U1 + U2 + ... + Uk3.

7.3 COMBINING MODALITIES AND THE FINAL ATTENDANCE DECISION MAKING

The decision fusion from the face, CAPTCHA, and UIN sub-modules (P1, P2, P3)
leads to the final result of the RIAMS attendance registration, which is demonstrated
in Table 7.5. The weighted sum of the decisions from each sub-modality is considered
for the concluding result (R), which can be represented as
R = w1 x P1n + w2 x P2n + w3 x P3n. In the proposed design, we considered dissimilar
weights w1 = 0.5, w2 = 0.3 and w3 = 0.2 for illustration purposes.

The rationale for giving greater weightage (w1) to face recognition can be
attributed to its higher significance compared to the other modalities. The final
attendance status is registered as 'Present' if the weighted sum (R) is greater than
a predefined threshold (λ2). During the training phase, we observed that λ2 = 65% gives
optimal results, and hence the same value has been used in the testing phase for
comparison with the R values. Since the weights sum to unity, the normalized value of
R in percentage is obtained as Rn = R x 100.

In the proposed system, we considered k1 = 3, k2 = 2, and k3 = 2, and the results
obtained therein are depicted in the last column of Table 7.5. The high performance of
the system is evident from the analysis of Table 7.5. For instance, the status of two
students (TKMCE05 and TKMCE08) is registered as 'Absent', as their R values are less
than the decision threshold. The first student is 'Absent' because the corresponding
outputs of the Face, CAPTCHA and UIN submodules were all zero, which can be observed
from the captured video frames and the respective tables (Tables 7.2-7.4). In the
second case (TKMCE08), even though the face recognition output is 66%, the responses
to the CAPTCHA and UIN queries made by the student are only 50% each. Thus, the student
is marked absent in the final decision even though he has a 66% score from the face
recognition module. This indicates the efficacy of the ancillary modalities in
continuously monitoring students' attendance in virtual classrooms even without their
facial image input.
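The final decision rule can be sketched with the reported weights and the TKMCE08 values from Tables 7.2-7.4:

```python
# Weighted decision fusion with the reported weights and the normalized
# sub-module outputs of student TKMCE08 (face, CAPTCHA, UIN)
w1, w2, w3 = 0.5, 0.3, 0.2
p1n, p2n, p3n = 2 / 3, 0.5, 0.5

R = w1 * p1n + w2 * p2n + w3 * p3n
status = "Present" if R > 0.65 else "Absent"
print(round(R, 3), status)    # 0.583 Absent
```

Even with a 66% face recognition score, the weak CAPTCHA and UIN responses pull the weighted sum below the threshold, reproducing the 'Absent' decision reported in Table 7.5.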

Table 7.5. Final result of RIAMS attendance registration
(decision fusion from face, captcha and UIN sub-modules).

Figure 7.1. Analysis chart of RIAMS face recognition module (method-1) and ancillary
modalities (methods 2 & 3).

The performance analysis chart of the RIAMS face recognition module (method-1)
and the ancillary modalities (methods 2 & 3) is demonstrated above. The normalized
outputs of methods 1, 2 and 3 (P1n, P2n and P3n) are depicted in the bar chart. It also
illustrates the normalized final results in comparison with the predefined threshold
(λ2n = 0.65), shown by a horizontal line. It is evident that attendance is registered
if the normalized output is greater than the threshold value. Thus, the RIAMS
submodules perform well, ensuring proper responses of students to random queries as
well as physical presence in terms of face recognition.

The RIAMS face recognition process took approximately one millisecond to verify
a single face, as it uses the Dlib face recognition module. This implies that the
module can process 10 frames of the video stream per second in real time. However, the
present RIAMS model is a prototype that does not require real-time processing.
Detecting individual faces is not a time-critical issue in attendance management:
students' faces can be detected from screenshots fetched at any later point in time,
so detecting faces and sending the information in real time is not necessary. A
screenshot, once captured, can be used to detect faces at any time during the entire
period of the class or even after class.

The accuracy of the face recognition module is high, as we designed it using the
Dlib library, which has an accuracy of 99.38%. Moreover, we obtained 100% accuracy in
the pilot study, which incorporates facial images of a sample of 10 students.
Similarly, for the UIN and CAPTCHA verification performed by the ancillary modalities,
we obtained 100% accuracy in the pilot study. We did not compare the performance and
accuracy of RIAMS against other systems, because it is not meaningful to compare a
prototype model with systems that are in no way related to the proposed model. Since
the proposed model is unique in its introduction of randomness and ancillary
modalities, it cannot be directly compared with other models available in the
literature.

CHAPTER 8
APPENDICES
8.1 SAMPLE CODE

from flask import Flask, render_template, Response, redirect, request, session, abort, url_for, send_file
from camera import VideoCamera
from datetime import datetime
from datetime import date
import random
from random import seed
from random import randint
import threading
import os
import time
import shutil
import cv2
import imagehash
import PIL.Image
from PIL import Image
from werkzeug.utils import secure_filename
import urllib.request
import urllib.parse
from urllib.request import urlopen
import webbrowser
import mysql.connector

mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="",
    charset="utf8",
    database="virtual_class"
)

app = Flask(__name__)

# session key
app.secret_key = '123456'

UPLOAD_FOLDER = 'static/upload'

ALLOWED_EXTENSIONS = { 'png', 'jpg', 'jpeg', 'gif'}

app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

@app.route('/', methods=['POST', 'GET'])
def index():
    cnt = 0
    act = ""
    msg = ""

    # reset the state files used by the capture/detection threads
    for name in ("det.txt", "photo.txt", "img.txt", "start.txt"):
        with open(name, "w") as f:
            f.write("1")

    if request.method == 'POST':
        username1 = request.form['uname']
        password1 = request.form['pass']
        mycursor = mydb.cursor()
        mycursor.execute(
            "SELECT count(*) FROM ci_admin where username=%s && password=%s",
            (username1, password1))
        myresult = mycursor.fetchone()[0]
        if myresult > 0:
            session['username'] = username1
            return redirect(url_for('category'))
        else:
            msg = "Your login failed!!!"
    return render_template('index.html', msg=msg, act=act)

def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/index_ins', methods=['POST', 'GET'])
def index_ins():
    cnt = 0
    act = ""
    msg = ""
    if request.method == 'POST':
        username1 = request.form['uname']
        password1 = request.form['pass']
        mycursor = mydb.cursor()
        mycursor.execute(
            "SELECT count(*) FROM ci_user where uname=%s && pass=%s",
            (username1, password1))
        myresult = mycursor.fetchone()[0]
        if myresult > 0:
            session['username'] = username1
            return redirect(url_for('ins_home'))
        else:
            msg = "Your login failed!!!"
    return render_template('index_ins.html', msg=msg, act=act)

@app.route('/index_stu', methods=['POST', 'GET'])
def index_stu():
    cnt = 0
    act = ""
    msg = ""
    if request.method == 'POST':
        username1 = request.form['uname']
        password1 = request.form['pass']
        mycursor = mydb.cursor()
        mycursor.execute(
            "SELECT count(*) FROM ci_student where regno=%s && pass=%s",
            (username1, password1))
        myresult = mycursor.fetchone()[0]
        if myresult > 0:
            session['username'] = username1
            return redirect(url_for('stu_home'))
        else:
            msg = "Your login failed!!!"
    return render_template('index_stu.html', msg=msg, act=act)

@app.route('/category', methods=['POST', 'GET'])
def category():
    result = ""
    act = ""
    if request.method == 'POST':
        category = request.form['category']
        mycursor = mydb.cursor()
        mycursor.execute("SELECT max(id)+1 FROM ci_category")
        maxid = mycursor.fetchone()[0]
        if maxid is None:
            maxid = 1
        sql = "INSERT INTO ci_category(id, category) VALUES (%s, %s)"
        val = (maxid, category)
        mycursor.execute(sql, val)
        mydb.commit()
        print(mycursor.rowcount, "record inserted.")
        return redirect(url_for('category', act='success'))
    if request.method == 'GET':
        act = request.args.get('act')
        did = request.args.get('did')
        if act == "del":
            cursor1 = mydb.cursor()
            cursor1.execute('delete from ci_category WHERE id = %s', (did, ))
            mydb.commit()
            return redirect(url_for('category'))
    cursor = mydb.cursor()
    cursor.execute('select * from ci_category')
    data = cursor.fetchall()
    return render_template('category.html', act=act, data=data)

@app.route('/view_ins')
def view_ins():
    act = ""
    mycursor = mydb.cursor()
    mycursor.execute("SELECT * FROM ci_user order by id")
    value = mycursor.fetchall()
    if request.method == 'GET':
        did = request.args.get('did')
        act = request.args.get('act')
        if act == "del":
            mycursor.execute("delete from ci_user where id=%s", (did,))
            mydb.commit()
            return redirect(url_for('view_ins'))
    return render_template('view_ins.html', data=value)

@app.route('/view_stu', methods=['POST', 'GET'])
def view_stu():
    value = []
    mycursor = mydb.cursor()
    mycursor.execute("SELECT distinct(category) FROM ci_category")
    value1 = mycursor.fetchall()
    mycursor.execute("SELECT distinct(year) FROM ci_student")
    value2 = mycursor.fetchall()
    if request.method == 'POST':
        dept = request.form['dept']
        year = request.form['year']
        if dept != "" and year != "":
            mycursor.execute(
                "SELECT * FROM ci_student where dept=%s && year=%s",
                (dept, year))
            value = mycursor.fetchall()
        else:
            mycursor.execute("SELECT * FROM ci_student")
            value = mycursor.fetchall()
    # template context variable names are assumed
    return render_template('view_stu.html', data=value, data1=value1, data2=value2)

8.2 SCREENSHOTS

Admin Page

Faculty Page

Student Page

Queries Box

Student Attendance

CHAPTER 9

CONCLUSION AND FUTURE ENHANCEMENT


9.1 CONCLUSION

Random Interval Attendance Management System (AIPresent) is an innovation
based on deep learning, designed to help teachers and instructors across the globe
manage attendance effectively during virtual learning. AIPresent facilitates precise,
automatic tracking of students' attendance in virtual classrooms. It incorporates a
customized face recognition module along with specially designed ancillary
submodules, all of which serve to monitor students' attendance in virtual classrooms.
The submodules check students' responses to CAPTCHAs, Concept QA and UIN
queries. The system captures face biometrics from the video streams of participants
and gathers students' timely responses to Concept QA and UIN queries at random
intervals of time. An intelligible and adaptive weighting strategy is employed to
finalize the decision from the three modalities. AIPresent can be integrated with any
existing virtual meeting platform through an application interface such as a web page
or a dedicated app.
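The adaptive weighting strategy described above can be sketched as a weighted fusion of the three modality scores. The function name, weights, and decision threshold below are illustrative assumptions for explanation only, not the values used in AIPresent.

```python
# Sketch of weighted decision fusion across the three attendance modalities.
# The weights and threshold here are illustrative assumptions.

def fuse_attendance(face_score, captcha_score, query_score,
                    weights=(0.5, 0.2, 0.3), threshold=0.6):
    """Combine per-modality scores (each in [0, 1]) into a present/absent decision.

    Returns a (is_present, fused_score) tuple.
    """
    scores = (face_score, captcha_score, query_score)
    # Weighted sum of the modality scores; the weights could be adapted
    # over time based on each modality's observed reliability.
    total = sum(w * s for w, s in zip(weights, scores))
    return total >= threshold, round(total, 3)

# Example: strong face match, missed CAPTCHA, answered the UIN query.
present, score = fuse_attendance(0.9, 0.0, 1.0)
# fused score = 0.5*0.9 + 0.2*0.0 + 0.3*1.0 = 0.75 -> marked present
```

Keeping the weights as parameters makes it straightforward to plug in an adaptive scheme that re-estimates them per session.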

9.2 FUTURE ENHANCEMENT

By incorporating other ancillary modalities, such as speech recognition, with
suitable adaptive weights for each modality, the efficiency and reliability of the
system can be further enhanced. The system could also be extended to support
online examinations.

REFERENCES

[1] D. Sunaryono, J. Siswantoro, and R. Anggoro, (2021) ‘‘An Android based course
attendance system using face recognition,’’ J. King Saud Univ. Comput. Inf. Sci., vol.
33, no. 3, pp. 304–312.
[2] L. Li, Q. Zhang, X. Wang, J. Zhang, T. Wang, T.-L. Gao, W. Duan, K. K.-F. Tsoi,
and F.-Y. Wang, (2020) ‘‘Characterizing the propagation of situational information in
social media during COVID-19 epidemic: A case study on Weibo,’’ IEEE Trans.
Comput. Social Syst., vol. 7, no. 2, pp. 556–562.
[3] J. T. Wu, K. Leung, and G. M. Leung, (2020) ‘‘Nowcasting and forecasting the
potential domestic and international spread of the 2019-nCoV outbreak originating in
Wuhan, China: A modelling study,’’ Lancet, vol. 395, no. 10225, pp. 689–697.
[4] T. Alamo, D. G. Reina, M. Mammarella, and A. Abella, (2020) ‘‘COVID-19: Open
data resources for monitoring, modeling, and forecasting the epidemic,’’ Electronics,
vol. 9, no. 5, pp. 1–30.
[5] C. Rapanta, L. Botturi, P. Goodyear, L. Guàrdia, and M. Koole, (2020) ‘‘Online
university teaching during and after the COVID-19 crisis: Refocusing teacher presence
and learning activity,’’ Postdigital Sci. Educ., vol. 2, no. 3, pp. 923–945.
[6] Y. A. U. Rehman, L. M. Po, and M. Liu, (2018) ‘‘LiveNet: Improving features
generalization for face liveness detection using convolution neural networks,’’ Expert
Syst. Appl., vol. 108, pp. 159–169.
[7] A. Pardo, F. Han, and R. A. Ellis, (2017) ‘‘Combining university student
self-regulated learning indicators and engagement with online learning events to
predict academic performance,’’ IEEE Trans. Learn. Technol., vol. 10, no. 1, pp. 82–92.
[8] C. R. Henrie, L. R. Halverson, and C. R. Graham, (2015) ‘‘Measuring student
engagement in technology-mediated learning: A review,’’ Comput. Educ., vol. 90, pp.
36–53.
[9] R. Ellis and P. Goodyear, (2013) Students' Experiences of E-Learning in Higher
Education: The Ecology of Sustainable Innovation. New York, NY, USA: Taylor &
Francis.
[10] H.-K. Wu, S. W.-Y. Lee, H.-Y. Chang, and J.-C. Liang, (2013) ‘‘Current status,
opportunities and challenges of augmented reality in education,’’ Comput. Educ., vol.
62, pp. 41–49.
[11] M. K. Dehnavi and N. P. Fard, (2011) ‘‘Presenting a multimodal biometric model
for tracking the students in virtual classes,’’ Procedia - Social Behav. Sci., vol. 15, pp.
3456–3462.
[12] D. E. King, (2009) ‘‘Dlib-ml: A machine learning toolkit,’’ J. Mach. Learn. Res.,
vol. 10, pp. 1755–1758.

PUBLICATION

KARTHIKEYAN S, MOHAMEDYOUSUF I, RAMANAN N, MRS. NIVETHA L,
‘RANDOM INTERVAL QUERY AND FACE RECOGANITION ATTENDANCE
SYSTEM FOR VIRTUAL CLASSROOM USING DEEP LEARNING’, in The
National Conference on Inventive System and Control Technologies-2022 [NCISCT-
2022] at ARASU ENGINEERING COLLEGE, Tamil Nadu, India, held on 6th & 7th
April 2022.

