RANDOM INTERVAL QUERY AND FACE RECOGNITION ATTENDANCE SYSTEM FOR VIRTUAL CLASSROOM USING DEEP LEARNING
Submitted by
KARTHIKEYAN S (621318104024)
MOHAMEDYOUSUF I (621318104032)
RAMANAN N (621318104042)
of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING
KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY
(AUTONOMOUS)
THOTTIAM, TRICHY
MAY 2022
KONGUNADU COLLEGE OF ENGINEERING AND TECHNOLOGY,
(AUTONOMOUS)
Tholurpatti (Po), Thottiam (Tk), Trichy (Dt) – 621 215
VISION
“To become an Internationally Renowned Institution in Technical Education, Research and
Development by Transforming the Students into Competent Professionals with Leadership Skills and
Ethical Values.”
MISSION
● Providing the Best Resources and Infrastructure.
● Creating a Learner-centric Environment and Continuous Learning.
● Promoting Effective Links with Intellectuals and Industries.
● Enriching Employability and Entrepreneurial Skills.
● Adapting to Changes for Sustainable Development.
VISION
To produce competent software professionals, academicians, researchers and entrepreneurs
with moral values through quality education in the field of Computer Science and Engineering.
MISSION
● Enrich the students' knowledge and computing skills through an innovative teaching-learning process with state-of-the-art infrastructure facilities.
● Inculcate leadership skills and professional communication skills with moral and ethical values to serve the society, and focus on students' overall development.
PROGRAM EDUCATIONAL OBJECTIVES (PEOs)
PEO I: Graduates shall be professionals with expertise in the fields of Software Engineering,
Networking, Data Mining and Cloud computing and shall undertake Software Development,
Teaching and Research.
PEO II: Graduates will analyze problems, design solutions and develop programs with sound domain knowledge.
PEO III: Graduates shall have professional ethics, team spirit, life-long learning, good oral and
written communication skills and adopt corporate culture, core values and leadership skills.
PROGRAM SPECIFIC OUTCOMES (PSOs)
PSO1: Professional skills: Students shall understand, analyze and develop computer applications in the fields of Data Mining/Analytics, Cloud Computing, Networking, etc., to meet the requirements of industry and society.
PSO2: Competency: Students shall qualify in State, National and International level competitive examinations for employment, higher studies and research.
PROGRAM OUTCOMES (PO’s)
1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and
an engineering specialization to the solution of complex engineering problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering
problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and
engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and design
system components or processes that meet the specified needs with appropriate consideration for the
public health and safety, and the cultural, societal, and environmental considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research methods
including design of experiments, analysis and interpretation of data, and synthesis of the information to
provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern engineering
and IT tools including prediction and modeling to complex engineering activities with an understanding
of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal,
health, safety, legal and cultural issues and the consequent responsibilities relevant to the professional
engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering solutions in
societal and environmental contexts, and demonstrate the knowledge of, and need for sustainable
development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of the
engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or leader in diverse
teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the engineering
community and with society at large, such as, being able to comprehend and write effective reports and
design documentation, make effective presentations, and give and receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the engineering and
management principles and apply these to one’s own work, as a member and leader in a team, to manage
projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage in independent and life-long learning in the broadest context of technological change.
KONGUNADU COLLEGE OF ENGINEERING AND
TECHNOLOGY
(AUTONOMOUS)
ANNA UNIVERSITY :: CHENNAI 600 025
BONAFIDE CERTIFICATE
Certified that this project report “RANDOM INTERVAL QUERY AND FACE RECOGNITION ATTENDANCE SYSTEM FOR VIRTUAL CLASSROOM USING DEEP LEARNING” is the bonafide work of KARTHIKEYAN S (621318104024), MOHAMEDYOUSUF I (621318104032), RAMANAN N (621318104042), who carried out the project work under my supervision.
SIGNATURE SIGNATURE
ACKNOWLEDGEMENT
We proudly render our immense gratitude and sincere thanks to our Head of the Department of Computer Science and Engineering, Dr.C.SARAVANABHAVAN, M.Tech., Ph.D., for his effective leadership, encouragement and guidance in the project.
We are highly indebted and extend our heartfelt thanks to our project coordinator Mr.K.KARTHICK, M.E., (Ph.D)., for his valuable ideas, constant encouragement and supportive guidance throughout the project.
We wish to extend our sincere thanks to all teaching and non-teaching staff
of Computer Science and Engineering department for their valuable suggestions,
co-operation and encouragement on successful completion of this project.
ABSTRACT
The COVID-19 pandemic has disrupted normal life across the globe and created an enormous demand for innovative technologies to solve crisis-specific problems in different sectors of society. In the case of the education sector and allied learning technologies, significant issues have emerged while substituting face-to-face learning with online virtual learning. Several countries have closed educational institutions, compelling teachers across the globe to use online meeting platforms extensively. The virtual classrooms created by online meeting platforms are adopted as the only alternative to face-to-face interaction in physical classrooms. In this scenario, monitoring students' presence during virtual classes has a direct relationship with their active learning. However, calling out students' names in virtual classrooms to take attendance is both trivial and time-consuming. Thus, in the backdrop of the COVID-19 pandemic and the extensive usage of virtual classrooms, there is a crucial need for a proper tracking system to monitor students' attendance and engagement during virtual learning. In this project, we address this pandemic-induced necessity by introducing a novel approach. In order to realize a highly efficient and robust attendance management system for virtual learning, we introduce the Random Interval Query and Face Recognition Attendance System for virtual classrooms using deep learning.
TABLE OF CONTENTS
6.1 MODULES
6.2 MODULES DESCRIPTION
6.2.4 USER CONTROL PANEL
7. SIMULATION RESULTS
8. APPENDICES
REFERENCES
PUBLICATION
LIST OF FIGURES
LIST OF ABBREVIATIONS
GD Graphics Draw
CHAPTER 1
INTRODUCTION
1.1 OVERVIEW
1.1.1. Problems Identified
1.1.3. Scope of the project
1.1.4. Objective
CHAPTER 2
LITERATURE SURVEY
[1] AUTHOR: Zhigang Gao; Yucai Huang; Leilei Zheng; Xiaodong Li; Huijuan Lu; Jianhui Zhang; Qingling Zhao; Wenjie Diao; Qiming.
This paper presents a student attendance management method that combines the active reporting and sampling check of students' location information, which has the advantages of high real-time performance and low disturbance. The authors propose an intelligent attendance management method named AMMoC. AMMoC needs neither additional hardware devices deployed in the classroom nor the collection of students' biological characteristics. It only needs two Android applications installed on the mobile devices of teachers and students respectively, and uses mutual verification between students to complete attendance checking. AMMoC divides the classroom into several subregions and assigns students to verify the number of students in the subregions. After AMMoC obtains the location information of students, it uses an algorithm based on intelligent search to select several students to complete crowdsensing tasks, which require them to submit the number of students in a specific subregion. AMMoC then analyzes the truthfulness of the initial location information based on the results of the crowdsensing tasks submitted by the students.
[2] TITLE: Online Attendance Monitoring System Using QR Code (2021).
Overview: Two web applications were created for recording students' attendance as part of the daily routine in schools and colleges.
[3] TITLE: Face Recognition Attendance System Based on Real-Time Video
Processing (2020).
This article aims to design a face recognition time and attendance system based on real-time video processing. Faces in surveillance videos often suffer from serious image blur, posture changes, and occlusion. In order to overcome the challenges of video-based face recognition (VFR), Ding C. has proposed a comprehensive framework based on convolutional neural networks (CNN). First, in order to learn a blur-robust face representation, Ding C. artificially blurs the training data composed of clear still images to make up for the lack of real video training data. Using training data composed of still images and artificially blurred data, the CNN is encouraged to automatically learn blur-insensitive features. Second, in order to enhance the robustness of CNN features to pose changes and occlusion, Ding C. has proposed a Trunk-Branch Ensemble CNN model (TBE-CNN), which extracts complementary information from the holistic face image and the patches cropped around facial components. A face recognition attendance system based on real-time video processing is designed, and two colleges in a province are selected for real-time check-in and inspection of student attendance. The study examines the accuracy of the face recognition system in actual check-in, the stability of the attendance system under real-time video processing, and the truancy rate; however, it is difficult to analyze the interface settings of a face recognition attendance system using real-time video processing. Research data shows that the accuracy of the video face recognition system is about 82%.
[4] TITLE: Design of an E-Attendance Checker through Facial Recognition using
Histogram of Oriented Gradients with Support Vector Machine (2020).
AUTHOR: Allan Jason C. Arceo; Renee Ylka N. Borejon; Mia Chantal R. Hortinela; Alejandro H. Ballado; Arnold C. Paglinawan.
Techniques: An e-attendance checker was established using HOG and SVM algorithms
for face detection and face recognition, respectively.
[5] TITLE: Deep Unified Model for Face Recognition Based on Convolution Neural
Network and Edge Computing (2019).
Techniques: A Faster Region-based Convolutional Neural Network (Faster R-CNN) along with edge computing techniques is utilized to achieve state-of-the-art results.
Merits: The proposed algorithm is validated against the conventional attendance-taking system to demonstrate its efficiency.
Overview: To achieve better results, the proposed algorithm utilizes the Convolutional Neural Network, which is a deep learning approach and the state of the art in computer vision.
CHAPTER 3
SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
Zoom, Google Meet, Microsoft Teams, and Cisco Webex Meetings are used to create virtual classrooms. Attendance in such classrooms is typically tracked through manual attendance calling, self-reporting attendance systems (using tools like Google Forms), video calling students, short quizzes or polls, questions and discussions with randomly selected students, and timed assignments. In the case of physical classrooms, biometric-based attendance monitoring systems are essentially based on face, fingerprint, and iris recognition technologies. Facial recognition is a technology that is capable of recognizing a person based on their face. Early approaches mainly focused on extracting different types of hand-crafted features with domain experts in computer vision and training effective classifiers for detection with traditional machine learning algorithms. Such methods are limited in that they often require computer vision experts to craft effective features, and each individual component is optimized separately, making the whole detection pipeline often sub-optimal. There are many existing face recognition (FR) methods that achieve good performance.
3.1.1 DISADVANTAGES
● A student can go offline at any time without letting the teacher know.
● It is not easy to find out whether a student is really attending the class.
● A student might have turned off the video camera.
3.2 PROPOSED SYSTEM
3.2.1 ADVANTAGES
• Randomness ensures that students cannot predict the instant of time at which attendance is registered.
• It provides a highly efficient and robust attendance management system for virtual learning.
• Ease of use.
CHAPTER 4
SYSTEM REQUIREMENTS
4.1 HARDWARE REQUIREMENT
• Processor: Intel Core i5 4300M at 2.60 GHz or 2.59 GHz (1 socket, 2 cores, 2 threads per core), 8 GB of DRAM.
• Web Camera.
CHAPTER 5
SYSTEM DESIGN
Support Vector Machine (SVM)
Support Vector Machines (SVMs) are a popular training tool which can be used to generate a model based on several classes of data and then distinguish between them. For the basic two-class classification problem, the goal of an SVM is to separate the two classes by a function induced from available examples. In the case of facial recognition, a class represents a unique face, and the SVM attempts to find what best separates the multiple feature vectors of one unique face from those of another unique face, as sketched below.
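As a rough illustration of this idea, the following sketch trains scikit-learn's SVC on synthetic stand-ins for face feature vectors (the data, dimensions and class counts here are assumptions for demonstration, not values from this project):

import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in data: 10 feature vectors per "face", 128 dimensions each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i * 3.0, size=(10, 128)) for i in range(3)])
y = np.repeat([0, 1, 2], 10)           # one class label per unique face

clf = SVC(kernel="linear")             # separating function induced from examples
clf.fit(X, y)
print(clf.predict(X[:1]))              # -> [0], the class of the first face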
Principal Component Analysis (PCA)
One of the most used and cited statistical methods is Principal Component Analysis, a mathematical procedure that performs dimensionality reduction by extracting the principal components of multi-dimensional data. PCA is used in a wide variety of applications such as digital image processing, computer vision and pattern recognition. The main principle of PCA is to reduce the dimensionality of a dataset consisting of a large number of interrelated features while retaining as much as possible of the variation present in the dataset.
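A minimal sketch of this reduction using scikit-learn's PCA (the image size, component count and random data are illustrative assumptions):

import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for 100 flattened 64x64 grayscale face images
X = np.random.default_rng(0).normal(size=(100, 64 * 64))

pca = PCA(n_components=50)             # keep the 50 strongest components
X_reduced = pca.fit_transform(X)       # 4096-D pixel space -> 50-D space
print(X_reduced.shape)                 # (100, 50)
print(round(pca.explained_variance_ratio_.sum(), 3))  # variation retained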
Linear Discriminant Analysis (LDA)
LDA is widely used to find the linear combination of features while preserving class separability. Unlike PCA, LDA tries to model the differences between classes, obtaining a number of projection vectors for the classes. The linear discriminant analysis method is related to Fisher's discriminant analysis. Features are extracted in the form of pixels in images; these features are known as shape features, color features and texture features. Linear discriminant analysis is used for identifying the linear separating vectors between the features of the patterns in the images. This procedure maximizes the between-class scatter while minimizing the intra-class variance in face identification.
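A corresponding sketch with scikit-learn's LinearDiscriminantAnalysis, again on synthetic feature vectors (class counts and dimensions are assumed for illustration):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Synthetic feature vectors for 3 classes (unique faces), 20 samples each
X = np.vstack([rng.normal(loc=i * 2.0, size=(20, 50)) for i in range(3)])
y = np.repeat([0, 1, 2], 20)

lda = LinearDiscriminantAnalysis(n_components=2)  # at most (classes - 1)
X_proj = lda.fit_transform(X, y)    # projection maximizing class separability
print(X_proj.shape)                 # (60, 2)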
Neural Network (NN)
K-Nearest Neighbors
1. Every Machine Learning algorithm takes a dataset as input and learns from this data. The algorithm goes through the data and identifies patterns in it. The challenging part is to convert a particular face into numbers, since Machine Learning algorithms only understand numbers.
2. This numerical representation of a face (or an element in the training set) is termed a feature vector. A feature vector comprises various numbers in a specific order.
We can now add a number of other features such as hair color or spectacles. Keep in mind that a simple model often gives the best result: adding a greater number of features may not give more accurate results (see overfitting and underfitting).
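The following sketch shows k-Nearest Neighbors classifying such feature vectors with scikit-learn (synthetic vectors stand in for real face encodings):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Feature vectors of two known faces (synthetic stand-ins, 128 numbers each)
X = np.vstack([rng.normal(0.0, 1.0, (10, 128)), rng.normal(4.0, 1.0, (10, 128))])
y = np.array([0] * 10 + [1] * 10)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
test_vector = rng.normal(4.0, 1.0, (1, 128))   # unseen feature vector
print(knn.predict(test_vector))                # -> [1], by nearest neighbours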
5.2 DATA FLOW DIAGRAM
Convolutional Layer
This layer is the first layer that is used to extract the various features from the input images. In this layer, the mathematical operation of convolution is performed between the input image and a filter of a particular size M x M. By sliding the filter over the input image, the dot product is taken between the filter and the parts of the input image with respect to the size of the filter M x M. The output is termed the feature map, which gives us information about the image such as corners and edges. Later, this feature map is fed to other layers to learn several other features of the input image.
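The sliding dot product described above can be written directly in a few lines of NumPy (a didactic sketch with an arbitrary 3x3 edge filter, not the project's actual network code):

import numpy as np

def convolve2d(image, kernel):
    # Slide an M x M filter over the image and take dot products
    m = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - m + 1, w - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + m, j:j + m] * kernel)
    return out

image = np.random.default_rng(0).integers(0, 256, (6, 6)).astype(float)
edge_filter = np.array([[-1.0, 0.0, 1.0]] * 3)   # simple 3x3 edge detector
feature_map = convolve2d(image, edge_filter)     # responds to vertical edges
print(feature_map.shape)                         # (4, 4)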
Fully Connected Layer
The Fully Connected (FC) layer consists of the weights and biases along with the neurons and is used to connect the neurons between two different layers. These layers are usually placed before the output layer and form the last few layers of a CNN architecture. In this stage, the input from the previous layers is flattened and fed to the FC layer. The flattened vector then passes through a few more FC layers, where the mathematical function operations usually take place and the classification process begins.
Dropout
Usually, when all the features are connected to the FC layer, it can cause overfitting on the training dataset. Overfitting occurs when a particular model works so well on the training data that it causes a negative impact on the model's performance when used on new data. To overcome this problem, a dropout layer is utilized, wherein a few neurons are dropped from the neural network during the training process, resulting in a reduced model size. On passing a dropout of 0.3, 30% of the nodes are dropped out randomly from the neural network.
Activation Functions
Finally, one of the most important parameters of the CNN model is the activation function. Activation functions are used to learn and approximate any kind of continuous and complex relationship between variables of the network. In simple words, the activation function decides which information of the model should fire in the forward direction and which should not at the end of the network, adding non-linearity to the network. There are several commonly used activation functions such as ReLU, softmax, tanh and sigmoid, and each of these functions has a specific usage. For a binary classification DCNN model, the sigmoid and softmax functions are preferred, and for multi-class classification, softmax is generally used.
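A minimal Keras sketch tying these layers together: convolution, pooling, flattening, a fully connected layer, a 0.3 dropout and a softmax output (the input size, filter counts and number of classes are illustrative assumptions, not the exact RIAMS architecture):

from tensorflow import keras
from tensorflow.keras import layers

num_classes = 10                            # assumed number of students
model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),        # grayscale face crop (size assumed)
    layers.Conv2D(32, (3, 3), activation='relu'),   # convolutional layer
    layers.MaxPooling2D((2, 2)),                    # pooling layer
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                       # flatten before the FC layers
    layers.Dense(128, activation='relu'),   # fully connected layer
    layers.Dropout(0.3),                    # drop 30% of nodes during training
    layers.Dense(num_classes, activation='softmax'),  # multi-class output
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()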
CHAPTER 6
SYSTEM IMPLEMENTATION
6.1 MODULES
● AI Present Module
● Notification Module
● Performance Analysis
In this module, we develop a virtual meet API. A virtual meet API is a video conferencing tool where instructors and participants engage with each other and with the learning material. The interface between AIPresent and virtual meeting platforms is facilitated through a web interface that runs on the teacher's and students' smart devices in master and slave modes, respectively. The faculty, as well as the students, should log in to the online learning platform with their smart devices. The web interface page should remain active during the entire course of the class. Here, the web interface at the teacher's smart device facilitates two things.
The attendance database includes the attendance status of each student for every day. The face database consists of face images of students according to their roll numbers.
6.2.2.1.1 Image Acquisition
This module is the initial part of the system. A Logitech C270 (3MP) camera is used for image acquisition.
6.2.2.1.2 Preprocessing
The acquired images are converted to grayscale and then resized. After the removal of noise using mean and Gaussian filters, all further operations are performed on this image.
6.2.2.1.3 Face Detection
After capturing the image, the image is given to the face detection module. This module detects the image regions which are likely to be human. For face detection, a Region Proposal Network (RPN) draws anchors and outputs the ones which most likely contain the objects. The detected face regions are cropped and scaled to 200x200 resolution and then used for the further recognition task.
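The following OpenCV sketch illustrates this pipeline of grayscale conversion, mean and Gaussian filtering, detection and the 200x200 crop; note that OpenCV's bundled Haar-cascade detector is substituted here for the RPN-based detector described above, and the file names are placeholders:

import cv2

frame = cv2.imread('captured_frame.jpg')          # placeholder file name
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)    # grayscale conversion
gray = cv2.blur(gray, (3, 3))                     # mean filter
gray = cv2.GaussianBlur(gray, (5, 5), 0)          # Gaussian filter

# Haar cascade used here as a stand-in for the RPN-based detector
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for i, (x, y, w, h) in enumerate(faces):
    crop = gray[y:y + h, x:x + w]
    crop = cv2.resize(crop, (200, 200))           # scale to 200x200 for recognition
    cv2.imwrite(f'face_{i}.jpg', crop)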
6.2.2.1.4 Feature Extraction
After the face detection, the face image is given as input to the feature extraction module to find the key features that will be used for classification.
6.2.2.1.5 Feature Classification
The module composes a very short feature vector that is sufficient to represent the face image. Here, it is done with the DCNN method, and the classified result is stored in the database. With the help of a pattern classifier, the extracted features of the face image are compared with the ones stored in the face database.
6.2.2.2 Verification Module
After capturing the image, the image is given to the face detection module. This module detects the image regions which are likely to be human. After face detection using the RPN, the face image is given as input to the feature extraction module to find the key features that will be used for classification. The module composes a very short feature vector that is sufficient to represent the face image; here, it is done with DCNN. With the help of a pattern classifier, the extracted features of the face image are compared with the ones stored in the face database. The face image is then classified as either known or unknown. If the face is known, the corresponding student is identified. Face recognition is the next step after face detection: the faces cropped from the image are compared with the stored images in the face database. Here, the eigenface method is used for face recognition.
After the verification of faces and successful recognition, the attendance of the student is marked against his/her roll number. If the face is not recognized, an error page is displayed. This module also involves attendance report generation: it takes student information and daily attendance status from the student database, and the attendance is calculated as per requirement. There are options for calculating day-wise, student-wise and class-wise attendance. The attendance reports are generated and saved in a file, as sketched below.
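A rough sketch of how such day-wise and student-wise reports could be pulled from the attendance database (the table and column names here are assumptions, not the project's actual schema):

import mysql.connector

db = mysql.connector.connect(host="localhost", user="root", passwd="",
                             database="virtual_class")
cur = db.cursor()

# Day-wise report: students present on a given date (schema assumed)
cur.execute("SELECT COUNT(*) FROM attendance WHERE status='Present' AND att_date=%s",
            ("2022-03-01",))
print("Present on 2022-03-01:", cur.fetchone()[0])

# Student-wise report: attendance percentage for one roll number
cur.execute("""SELECT 100 * SUM(status='Present') / COUNT(*)
               FROM attendance WHERE roll_no=%s""", ("TKMCE03",))
print("Attendance %:", cur.fetchone()[0])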
Students are required to enter their UIN k3 times (P3) when they are directed to do so at random instants. The random intervals of time are designed in such a way that they follow the attention span distribution of the students.
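A minimal sketch of such random-interval scheduling (for simplicity this samples the query instants uniformly over the class duration, whereas the report specifies an attention-span distribution; the callback is a placeholder):

import random
import threading

def schedule_random_queries(duration_min, k, fire):
    # Pick k random instants (in minutes) within the class duration
    instants = sorted(random.sample(range(1, duration_min), k))
    for t in instants:
        # Fire each query (UIN prompt / CAPTCHA) at its random instant
        threading.Timer(t * 60, fire, args=(t,)).start()
    return instants

def fire(t):
    print(f"Query raised at minute {t}: please enter your UIN")

print(schedule_random_queries(duration_min=50, k=3, fire=fire))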
6.2.3.1.1 Image Acquisition
This module is the initial part of the system. A Logitech C270 (3MP) camera is used for image acquisition.
6.2.3.1.2 Preprocessing
The acquired images are converted to grayscale and then resized. After the removal of noise using mean and Gaussian filters, all further operations are performed on this image.
6.2.3.1.3 Face Detection
After capturing the image, the image is given to the face detection module. This module detects the image regions which are likely to be human. For face detection, a Region Proposal Network (RPN) draws anchors and outputs the ones which most likely contain the objects.
6.2.3.1.4 Feature Extraction
After the face detection, the face image is given as input to the feature extraction module to find the key features that will be used for classification.
6.2.3.1.5 Feature Classification
The module composes a very short feature vector that is sufficient to represent the face image. Here, it is done with the DCNN method. Then the classified result is stored in the database.
In this module, the institute can enroll students to the college after counselling, manage personal details, assign classes and roll numbers, and generate ID cards for the students.
• Detailed profiles
• Staff records
• Staff attendance
• Leave management
• Time-table
• Exam management
6.2.4.4 Learning Management
• Live classes
• Assignments
6.2.4.5 Communication
• Notes/announcements
• Schedules
• Achievements
• Birthday greeting
• Academic calendar
• Holiday updates
• Fees reports
• Inquiry reports
6.2.4.8 End User
6.2.4.8.1 Admin
This module is handled by the top management to create role-wise user logins for staff accessing the College Management ERP System. The admin can generate notifications for students and staff, and send SMS, emails and reminders from time to time.
6.2.4.8.2 Student
Here the student can view his/her profile, tasks, class schedules and exam report card, and attend live class sessions.
6.2.4.8.3 Faculty
Faculty/AP/HOD can view profiles and add tasks, exam reports and schedules. Here, they will be able to access the information of a student's profile, his detailed fees account, his term-wise and daily attendance, and his appraisal report, i.e., a result statement along with comparative graphical analysis term-wise and subject-wise, which enables them to evaluate the student's performance in the class and, last but not least, his performance in various co-curricular activities organized in the institution.
There are various parameters with the help of which one can measure the performance of any biometric authentication technique. These factors are described below.
False Acceptance Rate (FAR): the probability that the system incorrectly declares a successful match between the input pattern and a non-matching pattern in the database. It measures the percent of invalid matches. These systems are critical since they are commonly used to forbid certain actions by disallowed people.
False Rejection Rate (FRR): the probability that the system incorrectly declares failure of match between the input pattern and the matching template in the database. It measures the percent of valid inputs being rejected.
Equal Error Rate (EER): the rate at which both accept and reject errors are equal. ROC plotting is used because it clearly shows how FAR and FRR can be traded off. The EER is obtained from the ROC plot by taking the point where FAR and FRR have the same value. The lower the EER, the more accurate the system is considered to be.
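A small numeric sketch of reading the EER off a threshold sweep (synthetic genuine and impostor scores stand in for real matcher outputs):

import numpy as np

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)     # match scores for valid users (synthetic)
impostor = rng.normal(0.4, 0.1, 1000)    # match scores for invalid users

thresholds = np.linspace(0, 1, 1001)
far = np.array([(impostor >= t).mean() for t in thresholds])  # false accepts
frr = np.array([(genuine < t).mean() for t in thresholds])    # false rejects

i = np.argmin(np.abs(far - frr))         # point where FAR and FRR cross
print(f"EER ~ {(far[i] + frr[i]) / 2:.3f} at threshold {thresholds[i]:.2f}")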
Failure to Enroll Rate (FTE): the percentage of data input that is considered invalid and fails to enter the system. Failure to enroll happens when the data obtained by the sensor are considered invalid or of poor quality.
CHAPTER 7
SIMULATION RESULTS
This section provides the results of the experimental procedures carried out while realising the RIAMS prototype model. Specific results obtained by implementing the RIAMS face recognition module and the ancillary modalities are critically discussed. The results from the face recognition module are discussed in accordance with its training and testing phases. The training and testing phases are implemented on the Google Colab platform using the Python 3.8 programming language. Our software code has been tested on Windows and Ubuntu machines, and optimal performance is observed in both cases. As mentioned earlier, we developed the RIAMS face recognition module based on the Dlib open-source software library. The face recognition package in Dlib has an accuracy of 99.38% on the Labelled Faces in the Wild (LFW) benchmark dataset.
A database containing students' facial images and their UINs was developed initially for training the RIAMS modules. The face image database was created using students' passport-size photos and the image frames obtained from the video stream captured during the first virtual class. For the pilot study, a face image database comprising ten students was developed, as depicted in part (a) of Fig. 7.1. The performance evaluation of the RIAMS model is validated with this prototype database. Even though five images of every student were used for training the face recognition module of RIAMS, we have shown only one training set in part (a) of Fig. 7.1. In parts (b), (c) and (d) of Fig. 7.1, three such video frames (frame-1, frame-2, frame-3), which were captured in real time, are displayed for illustration. We used the most common Zoom platform to create a virtual classroom and captured video frames from the same during random intervals of time. As evident from parts (b), (c) and (d) of Fig. 7.1, face images of some students were not seen in every frame. This can be attributed to the physical absence of students or to poor internet connections, because of which some students may not be able to switch on their video cameras.
For testing and verification, each test sample was compared with all five images stored in the database. Face recognition is regarded as successful if at least one of the five training samples matches the test sample. Thus, if the test images extracted from the video frames of the virtual class match the training samples, attendance from the face recognition module is recorded. A detailed analysis of the RIAMS face recognition module output is presented next.
In the RIAMS prototype model, the captured frames 1, 2 and 3 (parts (b), (c) and (d)) are used for comparison with the images previously stored in the database. In this regard, Table 7.1 shows the face recognition output of captured frame-1 by comparing it with the database images.
Here, 1's represent successful matching status with the database images, whereas 0's represent failed matches. Each test image in captured frame-1 has been compared with all five training images corresponding to that student's image stored in the database.
Table 7.2. Output of RIAMS face recognition module (method 1).
For instance, in the case of the student with UIN TKMCE03, the test image from the captured video frame has been matched with the first and the fifth training images, which is represented by 1's in the corresponding row of Table 7.1. The output of the face recognition attempt is obtained by the 'logical OR' (∨) operation, Ai = I1 ∨ I2 ∨ I3 ∨ I4 ∨ I5. Hence, for extracted frame-1, the face recognition status of the student with UIN TKMCE03 is recorded successfully and is indicated as 1. The same process is repeated for each student, as shown in Table 7.2. Likewise, such tables can be constructed for the other extracted frames 2 and 3, in comparison with the trained images in the database. For illustration purposes, the face recognition status of only one extracted frame is shown in Table 7.1.
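A minimal sketch of this five-sample rule using the dlib-based face_recognition package mentioned earlier (file paths are placeholders, and each image is assumed to contain exactly one face):

import face_recognition

# Five stored training photos for one student (placeholder paths)
train_paths = [f"db/TKMCE03_{i}.jpg" for i in range(1, 6)]
known = [face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
         for p in train_paths]

# A face cropped from the virtual-class video frame
test_img = face_recognition.load_image_file("frames/frame1_face.jpg")
test_enc = face_recognition.face_encodings(test_img)[0]

# Recognition succeeds if at least one of the five samples matches:
# the logical OR Ai = I1 v I2 v I3 v I4 v I5
matches = face_recognition.compare_faces(known, test_enc)
attendance = 1 if any(matches) else 0
print("Face recognition status:", attendance)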
For instance, in the case of UIN TKMCE008, the student's face is not recognized in the first attempt, whereas it is recognized in the subsequent attempts. Consequently, the linear sum of the three attempts (0, 1, 1) results in a face recognition output of '2'.
The output of the first ancillary modality, the CAPTCHA verification unit, is shown in Table 7.3. Here, the CAPTCHA verification process is referred to as Method-2 (P2). In the proposed design, we implemented two such CAPTCHA verifications at random intervals of time in every virtual class. In Table 7.3, the last two columns give the CAPTCHA verification outputs P2 and their normalized values P2n, respectively. Students respond to CAPTCHAs (C) in a virtual class k2 times, and a linear sum of the same is represented by the following equation:
P2 = C1 + C2 + ... + Ck2
Table 7.4. Output of UIN verification process (method 3).
The implementation of the second ancillary modality, the response to UIN queries (Method-3, P3) by students taken at random intervals, is demonstrated in Table 7.4. In the reported method, we implemented two such UIN query verifications in a virtual class. The last two columns of Table 7.4 give the students' UIN verification outputs P3 and their normalized values P3n. Responses to UIN queries (U) are taken k3 times. The following equation represents a linear sum of such responses:
P3 = U1 + U2 + ... + Uk3
The decision fusion from the face, CAPTCHA and UIN sub-modules (P1, P2, P3) leads to the final result of the RIAMS attendance registration, which is demonstrated in Table 7.5. The weighted sum of decisions from each sub-modality is considered for a concluding result (R), which can be represented as:
R = w1·P1 + w2·P2 + w3·P3
In the proposed design, we considered dissimilar weights w1 = 0.5, w2 = 0.3 and w3 = 0.2 for illustration purposes.
The rationale for giving greater weightage (w1) to face recognition can be attributed to its higher significance as compared to the other modalities. The final attendance status will be registered as 'Present' if the weighted sum (R) is greater than a predefined threshold (λ2). During the training phase, we observed that λ2 = 65% gives optimal results, and hence the same value has been used in the testing phase for comparison with the 'R' values. The normalized value of 'R' is obtained by expressing the weighted sum in percentage.
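The fusion rule reduces to a few lines of arithmetic; in this sketch the weights and threshold are those quoted above, while the sub-module scores are illustrative:

# Normalized outputs of the three sub-modules for one student (illustrative values)
p1n, p2n, p3n = 1.0, 0.5, 1.0            # face, CAPTCHA, UIN

w1, w2, w3 = 0.5, 0.3, 0.2               # weights from the proposed design
threshold = 0.65                         # lambda_2 (65%)

r = w1 * p1n + w2 * p2n + w3 * p3n       # weighted decision fusion
status = "Present" if r > threshold else "Absent"
print(f"R = {r:.2f} -> {status}")        # R = 0.85 -> Present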
Table 7.5. Final result of RIAMS attendance registration
(decision fusion from face, captcha and UIN sub-modules).
The performance analysis chart of the RIAMS face recognition module (Method-1) and the ancillary modalities (Methods 2 and 3) is demonstrated. The normalized values of the outputs of Methods 1, 2 and 3 (P1n, P2n and P3n) are depicted in the bar chart. It also illustrates the normalized final results in comparison with a predefined threshold (λ2n = 0.65), shown by a horizontal line. It is evident from the chart that attendance is registered if the normalized output is greater than the threshold value. Thus, the RIAMS sub-modules are of high performance, which ensures proper response of students to random queries as well as physical presence in terms of face recognition.
The accuracy of the face recognition module is high, as we designed it using the Dlib library, which has an accuracy of 99.38%. Moreover, we obtained 100% accuracy in the pilot study, which incorporates facial images of a sample of 10 students. Similarly, for the UIN and CAPTCHA verification performed by the ancillary modalities, we obtained 100% accuracy in the pilot study. We did not compare the performance and accuracy of RIAMS with other systems, because it is irrelevant to compare a prototype model with systems that are in no way related to the proposed model. Since the proposed model is unique in terms of the introduction of randomness and ancillary modalities in its design, it cannot be compared with other models available in the literature.
CHAPTER 8
APPENDICES
8.1 SAMPLE CODE
The listing below shows a representative portion of the Flask application; queries and imports elided from the listing are sketched with assumed table and column names.

import datetime
import random
import threading
import os
import time
import shutil
import cv2
import imagehash
import PIL.Image
import urllib.request
import urllib.parse
import webbrowser
import mysql.connector
# Flask imports reconstructed (they were missing from the listing)
from flask import Flask, render_template, request, session, redirect, url_for

mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    passwd="",
    charset="utf8",
    database="virtual_class"
)

app = Flask(__name__)
# session key
app.secret_key = '123456'

UPLOAD_FOLDER = 'static/upload'
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}   # assumed upload types

def allowed_file(filename):
    # standard upload-extension check (body reconstructed)
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route('/', methods=['POST', 'GET'])
def index():
    act = ""
    msg = ""
    # reset the flag files used by the capture/recognition threads
    for name in ("det.txt", "photo.txt", "img.txt", "start.txt"):
        with open(name, "w") as f:
            f.write("1")
    if request.method == 'POST':
        username1 = request.form['uname']
        password1 = request.form['pass']
        mycursor = mydb.cursor()
        # admin login check (query reconstructed; table/column names assumed)
        mycursor.execute("SELECT COUNT(*) FROM ci_admin WHERE username=%s AND password=%s",
                         (username1, password1))
        myresult = mycursor.fetchone()[0]
        if myresult > 0:
            session['username'] = username1
            return redirect(url_for('category'))
    return render_template('index.html', msg=msg, act=act)

@app.route('/index_ins', methods=['POST', 'GET'])
def index_ins():
    act = ""
    msg = ""
    if request.method == 'POST':
        username1 = request.form['uname']
        password1 = request.form['pass']
        mycursor = mydb.cursor()
        # instructor login check (query reconstructed; table/column names assumed)
        mycursor.execute("SELECT COUNT(*) FROM ci_instructor WHERE username=%s AND password=%s",
                         (username1, password1))
        myresult = mycursor.fetchone()[0]
        if myresult > 0:
            session['username'] = username1
            msg = "You are logged in successfully"
            return redirect(url_for('ins_home'))
    return render_template('index_ins.html', msg=msg, act=act)

@app.route('/index_stu', methods=['POST', 'GET'])
def index_stu():
    act = ""
    msg = ""
    if request.method == 'POST':
        username1 = request.form['uname']
        password1 = request.form['pass']
        mycursor = mydb.cursor()
        # student login check (query reconstructed; table/column names assumed)
        mycursor.execute("SELECT COUNT(*) FROM ci_student WHERE username=%s AND password=%s",
                         (username1, password1))
        myresult = mycursor.fetchone()[0]
        if myresult > 0:
            session['username'] = username1
            return redirect(url_for('stu_home'))
    return render_template('index_stu.html', msg=msg, act=act)

@app.route('/category', methods=['POST', 'GET'])
def category():
    act = ""
    if request.method == 'POST':
        category = request.form['category']
        mycursor = mydb.cursor()
        # next primary key value (query reconstructed)
        mycursor.execute("SELECT MAX(id)+1 FROM ci_category")
        maxid = mycursor.fetchone()[0]
        if maxid is None:
            maxid = 1
        # insert statement reconstructed; column names assumed
        sql = "INSERT INTO ci_category (id, category) VALUES (%s, %s)"
        val = (maxid, category)
        mycursor.execute(sql, val)
        mydb.commit()
        return redirect(url_for('category', act='success'))
    if request.method == 'GET':
        act = request.args.get('act')
        did = request.args.get('did')
        if act == "del":
            cursor1 = mydb.cursor()
            cursor1.execute('delete from ci_category WHERE id = %s', (did,))
            mydb.commit()
            return redirect(url_for('category'))
    cursor = mydb.cursor()
    cursor.execute("SELECT * FROM ci_category")   # listing query reconstructed
    data = cursor.fetchall()
    return render_template('category.html', act=act, data=data)

@app.route('/view_ins')
def view_ins():
    mycursor = mydb.cursor()
    mycursor.execute("SELECT * FROM ci_instructor")   # listing query reconstructed
    value = mycursor.fetchall()
    did = request.args.get('did')
    act = request.args.get('act')
    if act == "del":
        # delete statement reconstructed following the pattern above
        mycursor.execute("DELETE FROM ci_instructor WHERE id = %s", (did,))
        mydb.commit()
        return redirect(url_for('view_ins'))
    return render_template('view_ins.html', act=act, data=value)

@app.route('/view_stu', methods=['POST', 'GET'])
def view_stu():
    value = []
    mycursor = mydb.cursor()
    # dropdown values for the filter form (queries reconstructed; schema assumed)
    mycursor.execute("SELECT DISTINCT dept FROM ci_student")
    value1 = mycursor.fetchall()
    mycursor.execute("SELECT DISTINCT year FROM ci_student")
    value2 = mycursor.fetchall()
    if request.method == 'POST':
        dept = request.form['dept']
        year = request.form['year']
        mycursor.execute("SELECT * FROM ci_student WHERE dept=%s AND year=%s",
                         (dept, year))
        value = mycursor.fetchall()
    else:
        mycursor.execute("SELECT * FROM ci_student")
        value = mycursor.fetchall()
    return render_template('view_stu.html', data=value, depts=value1, years=value2)
8.2 SCREENSHOTS
Admin Page
Faculty Page
Student Page
Queries Box
Student Attendance
CHAPTER 9
REFERENCES
[1] D. Sunaryono, J. Siswantoro, and R. Anggoro, (2021) ‘‘An Android based course
attendance system using face recognition,’’ J. King Saud Univ. Comput. Inf. Sci., vol.
33, no. 3, pp. 304–312.
[2] L. Li, Q. Zhang, X. Wang, J. Zhang, T. Wang, T.-L. Gao, W. Duan, K. K.-F. Tsoi, and F.-Y. Wang, (2020) ‘‘Characterizing the propagation of situational information in social media during COVID-19 epidemic: A case study on Weibo,’’ IEEE Trans. Comput. Social Syst., vol. 7, no. 2, pp. 556–562.
[3] J. T. Wu, K. Leung, and G. M. Leung, (2020) “Nowcasting and forecasting the
potential domestic and international spread of the 2019-nCoV outbreak originating in
Wuhan, China: A modelling study,'' Lancet, vol. 395, no. 10225, pp. 689-697.
[4] T. Alamo, D. G. Reina, M. Mammarella, and A. Abella, (2020) “Covid-19: Open
data resources for monitoring, modeling, and forecasting the epidemic,'' Electronics, vol.
9, no. 5, pp. 1-30.
[5] C. Rapanta, L. Botturi, P. Goodyear, L. Guàrdia, and M. Koole, (2020) ‘‘Online university teaching during and after the Covid-19 crisis: Refocusing teacher presence and learning activity,’’ Postdigital Sci. Educ., vol. 2, no. 3, pp. 923–945.
[6] Y. A. U. Rehman, L. M. Po, and M. Liu, (2018) ‘‘LiveNet: Improving features
generalization for face liveness detection using convolution neural networks,’’ Expert
Syst. Appl., vol. 108, pp. 159–169.
[7] A. Pardo, F. Han, and R. A. Ellis, (2017) ‘‘Combining university student self-regulated learning indicators and engagement with online learning events to predict academic performance,’’ IEEE Trans. Learn. Technol., vol. 10, no. 1, pp. 82–92.
[8] C. R. Henrie, L. R. Halverson, and C. R. Graham, (2015) ‘‘Measuring student
engagement in technology-mediated learning: A review,’’ Comput. Educ., vol. 90, pp.
36–53.
[9] R. Ellis and P. Goodyear, (2013) Students Experiences of E-Learning in Higher
Education: The Ecology of Sustainable Innovation. New York, NY, USA: Taylor &
Francis.
[10] H.-K. Wu, S. W.-Y. Lee, H.-Y. Chang, and J.-C. Liang, (2013) ‘‘Current status,
opportunities and challenges of augmented reality in education,’’ Comput. Educ., vol.
62, pp. 41–49.
[11] M. K. Dehnavi and N. P. Fard, (2011) ‘‘Presenting a multimodal biometric model
for tracking the students in virtual classes,’’ Procedia-Social Behav. Sci., vol. 15, pp.
3456–3462.
[12] D. E. King, (2009) ‘‘Dlib-ml: A machine learning toolkit,’’ J. Mach. Learn. Res.,
vol. 10, pp. 1755–1758.
PUBLICATION