
Controlling Multiple Features of Computer System

using Human Gestures

Final Year Project Proposal

A project submitted in partial fulfilment of the requirements
for the degree of
BS in Computer Science

Muhammad Ullah Roll No: 2

Abdul Ahad Roll No: 6

Supervisor

Mr. Yasir Ali

DEPARTMENT OF COMPUTER SCIENCE


BACHA KHAN UNIVERSITY CHARSADDA
(SESSION 2020-2024)
Project Registration

Project ID (for office use)

Type (Nature of project) [ ] Development [ ] Research [ ] R&D

Area of specialization

Project Group Members

Sr.# Reg. # Student Name CGPA Email ID Phone # Signature

(i) Group Leader

(ii)
Name & Signature of Batch Advisor
(If students are eligible for FYP)

Plagiarism Free Certificate


This is to certify that I, ________________________, S/D/o _______________________, am the group leader
of the FYP under registration no CIIT/_________________/LHR at the Computer Science Department, Bacha Khan
University Charsadda. I declare that my FYP proposal has been checked by my supervisor and the similarity index
is ________%, which is less than 20%, the acceptable limit set by the HEC. The report is attached herewith as Appendix
A.

Date: ____________ Name of Group Leader: ________________________ Signature: _____________

Name of Supervisor: _____________________ Co-Supervisor (if any):____________________


Designation: _____________________ Designation: _____________________

Signature: _____________________ Signature: _____________________

Approval of FYP Management Committee


Committee Member 1: Name: _____________________
[ ] Accept [ ] *Defer [ ] *Reject Signature: _______________
*Remarks: _____________________________________________________________________

Committee Member 2: Name: _____________________


[ ] Accept [ ] *Defer [ ] *Reject Signature: _______________
*Remarks: _____________________________________________________________________

Convener: Name: _____________________


[ ] Accept [ ] *Defer [ ] *Reject Signature: _______________
*Remarks: _____________________________________________________________________

Abstract
The project Controlling Multiple Features of a Computer System using Human Gestures
presents an innovative approach to human-computer interaction based on gesture
recognition technology. The aim of this project is to develop a system that allows users to
control various features and functionalities of a computer through natural hand movements.
This approach eliminates the need for traditional input devices such as keyboards and
mice, providing a more interactive and accessible computing experience.

The system utilizes advanced computer vision and machine learning algorithms to interpret
and respond to a user's gestures. Using a camera, the software can accurately recognize a
wide range of hand movements. The project maps these gestures to meaningful commands,
allowing users to execute tasks such as navigating through applications, adjusting the
volume, switching between open windows, and initiating specific actions within software
applications.

Key features of the system include real-time gesture tracking, a user-friendly interface, and
adaptability to user environments. The project not only focuses on the technical aspects of
gesture recognition but also prioritizes user experience, ensuring that the system is
responsive, reliable, and capable of enhancing overall interaction with the computer. The
potential applications extend to individuals with physical disabilities, as the gesture-based
control provides an alternative input method that can be customized to accommodate various
user needs.

Introduction

In an era of rapid technological advancement, the way we interact with computers is
evolving beyond traditional input devices. This project aims to move beyond the
conventional means of controlling computers by harnessing the power of gesture
recognition technology. By translating natural hand and body movements into actionable
commands, this system seeks to redefine the user experience and provide a more
intuitive and engaging computing environment.

Conventional input devices such as keyboards and mice have long been the primary means
of interacting with computers. However, these interfaces have limitations in terms of
intuitiveness and adaptability. The project builds upon advancements in computer vision
and machine learning, offering an alternative input method. Inspired by the vision of a

hands-free computing experience, the project addresses the growing demand for more
natural, efficient, and accessible ways to control computer systems.

The primary purpose of this project is to design and implement a robust system capable of
recognizing and responding to a diverse range of human gestures, most importantly hand
movements. By doing so, the project aims to enable users to seamlessly control multiple
features of a computer system without relying on traditional input devices. The project
holds significant promise in various domains, including accessibility and user experience
enhancement. The gesture-based control system not only provides a novel and engaging
way for users to interact with their computers but also has potential applications for
individuals with physical disabilities.

In conclusion, the project Controlling Multiple Features of a Computer System using
Human Gestures introduces an exciting paradigm shift in human-computer interaction,
promoting a more natural and engaging way to control computer systems. This innovation
has the potential to redefine the user interface landscape, making computing more
accessible, inclusive, and responsive to the diverse ways in which users interact with
technology.

Motivation and Scope

Motivation

The motivation behind Controlling Multiple Features of a Computer System using Human
Gestures stems from the desire to overcome the limitations of traditional input
methods and enhance the overall human-computer interaction experience. Conventional
interfaces such as keyboards and mice pose challenges in terms of accessibility, user
engagement, and adaptability. This project is motivated by the vision of a more
natural and intuitive computing experience, in which users can seamlessly control various
features of their computer systems through gestures, promoting a deeper connection
between humans and technology. Advancements in computer vision have
opened new possibilities for gesture recognition. The project aims to empower users by
providing an alternative means of interaction that is not only responsive and efficient but
also capable of accommodating diverse user preferences and abilities. Additionally, the
project aligns with the broader trend of making computing more accessible and inclusive,
catering to a wide range of users, including those with physical limitations.

Scope

The scope of the project encompasses the development of a comprehensive gesture
recognition system that can effectively control multiple features of a computer system. This
includes, but is not limited to, navigating through applications, adjusting system settings,
switching between tasks, and executing commands within software applications.
Furthermore, the project aims to explore the potential applications of gesture-based
control in diverse contexts, ranging from consumer electronics to assistive technologies.
The system's adaptability is a key consideration: the intention is to provide users with a
flexible and customizable interface that aligns with their unique preferences and
requirements.

Related Work

KP Vijayakumar et al. [1] presented an electronic system that helps mute people
exchange ideas with others in emergency situations. The system consists of a wearable
glove that converts the wearer's hand gestures to speech and text. The displayed text also
helps deaf people understand the wearer's intent.

Marvin S. et al. [2] introduced an Android-based, hand-gesture-controlled interface system
for home appliances that serves as an alternative to existing remote controls. The camera
of the Android device is used to capture hand gestures for processing.

Goals and Objectives

Goals

• To develop a gesture recognition system that significantly enhances the overall user
experience by providing a natural and intuitive means of controlling multiple features
of a computer system.
• To increase accessibility for users with diverse abilities, including those with
physical limitations, by providing an alternative input method that complements
traditional interfaces.
• To achieve real-time responsiveness in gesture recognition, ensuring that the system can
accurately interpret and respond to users' gestures without noticeable delays.
• To enable users to efficiently control various features of a computer system, such as
navigating through applications, adjusting settings, and executing commands within
software applications, solely through gestures.
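The last goal above, mapping recognized gestures to concrete system commands, can be sketched as a simple dispatch table. The gesture labels and actions below are illustrative assumptions, not the final design; the placeholder functions stand in for real OS-level calls:

```python
# Hypothetical sketch: binding recognized gesture labels to system actions.
# Gesture names and actions are illustrative assumptions, not the final design.

def adjust_volume(delta):
    """Placeholder for a real volume-control call (e.g. via an OS API)."""
    return f"volume {'+' if delta > 0 else ''}{delta}"

def switch_window():
    """Placeholder for a real window-switching call."""
    return "switched window"

# Dispatch table: each recognized gesture label maps to one command.
GESTURE_COMMANDS = {
    "swipe_up": lambda: adjust_volume(+10),
    "swipe_down": lambda: adjust_volume(-10),
    "swipe_left": switch_window,
}

def handle_gesture(label):
    """Look up and execute the command bound to a gesture label."""
    command = GESTURE_COMMANDS.get(label)
    return command() if command else "ignored"

print(handle_gesture("swipe_up"))   # volume +10
print(handle_gesture("fist"))       # ignored
```

A table like this keeps the recognition stage decoupled from the actions it triggers, so users could later rebind gestures to different commands without touching the recognizer.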

Objectives

• To conduct an in-depth analysis of existing gesture recognition technologies, identifying
strengths, weaknesses, and potential improvements.
• To design a robust system architecture that incorporates computer vision and machine
learning algorithms for effective gesture recognition.
• To integrate the gesture recognition system with various computer features, enabling
users to control tasks ranging from basic navigation to more complex actions within
software applications.

By achieving these goals and objectives, the project aims to deliver a robust, adaptable, and
user-centric gesture recognition system that transforms the way users interact with their
computer systems.

Individual Tasks
This project involves two group members. The tasks and responsibilities assigned to each member
are described below:

Research and Analysis

Member 1: Investigate existing gesture recognition technologies, focusing on algorithms, libraries,
and implementation details.

Member 2: Analyze user preferences and expectations in gesture-based interfaces, researching
design principles for intuitive interactions.

System Architecture Design

Member 1: Design the overall system architecture, including the integration of computer vision and
machine learning algorithms for gesture recognition.

Member 2: Ensure that the system architecture aligns with user interface design principles and is
visually intuitive.

Algorithm Development

Member 1: Implement and fine-tune gesture recognition algorithms, ensuring accuracy and
adaptability to different gestures and user environments.

Member 2: Provide input on user interface elements that may impact the effectiveness of gesture
recognition, ensuring visual feedback aligns with user expectations.

Testing

Member 1: Conduct usability testing to gather feedback on the technical performance and
responsiveness of the gesture recognition system.

Member 2: Gather feedback on the user interface design, ensuring that it aligns with user
expectations and preferences.

Documentation

Member 1: Document technical aspects of the gesture recognition system, providing a
comprehensive guide for developers and maintainers.

Member 2: Develop user guides and documentation for end-users, focusing on setup, customization,
and effective use of the gesture-based interface.

Gantt Chart
Figure 1 shows the Gantt chart of the project.

Figure 1 Gantt Chart

Tools and Technologies


The development of Controlling Multiple Features of a Computer System using
Human Gestures involves a combination of hardware and software tools, as well as
technologies that facilitate gesture recognition and user interface design. The following tools
and technologies can be considered for different aspects of the project:

Hardware: Camera or webcam, microphone.

Programming Languages: Python will be used to develop the gesture recognition algorithms and
system logic.

Computer Vision Libraries: TensorFlow or PyTorch will be used to implement computer
vision algorithms for gesture recognition.

Machine Learning Frameworks: TensorFlow or scikit-learn will be used to train machine
learning models for recognizing specific gestures.
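As a rough illustration of the kind of model the framework would train, the sketch below classifies a hand-feature vector by its nearest training example. Everything here is assumed for illustration: the toy feature values, the labels, and the plain-Python distance code (in the real system a trained scikit-learn or TensorFlow model would play this role):

```python
import math

# Illustrative sketch only: 1-nearest-neighbour classification over toy
# hand-feature vectors. The feature values and labels below are made up;
# a trained scikit-learn or TensorFlow model would replace this stage.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, training_data):
    """Return the label of the training example nearest to the sample."""
    return min(training_data, key=lambda item: euclidean(sample, item[0]))[1]

# Toy training set: (normalized fingertip-distance features, gesture label).
training = [
    ((0.1, 0.1, 0.1), "fist"),
    ((0.9, 0.9, 0.9), "open_palm"),
    ((0.9, 0.1, 0.1), "point"),
]

print(classify((0.8, 0.85, 0.95), training))  # open_palm
```

The same shape of interface (feature vector in, gesture label out) carries over directly to scikit-learn's nearest-neighbour classifier or a trained neural network.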

Development Environment: PyCharm or Visual Studio Code will be used to provide a
comfortable and efficient development environment.

These tools and technologies provide a foundation for developing a robust and user-friendly
system that allows users to control multiple features of a computer system using human
gestures.
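Putting the pieces together, the intended pipeline can be sketched as capture, feature extraction, recognition, and command dispatch. Every stage below is a stub introduced for illustration; in the real system the capture stage would read webcam frames (e.g. with OpenCV) and the recognizer would be a trained model:

```python
# Minimal end-to-end sketch of the intended pipeline:
# capture -> feature extraction -> recognition -> command dispatch.
# All four stages are illustrative stubs, not the final implementation.

def capture_frames():
    """Stub frame source standing in for a webcam capture loop."""
    yield {"hand_raised": True}
    yield {"hand_raised": False}

def extract_features(frame):
    """Reduce a frame to the features the recognizer consumes."""
    return frame["hand_raised"]

def recognize(features):
    """Map features to a gesture label (a trained model in practice)."""
    return "raise_hand" if features else "no_gesture"

def dispatch(label):
    """Execute the system command bound to the recognized gesture."""
    actions = {"raise_hand": "volume up"}
    return actions.get(label, "no action")

for frame in capture_frames():
    print(dispatch(recognize(extract_features(frame))))
```

Keeping the four stages separate means each can be developed and tested independently, which matches the division of tasks between the two group members.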

References
[1] Vijayakumar, K. P., Nair, A., & Tomar, N. (2020). Hand gesture to speech and text
conversion device. International Journal of Innovative Technology and Exploring
Engineering (IJITEE), 9(5).

[2] Verdadero, M. S., Martinez-Ojeda, C. O., & Cruz, J. C. D. (2018). Hand gesture
recognition system as an alternative interface for remote controlled home appliances.
2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information
Technology, Communication and Control, Environment and Management (HNICEM),
Baguio City, Philippines, pp. 1-5. doi: 10.1109/HNICEM.2018.8666291.
