Controlling Features of System
Muhammad Ullah Ahad
Abstract
The project of Controlling Multiple Features of Computer System using Human gestures
presents an innovative approach to human-computer interaction by using gesture
recognition technology. The aim of this project is to develop a system that allows users to
control various features and functionalities of a computer through natural hand movements.
This novel approach eliminates the need for traditional input devices such as
keyboards and mice, providing a more interactive and accessible computing experience.
The system utilizes advanced computer vision and machine learning algorithms to interpret
and respond to a user's gestures. Using a camera, the software can accurately
recognize a wide range of gestures, including hand movements. The project integrates these
gestures into meaningful commands, allowing users to execute tasks such as navigating
through applications, adjusting volume, switching between open windows, and initiating
specific actions within software applications.
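The mapping from recognized gestures to system commands described above can be sketched as a simple dispatch table. The gesture labels and actions below are illustrative assumptions, not the project's actual command set:

```python
# Hypothetical sketch: mapping recognized gesture labels to commands.
# Gesture names ("swipe_left", "palm_up", ...) and the logged actions
# are illustrative placeholders.

def make_dispatcher():
    """Return a dispatch function that maps gesture labels to actions."""
    log = []
    commands = {
        "swipe_left":  lambda: log.append("previous window"),
        "swipe_right": lambda: log.append("next window"),
        "palm_up":     lambda: log.append("volume up"),
        "palm_down":   lambda: log.append("volume down"),
    }

    def dispatch(gesture):
        action = commands.get(gesture)
        if action is None:
            return None          # unrecognized gesture: ignore safely
        action()
        return log[-1]           # last executed command, for user feedback

    return dispatch

dispatch = make_dispatcher()
print(dispatch("palm_up"))      # volume up
print(dispatch("unknown"))      # None
```

A dictionary-based dispatcher keeps the gesture vocabulary easy to extend: adding a new command is one entry, with unrecognized gestures ignored rather than raising errors.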
Key features of the system include real-time gesture tracking, a user-friendly interface, and
adaptability to user environments. The project not only focuses on the technical aspects of
gesture recognition but also prioritizes user experience, ensuring that the system is
responsive, reliable, and capable of enhancing overall interaction with the computer. The
potential applications extend to individuals with physical disabilities, as the gesture-based
control provides an alternative input method that can be customized to accommodate various
user needs.
Introduction
Conventional input devices such as keyboards and mice have long been the primary means
of interacting with computers. However, these interfaces have limitations in terms of
intuitiveness and adaptability. The project builds upon advancements in computer vision
and machine learning, offering an alternative input method. Inspired by the vision of a
hands-free computing experience, the project addresses the growing demand for more
natural, efficient, and accessible ways to control computer systems.
The primary purpose of this project is to design and implement a robust system capable of
recognizing and responding to a diverse range of human gestures, most importantly hand
movements. By doing so, the project aims to enable users to seamlessly control multiple
features of a computer system without relying on traditional input devices. This project
holds significant promise in various domains, including accessibility and user experience
enhancement. The gesture-based control system not only provides a novel and engaging
way for users to interact with their computers but also holds potential applications for
individuals with physical disabilities.
Motivation
The motivation behind Controlling Multiple Features of a Computer Using Human
Gestures stems from the desire to overcome limitations associated with traditional input
methods and enhance the overall human-computer interaction experience. Conventional
interfaces such as keyboards and mice pose challenges in terms of accessibility, user
engagement, and adaptability. This project is motivated by the vision of creating a more
natural and intuitive computing experience, where users can seamlessly control various
features of their computer systems through gestures, promoting a deeper connection
between humans and technology. The advancements in computer vision technologies have
opened new possibilities for gesture recognition. The project aims to empower users by
providing an alternative means of interaction that is not only responsive and efficient but
also capable of accommodating diverse user preferences and abilities. Additionally, the
project aligns with the broader trend of exploring innovative ways to make computing more
accessible and inclusive, catering to a wide range of users, including those with physical
limitations.
Scope
Related Work
K. P. Vijayakumar et al. [1] presented an electronic system that helps mute people
exchange their ideas with others in emergency situations. The system consists of a glove
worn by the subject that converts hand gestures into speech and text. The displayed
message also helps deaf people understand the wearer's thoughts.
Marvin S et al. [2] introduced an Android-based, hand-gesture-controlled interface system
for home appliances that serves as an alternative to existing remote controllers. The camera
of the Android device is used to detect hand gestures for processing.
Goals
To develop a gesture recognition system that significantly enhances the overall user
experience by providing a natural and intuitive means of controlling multiple features
of a computer system.
To increase accessibility for users with diverse abilities, including those with
physical limitations, by providing an alternative input method that complements
traditional interfaces.
To achieve real-time responsiveness in gesture recognition to ensure that the system can
accurately interpret and respond to users' gestures without noticeable delays.
Enable users to efficiently control various features of a computer system, such as
navigating through applications, adjusting settings, and executing commands within
software applications, solely through gestures.
Objectives
By achieving these goals and objectives, the project aims to deliver a robust, adaptable, and
user-centric gesture recognition system that transforms the way users interact with their
computer systems.
Individual Tasks
This project involves two group members. The tasks and responsibilities assigned to each
member are described below:
System Architecture
Member 1: Design the overall system architecture, including the integration of computer vision and
machine learning algorithms for gesture recognition.
Member 2: To ensure that the system architecture aligns with user interface design principles and is
visually intuitive.
Algorithm Development
Member 1: Implement and fine-tune gesture recognition algorithms, ensuring accuracy and
adaptability to different gestures and user environments.
Member 2: Provide input on user interface elements that may impact the effectiveness of gesture
recognition, ensuring visual feedback aligns with user expectations.
Testing
Member 1: Conduct usability testing to gather feedback on the technical performance and
responsiveness of the gesture recognition system.
Member 2: Gather feedback on the user interface design, ensuring that it aligns with user
expectations and preferences.
Documentation
Member 2: Develop user guides and documentation for end-users, focusing on setup, customization,
and effective use of the gesture-based interface.
Gantt Chart
Figure 1 shows the Gantt chart of the project.
Machine Learning Frameworks: TensorFlow or PyTorch will be used to implement the
computer vision algorithms for gesture recognition.
These tools and technologies provide a foundation for developing a robust and user-friendly
system that allows users to control multiple features of a computer system using human
gestures.
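Before landmarks from a hand-tracking model are fed to a classifier built with such a framework, they are typically normalized for position and scale. A minimal preprocessing sketch, assuming landmarks arrive as (x, y) pairs with the wrist at index 0 (the landmark layout is an assumption; the project's actual pipeline may differ):

```python
import numpy as np

def normalize_landmarks(landmarks):
    """Translate hand landmarks so the wrist is the origin, then scale
    by the largest coordinate magnitude, making the feature vector
    invariant to where the hand appears in the frame and to hand size."""
    pts = np.asarray(landmarks, dtype=float)   # shape (N, 2): x, y pairs
    pts = pts - pts[0]                         # wrist (index 0) becomes origin
    scale = np.abs(pts).max()
    if scale > 0:
        pts = pts / scale
    return pts.flatten()

raw = [(0.5, 0.5), (0.6, 0.7), (0.4, 0.9)]
feat = normalize_landmarks(raw)
print(feat[:2])   # wrist maps to (0.0, 0.0)
```

This kind of normalization means the classifier learns hand shape rather than hand location, which usually improves robustness across camera placements and users.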
References
[1] Vijayakumar, K. P., Nair, A., & Tomar, N. (2020). Hand gesture to speech and text
conversion device. International Journal of Innovative Technology and Exploring
Engineering (IJITEE), 9(5).