
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/380215136

Unmanned Aerial Vehicle (UAV) Drone Controlled By Hand Gesture

Research · May 2024
DOI: 10.13140/RG.2.2.21558.10563

Uploaded by Vighnesh Iyer (Pillai Institute of Information Technology, Engineering, Media Studies and Research) on 01 May 2024.


Unmanned Aerial Vehicle (UAV) Drone Controlled By Hand
Gesture
Vighnesh Iyer1, Prem Khanderao2, Aryan Tawar3, Rahul Hate4
Department of Mechanical Engineering, Pillai College of Engineering, Navi Mumbai, India
[email protected], [email protected], [email protected], [email protected]

ABSTRACT - Smart cameras have recently seen a huge diffusion and represent a low-cost solution for improving public safety in many scenarios. Moreover, they are light enough to be lifted by a drone. Face recognition enabled by drones equipped with smart cameras has already been reported in the literature. However, using a drone commonly imposes tighter constraints than other facial recognition scenarios. First, weather conditions, including the presence of wind, place a severe restriction on image stability. Moreover, the distance drones fly at is commonly much greater than that of fixed ground cameras, which necessarily translates into a degraded resolution of the face images. Furthermore, the drones' operational altitudes typically require the use of optical zoom, thereby amplifying the harmful effects of their movements. For all of these reasons, in drone scenarios image degradation strongly influences the behaviour of face detection and recognition systems. In this work, we studied the performance of deep neural networks for face re-identification specifically designed for low-quality images, and deployed them on a drone. We also added another functionality, line following; applications for this functionality include product delivery, warehouse management, library management, etc. To implement it, we use OpenCV with an algorithm that can detect the line and execute commands based on the line pattern. Finally, if there is an emergency and the user wants to alert someone, the user can send an SOS message to the drone, which then follows a preset pattern, flying in that pattern to grab the attention of a third person and alert them.

Keywords— deep neural networks, image degradation, line following, OpenCV, algorithm, emergency, SOS message.

I. INTRODUCTION

In our research, we delved into the realm of aerial surveillance, specifically focusing on the utilization of deep neural networks for face verification. Our study involved a meticulous comparison between cutting-edge neural networks and facial recognition systems trained on low-resolution images. To accomplish this, we fine-tuned a ResNet-50 model augmented with Squeeze-and-Excitation blocks, leveraging the VGGFace2 dataset for training. Subsequently, we rigorously evaluated the performance of our model on the IJB-B dataset, particularly emphasizing the 1:1 verification task. Transitioning from theoretical exploration to practical application, our project aimed to transform these insights into a marketable product. This endeavor required careful consideration of several factors: ease of use, affordability, and seamless integration with existing technology. To optimize usability, we devised a system wherein users could control drones through universally understood hand gestures, thereby transcending language and cultural barriers. Moreover, to ensure economic feasibility, we meticulously scrutinized the cost implications of each component during the prototype development phase. This diligent approach allowed us to refine our design iteratively, ultimately streamlining production costs without compromising functionality.

Furthermore, recognizing the prevalence of personal computing devices, we engineered our product to seamlessly interface with laptops or PCs, thereby eliminating the need for specialized control hardware. This decision not only bolstered user convenience but also facilitated widespread adoption. Beyond theoretical and practical endeavors, our project underscored the broader significance of drones across diverse industries, ranging from security and agriculture to logistics and emergency services. By harnessing visual cameras as primary sensors, drones have emerged as versatile tools capable of executing a myriad of tasks, from surveillance to package delivery. Central to our innovation was the proposal of a novel control framework centered around hand gestures, enabling intuitive and agentless communication with drones.
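In practice, such a control framework reduces to mapping each recognized gesture label to a flight command. The sketch below shows one such mapping in Python; the gesture names and the dispatch table are illustrative assumptions, not the project's exact set, and in a real deployment each returned command would be forwarded to a drone SDK (e.g. djitellopy for a Tello-class drone).

```python
# Illustrative gesture-to-command table; the labels below are
# hypothetical placeholders, not the exact gestures used in the paper.
GESTURE_COMMANDS = {
    "open_palm": "hover",      # neutral gesture: hold position
    "fist": "land",            # land immediately
    "point_up": "ascend",
    "point_down": "descend",
    "swipe_left": "move_left",
    "swipe_right": "move_right",
    "thumbs_up": "takeoff",
    "two_fingers": "sos",      # emergency: fly the SOS alert pattern
}

def dispatch(gesture: str) -> str:
    """Translate a recognized gesture label into a drone command.

    Unknown or low-confidence labels fall back to "hover" so that a
    misclassification never produces an unsafe manoeuvre.
    """
    return GESTURE_COMMANDS.get(gesture, "hover")
```

Defaulting to a safe "hover" on unrecognized input is a deliberate design choice: it keeps classifier errors from translating into dangerous flight behaviour.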
This approach not only mitigated the need for cumbersome control devices but also paved the way for autonomous operations, thus enhancing efficiency and safety.

In summary, our research and development efforts culminated in a comprehensive framework that not only advanced the state of the art in aerial surveillance but also translated theoretical advancements into tangible, market-ready solutions with far-reaching implications for various industries.

II. PROJECT MOTIVATION

Both drones and computer vision are very popular, and in some cases both have been combined for tracking purposes. Even so, there are still no commercial products strictly controlled by hand gestures; being the first to offer one would be an extremely rewarding achievement. Flying a drone for the first time can be quite difficult and cause a lot of trouble for the user. With your own hand gestures, you can add a sense of lightness and fluidity not found with a typical smartphone or handheld controller. In addition, our solution involves controlling the drone with one hand, unlike traditional drones where you use both hands to operate a physical remote controller. Our solution allows the user to control the drone with just one hand, as long as that hand stays in the correct field of view. This leaves the other hand free to move, which is something that is often overlooked when it comes to drones. Drones are often used to capture something in motion, whether it's outdoor action sports or photographers and videographers trying to get a bird's-eye view that isn't easily achievable without them. A free hand will immediately benefit drone users.

A. Aim of the Project
The aim of this project is to develop a facial recognition and face tracking system with good performance, implemented on a UAV, so that it is able to recognize people; the UAV also has to follow a path based on human hand gestures.

B. Objectives of the Project
The main objectives of this system are:

1. To program a drone to create a more personalized approach to controlling it using hand gestures.
2. To program a drone so that it will be capable of face tracking. It will be able to follow the person whose face it has detected while keeping a specific distance from him/her.

C. Scope of the Project
Facial recognition solutions are anticipated to be found in 1.33 billion devices by 2024. Powered by AI, facial recognition software in cell phones is already being used by corporations like iProov and Mastercard to authenticate payments and other high-end authentication tasks.

III. CONSTRAINTS

With our planning come many different constraints that drive our decisions and the path we take with our project. These are divided into different types, such as economic, environmental, ethical, health, manufacturing, safety, social, and sustainability constraints. From an ecological point of view, our drone is limited in that it can only fly indoors. Finding indoor spaces where a drone is allowed to fly without permission is not easy. It is important that our drone is able to perform well in difficult situations, and the limitation of four walls around the drone can complicate some of the testing. We are also legally restricted: a license is required to fly the drone outdoors in the state of Florida. To save money and time, we decided to avoid flying the drone outdoors altogether and just focus on flying it indoors. The benefit of this is the controlled environment we gain: there are no factors like wind and rain to worry about, so an indoor drone lessens the constraints of unpredictable outside conditions. Two of the biggest limitations we have to work around are the number of recognizable gestures and battery life. It is important that we define hand gestures distinctly enough for the webcam to recognize them; if one hand motion is too close to another, there could be an error. As we test our initial flight gestures, we will better understand how similar we can make them. Our starting gestures vary widely, but as we grow, we need to find hand gestures that are unique enough not to interfere with one another. Battery life is another hurdle that we must work around or try to overcome.

IV. STANDARDS

When working on a project, standards are essential, as they set a level of quality and expectations across the board. They also help to make the project adaptable and easy to integrate: if another company or team integrated our project, they could easily adapt to our industry-standard protocols.

The standard protocols we follow are:
1. I2C communication protocol
2. UART communication protocol
3. ISO/TC 20/SC 16 (Unmanned Aircraft Systems)
4. IPC-A-610
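The I2C protocol listed above typically delivers sensor samples (e.g. from the drone's barometer) as pairs of 8-bit register reads that must be recombined in software. Below is a hedged sketch of that decoding step; the register layout is an assumption for illustration, and on real hardware the two bytes would come from a call such as smbus2's SMBus.read_byte_data.

```python
def decode_word(high: int, low: int, signed: bool = True) -> int:
    """Combine two 8-bit I2C register reads into one 16-bit sample.

    Many I2C sensors (barometers, IMUs) split each sample across a
    high and a low register, because the bus itself moves single bytes.
    """
    value = ((high & 0xFF) << 8) | (low & 0xFF)
    if signed and value >= 0x8000:
        value -= 0x10000  # two's-complement sign extension
    return value

# Examples (hypothetical register contents):
#   decode_word(0x01, 0x02) -> 258
#   decode_word(0xFF, 0xFE) -> -2 (a small negative reading)
```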
Both I2C and UART are widely used communication peripherals, and most of the sensors and third-party devices we use are compatible with them. Most flight controllers use UARTs, while all the sensors we looked at reside on the I2C bus. Using the I2C bus can greatly simplify our design and make it more efficient. All drones must comply with the ISO/TC 20/SC 16 standards for unmanned aircraft systems. Ours is an unmanned aerial vehicle system, and these standards outline what is and isn't allowed in terms of places to fly your drone. This makes for a more responsible and educated population of drone operators, which is especially important as drones become more popular. IPC-A-610 is the Acceptability of Electronic Assemblies standard. This standard will ensure that our product features extremely reliable printed wiring. This is a crucial criterion for our project to meet, as it will further validate our product and make it more marketable. It also allows us to proceed to manufacturing faster, since a product that already meets the industry standard does not need to be verified against it again.

V. METHODOLOGY

A. Hand Gesture Control
We use Haar features to represent each image in the dataset. Although Haar features were introduced as early as 1910, they only became popular for image recognition problems after extensive analysis. A Haar-based feature considers rectangular areas at different locations in the detection window, sums the pixel intensities within each rectangle, and calculates the difference between these sums. These differences are then used to categorize the image. In our scenario, the feature extraction engine follows this pattern: hand gestures are represented by many local Haar features. The classifier then maps the feature vector of a gesture either to one of the existing gesture labels or to the empty label. The reasons for choosing Haar classifiers over other algorithms are as follows: the Haar cascade has a better detection rate than other features; it is easy to implement; it achieves high accuracy with fewer training images; and it consumes less memory than GPU-based image classification systems such as convolutional networks.

B. Key Control
1. Neural Network
Before building a neural network application, an understanding of neural network components must be developed; neural networks have the following characteristics and limitations. In terms of hierarchy, neural networks are a subset of machine learning. Artificial intelligence is a broad term that encompasses machine learning. Machine learning allows a system to learn and improve from previously entered data without being explicitly programmed, and neural networks are a subset of machine learning: certain components and properties of neural networks enable this learning. Essentially, a neural network is a set of algorithms designed to recognize and learn from patterns, allowing it to perform some tasks without being explicitly programmed to do so. Some popular neural network applications include speech recognition, object recognition, image processing, and text recognition.

Neural networks are modelled on our brain and how our brain processes information. They are made up of interconnected nodes, or neurons, that pass input and output to other neurons. All nodes are connected by weighted edges. A weight represents the strength of a connection between nodes and determines the influence one node has on another: the greater the weight between two nodes, the greater the influence one node has on the other. Neural networks are usually trained on some set of data; as this training takes place, the weights are updated to give optimal results. Neural networks are also divided into three generalized layers: the input layer, the hidden layers, and the output layer. The input layer provides the input data for the neural network. The hidden layers are those between the input and output layers, and this is where all the arithmetic and learning takes place; the more hidden layers there are, the deeper the neural network. How many hidden layers are needed depends on the machine learning application itself. The output layer is the last layer in the network and produces the final result. The idea of having a machine train itself to process data and learn from it, without explicitly teaching the machine, is called deep learning; the hidden layers of the neural network enable this learning. Currently, there are many different types of neural networks. Some examples include recurrent neural networks, long short-term memory networks, and convolutional neural networks. For our project, the neural network that we will implement is the convolutional neural network. This particular type of neural network helps bridge the gap between computer vision and deep learning. Convolutional neural networks have proven effective in areas such as image recognition and classification and have had great success in tasks related to object recognition.
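As a concrete illustration of the layered structure described in this section — an input layer, hidden layers connected by weighted edges, and an output layer scoring the eight gesture classes used in this project — here is a minimal NumPy forward pass. The layer sizes and random weights are placeholders for illustration, not the network actually trained for the drone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: a flattened 32x32 grayscale gesture image feeding
# one hidden layer, then scores for the 8 gesture classes.
W1 = rng.standard_normal((1024, 64)) * 0.01  # input -> hidden weights
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 8)) * 0.01     # hidden -> output weights
b2 = np.zeros(8)

def forward(image: np.ndarray) -> np.ndarray:
    """One forward pass: weighted sums, ReLU, then softmax over classes."""
    hidden = np.maximum(0.0, image @ W1 + b1)  # hidden-layer activations
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())        # numerically stable softmax
    return exp / exp.sum()

probs = forward(rng.random(1024))  # probability over the 8 gesture classes
```

Training would then repeatedly adjust W1, b1, W2, and b2 to reduce the prediction error — exactly the weight-update process described above.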
Based on these facts, we decided to implement a convolutional neural network in our project, with which the challenge of accurately detecting and classifying hand gestures in real time can be solved. A convolutional neural network (CNN) takes an image as input; in our project, this will be an image of a hand gesture. The image is then sent through hidden layers where it is broken down: different properties of the hand gesture image are extracted and learned through the network. For example, some features that can be extracted are edges and corners; a closed hand gesture image has different-looking edges than an open hand gesture image. As features are extracted and learned, the weights assigned to each node in the network are modified to account for the learned features. This is called the feature learning phase. In the prediction phase, the network makes a prediction about what it thinks the input image is, i.e., it classifies the image based on the features extracted by the network.

In our project, an input hand gesture image can only be one of eight. Therefore, the network must classify the input gesture image as one of eight different classes. The specific components that go into learning and classifying features are called the building blocks of the CNN.

VI. LITERATURE REVIEW

a) Paper 1

Paper Title: A Comparison of Face Verification with Facial Landmarks and Deep Features

In the paper titled "A Comparison of Face Verification with Facial Landmarks and Deep Features," the authors explored the efficacy of different methodologies in face verification. Leveraging tools such as Dlib for facial landmark detection, Histogram of Oriented Gradients (HOG), and the VGG FaceNet architecture, they conducted an in-depth analysis to compare the performance of the facial-landmarks-based approach against the deep-features-based approach. Their findings reveal a notable discrepancy in accuracy between the two approaches. Specifically, the results demonstrate that the utilization of deep features significantly outperforms facial-landmarks-based methods in verifying whether a given face corresponds to a specific individual. This indicates that the depth of features extracted from facial images holds greater discriminative power in distinguishing between individuals compared to the traditional approach relying solely on facial landmarks. Citing previous research by Amato, Giuseppe; Falchi, Fabrizio; Gennaro, Claudio; and Vairo, Claudio (2018), the study contributes valuable insights into the realm of face verification, shedding light on the superiority of deep-feature-based methodologies over landmark-based techniques.

b) Paper 2

Paper Title: Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection

In the paper titled "Okutama-Action: An Aerial View Video Dataset for Concurrent Human Action Detection," the authors introduced a novel dataset aimed at facilitating research in concurrent human action detection from aerial-view videos. Employing tools and technologies such as the Single Shot MultiBox Detector (SSD) and VGGNet, they conducted extensive analyses to evaluate the performance of models in this domain. Their findings indicate a notable discrepancy between the localization and classification capabilities of the models. While the models demonstrate proficiency in localizing objects within the aerial-view videos, they exhibit limitations in accurately distinguishing between different classes of human actions. This gap is particularly evident when comparing the performance of pedestrian detection and action detection tasks, underscoring the challenges inherent in concurrent human action detection from aerial perspectives. Drawing from previous research by Barekatain, Mohammadamin; Marti, Miquel; Shih, Hsueh-Fu; Murray, Samuel; Nakayama, Kotaro; Matsuo, Yutaka; and Prendinger, Helmut (2017), the study contributes to the advancement of knowledge in the field of aerial-view video analysis, providing valuable insights into the capabilities and limitations of existing methodologies for concurrent human action detection.

c) Paper 3

Paper Title: Human Control of UAVs using Face Pose Estimates and Hand Gestures

In the paper titled "Human Control of UAVs using Face Pose Estimates and Hand Gestures," the authors presented a novel approach to the human control of unmanned aerial vehicles (UAVs) leveraging face pose
estimates and hand gestures. Employing tools and technologies such as Support Vector Regression (SVR), the Haar cascade classifier, and computer vision techniques, the study aimed to facilitate intuitive and efficient interaction between users and UAVs. The key finding of the research revealed that the UAV could be effectively maneuvered based on the user's hand gestures and facial pose estimates. This integrated control mechanism enabled seamless and intuitive control of the UAV, enhancing user experience and operational efficiency. Citing the work by Nagi, Jawad, et al. (2014), presented at the 9th ACM/IEEE International Conference on Human-Robot Interaction (HRI), the paper contributes valuable insights into the realm of human-UAV interaction, highlighting the potential of combining facial pose estimation and hand gestures for intuitive UAV control.

VII. PROBLEM DEFINITION

The rapid evolution and spread of drones make them a popular area for researchers in many different domains, whether for commercial or personal usage. A drone can be put to many different uses, like recording extreme-sports footage or assisting fire-fighting emergency services. Keeping such use cases in mind, the proposed system will help in controlling the drone using various hand gestures like up, down, forward, land, stop, etc. It will also be equipped with a face tracking system, which will be implemented using image processing. A line-follower mechanism will also be incorporated into the system. This project will aid in exploring this field of research more deeply.

A. Existing System
Computer vision-based methods rely on the ability of a drone's camera to capture images of its surroundings and use pattern recognition to translate the images into meaningful and/or actionable information. The steps are: separating images from the video sequences, creating a robust and reliable image recognition system based on the separated images, and finally converting classified gestures into actionable drone movements, such as starting, landing, etc. In this work, a set of five gestures is examined. A Haar-feature-based AdaBoost classifier is used for gesture recognition. Operator safety and drone actions are also considered when calculating distance based on computer vision for this task. A series of experiments is performed to measure the accuracy of gesture recognition, taking into account the main variables of the scene: lighting, background, and distance. Classification accuracies show that well-lit gestures made clearly within 3 feet are correctly recognized more than 90% of the time. Limitations of the current framework and feasible solutions for better gesture recognition are also discussed. The software library the authors developed and the hand gesture datasets are available as open source on the project website.

B. Disadvantages of the Existing System:

• Facial recognition on drone video footage is difficult.
• One problem we have found is the degraded performance, especially at low resolutions.

C. Proposed System
In this work, we studied the performance of deep neural networks for face re-identification specifically designed for low-quality images, and deployed them on a drone. We also added another functionality, line following; applications for this functionality include product delivery, warehouse management, library management, etc. To implement it, we use OpenCV with an algorithm that can detect the line and execute commands based on the line pattern. Finally, if there is an emergency and the user wants to alert someone, the user can send an SOS message to the drone, which then follows one preset pattern, flying in that pattern to grab the attention of a third person and alert them.

Keeping in mind the problems mentioned above, we aim to provide a well-equipped, fully functioning, and interactive drone for public safety and surveillance. Our project is designed to skip the steep learning curve by providing predefined actions for the drone, such as elevating, de-elevating, and moving left, right, forward, and backward. The operator only needs to get familiar with the hand gestures. We provide functionalities for a wide range of use cases. We have enabled facial and object recognition with which the drone will follow the target. To overcome the problem of jamming, we have also configured gesture-based controls. Another feature that we have added is path following, with which a predefined path can be traced and a repetitive process can thus be automated.

D. Advantages of the Proposed System:
It takes extensive training and time to learn how to operate a drone safely with the common remote that comes with most drones, and drones can be very
dangerous if flown incorrectly, or if they go off-course because the operator does not know how to use the remote control. Our project is designed to skip this steep learning curve by providing predefined actions for the drone, such as elevate, de-elevate, and move left, right, forward, and backward, so that the user can simply pick up the drone and start using it without the worry of accidentally thrusting the drone into a tree and damaging the several-hundred-dollar drone that they bought just minutes earlier. On top of that, the user does not even need to operate any extraneous hardware to perform those actions; they simply need to use their hands. Our gesture schema has been set up to be accessible to any and all who have full motion of all of their fingers. Thus, our system tackles the problem of extensive training requirements: with our easy-to-use hand gesture control mode of operation, anyone with a basic understanding and one-time training can operate the drone. Also, the functionality of automating the repeated process of tracing a predefined path is extremely advantageous for surveillance purposes, such as in agricultural farms to prevent crop damage by animals or to track infiltrators, and moreover in cases of military counter-insurgency operations. Our proposed system could prove to be a great asset, as it can locate the positions of infiltrators without risking the lives of our soldiers, and it overcomes the hindrances in operating traditionally available remote-controlled drones caused by low-cost jamming equipment, which is readily available and used by infiltrators to block the radio signals from the remote control. To conclude, our project is going to be a new way to interact with drones, making them user-friendly.

VIII. HARDWARE & SOFTWARE REQUIREMENTS

Hardware:
• Processor: i3, i5 or better
• RAM: 4 GB or more
• Hard disk: 16 GB or more

Software:
• Operating System: Windows 7, 8, or 10
• Python
• Anaconda
• Spyder, Jupyter Notebook, Flask

A. Python
Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built-in data structures, combined with dynamic typing and dynamic binding, make it very attractive for rapid application development, as well as for use as a scripting or glue language to connect existing components together. Python's simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of program maintenance. Python supports modules and packages, which encourages program modularity and code reuse. The Python interpreter and the extensive standard library are available in source or binary form without charge for all major platforms, and can be freely distributed. Often, programmers fall in love with Python because of the increased productivity it provides. Since there is no compilation step, the edit-test-debug cycle is incredibly fast. Debugging Python programs is easy: a bug or bad input will never cause a segmentation fault. Instead, when the interpreter discovers an error, it raises an exception. When the program doesn't catch the exception, the interpreter prints a stack trace. A source-level debugger allows inspection of local and global variables, evaluation of arbitrary expressions, setting breakpoints, stepping through the code a line at a time, and so on. The debugger is written in Python itself, testifying to Python's introspective power. On the other hand, often the quickest way to debug a program is to add a few print statements to the source: the fast edit-test-debug cycle makes this simple approach very effective.

B. MySQL
MySQL is well known as the world's most widely used open-source database (back-end). It is the most popular database for PHP, as PHP-MySQL is the most frequently used open-source scripting-database pair. The user interface that WAMP, LAMP, and XAMPP servers provide for MySQL is the easiest and reduces our work to a large extent.

C. FLASK
Flask is a web application framework that is built with flexibility and speed in mind. Flask is built in Python, which many data scientists are familiar with. Flask takes care of the environment and project setup involved in web applications, allowing the developer to focus on their application rather than thinking about HTTP, routing, datasets, etc. Flask allows data scientists to create simple single-page applications, and one should look into it if they want to create products for consumers. Flask is a micro web framework written in Python. It is classified as a micro framework because it doesn't require particular tools or libraries.
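As a hedged sketch of how little code a Flask application needs — the route name and JSON payload below are hypothetical illustrations, not the project's actual endpoints:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical status endpoint; a drone ground station might expose
# connection state and battery level to a browser dashboard like this.
@app.route("/status")
def status():
    return jsonify(connected=True, battery=87)

# app.run(debug=True) would start Flask's built-in development server
# with the interactive debugger enabled.
```

During development, `app.test_client()` lets such routes be exercised without starting a server at all.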
There is no database abstraction layer, form validation, or other components where pre-existing third-party libraries provide common functions. However, Flask supports extensions that can add application features as if they were implemented in Flask itself. Extensions exist for object-relational mappers, form validation, upload handling, various open authentication technologies, and a number of other common framework-related tools. Flask was created by Armin Ronacher of Pocoo, a worldwide group of Python enthusiasts formed in 2004. According to Ronacher, the idea was originally an April Fools' joke that proved popular enough to be turned into a serious application. When Ronacher and Georg Brandl created a bulletin board system written in Python, the Pocoo projects Werkzeug and Jinja were developed. Flask has become popular among Python enthusiasts. As of October 2020, it had the second-most stars on GitHub among Python web-development frameworks, only slightly behind Django, and was voted the most popular web framework in the Python Developers Survey 2018.

These are some important features of Flask:

• Development server
• Debugger
• RESTful request dispatching
• Unicode based
• Google App Engine compatibility

D. Drone Specifications

• Weight: 80 g (propellers and battery included)
• Dimensions: 98×92.5×41 mm
• Propeller: 3 inches
• Built-in Functions: Range Finder, Barometer, LED, Vision System, 2.4 GHz 802.11n Wi-Fi, 720p Live View
• Port: Micro USB Charging Port

• Flight Performance
• Flight Distance: 800 m
• Max Speed: 8 m/s
• Flight Time: 13 min
• Max Flight Height: 30 m

• Camera
• Photo: 5 MP (2592×1936)
• FOV: 82.6°
• Video: HD 720p at 30 fps

• Battery
• Detachable Battery: 1.1 Ah / 3.8 V
• Battery Life: 1.1 Ah / 3.8 V

• Special Features
• Throw & Go: start flying by simply tossing Tello into the air.
• 8D Flips: slide on screen to perform cool aerial stunts.
• Bounce Mode: Tello flies up and down from your hand automatically.

• No. of Antennas: 2

IX. PLANNING

A. Software Development Life Cycle
The entire project spanned a duration of 6 months. In order to effectively design and develop a cost-effective model, the Waterfall model was practiced.

B. Requirement Gathering and Analysis Phase:
This phase started at the beginning of our project; we formed groups and modularized the project. Important points of consideration were:

1. Define and visualize all the objectives clearly.
2. Gather requirements and evaluate them.


3. Consider the technical requirements needed, and then collect technical specifications of the various peripheral (hardware) components required.
4. Analyze the coding languages needed for the project.
5. Define coding strategies.
6. Analyze future risks / problems.
7. Define strategies to avoid these risks, or else define alternate solutions to these risks.
8. Check financial feasibility.
9. Define Gantt charts and assign a time span for each phase.

By studying the project extensively, we developed a Gantt chart to track and schedule the project. Below is the Gantt chart of our project.

X. ADVANTAGES

It takes extensive training and time to learn how to operate a drone safely with the common remote that comes with most drones, and drones can be very dangerous if flown incorrectly, or if they go off-course because the operator does not know how to use the remote control.

Our project is designed to skip this steep learning curve by providing predefined actions for the drone, such as elevate, de-elevate, and move left, right, forward, and backward, so that the user can simply pick up the drone and start using it without the worry of accidentally thrusting the drone into a tree and damaging the several-hundred-dollar drone that they bought just minutes earlier. On top of that, the user does not even need to operate any extraneous hardware to perform those actions; they simply need to use their hands. Our gesture schema has been set up to be accessible to any and all who have full motion of all of their fingers. The functionality of automating the repeated process of tracing a predefined path is extremely advantageous for surveillance purposes, such as in agricultural farms to prevent crop damage by animals or to track infiltrators, and moreover in cases of military counter-insurgency operations. Our proposed system could prove to be a great asset, as it can locate the positions of infiltrators without risking the lives of our soldiers.

XI. FUTURE MODIFICATIONS

Ensuring interoperability between different blockchain platforms is of high importance and shall be considered as one of the possible directions for future work. In addition, we plan to implement the payment module within the existing framework. For this, we would need to consider certain issues, as we would have to decide how much a patient would pay for a consultation with the doctor on this decentralised system operating on the blockchain. We would need to outline certain policies and rules that accompany the principles of the health care sector.

XII. CONCLUSION

The UAV is smaller in size, which assists in flying within compact construction blocks; it has good balance; it can fly in specific weather regions; and it is able to be operated by the spontaneous movements of people. It will assist at the borders of India in the surveillance and detection of humans. The video is recorded on an entry-level device, which assists in investigating it and performing many image-processing operations. The manufacturing cost is low and replacement is easy.

ACKNOWLEDGMENT

The completion of this undertaking could not have been possible without the participation and assistance of so many people whose names may not all be enumerated. Their contributions are sincerely appreciated and gratefully acknowledged. However, the group would like to express their deep appreciation
accessible to any and all that have full motion of all of indebtedness particularly the following Prof. Nitin
their fingers thus our system tackles the problem of More for his support as a project guide, Dr. Dhanraj P.
extensive training requirements to operate the drones Tambuskar for project guidance and support as the
with our easy-to-use hand gesture control mode of HOD. We wish to express deep sense of gratitude to the
operating anyone with the basic understanding and project Coordinator Prof. Gajanan Thokal of
one-time training can operate the drone. department of mechanical engineering for his
necessary help and encouragement. We would also like
Furthermore, the functionality of automating the to take this opportunity to thank our project coordinator
repeated process of tracing a predefined path is and our principal Dr. Sandeep Joshi for their endless
extremely advantageous in surveillance purposes, support, kind and understanding spirit during our case
such as in agricultural farms to prevent crop damage presentation.
by animals, or tracking infiltrators etc., in cases of
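To make the predefined gesture actions concrete, the sketch below maps a recognized gesture label to a drone command string and sends it over UDP. The gesture labels, command strings, and drone address are illustrative assumptions (modeled on the text-based Wi-Fi SDKs of small quadcopters such as the DJI Tello), not our exact implementation:

```python
import socket

# Gesture label -> drone command string. Labels and commands are
# illustrative assumptions, modeled on the text-over-UDP SDK style
# used by small Wi-Fi quadcopters such as the DJI Tello.
GESTURE_TO_COMMAND = {
    "open_palm":   "takeoff",
    "fist":        "land",
    "point_up":    "up 20",      # elevate by 20 cm
    "point_down":  "down 20",    # de-elevate by 20 cm
    "swipe_left":  "left 20",
    "swipe_right": "right 20",
    "two_fingers": "forward 20",
    "thumb_back":  "back 20",
}

DRONE_ADDR = ("192.168.10.1", 8889)  # assumed drone IP and command port

def gesture_to_command(gesture):
    """Translate a recognized gesture label into a command string, or None."""
    return GESTURE_TO_COMMAND.get(gesture)

def send_command(sock, gesture):
    """Send the command for a gesture over UDP; return the command sent."""
    cmd = gesture_to_command(gesture)
    if cmd is None:
        return None  # unrecognized gesture: send nothing rather than fly blindly
    sock.sendto(cmd.encode("utf-8"), DRONE_ADDR)
    return cmd
```

In practice, the gesture recognizer would call `send_command()` each time it emits a stable gesture label; unknown labels are ignored so the drone never receives an unintended command.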
