SMART SANITIZATION ROBOT

Main Project Report

Submitted in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND COMMUNICATION ENGINEERING
Certificate
This is to certify that the Project Report entitled Smart Sanitization Robot is a bonafide work carried out by ANAGHA MENON (KTE19EC018), ELSA MARIA XAVIER (KTE19EC034), G SARADHA (KTE19EC036), and RISHIKESH M NAMBIAR (KTE19EC051) during 2022-23, in partial fulfillment of the requirements for the award of the B.Tech. Degree in Electronics and Communication Engineering of A.P.J. Abdul Kalam Technological University, at Rajiv Gandhi Institute of Technology.
ACKNOWLEDGEMENT

As the project reached its completion, we wish to express our deep and profound gratitude to our Project Coordinators, Dr. Rezuana Bai J, Associate Professor, Dept. of ECE, and Prof. Umesh A C, Associate Professor, Dept. of ECE, and to our Project Guide, Dr. Manju Manuel, Professor, Department of Electronics and Communication Engineering, Rajiv Gandhi Institute of Technology, Kottayam, for their inspiration and sincere guidance throughout the project.

We express our heartfelt gratitude to our Principal, Dr. Jalaja, and the Head of our Department, Dr. Anil Kumar C.D, for providing us with the opportunity to successfully complete the project.

We also convey our genuine love to our parents for their constant love and words of affection, without which we would not have reached anywhere. The completion of this project would not have been possible without the help of our project guides; we thank them for their valuable guidance, suggestions, and support.

We are grateful to all the staff members of our college, RIT Kottayam, for their cooperation and help extended during the course of our project. We also thank all our friends and family members for their encouragement, inspiration, and moral support, without which this work would never have been possible.
ABSTRACT
The outbreak of the pandemic has reaffirmed the importance of proper cleaning and sanitization in all areas with human interaction, making an automated sanitization system for quarantine rooms, hospitals, chemical industries, and households all but inevitable. With this project, we intend to create a sanitization robot that maps the layout of a room using a LiDAR sensor and sanitizes the entire surface of the generated map. Provision is also made for an alert on an LCD screen when the sanitizer is almost exhausted. We use the Robot Operating System (ROS) platform to simulate and model the prototype. SLAM (Simultaneous Localization and Mapping) is the technique used for mapping; it yields a model of the environment with the help of which the robot can navigate its path automatically.
Contents

ACKNOWLEDGEMENT
ABSTRACT
LIST OF FIGURES
1 INTRODUCTION
2 LITERATURE SURVEY
3 SYSTEM DESCRIPTION
  3.1 Block Diagram
  3.2 SLAM (Simultaneous Localization and Mapping)
    3.2.1 Visual SLAM (vSLAM)
    3.2.2 LiDAR SLAM
  3.3 MAIN FUNCTIONS
    3.3.1 Mapping
    3.3.2 Localization
    3.3.3 Navigation
4 SOFTWARE USED
  4.1 ROS (Robot Operating System)
    4.1.1 Robot Programming
    4.1.2 ROS
    4.1.3 ROS Architecture and Concepts
    4.1.4 Why use ROS
    4.1.5 Hector SLAM
    4.1.6 ROSSERIAL
  4.2 RVIZ (Robot Visualization)
  4.3 ARDUINO IDE
  4.4 VNC VIEWER
  4.5 GAZEBO
  4.6 SOLIDWORKS
5 HARDWARE DESCRIPTION
  5.1 Raspberry Pi 4
  5.2 Arduino Mega 2560 Rev3
  5.3 12V 5200mAh Lithium-ion Battery
  5.4 L293D Motor Driver
  5.5 DC Motor
  5.6 SLAMTEC RPLiDAR A1-M8
  5.7 Mini Aquarium Water Pump 6V-12V
6 PROPOSED 3D MODEL
7 SIMULATION EXPERIMENT RESULTS
8 FINAL MODEL
9 REAL ENVIRONMENT IMPLEMENTATION
10 CONCLUSION
REFERENCES
List of Figures

5.1 Raspberry Pi 4
5.2 Arduino Mega
5.3 5200mAh Lithium-Ion Battery
5.4 L293D Motor Driver
5.5 DC Motor
5.6 SLAMTEC RPLiDAR A1-M8
5.7 Mini Aquarium Water Pump
8.1 Robot
9.1 Code for map generation
9.2 Map generated on RViz
9.3 Code for Localization
9.4 Localization of the robot
9.5 Code for Navigation
9.6 Navigation
9.7 Generated path shown in green
Chapter 1
INTRODUCTION
The outbreak of the pandemic has reaffirmed the importance of proper sanitization of surfaces. At the same time, the implementation of autonomous robots to make life easier for humans is the prevailing trend in technology. Combining the two, we intend to develop an autonomous bot that sanitizes a closed area and can automatically traverse a dynamic environment, correcting the previously generated map if necessary. Using a predefined framework such as ROS makes the process much easier and less expensive.

ROS is an open-source framework for robots that comprises libraries and packages. The project we are presenting focuses on the SLAM method for localizing the robot and building the map. The robot creates the map using the range data of obstacles generated by the LiDAR sensor and localizes itself in the map with respect to the landmarks it recognizes during navigation. A sanitizer pumping system is implemented along with the robot to sanitize the path that the robot travels. In practical terms, the robot explores a static world and, with an accurate map, plans a path for its movement. The methods used for SLAM can be broadly classified by the sensor they employ.

Autonomous robots use visual or range sensors to sense the environment and generate its map, and then navigate within it. Lasers, ultrasonic sensors, and cameras are often used for SLAM. A LiDAR sensor is used in this work because it can capture an obstacle's height, density, and other characteristics, which helps in accurate map building. The basic concept behind SLAM is to generate a map of an unknown environment by exploring it only once. While exploring the environment, the robot keeps track of its own location as it covers the entire space.
Chapter 2
LITERATURE SURVEY
S. Pan, Z. Xie, and Y. Jiang have proposed methods for sweeping-robot experiments conducted in ROS [1]. Their SLAM algorithm allows the robot to sense the environment through its own sensors, build a map of the environment, and calibrate its position, so that it can move in an unknown environment. Laser-based SLAM has stable ranging performance in small static scenes and is less affected by light intensity.
Syed Saad ul Hassan examines the performance of a LiDAR sensor with the Hector SLAM algorithm [2] and finds the LiDAR sensor well suited for distance calculation. The paper also discusses three assumptions required for SLAM to work properly: the environment must be suitable for data acquisition (illumination, texture, etc.), the data received from the sensors must partially overlap in consecutive frames, and the static parts of the environment must dominate the dynamic ones. LiDAR (light detection and ranging) illuminates its vicinity by emitting laser pulses; the reflected pulses are detected by a photodetector and the distance is measured. Knowing the sensor's position, LiDAR allows us to calculate 3D points. Compared to other imaging and range-measurement techniques such as cameras and radar, LiDAR generally has the upper hand in terms of accuracy, reliability, and range. Another important characteristic of LiDAR is its applicability in multiple domains:
• Object Detection
• Object Mapping and Depth Imaging
Mustafa Eliwa, Ahmed Adham, Islam Sami, and Mahmoud Eldeeb describe Hector SLAM as an open-source algorithm for building a 2D grid map of the surrounding environment based on a laser scan sensor (LiDAR) [3]. The algorithm locates the robot by scan matching and does not use wheel odometry, the common method in other SLAM algorithms. Thanks to its high update rate, the LiDAR enables scan matching to locate the robot quickly and accurately.
An, Shi, Gu, et al., in a paper published in 2022, present that visual SLAM requires relatively stable lighting conditions [5]. Images obtained from a monocular camera cannot recover absolute scale. Compared to visual SLAM, LiDAR SLAM has higher accuracy, although the collected 3D points are distorted by the motion of the LiDAR sensors. The paper also shows that the vertical resolution of LiDAR sensors is low.
Chapter 3
SYSTEM DESCRIPTION
The Smart Sanitization Robot is designed to provide its services in all kinds of places, such as hospitals, schools, households, and offices. It aims to sanitize the area that it covers. Its movement follows the path generated from mapping, and the map is created with the help of the LiDAR's precise object-detection capability. The robot creates a map of the room and sanitizes the surface area as it moves. The system consists of the elements required for each function. A Raspberry Pi 4 acts as the main controller board; it controls the movements based on inputs from the ROS software running the SLAM algorithm, which in turn processes the signals from the LiDAR sensor. When the power is turned on, the model senses the area around it and builds a map of its environment using data from the sensors provided. The robot identifies its location relative to its surroundings, creates a map, detects objects as point clouds, and plans a path of its own to sanitize the environment. The LiDAR sensor's outputs are fed to the Raspberry Pi 4, which then makes accurate decisions for all the required movements. The robot initiates the movement, avoiding any obstacles in its path, until the complete area is covered. In addition, the user is notified through an LCD module when the sanitizer is exhausted. Figure 3.1 shows the block diagram of the system.
3.1 Block Diagram
Figure 3.1 depicts the block diagram of our smart sanitization system. The Smart Sanitization Robot obtains information about the environment with the help of the LiDAR sensor. It then creates a map of the surroundings and detects the obstacles. After localization, it generates a path for its movement that covers the complete area shown in the map. These functions run on the Raspberry Pi 4. The movement information is sent to the Arduino Mega, which produces the desired motion by driving the motor driver and, in turn, the motors and wheels. The mini water pump begins to pump the sanitizer as soon as the movement begins, and the sanitizer is sprayed through the nozzle. The ultrasonic sensor simultaneously checks the level of sanitizer liquid left in the container and informs the user through the LCD display.
3.2 SLAM (Simultaneous Localization and Mapping)
The working of SLAM can be broken down into front-end data collection and back-end data processing. Front-end data collection is of two types: visual SLAM and LiDAR SLAM.
3.2.1 Visual SLAM (vSLAM)
Visual SLAM uses a camera to acquire imagery of the surroundings. It can use simple cameras (360-degree panoramic, wide-angle, and fish-eye cameras), compound-eye cameras (stereo and multi-camera rigs), and RGB-D cameras (depth and ToF cameras). A ToF (time-of-flight) camera is a range imaging camera system that resolves the distance between the camera and the subject for each point of the image by measuring the round-trip time of an artificial light signal provided by a laser or an LED.
Visual SLAM implementations are generally low cost, as they use relatively inexpensive cameras. Additionally, because cameras provide a large volume of information, they can be used to detect landmarks (previously measured positions). Landmark detection can also be combined with graph-based optimization, which adds flexibility to the SLAM implementation. Visual SLAM algorithms can be broadly classified into two categories: sparse methods, which match feature points across images and include algorithms such as PTAM and ORB-SLAM, and dense methods, which use the overall brightness of images and include algorithms such as DTAM, LSD-SLAM, DSO, and SVO.
Visual SLAM is often paired with an inertial measurement unit (IMU) to map and plot a navigation path; when an IMU is used, this is called visual-inertial odometry, or VIO. Odometry refers to the use of motion sensor data to estimate a robot's change in position over time. Typically in a visual SLAM system, set points are tracked through successive camera frames to triangulate their 3D position, which is called feature-point triangulation. This information is relayed back to create a 3D map and identify the location of the robot. An IMU can be added to make feature-point tracking more robust, such as when panning the camera past a blank wall. This is important for drones and other flight-based robots, which cannot use odometry from wheels. After mapping and localization via SLAM are complete, the robot can chart a navigation path. Through visual SLAM, the robot can easily and efficiently navigate a room while bypassing objects, by figuring out its own location as well as the locations of surrounding objects. A potential error in visual SLAM is reprojection error: the difference between the perceived location of each set point and the actual set point. Camera optical calibration is essential to minimize geometric distortions (and reprojection error), which would otherwise reduce the accuracy of the inputs to the SLAM algorithm.
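To make the calibration step concrete, here is a minimal sketch using OpenCV's chessboard calibration. The function names are the standard OpenCV API; the 9x6 inner-corner board and the image path are assumptions made for illustration.

    import glob
    import cv2
    import numpy as np

    # Object points of a 9x6 inner-corner chessboard lying in the z = 0 plane.
    objp = np.zeros((9 * 6, 3), np.float32)
    objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for path in glob.glob("calib_images/*.png"):   # placeholder path
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, (9, 6))
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Estimate the intrinsic matrix K and the lens distortion coefficients.
    # The returned RMS value is exactly the reprojection error discussed above.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)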
3.2.2 LiDAR SLAM
LiDAR SLAM uses a laser sensor. Compared to the cameras used in visual SLAM, lasers are more precise and accurate. The high rate of data capture with high precision makes LiDAR sensors suitable for high-speed applications such as moving vehicles, for example self-driving cars and drones. The output data of LiDAR sensors, often called point cloud data, carries 2D (x, y) or 3D (x, y, z) positional information.

The laser sensor's point cloud provides high-precision distance measurements and works very effectively for map construction with SLAM. Movement is estimated sequentially by matching successive point clouds, and the calculated movement is used for localizing the vehicle. For LiDAR point cloud matching, iterative closest point (ICP) and normal distributions transform (NDT) algorithms are used. 2D or 3D point cloud maps can be represented as grid maps or voxel maps.
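To make the point cloud matching step concrete, below is a minimal sketch of ICP for 2D scans using NumPy and SciPy. It illustrates the technique named above; it is not the matcher of any particular SLAM package.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(source, target):
        # One ICP iteration: returns rotation R (2x2) and translation t
        # that move `source` (Nx2) toward `target` (Mx2).
        nn = cKDTree(target).query(source)[1]       # nearest-neighbour matching
        matched = target[nn]
        src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
        H = (source - src_c).T @ (matched - tgt_c)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)                 # Kabsch/SVD alignment
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                    # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        return R, t

    def icp(source, target, iters=20):
        # The accumulated (R, t) estimates the sensor motion between two scans.
        R_total, t_total = np.eye(2), np.zeros(2)
        for _ in range(iters):
            R, t = icp_step(source, target)
            source = source @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total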
A LiDAR-based SLAM system uses a laser sensor, paired with an IMU, to map a room similarly to visual SLAM, but with higher accuracy in one dimension. LiDAR measures the distance to an object by illuminating it with multiple transceivers; each transceiver quickly emits pulsed light and measures the reflected pulses to determine position and distance.

Since light travels very fast, very precise laser performance is needed to accurately track the exact distance from the robot to each target. This requirement for precision makes LiDAR both a fast and an accurate approach. One of the main downsides of 2D LiDAR is that if one object is occluded by another at the height of the LiDAR, or an object has an inconsistent shape whose width varies along its body, this information is lost.
LiDAR systems like the RPLiDAR A1 provide 2D pose estimates at the scan rate of the sensor. While Hector SLAM does not provide explicit loop-closing ability, it is sufficiently accurate for many real-world scenarios, and it is considered one of the easiest SLAM algorithms to implement for autonomous navigation and mapping on a custom robot.
3.3 MAIN FUNCTIONS
3.3.1 Mapping
Mapping is the process of estimating a model of the environment from the sensor data, given the location of the robot at all times. In simple words, it is the process of building a map of the environment. In this problem the actual pose of the bot is known but the surroundings are unknown, so mapping is concerned only with the positions of landmarks. The internal representation of the map can be metric or topological. The metric framework, the most natural for humans, considers a two-dimensional space in which objects are placed with precise coordinates. This representation is very useful but is sensitive to noise, and it is difficult to calculate distances precisely. The topological framework considers only places and the relations between them; often the distances between places are stored. The map is then a graph in which nodes correspond to places and arcs correspond to the paths between them.

Many techniques use probabilistic representations of the map in order to handle uncertainty. There are three main methods of map representation: free-space maps, object maps, and composite maps. These employ the notion of a grid but permit the resolution of the grid to vary, so that it becomes finer where more accuracy is needed and coarser where the map is uniform.
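As an illustration of a probabilistic grid map, the following sketch maintains an occupancy grid in log-odds form and updates it from a single range reading. The grid size, resolution, and update constants are assumptions chosen for the example, not values used in this project.

    import numpy as np

    RESOLUTION = 0.05            # metres per cell (assumed)
    SIZE = 400                   # 400 x 400 cells = a 20 m x 20 m map
    L_OCC, L_FREE = 0.85, -0.4   # log-odds increments (assumed)

    log_odds = np.zeros((SIZE, SIZE))   # log-odds 0 = probability 0.5

    def to_cell(x, y):
        # World coordinates (origin at the grid centre) to grid indices.
        return (int(round(x / RESOLUTION)) + SIZE // 2,
                int(round(y / RESOLUTION)) + SIZE // 2)

    def update(robot_xy, hit_xy):
        # Mark the beam endpoint occupied and the cells along the beam free.
        log_odds[to_cell(*hit_xy)] += L_OCC
        for s in np.linspace(0.0, 0.95, 20):        # sample along the beam
            x = robot_xy[0] + s * (hit_xy[0] - robot_xy[0])
            y = robot_xy[1] + s * (hit_xy[1] - robot_xy[1])
            log_odds[to_cell(x, y)] += L_FREE

    def probability():
        # Recover occupancy probabilities from the log-odds grid.
        return 1.0 - 1.0 / (1.0 + np.exp(log_odds))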
3.3.2 Localization
Localization forms the heart of autonomous mobile robots: for efficient navigation, a robot needs an effective localization strategy. Localization is the estimation of the pose of the robot given how the environment looks, i.e., given a set of landmarks that represent a map. In the localization problem the map is known, but the current pose of the bot is not. In other words, localization is the process of finding a relation between the coordinate system of a given map and the actual pose of the bot. In a typical robot localization scenario, a map of the environment is available and the robot is equipped with sensors that observe the environment as well as monitor its own motion. The localization problem then becomes one of estimating the robot's position and orientation within the map using the information gathered from these sensors. Localization techniques must cope with noisy observations and generate not only an estimate of the robot's location but also a measure of the uncertainty of that estimate.
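Monte Carlo localization, which the AMCL package used later in this report implements, represents the pose estimate and its uncertainty with a set of weighted particles. The sketch below shows the predict-weight-resample cycle for a 2D pose; the noise levels and the single-landmark range model are deliberate simplifications for illustration (a real system ray-casts into the occupancy grid instead).

    import numpy as np

    rng = np.random.default_rng(0)
    N = 500
    # Each particle is one pose hypothesis (x, y, theta).
    particles = rng.uniform([-5, -5, -np.pi], [5, 5, np.pi], size=(N, 3))

    BEACON = np.array([2.0, 1.0])        # assumed landmark position

    def expected_range(pose):
        # Predicted range to the single known landmark.
        return np.hypot(pose[0] - BEACON[0], pose[1] - BEACON[1])

    def predict(v, w, dt):
        # Motion update: propagate every particle with noisy odometry.
        noise = rng.normal(0.0, [0.02, 0.02, 0.01], size=(N, 3))
        particles[:, 0] += v * dt * np.cos(particles[:, 2]) + noise[:, 0]
        particles[:, 1] += v * dt * np.sin(particles[:, 2]) + noise[:, 1]
        particles[:, 2] += w * dt + noise[:, 2]

    def correct(measured_range, sigma=0.2):
        # Measurement update: weight by agreement with the observation,
        # then resample particles in proportion to their weights.
        global particles
        errors = np.array([expected_range(p) for p in particles])
        weights = np.exp(-0.5 * ((errors - measured_range) / sigma) ** 2)
        weights /= weights.sum()
        particles = particles[rng.choice(N, size=N, p=weights)]

    predict(v=0.1, w=0.0, dt=1.0)    # robot drove 0.1 m straight ahead
    correct(measured_range=2.2)      # sensor saw the landmark at 2.2 m
    print("pose estimate:", particles.mean(axis=0))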
3.3.3 Navigation
Autonomous navigation can be performed once mapping and localization are complete. The ROS navigation stack packages are used to implement Adaptive Monte Carlo Localization (AMCL). The ROS AMCL package provides the node that performs localization on a static map. This node subscribes to the transform (TF) data, the 2D laser scan (or camera) data provided by the robot, and the previously generated static map. The 2D Nav Goal tool in RViz is used to assign a destination goal, along with an orientation, to the robot on the map. The robot plans a path to the desired destination, and velocity commands are sent to the robot controller. The main aim of mobile robot navigation is smooth and safe motion through cluttered environments from the start position to the goal position, following a safe path of optimal length.
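Besides RViz's 2D Nav Goal tool, a goal can be sent to the navigation stack from a node. The following minimal ROS 1 (rospy) sketch uses the standard move_base action interface; the goal coordinates are arbitrary example values.

    #!/usr/bin/env python
    import actionlib
    import rospy
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("send_nav_goal")

    # move_base exposes its interface as an actionlib server.
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"    # goal expressed in map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.5      # example coordinates [m]
    goal.target_pose.pose.position.y = 0.5
    goal.target_pose.pose.orientation.w = 1.0   # facing along +x

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("navigation state: %d", client.get_state())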
Chapter 4
SOFTWARE USED
4.1 ROS (Robot Operating System)

4.1.1 Robot Programming
A robot is a machine with sensors, actuators, and a computing unit that behaves based on user controls, or it can make its own decisions based on sensor inputs. The brain of the robot is a microcontroller or a PC. Robot programming is the process of making the robot work by writing a program for the robot's brain; it is a subset of computer programming. Robots can be programmed in almost any language, but good community support, performance, and prototyping time make C++ and Python the most commonly used.
The following are some of the features needed for programming a robot.
• Threading
• Ease of prototyping
• Interprocess communication
4.1.2 ROS
Ubuntu Linux is the most preferred OS for installing ROS. ROS can run on PC/desktops
and on single-board computers like Raspberry Pi.
4.1.3 ROS Architecture and Concepts
ROS is a framework for communication between two programs or processes. If program A wants to send data to program B, and B wants to send data to A, we can easily implement it using ROS. Figure 4.1 shows two programs marked as node 1 and node 2. When either program starts, the node communicates with a ROS program called the ROS master, sending it all of its information, including the type of data it sends or receives. Nodes that send data are called publisher nodes, and nodes that receive data are called subscriber nodes. The ROS master holds the information of all publishers and subscribers running on the computers. If node 1 publishes particular data called "A" and the same data is required by node 2, the ROS master sends each node the information it needs so that they can communicate with each other.

ROS nodes can exchange different types of data, including primitive data types such as integers, floats, and strings. The different data types being sent are called ROS messages. With ROS messages, we can send data of a single data type or multiple data with different data types. These messages are sent through a message bus or path called a ROS topic, and each topic has a name.
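To make the publisher/subscriber vocabulary concrete, here is a minimal ROS 1 (rospy) pair; the topic name chatter and the message payload are arbitrary example choices.

    #!/usr/bin/env python
    # talker.py -- a publisher node
    import rospy
    from std_msgs.msg import String

    rospy.init_node("talker")
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rate = rospy.Rate(1)                           # publish at 1 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data="hello from talker"))
        rate.sleep()

    #!/usr/bin/env python
    # listener.py -- a subscriber node
    import rospy
    from std_msgs.msg import String

    def callback(msg):
        rospy.loginfo("heard: %s", msg.data)

    rospy.init_node("listener")
    rospy.Subscriber("chatter", String, callback)
    rospy.spin()                                   # hand control over to ROS

With both nodes and a roscore running, the ROS master introduces the two nodes to each other, and the listener prints every message published on chatter.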
4.1.4 Why use ROS

Although ROS has many features, there are still areas where ROS can't be used or is not recommended. In the case of a self-driving car, for example, we can use ROS to build a prototype, but developers do not recommend ROS for the actual product, owing to issues such as security and real-time processing. ROS may not be a good fit in some areas, but in others it is an absolute fit: in corporate robotics research centers and at universities, ROS is an ideal choice for prototyping, and it is used in some robotics products after a lot of fine-tuning (but not in self-driving cars).
4.1.5 Hector SLAM

Hector SLAM is a mapping algorithm that uses only laser scan information to extract the map of the environment; RViz is used for visualization. hector_mapping is a SLAM approach that can be used without odometry and on platforms that exhibit roll/pitch motion (of the sensor, the platform, or both). Hector SLAM combines a 2D SLAM system based on a robust scan-matching technique. Estimation of robot movement in real time under different LiDAR scanning-rate parameters has been tested experimentally [4]. In this project, the RPLIDAR A1 laser scanner, a 2D LiDAR with a 360-degree scanning feature, has been used.
4.1.6 ROSSERIAL
ROSSERIAL is a standard protocol for communication between ROS and a serial device. The communication runs over a serial transmission line and uses serialization/deserialization techniques for transmitting ROS messages. rosserial consists of a general p2p protocol, libraries for use with Arduino, and nodes for the PC/tablet side (currently in both Python and Java). Here, we use the rosserial_arduino package [8], which allows communication between a Linux device, i.e., the Raspberry Pi 4, and a non-Linux device, i.e., the Arduino Mega 2560.
A rosserial node can be programmed to publish and subscribe in the same way as a normal ROS node. The topics are transferred across the USB serial port by the rosserial_python serial_node running on whichever device the rosserial device is tethered to; to be clear, a rosserial node cannot communicate with the rest of ROS without the serial_node acting as a middleman to pass messages between ROS and the microcontroller. The serial device sends each ROS message as a packet with a header and a tail, which allows multiple topics and services from a single hardware device. The packet also contains flags to synchronize the communication between the PC and the device, and vice versa. The figure below shows the packet format used by the rosserial protocol.
The first byte is the Sync Flag, used to synchronize the communication between ROS and the device; its value is always 0xff. The second byte, Protocol Version/Sync Flag, tells you whether the ROS version in use predates ROS Groovy: the value is 0xfe after ROS Groovy and 0xff up until Groovy. The third and fourth bytes represent the length of the message in the packet, and the fifth byte is a checksum of the message length. The sixth and seventh bytes are dedicated to the Topic ID; Topic IDs 0-100 are reserved for system functions. The remaining bytes carry the serial data and its checksum.

The checksum over the topic ID and data is computed using the following equation:

checksum = 255 - ((Topic ID Low Byte + Topic ID High Byte + sum of data byte values) mod 256)
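A small sketch of that checksum calculation; the byte layout follows the description above, and the sample topic ID and payload are arbitrary.

    def rosserial_checksum(topic_id, data):
        # Checksum over the two topic ID bytes and the payload,
        # per the formula above.
        total = (topic_id & 0xFF) + ((topic_id >> 8) & 0xFF) + sum(data)
        return 255 - (total % 256)

    # Example: topic ID 125 with a three-byte payload.
    print(rosserial_checksum(125, bytes([0x01, 0x02, 0x03])))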
Communication between the serial device and the PC starts from the PC side, which sends a query packet requesting the number, names, and types of topics from the Arduino/serial device. When the Arduino receives this query packet, it replies with a series of response packets. Each response packet consists of the following entries:
• uint16 topic_id
• string topic_name
• string message_type
• string md5sum
• int32 buffer_size
4.2 RVIZ (Robot Visualization)
RViz is an acronym for ROS visualization; it is an effective 3D visualization tool for ROS. It permits the user to view the simulated robot model, log sensor data from the robot's sensors, and replay the logged sensor data. By visualizing what the robot is seeing, thinking, and doing, the user can debug a robot application from sensor inputs to planned (or unplanned) actions.

RViz exhibits 3D sensor data from stereo cameras, lasers, Kinects, and other devices in the form of point clouds, while 2D sensor data from webcams and RGB cameras can be viewed as image data. It enables you to see the robot's perception of its world: RViz uses sensor data to create an accurate depiction of what is going on in the robot's environment. Figure 4.2 shows a map created in RViz.

The RViz tool is the official 3D visualization tool of ROS, and almost all kinds of sensor data can be viewed through it. RViz is installed along with the full ROS desktop installation. If an actual robot is communicating with a workstation running RViz, RViz will display the robot's current configuration on the virtual robot model. ROS topics will be displayed as live representations based on the sensor data published by any cameras, infrared sensors, and laser scanners that are part of the robot's system. This is useful for development and debugging.
4.3 ARDUINO IDE
The main code, also known as a sketch, created on the IDE platform ultimately generates a hex file, which is then transferred and uploaded to the controller on the board. The IDE environment mainly contains two basic parts, an editor and a compiler: the former is used for writing the required code and the latter for compiling and uploading the code into the given Arduino module. The environment supports both the C and C++ programming languages.
4.4 VNC VIEWER
VNC Viewer is a graphical desktop sharing system that allows a user to remotely view and control the desktop of a remote computer (running VNC Server) from a local device such as a computer, tablet, or smartphone. Not only can the user see everything on the remote computer's screen, the program also relays keyboard and mouse (or touch) events to VNC Server, so that, once connected and granted permission by the remote computer, the user has full control. In this project, we used VNC Viewer to view and control the Raspberry Pi 4.
4.5 GAZEBO
Gazebo is a 3D dynamic simulator used for simulating complex robotic models in indoor and outdoor environments. Its objective is to simulate a robot, giving you a close substitute for how your robot would behave in a real-world physical environment. It provides the ability to test the robotic model in the simulated environment and incorporate the data from the sensors.

With Gazebo you can create a 3D scenario on your computer with robots, obstacles, and many other objects, as shown in Figure 4.3. Gazebo allows testing the performance of the robotic model in extreme conditions without damaging the hardware, and most of the time it is faster to run a simulation than to stage the whole scenario on a real robot. Gazebo brings a fresh approach to simulation, with a complete toolbox of development libraries and cloud services that make simulation easy. It uses a physics engine for illumination, inertia, gravity, etc. An eXtensible Markup Language (XML) file in the Unified Robot Description Format (URDF) is used to describe the different elements of the robot.
4.6 SOLIDWORKS
Chapter 5
HARDWARE DESCRIPTION
The main components used in the smart sanitization robot are the Raspberry Pi, motor drivers, motors, and USB cameras. The following sections describe these components.
5.1 Raspberry Pi 4
The Raspberry Pi provides access to on-chip hardware, i.e., GPIOs, for developing applications. Through the GPIOs, we can connect and control devices such as LEDs, motors, and sensors. It has an ARM-based Broadcom SoC with an on-chip graphics processing unit (GPU). The CPU speed across the Raspberry Pi family ranges from 700 MHz to 1.2 GHz, with onboard SDRAM from 256 MB to 1 GB (more in recent models). The Raspberry Pi also provides on-chip SPI, I2C, I2S, and UART modules.

The operating system for all Raspberry Pi products is Linux, an open-source operating system that interfaces between the computer's hardware and software programs. The language most commonly used with the Raspberry Pi is Python, a general-purpose, high-level programming language used to develop graphical user interface (GUI) applications, websites, and web applications. One of the most impressive aspects of the Raspberry Pi is the amount and variety of support available to its users.
The Raspberry Pi 4 Model B is the latest version of the low-cost Raspberry Pi computer; it is shown in Figure 5.1. It has a 64-bit quad-core Cortex-A72 (ARM v8) processor clocked at 1.5 GHz (later models: 1.8 GHz). The board's Broadcom BCM2711 SoC includes a VideoCore VI GPU able to drive two 4Kp30 displays, or to handle H.265 decoding at 4Kp60, through its two micro-HDMI ports.

The board comes in 2 GB, 4 GB, and 8 GB LPDDR4 RAM options and offers dual-band 2.4/5.0 GHz Wi-Fi, Bluetooth 5.0, a Gigabit Ethernet port, two USB 3.0 ports, two USB 2.0 ports, and a USB Type-C power input. It supports Power over Ethernet via a separate PoE HAT (add-on) and keeps the standard 40-pin GPIO header (with backward compatibility). The new Raspberry Pi is faster, more powerful, and more versatile for a huge variety of projects involving robotics, automation, image processing, AI, and more.

The Raspberry Pi 4 can be used for a wide range of applications: a retro arcade machine, a web server, or the brain of a robot, security system, IoT device, or dedicated Android device.
5.2 Arduino Mega 2560 Rev3
The Arduino Mega 2560 Rev3 is a microcontroller board based on the ATmega2560 microcontroller. Most Arduino boards are a good pick for projects whose programs need little memory; when a project grows complex and requires more memory and a rich set of I/O interfaces, the Arduino Mega 2560 Rev3 comes into play. This board is an updated revision of the Arduino Mega 2560.

The Arduino Mega 2560 is designed for projects that require more I/O lines, more sketch memory, and more RAM. With 54 digital I/O pins, 16 analog inputs, and a larger space for your sketch, it is the recommended board for 3D printers and robotics projects, giving your projects plenty of room while maintaining the simplicity and effectiveness of the Arduino platform.

In our project, we used the Arduino Mega for the pumping system, the robot's movement, the detection of the sanitizer liquid level using an ultrasonic sensor, and the indication of the liquid level on an LCD module.
5.3 12V 5200mAh Lithium-ion Battery
This Li-ion rechargeable battery pack is a 3s2p pack with a fully charged voltage of 12.6 volts. It contains six cells connected in series and parallel, giving it a capacity of 5200 mAh. A built-in BMS protects the battery from overcharging, over-discharging, and short circuits.
5.4 L293D Motor Driver
The L293D is a motor driver IC. Other ICs may offer the same logic functions as the L293D, but they cannot supply as high a voltage to the motor. The L293D provides continuous bidirectional direct current to the motor, and the polarity of the current can be reversed at any time without affecting the IC or any other device in the circuit. The L293D contains two internal H-bridges, one for each of two motors.

An H-bridge is an electrical circuit that drives a load in either direction. The L293D's bridges are controlled by external low-voltage signals. It may be small in size, but its power output capacity is higher than one might expect: it can control the speed and direction of any DC motor operating in the 4.5-36 V range. Its diodes also protect the controlling device and the IC from back EMF, and an internal current sink limits the output to a maximum of 600 mA per channel.

When the power supply is less than or equal to 12 V, the internal circuitry is powered by the onboard voltage regulator, and the 5 V pin can be used as an output to power the microcontroller. The jumper should not be placed when the power supply is greater than 12 V; in that case a separate 5 V supply should be given through the 5 V terminal to power the internal circuitry. ENA and ENB are the speed-control pins for Motor A and Motor B, while IN1/IN2 and IN3/IN4 are the direction-control pins for Motor A and Motor B, respectively.
5.5 DC Motor
A DC motor is any of a class of rotary electrical motors that convert direct-current (DC) electrical energy into mechanical energy. The most common types rely on the forces produced by magnetic fields induced by the current flowing in the coil. Nearly all types of DC motor have some internal mechanism, either electromechanical or electronic, to periodically change the direction of current in part of the motor.

DC motors were the first form of motor in wide use, as they could be powered by existing direct-current lighting power distribution systems. A DC geared motor's speed can be controlled over a wide range, using either a variable supply voltage or a change in the strength of the current in its field windings. Small DC motors are used in tools, toys, and appliances; the universal motor, a lightweight brushed motor used in portable power tools and appliances, can operate on either direct or alternating current. Larger DC motors are currently used in the propulsion of electric vehicles, elevators, and hoists, and in drives for steel rolling mills.
5.6 SLAMTEC RPLiDAR A1-M8
The RPLIDAR A1 is a LiDAR sensor that works on the principle of distance calculation from reflected infrared laser light. It can perform a 360-degree scan within a 12-meter range. The 2D point cloud data it produces can be used in mapping, localization, and object/environment modeling. The scanning frequency of the RPLIDAR A1 reaches 5.5 Hz when sampling 1450 points per round, and it can be configured up to a maximum of 10 Hz.
5.7 Mini Aquarium Water Pump 6v-12v
The R385 6-12 V DC diaphragm-based mini aquarium water pump is an ideal non-submersible pump for a variety of liquid-movement applications. It has enough pressure to be used with a nozzle to make a spray system. The pump can handle heated liquids up to a temperature of 80°C and, when suitably powered, can draw water through the tube from up to 2 m away and pump water vertically up to 3 m.

Possible uses include a small aquarium pump, an automatic plant-watering system, a water feature, or a music-activated dancing water feature, to name but a few. When pumping a liquid, the pump runs very quietly; it can also pump air, though it is considerably noisier doing so.

The R385 requires 6-12 V DC at 0.5-0.7 A and delivers its maximum operating values when powered at the upper end of these ranges.
Chapter 6
PROPOSED 3D MODEL
A 3D model has been created to visualize the working of the system. The models shown in the figures of this chapter were developed in SOLIDWORKS. They depict the basic design and the space allotted for each component. The top and side views of the system give an idea of the positions of the main boards, camera, sanitizer dispenser, and so on.
Figure 6.2: Position of sanitizer dispenser
Chapter 7
SIMULATION EXPERIMENT
RESULTS
The experiments were first performed in the Gazebo simulation environment. The autonomous robot was initially placed at a random location inside the boundaries. The robot explores the environment while memorizing its position and generating a map of the environment, and the method can compute optimal paths while avoiding obstacles. Figure 7.1 shows the simulation environment in Gazebo containing the robot model. The obstacles inside the room are identified by the robot and represented as point clouds in RViz, as shown in Figure 7.2.
Figure 7.2: Simulation Results in RViz
Chapter 8
FINAL MODEL
Chapter 9
REAL ENVIRONMENT
IMPLEMENTATION
hector_mapping is a node for LiDAR-based SLAM that requires no odometry and only low computational resources. The ROS interface of Hector SLAM, as used in this implementation, is given below.
9.1.2 Published Topics
• map (nav_msgs/OccupancyGrid): Get the map data from this topic, which is latched and updated periodically.

• poseupdate (geometry_msgs/PoseWithCovarianceStamped): The estimated robot pose, with a Gaussian estimate of the uncertainty.
9.1.3 Services
• reset_map (std_srvs/Trigger): Call this service to reset the map; Hector will start a whole new map from scratch. This does not reset the robot's pose, which resumes from the last estimate.

• restart_mapping_with_new_pose (hector_mapping/ResetMapping): Call this service to reset the map and the robot's pose, and to resume mapping (if paused).
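A minimal rospy sketch of calling the first of these services; the service name assumes the node's default namespace.

    #!/usr/bin/env python
    import rospy
    from std_srvs.srv import Trigger

    rospy.init_node("reset_hector_map")
    rospy.wait_for_service("reset_map")

    # std_srvs/Trigger takes no arguments; the reply carries a success
    # flag and a human-readable message.
    reset_map = rospy.ServiceProxy("reset_map", Trigger)
    response = reset_map()
    rospy.loginfo("map reset: %s (%s)", response.success, response.message)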
9.1.4 Parameters
• ~base_frame (string, default: base_link): The name of the base frame of the robot. This is the frame used for localization and for the transformation of laser scan data.

• ~map_resolution (double, default: 0.025): The map resolution [m]; the length of a grid cell edge.

• ~map_size (int, default: 1024): The size [number of cells per axis] of the map. The map is square and has (map_size x map_size) grid cells.

• ~map_start_x (double, default: 0.5): Location of the origin [0.0, 1.0] of the /map frame on the x-axis relative to the grid map; 0.5 is in the middle.

• ~map_start_y (double, default: 0.5): Location of the origin [0.0, 1.0] of the /map frame on the y-axis relative to the grid map; 0.5 is in the middle.
• ~map_update_distance_thresh (double, default: 0.4): The threshold for performing map updates [m]. The platform has to travel this far in meters, or experience an angular change as described by the map_update_angle_thresh parameter, since the last update before a map update happens.

• ~update_factor_free (double, default: 0.4): The map update modifier for updates of free cells in the range [0.0, 1.0]. A value of 0.5 means no change.

• ~laser_min_dist (double, default: 0.4): The minimum distance [m] for laser scan endpoints to be used by the system; scan endpoints closer than this value are ignored.

• ~laser_max_dist (double, default: 30.0): The maximum distance [m] for laser scan endpoints to be used by the system; scan endpoints farther away than this value are ignored.
• ~pub_map_odom_transform (bool, default: true): Determines whether the map->odom transform should be published by the system.

• ~output_timing (bool, default: false): Output timing information for the processing of every laser scan via ROS_INFO.

• ~scan_subscriber_queue_size (int, default: 5): The queue size of the scan subscriber. This should be set to a high value (for example 50) if logfiles are played back to hector_mapping at faster than real-time speeds.
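These parameters are normally set in a launch file; as a minimal alternative sketch, they can also be written to the parameter server before hector_mapping starts. The values below simply repeat the defaults listed above.

    #!/usr/bin/env python
    import rospy

    rospy.init_node("configure_hector", anonymous=True)

    # Private parameters of the hector_mapping node.
    rospy.set_param("/hector_mapping/base_frame", "base_link")
    rospy.set_param("/hector_mapping/map_resolution", 0.025)   # m per cell
    rospy.set_param("/hector_mapping/map_size", 1024)          # cells per axis
    rospy.set_param("/hector_mapping/laser_max_dist", 30.0)    # metres
    rospy.set_param("/hector_mapping/pub_map_odom_transform", True)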
9.2 Mapping Results
Figure 9.2 shows the map generated in RViz. The LiDAR sensor senses the surroundings and generates a map, which then determines the path for the robot to move along.
9.3 Localization Results
9.4 Navigation Results
Once the map is created, the path for the robot to move, maneuvering around the obstacles, is generated and displayed in RViz.
Figure 9.7: Generated Path is shown in green
Figure 9.7 shows the generated path for movement in green; the robot will move along this line.
Chapter 10
CONCLUSION
The global pandemic has made us realize how essential sanitization is in our daily lives. The automatic robot proposed here is well suited to unsupervised sanitization of a desired area. The camera calibration of an already available simulation environment has been successfully completed, and map points along with depth information have been obtained, the details of which are provided; the camera calibration of the real environment is in progress. SLAM (Simultaneous Localization and Mapping) is used for detecting obstacles and creating a map of what the robot sees, and RViz, the robot visualization tool, displays the map and the camera view received from the robot. With a sanitizer pump attached below the robot, it goes on to sanitize the area covered using a spray system. We have analyzed and successfully executed an autonomous navigation system using ROS. The real-time implementation is a low-cost autonomous mobile robot system: a basic mobile platform equipped with a USB web camera. We intend to build globally consistent maps for reliable, long-term localization in a wide range of environments, as demonstrated. Additionally, the sanitizer spray system ensures consistent sanitization of the surface covered by the robot.
References
[1] S. Pan, Z. Xie, and Y. Jiang, "Sweeping Robot Based on Laser SLAM," Procedia Computer Science, vol. 199, 2022, pp. 1205-1212.

[2] Syed Saad ul Hassan, "Lidar Sensor in Autonomous Vehicles," March 2022, pp. 1-6.

[3] Mustafa Eliwa, Ahmed Adham, Islam Sami, and Mahmoud Eldeeb, "A critical comparison between Fast and Hector SLAM algorithms," REST Journal on Emerging Trends in Modelling and Manufacturing, 2017, pp. 44-45.

[4] S. Nagla, "2D Hector SLAM of Indoor Mobile Robot using 2D Lidar," 2020 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), Chennai, India, 2020, pp. 1-4.

[5] Y. An, J. Shi, D. Gu, et al., "Visual-LiDAR SLAM Based on Unsupervised Multi-channel Deep Neural Networks," Cognitive Computation, vol. 14, pp. 1496-1508, 2022.

[8] I. Zubrycki and G. Granosik, "Introducing modern robotics with ROS and Arduino, including case studies," Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 8, 2014, doi: 10.14313/JAMRIS_1-2014/9.