Rapport PFE
To my Mother and Father, I would have never gone this far without you
To my Brother and Sisters, Thank you for your love and support
To my Aunt for her support through the toughest time in life
To my dearest Grandmother who stood by my side, wishing me the best of luck
To all my Family and Friends, I dedicate this humble work
And to those who believed in me when I couldn’t,
A special thank you, I am forever grateful. . .
Acknowledgments
First and foremost, I would like to thank God Almighty for giving me the strength,
knowledge, ability and opportunity to undertake this internship and to complete it. Without
his blessings, this achievement would not have been possible. I would like to express my
deepest gratitude and appreciation to all those who helped me accomplish this humble work.
Special gratitude goes to my academic supervisor, Mme. Fatma Louati, whose stimulating suggestions and guidance helped me accomplish my project, especially while writing this report. I want to particularly thank her for her patience and kindness. Likewise, I want to express my acknowledgement and appreciation to my professional supervisor, Mr. Jacer Omri, who introduced me to the professional world and taught me lessons that I will always be grateful for. Last but not least, I would like to thank all the members of Devagnos and Seemba: Slim, Sami, Slah, Imen, Achref, Hamdi, Mohammed, Djo, Imen and Nesrine.
Ghaith Hammadi
Abstract
DevOps is a conceptual framework for reintegrating the development and operations of Information Systems. We found that DevOps has not been adequately studied in the scientific literature: there is relatively little research available on DevOps, and the existing studies are often of low quality. We also found that DevOps is supported by a culture of collaboration, automation, measurement, information sharing and web service usage. DevOps benefits development and operations performance. It also has positive effects on web service development and quality assurance performance. Finally, our mapping study suggests that more research is needed to quantify these effects.
Keywords: DevOps, reintegration, Information Systems, quality, automation, collaboration, measurement, performance, quality assurance.
Résumé
DevOps est un cadre conceptuel pour la réintégration du développement et du fonc-
tionnement des systèmes d’information. Nous avons découvert que DevOps n’avait pas été
suffisamment étudié dans la littérature scientifique. Il existe relativement peu de recherches
sur DevOps et les études sont souvent de mauvaise qualité. Nous avons également constaté
que DevOps est supporté par une culture de collaboration, d’automatisation, de mesure, de
partage d'informations et d'utilisation de services Web. DevOps bénéficie à la performance
du développement et à la performance opérationnelle. Il a également des effets positifs sur le
développement de services Web et les performances d’assurance qualité. Enfin, notre étude
cartographique suggère que davantage de recherches sont nécessaires pour quantifier ces effets.
Mots clés: DevOps, réintégration, systèmes d’information, qualité, collaboration, auto-
matisation, mesure, partage d’informations, assurance qualité.
Table of Contents
General Introduction 1
1 General Context 3
1.1 Company Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 General Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Project Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Study Of The Existing System . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Critics Of The Existing System . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 The Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Development Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Agile Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 The Adopted Method : Iterative Development . . . . . . . . . . . . . 7
1.3.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.2.2 Iterations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2.3 The Goal of the Iterative Method . . . . . . . . . . . . . . . 8
2 Planning 9
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1 Requirement Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 Identifying Actors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Product Backlog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.3 Functional and Non-Functional Requirements . . . . . . . . . . . . . 12
2.1.3.1 Functional Requirement . . . . . . . . . . . . . . . . . . . . 12
2.1.3.2 Non-Functional Requirement . . . . . . . . . . . . . . . . . 13
2.2 Requirement Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 General Definitions and Tools . . . . . . . . . . . . . . . . . . . . . . 14
2.2.2 Global Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . . 15
3 Design 23
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1 Global Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Detailed Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.1 Iteration 1: Cloud and Containers ( infrastructure ) . . . . . . . . . . 25
3.2.1.1 Cloud Computing Overview . . . . . . . . . . . . . . . . . . 25
3.2.1.2 Amazon Web Services . . . . . . . . . . . . . . . . . . . . . 26
3.2.1.3 Container Overview . . . . . . . . . . . . . . . . . . . . . . 28
3.2.2 Iteration 2: Continuous Integration . . . . . . . . . . . . . . . . . . . 30
3.2.2.1 Version Control System – GitLab . . . . . . . . . . . . . . . 30
3.2.2.2 Automation Tool – Jenkins . . . . . . . . . . . . . . . . . . 31
3.2.2.3 Tests Overview . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.2.4 Testing the Application . . . . . . . . . . . . . . . . . . . . 34
3.2.3 Iteration 3: Continuous Deployment . . . . . . . . . . . . . . . . . . . 35
3.2.3.1 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.2.3.2 Deployment Servers . . . . . . . . . . . . . . . . . . . . . . 36
3.3 AWS virtual machines architecture . . . . . . . . . . . . . . . . . . . . . . . 37
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4 Achievements 42
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1 Work Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.2 Collaboration tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Iteration 1: Cloud and Containers (infrastructure) . . . . . . . . . . . . . . . 44
4.2.1 Setting up AWS Machines . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Configuring Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3 Iteration 2: Continuous Integration . . . . . . . . . . . . . . . . . . . . . . . 49
4.3.1 Configuring GitLab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3.2 Configuring Jenkins . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3 Configuring SonarQube . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.4 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4 Iteration 3: Continuous Deployment . . . . . . . . . . . . . . . . . . . . . . 56
General Conclusion 59
List of Figures
List of Tables
General Introduction
For a long time, the processes of development and operations were highly isolated. Developers wrote code on their own, testers ran the tests separately, and the operations managers were responsible for the deployment and integration of the application. Consequently, communication between the three teams was almost non-existent. These practices cost corporations a great deal in terms of money, product quality and productivity.
Working with traditional methodologies kept all the teams involved in the product-making process separated. A developer would have to go through a long stretch of code writing before getting feedback from the Quality Assurance team or from the production team, not to mention the rigid and excessive documentation that was prioritised over the actual work.
Agile methodologies were introduced as a solution to this problem. They focus on completing the project in small sections called iterations, which accelerates feedback and aligns product features with the clients' needs. Agile methodologies divide projects into work packages; those units are processed in work sessions called iterations, which are generally short, typically two to four weeks long.
DevOps was then introduced as an evolution of the Agile methodologies. It builds on their core principles: short development sessions (iterations), an accelerated feedback loop and more channels of communication between teams. The novelty DevOps brought is the automation of these practices: it encourages the automation of integration, testing and delivery as a way to accelerate the process and the lifecycle of a product. DevOps also emphasizes communication between teams and the importance of each individual's contribution. The DevOps approach provides many advantages such as speed, reliability, quick delivery, security and high levels of collaboration.
Another key aspect of software development promoted by DevOps is testing. Software testing is defined as the activity of checking whether the actual results match the expected requirements, thus ensuring that the software system is defect free. It also helps to identify errors, gaps or missing requirements. Tests are vital during the software lifecycle to guarantee product quality and security and to save money.
As an example of such corporations, Seemba has many running projects. One of them is an e-tournament system. E-tournaments have become an emerging field: the term refers to the use of technologies and communication channels to enhance the overall performance of gaming and of the tournament organization process. It sits at the intersection of gaming and informatics, inventing new ways to bring gaming tournaments to a larger population.
This is what my graduation project consists of: providing better tools to the gaming community while further embedding the DevOps culture within the company (Seemba). It also aims to improve the testing mechanism by automating tests and creating pipelines to oversee the software lifecycle.
This report gives a detailed description of all the tasks I accomplished during my internship in order to obtain my diploma from the Private High School of Engineering and Computer Science (école supérieure privée d'ingénierie et informatique – ESPRIT).
Throughout this report, we set out the chosen roadmap in detail. The first chapter provides the general context: we present the company and its activities, the development method chosen to achieve the desired final product, and the study and critique of the existing solutions. In the second chapter, we discuss the planning of the iterations and the requirements of our project. In the third chapter, we discuss the design of the AWS infrastructure and the plan for each iteration. In the fourth and final chapter, we present what we achieved and how, with a detailed explanation of how we configured each tool to work within our CI/CD pipeline. Finally, a general conclusion summarizes what was achieved during the internship and the possible perspectives to enhance the pipeline.
Chapter 1
General Context
Contents
1.1 Company Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 General Presentation . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Project Presentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Study Of The Existing System . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Critics Of The Existing System . . . . . . . . . . . . . . . . . . . 6
1.2.3 The Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Development Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.1 Agile Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.2 The Adopted Method : Iterative Development . . . . . . . . . . . 7
Introduction
This chapter gives a general presentation of the work environment. It first introduces the hosting company, then gives an overview of the tasks to be handled, and finally describes the adopted work method and its characteristics.
Seemba is a software solution that aims to help independent game developers monetize their games and widen their community of users. Seemba developed a plug-and-play component (SDK) that allows users to play in multiplayer mode and to challenge each other.
Installed into any mobile game with one line of code, the SDK makes the game multiplayer-enabled, letting players challenge each other for real money or for virtual currency that can be exchanged for in-game features and prizes.
The solution also offers game developers a dashboard with statistics to monitor earnings and user analytics. The solution already has partnerships with content publishers and payment systems.
The business model is commission-based, with Seemba charging 20% on each transaction made on the platform.
The co-founders are Slim Ben Nasrallah (CEO), Geoffrey Umer (CTO) and Jean Philippe Nitkowski (CPO).
1.1.2 Activities
Seemba is a young startup that focuses on the well-being of its employees. Its main activities are managing large internal projects: for example, the plug-and-play multiplayer SDK, a fully managed and monitored e-tournament system and, on top of that, the integration of its own SDK into different mobile games.
1
Continuous Integration and Continuous Delivery
Software engineering companies are migrating to this new culture at an ever-increasing pace. No matter what field we are professionally involved in, computer science has made it more efficient and more responsive to the final clients. Yet efficiency for the developers and product owners themselves is rarely discussed.
The Agile development process encourages stakeholder engagement through regular meetings. This active involvement allows the developers to fully understand the project's requirements and therefore improves the client's satisfaction.
– Transparency
The agile method focuses on involving the client through each and every step of the devel-
opment process. This involvement gives the client full visibility over the product from early
stages.
Since the main focus of the agile process is the satisfaction of the client, it allows changes of plans to be made; these changes may be introduced at any point of the development process.
Teams with purpose are always more productive: members challenge themselves to do more and to be more efficient. Therefore, the agile process focuses on giving team members a shared sense of ownership and shared goals.
1.3.2.1 Overview
The Agile iterative approach is best suited for projects or businesses with an ever-evolving scope, i.e. projects that do not have a fixed set of requirements for a fixed period of time. For such cases, the iterative approach helps to minimize the cost and resources needed each time an unforeseen change occurs.
1.3.2.2 Iterations
As shown in the next figure 1.4, each iteration is given a fixed length of time known as a timebox. A single timebox typically lasts two to four weeks, and it brings together analysis of the plan, design, code and, simultaneously, test. The ADCT2 wheel is more technically referred to as the PDCA3 cycle.
We chose the iterative method to develop our application so that at the end of each iteration we have a small package to deliver. The package obtained at each iteration is then re-examined and enhanced, so that we obtain a better and bigger deployable product until we reach the finish line.
Conclusion
Throughout this chapter, we presented the host company Seemba and its sector of activity. We then gave an overview of the project and identified the methodology chosen for this work. Up next, we will dive into the planning phase, studying the requirements and planning the iterations.
2
Analysis, Design, Code, Test
3
Plan, Do, Check, Act
Chapter 2
Planning
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1 Requirement Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 Identifying Actors . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Product Backlog . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.3 Functional and Non-Functional Requirements . . . . . . . . . . . 12
2.2 Requirement Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 General Definitions and Tools . . . . . . . . . . . . . . . . . . . . 14
2.2.2 Global Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . 15
2.2.3 Detailed Use Case Diagram . . . . . . . . . . . . . . . . . . . . . 16
2.3 Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Introduction
Understanding and detailing the client's needs is a crucial task in our work: misunderstanding them may lead to developing an application that does not satisfy the customer's needs. In this chapter, we focus on specifying the requirements of our project, identifying our actors and planning the upcoming steps.
– The Client
– The Developer
– The Tester
– Jenkins
– GitLab
As a < type of user >, I want < some goal > so that < some reason >
This approach helps us to distinguish three key points: who the actors are, what they can do, and what added value is obtained after the specified action.
Since this project is an internal project held within Seemba, the product backlog was written by us, the company's employees. After a few meetings, the final product backlog 2.1 below was produced:
– Benefit from the services: Clients and testers must be able to access the final product by consuming a URL.
– Code Push: Developers and testers must be able to expose their code on a Git server.
– Code Pull: Developers and testers must be able to import already shared code on the
Git server.
– Build Automation: The system must enable automated builds after each Git commit.
– Review Logs: The system must return logs for developers and testers to detect bugs
and failures.
– Deploy Builds: The system must ensure the automated deployment of the applications on the servers.
The non-functional requirements aim to enhance the quality of the final product. They specify how the system should behave and act as constraints upon the system's behaviour. They specify criteria that provide insight into the health of the product-making process. Among these requirements, we list:
– Reliability: The system should be reliable and shall avoid downtime.
– Extensibility: The system should be open to add-ons and shall be capable of supporting new features and extensions.
– Machine monitoring: The system must enable the administrator to monitor the CPU
usage, the RAM pressure and system’s loads.
– Platform monitoring: The administrators should be able to monitor all platform features (Jenkins jobs, containers' state, etc.).
– Test run automation: The system should enable the testers to execute their tests on the application either manually or automatically.
– Ergonomics: The system should be clear and easy on the eye of the user.
– Usability: The system should be easy to use and its features easy to understand. It shall not contain complicated functions or perplexing elements.
– Scalability: The system should maintain its high performance under pressure and adjust
its settings depending on the demand.
A use case diagram is a dynamic or behavioural diagram in UML. Use case diagrams model the functionality of a system using actors and use cases. Use cases are the set of actions, services and functions that the system has to provide.
Draw.io
1
https://www.draw.io/
To further explain the Use Case Diagram, we represent the textual description of the
main functionalities mentioned above:
The code-push use case 2.2 describes how, once a developer finishes writing code, they can publish it on the Git server so that the rest of the team members can review and/or modify it.
The code-pull use case 2.3 describes the steps developers take to acquire the code shared by their peers. The developers must be authenticated to the Git server to be able to download the latest version of the code onto their workstations.
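As an illustration, a typical push/pull exchange with the Git server could look like the following minimal sketch (the repository URL, project path and branch name are placeholders, not the actual ones):

    # clone the shared repository once (hypothetical URL)
    git clone git@gitlab.example.com:seemba/demo-app.git
    cd demo-app

    # code push: share local work with the rest of the team
    git add src/
    git commit -m "Add tournament ranking endpoint"
    git push origin master

    # code pull: fetch the latest version shared by peers
    git pull origin master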
After each code push to the Git server, or at a time configured by the developers, Jenkins starts a job to build 2.4 the application so that it is ready to be deployed.
Alternative Scenario: Jenkins does not detect a change; repeat from step 2 of the best-case scenario.
Error Scenario: the connection between Jenkins and the Git server cannot be established and the servers cannot communicate with each other.
The following figure 2.3 shows the test use case diagram.
To further explain the use case diagram, we present next the textual description of the main functionalities of the system:
Testers write test scripts in a package within the application project. They can push the scripts to the Git server 2.5; the tests are then ready to be run by Jenkins later.
Alternative Scenario: the Git server does not function.
Error Scenario: the connection between the tester's PC and the Git server cannot be established, or the test script cannot be pushed to the server.
Jenkins runs the test scripts it receives after each push from the testers to the Git server. The test scripts contain all the information Jenkins needs to run the tests 2.6 without human intervention.
Alternative Scenario: Jenkins does not detect a change; repeat from step 2 of the best-case scenario.
Error Scenario: the connection between Jenkins and the Git server cannot be established and the servers cannot communicate with each other.
2.3 Planning
In order to have a clear vision of the work plan, we must point out that we mainly belong to the testers' team, not the developers' team.
Although we participated in all the meetings (as we have adopted the DevOps culture), writing the application code was not part of our task list. As members of the whole team, we took part in the user story writing process. We also developed a demo application, with its front end and back end, to demonstrate locally an example of the continuous integration and delivery process.
We studied the technologies used and designed their setup in order to adjust them to fit our needs.
As specified, our task was to host the application on the cloud and to create a pipeline for test automation, continuous integration and deployment. Therefore, our project can be broken down into three major sections 2.7 called iterations:
Conclusion
Throughout this chapter, we defined the functional and non-functional requirements of our application, identified the actors and gave a general plan for the steps to come. The next chapter is devoted to the crucial phase of designing the system.
Chapter 3
Design
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1 Global Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Detailed Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.1 Iteration 1: Cloud and Containers ( infrastructure ) . . . . . . . . 25
3.2.2 Iteration 2: Continuous Integration . . . . . . . . . . . . . . . . . 30
3.2.3 Iteration 3: Continuous Deployment . . . . . . . . . . . . . . . . . 35
3.3 AWS virtual machines architecture . . . . . . . . . . . . . . . . . . . . . . 37
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Introduction
This chapter is dedicated to the design phase. First, we provide a global overview of the system architecture. Second, we detail this architecture to grasp its components and understand the core concepts of our work. The system touches the three main phases of a product lifecycle: development, tests and operations.
To fulfil this purpose, we built a system focused on three main aspects: cloud and containers, development and tests, and the deployment environment. For the first part, we chose Amazon Web Services (AWS)1 as our cloud provider and used Docker containers to optimize the delivery process. The development and test process is composed of a version control system – GitLab, an integration system – Jenkins, and code quality and test tools – SonarQube and Selenium. The third and final phase, deployment, is composed of a build tool – Maven, deployment servers – Node.js and Tomcat, and a PostgreSQL database server to hold the data.
1
https://aws.amazon.com/
As shown in the next figure 3.2, cloud computing consists of delivering on-demand services. These services may vary from storage to processing power to machine learning, and they are usually delivered over the internet. Providers charge on a pay-as-you-go basis. Cloud computing offers companies the possibility to rent access to the provider's services rather than owning their own data centers. Therefore, it affects both the project's cost and the complexity of its maintenance.
Amazon Web Services, also known as AWS, is one of the major providers of cloud computing services. It provides on-demand cloud computing platforms for a variety of users such as companies, individuals and governments. AWS offers a huge number of services in different fields such as artificial intelligence, computing, storage, etc.
Amazon Web Services was our provider of choice because of the range of services it offers,
the resources it provides (CPU, Memory, etc) and the affordable pricing for each service.
From this vast range, we have chosen the following services:
The previous figure 3.3 illustrates Elastic Compute Cloud (EC2), one of the major and most used services of AWS. It allows users to rent virtual machines to run their applications. For this project we allocated three machines with different characteristics depending on the demands of the application. These machines respectively host the Node.js server, the Tomcat server, and Jenkins together with Maven and SonarQube.
The previous figure 3.4 illustrates the Relational Database Service (RDS). It aims to simplify the setup, use and maintenance of relational databases. We used this service to create a PostgreSQL database to hold all the relational data used by our application.
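For illustration, a PostgreSQL instance of this kind could be provisioned from the AWS CLI roughly as follows; this is a minimal sketch, and the identifier, instance class, storage size and credentials are assumptions rather than the values actually used:

    # create a small PostgreSQL instance on Amazon RDS (illustrative values)
    aws rds create-db-instance \
        --db-instance-identifier seemba-demo-db \
        --engine postgres \
        --db-instance-class db.t2.micro \
        --allocated-storage 20 \
        --master-username dbadmin \
        --master-user-password 'ChangeMe123!'

    # wait until the instance is available, then read its endpoint
    aws rds wait db-instance-available --db-instance-identifier seemba-demo-db
    aws rds describe-db-instances --db-instance-identifier seemba-demo-db \
        --query 'DBInstances[0].Endpoint.Address'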
The previous figure 3.5 illustrates Elastic Load Balancing (ELB). It automatically distributes the incoming load to different instances to ensure the best uptime of the application. We used the ELB to redirect traffic to different servers located in different availability zones, in order to ensure the high availability of our application.
The previous figure 3.6 illustrates Auto Scaling, an important feature of cloud computing. Auto Scaling adjusts the resources used according to the incoming traffic. We used an Auto Scaling group to adjust the number of machines and resources allocated according to the traffic coming to the website. This solution saves money, as we use only what we require, and ensures the performance of the application under any load.
For this project, we chose Docker as our container manager. Docker provides a set of coupled SaaS2 and PaaS3 products and uses OS-level virtualization to package and deploy applications. All containers share the same operating system kernel, which makes them more lightweight than virtual machines. We used Docker to host Jenkins and its slaves – Maven and SonarQube – on the same EC2 machine. We did so to ensure the connection between the elements while preserving the performance of the machine.
Git is a Version Control System4 3.9. It works as a source code management solution to ensure collaboration between team members. GitLab adds even more services on top of code management: it evolved from offering visibility to providing issue tracking, CI/CD pipeline features and the whole DevOps lifecycle.
2
Software as a service
3
Platform as a service
4
Version control systems are a category of software tools that help a software team manage changes to
source code over time. Version control software keeps track of every modification to the code in a special kind
of database.
GitLab 3.10 is our tool of choice. Its role is to enable both the developers and the testers to push and pull their work. It offers visibility among team members and it accelerates the build of our application, as it is connected to Jenkins. The purpose of GitLab is to manage the different versions of the application code while the developers are working on it.
Jenkins 3.11 is an automation tool used to create jobs that automate all sorts of tasks related to builds, tests and deployment. It can be installed natively on the OS or as a Docker container.
We used Jenkins to automatically build the code developers push to GitLab and to return logs for them to review. It also allows testers to run their tests automatically, on the go, as soon as they push them to GitLab.
To test our application we have chosen two major development processes: Behaviour-Driven Development (BDD) and Test-Driven Development (TDD).
Test-Driven Development (TDD) 3.12 is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved just enough to pass the new tests.
– Unit testing: a unit is the smallest testable portion of a system or application. This kind of test focuses on testing each module separately.
– Integration testing: integration means combining. It tests the workflow between different modules. In this testing phase, different software modules are combined and tested as a group to make sure that the integrated system is ready for system testing.
– Backend testing: also known as database testing. It tests every place where data is stored. Database testing may include testing of table structures, schemas, stored procedures and data structures.
– Integration testing: testing to verify the functionalities after integrating all modules. This type of testing is especially relevant to client/server and distributed systems.
– Performance testing: also known as stress testing or load testing. It tests the performance of the product under pressure and checks whether the system meets its requirements.
Syntax of tests
Tests conducted using Cucumber5 are written in a language called Gherkin6 3.14, which is the language Cucumber uses to define test cases. It is designed to be non-technical and human readable, and it collectively describes the use cases relating to a software system. A small illustrative scenario is sketched below.
5
Cucumber is a software tool used by computer programmers that supports behavior-driven development.
6
Gherkin uses a set of special keywords to give structure and meaning to executable specifications. Each
keyword is translated to many spoken languages; in this reference we’ll use English.
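As an illustration only (the feature, file path and steps are hypothetical and not taken from the actual Seemba test suite), a Gherkin scenario can be written and versioned like this:

    # write a hypothetical feature file that Cucumber will pick up later
    cat > src/test/resources/features/login.feature <<'EOF'
    Feature: User login
      Scenario: Successful login with valid credentials
        Given the user is on the login page
        When the user enters a valid email and password
        And the user clicks the sign-in button
        Then the dashboard page is displayed
    EOF

    # commit it so Jenkins runs it on the next push
    git add src/test/resources/features/login.feature
    git commit -m "Add login feature scenario"
    git push origin master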
3.2.3.1 Database
Different data types require different databases. For this project, we worked with an SQL database, which is used to store relational, well-formatted data.
PostgreSQL
To build our application we adopted two of the latest technologies. For the frontend, we used Angular, as it offers a variety of features while keeping the development process simple. For the backend, we chose Java Enterprise Edition (JEE) for its performance and its use of micro-services, which split functionalities into chunks and offer speed in return.
We used the following servers to host the different parts of the application (a deployment sketch follows the list):
– Tomcat server 3.16: we used it to host the backend code, as it implements the different JEE specifications and therefore benefits from the full potential of JEE. Tomcat is also an open source product with a large community of developers around the world.
– NodeJS 3.17: we chose it to host the frontend, which is built with Angular. It is a natural fit for Angular, as both are based on JavaScript. NodeJS also offers many features that are interesting for our project.
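A minimal deployment sketch, assuming a Maven-built WAR for the backend and an Angular CLI build served by a small Node HTTP server for the frontend (paths, artifact names and ports are assumptions):

    # backend: build the WAR and drop it into Tomcat's webapps folder
    mvn clean package
    cp target/demo-backend.war "$CATALINA_HOME/webapps/"
    "$CATALINA_HOME/bin/shutdown.sh" || true    # restart Tomcat to pick up the new WAR
    "$CATALINA_HOME/bin/startup.sh"

    # frontend: build the Angular bundle and serve it with a Node-based HTTP server
    npm install
    ng build --prod
    npx http-server dist/demo-frontend -p 4200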
Design
RDS RDS
Master Slave
Cross-AZ
Replication
Scaling
M4 M4 M4 M4
Auto
Availability Zone
Availability Zone
Elastic Load
Balancing
Scaling
M3 M3 M3 M3
S3
Auto
EC2
AZ AZ
A B
Resources
Static
Elastic Load
Balancing
CloudFront
CDN
Amazon
Route 53
AWS offers us a diverse set of tools so that we can obtain the best infrastructure.
AWS S3 3.19 is an object storage service that offers industry-leading scalability, data availability, security and performance.
We implement S3 in our infrastructure to host the static data, such as the documentation and the audio and visual assets needed by our application.
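For instance, the static assets could be published to a bucket with the AWS CLI as follows (the bucket name and local path are placeholders):

    # create the bucket once, then synchronize the local static assets to it
    aws s3 mb s3://seemba-static-assets
    aws s3 sync ./static s3://seemba-static-assets/static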
Amazon Relational Database Service (Amazon RDS) 3.20 makes it easy to set up, operate,
and scale a relational database in the cloud.
It provides cost-efficient and resizable capacity while automating time-consuming admin-
istration tasks such as hardware provisioning, database setup, patching and backups.
It frees us to focus on our applications so we can get the fast performance, high availability,
security and compatibility we need.
We have decided to use ELB 3.21 in our infrastructure because it automatically distributes
incoming traffic across multiple targets – Amazon EC2 instances, containers, IP addresses –
in multiple Availability Zones and ensures only healthy targets receive traffic. ELB is capable
of handling rapid changes in network traffic patterns.
Additionally, deep integration with Auto Scaling ensures sufficient application capacity
to meet varying levels of application load without requiring manual intervention.
With enhanced container support for Elastic Load Balancing, we can now load balance
across multiple ports on the same Amazon EC2 instance.
We can use this feature to better manage and decrease failures in our containers.
CloudFront 3.23 is a fast content delivery network (CDN) service that securely delivers data, videos, applications and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.
We implemented this service to better handle access to the data stored in S3 buckets by our application.
Amazon Route 53 3.24 is a highly available and scalable cloud Domain Name System
(DNS) web service.
Since all of our infrastructure is based on AWS services, it makes sense to use their DNS service as well.
It is built using AWS's highly available and reliable infrastructure, and it can route traffic based on multiple criteria, such as endpoint health, geographic location and latency.
We need this within our infrastructure because we aim to make our application available worldwide.
We need to elaborate a flexible deployment plan that helps us control the scale of our application. AWS availability zones help us avoid failures: we host EC2 instances in different AZs because failures can occur that affect the availability of all the instances located in the same place. If we hosted all our instances in a single location affected by such a failure, none of our instances would be available.
The main constraint we need to address is the budget. The Auto Scaling service monitors our applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. With its help, it is easy to set up application scaling for multiple resources across multiple services in minutes.
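As a rough illustration of that idea, an Auto Scaling group spanning two availability zones could be created as follows (the group name, launch configuration and subnet IDs are placeholders):

    # create an Auto Scaling group spread over two subnets (one per availability zone)
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name seemba-web-asg \
        --launch-configuration-name seemba-web-lc \
        --min-size 2 --max-size 4 --desired-capacity 2 \
        --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

    # scale on average CPU utilization with a target tracking policy
    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name seemba-web-asg \
        --policy-name cpu-target-tracking \
        --policy-type TargetTrackingScaling \
        --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'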
Conclusion
Throughout this chapter, we presented the global architecture of the system. We then elaborated further on the concepts and tools adopted during each part of the DevOps approach.
Chapter 4
Achievements
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1 Work Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.1 Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.2 Collaboration tools . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 Iteration 1: Cloud and Containers (infrastructure) . . . . . . . . . . . . . 44
4.2.1 Setting up AWS Machines . . . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Configuring Docker . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3 Iteration 2: Continuous Integration . . . . . . . . . . . . . . . . . . . . . 49
4.3.1 Configuring GitLab . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3.2 Configuring Jenkins . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3 Configuring SonarQube . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.4 Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.4 Iteration 3: Continuous Deployment . . . . . . . . . . . . . . . . . . . . . 56
4.4.1 Application Servers . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.2 Database Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Introduction
This final chapter presents the achieved work. We will go through the details of setting up the development and test tools, and finally the deployment servers and the database.
4.1.1 Hardware
In order to accomplish this project, we have used a Lenovo Laptop with the following
characteristics:
The following figure 4.2 details the choice of the machine's characteristics. We define the resources used, such as the CPU and the memory.
The following figure 4.3 details the configuration of a virtual machine. In this step, we choose the availability zone, the subnets and the supervision options.
The following figure 4.4 presents the configuration of the security group. The security group isolates the machine from unwanted external access.
The following figure 4.5 shows the EC2 dashboard. This dashboard gives an overview of each deployed machine and its state. It also provides all the information about each machine, such as its IP address. An equivalent command-line provisioning is sketched below.
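For reference, provisioning one of these machines from the command line might look like the following sketch; the AMI ID, instance type, key pair and security group names are placeholders, not the values actually chosen in the console:

    # create a security group and open SSH and HTTP
    aws ec2 create-security-group --group-name seemba-ci-sg \
        --description "Security group for the CI machine"
    aws ec2 authorize-security-group-ingress --group-name seemba-ci-sg \
        --protocol tcp --port 22 --cidr 0.0.0.0/0
    aws ec2 authorize-security-group-ingress --group-name seemba-ci-sg \
        --protocol tcp --port 80 --cidr 0.0.0.0/0

    # launch one instance for the Jenkins/SonarQube host
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.medium \
        --key-name seemba-keypair \
        --security-groups seemba-ci-sg \
        --count 1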
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services; then, with a single command, you create and start all the services from your configuration.
All of our Docker containers are attached to the same network to ease their interconnection.
In the Compose file we chose a Jenkins Docker image to create a master container with two slaves, because a single Jenkins container cannot handle the entire load of building and deploying a large and heavy project. The master is responsible for pulling the code from GitLab and, using the TCP/IP protocol, it assigns the workload to each of its slaves.
On request from the Jenkins master, the slaves carry out the builds and tests and produce the test reports.
We chose a SonarQube image to process the application code and return a quality assurance report, connected to a PostgreSQL image that stores the data generated by SonarQube.
To ensure the smooth functioning of all the Docker containers, we needed to set up the environment variables and the required volumes, as sketched below.
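The following is a simplified sketch of such a Compose file, not the exact one used in the project; the image tags, credentials, volume names and JDBC variable names depend on the image versions and are assumptions:

    # write a minimal docker-compose.yml and start the whole stack
    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      jenkins:
        image: jenkins/jenkins:lts
        ports: ["8080:8080", "50000:50000"]   # web UI + inbound agent (slave) connections
        volumes: ["jenkins_home:/var/jenkins_home"]
        networks: [ci]
      # in the real setup, the two Jenkins agent (slave) containers join the same 'ci' network
      sonarqube:
        image: sonarqube:7.9-community
        ports: ["9000:9000"]
        environment:
          - SONARQUBE_JDBC_URL=jdbc:postgresql://sonardb:5432/sonar
          - SONARQUBE_JDBC_USERNAME=sonar
          - SONARQUBE_JDBC_PASSWORD=sonar
        networks: [ci]
      sonardb:
        image: postgres:11
        environment:
          - POSTGRES_USER=sonar
          - POSTGRES_PASSWORD=sonar
        volumes: ["sonardb_data:/var/lib/postgresql/data"]
        networks: [ci]
    volumes: { jenkins_home: {}, sonardb_data: {} }
    networks: { ci: {} }
    EOF

    docker-compose up -d    # create and start all the services with one command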
– creating an account
– creating the SSH keys between the member workstation and the main server.
The following figure 4.8 shows the creation of a GitLab repository. This repository hosts the application code and is shared between team members.
The following figure 4.9 presents the addition of team members and their privileges. Each member is granted access to branches and repositories depending on their role.
The following figure 4.10 presents the creation of an SSH key used to connect to GitLab. To connect to the Git server, each workstation has to create an SSH key that is then registered on the Git server.
The following figure 4.11 shows the addition of the SSH key to the GitLab account. This step allows the server to recognise the workstation and grant access to the repositories. The commands involved are sketched below.
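Concretely, the key generation and the first push to the repository look roughly like this (the e-mail address, server host name and project path are placeholders):

    # generate an SSH key pair on the workstation
    ssh-keygen -t rsa -b 4096 -C "developer@example.com"

    # print the public key, then paste it into the GitLab profile (Settings > SSH Keys)
    cat ~/.ssh/id_rsa.pub

    # verify the connection, then push the project for the first time
    ssh -T git@gitlab.example.com
    git remote add origin git@gitlab.example.com:seemba/demo-app.git
    git push -u origin master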
The following figure 4.13 presents the connection between GitLab and Jenkins. After this configuration, Jenkins can detect when there is a change in the content of the Git repository and start a job.
The following figure 4.14 presents the configuration of the build triggers. Jenkins has to be configured to send back the resulting artifacts and logs.
The following figure 4.15 presents the Jenkins dashboard. The dashboard gives an overview of the jobs that succeeded and provides the logs of the resulting builds. A sketch of the shell build step such a job might execute is given below.
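As an illustration (not the exact job definition used in the project; the SonarQube URL and the token variable are placeholders), the shell build step executed by the triggered job can boil down to:

    # executed by Jenkins in the job's workspace once the GitLab trigger fires
    mvn clean package                    # compile, run the unit tests and build the WAR

    # send the sources to SonarQube for the quality analysis
    mvn sonar:sonar \
        -Dsonar.host.url=http://sonarqube:9000 \
        -Dsonar.login="$SONAR_TOKEN"

    # keep the resulting artifact so the deployment stage can pick it up
    mkdir -p "$WORKSPACE/artifacts"
    cp target/*.war "$WORKSPACE/artifacts/"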
4.3.4 Tests
Testing Tools
– Selenium: a framework for automated testing of web applications. The tests are carried out on a headless web browser.
– JUnit: a unit testing framework for Java. It allows testers to see the results immediately.
Figures 4.17 and 4.18 present a few examples of the tests that can be conducted on the application. The command line used to launch them is sketched below.
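A minimal sketch of how such a test run can be launched from the CI job, assuming a Maven project with the Selenium and JUnit tests wired into the test phase (the driver location and the test class name are assumptions):

    # make the headless browser driver visible to Selenium
    export PATH="$PATH:/opt/chromedriver"

    # run the whole suite, or a single class using Surefire's -Dtest filter
    mvn clean test
    mvn test -Dtest=LoginPageTest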
Conclusion
During this chapter, we applied the knowledge acquired during the analysis phase in order to set up a fully functioning infrastructure on AWS. We specified the cloud host machines, the development and test tools and the deployment environment.
General Conclusion
Webography
https://www.visual-paradigm.com/scrum/extreme-programming-vs-scrum/ — 07/11/2019
https://bubbleplan.net/blog/wp-content/uploads/2018/05/430.jpeg — 07/11/2019
https://www.kcsitglobal.com/images/cloud-computing.png — 07/11/2019
https://aws.amazon.com/ — 06/22/2019
https://www.docker.com/ — 06/22/2019
https://gitlab.com/ — 06/22/2019
https://jenkins.io/ — 06/20/2019
https://i1.wp.com/www.brightdevelopers.com/wp-content/uploads/2018/07/continuous-integration-workflow.png?ssl=1 — 06/20/2019
https://images.xenonstack.com/blog/test-driven-development-process-cycle.png — 06/20/2019
https://cucumber.io/ — 06/20/2019
https://cucumber.io/docs/gherkin/ — 06/20/2019
https://www.tutorialrepublic.com/snippets/designs/elegant-modal-login-form-with-avatar-icon.png — 06/21/2019
https://www.postgresql.org/ — 06/21/2019
http://tomcat.apache.org/ — 06/21/2019
https://nodejs.org/en/ — 06/21/2019