
An Internship Report on

AWS Academy Cloud Virtual Internship


Submitted in accordance with the requirement for the degree of

BACHELOR OF TECHNOLOGY

in

COMPUTER SCIENCE AND ENGINEERING


(DATA SCIENCE)

submitted by
P.Sudharshan Rao
(218A1A4457)
Under the Guidance of

Pathan Parveen
Asst. Professor
CSE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

RISE KRISHNA SAI PRAKASAM GROUP OF INSTITUTIONS


(INTEGRATED CAMPUS)
(Approved by AICTE, NEW DELHI, Affiliated to JNTUK, KAKINADA)
VALLURU(Post), ONGOLE, PRAKASAM District, A.P- 523272.

2023-2024
RISE KRISHNA SAI PRAKASAM GROUP OF INSTITUTIONS
Approved by AICTE, Permanently Affiliated to JNTUK & Accredited by NAAC with ‘A’ Grade

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the report entitled “AWS ACADEMY CLOUD FOUNDATIONS”, being submitted by P.Sudharshan Rao of III Year I Semester, bearing Reg. No. 218A1A4457, in partial fulfilment of the requirements for the award of the Degree of Bachelor of Technology in Computer Science and Engineering (Data Science), Rise Krishna Sai Prakasam Group of Institutions, is a record of bonafide work carried out by him.

Signature of Guide Signature of Senior Faculty Member

Signature of Head of the Department Signature of External Examiner


VISION AND MISSION OF THE INSTITUTION

Vision of the Institute: To be a premier institution in technical education by creating professionals of global standards with ethics and social responsibility for the development of the nation and mankind.

Mission of the Institute:
• Impart Outcome-Based Education through well-qualified and dedicated faculty.
• Provide state-of-the-art infrastructure and facilities for application-oriented research.
• Reinforce technical skills with life skills and entrepreneurship skills.
• Promote cutting-edge technologies to produce industry-ready professionals.
• Facilitate interaction with all stakeholders to foster ideas and innovation.
• Inculcate moral values, professional ethics and social responsibility.
VISION AND MISSION OF THE DEPARTMENT

Vision of the Department: To be a center of excellence in computer science and engineering for value-based education to serve humanity and contribute to socio-economic development.

Mission of the Department:
• Provide professional knowledge through a student-centric teaching-learning process to contribute to the software industry.
• Inculcate training on cutting-edge technologies for industry needs.
• Create an academic ambiance leading to research.
• Promote industry-institute interaction for real-time problem solving.


Program Outcomes (POs)

PO1. Engineering Knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO2. Problem Analysis: Identify, formulate, review research literature, and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.
PO3. Design/Development of Solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for public health and safety, and cultural, societal, and environmental considerations.
PO4. Conduct Investigations of Complex Problems: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of information, to provide valid conclusions.
PO5. Modern Tool Usage: Create, select, and apply appropriate techniques, resources, and modern engineering and IT tools, including prediction and modeling, to complex engineering activities with an understanding of the limitations.
PO6. The Engineer and Society: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice.
PO7. Environment and Sustainability: Understand the impact of professional engineering solutions in societal and environmental contexts, and demonstrate the knowledge of, and need for, sustainable development.
PO8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and norms of engineering practice.
PO9. Individual and Team Work: Function effectively as an individual, and as a member or leader in diverse teams, and in multidisciplinary settings.
PO10. Communication: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.
PO11. Project Management and Finance: Demonstrate knowledge and understanding of engineering and management principles and apply these to one’s own work, as a member and leader in a team, to manage projects in multidisciplinary environments.
PO12. Life-long Learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
Program Educational Objectives (PEOs):
PEO1: Develop software solutions for real-world problems by applying mathematics.
PEO2: Function as members of multi-disciplinary teams and communicate effectively using modern tools.
PEO3: Pursue a career in the software industry or higher studies with continuous learning and apply professional knowledge.
PEO4: Practice the profession with ethics, integrity, leadership and social responsibility.

Program Specific Outcomes (PSOs):

PSO1. Domain Knowledge: Apply the knowledge of programming languages, networks and databases for the design and development of software applications.
PSO2. Computing Paradigms: Understand the evolutionary changes in computing, possess knowledge of the context-aware applicability of paradigms, and meet the challenges of the future.
Student’s Declaration

I, P.Sudharshan Rao, a student of the BACHELOR OF TECHNOLOGY program, Reg. No. 218A1A4457, of the Department of COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE), RISE KRISHNA SAI PRAKASAM GROUP OF INSTITUTIONS, do hereby declare that I have completed the mandatory internship from MAY 2023 to AUGUST 2023 in AWS ACADEMY CLOUD FOUNDATIONS under the Faculty Guideship of Pathan Parveen, Department of COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE), RISE KRISHNA SAI PRAKASAM GROUP OF INSTITUTIONS.

(Signature and Date)


COMPLETION CERTIFICATE
ACKNOWLEDGEMENTS

It gives us immense pleasure to express a deep sense of gratitude to our supervisor Pathan Parveen (Asst. Professor, Department of Computer Science and Engineering (Data Science)) for his wholehearted and invaluable guidance throughout the report. Without his sustained and sincere effort, this report would not have taken this shape. He encouraged and helped us to overcome the various difficulties that we faced at various stages of the report. We would like to sincerely thank Dr. K. Narayana Rao, Professor & HOD, Computer Science & Engineering (Data Science), for providing all the necessary facilities that led to the successful completion of this report. We would also like to take this opportunity to thank our beloved Principal Dr. A. V. Bhaskar Rao, M.Tech, Ph.D, for providing great support and for giving us the opportunity of doing the internship report.

Finally, we would like to thank all of our friends and family members for their continuous help and encouragement.

P.Sudharshan Rao
218A1A4457
INDEX

ACKNOWLEDGEMENT
INTERNSHIP CERTIFICATE
INTERNSHIP REPORT

1. Course objectives and overview

2. AWS certification exam information

2.1 Modules

2.1.1 Cloud concepts

2.1.2 Cloud economics and billing

2.1.3 AWS Global Infrastructure

2.1.4 AWS Cloud Security

2.1.5 Networking and content delivery

2.1.6 Compute

2.1.7 Storage

2.1.8 Databases

2.1.9 Automatic scaling and monitoring


Course objectives and overview
To begin, it is important to have an understanding of the prerequisites for this course. First, you
should have general IT technical knowledge. The foundational computer literacy skills you will need to be
successful in this course include a knowledge of basic computer concepts, file management, and a good
understanding of the internet. Second, you should have general IT business knowledge. This includes insight
into how information technology is used by businesses and other organizations.

Additionally, to ensure success in this course, it is preferred that you have:


• A general familiarity with cloud computing concepts
• A working knowledge of distributed systems
• Familiarity with general networking concepts
• A working knowledge of multi-tier architectures
After completing this course, you should be able to:
• Define the AWS Cloud.
• Explain the AWS pricing philosophy.
• Identify the global infrastructure components of AWS
• Describe security and compliance measures of the AWS Cloud including AWS Identity and
Access Management (IAM).
• Create an AWS Virtual Private Cloud (Amazon VPC).
• Demonstrate when to use Amazon Elastic Compute Cloud (EC2), AWS Lambda and AWS
Elastic Beanstalk.
• Differentiate between Amazon S3, Amazon EBS, Amazon EFS and Amazon S3 Glacier.
• Demonstrate when to use AWS Database services including Amazon Relational Database
Service (RDS), Amazon DynamoDB, Amazon Redshift, and Amazon Aurora.
• Explain AWS Cloud architectural principles.
• Explore key concepts related to Elastic Load Balancing (ELB), Amazon CloudWatch, and
Auto Scaling.
Modules:
To achieve the course objectives, the course explores the following topics:

Module 1: Cloud Concepts Overview

Module 2: Cloud Economics and Billing

Module 3: AWS Global Infrastructure Overview

Module 4: AWS Cloud Security

Module 5: Networking and Content Delivery

Module 6: Compute

Module 7: Storage

Module 8: Databases

Module 9: Cloud Architecture

Module 10: Automatic Scaling and Monitoring


MODULES
Module 1: Cloud Concepts Overview

Module sections:
• Introduction to cloud computing

• Advantages of cloud computing

• Introduction to Amazon Web Services (AWS)

• Moving to the AWS Cloud: the AWS Cloud Adoption Framework (AWS CAF)

In this module, Section 1 introduces cloud computing. In Section 2, you learn about the advantages
that cloud computing provides over a traditional, on-premises computing model. In Section 3, you learn about
what AWS is and the broad range of AWS products and services. You become familiar with the idea that AWS
services are designed to work together to build solutions that meet business goals and technology requirements.
The module concludes with Section 4, which is about the AWS Cloud Adoption Framework (AWS CAF). It
covers the fundamental changes that must be supported for an organization to successfully migrate its IT
portfolio to the cloud.

Module 2: Cloud Economics and Billing

Module sections:
• Fundamentals of pricing

• Total Cost of Ownership

• AWS Organizations

• AWS Billing and Cost Management

• Technical support
The purpose of this module is to introduce you to the business advantages of moving to the cloud. Section 1 describes the principles of how AWS sets prices for its various services. This includes the AWS pricing model and a description of the AWS Free Tier: https://aws.amazon.com/free. Section 2 describes Total Cost of Ownership and how customers can reduce their costs by moving IT services to the cloud. The section outlines four types of costs that are reduced by using cloud computing, and provides examples that illustrate each of these types. Section 3 describes how customers can use AWS Organizations to manage their costs. Section 4 describes billing and the components of the AWS Billing dashboard. This section includes a demonstration of how customers can use the dashboard to understand and manage their costs. Finally, Section 5 describes the four different options for AWS Technical Support: Basic Support, Developer Support, Business Support, and Enterprise Support. The section also includes an activity that will help you understand the benefits of each support option.

Module 3: AWS Global Infrastructure Overview

Module sections:
• AWS Global Infrastructure

• AWS services and service category overview

Module 3 provides an overview of the AWS global infrastructure. In Section 1, you are introduced to the major parts of the AWS Global Infrastructure, including Regions, Availability Zones, the network infrastructure, and Points of Presence. In Section 2, you are shown a listing of all the AWS service categories, and then you are provided with a listing of each of the services that this course will discuss. The module ends with an AWS Management Console clickthrough activity.
Module 4: AWS Cloud Security
Module sections:
• AWS shared responsibility model

• AWS Identity and Access Management (IAM)

• Securing a new AWS account

• Securing accounts

• Securing data on AWS

• Working to ensure compliance

This module provides an introduction to the AWS approach to security. In Section 1, you are introduced to the AWS shared responsibility model, which specifies which responsibilities belong to the customer and which belong to AWS. Section 2 introduces you to the key concepts of AWS Identity and Access Management (IAM), including users, groups, policies, and roles. Section 3 provides guidance on how to secure a new AWS account. It discusses how you should avoid using the AWS account root user for day-to-day activities. It also covers best practices, such as creating IAM users that have multi-factor authentication (MFA) enabled. Section 4 highlights other ways to secure accounts. It discusses the security-related features of AWS Organizations, which include service control policies. This section also discusses AWS Shield, Amazon Cognito, and AWS Key Management Service (AWS KMS). Section 5 discusses how to secure data on AWS. Topics include encryption of data at rest and data in transit, and options for securing data that is stored on Amazon Simple Storage Service (Amazon S3).
Finally, Section 6 discusses how AWS supports customer efforts to deploy solutions that comply with laws and regulations. It also discusses the certifications that AWS maintains and the AWS services, such as AWS Config and AWS Artifact, that support compliance.
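The IAM concepts introduced in this module (users, groups, policies, roles) come together in identity-based policy documents. The following is an illustrative example of the standard IAM policy JSON format; the bucket name is a placeholder invented for this sketch, not something from the course:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attached to an IAM user, group, or role, a policy like this grants read access to a single bucket and nothing else, reflecting the least-privilege practice the module recommends.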
Module 5: Networking and Content Delivery

Module sections:

• Networking basics

• Amazon VPC

• VPC networking

• VPC security

• Amazon Route 53

• Amazon CloudFront

The purpose of this module is to introduce you to the fundamentals of AWS networking and content delivery services: Amazon Virtual Private Cloud (Amazon VPC), Amazon Route 53, and Amazon CloudFront. You will have the opportunity to label a virtual private cloud (VPC) network architecture diagram, design a VPC, watch how a VPC is built, and finally build a VPC yourself. Section 1 discusses networking concepts that will be referenced throughout the rest of the module: network, subnet, IPv4 and IPv6 addresses, and Classless Inter-Domain Routing (CIDR) notation. Section 2 provides an overview of the key terminology and features of Amazon VPC, which you must be familiar with when you design and build your own virtual private clouds (VPCs). In Section 3, you learn about several important VPC networking options: internet gateway, network address translation (NAT) gateway, VPC endpoints, VPC sharing, VPC peering, AWS Site-to-Site VPN, AWS Direct Connect, and AWS Transit Gateway. In Section 4, you learn how to secure VPCs with network access control lists (network ACLs) and security groups. Section 5 covers Domain Name System (DNS) resolution and Amazon Route 53. It also covers DNS failover, which introduces the topic of high availability that you will learn about in more detail in Module 10.
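The CIDR notation that Section 1 introduces can be explored with Python's standard ipaddress module. The 10.0.0.0/16 VPC range and /24 subnet below are common illustrative values, not addresses taken from the course:

```python
import ipaddress

# A VPC is assigned an IPv4 CIDR block; subnets carve smaller ranges out of it.
vpc = ipaddress.ip_network("10.0.0.0/16")     # 65,536 addresses
subnet = ipaddress.ip_network("10.0.1.0/24")  # 256 addresses

print(vpc.num_addresses)                            # 65536
print(subnet.subnet_of(vpc))                        # True
print(ipaddress.ip_address("10.0.1.25") in subnet)  # True
```

The /16 and /24 suffixes give the number of fixed prefix bits, so a smaller suffix means a larger address range; this is the same arithmetic used when sizing VPCs and their subnets.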
Module 6: Compute

Module sections:

• Compute services overview

• Amazon EC2

• Amazon EC2 cost optimization

• Container services

• Introduction to AWS Lambda

• Introduction to AWS Elastic Beanstalk

This module provides an introduction to many of the compute services offered by AWS. Section 1 provides a high-level compute services overview. Section 2 introduces you to the key concepts of Amazon Elastic Compute Cloud (Amazon EC2), including Amazon Machine Images (AMIs), instance types, network settings, user data scripts, storage options, security groups, key pairs, instance lifecycle phases, Elastic IP addresses, instance metadata, and the benefits of using Amazon CloudWatch for monitoring. Section 3 focuses on the four pillars of cost optimization, with an emphasis on cost optimization as it relates to Amazon EC2. Section 4 covers container services. It introduces Docker and the differences between virtual machines and containers. It then discusses Amazon Elastic Container Service (Amazon ECS), AWS Fargate, Kubernetes, Amazon Elastic Kubernetes Service (Amazon EKS), and Amazon Elastic Container Registry (Amazon ECR). Section 5 introduces serverless computing with AWS Lambda. Event sources and Lambda function configuration basics are introduced, and the section ends with examples of a schedule-based Lambda function and an event-based Lambda function. Finally, Section 6 describes the advantages of using AWS Elastic Beanstalk for web application deployments. It concludes with a hands-on activity where you deploy a simple web application to Elastic Beanstalk.
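To make the Lambda discussion concrete, here is a minimal sketch of a Python handler in the shape the Lambda runtime expects (a function that receives an event and a context). The event fields here are hypothetical, invented for the example, not a real S3 or CloudWatch payload:

```python
# Minimal event-based handler sketch following the Lambda Python programming
# model: the runtime calls a named function with an event dict and a context.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, the handler can be exercised by calling it directly with a test event:
print(lambda_handler({"name": "AWS"}, None))  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```

In a real deployment, Lambda would invoke this function in response to a configured event source (for example, a schedule or an S3 upload), with `context` carrying runtime metadata.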
Module 7: Storage
Module sections:

• Amazon Elastic Block Store (Amazon EBS)

• Amazon Simple Storage Service (Amazon S3)

• Amazon Elastic File System (Amazon EFS)

• Amazon Simple Storage Service Glacier

Module 7 introduces you to the various options for storing data with AWS. The module provides an overview of storage services, which are based on four different storage technologies, so that you can choose a storage service for various use cases. Section 1 provides you with an overview of the functionality of Amazon Elastic Block Store (Amazon EBS) and a summary of common use cases. It also introduces the concept of block versus object storage, and how to interact with Amazon EBS through the AWS Management Console. Section 2 provides an overview of the functionality of Amazon Simple Storage Service (Amazon S3) and a summary of common use cases. It also describes how Amazon S3 scales as demand grows and discusses the concept of data redundancy. The section also contains a general overview of Amazon S3 pricing. Section 3 starts with an overview of the functionality of Amazon Elastic File System (Amazon EFS) and a summary of common use cases. It also provides an overview of the Amazon EFS architecture and a list of common Amazon EFS resources. Finally, in Section 4, you are provided an overview of the functionality of Amazon S3 Glacier and a summary of common use cases. This last section also describes the lifecycle of migrating data from Amazon S3 to Amazon S3 Glacier.
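The S3-to-S3 Glacier migration described in Section 4 is typically automated with a bucket lifecycle configuration. The following is an illustrative lifecycle rule in the JSON format accepted by the S3 API; the rule ID, the logs/ prefix, and the 90-day threshold are example values chosen for this sketch:

```json
{
  "Rules": [
    {
      "ID": "ArchiveOldLogs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

With a rule like this applied to a bucket, objects under the logs/ prefix transition to the Glacier storage class 90 days after creation, without any manual copying.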
Module 8: Databases
Module sections:
• Amazon Relational Database Service (Amazon RDS)

• Amazon DynamoDB

• Amazon Redshift

• Amazon Aurora

This module introduces you to four of the most commonly used AWS database services, with an emphasis on differentiating which database service to select for various use cases. Section 1 provides an overview of the Amazon Relational Database Service (Amazon RDS). It describes the difference between a managed and an unmanaged service, and provides an overview of how to provide a highly available Amazon RDS implementation. In Section 2, an overview of the Amazon DynamoDB service is provided. The section also describes how DynamoDB uses data partitioning to address scenarios that call for high data volumes and the ability to scale out on demand. Section 3 provides an overview of Amazon Redshift. The section describes the parallel processing architecture of Amazon Redshift, and how this architecture supports processing very large datasets. It also reviews some of the more common use cases for Amazon Redshift. Finally, Section 4 provides an overview of Amazon Aurora. The module describes the use cases where Amazon Aurora is a better solution than Amazon RDS. It also discusses how Amazon Aurora provides a more resilient database solution through the use of multiple Availability Zones.
Module 9: Cloud Architecture

Module sections:
• AWS Well-Architected Framework

• Reliability and availability

• AWS Trusted Advisor

The purpose of this module is to introduce you to designing and building cloud architectures according to best practices. In Section 1, you learn about the AWS Well-Architected Framework and its purpose, how the framework is organized, and its design principles and best practices. You will also learn how to use it to design a cloud architecture solution that is secure, performant, resilient, and efficient. Finally, this section also introduces the AWS Well-Architected Tool, which can be used to evaluate your architectural designs against AWS Well-Architected Framework best practices. In Section 2, you learn about reliability and high availability, which are two factors to consider when you design an architecture that can withstand failure. In Section 3, you learn about AWS Trusted Advisor. You can use this tool to evaluate and improve your AWS environment when you implement your architectural design.
Module 10: Automatic Scaling and Monitoring
Module sections:
• Elastic Load Balancing

• Amazon CloudWatch

• Amazon EC2 Auto Scaling

The purpose of this module is to introduce you to three fundamental AWS services that can be used together to build dynamic, scalable architectures. Section 1 introduces you to Elastic Load Balancing, which is a service that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. Section 2 introduces you to Amazon CloudWatch, which is a service that provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Finally, Section 3 introduces you to the Amazon EC2 Auto Scaling features that help you maintain application availability and enable you to automatically add or remove EC2 instances according to conditions that you define.
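Amazon EC2 Auto Scaling is a managed service, but the arithmetic behind a target-tracking style policy can be sketched in plain Python. This is an illustration of the idea only, not AWS's actual algorithm; the metric values, 50 percent target, and group size limits are invented for the example:

```python
import math

def desired_capacity(current_instances, avg_cpu_percent, target_cpu_percent=50,
                     min_size=1, max_size=10):
    """Pick a fleet size so projected utilization approaches the target,
    clamped to the group's configured minimum and maximum sizes."""
    desired = math.ceil(current_instances * avg_cpu_percent / target_cpu_percent)
    return max(min_size, min(max_size, desired))

print(desired_capacity(4, 90))  # load above target -> scale out to 8
print(desired_capacity(4, 20))  # load below target -> scale in to 2
```

The real service works the same way conceptually: a CloudWatch metric feeds a policy, the policy computes a desired capacity, and the group launches or terminates instances to match it.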
AWS certification exam information

The AWS Certified Cloud Practitioner certification provides individuals in various technology roles with a way to validate their AWS Cloud knowledge and professional credibility. The exam covers four domains: cloud concepts, security, technology, and billing and pricing. The AWS Certified Cloud Practitioner exam is the only AWS certification exam that is classified as foundational. It is often the first AWS certification that IT professionals attempt to obtain. Though this AWS Academy Cloud Foundations course is not listed in the AWS Certified Cloud Practitioner Exam Guide as one of the AWS options recommended to prepare for the exam, this course does cover many of the same topics that are covered by AWS commercial courses, such as AWS Technical Essentials, AWS Business Essentials, and AWS Cloud Practitioner Essentials. Therefore, the AWS Academy Cloud Foundations course is a good way to help prepare yourself to take this exam. The services included in the AWS Certified Cloud Practitioner exam change as new services are added. At a minimum, you should be able to describe the overall functionality of a broad range of AWS services before taking the exam. For an overview of AWS services, see the Amazon Web Services Cloud Platform section of the Overview of Amazon Web Services whitepaper.
ACTIVITY LOG FOR THE FIRST WEEK

DAY 1 (30/05/2023): Introduction to cloud computing. Learning outcome: learned about the introduction to cloud computing.

DAY 2 (31/05/2023): Advantages of cloud computing. Learning outcome: learned about the advantages of cloud computing.

DAY 3 (01/06/2023): Introduction to Amazon Web Services (AWS). Learning outcome: learned about the introduction to Amazon Web Services.

DAY 4 (02/06/2023): Introduction to Amazon Web Services (AWS). Learning outcome: learned about the introduction to Amazon Web Services.

DAY 5 (03/06/2023): AWS Cloud Adoption Framework (AWS CAF). Learning outcome: learned about the AWS Cloud Adoption Framework.

DAY 6 (04/06/2023): AWS Cloud Adoption Framework (AWS CAF). Learning outcome: learned about the AWS Cloud Adoption Framework.
WEEKLY REPORT
WEEK-1 (From 30/05/2023 to 04/06/2023)

Objective of the activity done: The main objective of the first week's activities was to learn the introduction to cloud computing and to complete certain modules as part of the internship.

Detailed Report:
Day–1:
Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources via the internet with pay-as-you-go pricing. These resources run on server computers that are located in large data centers in different locations around the world. When you use a cloud service provider like AWS, that service provider owns the computers that you are using. These resources can be used together like building blocks to build solutions that help meet business goals and satisfy technology requirements. In the traditional computing model, infrastructure is thought of as hardware. Hardware solutions are physical, which means they require space, staff, physical security, planning, and capital expenditure. In addition to significant upfront investment, another prohibitive aspect of traditional computing is the long hardware procurement cycle that involves acquiring, provisioning, and maintaining on-premises infrastructure. With a hardware solution, you must ask if there is enough resource capacity or sufficient storage to meet your needs, and you provision capacity by guessing theoretical maximum peaks. If you don’t meet your projected maximum peak, then you pay for expensive resources that stay idle. If you exceed your projected maximum peak, then you don’t have sufficient capacity to meet your needs. And if your needs change, then you must spend the time, effort, and money required to implement a new solution.
Day–2:
Trade capital expense for variable expense:
Capital expenses (capex) are funds that a company uses to acquire, upgrade, and maintain physical assets such as property, industrial buildings, or equipment. Do you remember the data center example in the traditional computing model where you needed to rack and stack the hardware, and then manage it all? You must pay for everything in the data center, whether you use it or not. By contrast, a variable expense is an expense that the person who bears the cost can easily alter or avoid. Instead of investing heavily in data centers and servers before you know how you will use them, you can pay only when you consume resources and pay only for the amount you consume. Thus, you save money on technology. It also enables you to adapt to new applications with as much capacity as you need in minutes, instead of weeks or days. Maintenance is reduced, so you can focus more on the core goals of your business.
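The trade-off can be made concrete with a back-of-the-envelope comparison. All the figures below are hypothetical, chosen only to illustrate how paying per use can undercut a peak-sized upfront purchase; real AWS prices vary by service and Region:

```python
# Capex model: buy hardware sized for the projected peak, plus ongoing upkeep.
upfront_hardware = 120_000          # servers purchased up front
monthly_maintenance = 2_000         # staff, power, and space for the data center

# Variable-expense model: pay per instance-hour, only while capacity runs.
hourly_rate = 0.10
instances = 10
hours_used_per_month = 300

months = 36
capex_total = upfront_hardware + monthly_maintenance * months
cloud_total = hourly_rate * instances * hours_used_per_month * months

print(capex_total)  # 192000
print(cloud_total)  # 10800.0
```

The gap comes from utilization: the capex model pays for idle peak capacity around the clock, while the variable model pays only for the 300 hours per month the instances actually run.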
Benefit from massive economies of scale:
By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices. You can also deploy your application in multiple AWS Regions around the world with just a few clicks. As a result, you can provide lower latency and a better experience for your customers simply and at minimal cost.
Day–3:
In general, a web service is any piece of software that makes itself available over the internet or on private (intranet) networks. A web service uses a standardized format, such as Extensible Markup Language (XML) or JavaScript Object Notation (JSON), for the request and the response of an application programming interface (API) interaction. It is not tied to any one operating system or programming language. Because it is described via an interface definition file, it is discoverable.
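The standardized request/response format can be illustrated with a few lines of Python using the standard json module. The field names below are invented for the example and do not correspond to a real AWS API:

```python
import json

# A request expressed as a JSON document, serialized for the wire and back.
request = {"action": "DescribeInstances", "max_results": 5}
payload = json.dumps(request)              # what actually travels over the wire
response = json.loads('{"instances": [], "truncated": false}')

print(payload)                 # {"action": "DescribeInstances", "max_results": 5}
print(response["truncated"])   # False
```

Because both sides agree on this text format, the caller and the service can be written in different languages and run on different operating systems, which is exactly the point made above.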

For example, say you’re building a database application. Your customers might be sending data to your
Amazon Elastic Compute Cloud (Amazon EC2) instances, which is a service in the compute category. These
EC2 servers batch the data in one-minute increments and add an object per customer to Amazon Simple
Storage Service (Amazon S3), the AWS storage service you’ve chosen to use. You can then use a nonrelational
database like Amazon DynamoDB to power your application, for example, to build an index so that you can
find all the objects for a given customer that were collected over a certain period. You might decide to run
these services inside an Amazon Virtual Private Cloud (Amazon VPC), which is a service in the networking
category. The purpose of this simple example is to illustrate that you can select web services from different
categories and use them together to build a solution.

Day–4:

The array of AWS services can be intimidating as you start your journey into the cloud. This course
focuses on some of the more common services in the following service categories: compute, storage, database,
networking and content delivery, security, identity, and compliance, management and governance, and AWS
cost management.
Legend:
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic Compute Cloud (Amazon EC2)
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon Elastic File System (Amazon EFS)
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon Relational Database Service (Amazon RDS)
• Amazon Simple Storage Service (Amazon S3)
• Amazon Virtual Private Cloud (Amazon VPC)
• AWS Identity and Access Management (IAM)
• AWS Key Management Service (AWS KMS)

AWS Management Console:

The console provides a rich graphical interface to a majority of the features offered by AWS. (Note: From
time to time, new features might not have all of their capabilities included in the console when the feature
initially launches.)

AWS Command Line Interface (AWS CLI):


The AWS CLI provides a suite of utilities that can be launched from a command script in Linux, macOS, or
Microsoft Windows.

Software development kits (SDKs):


AWS provides software development kits that enable you to access AWS services from a variety of popular programming languages. This makes it easy to use AWS in your existing applications, and it also enables you to create applications that deploy and monitor complex systems entirely through code.

Day–5:
As you have learned so far in this module, cloud computing offers many advantages over the traditional model. However, for most organizations, cloud adoption does not happen instantly. Technology is one thing, but an organization also consists of people and processes, and these three elements must all be in alignment for successful cloud adoption. Cloud computing introduces a significant shift in how technology is obtained, used, and managed. It also shifts how organizations budget and pay for technology services. Cloud adoption requires that fundamental changes are discussed and considered across an entire organization. It also requires that stakeholders across all organizational units, both within and outside IT, support these new changes. In this last section, you learn about the AWS CAF, which was created to help organizations design and travel an accelerated path to successful cloud adoption.
Day–6:
Each organization’s cloud adoption journey is unique. However, in order for any organization to successfully migrate its IT portfolio to the cloud, three elements (that is, people, process, and technology) must be in alignment. Business and technology leaders in an organization must understand the organization’s current state, target state, and the transition that is needed to achieve the target state so they can set goals and create processes for staff. The AWS Cloud Adoption Framework (AWS CAF) provides guidance and best practices to help organizations identify gaps in skills and processes. It also helps organizations build a comprehensive approach to cloud computing, both across the organization and throughout the IT lifecycle, to accelerate successful cloud adoption. At the highest level, the AWS CAF organizes guidance into six areas of focus, called perspectives. Perspectives span people, processes, and technology. Each perspective consists of a set of capabilities, which cover distinct responsibilities that are owned or managed by functionally related stakeholders. Capabilities within each perspective are used to identify which areas of an organization require attention. By identifying gaps, prescriptive work streams can be created that support a successful cloud journey.
ACTIVITY LOG FOR THE SECOND WEEK

Day & Date | Description of the daily activity | Learning Outcome | Person In-Charge Signature
DAY 1, 05/06/2023 | Fundamentals of pricing | Learn about the fundamentals of pricing |
DAY 2, 06/06/2023 | Total Cost of Ownership | Learn about the Total Cost of Ownership |
DAY 3, 07/06/2023 | AWS Organizations | Learn about AWS Organizations |
DAY 4, 10/06/2023 | AWS Billing and Cost Management | Learn about AWS Billing and Cost Management |
DAY 5, 11/06/2023 | Technical Support | Learn about Technical Support |
DAY 6, 12/06/2023 | Overview of the Billing Dashboard | Show the overview of the Billing Dashboard |
WEEKLY REPORT
WEEK 2 (From:05/06/2023 to 12/06/2023)

Objective of the activity done: The main objective of the second week's activities is to learn about the
fundamentals of pricing, Total Cost of Ownership, and Technical Support, and how to complete the given
modules as part of the internship.
Detailed Report:

Day-1:
There are three fundamental drivers of cost with AWS:
compute, storage, and outbound data transfer. These characteristics vary somewhat, depending on the AWS
product and pricing model that you choose. In most cases, there is no charge for inbound data transfer or for data
transfer between other AWS services within the same AWS Region. There are some exceptions, so be sure to
verify data transfer rates before you begin to use an AWS service. Outbound data transfer is aggregated across
services and then charged at the outbound data transfer rate. This charge appears on the monthly statement as
AWS Data Transfer Out.
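To make the three cost drivers concrete, here is a minimal sketch of a monthly cost estimate. The per-unit rates are invented placeholders, not real AWS prices; only the structure (compute plus storage plus outbound transfer, with inbound free) reflects the model described above.

```python
# Toy monthly-cost estimator illustrating the three fundamental AWS cost
# drivers: compute, storage, and outbound data transfer.
# All rates are made-up placeholders, NOT real AWS prices.
def estimate_monthly_cost(compute_hours, storage_gb, outbound_gb,
                          compute_rate=0.10, storage_rate=0.023,
                          outbound_rate=0.09):
    compute = compute_hours * compute_rate
    storage = storage_gb * storage_rate
    # Inbound transfer is typically free; only outbound transfer is charged.
    transfer = outbound_gb * outbound_rate
    return round(compute + storage + transfer, 2)

# One instance-month (720 hours), 100 GB stored, 50 GB transferred out.
print(estimate_monthly_cost(720, 100, 50))  # → 78.8
```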

Day-2:
On-premises versus cloud is a question that many businesses ask. The difference between these two options
is how they are deployed. An on-premises infrastructure is installed locally on a company’s own computers
and servers. There are several fixed costs, also known as capital expenses, that are associated with the
traditional infrastructure. Capital expenses include facilities, hardware, licenses, and maintenance staff.
Scaling up can be expensive and time-consuming. Scaling down does not reduce fixed costs.
A cloud infrastructure is purchased from a service provider who builds and maintains the facilities, hardware,
and maintenance staff. A customer pays for what is used. Scaling up or down is simple. Costs are easy to
estimate because they depend on service use. It is difficult to compare the on-premises IT delivery model with the
AWS Cloud. The two are different because they use different concepts and terms. Using on-premises IT
involves a discussion that is based on capital expenditure, long planning cycles, and multiple components to
buy, build, manage, and refresh resources over time. Using the AWS Cloud involves a discussion about
flexibility, agility, and consumption-based costs.
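The contrast between fixed capital expenses and consumption-based costs can be sketched numerically. All figures below are invented purely to illustrate the Total Cost of Ownership comparison, not real prices.

```python
# Sketch comparing on-premises capital expense with cloud pay-per-use.
# Every number here is a hypothetical illustration of the TCO idea.
def on_prem_tco(hardware, facilities, licenses, staff_per_year, years):
    # Fixed capital expenses are paid regardless of actual utilization,
    # and scaling down does not reduce them.
    return hardware + facilities + licenses + staff_per_year * years

def cloud_tco(monthly_usage_cost, years):
    # Consumption-based: cost tracks actual service use.
    return monthly_usage_cost * 12 * years

print(on_prem_tco(50_000, 20_000, 10_000, 30_000, 3))  # → 170000
print(cloud_tco(3_000, 3))                             # → 108000
```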
Day-3:
AWS Organizations is a free account management service that enables you to consolidate multiple AWS
accounts into an organization that you create and centrally manage. AWS Organizations includes consolidated
billing and account management capabilities that help you to better meet the budgetary, security, and
compliance needs of your business.
The main benefits of AWS Organizations are:
Centrally managed access policies across multiple AWS accounts.
Controlled access to AWS services.
Automated AWS account creation and management.
Consolidated billing across multiple AWS accounts.
Here is some terminology to understand the structure of AWS Organizations. The diagram shows a basic
organization, or root, that consists of seven accounts that are organized into four organizational units (or OUs).
An OU is a container for accounts within a root. An OU can also contain other OUs. This structure enables you to
create a hierarchy that looks like an upside-down tree with the root at the top. The branches consist of child
OUs and they move downward until they end in accounts, which are like the leaves of the tree.
When you attach a policy to one of the nodes in the hierarchy, it flows down and it affects all the branches
and leaves. This example organization has several policies that are attached to some of the OUs or are attached
directly to accounts. An OU can have only one parent and, currently, each account can be a member of exactly
one OU. An account is a standard AWS account that contains your AWS resources. You can attach a policy
to an account to apply controls to only that one.
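The upside-down-tree structure and the way attached policies flow down the branches can be sketched in a few lines. The OU names and policy names below are hypothetical; this is a toy model of the hierarchy, not the AWS Organizations API.

```python
# Minimal model of the AWS Organizations hierarchy described above:
# a policy attached to a node flows down to every OU and account beneath it.
# All names and policies are hypothetical.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.policies = name, parent, []

    def effective_policies(self):
        # Inherited policies from ancestors, then policies attached here.
        inherited = self.parent.effective_policies() if self.parent else []
        return inherited + self.policies

root = Node("root")
prod_ou = Node("Production", parent=root)       # an OU within the root
account = Node("web-app-account", parent=prod_ou)  # a leaf account

root.policies.append("FullAWSAccess")
prod_ou.policies.append("DenyNonCompliantServices")

# The account is affected by everything attached above it in the tree.
print(account.effective_policies())  # → ['FullAWSAccess', 'DenyNonCompliantServices']
```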
Day-4:
AWS Billing and Cost Management is the service that you use to pay your AWS bill, monitor usage, and
budget your costs. Billing and Cost Management enables you to forecast and obtain a better idea of what your
costs and usage might be in the future so that you can plan ahead. You can set a custom time period and
determine whether you would like to view your data at a monthly or daily level of granularity. With the filtering
and grouping functionality, you can further analyze your data using a variety of available dimensions. The
AWS Cost and Usage Report tool enables you to identify opportunities for optimization by understanding
your cost and usage data trends and how you are using your AWS implementation. The AWS Billing and
Cost Management console includes the Cost Explorer page for viewing your AWS cost data as a graph.
With Cost Explorer, you can visualize, understand, and manage your AWS costs and usage over time.
Cost Explorer includes a default report that visualizes your costs and usage for your top cost-incurring AWS
services. The monthly running costs report gives you an overview of all your costs for the past 3 months. It also
provides forecasted numbers for the coming month, with a corresponding confidence interval. Cost Explorer
is a free tool that enables you to:
 View charts of your costs.
 View cost data for the past 13 months.
 Forecast how much you are likely to spend over the next 3 months.
 Discover patterns in how much you spend on AWS resources over time, and identify cost problem areas.
 Identify the services that you use the most, and view metrics such as which Availability Zones have the most traffic or which linked AWS account is used the most.
Day-5:
AWS Budgets uses the cost visualization that is provided by Cost Explorer to show you the status
of your budgets and to provide forecasts of your estimated costs. You can also use AWS Budgets to create
notifications for when you go over your budget for the month, or when your estimated costs exceed your
budget. Budgets can be tracked at the monthly, quarterly, or yearly level, and you can customize the start and
end dates. Budget alerts can be sent via email or via Amazon Simple Notification Service (Amazon SNS).
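The kind of check AWS Budgets performs (compare actual and forecasted spend against a budget, then alert) can be sketched as follows. The 80% threshold and the dollar figures are illustrative assumptions, not AWS defaults.

```python
# Sketch of a budget-alert check in the spirit of AWS Budgets: compare
# actual and forecasted spend against a budget and decide whether an alert
# (e.g., via email or Amazon SNS) should fire. Threshold and figures are
# hypothetical.
def budget_alerts(budget, actual, forecast, threshold=0.8):
    alerts = []
    if actual >= budget * threshold:
        alerts.append("actual spend passed {:.0%} of budget".format(threshold))
    if forecast > budget:
        alerts.append("forecasted spend exceeds budget")
    return alerts

print(budget_alerts(budget=100.0, actual=85.0, forecast=120.0))
```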
For accessibility: The AWS Billing budgets panel showing budget names, current and future costs and usage, and headings
for current and forecasted versus budgeted amounts. End of accessibility description. The AWS Cost and Usage Report
is a single location for accessing comprehensive information about your AWS costs and usage. This tool
lists the usage for each service category that is used by an account (and its users) in hourly or daily line
items, and any tax that you activated for tax allocation purposes. You can choose to have AWS publish
billing reports to an S3 bucket. These reports can be updated once a day. Whether you are new to AWS or continuing
to adopt AWS services and applications as your business solutions, AWS wants to help you do amazing things
with AWS. AWS Support can provide you with a unique combination of tools and expertise based on your
current or planned use cases. AWS Support was developed to provide complete support and the
right resources to aid your success. We want to support all our customers, including customers that might be
experimenting with AWS, those that are looking for production uses of AWS, and also customers that use
AWS as a business-critical resource. AWS Support can vary the type of support that is provided, depending
on the customer’s needs and goals.

Day-6:
In summary you:
• Explored the fundamentals of AWS pricing
• Reviewed Total Cost of Ownership concepts
• Reviewed an AWS Pricing Calculator estimate.
Total Cost of Ownership is a concept to help you understand and compare the costs that are associated with
different deployments. AWS provides the AWS Pricing Calculator to assist you with the calculations that are
needed to estimate cost savings.
Use the AWS Pricing Calculator to:
• Estimate monthly costs
• Identify opportunities to reduce monthly costs
• Model your solutions before building them
• Explore price points and calculations behind your estimate
• Find the available instance types and contracts that meet your needs
AWS Billing and Cost Management provides you with tools to help you access, understand, allocate,
control, and optimize your AWS costs and usage. These tools include AWS Bills, AWS Cost Explorer, AWS
Budgets, and AWS Cost and Usage Reports, which provide information about your AWS costs and usage,
including which AWS services are the main cost drivers.

ACTIVITY LOG FOR THE THIRD WEEK

Day & Date | Description of the daily activity | Learning Outcome | Person In-Charge Signature
DAY 1, 13/06/2023 | Live Session 5 Content: Lightning Web Components (LWC) | Lightning Web Components (LWC) |
DAY 2, 14/06/2023 | Self-Paced Learning Modules: VS Code Setup | VS Code setup installation, some concepts |
DAY 3, 15/06/2023 | Self-Paced Learning Modules: CLC Setup | VS Code setup installation, some concepts |
DAY 4, 16/06/2023 | Live Session 6 Content: Lightning Web Components (LWC & API) | Basics of LWC, Visualforce, use Lightning components in Lightning Experience, REST API, SOAP API |
DAY 5, 17/06/2023 | Self-Paced Learning Modules: API Basics, Event Monitoring | Learned the basics of APIs, put the web in web API, download and visualize event log files |
DAY 6, 18/06/2023 | Self-Paced Learning Modules: Shield Platform Encryption, Apex Integration Services | Set up and manage Shield Platform Encryption, deploy Shield Platform Encryption in a smart way, Apex SOAP and REST callouts |
WEEKLY REPORT
WEEK-3 (From:13/06/2023 to 18/06/2023)
The objective of the activity done: The main objective of the third week's activities is to
learn about Lightning Web Components (LWC) and Lightning Web Components (LWC & API).
Detailed Report:
Day–1:
The AWS Cloud infrastructure is built around Regions. AWS has 22 Regions worldwide. An AWS Region
is a physical geographical location with one or more Availability Zones. Availability Zones in turn consist of
one or more data centers. To achieve fault tolerance and stability, Regions are isolated from one another.
Resources in one Region are not automatically replicated to other Regions. When you store data in a specific
Region, it is not replicated outside that Region. It is your responsibility to replicate data across Regions, if
your business needs require it. AWS Regions that were introduced before March 20, 2019 are enabled by
default. Regions that were introduced after March 20, 2019—such as Asia Pacific (Hong Kong) and Middle
East (Bahrain)—are disabled by default. You must enable these Regions before you can use them. You can
use the AWS Management Console to enable or disable a Region. Some Regions have restricted access. An
Amazon AWS (China) account provides access to the Beijing and Ningxia Regions only. To learn more about
AWS in China, see: https://www.amazonaws.cn/en/about-aws/china/. The isolated AWS GovCloud (US)
Region is designed to allow US government agencies and customers to move sensitive workloads into the
cloud by addressing their specific regulatory compliance requirements.
For accessibility:
Snapshot from the AWS infrastructure website that shows a picture of downtown London, including Tower
Bridge and the Shard. It notes that there are three Availability Zones in the London Region. End of accessibility
description.
Day–2:
As discussed previously, the AWS Global Infrastructure can be broken down into three elements: Regions,
Availability Zones, and Points of Presence, which include edge locations. This infrastructure provides the
platform for a broad set of services, such as networking, storage, compute services, and databases—and these
services are delivered as an on-demand utility that is available in seconds, with pay-as-you-go pricing.
For accessibility:
Marketing diagram showing infrastructure at the bottom, consisting of Regions, Availability Zones, and edge
locations. The next level up is labeled foundational services and includes graphics for compute, networking,
and storage; that level is highlighted. The next level up, platform services, includes databases, analytics, app
services, deployment and management, and mobile services. The top layer is labeled applications and includes
virtual desktops and collaboration and sharing. End of accessibility description.
Day–3:
AWS offers a broad set of cloud-based services. There are 23 different product or service categories, and
each category consists of one or more services. This course will not attempt to introduce you to each service.
Rather, the focus of this course is on the services that are most widely used and offer the best introduction to
the AWS Cloud. This course also focuses on services that are more likely to be covered in the AWS Certified
Cloud Practitioner exam. The categories that this course will discuss are highlighted on the slide: Compute,
Cost Management, Database, Management and Governance, Networking and Content Delivery, Security,
Identity, and Compliance, and Storage.
Day–4:
If you click Amazon EC2, it takes you to the Amazon EC2 page. Each product page provides a detailed
description of the product and lists some of its benefits. Explore the different service groups to understand the
categories and services within them. Now that you know how to locate information about different services,
this module will discuss the highlighted service categories. The next seven slides list the individual services
—within each of the categories and highlighted above— that this course will discuss.

Day–5:
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers scalability, data
availability, security, and performance. Use it to store and protect any amount of data for websites, mobile
apps, backup and restore, archive, enterprise applications, Internet of Things (IoT) devices, and big data
analytics. Amazon Elastic Block Store (Amazon EBS) is high-performance block storage that is designed for use
with Amazon EC2 for both throughput-intensive and transaction-intensive workloads. It is used for a broad range of
workloads, such as relational and non- relational databases, enterprise applications, containerized applications,
big data analytics engines, file systems, and media workflows. Amazon Elastic File System (Amazon EFS)
provides a scalable, fully managed, elastic Network File System (NFS) file system for use with AWS Cloud
services and on-premises resources. It is built to scale on demand to petabytes, growing and shrinking
automatically as you add and remove files. It reduces the need to provision and manage capacity to
accommodate growth. Amazon Simple Storage Service Glacier is a secure, durable, and extremely low-cost
Amazon S3 cloud storage class for data archiving and long-term backup. It is designed to deliver 11 9s of
durability, and to provide comprehensive security and the capabilities to meet stringent regulatory
requirements.
Day–6:
AWS compute services include the services listed here, and many others. Amazon Elastic Compute Cloud
(Amazon EC2) provides resizable compute capacity as virtual machines in the cloud. Amazon EC2 Auto
Scaling enables you to automatically add or remove EC2 instances according to conditions that you define.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container
orchestration service that supports Docker containers. Amazon Elastic Container Registry (Amazon ECR) is
a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy
Docker container images. AWS Elastic Beanstalk is a service for deploying and scaling web applications and
services on familiar servers such as Apache and Microsoft Internet Information Services (IIS). AWS Lambda
enables you to run code without provisioning or managing servers. You pay only for the compute time that you
consume; there is no charge when your code is not running. Amazon Elastic Kubernetes Service (Amazon
EKS) makes it easy to deploy, manage, and scale containerized applications that use Kubernetes on
AWS. AWS Fargate is a compute engine for Amazon ECS that allows you to run containers without having to
manage servers or clusters.
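Of the compute options above, AWS Lambda has the simplest programming model: you supply only a handler function that AWS invokes in response to events. The sketch below uses the standard Python handler shape; the event field (`name`) and the response body are hypothetical examples.

```python
# AWS Lambda runs a handler function in response to events, with no servers
# to provision or manage. This is the conventional Python handler signature;
# the event contents here are a hypothetical example.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Locally, we can invoke the handler directly with a sample event
# (the context object is unused in this sketch, so None suffices).
print(lambda_handler({"name": "AWS"}, None))
```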
ACTIVITY LOG FOR THE FOURTH WEEK

Day & Date | Brief description of the daily activity | Learning Outcome | Person In-Charge Signature
DAY 1, 19/06/2023 | AWS shared responsibility model | Learned the AWS global infrastructure |
DAY 2, 20/06/2023 | AWS Identity and Access Management (IAM) | Learned the AWS services and service category overview |
DAY 3, 21/06/2023 | Securing a new AWS account | Learned the AWS Management Console click-through |
DAY 4, 22/06/2023 | Securing accounts | Learned the AWS Management Console click-through |
DAY 5, 23/06/2023 | Securing data on AWS | Learned the AWS global infrastructure |
DAY 6, 24/06/2023 | Working to ensure compliance | Learned the AWS global infrastructure |
WEEKLY REPORT

WEEK 4 (From 19/06/2023 to 24/06/2023)

The objective of the activity done: The main objective of the fourth week is to test the knowledge gained
over the course by completing modules on AWS Identity and Access Management (IAM), securing a new
AWS account, securing accounts, etc.

Detailed Report:
Day1:
Security and compliance are a shared responsibility between AWS and the customer. This shared
responsibility model is designed to help relieve the customer’s operational burden. At the same time, to provide
the flexibility and customer control that enables the deployment of customer solutions on AWS, the customer
remains responsible for some aspects of the overall security. The differentiation of who is responsible for what
is commonly referred to as security “of” the cloud versus security “in” the cloud. AWS operates, manages,
and controls the components from the software virtualization layer down to the physical security of the
facilities where AWS services operate. AWS is responsible for protecting the infrastructure that runs all the services
that are offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking,
and facilities that run the AWS Cloud services. The customer is responsible for the encryption of data at rest
and data in transit. The customer should also ensure that the network is configured for security and that security
credentials and logins are managed safely. Additionally, the customer is responsible for the configuration of
security groups and the configuration of the operating systems that run on the compute instances that they launch
(including updates and security patches).

Day2:
AWS Identity and Access Management (IAM) allows you to control access to compute, storage, database,
and application services in the AWS Cloud. IAM can be used to handle authentication, and to specify and
enforce authorization policies so that you can specify which users can access services. IAM is a tool that
centrally manages access to launching, configuring, managing, and terminating resources in your AWS
account. It provides granular control over access to resources, including the ability to specify exactly which
API calls the user is authorized to make to each service. Whether you use the AWS Management Console, the
AWS CLI, or the AWS software development kits (SDKs), every call to an AWS service is an API call. With
IAM, you can manage which resources can be accessed by whom, and how these resources can be accessed.
You can grant different permissions to different people for different resources. For example, you might allow
some users full access to Amazon EC2, Amazon S3, Amazon DynamoDB, Amazon Redshift, and other AWS
services. However, for other users, you might allow read-only access to only a few S3 buckets. Similarly, you
might grant permission to other users to administer only specific EC2 instances.
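The idea that a policy specifies exactly which API calls a user may make can be sketched with a toy evaluator. The policy structure loosely mirrors an IAM policy document, but this simplified model omits explicit denies, resources, and conditions, all of which real IAM evaluates.

```python
# Simplified sketch of IAM-style policy evaluation: an action is permitted
# only if some attached statement allows it. Real IAM also evaluates
# explicit denies, resource ARNs, and conditions; this toy model omits them.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject", "s3:ListBucket"]},
        {"Effect": "Allow", "Action": ["ec2:DescribeInstances"]},
    ]
}

def is_allowed(policy, action):
    # Permit the call if any Allow statement lists this action.
    return any(s["Effect"] == "Allow" and action in s["Action"]
               for s in policy["Statement"])

print(is_allowed(policy, "s3:GetObject"))           # → True
print(is_allowed(policy, "ec2:TerminateInstances")) # → False
```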
Day3:
When you first create an AWS account, you begin with a single sign-in identity that has complete access
to all AWS services and resources in the account. This identity is called the AWS account root user, and it is
accessed by signing in to the AWS Management Console with the email address and password that you used to
create the account. AWS account root users have (and retain) full access to all resources in the account.
Therefore, AWS strongly recommends that you do not use account root user credentials for day-to-day
interactions with the account. Instead, AWS recommends that you use IAM to create additional users and
assign permissions to these users, following the principle of least privilege. For example, if you require
administrator-level permissions, you can create an IAM user, grant that user full access, and then use those
credentials to interact with the account. Later, if you need to revoke or modify your permissions, you can
delete or modify any policies that are associated with that IAM user. Additionally, if you have multiple users
that require access to the account, you can create unique credentials for each user and define which user will
have access to which resources. For example, you can create IAM users with read-only access to resources in
your AWS account and distribute those credentials to users that require read access. You should avoid sharing
the same credentials with multiple users. While the account root user should not be used for routine tasks,
there are a few tasks that can only be accomplished by logging in as the account root user. A full list of these
tasks is detailed on the Tasks that require root user credentials AWS documentation page.
Day4:
AWS Organizations is an account management service that enables you to consolidate multiple AWS
accounts into an organization that you create and centrally manage. Here, the focus is on the security features
that AWS Organizations provides. One helpful security feature is that you can group accounts into
organizational units (OUs) and attach different access policies to each OU. For example, if you have accounts
that should only be allowed to access the AWS services that meet certain regulatory requirements, you can put
those accounts into one OU. You then can define a policy that blocks OU access to services that do not meet
those regulatory requirements, and then attach the policy to the OU. Another security feature is that AWS
Organizations integrates with and supports IAM. AWS Organizations expands that control to the account level
by giving you control over what users and roles in an account or a group of accounts can do. The resulting
permissions are the logical intersection of what is allowed by the AWS Organizations policy settings and what
permissions are explicitly granted by IAM in the account for that user or role. The user can access only what
is allowed by both the AWS Organizations policies and IAM policies. Finally, AWS Organizations provides
service control policies (SCPs) that enable you to specify the maximum permissions that member accounts in
the organization can have. In SCPs, you can restrict which AWS services, resources, and individual actions
the users and roles in each member account can access. These restrictions even override the administrators of
member accounts. When AWS Organizations blocks access to a service, resource, or API action, a user or role
in that account can't access it, even if an administrator of a member account explicitly grants such permissions.
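The "logical intersection" of SCP settings and IAM grants maps naturally onto set intersection. The action names below are real AWS action identifiers, but the two permission sets themselves are invented for illustration.

```python
# The effective permissions described above are the logical intersection of
# what the organization's SCP allows and what IAM grants in the member
# account. The sets below are hypothetical examples.
scp_allows = {"s3:GetObject", "s3:PutObject", "ec2:DescribeInstances"}
iam_allows = {"s3:GetObject", "ec2:DescribeInstances", "ec2:TerminateInstances"}

# A user or role can do only what BOTH the SCP and IAM allow.
effective = scp_allows & iam_allows
print(sorted(effective))  # → ['ec2:DescribeInstances', 's3:GetObject']
```

Note that `ec2:TerminateInstances` is dropped even though IAM grants it, because the SCP does not allow it: the SCP caps the maximum permissions regardless of what an account administrator grants.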

Day5:
Data encryption is an essential tool to use when your objective is to protect digital data. Data encryption
takes data that is legible and encodes it so that it is unreadable to anyone who does not have access to the secret
key that can be used to decode it. Thus, even if an attacker gains access to your data, they cannot make sense
of it. Data at rest refers to data that is physically stored on disk or on tape. You can create encrypted file
systems on AWS so that all your data and metadata are encrypted at rest by using the open-standard Advanced
Encryption Standard (AES)-256 encryption algorithm. When you use AWS KMS, encryption and decryption
are handled automatically and transparently, so that you do not need to modify your applications. If your
organization is subject to corporate or regulatory policies that require encryption of data and metadata at rest,
AWS recommends enabling encryption on all services that store your data. You can encrypt data stored in any
service that is supported by AWS KMS. See How AWS Services use AWS KMS for a list of supported
services.
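The encrypt/decrypt round trip behind encryption at rest can be illustrated with a deliberately simple symmetric cipher. This toy XOR-keystream scheme is NOT AES-256 and is not secure; it only shows that ciphertext is unreadable without the secret key and that the same key reverses the transformation, which AWS KMS manages for you in practice.

```python
# Toy illustration of symmetric encryption at rest: XOR the data with a
# keystream derived from a secret key. This is NOT AES-256 and must never
# be used for real security; with AWS KMS, the real cipher and key handling
# are managed automatically and transparently.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key via repeated hashing.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

secret = b"my-data-key"                       # hypothetical data key
ciphertext = xor_cipher(secret, b"customer record")
print(ciphertext != b"customer record")       # ciphertext is unreadable
print(xor_cipher(secret, ciphertext))         # → b'customer record'
```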

Day6:
As an example of a certification for which you can use AWS services to meet your compliance goals,
consider the ISO/IEC 27001:2013 certification. It specifies the requirements for establishing, implementing,
maintaining, and continually improving an Information Security Management System. The basis of this
certification is the development and implementation of a rigorous security program, which includes the
development and implementation of an Information Security Management System. The Information Security
Management System defines how AWS perpetually manages security in a holistic, comprehensive manner.
AWS also provides security features and legal agreements that are designed to help support customers with
common regulations and laws. One example is the Health Insurance Portability and Accountability Act
(HIPAA) regulation. Another example, the European Union (EU) General Data Protection Regulation
(GDPR) protects European Union data subjects' fundamental right to privacy and the protection of personal
data. It introduces robust requirements that will raise and harmonize standards for data protection, security,
and compliance. AWS Config is a Regional service. To track resources across Regions, enable it in every
Region that you use. AWS Config offers an aggregator feature that can show an aggregated view of resources
across multiple Regions and even multiple accounts.
ACTIVITY LOG FOR THE FIFTH WEEK

Day & Date | Description of the daily activity | Learning Outcome | Person In-Charge Signature
DAY 1, 25/06/2023 | Networking basics | Learn about networking basics |
DAY 2, 26/06/2023 | Amazon VPC | Learn about Amazon VPC |
DAY 3, 28/06/2023 | VPC networking | Learned about VPC networking |
DAY 4, 30/06/2023 | VPC security | Learn about VPC security |
DAY 5, 01/07/2023 | Amazon Route 53 | Learned about and created Amazon Route 53 |
DAY 6, 02/07/2023 | Amazon CloudFront | Learn about Amazon CloudFront |
WEEKLY REPORT
WEEK-5 (From 25/06/2023 to 02/07/2023)
The objective of the activity done: The main objective of this week's activities is to
learn about networking basics, Amazon VPC, etc., based on certain
modules as part of the internship.
Detailed Report:
Day-1:
A computer network is two or more client machines that are connected together to share resources. A
network can be logically partitioned into subnets. Networking requires a networking device (such as a router
or switch) to connect all the clients together and enable communication between them.
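Logical partitioning of a network into subnets can be demonstrated with Python's standard `ipaddress` module. The `10.0.0.0/16` block is a hypothetical VPC-sized private range, split here into four equal subnets (for example, one per Availability Zone).

```python
# A network can be logically partitioned into subnets. Python's stdlib
# ipaddress module shows the idea with a hypothetical VPC-sized CIDR block.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
# Carve the /16 into four /18 subnets (e.g., one per Availability Zone).
subnets = list(vpc.subnets(new_prefix=18))
for s in subnets:
    print(s)
# → 10.0.0.0/18, 10.0.64.0/18, 10.0.128.0/18, 10.0.192.0/18
```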
Day-2:
Many of the concepts of an on-premises network apply to a cloud-based network, but much of the
complexity of setting up a network has been abstracted without sacrificing control, security, and usability. In
this section, you learn about Amazon VPC and the fundamental components of a VPC.
Day-3:
Now that you have learned about the basic components of a VPC, you can start routing traffic in interesting
ways. In this section, you learn about different networking options.
Day-4:
You can build security into your VPC architecture in several ways so that you have complete control over
both incoming and outgoing traffic. In this section, you learn about two Amazon VPC firewall options that
you can use to secure your VPC: security groups and network access control lists (network ACLs).
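A firewall rule check of the kind a security group performs can be sketched as follows. The rules below are hypothetical (HTTPS open to everyone, SSH restricted to one CIDR); note that real security groups are also stateful, which this sketch does not model.

```python
# Sketch of a security-group-style inbound rule check: traffic is allowed
# only if some rule matches its protocol, port, and source CIDR. The rules
# are hypothetical, and real security groups are additionally stateful.
import ipaddress

rules = [
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},      # HTTPS: anyone
    {"protocol": "tcp", "port": 22, "source": "203.0.113.0/24"},  # SSH: one CIDR
]

def inbound_allowed(protocol, port, src_ip):
    ip = ipaddress.ip_address(src_ip)
    return any(r["protocol"] == protocol and r["port"] == port
               and ip in ipaddress.ip_network(r["source"])
               for r in rules)

print(inbound_allowed("tcp", 443, "198.51.100.7"))  # → True (HTTPS open to all)
print(inbound_allowed("tcp", 22, "198.51.100.7"))   # → False (SSH restricted)
```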
Day-5:
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is
designed to give developers and businesses a reliable and cost-effective way to route users to internet
applications by translating names (like www.example.com) into the numeric IP addresses (like 192.0.2.1) that
computers use to connect to each other.
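The name-to-address translation at the heart of a DNS service can be modeled as a simple lookup. The records below use documentation-reserved addresses and are invented; real DNS (and Route 53) is a distributed, hierarchical system rather than a single table.

```python
# Toy model of what a DNS service such as Route 53 does: translate a
# hostname into the IP address computers connect to. Records here are
# made up (documentation addresses); real DNS is distributed and
# hierarchical, not a single in-memory table.
records = {
    "www.example.com": "192.0.2.1",
    "api.example.com": "192.0.2.10",
}

def resolve(hostname):
    # A real resolver would recurse through the DNS hierarchy or forward
    # the query; this sketch just consults its local record table.
    return records.get(hostname)

print(resolve("www.example.com"))  # → 192.0.2.1
```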
Day-6:
The purpose of networking is to share information between connected resources. So far in this module,
you learned about VPC networking with Amazon VPC. You learned about the different options for connecting
your VPC to the internet, to remote networks, to other VPCs, and to AWS services. Content delivery occurs
over networks, too—for example, when you stream a movie from your favorite streaming service. In this final
section, you learn about Amazon CloudFront, which is a content delivery network (CDN) service.
ACTIVITY LOG FOR THE SIXTH WEEK

Day & Date | Brief description of the daily activity | Learning Outcome | Person In-Charge Signature
DAY 1, 03/07/2023 | Compute services overview | Compute services overview |
DAY 2, 04/07/2023 | Amazon EC2 | Amazon EC2 |
DAY 3, 05/07/2023 | Amazon EC2 cost optimization | Learn about Amazon EC2 cost optimization |
DAY 4, 06/07/2023 | Container services | Learn about container services |
DAY 5, 07/07/2023 | Introduction to AWS Lambda | Learn about the introduction to AWS Lambda |
DAY 6, 09/07/2023 | Introduction to AWS Elastic Beanstalk | Learn about the introduction to AWS Elastic Beanstalk |
WEEKLY REPORT
WEEK-6 (From 03/07/2023 to 09/07/2023)

The objective of the activity done: The main objective of this week's activities is to
clarify doubts and improve soft skills.
Detailed Report:

Day-1:
Section 2 includes a recorded Amazon EC2 demonstration. The end of this same section includes a hands-
on lab, where you will practice launching an EC2 instance by using the AWS Management Console. There is
also an activity in this section that has you compare advantages and disadvantages of running a database
deployment on Amazon EC2, running it on Amazon Relational Database Service (RDS). Section 5 includes a
hands-on AWS Lambda activity and section 6 includes a hands-on Elastic Beanstalk activity. Finally, you will
be asked to complete a knowledge check that will test your understanding of the key concepts that are covered
in this module.
Day-2:
Running servers on-premises is an expensive undertaking. Hardware must be procured, and this
procurement can be based on project plans instead of the reality of how the servers are used. Data centers are
expensive to build, staff, and maintain. Organizations also need to permanently provision a sufficient amount
of hardware to handle traffic spikes and peak workloads. After traditional on-premises deployments are built,
server capacity might be unused and idle for a significant portion of the time that the servers are running,
which is wasteful.
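The over-provisioning problem described above can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative; the capacity figures are hypothetical, not measurements from any real deployment.

```python
# Illustrative sketch: how much provisioned on-premises capacity sits idle
# when servers are sized for peak load. All figures are hypothetical.

def idle_capacity_fraction(provisioned_units: float, avg_used_units: float) -> float:
    """Fraction of provisioned capacity that is idle on average."""
    return 1 - (avg_used_units / provisioned_units)

# A fleet sized for a peak of 100 capacity units but averaging 30 units of use:
idle = idle_capacity_fraction(provisioned_units=100, avg_used_units=30)
print(f"{idle:.0%} of capacity is idle on average")
```

In this hypothetical case, 70% of the hardware spend is idle on average, which is the waste that pay-as-you-go cloud capacity avoids.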

Day-3:
The demonstration shows:
• How to use the AWS Management Console to launch an Amazon EC2 instance (with all the
default instance settings accepted).
• How to connect to the Windows instance by using a Remote Desktop client and the key pair
that was identified during instance launch to decrypt the Windows password for login.
• How to terminate the instance after it is no longer needed.

Day-4:
The objective of this activity is to demonstrate that you understand the differences between building a deployment that uses Amazon EC2 and using a fully managed service, such as Amazon RDS, to deploy your solution. At the end of this activity, you should be prepared to discuss the advantages and disadvantages of deploying Microsoft SQL Server on Amazon EC2 versus deploying it on Amazon RDS.
Day-5:
As you saw in the earlier sections of this module, AWS offers many compute options. For example, Amazon EC2 provides virtual machines, while Amazon ECS and Amazon EKS are container-based compute services. However, there is a third approach to compute that does not require you to provision or manage servers. This approach is often referred to as serverless computing.
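The serverless model can be illustrated by the shape of an AWS Lambda handler: you supply only a function, and the service invokes it with an event. The function below follows Lambda's Python handler signature, but the event fields are hypothetical, and it is invoked locally here only to show the contract; in AWS, the service would run it for you.

```python
# A minimal AWS Lambda-style handler in Python. Lambda calls a function
# with an event (a dict) and a context object; you write only this code,
# and AWS provisions and manages the servers that run it.

def lambda_handler(event, context):
    # Hypothetical event shape: {"name": "..."}
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local invocation to illustrate the contract (the context is unused here):
response = lambda_handler({"name": "AWS Academy"}, None)
print(response["body"])
```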

Day-6:
AWS Elastic Beanstalk is another AWS compute service option. It is a platform as a service (PaaS) that facilitates the quick deployment, scaling, and management of your web applications and services. The entire platform is already built, and you only need to upload your code, yet you remain in control: choose your instance type and your database, set and adjust automatic scaling, update your application, access the server log files, and enable HTTPS on the load balancer.
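The settings mentioned above can be supplied declaratively alongside your code. As a hedged illustration, Elastic Beanstalk reads YAML configuration files from an `.ebextensions` directory in the application source bundle; the sketch below shows the general `option_settings` shape, with hypothetical sizing values.

```yaml
# Hypothetical .ebextensions/scaling.config — a sketch of Elastic Beanstalk
# option_settings. Namespaces follow the documented format; the MinSize,
# MaxSize, and load balancer values here are illustrative choices only.
option_settings:
  aws:autoscaling:asg:
    MinSize: 1
    MaxSize: 4
  aws:elasticbeanstalk:environment:
    LoadBalancerType: application
```

With a file like this in the bundle, the environment would scale between the stated bounds without any servers being managed by hand.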
ACTIVITY LOG FOR THE SEVENTH AND EIGHTH WEEK

Day & Date | Brief Description of the daily activity | Learning Outcome | Person In-charge Signature

DAY 1, 10/07/2023 | Amazon Elastic Block Store (Amazon EBS) | Learnt about Amazon EBS
DAY 2, 11/07/2023 | Amazon Simple Storage Service Glacier | Learnt about Amazon S3 Glacier
DAY 3, 12/07/2023 | Amazon Relational Database Service (Amazon RDS) | Learnt about Amazon RDS
DAY 4, 13/07/2023 | Amazon DynamoDB | Learnt about Amazon DynamoDB
DAY 5, 14/07/2023 | Amazon Redshift | Learnt about Amazon Redshift
DAY 6, 16/07/2023 | Amazon Aurora | Learnt about Amazon Aurora
WEEKLY REPORT
WEEK-7&8 (From 10/07/2023 to 16/07/2023)

The objective of the activity done: The main objective of this week's activities is to clarify doubts and improve soft skills.
Detailed Report:
Day-1:
Amazon EBS provides persistent block storage volumes for use with Amazon EC2 instances. Persistent storage is any data storage device that retains data after power to the device is shut off; it is also sometimes called non-volatile storage. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure. It is designed for high availability and durability. Amazon EBS volumes provide the consistent performance that is needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes, while paying a low price for only what you provision.
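The "pay for what you provision" model above can be sketched with simple arithmetic: the monthly charge depends on the provisioned volume size, not on how much of it is filled. The per-GiB rate below is hypothetical, not a real AWS price.

```python
# Illustrative EBS cost sketch: you pay for provisioned capacity, not for
# the data actually stored. The per-GiB-month rate is hypothetical.

def monthly_ebs_cost(provisioned_gib: int, price_per_gib_month: float) -> float:
    """Monthly cost of a provisioned volume, regardless of how full it is."""
    return provisioned_gib * price_per_gib_month

# A 500 GiB volume at a hypothetical $0.08 per GiB-month costs the same
# whether it holds 10 GiB or 490 GiB of data:
print(monthly_ebs_cost(500, 0.08))
```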

Day-2:
The demonstration shows how to configure the following resources by using the AWS Management Console:
• Create an Amazon Glacier vault.
• Upload archived items to the vault by using a third-party graphical interface tool.
In this educator-led activity, you will be asked to log in to the AWS Management Console. The activity instructions are on the next slide. You will be challenged to answer five questions. The educator will lead the class in a discussion of each question and reveal the correct answers.

Day-3:
AWS solutions typically fall into one of two categories: unmanaged or managed. Unmanaged services are typically provisioned in discrete portions as specified by the user. You must manage how the service responds to changes in load, errors, and situations where resources become unavailable. Say that you launch a web server on an Amazon Elastic Compute Cloud (Amazon EC2) instance. Because Amazon EC2 is an unmanaged solution, that web server will not scale to handle increased traffic load or replace unhealthy instances with healthy ones unless you configure it to use a scaling solution, such as AWS Auto Scaling. The benefit of using an unmanaged service is that you have more fine-tuned control over how your solution handles changes in load, errors, and situations where resources become unavailable.

Day-4:
Amazon DynamoDB is a managed service: Amazon manages all the underlying data infrastructure for it and redundantly stores data across multiple facilities in an AWS Region as part of its fault-tolerant architecture. With DynamoDB, you can create tables and items, and you can add items to a table. The service automatically partitions your data and provisions table storage to meet workload requirements. There is no practical limit on the number of items that you can store in a table; for instance, some customers have production tables that contain billions of items.
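The table-and-item model described above can be sketched in plain Python. This is an in-memory stand-in for illustration only, not the real DynamoDB client API; the table name, partition key, and item attributes are all hypothetical.

```python
# Illustrative model of DynamoDB's table/item concepts: a table stores
# items (dicts of attributes) indexed by a partition key. This is a plain
# in-memory stand-in, not the real DynamoDB service or client.

class Table:
    def __init__(self, name, partition_key):
        self.name = name
        self.partition_key = partition_key
        self._items = {}  # the real service partitions this across storage nodes

    def put_item(self, item):
        """Store an item, keyed by its partition-key attribute."""
        self._items[item[self.partition_key]] = item

    def get_item(self, key_value):
        """Return the item for a partition-key value, or None if absent."""
        return self._items.get(key_value)

# Hypothetical table and item:
music = Table("Music", partition_key="SongId")
music.put_item({"SongId": "s1", "Title": "Track One", "Plays": 42})
print(music.get_item("s1")["Title"])
```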

Day-5:

Analytics is important for businesses today, but building a data warehouse is complex and expensive.
Data warehouses can take months and significant financial resources to set up. Amazon Redshift is a fast and
powerful, fully managed data warehouse that is simple and cost-effective to set up, use, and scale. It enables
you to run complex analytic queries against petabytes of structured data by using sophisticated query
optimization, columnar storage on high-performance local disks, and massively parallel data processing. Most
results come back in seconds.
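The columnar-storage idea mentioned above can be sketched as follows: storing each column contiguously lets an analytic query scan only the columns it needs, rather than every field of every row. The table, column names, and values below are hypothetical.

```python
# Illustrative sketch of columnar storage: the same records stored
# row-wise vs column-wise. An aggregate like SUM(amount) only needs to
# scan one contiguous column in the columnar layout.

rows = [
    {"order_id": 1, "region": "EU", "amount": 10.0},
    {"order_id": 2, "region": "US", "amount": 25.0},
    {"order_id": 3, "region": "EU", "amount": 5.0},
]

# Column-wise layout: one contiguous list per column.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# SUM(amount) touches a single column, not every row's other fields.
total = sum(columns["amount"])
print(total)
```

This is the intuition behind why columnar engines answer wide analytic scans quickly; the real service adds compression, distribution across nodes, and parallel execution.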

Day-6:
Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database that is built for the cloud. It combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Using Amazon Aurora can reduce your database costs while improving the reliability and availability of the database. As a fully managed service, Aurora is designed to automate time-consuming tasks like provisioning, patching, backup, recovery, failure detection, and repair.
ACTIVITY LOG FOR THE NINTH AND TENTH WEEK

Day & Date | Brief Description of the daily activity | Learning Outcome | Person In-charge Signature

DAY 1, 17/07/2023 | AWS Well-Architected Framework | Learnt about the AWS Well-Architected Framework
DAY 2, 18/07/2023 | Reliability and high availability | Learnt about reliability and high availability
DAY 3, 19/07/2023 | AWS Trusted Advisor | Learnt about AWS Trusted Advisor
DAY 4, 20/07/2023 | Elastic Load Balancing | Learnt about Elastic Load Balancing
DAY 5, 22/07/2023 | Amazon CloudWatch | Learnt about Amazon CloudWatch
DAY 6, 23/07/2023 | Amazon EC2 Auto Scaling | Learnt about Amazon EC2 Auto Scaling
WEEKLY REPORT
WEEK-9&10 (From 17/07/2023 to 23/07/2023)

The objective of the activity done: The main objective of this week's activities is to clarify doubts and improve soft skills.
Detailed Report:

Day-1:
Architecture is the art and science of designing and building large structures. Large systems require
architects to manage their size and complexity.
Cloud architects:
• Engage with decision makers to identify the business goal and the capabilities that need
improvement.
• Ensure alignment between technology deliverables of a solution and the business goals.
• Work with delivery teams that are implementing the solution to ensure that the technology
features are appropriate.
Having well-architected systems greatly increases the likelihood of business success.
Day-2:
In the words of Werner Vogels, Amazon’s CTO, “Everything fails, all the time.” One of the best practices
that is identified in the AWS Well-Architected Framework is to plan for failure (or application or workload
downtime). One way to do that is to architect your applications and workloads to withstand failure. There are
two important factors that cloud architects consider when designing architectures to withstand failure:
reliability and availability.
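Designing to withstand failure can be made concrete with the standard availability arithmetic: if independent redundant components each have availability a, the system is down only when all of them fail at once, so the combined availability is 1 − (1 − a)^n. A small sketch, with hypothetical availability figures:

```python
# Combined availability of N independent redundant components, each with
# availability `a`: the system fails only if every component fails at the
# same time, so combined availability = 1 - (1 - a)**n.

def combined_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

# One server at 99% availability vs two redundant servers at 99% each:
print(f"{combined_availability(0.99, 1):.4%}")
print(f"{combined_availability(0.99, 2):.4%}")
```

Redundancy is why architecting across multiple instances and Availability Zones raises availability so sharply: two hypothetical 99% components together reach 99.99%.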
Day-3:
As you have learned so far, you can use the AWS Well-Architected Framework as you design your
architectures to understand potential risks in your architecture, identify areas that need improvement, and drive
architectural decisions. In this section, you will learn about AWS Trusted Advisor, which is a tool that you can use to review your AWS environment as soon as you start implementing your architectures.
Day-4:
Modern high-traffic websites must serve hundreds of thousands—if not millions—of concurrent requests
from users or clients, and then return the correct text, images, video, or application data in a fast and reliable
manner. Additional servers are generally required to meet these high volumes. Elastic Load Balancing is an
AWS service that distributes incoming application or network traffic across multiple targets—such as Amazon
Elastic Compute Cloud (Amazon EC2) instances, containers, internet protocol (IP) addresses, and Lambda
functions—in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing scales
your load balancer as traffic to your application changes over time. It can automatically scale to most
workloads.
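The traffic-distribution behaviour described above can be sketched with a minimal round-robin dispatcher. The target names are hypothetical instance IDs; real Elastic Load Balancing additionally health-checks targets, supports several routing algorithms, and balances across Availability Zones.

```python
from itertools import cycle

# Minimal round-robin distribution sketch: successive requests are spread
# across registered targets in turn. This is only the core idea; a real
# load balancer also removes unhealthy targets from rotation.

targets = ["i-0aaa", "i-0bbb", "i-0ccc"]   # hypothetical EC2 instance IDs
next_target = cycle(targets)

# Dispatch six incoming requests:
assignments = [next(next_target) for _ in range(6)]
print(assignments)
```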
Day-5:
Amazon CloudWatch is a monitoring and observability service that is built for DevOps engineers,
developers, site reliability engineers (SRE), and IT managers. CloudWatch monitors your AWS resources (and
the applications that you run on AWS) in real time. You can use CloudWatch to collect and track metrics,
which are variables that you can measure for your resources and applications.
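Collecting and tracking metrics, as described above, amounts to recording datapoints for a resource and reducing them to statistics over a period. The sketch below models that locally with hypothetical CPU samples; it is not the CloudWatch API itself.

```python
# Illustrative sketch of metric statistics: CloudWatch-style datapoints
# reduced to Average and Maximum over a period. Values are hypothetical
# CPUUtilization samples (percent).

cpu_datapoints = [12.0, 55.0, 71.0, 38.0, 64.0]

average = sum(cpu_datapoints) / len(cpu_datapoints)
maximum = max(cpu_datapoints)

print(f"Average: {average}%  Maximum: {maximum}%")
```

Statistics like these are what alarms and scaling policies evaluate against a threshold.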
Day-6:
When you run your applications on AWS, you want to ensure that your architecture can scale to handle
changes in demand. In this section, you will learn how to automatically scale your EC2 instances with Amazon
EC2 Auto Scaling.
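A scaling policy of the kind this section introduces can be sketched as a simple rule: compare a metric to thresholds and adjust the desired capacity within minimum and maximum bounds. The thresholds and fleet sizes below are hypothetical; real Amazon EC2 Auto Scaling evaluates CloudWatch alarms to make this decision.

```python
# Simplified sketch of a scaling decision: scale out when average CPU is
# high, scale in when it is low, always clamped to [min_size, max_size].
# Thresholds and sizes are hypothetical illustrations.

def desired_capacity(current: int, avg_cpu: float,
                     min_size: int = 1, max_size: int = 10,
                     high: float = 70.0, low: float = 30.0) -> int:
    if avg_cpu > high:
        current += 1   # scale out under load
    elif avg_cpu < low:
        current -= 1   # scale in when idle
    return max(min_size, min(max_size, current))

print(desired_capacity(current=2, avg_cpu=85.0))   # load is high: add an instance
print(desired_capacity(current=1, avg_cpu=10.0))   # idle, but clamped at min_size
```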
CONCLUSION:

Claim your badge


 You will receive an email from Amazon Web Services Training and Certification via Credly
<[email protected]> with a link to claim your digital badge within 24 hours.
 Click the link in the email and follow the instructions.
 Then share your badge on your LinkedIn or other social media profile to let peers and potential
employers know about your accomplishment.
 Once you accept your badge, you can also download a certificate of completion.
 If you have any questions about your badge, please contact Credly support.

AWS Academy is excited to welcome you into our growing global community of students,
graduates, and lifelong learners of cloud computing.
Student Self Evaluation of the Short-Term Internship

Student Name: P.Sudharshan Rao


Registration No:218A1A4457
Term of Internship: 2 Months
From: MAY 2023 To: AUGUST 2023
Date of Evaluation:
Organization Name & Address: RISE KRISHNA SAI PRAKASAM GROUP OF INSTITUTIONS

Please rate your performance in the following areas:

Rating Scale: Letter grade of CGPA calculation to be provided

1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5

Date: Signature of the Student


Evaluation by the Supervisor of the Intern Organization

Student Name: P.Sudharshan Rao Registration No:218A1A4457

Term of Internship:
From: To:
Date of Evaluation

Please rate the student’s performance in the following areas:

Please note that your evaluation shall be done independent of the Student’s self evaluation.

Rating Scale: 1 is lowest and 5 is highest rank

1 Oral communication 1 2 3 4 5
2 Written communication 1 2 3 4 5
3 Proactiveness 1 2 3 4 5
4 Interaction ability with community 1 2 3 4 5
5 Positive Attitude 1 2 3 4 5
6 Self-confidence 1 2 3 4 5
7 Ability to learn 1 2 3 4 5
8 Work Plan and organization 1 2 3 4 5
9 Professionalism 1 2 3 4 5
10 Creativity 1 2 3 4 5
11 Quality of work done 1 2 3 4 5
12 Time Management 1 2 3 4 5
13 Understanding the Community 1 2 3 4 5
14 Achievement of Desired Outcomes 1 2 3 4 5
15 OVERALL PERFORMANCE 1 2 3 4 5

Date: Signature of the Supervisor


PHOTOS&VIDEO LINKS
