Cloud Computing NOTES

Chapter 1 : Introduction to Cloud Computing

Q1. Define cloud computing and enlist its advantages and disadvantages (5) May 2023
1. Cloud computing is the on-demand availability of computer system resources, especially data storage
and computing power, without direct active management by the user.
2. Large clouds often have functions distributed over multiple locations, each of which is a data centre.
3. Cloud computing relies on sharing of resources to achieve coherence and typically uses a pay-as-you-go model, which can help in reducing capital expenses.
Advantages:
 Cost savings through pay-as-you-go pricing.
 On-demand scalability and elasticity.
 Accessibility from anywhere with an internet connection.
 Automatic software updates and easy backups.

Disadvantages:
 Dependence on internet connectivity.
 Security and privacy concerns with third-party hosting.
 Limited control over the underlying infrastructure.
 Risk of vendor lock-in.
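The pay-as-you-go idea from point 3 can be made concrete with a small sketch. The hourly rate and hardware cost below are illustrative assumptions, not real AWS prices:

```python
# Hypothetical cost sketch: pay-as-you-go cloud vs. upfront on-premises hardware.
# Both figures are illustrative assumptions, not real provider pricing.

HOURLY_RATE = 0.10          # assumed cost per server-hour in the cloud
ON_PREM_CAPEX = 5000.0      # assumed upfront hardware cost for one server

def cloud_cost(hours_used: float) -> float:
    """Pay only for the hours actually consumed."""
    return hours_used * HOURLY_RATE

def breakeven_hours() -> float:
    """Hours of usage at which cloud spend equals the on-prem capital expense."""
    return ON_PREM_CAPEX / HOURLY_RATE

print(cloud_cost(720))       # one month of continuous use
print(breakeven_hours())     # hours before cloud spend matches the capex
```

Under these assumed numbers, a server used only part-time costs far less in the cloud, which is exactly the capital-expense reduction point 3 describes.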

Q2. Define the components of NIST Model. (5) Dec 2023


1. The National Institute of Standards and Technology (NIST) defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources."
2. These resources "can be rapidly provisioned and released with minimal management effort or service provider interaction."
3. The NIST definition lists five essential characteristics of cloud computing: on-demand self- service,
broad network access, resource pooling, rapid elasticity or expansion, and measured service.
4. It also lists three "service models" (software, platform and infrastructure), and four "deployment
models" (private, community, public and hybrid) that together categorize ways to deliver cloud
services.
Q3. What is the significance of cloud computing with respect to on premise infrastructure? Justify
with example (5) Dec 2023
1. Cloud computing differs from on-premises software in one critical way.
2. A company hosts everything in-house in an on-premise environment, while in a cloud environment, a
third-party provider hosts all that for you.
3. This allows companies to pay on an as-needed basis and effectively scale up or down depending on
overall usage, user requirements, and the growth of a company.
4. A cloud-based server utilizes virtual technology to host a company’s applications offsite.
5. There are no capital expenses, data can be backed up regularly, and companies only have to pay for the
resources they use.
Example:
As an example, EDI software has traditionally been hosted on-premises, but recent cloud computing developments have allowed EDI providers to offer their services via an EDI SaaS model.
This development has saved installation costs for clients and also enabled software companies to create a recurring revenue model charged on an annual basis.

Q4. What are various components in cloud computing architecture (5) Dec 2023

1. Frontend :
The frontend of the cloud architecture refers to the client side of the cloud computing system. It contains all the user interfaces and applications used by the client.
 Client Infrastructure – Client Infrastructure is a part of the frontend component. It contains the
applications and user interfaces which are required to access the cloud platform.
2. Backend :
Backend refers to the cloud itself which is used by the service provider. It contains the resources as well as
manages the resources and provides security mechanisms.
 Application –
The backend application is the software or platform that the client accesses. It provides the service in the backend as per the client's requirements.
 Service –
Service in the backend refers to the three major types of cloud-based services: SaaS, PaaS, and IaaS. It also manages which type of service the user accesses.
 Runtime Cloud –
The runtime cloud in the backend provides the execution and runtime environment to the virtual machines.
 Storage –
Storage in backend provides flexible and scalable storage service and management of stored data.
 Infrastructure –
Cloud infrastructure in the backend refers to the hardware and software components of the cloud, including servers, storage, network devices, virtualization software, etc.
Q5. Cloud Cube Model (5) Dec 2023
The Cloud Cube Model, designed and developed by the Jericho Forum, helps to categorize a cloud network based on four dimensions: Internal/External, Proprietary/Open, De-Perimeterized/Perimeterized, and Insourced/Outsourced.

As the name suggests, the model categorizes cloud working along four dimensions:
(1) Physical Location of Data: The location of data may be internally or externally to the organization which
ultimately defines the organization's boundary.
(2) Ownership: Ownership may be proprietary or open. It is a measure of not only technology ownership but also
its interoperability, use of data, ease of data-transfer and degree of vendor application lock-in.
(3) Security Range: It is perimeterized or de-perimeterized. It measures whether the operations are performed
inside or outside the security boundary, firewall, etc.
(4) Sourcing: It is insourced or outsourced, which defines whether the service is provided by the customer's own
staff or by the service provider.
Chapter 2 : Virtualization

Q1. What is Virtualization? Explain pros and cons of virtualization in detail (5) May 2023
1. In cloud computing, virtualization refers to preparing a virtual version of a server, a desktop, a storage
device, an operating system, or network resources.
2. This approach allows a single physical instance of an application or resource to be shared among
multiple organizations or customers.
3. It does this by assigning a logical name to a physical resource and providing a pointer to that physical
resource when demanded.
4. It helps to separate the service from its physical delivery.
5. As a result of this technique, multiple operating systems and applications can be run on the same
machine and hardware at the same time.

Pros:
 Uses Hardware Efficiently
 Recovery is Easy
 Cloud Migration is Easier
 Available at all Times
 Quick and Easy Setup

Cons:
 High Initial Investment
 Quick Scalability is a Problem
 Unintended Server Sprawl
 Data can be at Risk
 Performance Witnesses a Dip
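The logical-name-to-physical-resource mapping described in Q1 can be sketched in a few lines. The volume names and device paths here are hypothetical:

```python
# Minimal sketch of the "logical name -> pointer to physical resource" idea
# behind storage virtualization. Names and device paths are made up.

class StorageVirtualizer:
    """Maps logical volume names to physical storage locations."""

    def __init__(self):
        self._mapping = {}

    def register(self, logical_name: str, physical_path: str) -> None:
        # The consumer never sees the physical path directly.
        self._mapping[logical_name] = physical_path

    def resolve(self, logical_name: str) -> str:
        # A pointer to the physical resource is returned only on demand,
        # separating the service from its physical delivery.
        return self._mapping[logical_name]

v = StorageVirtualizer()
v.register("vol-app-data", "/dev/sdb1")   # hypothetical backing device
print(v.resolve("vol-app-data"))
```

Because consumers address only logical names, the physical resource behind a name can be moved or shared without the consumer noticing.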

Q2. Explain the architecture of Xen Hypervisor in detail (10) May 2023
1. Xen is an open source hypervisor program developed by Cambridge University.
2. Xen is a microkernel hypervisor, which separates the policy from the mechanism.
3. Xen provides a virtual environment located between the hardware and the OS.
4. A number of vendors are in the process of developing commercial Xen hypervisors; among them are
Citrix XenServer and Oracle VM.
5. The core components of a Xen system are the hypervisor, kernel, and applications.

Working:
 Xen Architecture works by allowing multiple guest operating systems (OSes) to run on top of a
hypervisor.
 The primary OS with control privileges is called Domain 0, responsible for managing hardware
resources and allocating them to other guest OSes, known as Domain U.
 However, if Domain 0 is compromised, it poses a significant security risk to the entire system.
 Despite this risk, Domain 0, functioning as a Virtual Machine Monitor (VMM), offers users the
ability to easily create, manipulate, and manage virtual machines (VMs).
Q3. Explain different implementation levels of virtualization along with its structures (10) May 2023

1) Instruction Set Architecture Level (ISA):


ISA virtualization can work through ISA emulation. This is used to run many legacy codes written for a
different hardware configuration.
2) Hardware Abstraction Level (HAL)
True to its name, HAL performs virtualization at the level of the hardware, making use of a hypervisor for
its functioning.
3) Operating System Level
This creates isolated containers on the operating system and the physical server, which share the underlying
software and hardware. Each container then functions as an independent server.
4) Library Level
The applications use the API from the libraries at a user level. These APIs are documented well, and this is
why the library virtualization level is preferred in these scenarios. API hooks make it possible as it controls
the link of communication from the application to the system.
5) Application Level
The application-level virtualization is used when there is a desire to virtualize only one application and is
the last of the implementation levels of virtualization in Cloud Computing. One does not need to virtualize
the entire environment of the platform.
Q4. Virtualization vs Cloud Computing (5) May 2023

Q5. What is the job of Xen Hypervisor? How is it helpful for large scale industries? (10) Dec 2023
Refer Q.6 Bare Metal ( Xen Hypervisor type 1).
How is it helpful for large scale industries:
The Xen hypervisor allows a company to consolidate multiple virtual machines into one hardware platform.
So there are savings of space and electricity which can result in real savings. It also allows you to share
resources so that instead of having say 6 physical systems, the 6 virtual machines can run closer to the
capacity of the larger host system since they won’t all be experiencing CPU spikes at the same time. If you
have some spare capacity you could even spin up additional machines during peak times. The virtual
machines can be more easily monitored and controlled using XenCenter.
Q6. Differentiate Hosted virtualization with Bare Metal. Also explain various mechanism of
virtualization with architecture. (10) Dec 2023
Bare Metal Virtualization/ TYPE-I Hypervisor
1. The hypervisor runs directly on the underlying host system.
2. It is also known as a "Native Hypervisor" or "Bare metal hypervisor".
3. It does not require any base server operating system.
4. It has direct access to hardware resources.
5. Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V
hypervisor.

Hosted Virtualization/ TYPE-2 Hypervisor


1. A Host operating system runs on the underlying host system.
2. It is also known as 'Hosted Hypervisor".
3. Such hypervisors do not run directly on the underlying hardware; rather, they run as an
application on a host system (physical machine).
4. Basically, the software is installed on an operating system, and the hypervisor asks the operating system to
make hardware calls.
5. Containers, KVM, Microsoft Hyper-V, VMware Fusion, Virtual Server 2005 R2, Windows Virtual PC
and VMware Workstation 6.0 are examples of Type 2 hypervisors.

Q7. Define KVM, CPU Virtualization and Memory.


Kernel-based Virtualization
1. Kernel-based Virtual Machine (KVM) is a virtualization infrastructure integrated into the Linux kernel.
It was first developed by Qumranet.
2. Instead of creating the hypervisor from scratch, the Linux kernel was used as its basis.
3. It relies on hardware support for virtualization.
4. It comprises a loadable kernel module (kvm.ko, providing the core virtualization infrastructure) and a
processor-specific module (either kvm-intel.ko or kvm-amd.ko).
CPU Virtualization:
1. A single CPU can run numerous operating systems (OS) via CPU virtualization in cloud computing.
2. This is possible by creating virtual machines (VMs) that share the physical resources of the CPU.
3. Each Virtual Machine can’t see or interact with each other’s data or processes.
4. CPU virtualization is very important in cloud computing. It enables cloud providers to offer services
like –Virtual private servers (VPSs), Cloud storage (EBS), Cloud computing platforms (AWS,
Azure and Google Cloud)
Memory Virtualization:
1. Memory virtualization in cloud computing is the abstraction of the physical memory resources of a
server and the presentation of a virtualized memory space to the applications running on the server.
2. In cloud computing, memory virtualization is achieved using a hypervisor or virtual machine monitor
(VMM), which creates and manages virtual machines (VMs) on the server.
3. The hypervisor allocates and de-allocates memory resources to the VMs as required, and provides a
virtualized memory space to each VM.
4. Memory virtualization allows multiple VMs to share the same physical memory resources, increasing
hardware utilization and reducing costs.
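The allocate/de-allocate behaviour of the hypervisor described in points 2–4 can be modelled with a tiny allocator. The pool size and VM sizes below are illustrative:

```python
# Sketch of hypervisor-style memory management: a fixed pool of physical
# memory shared among VMs, allocated and de-allocated on demand.
# All sizes (MiB) are illustrative.

class MemoryHypervisor:
    def __init__(self, physical_mib: int):
        self.physical_mib = physical_mib
        self.allocations = {}          # vm_name -> MiB allocated

    def allocate(self, vm: str, mib: int) -> bool:
        if self.free_mib() >= mib:
            self.allocations[vm] = self.allocations.get(vm, 0) + mib
            return True
        return False                   # not enough physical memory left

    def deallocate(self, vm: str) -> None:
        self.allocations.pop(vm, None)

    def free_mib(self) -> int:
        return self.physical_mib - sum(self.allocations.values())

hv = MemoryHypervisor(8192)            # one host with 8 GiB physical memory
hv.allocate("vm1", 4096)
hv.allocate("vm2", 2048)
print(hv.free_mib())                   # remaining shared pool
print(hv.allocate("vm3", 4096))        # False: would exceed physical memory
```

Real hypervisors add techniques like overcommit and ballooning on top of this basic bookkeeping, but the shared-pool principle is the same.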

Q8. Explain Kernel based virtualization with the help of its architecture.

1. KVM emulates virtual devices like network interfaces and hard disks.
2. A paravirtualized driver, like virtio, can also be employed to improve I/O performance.
3. Since a VM is simply a process, the standard Linux process management tools can be used to manage it.
4. One can destroy, pause, and resume a VM with the kill command and view resource usage with the
top command.
5. The VM belongs to the user who started it, and all accesses to it are verified by the kernel.
6. Hence, system administrators can manage VMs with existing tools available in Linux.
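Points 3–4 can be demonstrated with a stand-in child process instead of a real guest. This POSIX-only sketch uses `sleep` in place of a qemu-kvm process, but the signals are exactly what `kill` would send to a VM:

```python
# A KVM guest is just a Linux process, so ordinary process tools apply.
# Here a `sleep` child stands in for the VM process (POSIX only).
import signal
import subprocess

proc = subprocess.Popen(["sleep", "60"])   # stand-in for a qemu-kvm process

proc.send_signal(signal.SIGSTOP)   # "pause" the VM
proc.send_signal(signal.SIGCONT)   # "resume" the VM
proc.terminate()                   # "destroy" the VM (SIGTERM)
proc.wait()
print(proc.returncode)             # negative: killed by that signal number
```

With a real guest you would target the qemu-kvm PID (visible in `top`) the same way, which is why no special management tooling is strictly required.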
Chapter 3 : Cloud Computing Services

Q1. Describe the service models and deployment models of cloud computing with their advantages
and disadvantages (10) May 2023
Service models of Cloud Computing:
1. Software as a Service (SaaS):
SaaS is also known as "on-demand software". It is a software in which the applications are hosted by a
cloud service provider. Users can access these applications with the help of internet connection and web
browser.
Advantages Disadvantages
Centralized management of data. Browser based risks.
Modest software tools. Lack of portability.

2. Platform as a Service(PaaS):
It provides virtual platforms and tools to create, test, and deploy apps. It provides runtime environments and
deployment tools for applications.
Advantages Disadvantages
Lower administrative overhead. Lack of portability.
Easily scalable. Middleware can pose a security risk

3. Infrastructure as a Service(IaaS):
It provides a virtual data center to store information and create platforms for app development, testing, and
deployment. It provides access to resources such as virtual machines, virtual storage, etc.
Advantages Disadvantages
User gets control over entire operation. Security threats from VMs.
Pay-per-use model is budget-friendly and cost-effective. Legacy systems may not be able to integrate.

Deployment models of Cloud Computing:


1. Public Cloud: The public cloud makes it possible for anybody to access systems and services. The
public cloud may be less secure as it is open to everyone.
Advantages Disadvantages
Minimal Investment. Data Security and Privacy Concerns.
No Hardware Setup. Reliability Issues.

2. Private Cloud: It’s a one-on-one environment for a single user (customer). There is no need to share
your hardware with anyone else.
Advantages Disadvantages
Data Privacy and better security. Higher Cost.
Supports Legacy Systems. Fixed Scalability.

3. Community Cloud: It allows systems and services to be accessible by a group of organizations. It is a
distributed system that is created by integrating the services of different clouds to address the specific
needs of a community, industry, or business.
Advantages Disadvantages
Smaller Investment. Shared Resources.
Setup Benefits. Not as Popular.

4. Hybrid Cloud: It combines the capabilities of Private and Public cloud. Organizations can move data
and applications between different clouds using a combination of two or more cloud deployment
methods, depending on their needs.
Advantages Disadvantages
Flexibility. Complexity.
Security. Specific Use Case.
Q2. Explain any five services of Everything as a service (XaaS) (10) / Anything as a Service (5)
Dec 2023 May 2023
The Five services of Everything as a Service (Xaas) are as follows:
1. Database as a Service (DBaaS): A cloud computing managed service model that enables users to set up,
operate, manage and scale a database without the need to set it up on physical hardware, install or
configure it for performance, or handle database management themselves.
2. Collaboration as a Service (CaaS):
Cloud collaboration is a way for employees to work together on documents and other files that are
stored off-premises or outside the company's firewall. When a user creates or uploads a file online and
shares access with others to work on it together, this is called cloud collaboration.
3. Storage as a Service (STaas):
Storage as a Service, also known by the acronym "STaaS," is a managed service where a storage
provider supplies a customer with storage space. In the STaaS model, the storage provider handles most
of the complex aspects of long- term bulk data storage — hardware costs, security, and data integrity.
4. Security as a Service (SECaaS):
Security as a Service (SECaaS) is a cloud computing model that delivers managed security services over
the internet. SECaaS is based on the Software as a Service (SaaS) model but is limited to specialized
information security services. It allows companies to use an external provider to handle and manage
cybersecurity.
5. Network as a Service (NaaS):
Network-as-a-Service (NaaS) is a cloud service model in which customers rent networking services
from cloud providers. NaaS allows customers to operate their own networks without maintaining their
own networking infrastructure. Amazon and Rackspace are examples of NaaS providers.

Q3. Storage as a Service (5) May 2023 Dec 2023


1. In cloud computing, Storage as a Service (STaaS) refers to renting data storage space from a cloud
service provider (CSP).
2. It allows you to store your data on the internet instead of managing your own physical storage devices.
3. In the STaaS model, the storage provider handles most of the complex aspects of long-term bulk data
storage — hardware costs, security, and data integrity.
4. STaaS provides elastic storage capacity. You can easily scale your storage space up or down as your
data needs fluctuate.
5. Types of Storage as a Service:
 Object-based storage—can be useful if your organization has a lot of cold data. This type of storage is
elastically scalable, and metadata is attached to every file for easy retrieval of information.
 File storage—organizes information into navigable hierarchies, such as file directories. It is suitable
for integration projects and legacy systems, and is compatible with cold or hot storage.
 Block storage—segments and partitions data into blocks to ensure fast access. It mimics the process of
writing data to a solid-state drive or standard hard drive. The higher the efficiency, the more expensive
it is.

Q4. Explain SPI Model of cloud computing in detail. May 2019


1. Software as a Service (SaaS):
 The Software-as-a-Service (SaaS) model provides software applications as a service to end users.
 Here, software is deployed on a hosting service and is accessible via the Internet.
 Google Workspace and Netflix are examples of SaaS applications.
 Some SaaS applications are not customizable, such as the Microsoft Office Suite.
 But SaaS provides an Application Programming Interface (API), which allows us to develop
customized applications.
2. Platform as a Service (PaaS):
 PaaS provides runtime environment for applications. It also offers development and deployment tools
required to develop applications.
 It is the computer platform that provides the facility to use web applications quickly.
 PaaS has a feature of a point-and-click tool that allows non-programmers to develop web applications.
 Google App Engine, Force.com, Windows Azure, AppFog, OpenShift, and VMware Cloud Foundry
are PaaS examples.
 Developer may log on to these websites and use the built-in API to create web-based applications.

3. Infrastructure as a Service (IaaS):


 IaaS provides access to fundamental resources such as physical machines, virtual machines, virtual
storage, etc.
 Apart from these resources, the IaaS also offers: Virtual machine disk storage, Virtual local area
network (VLANs), Load balancers, IP addresses, Software packages.
 All of the above resources are made available to end user via server virtualization.
 Moreover, these resources are accessed by the customers as if they own them. In a word, it is the only
layer of the cloud where the customer gets the platform for their organization to outsource IT
infrastructure on a pay-per-use basis.
Q5. Write short note on Analytic as a Service. May Dec 2019
1. Analytics-as-a-Service is the combination of analytics software and cloud technology.
2. Instead of hosting any analytics software on premises using your own servers, you use a ready-to-go
solution that is easy to deploy and most of the time has a pay-as-you-go payment system.
3. Analytics-as-a-Service (AaaS) provides subscription-based data analytics software procedures through
the cloud.
4. AaaS typically offers a fully customizable BI solution with end-to-end capabilities.
5. AaaS uses data mining, predictive analytics, and AI to effectively reveal trends and insights from
existing data sets.

Q6. Short note on Disaster Recovery as a Service. Dec 2019


1. Disaster recovery as a service (DRaaS) is a cloud computing service model that allows an
organization to back up its data and IT infrastructure in a third party cloud computing environment
and provide all the DR orchestration, all through a SaaS solution, to regain access and functionality
to IT infrastructure after a disaster.
2. Disaster Recovery as a Service (DRaaS) is disaster recovery hosted by a third party.
3. It involves replication and hosting of physical or virtual servers by the provider, to provide failover
in the event of a natural disaster, power outage, etc.
Benefits:
1. Cost reduction: A cloud DR solution is cost-effective as organizations do not require to duplicate
costly hardware and need to pay only for the services availed by the cloud service provider.
2. Reliable and easier implementation : Data restore functions can give rise to problems in the case
of physical backup using tape drives, disks, etc., unlike cloud restoration, which has a reliability of
at least 99%.
3. Faster recovery: Cloud-based DR drastically reduces the time for the Recovery Point Objective
(RPO) and Recovery Time Objective (RTO).
4. Scalability: Utilization of the Cloud DR services can be scaled up or down as per business
requirements, with payment required only for the actual use.
Chapter 4 : Amazon Web Service Cloud Platform

Q1. Compare between RDS and DynamoDB (5) May 2023


RDS | DynamoDB
1. Relational Database | NoSQL Database (Key-Value)
2. Structured data with tables, rows, and columns | Unstructured data with key-value pairs
3. Predefined schema required | Flexible schema, no predefined structure
4. Vertical scaling (increase instance size) and horizontal scaling (read replicas) | Automatic horizontal scaling based on workload
5. Good for predictable workloads and complex queries | Excellent for high-throughput, NoSQL queries
6. Pay per instance hour | Pay per read/write capacity unit and storage used
7. Familiar SQL interface and tools | Requires learning a new query language
8. Transactional applications, complex data relationships | Mobile backends, gaming applications, real-time analytics
9. Standard security features like IAM and encryption | Offers encryption at rest and in transit
10. More control over underlying infrastructure | Less control over underlying infrastructure

Q2. Explain EC2, S3, EBS, and Glacier services of AWS cloud platform (10) May 2023
1. EC2 (Elastic Compute Cloud):
 EC2 is a virtual rental space for computer servers.
 You can choose from a wide range of pre-configured server options or create your own.
 EC2 allows you to scale your computing power up or down as needed, paying only for the resources you
use.
2. S3 (Simple Storage Service):
 S3 is a massive, secure online storage locker.
 It's incredibly versatile, suitable for storing anything from application data and backups to websites and
media files.
 S3 offers different storage classes optimized for cost and access frequency, allowing you to choose the
most economical option for your data needs.
3. EBS (Elastic Block Store):
 EBS provides high-performance digital hard drives attached to your EC2 instances.
 It provides persistent block storage for your applications, meaning data remains even if you restart your
EC2 instance.
 EBS is ideal for storing data frequently accessed by your applications running on EC2.
4. Glacier (Glacier Deep Archive):
 Glacier is a secure, long-term storage vault in the cloud.
 It's perfect for archiving rarely accessed data that needs to be retained for compliance or historical
purposes.
 Glacier offers extremely low storage costs, making it a great option for data that doesn't require frequent
retrieval.

Q3. What is VPC? Describe the terms Elastic Network Interface, Internet Gateway, Route Table and
Security Group with respect to VPC (10) May 2023
VPC (Virtual Private Cloud):
VPC is a logically isolated network you create within the AWS cloud. It's like having your own private data
center within the broader AWS infrastructure.
You can define the IP address range for your VPC and control who can access it. This provides a high level
of security and control over your network resources.
1. Elastic Network Interface (ENI):
 ENI is a virtual network card attached to your EC2 instances within the VPC.
 It allows your instances to connect to the VPC and communicate with other resources.
 Each EC2 instance can have multiple ENIs, providing flexibility in network configuration.
2. Internet Gateway:
 Internet gateway is the 'on-ramp' connecting your VPC to the public internet.
 It allows resources in your VPC to initiate outbound traffic to the internet, for example, downloading
updates or accessing external APIs.
 However, an internet gateway does not allow inbound traffic from the internet to reach your VPC
resources by default. You need additional configuration for that.
3. Route Table:
 Route table is a traffic map for your VPC.
 It directs network traffic to the appropriate destination, whether within the VPC, to the internet gateway,
or to other VPCs.
 You can associate a route table with one or more subnets within your VPC.
4. Security Group:
 Security group is a virtual firewall for your resources in the VPC.
 It controls inbound and outbound network traffic, specifying which ports and protocols are allowed.
 You can assign security groups to ENIs or subnets, controlling the traffic flow to your resources.
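The route-table and security-group behaviour described above can be sketched with the standard `ipaddress` module. The CIDR blocks, targets, and port rules are hypothetical:

```python
# Sketch of two VPC concepts: longest-prefix routing (route table) and
# inbound rule checking (security group). CIDRs/ports are hypothetical.
import ipaddress

route_table = {
    "10.0.0.0/16": "local",             # traffic stays inside the VPC
    "0.0.0.0/0":   "internet-gateway",  # everything else exits via the IGW
}

def route(dest_ip: str) -> str:
    dest = ipaddress.ip_address(dest_ip)
    matches = [cidr for cidr in route_table
               if dest in ipaddress.ip_network(cidr)]
    # The most specific (longest) prefix wins, as in a real route table.
    best = max(matches, key=lambda c: ipaddress.ip_network(c).prefixlen)
    return route_table[best]

# Security group: the set of allowed inbound (protocol, port) pairs.
inbound_rules = {("tcp", 22), ("tcp", 443)}

def allow_inbound(protocol: str, port: int) -> bool:
    return (protocol, port) in inbound_rules

print(route("10.0.3.7"))         # stays local inside the VPC
print(route("93.184.216.34"))    # routed out through the internet gateway
print(allow_inbound("tcp", 80))  # HTTP not in the rule set -> blocked
```

Real security groups are stateful and support CIDR-based sources, but the allow-list evaluation shown here is the core idea.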

Q4. What is DynamoDB? State the features of DynamoDB (10) Dec 2023
1. DynamoDB is a NoSQL serverless database provided by AWS. It follows a key-value store structure and
adopts a distributed architecture for high availability and scalability.
2. As in any serverless system, there's no infrastructure provisioning needed. Dynamo offers two capacity
modes that can serve both highly variable and predictable workloads.
3. Data is organized in tables, which contain items. Each item contains a set of key-value pairs of
attributes.
4. There are two special types of attributes: the primary key, which works similarly to an ID, and the
sort key, which allows for ordering the items.
Features of DynamoDB:
 On-demand capacity mode: The applications using the on-demand service, DynamoDB automatically
scales up/down to accommodate the traffic.
 Built-in support for ACID transactions: DynamoDB provides native server-side support for
transactions.
 On-demand backup: This feature allows you to make an entire backup of your table at any given point
in time.
 Point-in-time recovery: This feature helps protect your data in case of accidental write or delete
operations.
 Encryption at rest: It keeps the info encrypted even when the table is not in use. This enhances security
with the assistance of encryption keys.
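The table/item/primary-key/sort-key structure from points 3–4 can be modelled with a toy in-memory table. This is an illustration of the data model, not the boto3 API, and all key and attribute names are hypothetical:

```python
# Toy model of DynamoDB's table structure: items grouped by a primary
# (partition) key and ordered within the group by a sort key.

class ToyTable:
    def __init__(self):
        self._items = {}   # partition_key -> {sort_key: attributes}

    def put_item(self, pk: str, sk: str, attributes: dict) -> None:
        self._items.setdefault(pk, {})[sk] = attributes

    def query(self, pk: str) -> list:
        """Return all items under one partition key, ordered by sort key."""
        rows = self._items.get(pk, {})
        return [rows[sk] for sk in sorted(rows)]

t = ToyTable()
t.put_item("user#42", "order#2024-02", {"total": 30})
t.put_item("user#42", "order#2024-01", {"total": 10})
print(t.query("user#42"))   # items come back sorted by the sort key
```

A real DynamoDB partitions these groups across many storage nodes by hashing the partition key, which is what gives it automatic horizontal scaling.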

Q5. Explain the various instances in EC2? Discuss AWS EC2 instance life cycle. (10) Dec 2023
Instances in EC2 are as follows:
1. General Purpose: These are versatile instances suitable for a wide range of applications, offering a
balanced combination of CPU, memory, and storage. Examples include M5, T3, and A1 instances.
2. Compute Optimized: Designed for applications requiring high processing power, these instances offer
powerful CPUs and are ideal for compute-intensive tasks like scientific simulations or video rendering.
Examples include C5 and C6g instances.
3. Memory Optimized: Prioritize memory capacity for applications that need to handle large datasets or
in-memory workloads. Examples include R5, X1, and R6gd instances.
4. Storage Optimized: Offer high storage capacity and throughput for data-intensive applications like
databases or log processing. Examples include D2 and I3 instances.
5. Accelerated Computing: These instances come with hardware accelerators like GPUs or FPGAs (Field-
Programmable Gate Arrays) to accelerate specific workloads like machine learning, video processing, or
scientific computing. Examples include P3, P4d, and F1 instances.
EC2 Life Cycle:

1. Pending: This is the initial state after you request to launch an EC2 instance. The AWS system is
allocating resources and preparing the instance for launch.
2. Running: Once the instance is successfully configured, it enters the running state. In this state, the
instance is operational and ready to run your applications or workloads. You are billed for EC2 instances
based on the instance type and running time, even if they are idle.
3. Stopped: You can stop a running EC2 instance to temporarily halt its operation. This saves costs as you
are not billed for stopped instances. However, the instance data (like attached EBS volumes) remains
preserved.
4. Terminated: When you no longer need an EC2 instance, you can terminate it. This permanently deletes
the instance and associated resources. Data on instance store volumes (ephemeral storage) is lost upon
termination.
5. Hibernate (EBS-backed instances only): This is a power-saving state available for instances with EBS
volumes attached as their root disk. Hibernation saves the instance's memory state to the EBS volume and
then shuts down the instance.
6. Reboot: You can reboot a running EC2 instance to restart the operating system. This can be helpful for
troubleshooting purposes or applying software updates. The instance data persists during a reboot.
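The life cycle above can be summarised as a small state machine. This is a simplified illustration of the states described in the answer, not an exhaustive model of EC2 (transitional states like "stopping" are omitted):

```python
# Simplified EC2 instance life cycle as a state machine.
# Action names are our own labels for the transitions described above.

TRANSITIONS = {
    ("pending", "launch_complete"): "running",
    ("running", "stop"):            "stopped",
    ("running", "reboot"):          "running",    # data persists
    ("running", "hibernate"):       "stopped",    # memory saved to EBS first
    ("stopped", "start"):           "running",
    ("running", "terminate"):       "terminated",
    ("stopped", "terminate"):       "terminated",
}

def step(state: str, action: str) -> str:
    # Invalid actions (e.g. starting a terminated instance) leave the
    # state unchanged, mirroring that termination is permanent.
    return TRANSITIONS.get((state, action), state)

state = "pending"
for action in ["launch_complete", "stop", "start", "terminate"]:
    state = step(state, action)
print(state)
```

Tracing the sequence: pending becomes running, then stopped, running again, and finally terminated, from which no action can recover it.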

Q6. Explain AWS S3 Storage and Glacier Storage with comparison between them. (10) Dec 2023
S3 (Simple Storage Service):
1. Amazon S3 (Simple Storage Service) provides object storage, which is built for storing and
recovering any amount of information or data from anywhere over the internet.
2. It provides this storage through a web services interface.
3. It can also store computer files up to 5 terabytes in size.
4. Files are stored in Bucket. A bucket is like a folder available in S3 that stores the files.
Amazon Glacier:
1. AWS Glacier is a long-term and low-cost cloud storage solution, optimized for infrequently used data,
otherwise known as "cold data".
2. It is an economical online storage solution by AWS and, like S3, it is simple, secure, cloud-based
data storage that can be easily scaled up and down as needed.
3. But unlike S3, AWS Glacier is designed for long-term storage of cold data that will not be needed
frequently.
4. It stores the data across multiple availability zones before it confirms the successful upload.
Comparison:
Amazon S3 is a durable, secure, simple, and fast storage service, while Amazon S3 Glacier is used for
archiving solutions. S3 offers lower latency compared to Amazon Glacier. In S3 the data is stored in
logical buckets, while in Amazon Glacier the data is stored in the form of archives within vaults.
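The cost trade-off between the two services can be sketched with a back-of-the-envelope calculation. The per-GB prices below are illustrative assumptions, not current AWS pricing:

```python
# Rough monthly storage cost comparison, S3 Standard vs. Glacier.
# Both per-GB rates are illustrative assumptions, not real AWS prices.

S3_STANDARD_PER_GB = 0.023     # assumed $/GB-month
GLACIER_PER_GB     = 0.004     # assumed $/GB-month

def monthly_cost(gb: float, per_gb: float) -> float:
    return round(gb * per_gb, 2)

archive_gb = 1000              # a 1 TB archive of rarely accessed data
print(monthly_cost(archive_gb, S3_STANDARD_PER_GB))
print(monthly_cost(archive_gb, GLACIER_PER_GB))
```

Under these assumed rates, cold data is several times cheaper to keep in Glacier, which is why it fits archival workloads despite its higher retrieval latency.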
Q7. AWS Core Services (5) Dec 2023
1. Compute : Services under the computing domain in are servers that are used to host a website, process
backend data, etc.
Two well known compute services in AWS are:
 EC2: AWS EC2 (Elastic Compute Cloud) offers you a server with desired OS, RAM, and processor
where you have full control of OS to perform any operation.
 Lambda: It is used only for backend processing where it can receive the request, process it, and
send back the result.
2. Storage: AWS offers plenty of storage choices to users for backing up information, archiving, and
disaster recovery. The well known storage services in AWS are:
 S3: Amazon Simple Storage Service (S3) stores files in the form of objects with a maximum size of
5 TB each.
 Glacier: AWS Glacier provides low-cost, flexible, durable, secure storage for data backup and
archival where customers can store data for as little as $0.004 per GB month.
3. Database: In this domain, AWS provides services that can help you monitor and manage your
databases in the AWS infrastructure for better productivity. Services under Database are:
 RDS: Amazon RDS (Relational Database Service) manages relational databases and keeps your
information updated.
 DynamoDB: DynamoDB supports key-value and document data along with recovery and on-
demand backup.
4. Networking: AWS offers the highest network availability amongst all cloud service providers to
increase throughput with on-time content delivery and reduced network latency.
 VPC: AWS VPC (Virtual private Cloud) allows users to launch their AWS resources into a virtual
network.
5. Migration: This domain deals with the transferring of data to and from the AWS infrastructure. There
is a service called snowball used when we need to transfer our considerable data to the AWS
infrastructure physically.
6. Management Tools: Using these tools, we can manage all of our AWS resources and AWS infrastructure.
There are some services that users can use to control and secure their data:
 CloudWatch: AWS CloudWatch is a monitoring service that helps in monitoring AWS account and
resources in it.
7. Security: This domain deals with user rights and authenticity. Services under this domain help us
manage what access should be given to which person in our organization or team.
 IAM: AWS IAM (Identity and Access Management) controls which users and services can access which
resources in the AWS ecosystem.

Q8. CloudWatch Metrics (5) Dec 2023


1. Metrics are the fundamental concept in CloudWatch. A metric represents a time-ordered set of data
points that are published to CloudWatch.
2. Think of a metric as a variable to monitor, and the data points as representing the values of that
variable over time.
3. Metrics exist only in the Region in which they are created.
4. Metrics cannot be deleted, but they automatically expire after 15 months if no new data is published
to them.
5. Data points older than 15 months expire on a rolling basis; as new data points come in, data older than
15 months is dropped.
6. Metrics are uniquely defined by a name, a namespace, and zero or more dimensions.
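Point 6 (name, namespace, dimensions) maps directly onto the payload shape that boto3's CloudWatch `put_metric_data` call expects. A minimal sketch that only builds and inspects the payload locally, without making any AWS call:

```python
from datetime import datetime, timezone

def build_metric_datum(name, value, unit="Count", dimensions=None):
    """Build one CloudWatch data point in the shape that
    boto3's cloudwatch.put_metric_data(MetricData=[...]) expects."""
    return {
        "MetricName": name,
        "Value": value,
        "Unit": unit,
        "Timestamp": datetime.now(timezone.utc),
        "Dimensions": [
            {"Name": k, "Value": v} for k, v in (dimensions or {}).items()
        ],
    }

datum = build_metric_datum("PageViews", 42, dimensions={"Service": "web"})
print(datum["MetricName"], datum["Dimensions"])
```

In real use this dict would be passed to `cloudwatch.put_metric_data(Namespace="MyApp", MetricData=[datum])`; the namespace plus name plus dimensions uniquely identify the metric, exactly as point 6 describes.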
Q9. Explain the different RDS database engines in detail.
1. Amazon Aurora: Built for the cloud, Aurora offers top-tier performance and easy migration from
MySQL or PostgreSQL for mission-critical applications. With automatic scaling and failover,
Aurora ensures minimal downtime and continuous operation.
2. MySQL: A widely used open-source database, MySQL is popular for its familiar syntax and cost-
effective scaling options. It excels at managing web applications, content management systems, and
various other relational database needs.
3. MariaDB: Compatible with existing MySQL applications, MariaDB provides additional features
and bug fixes. Offering similar scalability and cost-effectiveness to MySQL, it's a suitable choice for
those seeking a familiar experience with potential performance benefits.
4. PostgreSQL: Beyond traditional relational databases, PostgreSQL offers object-oriented features
and robust security, making it a powerful choice for complex data management, geographic
information systems, and advanced data manipulation tasks.
5. Oracle Database: This mature, feature-rich platform is ideal for large-scale enterprise applications
requiring high security, scalability, and advanced data management capabilities. However, Oracle
Database on RDS incurs additional licensing costs compared to other options.
6. Microsoft SQL Server: For Windows-based applications needing tight integration with other
Microsoft technologies, SQL Server offers a seamless experience. RDS provides various instance
sizes for scaling and managing workloads efficiently, but using SQL Server on RDS comes with
additional licensing fees.
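Each engine above corresponds to an engine identifier when creating an RDS instance. A minimal sketch that assembles the parameters a boto3 `rds.create_db_instance(**params)` call would take; the instance class and sizes are illustrative, and no AWS call is made:

```python
# Subset of engine identifiers accepted by Amazon RDS.
RDS_ENGINES = {"aurora-mysql", "mysql", "mariadb", "postgres",
               "oracle-se2", "sqlserver-ex"}

def rds_params(engine, instance_id, size_gb=20):
    """Assemble keyword arguments for rds.create_db_instance(**params).

    Validates the engine name locally; values shown are illustrative.
    """
    if engine not in RDS_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return {
        "Engine": engine,
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": "db.t3.micro",   # smallest burstable class
        "AllocatedStorage": size_gb,        # storage in GiB
    }

print(rds_params("postgres", "demo-db"))
```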
Chapter 5 : Openstack cloud & Serverless Computing

Q1. Describe in brief architecture of Mobile Cloud Computing with its benefits and challenges (10)
May 2023, Dec 2023

1. MCC stands for Mobile Cloud Computing, which is defined as a combination of mobile computing,
cloud computing, and wireless networks that come together to provide rich computational resources to
mobile users, network operators, as well as cloud computing providers.
2. Mobile Cloud Computing is meant to make it possible for rich mobile applications to be executed on a
different number of mobile devices.
3. In this technology, data processing, and data storage happen outside of mobile devices.
Benefits:
 Mobile Cloud Computing saves businesses money.
 Portability makes users' work easy and efficient.
 Cloud consumers explore more features on their mobile phones.
 Developers reach greater markets through mobile cloud web services.
 More network providers can join up in this field.
Challenges:
 Low bandwidth: Mobile clouds use radio waves, which are limited as compared to wired networks.
 Security: It is difficult to identify and manage threats on mobile devices as compared to desktop
devices
 Service Availability: Users often find complaints like a breakdown of the network, transportation
crowding, out of coverage, etc.
 Limited Energy source: Mobile devices consume more energy and are less powerful.
Q2. Serverless Computing (5) May 2023

1. Serverless computing operates on an on-demand server model. It is also referred to as Function as a
Service (FaaS).
2. Serverless computing is a cloud computing application development and execution model that enables
developers to build and run application code without provisioning or managing servers or backend
infrastructure.
3. Serverless lets developers put all their focus into writing the best front end application code and
business logic they can.
4. The cloud provider is also responsible for all routine infrastructure management and maintenance such
as operating system updates and patches, security management, capacity planning, system monitoring
and more.
Benefits:
 Cost efficiency: Serverless computing charges for the resources used rather than pre-purchased capacity.
 Scalability: The automatic scaling feature is advantageous for handling unpredictable or fluctuating
traffic patterns
 Simplified back-end code: Simplified back-end code enables developers to concentrate on their core
product, often leading to better quality and more innovative features.
 No infrastructure requirements: Since developers will be literally using someone else's computer to
execute their serverless functions, there will be no infrastructure to maintain.
Challenges:
 Performance issues: When a function remains unused for a certain period, it enters a dormant state.
 Vendor lock-in: Serverless architectures often rely on the services and tools that a single cloud provider
offers.
 Security: Serverless applications can potentially increase the risk of cyberattacks because each function
can serve as a potential attack entry point.
Q3. Explain various components and architecture of Openstack. (10) May 2023

1. Nova ( Compute service ): It manages compute resources like creating, deleting and handling the
scheduling of instances.
2. Neutron ( Networking service ): It is responsible for connecting all the networks across OpenStack.
3. Swift ( Object storage ): It is an object storage which is used to retrieve unstructured data with the
help of REST api.
4. Cinder ( Block storage ): It is responsible for providing persistent block storage that is made
accessible using API.
5. Keystone ( identity service ): It is responsible for all types of authentications and authorizations in
OpenStack services.
6. Glance ( Image service provider ) : It is responsible for registering, storing and retrieving virtual
disk images from the complete network.
7. Horizon ( Dashboard ): It is responsible for providing a web-based interface for openstack.
8. Heat ( Orchestration ) : It is used for on-demand service provisioning with auto-scaling of cloud
resources.

Q4. Short note on AWS Lambda (5) May 2023


1. AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of
Amazon Web Services.
2. Therefore, one doesn't need to worry about which AWS resources to launch, or how one will manage
them.
3. In AWS Lambda the code is executed based on the response of events in AWS services such as
add/delete files in S3 bucket, HTTP requests from Amazon API gateway, etc.
4. However, Amazon Lambda can only be used to execute background tasks.
Benefits:
1. Quicker Development: One of the biggest benefits of AWS Lambda is that it allows for quicker
development.
2. Optimized Costs: Users pay only for what they use and not for the time their server sits idle, which
allows for substantial cost savings.
3. Scalability: AWS Lambda offers a simple and effective way of scaling.
Challenges:
1. It is not suitable for small projects.
2. Since AWS Lambda relies completely on AWS for the infrastructure, you cannot install any additional
software even if your code demands it.
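The event-driven model in point 3 boils down to writing a handler function that receives the triggering event. A minimal sketch, invoked locally here rather than by AWS; the S3-style event shape is a hand-built illustration of what an S3 notification trigger delivers:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler: report which S3 objects triggered us.

    In AWS, `event` is supplied by the trigger (e.g. an S3 notification)
    and `context` by the Lambda runtime; both are plain arguments here.
    """
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    return {"statusCode": 200, "body": json.dumps({"processed": keys})}

# Local invocation with a hand-built S3-style event (context unused).
fake_event = {"Records": [{"s3": {"object": {"key": "uploads/report.pdf"}}}]}
print(lambda_handler(fake_event, None))
```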
Chapter 6 : Cloud Security & Privacy

Q1. Describe the concept of audit and reporting in cloud computing? (5) May 2023
Audit:
1. Auditing in cloud computing refers to the process of examining and analyzing various aspects of the
cloud infrastructure, services, and activities to ensure adherence to security policies, regulatory
requirements, and best practices.
2. Audits typically involve assessing controls, configurations, access logs, and activities to identify
potential security vulnerabilities, unauthorized access, or non-compliance issues.
3. Audits in cloud computing can include internal audits performed by the cloud service provider (CSP)
itself or external audits conducted by independent third-party organizations.
Reporting:
1. Reporting in cloud computing involves generating and presenting comprehensive insights and
documentation based on audit findings, performance metrics, and other relevant data.
2. Reports provide stakeholders, including management, regulators, and customers, with transparency into
the security and operational aspects of the cloud environment.
3. Cloud providers typically offer various types of reports to their customers, such as compliance reports,
security incident reports, and performance reports.

Q2. Explain IAM architecture along with its Standards and Protocols (10) May 2023
IAM, or Identity and Access Management, architecture is all about creating a secure system for managing
user access to resources. It follows a core principle of "least privilege," ensuring users only have the access
they truly need. The IAM system typically consists of three main components:
1. Identity Management: This handles the creation, storage, and lifecycle of user identities. This
includes user information, credentials, and group memberships.
2. Authentication: This verifies a user's identity when they try to access a resource. This could involve
checking a username and password, using multi-factor authentication, or other methods.
3. Authorization: This determines what level of access a user has to a resource after they've been
authenticated. This is based on factors like the user's role, permissions assigned to them, and any
access control policies in place.
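The authorization step above can be sketched as a least-privilege permission check. The role-to-permission mapping below is invented for illustration; a real system would load it from policy documents (e.g. XACML rules or AWS IAM policies):

```python
# Hypothetical role -> permissions mapping (illustrative only).
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "delete"},
}

def is_authorized(role: str, action: str) -> bool:
    """Least privilege: deny unless the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("viewer", "write"))  # False: viewers cannot write
print(is_authorized("admin", "delete"))  # True
```

Note the default-deny design: an unknown role maps to an empty permission set, so access must be explicitly granted rather than explicitly revoked.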
Standards and Protocols:
1. SAML (Security Assertion Markup Language): SAML allows different websites or services to share
login information securely.
2. SPML (Service Provisioning Markup Language): SPML helps automate the process of creating,
updating, and deleting user accounts and access rights across different systems.
3. XACML (eXtensible Access Control Markup Language): XACML defines a way to set detailed
access rules for who can access what data or services and under what conditions.
4. OAuth (Open Authorization): OAuth enables you to give third-party applications limited access to
your resources without giving them your password.

Q3. Governance, Risk, and Compliance (GRC) (5) May 2023


A GRC framework is a model for managing governance and compliance risk in a company. It involves
identifying the key policies that can drive the company toward its goals. By adopting a GRC framework,
you can take a proactive approach to mitigating risks, making well-informed decisions, and ensuring
business continuity.
Governance:
 Governance is the set of policies, rules, or frameworks that a company uses to achieve its business goals.
 It defines the responsibilities of key stakeholders, such as the board of directors and senior management.
Risk management:
 Businesses face different types of risks, including financial, legal, strategic, and security risks.
 Proper risk management helps businesses identify these risks and find ways to remediate any that are
found.
Compliance:
 Compliance is the act of following rules, laws, and regulations.
 It applies to legal and regulatory requirements set by industrial bodies and also for internal corporate
policies.

Q4. Attacks, and vulnerabilities in cloud computing (5) May 2023


Attacks:
1. Cloud malware injection attacks
Malware injection attacks are done to take control of a user's information in the cloud.
2. Abuse of cloud services
Hackers can use cheap cloud services to arrange DoS and brute force attacks on target users,
companies, and even other cloud providers.
3. Denial of service attacks
DoS attacks are designed to overload a system and make services unavailable to its users.
4. Side channel attacks
A side channel attack is arranged by hackers when they place a malicious virtual machine on the
same host as the target virtual machine.
Vulnerabilities:
1. Session Riding
Session riding occurs when an attacker steals a user's cookie and uses the application in the user's name.
2. Virtual Machine Escape
An attacker can remotely exploit a hypervisor by exploiting a vulnerability in the hypervisor itself.
3. Internet Dependency
By using the cloud services, we're dependent upon the Internet connection, so if the Internet temporarily
fails due to a lightning strike or ISP maintenance, the clients won't be able to connect to the cloud
services.
4. Data Protection and Portability
When deciding to switch cloud service providers for a lower price, we must address the challenges of data
movement and deletion.

Q5. What is security ? Why is it required in cloud computing.


1. Cloud security, also known as cloud computing security, is a collection of security measures designed
to protect cloud-based infrastructure, applications, and data.
2. These measures ensure user and device authentication, data and resource access control, and data
privacy protection.
3. They also support regulatory data compliance.
4. Cloud security is employed in cloud environments to protect a company's data from Distributed Denial
of Service (DDoS) attacks, malware, hackers, and unauthorized user access or use.
Benefits :
 Lower upfront costs
 Reduced ongoing operational and administrative expenses
 Increased reliability and availability
 Centralized security
 Greater ease of scaling
Why is it required:
Security in cloud computing is crucial to any company looking to keep its applications and data protected
from bad actors. Maintaining a strong cloud security posture helps organizations achieve the now widely
recognized benefits of cloud computing. Cloud security comes with its own advantages as well, helping you
achieve lower upfront costs, reduced ongoing operational and administrative costs, easier scaling, increased
reliability and availability, and improved DDoS protection.
