Checkpoints
- Create GCS bucket (10 points)
- Copy startup script and code to Cloud Storage bucket (10 points)
- Deploy instances and configure network (20 points)
- Create managed instance groups (20 points)
- Create HTTP(S) load balancers (10 points)
- Update the frontend instances (10 points)
- Scaling GCE (10 points)
- Update the website (10 points)
Hosting a Web App on Google Cloud Using Compute Engine
- GSP662
- Overview
- Setup and requirements
- Task 1. Enable Compute Engine API
- Task 2. Create Cloud Storage bucket
- Task 3. Clone source repository
- Task 4. Create Compute Engine instances
- Task 5. Create managed instance groups
- Task 6. Create load balancers
- Task 7. Scaling Compute Engine
- Task 8. Update the website
- Congratulations!
GSP662
Overview
There are many ways to deploy websites within Google Cloud. Each solution offers different features, capabilities, and levels of control. Compute Engine offers a deep level of control over the infrastructure used to run a website, but also requires a little more operational management compared to solutions like Google Kubernetes Engine (GKE), App Engine, or others. With Compute Engine, you have fine-grained control of aspects of the infrastructure, including the virtual machines, load balancers, and more.
In this lab you will deploy a sample application, the "Fancy Store" ecommerce website, to show how a website can be deployed and scaled easily with Compute Engine.
What you'll learn
In this lab you learn how to:
- Create Compute Engine instances
- Create instance templates from source instances
- Create managed instance groups
- Create and test managed instance group health checks
- Create HTTP(S) Load Balancers
- Create load balancer health checks
- Use a Content Delivery Network (CDN) for Caching
At the end of the lab, you will have instances inside managed instance groups to provide autohealing, load balancing, autoscaling, and rolling updates for your website.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.
This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
- Time to complete the lab. Remember, once you start, you cannot pause a lab.
How to start your lab and sign in to the Google Cloud console
- Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:
  - The Open Google Cloud console button
  - Time remaining
  - The temporary credentials that you must use for this lab
  - Other information, if needed, to step through this lab
- Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
  The lab spins up resources, and then opens another tab that shows the Sign in page.
  Tip: Arrange the tabs in separate windows, side-by-side.
  Note: If you see the Choose an account dialog, click Use Another Account.
- If necessary, copy the Username below and paste it into the Sign in dialog.
  {{{user_0.username | "Username"}}}
  You can also find the Username in the Lab Details panel.
- Click Next.
- Copy the Password below and paste it into the Welcome dialog.
  {{{user_0.password | "Password"}}}
  You can also find the Password in the Lab Details panel.
- Click Next.
  Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
  Note: Using your own Google Cloud account for this lab may incur extra charges.
- Click through the subsequent pages:
  - Accept the terms and conditions.
  - Do not add recovery options or two-factor authentication (because this is a temporary account).
  - Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
- Click Activate Cloud Shell at the top of the Google Cloud console.
When you are connected, you are already authenticated, and the project is set to your Project_ID. gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
- (Optional) You can list the active account name with this command:
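A sketch of the account listing referred to in this step:

```shell
# List credentialed accounts; the active one is marked with an asterisk.
gcloud auth list
```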
- Click Authorize.
Output:
- (Optional) You can list the project ID with this command:
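Similarly, the project ID can be listed with:

```shell
# Show the project currently configured for gcloud.
gcloud config list project
```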
Output:
Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Set your region and zone
Certain Compute Engine resources live in regions and zones. A region is a specific geographical location where you can run your resources. Each region has one or more zones.
Run the following gcloud commands in Cloud Shell to set the default region and zone for your lab:
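A sketch of the defaults, with REGION and ZONE as placeholders for the lab-assigned values:

```shell
# Set default region and zone so later commands can omit --region/--zone.
gcloud config set compute/region "REGION"
gcloud config set compute/zone "ZONE"
```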
Task 1. Enable Compute Engine API
- Enable the Compute Engine API by executing the following:
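Enabling the API from Cloud Shell looks like:

```shell
# Enable the Compute Engine service for the current project.
gcloud services enable compute.googleapis.com
```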
Task 2. Create Cloud Storage bucket
You will use a Cloud Storage bucket to house your built code as well as your startup scripts.
- From Cloud Shell, execute the following to create a new Cloud Storage bucket:
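A representative command; the bucket name fancy-store-$DEVSHELL_PROJECT_ID is an assumption following the lab's naming convention:

```shell
# Create the bucket; $DEVSHELL_PROJECT_ID keeps the name globally unique.
gsutil mb gs://fancy-store-$DEVSHELL_PROJECT_ID
```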
Note: Use of the $DEVSHELL_PROJECT_ID environment variable within Cloud Shell is to help ensure the names of objects are unique. Since all Project IDs within Google Cloud must be unique, appending the Project ID should make other names unique as well.
Click Check my progress to verify the objective.
Task 3. Clone source repository
Use the existing Fancy Store ecommerce website, based on the monolith-to-microservices repository, as the basis for your website.
Clone the source code so you can focus on the aspects of deploying to Compute Engine. Later on in this lab, you will perform a small update to the code to demonstrate the simplicity of updating on Compute Engine.
- Clone the source code and then navigate to the monolith-to-microservices directory:
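A sketch of the clone step; the repository URL is assumed to be the public googlecodelabs mirror:

```shell
cd ~
git clone https://github.com/googlecodelabs/monolith-to-microservices.git
cd ~/monolith-to-microservices
```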
- Run the initial build of the code to allow the application to run locally:
It will take a few minutes for this script to finish.
- Once completed, ensure Cloud Shell is running a compatible Node.js version with the following command:
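Cloud Shell ships with nvm, so installing the current LTS release is one way to get a compatible version (a sketch):

```shell
# Install and use the long-term-support Node.js release.
nvm install --lts
```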
- Next, run the following to test the application: switch to the microservices directory and start the web server:
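Sketched out, the test run is:

```shell
cd ~/monolith-to-microservices/microservices
npm start
```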
You should see the following output:
- Preview your application by clicking the web preview icon then selecting Preview on port 8080.
This opens a new window where you can see the frontend of Fancy Store.
- Close this window after viewing the website and then press CTRL+C in the terminal window to stop the web server process.
Task 4. Create Compute Engine instances
Now it's time to start deploying some Compute Engine instances!
In the following steps you will:
- Create a startup script to configure instances.
- Clone source code and upload to Cloud Storage.
- Deploy a Compute Engine instance to host the backend microservices.
- Reconfigure the frontend code to utilize the backend microservices instance.
- Deploy a Compute Engine instance to host the frontend microservice.
- Configure the network to allow communication.
Create the startup script
A startup script instructs each instance what to do every time it starts, so the instances configure themselves automatically.
- In Cloud Shell, run the following command to create a file called startup-script.sh:
- Click Open Editor in the Cloud Shell ribbon to open the Code Editor.
- Navigate to the monolith-to-microservices folder.
- Add the following code to the startup-script.sh file. You will edit some of the code after it's added:
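The following abridged sketch shows the shape such a startup script takes, matching the tasks described below; the install URLs, Node.js version, paths, and the nodeapp user are assumptions, not the lab's exact script:

```shell
#!/bin/bash
# [Sketch] Install the Logging agent so syslog is collected automatically.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
bash add-logging-agent-repo.sh --also-install

# Install build tooling and Supervisor (runs the app as a daemon).
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor psmisc

# Install Node.js (version pinned arbitrarily for illustration).
mkdir -p /opt/nodejs
curl https://nodejs.org/dist/v16.14.0/node-v16.14.0-linux-x64.tar.gz \
  | tar xzf - -C /opt/nodejs --strip-components=1
ln -sf /opt/nodejs/bin/node /usr/bin/node
ln -sf /opt/nodejs/bin/npm /usr/bin/npm

# Pull the app source from the Cloud Storage bucket and install dependencies.
mkdir -p /fancy-store
gsutil -m cp -r gs://fancy-store-[DEVSHELL_PROJECT_ID]/monolith-to-microservices/microservices/* /fancy-store/
cd /fancy-store && npm install

# Run the app under a dedicated user via Supervisor; restart on exit and
# send stdout/stderr to syslog for the Logging agent to collect.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /fancy-store
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/fancy-store
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
```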
- Find the text [DEVSHELL_PROJECT_ID] in the file and replace it with your Project ID:
The line of code within startup-script.sh should now resemble:
- Save the startup-script.sh file, but do not close it yet.
- Look at the bottom right of the Cloud Shell Code Editor, and ensure "End of Line Sequence" is set to "LF" and not "CRLF".
  - If this is set to CRLF, click CRLF and then select LF in the drop-down.
  - If this is already set to LF, then leave as is.
- Close the startup-script.sh file.
- Return to the Cloud Shell Terminal and run the following to copy the startup-script.sh file into your bucket:
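A representative copy command (bucket name assumed per the lab's convention):

```shell
gsutil cp ~/monolith-to-microservices/startup-script.sh gs://fancy-store-$DEVSHELL_PROJECT_ID
```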
It will now be accessible at: https://storage.googleapis.com/[BUCKET_NAME]/startup-script.sh, where [BUCKET_NAME] represents the name of the Cloud Storage bucket. By default this is viewable only by authorized users and service accounts, and is therefore inaccessible through a web browser. Compute Engine instances will automatically be able to access it through their service account.
The startup script performs the following tasks:
- Installs the Logging agent. The agent automatically collects logs from syslog.
- Installs Node.js and Supervisor. Supervisor runs the app as a daemon.
- Clones the app's source code from Cloud Storage Bucket and installs dependencies.
- Configures Supervisor to run the app. Supervisor makes sure the app is restarted if it exits unexpectedly or is stopped by an admin or process. It also sends the app's stdout and stderr to syslog for the Logging agent to collect.
Copy code into the Cloud Storage bucket
When instances launch, they pull code from the Cloud Storage bucket, so you can store some configuration variables within the .env file of the code.
- Copy the cloned code into your bucket:
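One way to sketch this step, deleting dependency directories before copying (bucket name assumed):

```shell
cd ~
# Remove node_modules so the copy is fast; they are rebuilt on the instances.
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```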
Note: The node_modules dependency directories are deleted to ensure the copy is as fast and efficient as possible. These are recreated on the instances when they start up.
Click Check my progress to verify the objective.
Deploy the backend instance
The first instance to be deployed will be the backend instance which will house the Orders and Products microservices.
- Execute the following command to create an e2-standard-2 instance that is configured to use the startup script. It is tagged as a backend instance so you can apply specific firewall rules to it later:
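A sketch of the instance creation, assuming the default zone set earlier and the bucket name used throughout this lab:

```shell
gcloud compute instances create backend \
    --machine-type=e2-standard-2 \
    --tags=backend \
    --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh
```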
Configure a connection to the backend
Before you deploy the frontend of the application, you need to update the configuration to point to the backend you just deployed.
- Retrieve the external IP address of the backend with the following command; look under the EXTERNAL_IP column for the backend instance:
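The listing command in question is typically:

```shell
gcloud compute instances list
```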
Example output:
- Copy the External IP for the backend.
- In the Cloud Shell Explorer, navigate to monolith-to-microservices > react-app.
- In the Code Editor, select View > Toggle Hidden Files in order to see the .env file.
In the next step, you edit the .env file to point to the External IP of the backend. [BACKEND_ADDRESS] represents the External IP address of the backend instance determined from the above gcloud command.
- In the .env file, replace localhost with your [BACKEND_ADDRESS]:
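The edited file typically ends up resembling the fragment below; the variable names follow the react-app code and the ports match the backend microservices, but treat this as a sketch:

```
REACT_APP_ORDERS_URL=http://[BACKEND_ADDRESS]:8081/api/orders
REACT_APP_PRODUCTS_URL=http://[BACKEND_ADDRESS]:8082/api/products
```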
- Save the file.
- In Cloud Shell, run the following to rebuild react-app, which will update the frontend code:
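A sketch of the rebuild, using the repo's npm scripts:

```shell
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
```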
- Then copy the application code into the Cloud Storage bucket:
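As before, a representative copy (bucket name assumed):

```shell
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```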
Deploy the frontend instance
Now that the code is configured, deploy the frontend instance.
- Execute the following to deploy the frontend instance with a similar command as before. This instance is tagged as frontend for firewall purposes:
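Mirroring the backend command, a sketch:

```shell
gcloud compute instances create frontend \
    --machine-type=e2-standard-2 \
    --tags=frontend \
    --metadata=startup-script-url=https://storage.googleapis.com/fancy-store-$DEVSHELL_PROJECT_ID/startup-script.sh
```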
Configure the network
- Create firewall rules to allow access to port 8080 for the frontend and ports 8081-8082 for the backend. These firewall commands use the tags assigned during instance creation:
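A sketch of the two rules; the rule names fw-fe and fw-be are illustrative:

```shell
gcloud compute firewall-rules create fw-fe --allow tcp:8080 --target-tags=frontend
gcloud compute firewall-rules create fw-be --allow tcp:8081-8082 --target-tags=backend
```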
The website should now be fully functional.
- In order to navigate to the external IP of the frontend, you need to know the address. Run the following and look for the EXTERNAL_IP of the frontend instance:
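As before:

```shell
gcloud compute instances list
```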
Example output:
It may take a couple of minutes for the instance to start and be configured.
- Wait 3 minutes and then open a new browser tab and browse to http://[FRONTEND_ADDRESS]:8080 to access the website, where [FRONTEND_ADDRESS] is the frontend EXTERNAL_IP determined above.
Try navigating to the Products and Orders pages; these should now work.
Click Check my progress to verify the objective.
Task 5. Create managed instance groups
To allow the application to scale, you will create managed instance groups that use the frontend and backend instances as the basis for instance templates.
A managed instance group (MIG) contains identical instances that you can manage as a single entity in a single zone. Managed instance groups maintain high availability of your apps by proactively keeping your instances available, that is, in the RUNNING state. You will be using managed instance groups for your frontend and backend instances to provide autohealing, load balancing, autoscaling, and rolling updates.
Create instance template from source instance
Before you can create a managed instance group, you have to first create an instance template that will be the foundation for the group. Instance templates allow you to define the machine type, boot disk image or container image, network, and other instance properties to use when creating new VM instances. You can use instance templates to create instances in a managed instance group or even to create individual instances.
To create the instance template, use the existing instances you created previously.
- First, stop both instances:
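Sketched as:

```shell
gcloud compute instances stop frontend backend
```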
- Then, create the instance template from each of the source instances:
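A sketch of template creation; the names fancy-fe and fancy-be are assumptions consistent with the fancy-fe-new template referenced later in this lab:

```shell
gcloud compute instance-templates create fancy-fe --source-instance=frontend
gcloud compute instance-templates create fancy-be --source-instance=backend
```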
- Confirm the instance templates were created:
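The confirmation step is typically:

```shell
gcloud compute instance-templates list
```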
Example output:
- With the instance templates created, delete the backend vm to save resource space:
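Sketched as:

```shell
gcloud compute instances delete backend
```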
- Type and enter y when prompted.
Note: Normally, you could delete the frontend vm as well, but you will use it to update the instance template later in the lab.
Create managed instance group
- Next, create two managed instance groups, one for the frontend and one for the backend:
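A sketch, assuming the template names above and the default zone; the -mig suffix is illustrative:

```shell
gcloud compute instance-groups managed create fancy-fe-mig \
    --base-instance-name fancy-fe --size 2 --template fancy-fe
gcloud compute instance-groups managed create fancy-be-mig \
    --base-instance-name fancy-be --size 2 --template fancy-be
```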
These managed instance groups will use the instance templates and are configured to start with two instances each. The instances are automatically named based on the base-instance-name specified, with random characters appended.
- For your application, the frontend microservice runs on port 8080, and the backend microservice runs on port 8081 for orders and port 8082 for products:
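The named-port assignment can be sketched as:

```shell
gcloud compute instance-groups set-named-ports fancy-fe-mig \
    --named-ports frontend:8080
gcloud compute instance-groups set-named-ports fancy-be-mig \
    --named-ports orders:8081,products:8082
```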
Since these are non-standard ports, you specify named ports to identify these. Named ports are key:value pair metadata representing the service name and the port that it's running on. Named ports can be assigned to an instance group, which indicates that the service is available on all instances in the group. This information is used by the HTTP Load Balancing service that will be configured later.
Configure autohealing
To improve the availability of the application itself and to verify it is responding, configure an autohealing policy for the managed instance groups.
An autohealing policy relies on an application-based health check to verify that an app is responding as expected. Checking that an app responds is more precise than simply verifying that an instance is in a RUNNING state, which is the default behavior.
- Create a health check that repairs the instance if it returns "unhealthy" 3 consecutive times, for both the frontend and backend:
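A sketch of the two checks; the intervals, thresholds, and the /api/orders request path for the backend are assumptions:

```shell
gcloud compute health-checks create http fancy-fe-hc --port 8080 \
    --check-interval 30s --healthy-threshold 1 --timeout 10s --unhealthy-threshold 3
gcloud compute health-checks create http fancy-be-hc --port 8081 \
    --request-path=/api/orders \
    --check-interval 30s --healthy-threshold 1 --timeout 10s --unhealthy-threshold 3
```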
- Create a firewall rule to allow the health check probes to connect to the microservices on ports 8080-8081:
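A sketch; the source ranges are Google's documented health-check probe ranges:

```shell
gcloud compute firewall-rules create allow-health-check \
    --allow tcp:8080-8081 \
    --source-ranges 130.211.0.0/22,35.191.0.0/16 \
    --network default
```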
- Apply the health checks to their respective services:
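Applying the checks can be sketched as follows; the 300-second initial delay gives the startup script time to finish before autohealing begins:

```shell
gcloud compute instance-groups managed update fancy-fe-mig \
    --health-check fancy-fe-hc --initial-delay 300
gcloud compute instance-groups managed update fancy-be-mig \
    --health-check fancy-be-hc --initial-delay 300
```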
- Continue with the lab to allow some time for autohealing to monitor the instances in the group. You will simulate a failure to test the autohealing at the end of the lab.
Click Check my progress to verify the objective.
Task 6. Create load balancers
To complement your managed instance groups, use HTTP(S) Load Balancers to serve traffic to the frontend and backend microservices, and use mappings to send traffic to the proper backend services based on pathing rules. This exposes a single load balanced IP for all services.
You can learn more about the Load Balancing options on Google Cloud: Overview of Load Balancing.
Create HTTP(S) load balancer
Google Cloud offers many different types of load balancers. For this lab you use an HTTP(S) Load Balancer for your traffic. An HTTP load balancer is structured as follows:
- A forwarding rule directs incoming requests to a target HTTP proxy.
- The target HTTP proxy checks each request against a URL map to determine the appropriate backend service for the request.
- The backend service directs each request to an appropriate backend based on serving capacity, zone, and instance health of its attached backends. The health of each backend instance is verified using an HTTP health check. If the backend service is configured to use an HTTPS or HTTP/2 health check, the request will be encrypted on its way to the backend instance.
- Sessions between the load balancer and the instance can use the HTTP, HTTPS, or HTTP/2 protocol. If you use HTTPS or HTTP/2, each instance in the backend services must have an SSL certificate.
- Create health checks that will be used to determine which instances are capable of serving traffic for each service:
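A sketch of the load-balancer health checks, one per service (names illustrative):

```shell
gcloud compute http-health-checks create fancy-fe-frontend-hc --request-path / --port 8080
gcloud compute http-health-checks create fancy-be-orders-hc --request-path /api/orders --port 8081
gcloud compute http-health-checks create fancy-be-products-hc --request-path /api/products --port 8082
```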
- Create backend services that are the target for load-balanced traffic. The backend services will use the health checks and named ports you created:
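Sketched, referencing the health checks and named ports defined earlier:

```shell
gcloud compute backend-services create fancy-fe-frontend \
    --http-health-checks fancy-fe-frontend-hc --port-name frontend --global
gcloud compute backend-services create fancy-be-orders \
    --http-health-checks fancy-be-orders-hc --port-name orders --global
gcloud compute backend-services create fancy-be-products \
    --http-health-checks fancy-be-products-hc --port-name products --global
```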
- Add the Load Balancer's backend services:
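A sketch; ZONE stands for the zone the managed instance groups were created in:

```shell
gcloud compute backend-services add-backend fancy-fe-frontend \
    --instance-group fancy-fe-mig --instance-group-zone "ZONE" --global
gcloud compute backend-services add-backend fancy-be-orders \
    --instance-group fancy-be-mig --instance-group-zone "ZONE" --global
gcloud compute backend-services add-backend fancy-be-products \
    --instance-group fancy-be-mig --instance-group-zone "ZONE" --global
```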
- Create a URL map. The URL map defines which URLs are directed to which backend services:
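Sketched, with the frontend service as the default:

```shell
gcloud compute url-maps create fancy-map --default-service fancy-fe-frontend
```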
- Create a path matcher to allow the /api/orders and /api/products paths to route to their respective services:
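A sketch of the path matcher, assuming the backend-service names used above:

```shell
gcloud compute url-maps add-path-matcher fancy-map \
    --default-service fancy-fe-frontend \
    --path-matcher-name orders \
    --path-rules "/api/orders=fancy-be-orders,/api/products=fancy-be-products"
```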
- Create the proxy which ties to the URL map:
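Sketched as:

```shell
gcloud compute target-http-proxies create fancy-proxy --url-map fancy-map
```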
- Create a global forwarding rule that ties a public IP address and port to the proxy:
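A sketch of the forwarding rule, serving on port 80:

```shell
gcloud compute forwarding-rules create fancy-http-rule \
    --global --target-http-proxy fancy-proxy --ports 80
```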
Click Check my progress to verify the objective.
Update the configuration
Now that you have a new static IP address, update the code on the frontend to point to this new address instead of the ephemeral address used earlier that pointed to the backend instance.
- In Cloud Shell, change to the react-app folder, which houses the .env file that holds the configuration:
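Sketched as:

```shell
cd ~/monolith-to-microservices/react-app/
```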
- Find the IP address for the Load Balancer:
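The forwarding rule created above exposes the address:

```shell
gcloud compute forwarding-rules list --global
```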
Example output:
- Return to the Cloud Shell Editor and edit the .env file again to point to the public IP of the Load Balancer. [LB_IP] represents the external IP address of the Load Balancer determined above.
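The file typically ends up resembling the fragment below (no port this time, since the load balancer serves on port 80); treat it as a sketch:

```
REACT_APP_ORDERS_URL=http://[LB_IP]/api/orders
REACT_APP_PRODUCTS_URL=http://[LB_IP]/api/products
```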
- Save the file.
- Rebuild react-app, which will update the frontend code:
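As before, a sketch:

```shell
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
```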
- Copy the application code into your bucket:
Update the frontend instances
Now that there is new code and configuration, you want the frontend instances within the managed instance group to pull the new code.
Since your instances pull the code at startup, you can issue a rolling restart command:
Note: In this example of a rolling replace, you specifically state that all machines can be replaced immediately through the --max-unavailable parameter. Without this parameter, the command would keep an instance alive while restarting others to ensure availability. For testing purposes, you specify to replace all immediately for speed.
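The rolling replace described above can be sketched as:

```shell
gcloud compute instance-groups managed rolling-action replace fancy-fe-mig \
    --max-unavailable 100%
```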
Click Check my progress to verify the objective.
Test the website
- Wait 3 minutes after issuing the rolling-action replace command to give the instances time to be processed, then check the status of the managed instance group. Run the following to confirm the service is listed as HEALTHY:
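A sketch of the health polling loop:

```shell
# Re-run the health query every 2 seconds until both instances are HEALTHY.
watch -n 2 gcloud compute backend-services get-health fancy-fe-frontend --global
```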
- Wait until the 2 services are listed as HEALTHY.
Example output:
If neither instance enters a HEALTHY state after you have waited a little while, something is wrong with the setup of the frontend instances, and accessing them on port 8080 doesn't work. Test this by browsing to the instances directly on port 8080.
- Once both items appear as HEALTHY on the list, exit the watch command by pressing CTRL+C.
The Load Balancer's IP address can be found with: gcloud compute forwarding-rules list --global
Task 7. Scaling Compute Engine
So far, you have created two managed instance groups with two instances each. This configuration is fully functional, but it is static regardless of load. Next, you create an autoscaling policy based on utilization to automatically scale each managed instance group.
Automatically resize by utilization
- To create the autoscaling policy, execute the following:
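A sketch of the autoscaling policy; the replica cap and the 0.60 utilization target match the 60% threshold described below:

```shell
gcloud compute instance-groups managed set-autoscaling fancy-fe-mig \
    --max-num-replicas 2 --target-load-balancing-utilization 0.60
gcloud compute instance-groups managed set-autoscaling fancy-be-mig \
    --max-num-replicas 2 --target-load-balancing-utilization 0.60
```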
These commands create an autoscaler on each managed instance group that automatically adds instances when load-balancing utilization is above 60%, and removes instances when it drops below 60%.
Enable content delivery network
Another feature that can help with scaling is Cloud CDN, a Content Delivery Network service that provides caching for the frontend.
- Execute the following command on the frontend service:
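Enabling CDN on the frontend backend service can be sketched as:

```shell
gcloud compute backend-services update fancy-fe-frontend --enable-cdn --global
```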
When a user requests content from the HTTP(S) load balancer, the request arrives at a Google Front End (GFE) which first looks in the Cloud CDN cache for a response to the user's request. If the GFE finds a cached response, the GFE sends the cached response to the user. This is called a cache hit.
If the GFE can't find a cached response for the request, the GFE makes a request directly to the backend. If the response to this request is cacheable, the GFE stores the response in the Cloud CDN cache so that the cache can be used for subsequent requests.
Click Check my progress to verify the objective.
Task 8. Update the website
Updating instance template
Existing instance templates are not editable; however, since your instances are stateless and all configuration is done through the startup script, you only need to create a new instance template when you want to change template settings. Now you're going to make a simple change to use a smaller machine type and push that out.
Complete the following steps to:
- Update the frontend instance, which acts as the basis for the instance template. During the update, put a file on the updated version of the instance template's image, then update the instance template, roll out the new template, and then confirm the file exists on the managed instance group instances.
- Modify the machine type of your instance template by switching from the e2-standard-2 machine type to e2-small.
- Run the following command to modify the machine type of the frontend instance:
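Sketched as:

```shell
gcloud compute instances set-machine-type frontend --machine-type e2-small
```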
- Create the new Instance Template:
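A sketch, using the fancy-fe-new name referenced later in this task:

```shell
gcloud compute instance-templates create fancy-fe-new \
    --source-instance=frontend
```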
- Roll out the updated instance template to the Managed Instance Group:
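The rollout can be sketched as:

```shell
gcloud compute instance-groups managed rolling-action start-update fancy-fe-mig \
    --version template=fancy-fe-new
```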
- Wait 3 minutes, and then run the following to monitor the status of the update:
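A sketch of the monitoring loop:

```shell
watch -n 2 gcloud compute instance-groups managed list-instances fancy-fe-mig
```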
This will take a few moments.
Once you have at least 1 instance in the following condition:
- STATUS: RUNNING
- ACTION set to None
- INSTANCE_TEMPLATE: the new template name (fancy-fe-new)
- Copy the name of one of the machines listed for use in the next command.
- Press CTRL+C to exit the watch process.
- Run the following to see if the virtual machine is using the new machine type (e2-small), where [VM_NAME] is the newly created instance:
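Sketched as:

```shell
gcloud compute instances describe [VM_NAME] | grep machineType
```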
Expected example output:
Make changes to the website
Scenario: Your marketing team has asked you to change the homepage for your site. They think it should be more informative of who your company is and what you actually sell.
Task: Add some text to the homepage to make the marketing team happy! It looks like one of the developers has already created the changes with the file name index.js.new. You can just copy this file to index.js and the changes should be reflected. Follow the instructions below to make the appropriate changes.
- Run the following commands to copy the updated file to the correct file name:
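A sketch; the path to the Home page component within react-app is an assumption:

```shell
cd ~/monolith-to-microservices/react-app/src/pages/Home
mv index.js.new index.js
```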
- Print the file contents to verify the changes:
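Sketched as (same assumed path):

```shell
cat ~/monolith-to-microservices/react-app/src/pages/Home/index.js
```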
The resulting code should look like this:
You updated the React components, but you need to build the React app to generate the static files.
- Run the following command to build the React app and copy it into the monolith public directory:
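A sketch of the rebuild; in this repo the build script is assumed to copy the static output into the public directories:

```shell
cd ~/monolith-to-microservices/react-app
npm install && npm run-script build
```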
- Then re-push this code to the bucket:
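Representative copy, bucket name assumed:

```shell
cd ~
rm -rf monolith-to-microservices/*/node_modules
gsutil -m cp -r monolith-to-microservices gs://fancy-store-$DEVSHELL_PROJECT_ID/
```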
Push changes with rolling replacements
- Now force all instances to be replaced to pull the update:
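As in the earlier rolling restart, a sketch:

```shell
gcloud compute instance-groups managed rolling-action replace fancy-fe-mig \
    --max-unavailable=100%
```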
Note: In this example of a rolling replace, you specifically state that all machines can be replaced immediately through the --max-unavailable parameter. Without this parameter, the command would keep an instance alive while replacing others. For testing purposes, you specify to replace all immediately for speed. In production, leaving a buffer would allow the website to continue serving requests while updating.
Click Check my progress to verify the objective.
- Wait 3 minutes after issuing the rolling-action replace command to give the instances time to be processed, then check the status of the managed instance group. Run the following to confirm the service is listed as HEALTHY:
- Wait a few moments for both services to appear and become HEALTHY.
Example output:
- Once items appear in the list with HEALTHY status, exit the watch command by pressing CTRL+C.
- Browse to the website via http://[LB_IP], where [LB_IP] is the IP_ADDRESS specified for the Load Balancer, which can be found with the following command:
The new website changes should now be visible.
Simulate failure
In order to confirm the health check works, log in to an instance and stop the services.
- To find an instance name, execute the following:
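Sketched as:

```shell
gcloud compute instance-groups managed list-instances fancy-fe-mig
```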
- Copy an instance name, then run the following to secure shell into the instance, where INSTANCE_NAME is one of the instances from the list:
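Sketched as:

```shell
gcloud compute ssh [INSTANCE_NAME]
```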
- Type in "y" to confirm, and press Enter twice to not use a password.
- Within the instance, use supervisorctl to stop the application:
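A sketch; the nodeapp program name is an assumption matching the Supervisor configuration sketched in the startup script earlier:

```shell
sudo supervisorctl stop nodeapp
# Also kill any remaining node processes so the health check fails.
sudo killall node
```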
- Exit the instance:
- Monitor the repair operations:
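The repair operations can be watched with a filtered operations listing (a sketch):

```shell
watch -n 2 gcloud compute operations list \
    --filter='operationType~compute.instances.repair.*'
```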
This will take a few minutes to complete.
Look for the following example output:
The managed instance group recreated the instance to repair it.
- You can also go to Navigation menu > Compute Engine > VM instances to monitor through the console.
Congratulations!
You successfully deployed, scaled, and updated your website on Compute Engine. You are now experienced with Compute Engine, Managed Instance Groups, Load Balancers, and Health Checks!
Next steps / Learn more
- Watch this video case study on Hosting Scalable Web Applications on Google Cloud
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated April 26, 2024
Lab Last Tested December 15, 2023
Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.