Containers & Kubernetes

Introducing GKE Autopilot: a revolution in managed Kubernetes

February 24, 2021
Drew Bradstock

Senior Director, GKE Product Management, Google Cloud

In the years since Google invented Kubernetes, it has completely revolutionized IT operations, becoming the de facto standard for organizations looking for advanced container orchestration. Organizations that need the highest levels of reliability, security, and scalability for their applications choose Google Kubernetes Engine (GKE). In the second quarter of 2020 alone, more than 100,000 companies used our application modernization platforms and services, including GKE, to build and run their applications. Until now, though, running Kubernetes has still involved a fair bit of manual assembly and tinkering to optimize it for your needs. Today, we're introducing GKE Autopilot, a revolutionary mode of operation for managed Kubernetes that lets you focus on your software while GKE Autopilot manages the infrastructure.

For many businesses, the flexibility and power that Kubernetes and GKE offer is ideal, giving them a high level of control over most aspects of their cluster configuration. For others, though, this level of control and choice can be overwhelming or unnecessary for their workloads' requirements; they just want a simple way to build a more secure and consistent development platform. Autopilot can help, allowing businesses to embrace Kubernetes while simplifying operations by managing the cluster infrastructure, control plane, and nodes.

With its optimized, production-ready cluster configuration, Autopilot offers a strong security posture and an ops-friendly setup, reducing the need to learn the nitty-gritty details of cluster configuration. By managing the cluster infrastructure, Autopilot also helps reduce Day-2 operational and maintenance costs while improving resource utilization. In short, Autopilot is a hands-off, fully managed Kubernetes experience that lets you focus more on your workloads and less on managing cluster infrastructure.

One GKE, two modes of operation

With the launch of Autopilot, GKE users can now choose from two different modes of operation, each offering a different level of control over their GKE clusters and a different division of the responsibilities related to running them.

GKE already offers an industry-leading level of automation that makes setting up and operating a Kubernetes cluster easier and more cost-effective than do-it-yourself and other managed offerings; Autopilot represents a significant leap forward. In addition to the fully managed control plane that GKE has always provided, the Autopilot mode of operation automatically applies industry best practices and can eliminate all node management operations, maximizing your cluster efficiency and helping to provide a stronger security posture.

https://storage.googleapis.com/gweb-cloudblog-publish/images/gke_autopilot.max-1800x1800.jpg
GKE Autopilot

GKE has always been about simplifying Kubernetes while still giving you control. Perhaps you still want to customize your Kubernetes cluster configuration or manually provision and manage your cluster's node infrastructure. If so, you can continue to use the existing mode of operation, now referred to as Standard, which provides the same configuration flexibility that GKE offers today.

https://storage.googleapis.com/gweb-cloudblog-publish/images/gke_zonal_cluster.max-900x900.jpg
GKE Standard
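
To make the two modes concrete, here is a minimal sketch of creating a cluster in each mode from the gcloud CLI. The cluster names, region, zone, and machine settings below are placeholders, not recommendations.

# Autopilot mode: no node pools, machine types, or node counts to specify;
# GKE provisions and manages the nodes for you.
gcloud container clusters create-auto my-autopilot-cluster \
    --region=us-central1

# Standard mode: you choose and manage the node configuration yourself.
gcloud container clusters create my-standard-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=e2-standard-4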

Leave the management to GKE 

Early access customers have found that choosing Autopilot can dramatically improve the performance, security, and resilience of their Kubernetes environments, while reducing the overall operational load of managing their clusters. Here are some of the benefits they are excited about.

Optimize for production like a Kubernetes expert
With Autopilot, GKE creates clusters based on battle-tested and hardened best practices learned from Google SRE and engineering experience. These optimized configurations are ready for production, helping reduce the GKE learning curve. GKE also automatically provisions cluster infrastructure based on your workload specifications and can take care of managing and maintaining the node infrastructure. 
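
As a hedged sketch of what that looks like in practice, you deploy a workload with ordinary Kubernetes resource requests and Autopilot provisions nodes to fit it; the names, image, and request sizes below are illustrative placeholders.

# Point kubectl at the Autopilot cluster (names are placeholders).
gcloud container clusters get-credentials my-autopilot-cluster --region=us-central1

# Deploy a workload; Autopilot sizes and provisions nodes to satisfy these
# requests, so there is no node pool for you to create or scale.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:1.0
        resources:
          requests:
            cpu: "500m"
            memory: "2Gi"
EOF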

“Reducing the complexity while getting the most out of Kubernetes is key for us and GKE Autopilot does exactly that!” - Mario Kleinsasser, team leader at STRABAG BRVZ

Enjoy a stronger security posture from the get-go
GKE already does a lot to help secure your cluster, from hardening at the lowest hardware level up through the virtualization, operating system, Kubernetes, and container layers. With Autopilot, GKE helps secure the cluster infrastructure based on years of experience running the GKE fleet. Autopilot implements GKE hardening guidelines and security best practices, utilizing unique Google Cloud security features such as Shielded GKE Nodes and Workload Identity. In addition, Autopilot blocks certain features deemed less safe, such as external IP Services and legacy authorization, and disables CAP_NET_RAW. By locking down individual Kubernetes nodes, Autopilot further helps reduce the cluster's attack surface and minimize ongoing security configuration mistakes.
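
For instance (a hedged illustration; the exact behavior and error message depend on your cluster version), a Pod that tries to add the NET_RAW capability is the kind of request Autopilot's policy enforcement is designed to reject at admission time rather than allow onto a node.

# Attempt to add the NET_RAW capability; on Autopilot this request is
# expected to be rejected by policy enforcement (name and image are placeholders).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: netraw-test
spec:
  containers:
  - name: app
    image: gcr.io/google-samples/hello-app:1.0
    securityContext:
      capabilities:
        add: ["NET_RAW"]
EOF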

Use Google as your SRE for both nodes and the control plane
Google SRE already handles cluster management for GKE; with Autopilot, Google SREs manage your nodes as well, including provisioning, maintenance, and lifecycle management. Because Autopilot nodes are locked down, sysadmin-level modifications that could leave nodes in an unsupportable state are prevented. Autopilot also supports maintenance windows and Pod Disruption Budgets for maintenance flexibility. In addition to GKE's SLA on hosts and the control plane, Autopilot includes an SLA on Pods, an industry first.
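
A brief sketch of both knobs, with placeholder names and times: a recurring maintenance window set on the cluster, and a PodDisruptionBudget that caps how many replicas of the workload can be disrupted at once.

# Configure a recurring weekend maintenance window (cluster name, region,
# and times are placeholders).
gcloud container clusters update my-autopilot-cluster \
    --region=us-central1 \
    --maintenance-window-start=2021-03-06T04:00:00Z \
    --maintenance-window-end=2021-03-06T08:00:00Z \
    --maintenance-window-recurrence='FREQ=WEEKLY;BYDAY=SA,SU'

# Keep at least two replicas of the hello-web workload available during
# voluntary disruptions. (Use policy/v1beta1 on clusters before Kubernetes 1.21.)
kubectl apply -f - <<EOF
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: hello-web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: hello-web
EOF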

“GKE Autopilot is the real serverless K8s platform that we've been waiting for. Developers can focus on their workloads, and leave the management of underlying infrastructure to Google SREs.” - Boris Simandoff, VP of Engineering at Via Transportation, Inc.

Pay for the optimized resources you use
With Autopilot, we provision and scale the underlying compute infrastructure based on your workload specifications and dynamic load, helping to provide highly efficient resource optimization. Autopilot dynamically adjusts compute resources, so there's no need to figure out what size and shape of nodes to configure for your workloads. With Autopilot, you pay only for the Pods you use, billed per second for vCPU, memory, and disk resource requests. No more worrying about unused capacity!
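
Because billing follows the requests in your Pod specs, you can read what you are paying for straight off a running Pod; a small sketch, with the Pod name as a placeholder (Autopilot also applies default requests if a container omits them).

# Print each container's CPU and memory requests, which per-second billing
# is based on (replace the Pod name with one of your own).
kubectl get pod hello-web-abc123 \
    -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources.requests}{"\n"}{end}'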


Welcoming the GKE partner ecosystem

We designed Autopilot to be broadly compatible with how GKE has always worked, as well as with partner solutions. Out of the gate, Autopilot supports logging and monitoring from Datadog and CI/CD from GitLab. Both work just as they do in GKE today—no need to configure things differently or use sidecars. Our goal is full partner compatibility, and many more integrations are expected in the coming months.

Join the Kubernetes revolution

We’re proud of the dramatic efficiency that GKE brings to running complex, distributed applications, and GKE Autopilot represents the next big leap forward in terms of management and operations. Autopilot is generally available today[1]; we encourage you to see the difference it brings to your Kubernetes environment. Get started today with the free tier.

To learn more about GKE Autopilot, tune into this week's episode of the Kubernetes Podcast with GKE Autopilot Product Manager Yochay Kiriaty.

Save the date: the Build the future with Google Kubernetes Engine online event takes place on March 11. Join us to learn what's new in the world of containers and Kubernetes at Google Cloud, get access to exclusive demos, and hear from experts. See you there!


[1] You can currently access Autopilot from the command line interface, and we are gradually rolling it out to the Google Cloud Console for all GCP regions. If you don't see the Autopilot option in Cloud Console yet, use the CLI or try again later.
