Introducing Anthos for VMs and tools to simplify the developer experience

October 13, 2021
Jeff Reed

VP, Product Management, Cloud Security

Chen Goldberg

GM & VP, Cloud Runtimes

When it comes to software development using Google Cloud, we have three guiding principles. First, developing on Google Cloud needs to be open—we rely heavily on open-source technologies so that it's easier to move apps between environments, recruit skilled developers, and access the latest innovations sooner. Second, developing for Google Cloud should also be easy—we strive to offer intuitive, integrated tools that run well wherever you build your code, while minimizing your operational overhead. Finally, running on Google Cloud should be transformative—we offer services that help unleash your imagination, along with best practices and professional services to help you bring your ideas to life. 

Today, at Google Cloud Next ‘21, we announced a variety of new tools and capabilities to deliver on those principles. 

Opening Anthos to virtual machines 

Since we announced Anthos, our open-source-based platform for hybrid and multicloud deployments, in 2018, it has seen a strong reception from customers and partners. In fact, in Q2 2021, Anthos compute under management grew more than 500% year over year. Anthos unifies the management of infrastructure and applications across on-premises, edge, and multiple public clouds, and ensures consistent operations at scale. Based on Google Kubernetes Engine (GKE), Anthos was originally designed to run applications in containers. To help you make that transition, Migrate for Anthos and GKE automates the process of migrating and modernizing existing apps from a variety of virtual machine environments to containers. 

While we have seen many customers make the leap to containerization, some are not quite ready to move completely off of virtual machines (VMs). They want a unified development platform where developers can build, modify, and deploy applications residing in both containers and VMs in a common, shared environment. Today, we are announcing Anthos for Virtual Machines in preview, which lets you standardize on Kubernetes while continuing to run workloads that cannot easily be containerized in virtual machines. Anthos for VMs helps platform teams standardize on a single operating model, processes, and tooling; enables incremental modernization efforts; and supports traditional workloads such as Virtual Network Functions (VNFs) and stateful monolithic workloads. 

You can take advantage of Anthos for VMs in two ways: by attaching your vSphere VMs, or by shifting your VMs as-is. For customers with active VMware environments, the Anthos control plane can now connect to your vSphere environment and attach your vSphere VMs, allowing you to apply consistent security and policies across clusters, gain visibility into the health and performance of your services, and manage traffic for both VMs and containers. Alternatively, Anthos for VMs lets you shift VMs as-is onto Anthos with KubeVirt, an open-source virtualization API for Kubernetes, so you can build, modify, and deploy applications residing in both containers and VMs on a common, shared Anthos environment. This is a great option for organizations that prefer open-source virtualization, which often also prefer to run Anthos on bare metal. To help you get started, we provide a fit assessment tool to identify which approach to take. 
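
To make the KubeVirt path concrete, here is a minimal sketch of what running a VM as a Kubernetes resource looks like. It uses the standard KubeVirt VirtualMachine API; the cluster is assumed to already have KubeVirt enabled, and the VM name and disk image below are hypothetical placeholders rather than Anthos-specific values.

```
# Run a VM alongside containers by declaring it as a KubeVirt resource.
kubectl apply -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: legacy-app-vm                  # hypothetical VM name
spec:
  running: true                        # start the VM immediately
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: gcr.io/my-project/legacy-app-disk:latest   # hypothetical disk image
EOF
```

Once applied, the VM can be managed with the same kubectl tooling, policies, and service constructs used for the surrounding containers.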

Taking your Anthos experience further

We’re also making it easier for you to manage containerized workloads already running in other clouds through Anthos. While you can already run containers in AWS and Azure from Anthos, we’re taking this a step further with the new Anthos Multi-Cloud API. Generally available in Q4 ‘21, this new API lets you provision and manage GKE clusters running on AWS or Azure infrastructure directly from the command line interface or the Google Cloud Console, all while being managed by a central control plane. This gives you a single API to manage all your container deployments regardless of which major public cloud you're using, thus minimizing the time you spend jumping between user interfaces to accomplish day-to-day management tasks like creating, managing, and updating clusters. 
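
For example, a cluster on AWS can be provisioned from the same gcloud surface you already use for GKE. The following is a hedged sketch: the command group is `gcloud container aws clusters`, but the full set of required flags (networking, IAM, encryption) depends on your AWS setup, and every ID and version below is a placeholder.

```
# Sketch: provision an Anthos-managed GKE cluster on AWS infrastructure.
# All resource IDs and versions are placeholders; additional IAM and
# networking flags are required in practice.
gcloud container aws clusters create demo-aws-cluster \
    --location=us-east4 \
    --aws-region=us-east-1 \
    --cluster-version=CLUSTER_VERSION \
    --vpc-id=VPC_ID \
    --subnet-ids=SUBNET_ID

# The same surface handles day-to-day operations, e.g. listing clusters:
gcloud container aws clusters list --location=us-east4
```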

Over the past year, we’ve brought some of the innovations originally developed for hybrid and multicloud use cases in Anthos back to GKE running in Google Cloud. Specifically, Anthos Config Management and Anthos Service Mesh are now generally available for GKE as standalone services with pay-as-you-go pricing. GKE customers can now use Anthos Config Management to take advantage of config and policy automation at a low incremental per-cluster cost, and use Anthos Service Mesh to enable next-level security and networking on container-based microservices.
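
As a rough sketch of what that config automation looks like in practice, Config Sync (part of Anthos Config Management) can point a GKE cluster at a Git repository of declarative config and policy via a RootSync resource. The repository URL, branch, and directory below are placeholders, and the exact API version may vary by release.

```
# Sketch: sync cluster config and policy from a Git repo with Config Sync.
kubectl apply -f - <<EOF
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/my-org/cluster-config   # placeholder repo
    branch: main
    dir: clusters/prod                                # placeholder directory
    auth: none
EOF
```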

Last but not least, we are excited to announce that starting today, Anthos Service Mesh is generally available to support a hybrid mesh. This gives you the flexibility to have a common mesh that spans both your Google Cloud and on-prem deployments. 

Customers like Western Digital have already experienced many benefits from adopting Anthos as their application modernization platform:

"As a global storage leader with sophisticated manufacturing facilities around the world, Western Digital sees cloud technology as an enabler of our key business priorities: reducing time to deliver products and services, rationalizing our entire application footprint, and meeting customer demand for IoT and edge applications,” said Jahidul Khandaker, senior vice president and CIO, Western Digital. “Anthos is our unified management platform of choice—it gives us insights across our Google Cloud and on-premises environments, while keeping the doors open for a multi-cloud future. Anthos has delivered several advantages for our developers: a richer user experience, greater security, and enhanced flexibility to manage factory applications—no matter where they reside—on-prem, in the cloud or a mix of both."

Easy does it

In addition to being an open platform, we strive to make Google Cloud easy to use for operators as well as developers. For example, earlier this year we introduced GKE Autopilot, a mode of operation in GKE that simplifies operations by offloading management of the cluster's infrastructure, control plane, and nodes to Google. With GKE Autopilot, customers like Ubie, a Japan-based healthcare technology company, have eliminated the need to configure and maintain infrastructure, helping their development teams focus on making healthcare more accessible.
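
As a quick illustration, creating an Autopilot cluster is a single command, after which workloads are deployed as usual and capacity is provisioned automatically. The cluster name and region are placeholders; gcr.io/google-samples/hello-app is a public sample image.

```
# Sketch: create a GKE Autopilot cluster; Google manages nodes, scaling,
# and the control plane.
gcloud container clusters create-auto demo-autopilot-cluster \
    --region=us-central1

# Deploy a workload as usual; nodes are provisioned to fit the Pods.
kubectl create deployment hello-app --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-app --type=LoadBalancer --port=80 --target-port=8080
```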

With Cloud Run, our serverless compute platform, you can abstract away infrastructure management entirely. This year, our focus has been on bringing the simplicity of Cloud Run to more workloads, like traditional applications written in Java Spring Boot, ASP.NET, and Django, among others. Along with a new second-generation execution environment for enhanced network and CPU performance, we’ve added committed-use discounts and new CPU allocation controls and pricing, allowing you to save up to 17% and 25%, respectively, on your compute bill. New connectors for Workflows, integration between Cloud Functions and Secret Manager, and support for min instances are just a few of the other ways we’ve made it easier to build modern, serverless apps. 
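
For example, the sketch below deploys a containerized app to Cloud Run on the second-generation execution environment with CPU always allocated and a warm minimum instance. The service name, image path, and region are placeholders, and flag availability may depend on your gcloud version.

```
# Sketch: deploy to Cloud Run with the gen2 execution environment, keep CPU
# allocated outside of request processing, and keep one instance warm.
gcloud run deploy legacy-web-app \
    --image=us-central1-docker.pkg.dev/PROJECT_ID/apps/legacy-web-app:v1 \
    --region=us-central1 \
    --execution-environment=gen2 \
    --no-cpu-throttling \
    --min-instances=1
```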

Easy for developers 

Developers spend a lot of time inside their integrated development environments (IDEs), writing code. Last year we announced Cloud Shell Editor, which makes the process of writing code as seamless as possible. It comes with your favorite developer tools (e.g., docker, minikube, skaffold, and many more) preinstalled, and this year, we added ~100 live tutorials to it—no more switching between the documentation, the terminal, and your code! 

Once that code is ready, you want building and deploying it to be as seamless as possible. Today we are announcing Cloud Build Hybrid, which lets you build, test, and deploy across clouds and on-prem systems, so developers get consistent CI/CD tooling across their environments and platform engineers don't have to worry about maintaining and scaling build infrastructure. Cloud Build is also integrated with Artifact Registry, which now lets you store not only container images but also language-specific packages in one place. Finally, with the recently launched Google Cloud Deploy, a managed continuous delivery service initially for GKE, we’re making it easy to scale your delivery pipelines across your organization.
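
To make that flow concrete, here is a minimal, hedged sketch of a Cloud Build config that builds a container image and stores it in an Artifact Registry repository; the project, repository, and image names are placeholders. A Google Cloud Deploy pipeline can then pick up the resulting image for delivery to GKE.

```
# Sketch: build an image with Cloud Build and push it to Artifact Registry.
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:v1', '.']
images:
  - 'us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:v1'
EOF

gcloud builds submit --config=cloudbuild.yaml .
```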

Easy for operators

When your applications are up and running, you need to observe and analyze them for better operations and business insights. While we already offer a fully managed metrics and alerting service with Cloud Monitoring, some Kubernetes users want to continue using open-source Prometheus without the scaling and management headaches. This is precisely why today we are announcing the preview of Managed Service for Prometheus, helping you avoid vendor lock-in and delivering compatibility with your existing Prometheus alerts, workflows, and Grafana dashboards. Now you have all of the benefits of Prometheus, minus the management hassle. 
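
To illustrate the compatibility point: an existing Prometheus alerting rule like the one below, written in standard PromQL, is the kind of artifact that can carry over unchanged, since only the collection and storage layer moves to the managed service. The metric name and thresholds are placeholders.

```
# Sketch: a standard Prometheus alerting rule that keeps working, because the
# managed service is PromQL-compatible.
cat > rules.yaml <<'EOF'
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{code=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "5xx error rate above 5% for 10 minutes"
EOF
```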

To give you easy diagnostics and deeper insights from across your business and systems, today we also combined Cloud Logging with the performance and power of BigQuery to introduce Log Analytics. Currently in preview, Log Analytics lets you rapidly store, manage, and analyze log data, helping you move your operations from a reactive to a proactive model. 
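
As a hedged sketch of what that looks like, once a log bucket is set up for Log Analytics and linked to BigQuery you can explore your logs with SQL. The project, dataset, view, and column names below are illustrative assumptions about the linked schema, not exact values.

```
# Sketch: summarize the last hour of log entries by severity with SQL.
# PROJECT_ID, dataset, and view names are placeholders.
bq query --use_legacy_sql=false '
  SELECT severity, COUNT(*) AS entries
  FROM `PROJECT_ID.my_log_dataset._AllLogs`
  WHERE timestamp > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
  GROUP BY severity
  ORDER BY entries DESC'
```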

Zero-trust simplified for application developers

We also make it easy for developers to build secure applications from the get-go, whether they’re writing code, running it through the CI/CD pipeline, or in production. This zero-trust software supply chain is made possible by fully managed services that provide you with a consistent way to define and enforce policy, establish provenance, and prevent modification or tampering. 

And we’re continuing to enhance our zero-trust software supply chain capabilities with new features. For example, developers can now scan containers for vulnerabilities using the simple “gcloud artifacts docker images scan” command. The integration between Cloud Run and Binary Authorization is also now generally available, letting you ensure, in just a few clicks, that only trusted container images make it to production. In addition, Binary Authorization now integrates with Cloud Build to automatically generate digital signatures, making it easy to set up deploy-time policies that allow only images signed by Cloud Build. Learn more about how we are making security easier here.
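
Putting those pieces together, the sketch below scans an image on demand with the command mentioned above and then deploys it to Cloud Run with Binary Authorization enforcement. The image path and service name are placeholders, and the --binary-authorization flag assumes a recent gcloud release.

```
# Sketch: scan an image for vulnerabilities before promoting it.
gcloud artifacts docker images scan \
    us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:v1

# Deploy with Binary Authorization so that only images satisfying the
# project's policy (e.g. signed by Cloud Build) are admitted.
gcloud run deploy my-app \
    --image=us-central1-docker.pkg.dev/PROJECT_ID/my-repo/my-app:v1 \
    --region=us-central1 \
    --binary-authorization=default
```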

Transform your cloud with Google

No matter where you are along the journey to transform your applications, we are here to partner with you. Whether it's with the new product functionality we described today at Next, research and best practices such as the 2021 Accelerate State of DevOps report from Cloud’s DevOps Research and Assessment (DORA) team, or professional services such as the Google Cloud Application Modernization Program (CAMP), we’re here to help.
