Sh*t happens; how your vendor reacts is what matters. Google Cloud had a recent high-visibility outage, and what has impressed me has been the company's transparency and ownership throughout.

TL;DR: In early May 2024, a Google Cloud incident impacted UniSuper in Australia due to a misconfiguration of the Google Cloud VMware Engine (GCVE) service. A blank parameter caused the customer's private cloud to default to a fixed term, leading to automatic deletion at the end of that period. Google Cloud worked with UniSuper to restore services and implemented measures to prevent future occurrences. The incident affected only this customer and one service, with no data loss thanks to robust backups.

You can read Google's full statement here: https://lnkd.in/e3uB6PHu

You can also read our analysis of the availability of the three largest hyperscalers for 2023 here: https://lnkd.in/eiHwF7dW
-
About the GCP incident affecting #UniSuper in Australia.

What happened? During the initial deployment of a Google Cloud VMware Engine (GCVE) Private Cloud for the customer using an internal tool, there was an inadvertent misconfiguration of the GCVE service by Google operators due to leaving a parameter blank. This had the unintended and then unknown consequence of defaulting the customer's GCVE Private Cloud to a fixed term, with automatic deletion at the end of that period. The incident trigger and the downstream system behavior have both been corrected to ensure that this cannot happen again. The incident did not impact any Google Cloud service other than this customer's one GCVE Private Cloud, and no other customers were affected.

My question is: is a single cloud enough to secure your backups, or should we always use multicloud and cross-cloud backups?

https://lnkd.in/ebSCuf5f
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
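The post above asks whether cross-cloud backups are worth it. As a minimal sketch, assuming backups already land as objects in a Google Cloud Storage bucket, the newest one could be copied to an S3 bucket in another cloud. The bucket names, prefix, and credentials below are illustrative assumptions, not anything from the incident report.

from google.cloud import storage  # pip install google-cloud-storage
import boto3                      # pip install boto3

GCS_BUCKET = "example-primary-backups"   # assumed source bucket
S3_BUCKET = "example-offsite-backups"    # assumed destination bucket in another cloud
PREFIX = "daily/"

def copy_latest_backup_cross_cloud():
    # Find the newest backup object in the GCS bucket.
    gcs = storage.Client()
    blobs = list(gcs.list_blobs(GCS_BUCKET, prefix=PREFIX))
    if not blobs:
        raise RuntimeError(f"no backup objects found under {PREFIX!r}")
    latest = max(blobs, key=lambda b: b.updated)

    # Download it and push a copy to the other cloud. Fine for modest sizes;
    # large backups would want streaming or multipart upload instead.
    data = latest.download_as_bytes()
    boto3.client("s3").put_object(Bucket=S3_BUCKET, Key=latest.name, Body=data)
    print(f"copied {latest.name} ({latest.size} bytes) to s3://{S3_BUCKET}/{latest.name}")

if __name__ == "__main__":
    copy_latest_backup_cross_cloud()

A copy held with a second provider is the same idea as the third-party backup UniSuper reportedly fell back on: one provider's incident cannot take out every copy.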
-
Google Cloud's swift response to a recent incident with UniSuper showcases their commitment to transparency and resilience in cloud infrastructure.
- 🌐 Incident involved Google Cloud VMware Engine (GCVE) misconfiguration
- 🔄 Swift recovery thanks to robust backups and customer collaboration
- 🚀 Measures implemented to prevent future incidents
#CloudComputing #Resilience #Transparency
https://lnkd.in/gu6uMgyn
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
-
The story of the UniSuper disaster: during the initial deployment of a Google Cloud VMware Engine (GCVE) Private Cloud for the customer using an internal tool, there was an inadvertent misconfiguration of the GCVE service by Google operators due to leaving a parameter blank. This had the unintended and then unknown consequence of defaulting the customer's GCVE Private Cloud to a fixed term, with automatic deletion at the end of that period. The incident trigger and the downstream system behavior have both been corrected to ensure that this cannot happen again. This incident did not impact any Google Cloud service other than this customer's one GCVE Private Cloud. Other customers were not impacted by this incident.

https://lnkd.in/gwftzWwb
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
-
Ever wondered why backups are a constant IT drumbeat? This Google Cloud incident with UniSuper in Australia shows exactly why. A configuration error led to the deletion of a critical system, highlighting the potential for data loss even with a major cloud provider.

But here's the good news: UniSuper got everything back thanks to their backups. Stored securely in a separate location (Google Cloud Storage in this case), these backups, along with strong disaster recovery practices, were the key to a swift recovery.

This story is a powerful reminder: backups and DR aren't just IT jargon; they're your safety net in a digital world. So, have you checked your backups lately?
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
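Checking your backups lately, as the post asks, can start with something as simple as confirming the newest object in the backup bucket is recent. A minimal sketch, assuming backups are written to a Google Cloud Storage bucket under a backups/ prefix; the bucket name and 24-hour threshold are assumptions.

from datetime import datetime, timedelta, timezone
from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "example-backup-bucket"   # assumed bucket name
MAX_AGE = timedelta(hours=24)      # assumed freshness threshold

def check_backup_freshness() -> bool:
    client = storage.Client()
    blobs = list(client.list_blobs(BUCKET, prefix="backups/"))
    if not blobs:
        print("WARNING: no backup objects found at all")
        return False

    # Compare the newest object's timestamp against the allowed age.
    newest = max(blobs, key=lambda b: b.updated)
    age = datetime.now(timezone.utc) - newest.updated
    if age > MAX_AGE:
        print(f"STALE: newest backup {newest.name} is {age} old")
        return False
    print(f"OK: newest backup {newest.name} is {age} old")
    return True

if __name__ == "__main__":
    check_backup_freshness()

A freshness check is no substitute for periodically restoring a backup end to end, but it catches the quiet failure where backups simply stop being written.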
-
Google provides details about the recent incident in which UniSuper's cloud infrastructure was deleted. 🤦‍♂️ An unexpected series of events led to an accidental misconfiguration during the setup of UniSuper's Private Cloud services, which ultimately caused the deletion of their Private Cloud subscription. It highlights the importance of configuration reviews, #iac, #4eyes principles, and backups 🛡️ https://lnkd.in/e6Pt5Wxq
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
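To make the configuration-review and #4eyes point concrete, here is a hypothetical sketch of how a provisioning request could fail closed on a blank parameter and require a second approver for anything that schedules deletion. The field names are invented for illustration and do not reflect Google's internal tooling.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PrivateCloudRequest:
    name: str
    term: Optional[str]          # e.g. "none" or "fixed-1y"; None or blank means not set
    requested_by: str
    approved_by: Optional[str] = None

class ConfigError(Exception):
    pass

def validate(req: PrivateCloudRequest) -> None:
    # Fail closed: a blank term stops the deployment rather than silently
    # defaulting to a fixed term with automatic deletion at the end of it.
    if not req.term or not req.term.strip():
        raise ConfigError("term is blank; refusing to apply any default")

    # Four eyes: anything that can delete data needs a distinct second approver.
    if req.term.startswith("fixed") and (
        not req.approved_by or req.approved_by == req.requested_by
    ):
        raise ConfigError("a fixed-term (auto-deleting) config needs a second approver")

# Usage: a request like the one in the incident write-up would be rejected up front.
try:
    validate(PrivateCloudRequest(name="customer-pc-1", term="", requested_by="op-1"))
except ConfigError as e:
    print("Rejected:", e)

The design choice is that a missing value is an error, never a default, so "blank parameter means fixed term with automatic deletion" cannot happen silently.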
-
I've been following the recent cloud outage on GCP, and I must say, kudos to Google for providing a detailed RCA on the topic. The bottom line seems to be a configuration issue in the capacity management tool, coupled with faulty defaulting business logic in that tool. However, the default system behavior in the case of missing or buggy parameters left a lot to be desired.

It's interesting to see the blog insisting that this is not a systemic issue in the cloud. But under the shared responsibility model, cloud infrastructure is the responsibility of the provider. We should hope and assume that they have a system of checks and balances that will alert them (if not the customer) if mishaps occur. Not alerting customers about a capacity management exercise and its effects is another area that needs attention. I wonder what the repercussions would be if the provider does not deliver on its responsibilities.

Opinions/thoughts/responses are welcome. #strategy #cloudstrategy #hybridcloud #multicloud #opencloud
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
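On the alerting gap raised above, a small sketch of the idea: scan resources that carry a scheduled deletion date and notify the owner well ahead of it. The inventory list and notify() hook are stand-ins for a real provider API and messaging integration, not an existing Google Cloud feature.

from datetime import date, timedelta

WARN_WINDOW = timedelta(days=30)   # assumed lead time for warnings

# Assumed inventory; in practice this would come from the provider's API.
resources = [
    {"name": "customer-pc-1", "owner": "ops@example.com", "delete_on": date(2024, 6, 1)},
    {"name": "customer-pc-2", "owner": "ops@example.com", "delete_on": None},
]

def notify(owner: str, message: str) -> None:
    # Stand-in for email, pager, or ticketing integration.
    print(f"NOTIFY {owner}: {message}")

def warn_about_pending_deletions(today: date) -> None:
    # Warn on anything whose deletion date falls within the warning window.
    for r in resources:
        delete_on = r["delete_on"]
        if delete_on and today >= delete_on - WARN_WINDOW:
            notify(r["owner"], f"{r['name']} is scheduled for deletion on {delete_on}")

warn_about_pending_deletions(date.today())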
-
Yes, I know we go ON about the importance of backing up your Microsoft and Google (cloud) data. All of these cloud providers even state in their ToS that you are responsible for your own data, and we now have a bloody good example of why. Due to a misconfiguration, the data for UniSuper's entire global account was deleted last week from Google's cloud service. Thank God they had a backup with a third party; otherwise a week of disruption would have been a total data loss across the board.

West View IT backs up our clients' Microsoft 365 data three times a day for exactly this purpose. Does yours? #smallbusiness #itsupport #backup
What caused the UniSuper Google Cloud outage
theregister.com
-
Agreed; lesson learnt on data backup. You can never have too many data backups.

Google Cloud accidentally nukes a major customer's data, including their replicated instances. What are the lessons here? 1. You can never have too many backups. 2. See #1.
What caused the UniSuper Google Cloud outage
theregister.com
-
Misconfigurations in the cloud can be problematic, and in this instance one meant a customer's GCVE Private Cloud was deleted. Kudos to Google: the transparency and communication outlined in the blog here provide important information and learnings. #cloud #Google
Details of Google Cloud GCVE incident | Google Cloud Blog
cloud.google.com
-
Thomas Kurian recently addressed an incident in which the Private Cloud subscription of UniSuper, a Google Cloud customer, was accidentally deleted. According to Kurian, this was a "one-of-a-kind occurrence" that has never happened before with any of Google Cloud's clients globally.

This incident underscores the importance of having a robust Disaster Recovery (DR) strategy and plan. Simply relying on out-of-the-box availability settings in the cloud is not enough. Be prepared for those "one-of-a-kind" events to ensure your data is always protected. Act now! #CloudComputing #DisasterRecovery #GoogleCloud #DataProtection #UniSuper #TechNews
What caused the UniSuper Google Cloud outage
theregister.com