Google's newest cloud region went down for 1 hour and 30 minutes due to 'transient voltage' issues that rebooted network hardware.

The issue occurred at Google's australia-southeast2 region in Melbourne, Australia, which had launched just a month earlier, on 25 July.


"Any service that uses Cloud Networking may have seen impact," Google said in a status report about the incident that began on 23 August 2021 19:50 US Pacific time (24 August local time) and ended 21:20.

"From preliminary analysis, the root cause of the issue was transient voltage at the feeder to the network equipment, causing the equipment to reboot. In order to mitigate the issue, traffic within the australia-southeast2 region was redirected temporarily."

Cloud Interconnect suffered 100 percent packet loss, while Cloud Storage, Cloud Run, Cloud SQL, Cloud Filestore, Cloud Spanner, and Cloud Bigtable saw a 100 percent error rate.

Cloud Networking lost public IP traffic connectivity from 19:51 to 20:41, while Cloud NAT experienced control plane failures from 19:51 to 20:00.

Cloud VPN HA dropped up to 83 percent of traffic between 19:51 and 20:21, while Legacy VPN dropped ~100 percent of traffic between 19:51 and 20:41.

Control plane operations on regional Google Kubernetes Engine clusters failed between 19:50 and 20:04, followed by increased latency from 20:05 to 20:41, and all requests to container.googleapis.com failed.

Persistent Disk had up to 100 percent device unavailability between 19:51 and 20:13.

Cloud IAM had an error rate of around 80 percent from 19:52 to 20:10, while Cloud Pub/Sub saw increased latency and error rates of up to 95 percent between 19:50 and 20:12.

As for Cloud Dataproc, new cluster creation failed from 20:09 until 21:20.

"We apologize for the inconvenience this service disruption may have caused," Google said.
