After running a data center for its own services in Oregon for over a decade, Google has now launched its Cloud Platform in the region. The new 'us-west1' region is expected to cut latency across the West Coast.
Additionally, Google has launched two Cloud Machine Learning APIs into open beta.
From coast to coast
In a blog post, the company said that the Oregon Cloud Region offers Google Compute Engine, Google Cloud Storage and Google Container Engine services, with two separate Compute Engine zones for high availability applications.
Initial testing showed a 30-80 percent reduction in latency for users in cities such as Vancouver, Seattle, Portland, San Francisco and Los Angeles when compared to Google Cloud's 'us-central1' region in Council Bluffs, Iowa.
Citing gaming as a key beneficiary of this reduction in lag, Google quoted Paul Manuel, director of Multiplay Game Services. He said: “Regional latency is a major factor in the gaming experience. Google Cloud Platform’s network is one of the best we’ve worked with, from a tech perspective but also in terms of the one-on-one support we’ve received from the team.”
GCP now consists of five regions - Western US, Central US, Eastern US, Western Europe and East Asia.
The company plans to add Tokyo as a region later this year and says that it will announce “more than 10” additional regions in 2017 - an upgrade over previous comments that put the figure at exactly ten.
Google defines regions as “collections of zones. Zones have high-bandwidth, low-latency network connections to other zones in the same region.
“In order to deploy fault-tolerant applications that have high availability, Google recommends deploying applications across multiple zones and multiple regions.
“This helps protect against unexpected failures of components, up to and including a single zone or region.”
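The multi-zone guidance above can be sketched in a few lines. The zone names below follow GCP's region-zone naming convention (e.g. "us-west1-a"), but the placement helper itself is a hypothetical illustration, not part of any Google SDK:

```python
from itertools import cycle

# Two Compute Engine zones in the new Oregon region, per the announcement.
US_WEST1_ZONES = ["us-west1-a", "us-west1-b"]

def spread_across_zones(instance_names, zones):
    """Round-robin instances over the available zones, so that the
    failure of any single zone takes down only a fraction of the fleet."""
    return {name: zone for name, zone in zip(instance_names, cycle(zones))}

placement = spread_across_zones(
    ["web-1", "web-2", "web-3", "web-4"], US_WEST1_ZONES)
# web-1 and web-3 land in us-west1-a; web-2 and web-4 in us-west1-b
```

Spreading replicas across regions as well as zones follows the same pattern, one level up.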
Google AI gets sentimental
Eager to compete with the likes of Amazon Web Services and Microsoft Azure, Google has turned to artificial intelligence to give its cloud an edge.
Only yesterday, it was revealed that Google’s DeepMind had been used to cut PUE in some of its data centers by 15 percent. Now, the company is bringing two IBM Watson-esque Machine Learning APIs into open beta.
The new Google Cloud Natural Language API reveals the structure and meaning of text in English, Spanish and Japanese (other languages forthcoming). Google claims the API can “understand the overall sentiment of a block of text,” identify relevant entities in a block of text and “label them with types such as person, organization, location, events, products and media,” and identify parts of speech and “create dependency parse trees for each sentence to reveal the structure and meaning of text.”
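A sentiment-analysis call is a simple JSON POST. The sketch below builds a request body in the shape of the API's REST `documents:analyzeSentiment` method; the exact field names and endpoint version should be checked against the current API reference rather than taken as definitive:

```python
import json

def sentiment_request(text, language="en"):
    """Build a request body for the Natural Language API's
    sentiment-analysis endpoint (field names are an assumption
    based on the REST surface; verify against the API docs)."""
    return {
        "document": {
            "type": "PLAIN_TEXT",
            "language": language,
            "content": text,
        },
        "encodingType": "UTF8",
    }

body = json.dumps(sentiment_request("The new region cut our latency in half."))
# POSTed (with credentials) to the documents:analyzeSentiment endpoint,
# the response carries a sentiment score and magnitude for the document.
```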
The company says that the API could be useful for digital marketers who can analyze product reviews online, or customer service centers which can determine sentiment from transcribed calls. UK-based online food retailer Ocado was mentioned as an alpha customer.
Talking to the Wall Street Journal, the company revealed that it has worked with one large customer to analyze more than two billion minutes of customer-service calls to understand whether the customer ended up satisfied or not.
“That’s been an intractable, unsolvable problem,” Rob Craft, a machine learning product manager at Google, said.
Also entering open beta is the Cloud Speech API, which gives enterprises and developers access to speech-to-text conversion in more than 80 languages, both for apps and IoT devices.
Google says that more than 5,000 companies signed up for the Speech API alpha, including HyperConnect and VoiceBase.
Building on the alpha, the company is introducing new features such as the ability to add custom words or phrases to improve recognition - for example, having a smart TV listening for “rewind” and “fast-forward”.
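The custom-phrase feature maps onto a hint list in the recognition request. The sketch below follows the shape of the Speech API's REST `recognize` call, with the phrase hints carried in a `speechContexts` field; treat the field names as an assumption to verify against the live documentation:

```python
import base64

def recognize_request(audio_bytes, hints):
    """Build a speech-recognition request body with custom phrase hints,
    e.g. a smart TV listening for "rewind" and "fast-forward".
    Field names are modeled on the REST API and are an assumption."""
    return {
        "config": {
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000,
            "languageCode": "en-US",
            "speechContexts": [{"phrases": hints}],
        },
        "audio": {"content": base64.b64encode(audio_bytes).decode("ascii")},
    }

req = recognize_request(b"\x00\x00", ["rewind", "fast-forward"])
```

Raw audio is base64-encoded inline here; for longer recordings the API also accepts audio stored in Cloud Storage.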
Google’s machine-learning products are “great stuff, and it’s finally packaged in a way that developers really want,” Forrester Research principal analyst John Rymer told WSJ.
“But they’re not alone… And (Amazon) and Microsoft are far ahead.”