In his keynote at the EMC World conference in Las Vegas on Tuesday, Paul Maritz, the former CEO of EMC subsidiary VMware who has now taken the helm of EMC's latest venture Pivotal, outlined in some detail what the new company will be doing and what its product road map for the rest of the year will look like.

EMC announced the creation of Pivotal in April. The storage giant owns 62% of the firm, VMware owns 28% and US industrial behemoth GE owns the remaining 10%.

While its parent companies EMC and VMware are busy automating data center hardware, Pivotal is thinking about what can be built on top of that fully automated software-defined data center of the future. It will be a new platform, higher up the stack, that will bring together big data, “fast data” and the new generation of applications, while emphasizing choice.

“We believe that there’s a need and a huge opportunity to drive new value in our industry through a new generation of applications that exploit, amongst other things, big data and the full power of the cloud,” Maritz said. “In this [new] generation, we're going to see new data fabrics and the new fundamental compute architecture sitting underneath that.”

By new-generation applications, EMC means applications built with Node.js, Ruby on Rails, Spring, Python and Hadoop. The company contrasts these with “traditional” business applications built on Java, Oracle databases, SAP or Microsoft Dynamics.

Even as new architectures are built for next-generation applications, there is no sign that traditional applications will go away. EMC is addressing this by ensuring they can run efficiently in software-defined data centers while still being able to interact with next-generation apps.

Learning from internet giants

One of the things Pivotal is working on is bringing into the enterprise some of the attributes internet giants like Google and Facebook have built into their architectures. Google, for example, could not have indexed the information on the web on the compute architectures that existed when it started, so the company had to innovate.

The first thing Google did was build the capability to store and reason over much larger data sets much more quickly than existing architectures could at the time.

One of the first innovations to come out of this effort was the Google File System (GFS). GFS, Maritz said, has become a “very important substrate” in this new generation of computing.

While GFS is proprietary Google technology, it has served as a cornerstone of the popular open-source software framework Apache Hadoop, which turns commodity hardware into powerful data-processing clusters. Hadoop's counterpart to GFS is the Hadoop Distributed File System (HDFS).
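
To make the file-system layer concrete, here is a minimal sketch, assuming a hypothetical cluster reachable at hdfs://namenode:8020, that uses Hadoop's Java FileSystem API to write a file into HDFS and read it back; the addresses and paths are illustrative only.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical namenode URI; a real deployment would supply its own.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

            // Write a small file; HDFS splits it into blocks and replicates
            // them across the commodity machines in the cluster.
            Path path = new Path("/data/example.txt");
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello, hdfs".getBytes(StandardCharsets.UTF_8));
            }

            // Read the file back through the same abstraction.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }

The same FileSystem interface also fronts local disks and other stores, which is part of what makes HDFS a convenient substrate for the frameworks layered above it.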

EMC recently launched a Hadoop distribution called Pivotal HD.

Second, the internet giants have a culture of rapid innovation. Facebook, for example, encourages every newly hired developer to write and deploy an application on its website on their first day of work.

“They have figured out how to build their processes, their infrastructure, to allow them to introduce new experiences very rapidly,” Maritz said. In order to do that, you have to automate.

“They built their data centers, their compute architectures, their data fabrics, with automation in mind.”

If today's companies want to be competitive in the future, “they're going to have to learn how to get some of this mojo,” and Pivotal wants to be the company that provides it to enterprises.

Big and 'fast' data

Future applications will also have to be able to take advantage of the “internet of things” and the amount of data that will become available when nearly every device in the world has an IP address. Maritz cited GE as an example.

A single transatlantic flight by an aircraft with GE engines can generate as much as 30 terabytes of data. GE would like to be able to ingest that data, analyze it and react to it.

This is what EMC executives mean when they talk about “fast data”: the ability to absorb massive amounts of data and analyze it in real time.
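
As a toy illustration of the idea, and not any EMC or Pivotal API, the sketch below keeps a rolling aggregate over the most recent readings so each new value can be analyzed the moment it is ingested rather than after a batch job completes; the window size and readings are made up.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Toy "fast data" sketch: maintain a rolling average over the newest
    // readings so analysis happens at ingest time, not in a later batch.
    public class RollingAverage {
        private final int windowSize;
        private final Deque<Double> window = new ArrayDeque<>();
        private double sum;

        public RollingAverage(int windowSize) {
            this.windowSize = windowSize;
        }

        // Ingest one reading and return the up-to-date windowed average.
        public double ingest(double reading) {
            window.addLast(reading);
            sum += reading;
            if (window.size() > windowSize) {
                sum -= window.removeFirst();
            }
            return sum / window.size();
        }

        public static void main(String[] args) {
            RollingAverage sensor = new RollingAverage(3);
            for (double r : new double[] {640.0, 652.5, 648.0, 699.9}) {
                System.out.printf("reading=%.1f avg=%.1f%n", r, sensor.ingest(r));
            }
        }
    }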

Product: Pivotal One

These are all capabilities Pivotal is working to build into its new platform for the cloud era.

The first release of the platform will be called Pivotal One, and Maritz expects the company to roll it out in the fourth quarter. It will be “anchored in open source”, compatible with a variety of cloud types and mindful of developers in the enterprise.

Pivotal wants to build a semantic capability on top of data fabrics built around a scale-out object store (in the form of HDFS). This semantic capability will be based on in-memory technology.

That means scale-out memory technology, where lots of memory spaces work together to reason over and interact with massive pools of data. Data sitting on top of the HDFS substrate will be ingested, cleaned up and analyzed in real time.
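
A schematic sketch of that scale-out idea, again not Pivotal's actual implementation: records are spread across several in-memory "nodes" by key hash, and a query is scattered to every partition in parallel before the partial results are gathered into one answer.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Schematic scatter-gather over partitioned in-memory data: each map
    // stands in for the memory space of one node in a cluster.
    public class ScatterGatherSketch {
        static final int NODES = 4;
        static final List<Map<String, Long>> nodes = new ArrayList<>();

        public static void main(String[] args) {
            for (int i = 0; i < NODES; i++) {
                nodes.add(new ConcurrentHashMap<>());
            }

            // Route each record to one node's memory space by key hash.
            put("flight-001", 30L);  // terabytes generated per flight
            put("flight-002", 28L);
            put("flight-003", 31L);

            // Scatter: each node sums its own partition in parallel.
            // Gather: combine the partial sums into the final result.
            long totalTb = nodes.parallelStream()
                    .mapToLong(n -> n.values().stream().mapToLong(Long::longValue).sum())
                    .sum();
            System.out.println("total terabytes across all nodes: " + totalTb);
        }

        static void put(String key, Long value) {
            int node = Math.floorMod(key.hashCode(), NODES);
            nodes.get(node).put(key, value);
        }
    }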

Technology from another EMC subsidiary, Greenplum, will provide the capability to do in-memory scale-out queries, and technology from GemFire (another acquisition) will provide in-memory transactional capability.
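
For the transactional side, here is a minimal sketch against the GemFire Java API of that era, assuming a standalone peer cache and a hypothetical "balances" region; exact class names and setup vary by version, so treat it as an outline rather than a recipe.

    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;
    import com.gemstone.gemfire.cache.CacheTransactionManager;
    import com.gemstone.gemfire.cache.Region;
    import com.gemstone.gemfire.cache.RegionShortcut;

    // Sketch: update entries in a partitioned in-memory region atomically.
    public class GemFireSketch {
        public static void main(String[] args) {
            // Start a standalone peer cache (a real cluster would use locators).
            Cache cache = new CacheFactory().create();
            Region<String, Integer> balances = cache
                    .<String, Integer>createRegionFactory(RegionShortcut.PARTITION)
                    .create("balances");  // hypothetical region name

            balances.put("acct-1", 100);

            // Group the read and the write into one atomic unit of work.
            CacheTransactionManager txm = cache.getCacheTransactionManager();
            txm.begin();
            balances.put("acct-1", balances.get("acct-1") - 25);
            txm.commit();

            System.out.println(balances.get("acct-1"));  // prints 75
            cache.close();
        }
    }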

Another big part of Pivotal's and EMC's strategy for taking the next step in the industry's evolution is ensuring application mobility across different types of hardware and different clouds.

Pivotal's infrastructure automation layer is its open-source Platform-as-a-Service technology, Cloud Foundry, but the goal is to make its solutions compatible with other vendors' clouds as well.

Finally, the company also has a services team called Pivotal Labs (the name of the company EMC bought and used as the core on which to build its new subsidiary), which will help users adopt any of the layers Pivotal offers for their needs.

Customers will be able to buy Pivotal One as a suite or buy parts of it, some of which, such as Pivotal HD, are already available.