
Oracle compels Intel to build a special chip


It isn’t the sort of thing you see a CPU manufacturer doing: producing a special run of processors designed to handle a specific workload. But in an exclusive interview with DatacenterDynamics, a senior Intel manager confirms that it was indeed Oracle that had the idea of building a processor whose performance profiles could be changed on demand, depending on the workload being scheduled for it.

“We’re able to put cores into ultra-low power states, and then bring them back up as needed,” states Patrick Buddenbaum, director of enterprise marketing for Intel.  He’s talking about his company’s new Xeon E7-8895 v2 processor — one notch above what had been Intel’s top-of-the-line, the E7-8890 v2, since February. Though a look at the model number would appear to tell the story of a slight performance improvement, there’s a bigger story behind the replacement of the “0” with a “5.” It’s about a feature that a database software company — in this case, Oracle — literally requested of Intel. As Buddenbaum tells us, not only did Intel comply with the feature request, but it made that feature available to Oracle first and foremost.

“We did work with Oracle,” he explains. “They came to us and said, ‘Hey, we love the characteristics of these different configurations. Rather than having a customer purchase a specific SKU — such as 15-core 2.8 GHz, or 6-core 3.4 GHz — is there a way we can take advantage of these power states to be able to then dynamically guarantee, if you will, that one part can operate in these different configurations?’”

Chilly three-way
Last February, Intel introduced the 8890 v2 model, whose specs max out at 15 cores clocked at 2.8 GHz. The key selling point for the 8890 was its maximum cache capacity for advanced analytics (a colossal 37.5 MB), especially for in-memory databases along the trail first blazed by SAP HANA. Oracle is one of the companies following that trail, with the latest relaunch of its 12c database incorporating far more in-memory capabilities than it had before.

But also in that new v2 family were some scaled-down options, including the E7-8891 v2 with 10 cores clocked at 3.2 GHz, and the E7-8893 v2 with 6 cores clocked at 3.4 GHz. Fewer cores, greater clock speed. This choice gave customers the opportunity to evaluate their usual workloads and choose servers with processors best suited to the task at hand.

A “big data” processing unit running Hadoop may be better suited to more cores, because such highly iterative tasks are more conducive to parallelism. By contrast, an analytics unit doing more number-crunching, where tasks are more scalar and less conducive to being subdivided into threads, may be better suited to a processing environment with fewer cores but more speed.
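That cores-versus-clock tradeoff can be made concrete with a textbook Amdahl's-law throughput model. This is my own simplification for illustration, not Intel's or Oracle's sizing methodology; the core counts and clock speeds are the three SKU profiles cited in this article:

```python
def relative_throughput(parallel_fraction, cores, clock_ghz):
    """Crude Amdahl's-law model: the serial share of the work runs on one
    core, the parallel share splits across all cores; clock scales both."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

# The three profiles cited in the article: (cores, clock in GHz).
profiles = {"8890 v2": (15, 2.8), "8891 v2": (10, 3.2), "8893 v2": (6, 3.4)}

for name, (cores, ghz) in profiles.items():
    hadoop_like = relative_throughput(0.95, cores, ghz)  # highly parallel
    scalar_like = relative_throughput(0.30, cores, ghz)  # mostly serial
    print(f"{name}: parallel {hadoop_like:.1f}, serial {scalar_like:.1f}")
```

Run with those (assumed) workload fractions, the 15-core profile wins on the highly parallel job while the 6-core, higher-clocked profile wins on the mostly serial one, which is exactly the choice the SKU lineup forces a buyer to make up front.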

Oracle is a company that likes to sell products that do pretty much everything in a given category (its CEO, Larry Ellison, just loves to make lists) out of the box. The good/better/best motif, or the two-extremes motif with a best-of-both-worlds option in the middle, is not Oracle’s way of selling servers. When Oracle re-premiered 12c, Ellison also stood on stage in front of his company’s latest Exadata servers, and he hinted (several times) that Oracle may have been working more directly with Intel than ever before to improve Exadata’s performance.

Usually that does not mean the processor producer takes a tip from the server producer and hardware maker with regard to alterations to the processor product line.  But this time, it did.  As a result, the Xeon E7-8895 v2 allows its own performance profile to be adjusted on demand, adopting the specifications of an 8890, 8891, or 8893 while live, without rebooting.  Buddenbaum tells us this is on account of a technology that some Xeons have already been making use of: powering down unused cores, for what he calls a “very-low-power state.”
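For a sense of what runtime reconfiguration looks like at the operating-system level, here is a minimal sketch of the generic Linux CPU-hotplug interface in sysfs. This only illustrates the "no reboot" idea; the 8895's actual profile switching is validated and guaranteed lower in the stack by Intel, and the helper names below are my own:

```python
from pathlib import Path

def online_path(cpu: int) -> str:
    """Per-core hotplug knob exposed by the Linux kernel (cpu0 has none)."""
    return f"/sys/devices/system/cpu/cpu{cpu}/online"

def apply_profile(total_cores: int, active_cores: int) -> None:
    """Leave the first `active_cores` cores online and park the rest.
    Needs root and a kernel built with CPU hotplug support."""
    for cpu in range(1, total_cores):
        Path(online_path(cpu)).write_text("1" if cpu < active_cores else "0")

# e.g. step a 15-core profile down to a 6-core one, live:
# apply_profile(total_cores=15, active_cores=6)
```

Parking a core this way is what lets the remaining cores run in a different power and frequency envelope, which is the mechanism Buddenbaum describes.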

“The work we did with Oracle was basically taking advantage of that feature and capability to then essentially validate, and then as part of our manufacturing process, be able to guarantee, these different performance and SKU levels,” says Intel’s senior marketing manager.  “We are able to validate that the 8895 can run in all of these different configurations.  For example, the 15-core 2.8 GHz 8890 SKU would certainly be validated for that configuration.  But there’s no guarantee that it could run as a six-core 3.4 GHz product.”

Tight coherence
So the 8895 is not a redesign of the 8890, nor is it a one-off product made exclusively for Oracle.  But its appearance in the Exadata Database Machine X4-8 is the result of a close enough relationship between the two companies that Intel could accommodate Oracle’s request. And while working relationships between software and hardware producers are not uncommon, we are now entering a phase of the evolution of manufacturing where even pre-existing relationships can have new impacts on production and marketing.

In a taped interview released Thursday morning by Oracle, Intel’s datacenter group general manager Diane Bryant gave indications that this is the beginning of a new trend for her company. “The Xeon E7 v2 that we launched earlier this year was directly targeted at in-memory analytics and in-memory computing,” said Bryant, “so it’s very well-optimized for the Oracle solution. And as a definition partner, we worked very closely to make sure that the instructions and the features that we included in the E7 v2 would benefit the Oracle software stack.

“So all of that optimization went into the Exadata platform that’s being launched,” Bryant continues, “and we’ve got some exciting announcements coming up.  As our collaboration continues, we are actually co-defining — with the Oracle engineers [and] with the Intel engineers — next-generation instructions that will further accelerate the Oracle database solution, and those will be coming in future processor generations.  Things such as memory enhancements, vector manipulation acceleration, and cluster interconnect performance acceleration.”

Buddenbaum expands on this point a bit, telling us that there’s explicit evidence of this working relationship in the specifications not just of the 8895, but the 8890 launched last February:  “The sheer size of the memory footprint on our systems — 6 TB for a four-socket, 12 TB for an eight-socket — matches perfectly, as we made the industry transition to in-memory computing.

If you look at Oracle embracing the columnar database format as part of [12c], they were able to leverage our CPU-level, advanced vector instructions [AVX]. It helped accelerate their database performance. So it really was grounded in working with the database organization.”
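The connection Buddenbaum draws between the columnar format and AVX comes down to data layout: a column store keeps each attribute in one contiguous array, which is exactly the access pattern SIMD instructions accelerate. A small, hypothetical Python sketch of the layout difference (plain Python cannot issue AVX instructions itself, so this shows only the layout, not Oracle's implementation):

```python
# Row store: one tuple per record, attributes interleaved in memory.
rows = [(i, i * 2.5, f"name{i}") for i in range(1000)]

# Column store: one dense array per attribute.
columns = {
    "id": [r[0] for r in rows],
    "price": [r[1] for r in rows],
    "name": [r[2] for r in rows],
}

# Aggregating one attribute scans a single contiguous array...
col_total = sum(columns["price"])
# ...instead of striding through every full tuple.
row_total = sum(r[1] for r in rows)

assert col_total == row_total
```

On real hardware, that contiguous "price" column is what a vectorized scan loads several values at a time with AVX, while the row layout forces the CPU to skip over the unrelated attributes of each record.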

There isn’t anything particularly prohibiting Intel from acting on its pre-existing partnerships with other companies for similar gains in the future.  But when I stretched the scope of the question to encompass Oracle’s hardware competitors like HP and Dell, rather than its software competitors like SAP and IBM, Intel’s Buddenbaum seemed a bit reluctant to have his company openly welcome design suggestions from server makers.

While the Intel/Oracle relationship clearly benefits Exadata, it apparently began with an interest in optimizing Oracle’s software workload.

So if HP is looking for an edge for its ProLiant servers, it may need its Vertica analytics database division to approach Intel first.
