In the fight between the cloud and on-premise, users could end up being the losers; Edge use cases need to get more specific; and rack power densities could reach 60kW without settling the air versus liquid cooling debate, according to a packed keynote panel at DCD>San Francisco that took on the most important issues.
Seven speakers from companies including Facebook, LinkedIn and Switch raised the big issues at the start of Day One of the DCD event, in San Francisco's Marriott Marquis, opening debates that will continue into the second day and beyond.
The panel started by skewering the rumors of the death of the data center, and moved on to propose a more balanced approach to Edge, and point to the realities of the data center world: increasing densities, the standardization of hardware, and a continuing effort to earn the trust of end user customers.
“There's some marketing material out there that says the data center is dead,” Switch CTO Eddie Schutter said - but the reality is more complex.
Enterprise customers, Schutter said, have moved out of their enterprise data centers as “that's not what their core business is, it's not where they make their revenue generally. So they're looking for how to shed those costs, and moving their applications to the cloud is an obvious [approach].”
But, Schutter noted, “what we're seeing is actually a realization among customers that a lot of their applications don't run financially effectively in the cloud, and have moved back towards the data center.”
However, despite being disillusioned with the cloud, these companies don't usually go back to building their own data centers, but accept that they must manage a mixture of solutions.
“I think it's safe to say that most CIOs are looking to move about half or more of their applications to a cloud-native environment, about 40 percent to colocation space, and [keep the] last 10-20 percent in-house.”
This hybrid model is “a huge market opportunity,” said Schutter: “I've not heard one CIO yet say that hybrid is not part of their strategy. So I would assume that hybrid is a requirement - providing a diversity of deployment in cloud, diversity of deployment of physical attributes, a diversity of where it's located.”
Dean Nelson, who recently stepped down as head of Uber’s infrastructure division, agreed.
“It's an ‘and’ not an ‘or.’ We keep saying it's going to be in the cloud, or it's going to be on-prem. When we talk about hybrid, the reality is, it’s going to be percentages,” he said. “Cloud can't serve everything that everybody needs - they serve a lot, they are hyperscale for a reason, they've got platforms that support quite a bit of work - but as you start to get bigger, you have the economies of scale. And the issue [is] when we cross that line, because at some point, you can't afford to be 100 percent in the cloud.”
Nelson believes that when a business passes the 50,000-75,000 server range, “you can get scale and costs that are better than the cloud.” But you're of course still not as big as the cloud players. “So that means you're not everywhere. This is where the ‘and’ comes in.”
Expanding globally - to meet GDPR requirements, data residency laws, or latency demands - is where cloud makes sense again.
When at Uber, Nelson pursued the ‘tripod strategy.’ “And that is to do hyperscale, standardized large deployments to get the economies of scale, then you've got two cloud providers that are enabling capabilities. So for example, everything was terminated for Edge, there's already an Edge climate in many clouds' infrastructures, so you could run on servers all over the world, very quickly, cost effectively.”
But all this takes careful planning and a deep understanding of what your company needs, Nelson warned. “The one thing that I would tell everybody is that if you do not have a clear taxonomy and understanding of all the elements of your stack and where your money is going, you're going to waste big dollars and you're going to make decisions that are uninformed.
“There are some very large critical capital events that could backfire on you. And you don't want to have that conversation.”
Switch’s Schutter had another point that organizations should be aware of when making large capital decisions: the impact on purchasing power.
“I think a lot of organizations forget that today they may have 100,000 machines and do a rotation every few years. If they're moving 30 or 40 percent of their equipment into the cloud, they've lost the buying power they have with the OEMs they are working with,” Schutter said.
“And there's an added chain effect on the entire cost of the application. So when they discover that the cloud cost them more than they thought it would, they have to go back and redevelop new relationships with new forecasts in order to get those numbers back. So it is a fairly significant shift for digital transformation to make a commitment like that, and if you miss, it's very expensive for the organization. It will take 10 years, three to four lifecycles, to get back up and fix that.”
Some businesses, he said, are “finding themselves stuck with an application that was maybe $1,000 per user suddenly becoming $5,000.”
Another shift in the market is to an ecosystem of higher density racks, primarily driven by the growth of artificial intelligence and data analytics workloads, Chris Orlando, CEO and co-founder of high density colo provider ScaleMatrix, said.
“Our average workloads for customers were somewhere in the 5-7kW range across our colocation portfolio. As we spread out across the nation to five different data centers, for those that are adopting AI and other compute intense workloads, we have customers that are kinda leapfrogging the 10-20kW market and asking for those specific applications coming in at 30/40/50kW. It's a smaller portion of the market, but it is one that we see growing much more rapidly than we thought.”
The company uses its own proprietary 'Dynamic Density Control' closed loop water cooling system and localized heat exchange at the cabinet level to allow it to reach such high densities. “We looked at where the industry was headed, and realized that water cooling was probably going to be a way to solve some of the future demands for customers.
“We're really going after those customers and showing them how water efficiency can be used to help lower costs, without introducing anything too exotic into the mix.”
Over at Switch, “we can do 60kW air cooled in a multi-tenant space,” Schutter said. But, he said, the full load is rarely actually used.
“One of the things that we have discovered is that even though the capacity has been built for a 60kW cabinet, people operate at 40/50 percent utilization on an average basis. So it's interesting to note that even though there’s this fear of the high density scenario, operationally applications are still operating the same as they always have - [they] never fully consume everything that's available.”
Edge and autonomous vehicles
“Edge, Micro Edge... there's like seven definitions of it, it’s a new ambiguous term,” Nelson said as the panel turned to the new topic. “Everybody's got a definition for it, but one thing is clear - Edge is going to be a significant player, period. There isn't a doubt about it.”
But Nelson did admit that he has doubts about one much-touted Edge use case: “In the last couple years, I changed my mind with autonomous vehicles.
“[I thought] they produce four terabytes of data a day - we should have all this data that's coming out of these cars. And it needs to be processed in the car, processed at the Edge - process, process, process.”
The reality, he said, was that the vast majority of the processing will happen in the car, and only “snippets of data will be taken out to be able to do something with that. It's not terabytes of data that's coming out of this car.”
Edge will still be required elsewhere, of course: “Virtual reality, augmented reality, anything that is mission critical with latency requirements.”
He added: “There's going to be a distribution, there's going to be an Edge, there's going to be a Micro Edge, then there's going to be little things, the IoT Edge that's everywhere. But it's all going to keep going. How do we deal with it? It's not in the traditional way in which we build things today. The Edge will change all that.”
Working as an HPC analyst for weapons and military equipment manufacturer Northrop Grumman has given Steve Schwarzbek a different view on the Edge.
“The Voyager spacecraft has passed the heliopause [a boundary of the solar system, where signals take 17 hours to reach Earth]. There's a case where that object needs to have enough intelligence on it for 30 some years. So we're going to have computing that's just there, and understand what data really needs to be sent home. We talked about a data center that's going to be your system of record, that's going to keep your key data.
“It's got to be secure, it's got to be 100 percent uptime.”
Closer to home, “your refrigerator is going to talk back to you.”
One of the companies that would, perhaps, love to build your sentient fridge is Facebook, which has expanded from just a social networking and advertising business to exploring ventures in smart home devices and cryptocurrencies.
“From the Facebook perspective, the Edge is app specific, and keeping in mind the user experience,” company data center project manager Skyler Holloway said.
“When you think about using something like Portal [Facebook’s smart display], latency is something that is very intrusive to a conversation, and if there's something that has lag in the system then it takes your service from being successful to not successful.
“But in the first example of compiling news feeds or older pictures, the latency requirements are quite different.”
Facebook takes a case by case approach to see if it is worth making the “additional investment to decrease latency. Because we want to be assured that when we do invest more, we have a tangible benefit to the business and that the end user will benefit.”
In the past, extremely low latency was not particularly core to Facebook’s business model. “As we started looking at virtual reality or immersive experiences it becomes more important,” Holloway said.
“So it's about striking the right balance of moving forward with new technology ahead of time, but striking the balance between when it will work for you, and when it's actually required, so that you're not jumping too far ahead and investing money in ways that don't actually benefit the bottom line.”
Healthcare and wearables
“We’re moving very, very slowly,” Steve Press said.
As VP of data center operations at the massive healthcare company Kaiser Permanente, Press' revelation is not surprising. “We're not early adopters, by any means.”
The company has far more regulations to abide by than the average business, including HIPAA, SOX and PCI, “and for us to get our agreements in place, so that our audit chain can go from Kaiser data centers or cloud providers back to who's responsible, working out all the details around that has really been a huge task.”
The other issue for the company is that in healthcare, “usually the applications are not developed as cloud native. And one of the other big issues is the interfaces - one of our mission critical apps interfaces to more than 70 other mission critical apps in our environment, to pull all the information together to take care of our patients,” Press said.
“To make all those interfaces work in the cloud between our data centers, and our cloud providers is, quite the undertaking. So we're ready, but we're moving a lot slower than we had originally forecasted.”
Looking ahead, Press does see promise in a connected world. “I think the biggest area that we want help from the industry is in wearables - Internet of Things on people. When you think about the ultimate Edge or the Micro Edge, you are all it.”
Wearables could allow Kaiser to do critical life monitoring on patients, 24/7. There’s just one problem: “I don't think anybody would trust [the wearables] yet. Every[one] has network dropouts, and if you’re doing cardiac rehab monitoring and you miss an incident - whose fault is it? Is it the cell phone provider, the ISP?
“5G might be more reliable, but we don’t know. That's probably the biggest issue for us in healthcare.”
Open hardware and 5G
“Open hardware is an amazing space,” LinkedIn’s head of infrastructure engineering Zaid Ali Kahn said. “Software is super mature, but open hardware has gone through some very interesting trends.
“Obviously, [the Open Compute Project] opened up the market. And then we came up with Open19 because we wanted to solve the 19in rack, but at the end of the day, you're also competing by hardware. And then when I was in China two weeks ago, talking to the folks at Baidu, they're working on a very similar thing, Project Scorpio,” now known as the Open Data Center Committee (ODCC).
Seeing the disparate open hardware efforts, Kahn came to the realization that different groups made sense for teams trying to move fast, trying to quickly make changes in a specialized area. “But if we want to go big, we have to go together.”
And, to build out 5G, it’s time to go big. “So there's some really interesting discussions we've been having with OCP and in China as well; we all feel that these rack architectures are very similar.
“Let's combine and try to collaborate, because at the end of the day, the 5G Edge will require smaller and faster deployments, so we will need a very simplified architecture.”
Stick with DCD for coverage of Day 2 of the conference soon, as well as interviews and deep dives from the show floor.