Edge computing is attracting huge investment from the telecoms and cloud sectors as well as vertical sectors like retail and manufacturing.
At its 2017 Data Centre Conference, Gartner forecast that 40 percent of large enterprises will be integrating edge computing principles into their projects by 2021. The current figure is less than one percent, so we’re anticipating enormous growth in the market. And no wonder: the demand for a better, faster customer experience is constantly increasing, while latency rises as our networks grow and become more widely distributed.
Meeting this demand is now critical for a business’s bottom line and correlates directly with sales. Amazon found that every 100ms of latency cost it one percent in sales. Google found that an extra 0.5 seconds in search page generation time dropped traffic by 20 percent.
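Using the figures above, the revenue at stake can be sketched with a simple linear model. The revenue number in the example is hypothetical, purely for illustration:

```python
# Rough sketch of the revenue impact implied by Amazon's figure:
# every 100ms of added latency costs roughly one percent of sales.
# The annual_revenue value below is hypothetical.

def latency_revenue_impact(annual_revenue, added_latency_ms, pct_per_100ms=0.01):
    """Estimate annual revenue lost to added latency (linear model)."""
    return annual_revenue * pct_per_100ms * (added_latency_ms / 100)

lost = latency_revenue_impact(annual_revenue=1_000_000, added_latency_ms=250)
print(f"Estimated annual revenue lost: ${lost:,.0f}")  # prints $25,000
```

A linear model like this breaks down at extremes, but it is enough to show why shaving even tens of milliseconds off the round trip justifies placing compute closer to users.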
Service providers in particular are looking to harness software-defined networking (SDN) and network functions virtualization (NFV) architectures in order to be more agile. The convergence of wireline and wireless networks and the increasing demand for high bandwidth and low latency are also pushing cloud computing environments to the edge of the network.
Building an edge data center is clearly an endeavor that requires a great deal of planning and preparation, so to clarify the process, we’ve created a five-point checklist for any service provider to consider before moving to the “edge”.
Location

The first point is location, both in terms of geographic area and the characteristics of the physical site itself.
Think about your target market – is the site close enough to them to ensure low latency and an excellent customer experience? Regulatory compliance is also an essential consideration given that data will be stored on-site; ensure you are conscious of local data regulations, compliant with them, and aware of how they will impact your business.
When it comes to the potential building itself, there are many factors to consider. Does the square footage allow for the number of racks and cabinets you’ll require? Thinking further ahead, does the space allow for expansion in the near future? It’s also important to consider the building’s existing infrastructure. If there is none, the building may require retrofitting.
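A back-of-the-envelope sizing check can answer the square-footage and expansion questions above. All figures here are hypothetical planning assumptions, not industry standards:

```python
# Hypothetical sizing sketch: each rack is budgeted a share of floor space
# that includes its aisle and service clearance.

def max_racks(floor_area_sqft, sqft_per_rack=30):
    """Rough rack count for a given floor area (assumed space budget per rack)."""
    return floor_area_sqft // sqft_per_rack

def expansion_headroom(current_racks, planned_capacity):
    """Fraction of planned capacity still free for future growth."""
    return 1 - current_racks / planned_capacity

capacity = max_racks(1500)                    # e.g. a 1,500 sq ft room
print(capacity, "racks")                      # prints 50 racks
print(f"{expansion_headroom(30, capacity):.0%} headroom")  # prints 40% headroom
```

If the headroom figure is small on day one, the site will likely need retrofitting or replacement well inside the roadmap window.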
Power

Power planning is an obvious yet critical consideration for any data center, but the needs of an edge data center are very distinct. While power redundancy is typically a necessity for traditional data centers, at the edge it may not be available or may be too costly.
In a best-case scenario, power will enter the facility from different entrance points, so businesses should consider whether the building can be served by multiple utility grids.
However, planning for the worst is equally important. Can the backup generators support the data center for at least 48 hours in the event of a power outage?
Heating and cooling
Heating, ventilation and air conditioning (HVAC) is pivotal to a successful data center operation. Given that it is one of the biggest consumers of power – around 50 percent of all power used by a data center goes to HVAC – service providers must ensure that this operation is as efficient, simple and cost-effective as possible.
For example, adopting free-cooling or hot-aisle/cold-aisle designs can be a cost-effective, simple solution to the temperature control problem. It is also important to establish temperature monitoring systems, principally by using temperature sensors on the racks.
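The temperature-monitoring idea above can be sketched in a few lines. The rack names, readings and critical threshold are hypothetical; a real deployment would poll the sensors over SNMP or IPMI rather than read a static dictionary:

```python
# Minimal sketch of rack temperature monitoring with alert thresholds.
# WARN_C follows ASHRAE's recommended upper inlet limit of about 27 °C;
# CRIT_C is an assumed critical threshold for this sketch.

WARN_C = 27.0
CRIT_C = 32.0

def classify(temp_c):
    """Map an inlet temperature reading to an alert level."""
    if temp_c >= CRIT_C:
        return "CRITICAL"
    if temp_c >= WARN_C:
        return "WARNING"
    return "OK"

readings = {"rack-a1": 24.5, "rack-a2": 28.1, "rack-b1": 33.0}
for rack, temp in readings.items():
    print(f"{rack}: {temp:.1f} °C -> {classify(temp)}")
```

Wiring the "CRITICAL" case to an alerting channel matters more at the edge than in a core facility, since there may be no on-site staff to notice a failing CRAC unit.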
Security and safety

Given the value of the infrastructure inside and the data it holds, most of the design concerns for edge data centers are rooted in security and safety.
This means physical security measures, not just the more obvious cybersecurity aspect. Consider biometrics as an additional layer of security on top of key cards and traditional identification.
Fire safety is also paramount, and there are data center-specific considerations to bear in mind here too. The usual sprinkler systems are out of the question; to prevent expensive equipment from getting wet, special inert gas-based systems are often installed instead.
Physical layer infrastructure
Given that analysts predict huge growth in the volume of data going to the edge, the key here again is future-proofing. The physical layer infrastructure should be designed from the outset to support multiple upgrades; it would be prudent to put in place a 3-5 year roadmap to cover all eventualities.
Any service provider looking to truly harness cloud computing and SDN/NFV needs to examine its physical layer infrastructure with both the present and the future in mind.
Given the complexity of building an edge data center, these five points merely scratch the surface, but hopefully they can serve as signposts for the move to the “edge”.
Service providers need to consider both the present and the future when building an edge data center that is equipped to support the explosive projected growth in demand.
With Gartner stating that a delay of just 7ms in virtual or augmented reality could cause motion sickness in users, it is essential that edge data centers can deliver the low latency needed to support next-generation technologies as they move towards mainstream adoption.