Mattias Fridström, chief evangelist for Telia Carrier, says lower networking hardware costs are forcing data centers and metro networks to fundamentally change how they conduct their business: “Any location with fiber can now become a data center, opening up new opportunities for designing, managing, and operating cloud and on-demand computing resources.”
In the past, networking hardware costs were prohibitive, so connecting data centers to each other was often an expensive exercise. Organizations such as Google, Facebook, Amazon and Intel have been at the forefront of the software-defined revolution in computing, and they are now moving into the networking arena with SDN and SD-WAN. This is displacing the traditional, costly proprietary silicon purveyors of network equipment. With lower costs and higher-speed connections, the dynamics are changing. In turn, this is transforming the costs associated with data centers and with public, hybrid and private clouds – making them more accessible and more affordable.
Restricted capacity
For years, network capacity inside the data center was restricted by its underlying technology, but the advent of new silicon and signal processing has pushed costs down. At the same time, network performance inside the data center has increased: where connectivity used to be restricted to 10 Gb/s, data centers now commonly have 100 Gb/s or higher at their disposal. Lower costs and higher performance have become the new norm, and it is now possible to exploit this new high-capacity WAN connectivity.
Cost is always an inhibitor, but the fall in hardware prices, along with commodity hardware and open source software-defined functionality, brings flexibility to organizations of all sizes, while changing the dynamics of the WAN and the new possibilities it can bring. A word of caution though: latency and its effects must be considered when planning new installations.
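To see why latency matters even on a fast link, consider the bandwidth-delay product: the amount of data that must be "in flight" to keep the link busy. Here is a minimal back-of-the-envelope sketch in Python; the link speed and round-trip time are illustrative assumptions, not measurements:

```python
# Bandwidth-delay product: data in flight needed to keep a WAN link full.
# Illustrative figures only; substitute your own link speed and RTT.

link_speed_bps = 100e9   # 100 Gb/s WAN link
rtt_seconds = 0.05       # 50 ms round-trip time (e.g. a trans-Atlantic route)

bdp_bytes = link_speed_bps * rtt_seconds / 8
print(f"Bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB in flight")
# -> 625 MB: far beyond default TCP window sizes, so a single
#    untuned stream cannot fill the pipe on its own.
```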
Locate anywhere
Fridström claims that fiber optics now makes it possible to build and locate a data center anywhere. In my view, this means you can now site data centers globally to mitigate disaster recovery (DR) geographical constraints. In doing so, it becomes possible to move computing closer to the consumer, and perhaps even closer to the edge. However, the speed of light is finite, and that can cause issues, making it harder to move large volumes of data between data centers. Network latency and packet loss remain issues that can diminish data center performance.
Many organizations fail to factor in the effect of the speed of light when designing geographically dispersed solutions. For high-speed trading platforms, the distance between data centers affects the time between transactions. However, for low-speed transactional data composed of a small number of packets, a few milliseconds of delay isn’t critical.
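As a rough illustration, light in fiber travels at roughly two-thirds of its vacuum speed, so the minimum round-trip delay on a route can be estimated from distance alone. The city pairs, great-circle distances and refractive index below are assumptions for the sake of the example; real fiber paths are longer:

```python
# Minimum fiber round-trip time imposed by the speed of light.
# Distances are approximate great-circle figures; real routes add delay.

SPEED_OF_LIGHT_KM_S = 299_792                   # in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S / 1.47   # refractive index ~1.47

routes_km = {
    "London - New York": 5_570,
    "London - Frankfurt": 640,
    "New York - Tokyo": 10_850,
}

for route, km in routes_km.items():
    rtt_ms = 2 * km / FIBER_SPEED_KM_S * 1000
    print(f"{route}: >= {rtt_ms:.1f} ms RTT")
# No amount of extra bandwidth can get below these physical floors.
```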
Data acceleration
When transferring large volumes of data, such as workloads or Backup-as-a-Service transactions, latency and packet loss are massive throughput killers. You can’t make the speed of light go faster, so you must find another way around the problem. Data acceleration solutions such as PORTrockIT, through the use of parallelization and AI, can have a dramatic effect on restoring data throughput. Unlike WAN optimization, they also permit encrypted files to be transmitted securely at speed between data centers located outside of each other’s circles of disruption.
WAN optimization solutions often can’t deal with encrypted data, so data frequently has to be sent unencrypted to achieve faster transmission. Moreover, while WAN optimization and SD-WAN vendors often claim they deal with latency, they frequently don’t do so sufficiently to make a difference to network performance at the higher WAN speeds now available. In contrast, data acceleration solutions use machine learning to mitigate the effects of data and network latency. With them, it becomes more feasible to run optimized data centers and disaster recovery sites in different parts of the world, with the impact of latency much reduced.
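The underlying problem can be sketched with the well-known Mathis et al. approximation, under which single-stream TCP throughput falls with both round-trip time and packet loss. The sketch below is a generic illustration of why parallelizing transfers helps; it is not a description of PORTrockIT’s proprietary techniques, and the MSS, RTT and loss figures are assumptions:

```python
import math

# Mathis et al. approximation: single-stream TCP throughput is bounded by
#   rate <= (MSS / RTT) * (C / sqrt(loss)),  with C ~ 1.22 for standard TCP.

def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.05, loss=1e-4, c=1.22):
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss))

single = tcp_throughput_bps()
print(f"Single stream: {single / 1e6:.0f} Mb/s")  # ~28 Mb/s on a 100 Gb/s link

# Aggregate throughput grows roughly linearly with parallel streams,
# until the link itself (100 Gb/s here) becomes the bottleneck.
for streams in (1, 8, 64):
    aggregate = min(streams * single, 100e9)
    print(f"{streams:>3} streams: {aggregate / 1e9:.2f} Gb/s")
```

Even 64 parallel streams recover only a fraction of a 100 Gb/s link under these conditions, which is why intelligent parallelization has to be combined with careful stream management.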
New opportunities
Yet lower costs create opportunities for designing, managing, and operating on-demand cloud computing resources. Indeed, with service providers now thinking globally, fiber opens up a whole range of opportunities for organizations both large and small. Many still believe that public cloud is the only model available. However, larger organizations with virtualized, distributed data centers linked by high-speed fiber can create their own cloud infrastructure for cloud storage and cloud computing. This nevertheless leaves open the debate about whether it’s cheaper for them to outsource to third-party data centers or to run their own.
So, with the changing data center in mind and the need for data acceleration still being important, here are my top tips:
- Understand the performance and latency requirements of your applications, be they databases, DRaaS, BaaS, or end-user applications (see the sketch after this list).
- Employ data acceleration solutions such as PORTrockIT to lower your WAN’s SLA requirements in terms of latency and packet loss.
- Remember that using SD-WANs is a great idea for managing WANs, but they won’t fix the latency and packet loss issues.
- Software-defined open source network software can considerably reduce both capital and operational costs.
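As a concrete example of the first tip, a quick calculation shows whether a replication or backup job fits its window at the throughput the WAN actually delivers. The volumes, rates and window below are hypothetical planning inputs:

```python
# Does a backup job fit its window at the WAN's effective throughput?
# All figures below are hypothetical planning inputs.

data_volume_tb = 10      # nightly backup volume
effective_gbps = 2.0     # measured throughput after latency/loss effects
window_hours = 8         # allowed backup window

transfer_hours = data_volume_tb * 8e12 / (effective_gbps * 1e9) / 3600
verdict = "fits" if transfer_hours <= window_hours else "misses"
print(f"Transfer time: {transfer_hours:.1f} h "
      f"({verdict} the {window_hours} h window)")
# -> 11.1 h, missing an 8 h window: a case for data acceleration
#    or a rethink of the replication design.
```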
Future-gazing
Predicting technology over a ten-year period is as dangerous as spinning on a dime. However, looking at current trends, it becomes possible to theorize about the future from what is seen within today’s market. Firstly, with ever-increasing data volumes, data center power and energy consumption is bound to increase exponentially. This will generate much heat too, which could be used to heat homes. Data centers are also going to have to tackle their greenhouse gas emissions.
Increased fiber coverage and higher-performance networks allow data centers to be placed pretty much anywhere, but rural areas still aren’t having their needs met in many countries. This means that data centers are likely to remain within the vicinity of urban areas. However, improved government investment in network infrastructure could enable more data centers to be located in cheaper, less urbanized areas – whether in the UK or elsewhere in the world.
It’s also worth remembering that the web is the cloud. With all this interconnectivity, it will be possible for anyone and everyone, and not just large data centers, to supply spare storage and compute capacity to a commodity brokerage, in the same way that electricity is bought and sold now. So, the changing data center may find that it faces an increasing amount of non-traditional competition in the future – offering more choice to organizations and to consumers.
The changing data center will also be increasingly software-defined, hyperscaled and virtual. With the ascendancy of artificial intelligence and software-defined infrastructure, there will be massive requirements for compute power, creating the opportunity to have hyperscaled, virtual computing spread across multiple data centers. This will solve many of the complex issues that people today think of as impossible to resolve. So, the ongoing impact of the changing data center and of lower networking costs will eventually make the impossible very much possible.
David Trossell is CEO and CTO of networking software specialist Bridgeworks