Microservices are a modern approach to application design. Based on the principles of Service Oriented Architecture (SOA), microservices use software containers and distributed computing components to build new cloud applications quickly. So how does this approach affect the future of data center design, and how will it benefit companies?
The rise in popularity of microservices has been driven by the need to manage the peaks and troughs of demand. If a large volume of customers hits a website and begins to consume a service, the individual components needed to support that service can grow elastically to handle the load.
An end to overprovisioning?
The biggest difference between microservices and more traditional IT infrastructure is how workloads are composed and provisioned. In standard tiered applications, physical servers or virtual machines host large parts of the application. Physical machines tend to be over-provisioned so they can handle the predicted peak load, but this represents an overspend if those peaks don't materialise.
Virtualization was developed to reduce this risk and make more efficient use of physical IT resources. Virtual machines can be provisioned to cope with additional workload demand, but they are still linked to a specific set of physical servers in a location.
In comparison, container-based microservices can be distributed across physical infrastructure in multiple locations, so new containers can be provisioned closer to the source of demand. For a global web application, new containers can be started in the physical location closest to where customer demand is coming from, which helps reduce latency. The key is making sure your data is replicated to those locations.
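To make the replication point concrete, here is a minimal sketch using Apache Cassandra (discussed further below) and the DataStax Python driver. The contact point, keyspace name, data center names and replica counts are illustrative assumptions rather than details from this article.

```python
# Sketch: replicate application data to each site where containers may run,
# so instances started close to customers can read and write locally.
# Contact point, keyspace and data center names are hypothetical.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])   # any reachable node in the existing cluster
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS webapp
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'us_east': 3,
        'eu_west': 3,
        'ap_south': 2
    }
""")

cluster.shutdown()
```

With the keyspace replicated in this way, a container started in any of those locations has a local copy of the data to work against.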
One challenge here is that demand can be unpredictable. Previous approaches to dealing with peak demand ranged from simply spending more on hardware to using virtualization to increase utilization rates. However, these older architectures did not reflect the levels of fluctuation that exist today, or how those fluctuations stress different parts of the application infrastructure. Couple this with provisioning processes that are more mature than deprovisioning processes, and wastage naturally occurs.
The microservices approach aims to reduce this wastage and additional cost by letting resources be provisioned more efficiently across the whole data center, rather than within a monolithic application.
For applications designed around distributed computing elements to support microservices, managing capacity more closely is an essential part of the overall approach. If more data streaming components are required, they can be added automatically in a coordinated fashion; if the database has to grow by a certain number of nodes, those can be added too.
However, while a distributed database like Cassandra will tend to grow steadily over time, a streaming service based on something like Kafka can vary almost daily. Similarly, the use of tools like Apache Spark for near real-time analytics will vary too. As these peaks in demand come and go, resources within the data center environment can be moved and re-allocated, improving efficiency and resilience across the whole cloud application rather than within each tier of the application separately.
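As a rough sketch of that kind of coordinated, component-by-component scaling, the example below uses the official Kubernetes Python client to resize a Kafka StatefulSet for a daily spike and to add a single node to a Cassandra StatefulSet. The namespaces, resource names and replica counts are assumptions for illustration; in practice a database would more often be grown through a dedicated operator than a raw replica count.

```python
# Sketch: scale individual application components independently, rather than
# resizing a whole monolithic tier. Names and counts are hypothetical.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

def scale_stateful_set(name: str, namespace: str, replicas: int) -> None:
    """Patch the scale subresource of a StatefulSet to the desired replica count."""
    apps.patch_namespaced_stateful_set_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Streaming demand is spiking today: add Kafka brokers for the peak.
scale_stateful_set("kafka", "streaming", replicas=9)

# The database grows more slowly: add one Cassandra node.
scale_stateful_set("cassandra", "data", replicas=6)
```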
This variability in application components can help reduce the amount of physical hardware needed still further. By allocating all resources more dynamically, the overall size of the environment can be reduced, saving space and cost. Major peaks in demand can potentially be met by making use of hybrid or public cloud services, and when data is reliably replicated this becomes a much easier proposition.
Cutting to the bone
From a data center design perspective, the use of microservices will put more emphasis on smaller numbers of physical machines. Some of these may be bare-bones physical boxes running only the minimum software required to stand up containers. Orchestration tools such as Kubernetes and Mesos will then manage the provisioning and control of those containers.
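As a hedged illustration of what that provisioning can look like, the sketch below declares a small containerised service through the Kubernetes Python client and leaves the orchestrator to decide which bare-bones host runs each replica. The deployment name, container image and resource requests are hypothetical.

```python
# Sketch: declare a service as a set of containers; the orchestrator places
# them on whichever minimal hosts have capacity. All names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web-frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web-frontend"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web-frontend",
                        image="example/web-frontend:1.0",   # hypothetical image
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "250m", "memory": "256Mi"},
                        ),
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```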
Alongside this, however, bigger pipes that can supply larger volumes of network connectivity will be essential, particularly for those companies that are running across multiple locations. Companies may choose to run smaller data centers but in a more distributed fashion. Alternatively, companies may choose to use third parties or public cloud providers to expand services. Whichever strategy they choose, the network that connects these multiple sites together will be important for service quality.
Looking ahead, microservices will continue the trends in data center design that started with x86 virtualization. At the same time, this new model will bring increased flexibility in how software containers are assigned to physical machines for processing. Sections within each application can be replaced with minimal impact on the overall service, which represents another opportunity for greater resilience and efficiency.
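One hedged example of that low-impact replacement: rolling out a new container image for a single component triggers a gradual, rolling update rather than an outage. The deployment name and image tag below are illustrative assumptions.

```python
# Sketch: replace one section of the application by updating its container
# image; Kubernetes swaps instances gradually so the service keeps running.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment(
    name="web-frontend",              # hypothetical component name
    namespace="default",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "web-frontend", "image": "example/web-frontend:1.1"}
                    ]
                }
            }
        }
    },
)
```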
For companies that don't host sensitive data as part of these applications, hybrid strategies based on external providers or public cloud services will see data center designs continue to shrink as well. For those with critical data sensitivities and compliance requirements, adopting microservices will lead to more use of in-house physical IT resources. Either way, the overall result for data center implementations should be greater utilization and better power efficiency than that achieved when supporting more traditional applications.
Patrick McFadin is chief evangelist at DataStax