Organizations of all sizes are transforming the way they work. Whether through developing new projects or driving greater efficiencies in existing operations, this digital transformation has made businesses increasingly dependent on hundreds – or in some cases thousands – of applications that power every imaginable business process, from product development to sales and marketing.

This boom in applications has created significant challenges for infrastructure management, with the vast majority of these applications adding to the management burden.

One way that many organizations respond to this challenge is by managing infrastructure at the lowest common denominator. This can help stop the management headache and in some cases provide an initial reduction in costs, given that management frequently represents as much as 300 percent of an asset’s cost.

However, over the long term this approach tends to be exceedingly expensive and painfully inefficient. All application data is managed uniformly, without regard to its performance, data protection and security requirements, which can lead to over-allocation of capacity, inefficient use of resources, and compromises in how each application is delivered, maintained and grown.

Bespoke application management

Each application operates according to its own ‘DNA’. This covers not only performance characteristics, which can change over the course of a day or seasonally within a business, but also how the application should be protected, secured and maintained in relation to the business’s other applications.

Treating each application according to its bespoke requirements is no mean feat, as such personalisation adds to the management overhead.

Some people automatically assume that faster is better with application management, but that is not always the case. A tractor may be slow, but that doesn’t make it bad at its job; equally, you couldn’t plough a field with a Porsche. An example would be in Financial Services, where the outright performance of a data archive is inconsequential, yet the penalties imposed when data cannot be retrieved can be draconian. Of course, there are applications where speed (or reduced latency) is highly desired: think of a trading or clearing application in a financial institution, or an application that models corporate risk and exposure or runs data analytics, where reducing processing time can provide advantages in time to market or time to revenue.

Requirements can also vary between applications within a single organization, depending on the business operation they support. A retail chain, for example, may want to encrypt its customer data and back it up every 15 minutes, whereas performance may be the greater priority for its stock management applications, to drive greater efficiencies. Setting all of this data to the same encryption, back-up and performance requirements would be far more expensive and wasteful for the organization.

One approach that many companies are taking to avoid managing to the lowest common denominator is policy-based management. Managing by policy allows each application to be handled according to its specific characteristics while avoiding a management headache: automation and standardisation let organizations manage numerous similar applications simultaneously and then monitor any applications that fall out of compliance.

So, should you decide to change the frequency of back-ups of credit card details from every 10 minutes to every 5 minutes, the application manager can make the change to the policy rather than to each individual application, then monitor and report the conformance of the applications to the desired policy. Not only does this allow for faster orchestration, it also draws a line of demarcation between the application consumers, the application owners, and the infrastructure teams that have to deliver, maintain and support the shared infrastructure.
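As a minimal sketch of the idea – the class and field names below are hypothetical illustrations, not any particular vendor’s API – a back-up policy can be expressed once and every application that references it is then checked for conformance:

```python
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    """A single policy shared by every application that references it."""
    name: str
    backup_interval_minutes: int
    encrypted: bool

@dataclass
class Application:
    name: str
    policy: BackupPolicy
    last_backup_age_minutes: int  # as reported by monitoring

# One change here propagates to every application using the policy.
card_data_policy = BackupPolicy("credit-card-details", backup_interval_minutes=10, encrypted=True)
card_data_policy.backup_interval_minutes = 5  # tighten from 10 to 5 minutes

apps = [
    Application("payments-api", card_data_policy, last_backup_age_minutes=4),
    Application("card-vault", card_data_policy, last_backup_age_minutes=12),
]

# Report conformance rather than reconfiguring each application by hand.
for app in apps:
    compliant = app.last_backup_age_minutes <= app.policy.backup_interval_minutes
    print(f"{app.name}: {'OK' if compliant else 'OUT OF COMPLIANCE'}")
```

The point of the sketch is the separation of duties: the policy is edited in one place, and the per-application work is reduced to reporting which applications meet it.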

Building policy frameworks

How an organization builds a policy depends on how it defines its business.

A service provider, for instance, may have tiered offerings dependent on how much customers pay, so the requirements for the applications linked to each tier will differ based on the value-added services explicitly attached to that tier. A top tier may include greater performance and data protection than a lower tier option. A common mistake is to label these tiers Platinum, Gold, Silver or Bronze (we always want Platinum or Gold, right?); a more appropriate approach is to define them as Value, Standard and Extra (similar to the way supermarkets market their own brands). This drives people to make optimised choices rather than opting for the most expensive tier without considering the cost ramifications and the ‘value’ of those services to the business or application.
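A sketch of how such tiers might be captured as reusable policy templates follows; the tier names echo the ones above, but the thresholds and field names are illustrative assumptions rather than any standard offering.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceTier:
    """A named bundle of service levels that applications subscribe to."""
    name: str
    max_latency_ms: float          # performance target
    backup_interval_minutes: int   # data protection
    replicated_offsite: bool

# Neutral names (Value/Standard/Extra) rather than Platinum/Gold/Silver,
# so consumers weigh cost against need instead of defaulting to the top tier.
TIERS = {
    "value":    ServiceTier("value",    max_latency_ms=20.0, backup_interval_minutes=240, replicated_offsite=False),
    "standard": ServiceTier("standard", max_latency_ms=5.0,  backup_interval_minutes=60,  replicated_offsite=True),
    "extra":    ServiceTier("extra",    max_latency_ms=1.0,  backup_interval_minutes=15,  replicated_offsite=True),
}

def tier_for(app_name: str, requested: str) -> ServiceTier:
    """Resolve an application's requested tier to a concrete policy bundle."""
    return TIERS[requested]

print(tier_for("stock-management", "standard"))
```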

Automation is key to policy management, as the application owner frequently isn’t the network or storage owner and won’t know how to configure the underlying devices to achieve this end goal. Integration and automation can instead provide a tick-box service: users simply indicate which data they want encrypted or protected and what Quality of Service a particular application requires, and the system maps those choices onto the capabilities of the infrastructure supporting those services.

The management framework should be abstracted from the complexities of the infrastructure, so users simply specify what they want the application to be, and the automated programme then manages the process. Choosing a solution that integrates with VMware and OpenStack is particularly useful, as it provides a framework from which application owners can select those elements directly.
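A minimal sketch of that abstraction, assuming a hypothetical provisioning layer (none of these names come from VMware, OpenStack or any specific product): the user states the outcome they want, and the automation translates it into infrastructure-level settings.

```python
from dataclasses import dataclass

@dataclass
class ProvisioningRequest:
    """The 'tick boxes' a user fills in; no device-level detail required."""
    app_name: str
    encrypt_data: bool
    backup_interval_minutes: int
    qos_tier: str  # e.g. "value", "standard", "extra"

def translate(request: ProvisioningRequest) -> dict:
    """Map user intent onto infrastructure-level settings.

    In a real system this step would call the storage, network and
    hypervisor APIs; here it simply returns the derived configuration.
    """
    iops_limits = {"value": 1_000, "standard": 10_000, "extra": 100_000}
    return {
        "volume": {
            "encryption": "AES-256" if request.encrypt_data else "none",
            "snapshot_schedule": f"every {request.backup_interval_minutes} minutes",
            "iops_limit": iops_limits[request.qos_tier],
        },
        "owner": request.app_name,
    }

print(translate(ProvisioningRequest("customer-db", True, 15, "standard")))
```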

Predictive analytics also play an important role in effective policy management, as they provide the data points for these policy decisions. If I know what is being deployed and how it is being utilised, I can authoritatively determine what my application requirements are and make changes by proactively managing the environment based on current data, rather than gut feel or expensive consultancy.

Some applications may have stable requirements for most of the month, but consistently exceed those requirements at the end of every month or at specific times of the year, for instance during payroll runs or the festive season. This can create an application-data gap, which impedes rapid data delivery to the application and impacts operations.

Understanding that the infrastructure isn’t meeting those increased monthly demands allows the infrastructure team to apply the appropriate remedy for that period, whether that is increasing the memory or adding flash. Predictive analytics permit the sizing and enactment of change based on fact, as opposed to guessing and following an expensive process of ‘blind’ upgrades that results, once again, in managing at the lowest common denominator.
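As a rough illustration of the analysis involved – the threshold, data layout and function name are assumptions for the example, not a product feature – a simple pass over daily utilisation samples can reveal a recurring month-end spike and flag the period that needs a temporary remedy:

```python
from datetime import date, timedelta

def month_end_pressure(samples: dict[date, float], threshold: float = 0.85) -> list[date]:
    """Return days in the last five days of a month where utilisation
    exceeded the threshold, i.e. candidates for a temporary remedy."""
    flagged = []
    for day, utilisation in sorted(samples.items()):
        # Find the last day of this month, then how far away it is.
        next_month = (day.replace(day=28) + timedelta(days=4)).replace(day=1)
        days_to_month_end = (next_month - timedelta(days=1) - day).days
        if days_to_month_end < 5 and utilisation > threshold:
            flagged.append(day)
    return flagged

# Synthetic example: utilisation climbs as payroll runs at month end.
samples = {date(2016, 1, d): 0.55 + 0.12 * max(0, d - 27) for d in range(1, 32)}
print(month_end_pressure(samples))  # -> the last couple of days of January
```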

Intelligent infrastructure management

Organizations looking for more efficient and cost-effective infrastructure management must take the daunting step away from managing storage to the lowest common denominator, and invest in customised application management.

And as the number of enterprise applications continues to boom, it has never been more important for organizations to move towards an intelligent infrastructure management framework. Adopting a policy framework approach, which incorporates both automation and predictive analytics, will help organizations increase effectiveness without adding to the management burden.

Richard Fenton is a systems engineering manager at Nimble Storage UK and Ireland.