For decades, Wall Street has pushed the boundaries of how data centers—the facilities that house computing, storage, and networking systems that underpin corporate technology infrastructure—are designed. It is not a coincidence that past paradigm shifts in data center architecture can be traced back to innovations by the banking industry. The rise of an interconnected global financial system that includes a complex web of institutions, markets, trading partners, and instruments has brought with it a need for better, faster, more powerful computing at tremendous scale.

Throughout the 1960s and ‘70s, enterprise data centers were primarily made up of purpose-built “big iron” mainframes, machines that filled entire cabinets and handled large-scale, high-performance computing. Minicomputers provided a less expensive alternative, but it was not until the ‘80s that computing power became dramatically cheaper with the introduction of the PC-based microprocessor architecture known as x86. That is when major changes on Wall Street started to influence the architecture of data centers.

Quantitative finance

As quantitative finance emerged on Wall Street in the ‘90s, banks seized the opportunity to use the x86 architecture, along with open standards like Linux, to drive innovations in the client-server model, a form of distributed computing. High-frequency trading took off in the early 2000s just as advances in server virtualization—technology that allows more applications to run on the same physical server—began to drive further efficiencies. Financial firms ended up defining the blueprint for standardizing virtualized infrastructure and, consequently, the gold standard for the modern enterprise data center.

Wall Street is at it again—but this time it is leveraging an architecture that can easily “scale out” to enable infrastructure services that are decoupled from dependencies on physical hardware. These infrastructure services can be run on commodity hardware in private data centers or consumed on-demand like a utility from “public clouds” made available by Amazon, Google, and others. Such consumer Internet giants have built hyperscale data centers, on the order of millions of servers each, to support the massive scale of their own operations while allowing other companies to rent infrastructure, including raw computing power and storage capacity, by the hour. Companies like Netflix, Instagram, and Snapchat have largely based their technology stacks on a single public cloud.

Unlike those companies, however, Wall Street will not settle for a single public cloud. Building on top of a single platform locks a company into one provider’s set of services and pricing. Additionally, the company would ultimately be limited by the capacity, economies of scale, and cost efficiencies that a sole provider achieves. Wall Street’s solution to this problem is to instead “virtualize” these platforms by running applications across the infrastructure of multiple public clouds, anticipating the day when computing capacity across clouds is traded like a commodity and applications can be flexibly moved across environments. Just a few months ago, the Deutsche Börse Group, which runs the Frankfurt stock exchange, launched such an open marketplace called Cloud Exchange AG.

Building a technology infrastructure footprint across multiple public clouds is an ambitious undertaking that requires a new set of data center design principles. I have had the opportunity to discuss many of these principles with CIOs, CISOs, and IT professionals across Wall Street. These are the cornerstones for Wall Street’s grand vision:

Responsibility for data security cannot be delegated

Security concerns pose one of the biggest challenges to financial companies looking to use public clouds. Public clouds support a “shared responsibility” model: the provider commits to handling security for certain systems. But at its core, the cloud is simply a third-party computing system that cannot and should not be trusted, especially when dealing with sensitive financial information that is further subject to regulatory compliance. Auditors, regulators, and law enforcement follow a “single responsibility” model that places responsibility for data security on the company that owns the data rather than the third-party provider.

Undeterred, financial enterprises are addressing these constraints with a radical new approach to security: assume that data breaches will become the norm. In practice, they are fully encrypting their data with state-of-the-art cryptography and ensuring the keys needed to unlock that data remain under their exclusive control. In most cases, even if a software breach occurred or bad actors somehow broke into the physical facilities where the data is stored, the data would be useless and unreadable without the right key, which stays under the enterprise's authoritative control. When the enterprise is done using the data, it destroys the key before relinquishing the machine back to the provider for other customers to use.
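
As a rough illustration of this pattern, sometimes called crypto-shredding, consider the sketch below. It assumes Python's widely available cryptography package, and the account record and in-memory key handling are purely illustrative, not a description of any firm's actual implementation.

```python
# Illustrative sketch of crypto-shredding: data sent to a public cloud is
# encrypted under a key the enterprise alone controls; destroying the key
# renders the stored ciphertext unreadable. The record below is made up.
from cryptography.fernet import Fernet

# Key generated and held on enterprise-controlled infrastructure;
# it is never handed to the cloud provider.
data_key = Fernet.generate_key()

record = b"ACCT 4401-22: notional 25,000,000 USD"
ciphertext = Fernet(data_key).encrypt(record)  # only ciphertext leaves the enterprise

# ... ciphertext is stored and processed in the public cloud ...

# Reading the data back requires the key, which stays with the enterprise.
assert Fernet(data_key).decrypt(ciphertext) == record

# Discarding the key is the enterprise's final control: without it, the
# ciphertext left behind on provider hardware is computationally useless.
del data_key
```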

This security model has to work seamlessly so that it does not negatively impact the agility and flexibility that public clouds provide. Default protocols like Transport Layer Security enable reliable security for billions of online banking transactions every year and are transparent to end users who are unaware they are even being used. Drawing on the success of these protocols, Wall Street is now taking a similar approach to embedding security controls like data encryption into the cloud infrastructure itself, ensuring they are always on by default and transparent to application developers and infrastructure teams.
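
To make the analogy concrete, the short sketch below, using only Python's standard library and the placeholder host www.example.com, shows a TLS connection established with nothing but defaults; certificate and hostname verification happen without the application having to reason about the cryptography.

```python
# Minimal sketch of "secure by default": ssl.create_default_context() enables
# certificate verification, hostname checking, and modern protocol versions
# without any application-level tuning. The host below is a placeholder.
import socket
import ssl

context = ssl.create_default_context()

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        # The negotiated protocol (e.g. "TLSv1.3") was chosen transparently.
        print(tls_sock.version())
```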

Infrastructure should be programmatically and dynamically composed

The industry-standard approach to provisioning infrastructure and deploying applications today follows a bottom-up pattern. IT procures and configures hardware based on capacity forecasts, carves up that hardware via virtualization, and doles it out to application developers based on the resources their applications need. Financial enterprises are looking at how to invert that workflow. Developers are shifting from monolithic applications to applications built on microservices that use “containers,” which package code together with its dependencies so it can be deployed quickly and run consistently across different systems. This means applications can be written to be portable across environments, and infrastructure needs to be able to run them in multiple places. Developers now have the flexibility to spawn infrastructure programmatically and configure it in real time across multiple public clouds based on business requirements such as cost or latency. When an application needs to be deployed, the enterprise can automatically select the provider and resources that are most cost-effective to get the job done.
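
As a simple illustration of that last step, the sketch below applies a made-up placement policy to hypothetical provider quotes: pick the cheapest environment that satisfies a latency requirement. The provider names and prices are invented for the example, not real offerings.

```python
# Toy sketch of policy-driven placement: given hypothetical per-provider
# quotes, choose where to run a workload based on a business requirement
# such as cost or latency. All names and numbers are illustrative.
from dataclasses import dataclass


@dataclass
class Quote:
    provider: str
    usd_per_vcpu_hour: float
    latency_ms: float  # round-trip latency to the firm's data sources


quotes = [
    Quote("cloud-a", 0.048, 12.0),
    Quote("cloud-b", 0.041, 35.0),
    Quote("cloud-c", 0.052, 8.0),
]


def place(quotes, max_latency_ms):
    """Pick the cheapest provider that meets the latency requirement."""
    eligible = [q for q in quotes if q.latency_ms <= max_latency_ms]
    return min(eligible, key=lambda q: q.usd_per_vcpu_hour) if eligible else None


print(place(quotes, max_latency_ms=20.0).provider)  # -> "cloud-a"
```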

The new stack must be based on open industry standards

While many consumer Internet companies have opted to use the public cloud for their technology infrastructure, even the largest of them, e.g., Netflix, today primarily focus on delivering one or a few applications to their customers. In contrast, leading Wall Street banks operate thousands of critical applications that serve customers, trading partners, and employees. Running these applications across multiple public clouds requires infrastructure services to interoperate consistently regardless of environment, abstracting away underlying differences in providers.
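
One way to picture that abstraction is a thin, provider-agnostic interface that every environment implements, so applications code against a single contract while the backends differ. The sketch below is a toy illustration with an invented interface and a stand-in backend, not a real multi-cloud SDK.

```python
# Toy sketch of a provider-agnostic storage interface: applications depend on
# ObjectStore, while each environment supplies its own backend. The backend
# here is a stand-in, not a real provider SDK.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Stand-in for a private data center or single-cloud backend."""

    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]


def archive_trade(store: ObjectStore, trade_id: str, payload: bytes) -> None:
    # Application code sees only the ObjectStore contract; which environment
    # backs it is decided by the infrastructure layer, not the developer.
    store.put(f"trades/{trade_id}", payload)


archive_trade(InMemoryStore(), "T-1001", b"...")
```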

The only viable path forward to achieve this is standardization within the industry, which Wall Street has been driving. Goldman Sachs recently invested in Docker, the company behind the open-source container project of the same name, and also joined as a founding member of the Open Container Initiative, a nascent effort meant to foster standards among container implementations. Additionally, Goldman Sachs is among a group of organizing members behind another effort called the Open Compute Project that seeks to openly publish designs for various data center technologies. That group also includes Fidelity, Bank of America, JPMorgan Chase, and Capital One. This is one of several recent examples of financial firms working together to address shared goals. In August, JPMorgan Chase, Goldman Sachs, and Morgan Stanley announced a joint initiative called “SPReD,” short for Securities Product Reference Data, to clean and store reference financial data.

Conclusion

Wall Street’s technology stack continues to rapidly evolve. Recent Federal Reserve regulations for stress testing necessitate ever-larger amounts of computing power to carry out capital and risk analysis. Additionally, exponential growth in the size of datasets used to assess credit risk already has firms looking for more storage capacity.

In response, Wall Street firms are collectively building a data center architecture defined in software that allows them to extend beyond their private environments and tap into the hyperscale resources of multiple public clouds. This architecture enables the firms to dynamically provision infrastructure without the hardware capacity constraints they have faced to date and to support applications designed to be portable across systems.

This infrastructure incorporates tightly integrated encryption for transparent, end-to-end data security and is based on open standards. It remains to be seen what other developments on Wall Street will accelerate this new approach to data center architecture. One thing is clear: the future of the data center looks cloudy.

Wei Lien Dang is a senior product manager at Bracket Computing. Previously, he worked at Amazon Web Services and Splunk.