This is Part 2 of a two-part article.
On the networking front, copper has reached the end of the line as a primary backbone medium. Copper-based backbones will begin to decline rapidly, as improved compute and storage performance drives the need for more network bandwidth delivered over greater distances. Network backbones will soon be all fiber, with copper used only for the last few meters to connect servers to top-of-rack and end-of-row switches (and those links may soon be fiber as well). The significant size difference between fiber and copper is also a factor: cable trays in data centers will have more available capacity with fiber, instead of overflowing with far larger and heavier copper cabling.
Look towards a “SiPh” future (Intel’s new name for Silicon Photonics) for data centers. Photonics will begin making inroads into some leading-edge HPC designs and may then become part of the next mainstream networking standard. Beyond the existing 40-100 Gbit fiber networking standards, earlier this year Corning, in conjunction with Intel, announced the MXC Connector, which provides up to 64-fiber connectivity to deliver up to 1.6 terabits per second at lengths up to 300 meters, far beyond anything that copper can hope to deliver. We have moved from individual servers to blade servers and now toward “rackscale” computing, coupled with “Software Defined Everything” virtualizing everything in the data center, which will also begin to reshape the physical design of the data center.
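The per-fiber rate implied by those MXC figures follows directly from the arithmetic; a quick sketch (using only the 64-fiber and 1.6 Tbit/s numbers quoted above, the per-fiber result is a derived figure, not a vendor spec):

```python
# Back-of-the-envelope check of the quoted MXC Connector figures:
# 64 fibers carrying 1.6 Tbit/s aggregate.
FIBERS = 64
AGGREGATE_TBPS = 1.6

# Convert terabits to gigabits and divide across the fibers.
per_fiber_gbps = AGGREGATE_TBPS * 1000 / FIBERS
print(f"Per-fiber rate: {per_fiber_gbps:.0f} Gbit/s")  # prints "Per-fiber rate: 25 Gbit/s"
```

That works out to 25 Gbit/s per fiber, comfortably beyond what copper can sustain at 300 meters.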
And while I don’t normally pontificate on end-point devices such as tablets and smartphones, I thought a special mention should go out to Apple’s iBeacon technology, now embedded in iOS 7. It allows mobile positional tracking accurate to within a few feet, even inside a structure such as a shopping mall. This gives marketing and sales groups even more granular consumer information to add to their “Big Data” profile of consumer behavior and preferences (of course, this just reminds me of when Scott McNealy, then head of Sun Microsystems, said “You have zero privacy anyway… Get over it,” back in 1999). However, to put it all in context, Google has long been doing this with your online searches and by analyzing the content of your Gmail accounts.
Of course, this is music to the ears of Larry Ellison, CEO of Oracle, who later echoed McNealy’s sentiment by saying “The privacy you’re concerned about is largely an illusion,” since Oracle makes real-time analytical software to help sort out all the additional consumer metadata that will be coming in from mobile device tracking. From privacy advocates’ point of view, this added bit of tracking technology may make the “No Such Agency’s” efforts look mild in comparison, but it will certainly fuel the need for more “big data” data centers.
Notably, given recent events, I feel I would be remiss if I failed to give my “Horse Built by Committee Award” for “How not to build a web portal (and back-end database)” to the senior project managers who directed those who designed and built the Healthcare.gov website (note: this is not a political or social healthcare comment, just a personal observation). Hopefully, by the time this is published it will all be working properly.
The bottom line
It is getting harder to polish up my old crystal ball, so I guess I will need to try Google Glass next year (and check to see if they have added a “predict future” button, even if it is in beta).
So what is “The Right Stuff” for 2014? Flexibility and economy. Many organizations will need to review their presumed need for direct data center ownership and operation, as building, operating and upgrading their own facilities becomes more expensive and less of a strategic advantage. So, like NASA, they will need to outsource to one degree or another, and may be driven to migrate toward co-lo services, cloud services or a hybrid of both.
For new data centers, think Sustainable Resource Optimization. While we may not see a data center powered by wood pellets, sustainability will have a greater impact on site selection. Decisions will be based on taking advantage of favorable climatic conditions, as well as fuel types or sustainable power, either generated onsite (e.g. fuel cells) or supplied by local generation (solar, where possible) via Power Purchase Agreements. This will not be limited to Apple, eBay and Microsoft scale sites; even some enterprise and co-location providers (and their customers) will begin to see the social, political and business value of this. It will be coupled with better overall energy efficiency (not just facility PUE), perhaps aided by DCIM, or simply by better cooperation between IT and Facilities, and, if all the planets were to align, perhaps even with senior management.
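For readers less familiar with the PUE metric mentioned above, it is simply total facility energy divided by IT equipment energy; a minimal sketch (the kWh figures here are hypothetical, for illustration only):

```python
# Power Usage Effectiveness (PUE), as defined by The Green Grid:
# PUE = total facility energy / IT equipment energy.
# 1.0 is the theoretical ideal; typical facilities run higher.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Ratio of everything the facility draws to what the IT load uses."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical example: 1,500 kWh total draw against 1,000 kWh of IT load,
# i.e. 0.5 kWh of overhead (cooling, power distribution) per IT kWh.
print(pue(1500.0, 1000.0))  # prints 1.5
```

The point in the paragraph above is that a good facility PUE alone is not the whole story; overall efficiency also depends on how effectively the IT load itself is used.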
Climate Change and catastrophic weather: the 100 Year Event is the New Normal. Organizations will need to revise their traditional Disaster Recovery (cold site) strategies and move towards multiple geographically diverse sites with Active-Active replication, delivering Business Continuity rather than DR. This will become the de facto alternative, and it has become more technically and economically feasible thanks to virtualization, coupled with the lower cost of bandwidth and cloud services.
Stay tuned and see how these predictions pan out, and have a Happy New Year!
Views expressed are those of the author and do not necessarily reflect the views of DatacenterDynamics.