What is currently referred to as artificial intelligence is not true artificial intelligence. We are at the very beginning of AI research, with techniques like deep learning enabling software to “learn” from past examples of a solution to a problem. In 2015, Google DeepMind’s AlphaGo exemplified this beautifully. At its core, it was machine learning applied to the board game Go: the software was trained on millions of moves from human matches and from games it played against itself, and deep learning helped it judge whether a candidate move was good or bad.
Narrow AI is a very efficient way to complete a single task. Playing chess, making purchasing suggestions, training a spam filter or even translating languages are all forms of Narrow AI. Computer vision, speech recognition and natural language processing, fascinating advances though they are, still fall within the same category. The technology behind driverless cars is widely accepted as a synthesis of multiple forms of Narrow AI.
We can safely assume that a form of Narrow AI that could help mission critical facilities reach near-perfect efficiency, based on the equipment on site, will soon be developed and made available. But what about going a step further into the future and taking a look at the data center under a form of General AI, or human-level AI? Having a form of AI that can not only sift through massive amounts of data but also plan for the unexpected, think ‘out of the box’ and see the bigger picture would advance all current forms of technology, some to the point of a complete overhaul. How would the data center look in such an age?
Extrapolating the current purpose of a data center - to store and provide the means of access to data from a physical location over a network - we can paint a picture of its evolution. With speed of access becoming more and more important, we can envision a scenario where the classic data center will be shrunk to the size of a chip, maybe even becoming an item of wearable tech. Through advancements in telecommunications, such chips would be perpetually interconnected, acting like a worldwide distributed computing system.
In a distributed system, every part knows what the others hold with regard to data. If one chip needed a resource located elsewhere, it would know straight away where to look for it and how to retrieve it. This also ensures a certain degree of redundancy: if one unit fails, its contents are transferred to others, depending on the availability of resources.
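The lookup-and-redundancy behavior described above can be sketched in a few lines. This is a minimal illustration, not a real distributed system: the names (`Cluster`, `REPLICAS`) and the shared-index design are assumptions made for the example, and a production system would use techniques such as consistent hashing and gossip protocols instead of a single in-memory index.

```python
REPLICAS = 2  # illustrative assumption: each piece of data is held by two nodes

class Cluster:
    """Toy model of nodes that share a global index of who holds what."""

    def __init__(self, node_ids):
        self.nodes = {nid: {} for nid in node_ids}   # node id -> local data store
        self.index = {}                              # key -> set of holder node ids

    def put(self, key, value):
        # Store the data on REPLICAS nodes and record the holders in the
        # shared index, so every part "knows what the others hold".
        holders = sorted(self.nodes)[:REPLICAS]
        for nid in holders:
            self.nodes[nid][key] = value
        self.index[key] = set(holders)

    def get(self, key):
        # Any node can consult the index and retrieve the resource straight away.
        for nid in self.index.get(key, ()):
            if nid in self.nodes:
                return self.nodes[nid][key]
        raise KeyError(key)

    def fail(self, node_id):
        # On failure, re-replicate the lost node's contents to a surviving node.
        lost = self.nodes.pop(node_id)
        for key, value in lost.items():
            self.index[key].discard(node_id)
            for nid in sorted(self.nodes):
                if nid not in self.index[key]:
                    self.nodes[nid][key] = value
                    self.index[key].add(nid)
                    break
```

For example, after `put("x", 1)` on a three-node cluster, killing one of the holders with `fail(...)` still leaves `get("x")` working, because the data is copied to a surviving node.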
Having a General AI administer this system would take care of the complicated data management issues. The AI would know in advance when a part of the system was about to become unstable: by crunching data constantly, the slightest skew in the measurements would raise a red flag, then lead to decisions on how to mitigate the situation, whether by repairing, replacing or upgrading the affected components. With General AI, all of this would happen without any of us realizing it.
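The “slightest skew raises a red flag” idea is, at its simplest, statistical anomaly detection. Below is a hedged sketch of that mechanism using a rolling z-score over a sensor reading; the class name, window size and threshold are illustrative assumptions, and a real monitoring system would use far more sophisticated models.

```python
from collections import deque
from statistics import mean, stdev

class SkewMonitor:
    """Flags readings that deviate sharply from recent history."""

    def __init__(self, window=20, threshold=3.0):
        self.readings = deque(maxlen=window)  # rolling window of past measurements
        self.threshold = threshold            # how many standard deviations count as a skew

    def observe(self, value):
        # Compare the new reading against the mean and spread of the window;
        # return True (red flag) if it is more than `threshold` deviations away.
        flagged = False
        if len(self.readings) >= 2:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                flagged = True
        self.readings.append(value)
        return flagged
```

Feeding it a steady stream of, say, inlet temperatures around 21 °C raises no flags, while a sudden jump to 30 °C is flagged immediately; the mitigation logic (repair, replace, upgrade) would then act on that signal.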
As the field of computer science evolved, clever use of semiconductors and transistors shrunk the computer from the size of a room, to the size of a desk, then to the size of a notebook, finally reaching the size of the palm of a hand. What’s stopping us from applying the same principles to the data center?
Vlad-Gabriel Anghel is our in-house technical consultant and customer support manager at DCPro.