The demand for computing power among medium-sized companies is growing rapidly, posing enormous challenges for corporate IT administrators and established data center providers. The drivers are the general progress of digitization, networking (IoT), ever-growing data volumes and the disruptive blockchain technology. At the same time, modern chip architectures are approaching the limits of what is physically feasible: data centers can only become faster and more powerful by packing circuits ever more densely into ever smaller spaces. The solution is simple - but very hot.

As early as 1965, Intel co-founder Gordon Moore predicted that the number of transistors on a chip - and with it processor performance - would double roughly every 18 to 24 months while costs fall. This "Moore's Law" has held true for 53 years. The problem, however, is that the demand for computing power is growing even faster than processor performance. Cloud solutions promise to make the most modern hardware usable as efficiently as possible for many customers. After all, behind the "clouds" are countless physical data centers at earthly locations such as Amsterdam, Frankfurt, Dubai and Dallas. Cloud data centers bundle the demand for computing power, require high-performance IT infrastructures - and consume gigantic amounts of electricity. An end to this is not in sight: new technologies are accelerating global IT power consumption.
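
Expressed as a simple exponential growth model (a sketch of the popular reading of Moore's Law, not Moore's original formulation), a doubling period T of 18 to 24 months implies:

```latex
% Popular reading of Moore's Law: performance doubles every T years
P(t) = P_0 \cdot 2^{t/T}, \qquad T \approx 1.5 \text{ to } 2 \text{ years}
% Over the 53 years since 1965 this amounts to a factor of roughly
% 2^{53/2} \approx 10^{8} \text{ up to } 2^{53/1.5} \approx 4 \times 10^{10}
```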

Moore's Law won't help us

As the Bitcoin Energy Consumption Index shows, the estimated energy consumption of this technology alone rose more than fivefold in just one year: from 14 TWh in July 2017 to over 72 TWh in July 2018. In other words, while global consumption a year ago was roughly equivalent to the annual electricity consumption of Iceland (population: 335,000), the current figure significantly exceeds the energy consumption of Austria with its 8.7 million inhabitants. A doubling of efficiency every one and a half to two years in accordance with Moore's Law is therefore no longer sufficient to meet this faster-growing demand.
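
A rough back-of-the-envelope comparison, using only the figures above, shows why:

```latex
% One-year growth in Bitcoin energy demand vs. Moore's Law efficiency gain
\frac{72~\mathrm{TWh}}{14~\mathrm{TWh}} \approx 5.1
\quad \text{versus} \quad
2^{12/24} \approx 1.4 \ \text{to} \ 2^{12/18} \approx 1.6
```

Even at the optimistic end, an efficiency gain of roughly 60 percent per year cannot keep pace with demand that quintuples over the same period.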

The good news is that there are solutions to this problem in the form of graphics processing units (GPUs). Compared to conventional central processing units (CPUs), graphics processors can perform less complex operations. They were designed to render high-resolution images and textures on screen in rapid succession, and were therefore built around very simple instruction structures. To put it bluntly, they are "simply structured" - but highly parallel and therefore extremely fast. It is precisely this feature that is gaining importance for modern IT requirements. Now is the time for corporate IT managers and outsourced data centers to offer access to ultra-fast graphics processors.
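
To make the contrast concrete, here is a minimal CUDA sketch of this "many simple operations in parallel" model: one trivial instruction - adding two numbers - is executed simultaneously by over a million lightweight GPU threads. The kernel, array size and launch configuration are illustrative assumptions, not taken from the article.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread performs one trivial operation: adding two numbers.
// The GPU's speed comes from running thousands of such threads at once.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;               // one million elements (illustrative)
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);        // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch ~4096 blocks of 256 threads: over a million parallel additions.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);         // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Where a CPU core would loop over the array element by element, the GPU schedules thousands of these threads at once - exactly the "simply structured but highly parallel" property described above.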

Fast graphics processors are ideally suited to deliver the computing power required by new applications such as artificial intelligence, machine and deep learning, and simulations of all kinds (AR/VR). One of the best-known examples of cost-efficient simulation is crash testing in the automotive industry, where many thousands of cars are spared the scrap press by extremely computationally intensive simulations. Virtual test roads are likewise indispensable for the industry's work on autonomous driving, and simulations of driver assistance systems make those systems safer. The battle for the best autonomous vehicle is being decided in data centers.

The trend towards GPUs is both a solution and a problem

It is not CPU market leader Intel that is benefiting from this trend, but the two graphics chip and card manufacturers Nvidia and AMD. Both have recorded double-digit growth rates over the last four quarters. Nvidia in particular benefits from its high-priced, high-performance professional graphics cards of the Tesla series. The CUDA programming technology developed by Nvidia can be used to advantage for artificial intelligence, for example with frameworks such as TensorFlow. AMD, meanwhile, scores in the price-performance segment and thus profits in particular from the boom in blockchain data centers. For these data centers alone, more than 3 million graphics cards were sold last year, the majority of them AMD cards.

At the end of last year, Nvidia changed the license conditions for its GeForce and Titan graphics cards, prohibiting the use of these inexpensive cards in data centers. Instead, Nvidia offers the professional Tesla graphics cards (starting at approx. 8,000 USD) for data centers. These deliver the highest computing power in the smallest space, but cost almost ten times as much as the GeForce cards. This change in the terms and conditions could give a further boost to AMD, the eternal number two in graphics cards.

The boom in mobile devices is also contributing to the sharp rise in power requirements and the need for high-performance data centers. Smartphones and tablets are becoming increasingly popular both at home and at work, and applications are becoming increasingly sophisticated. This quickly pushes the internal chip to its performance limit and drains the battery. More and more app developers therefore couple their apps to high-performance data centers and outsource calculations to them. However, very few users are aware that using such apps and cloud services drives up power consumption in the corresponding data centers. On average, a business tablet causes around five times as much power consumption in the data center as the device itself needs.

In short, graphics processors - and thus more powerful data centers - are facing a growth spurt in order to meet the changing demands on IT infrastructures. However, this is also accompanied by a higher power density in data centers and a significant increase in power consumption, with a corresponding environmental impact through increased CO2 emissions. The ongoing development and improvement of chip architectures cannot compensate for this. This makes it all the more important to use the installed capacities as efficiently as possible, e.g. through virtualization and cloud solutions, to use energy-saving code and algorithms wherever possible - and to use electricity from renewable sources. In the interests of environmental protection, but also of economic efficiency, it is necessary to act responsibly here. This is certainly possible.
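
One simple relationship makes the leverage of energy-saving code clear (a textbook identity, not a figure from the article):

```latex
% Energy is power integrated over time; for roughly constant power draw:
E = P \cdot t
% An algorithm that solves the same task in half the runtime on the same
% hardware therefore consumes roughly half the energy.
```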