
New silicon and software come together to handle IoT in the data center


Between the Internet of Things (IoT) and the cloud there is an ever-increasing demand for compute and data analysis. So how do we manage the IoT data deluge, knowing that we must not only deliver massive compute performance but also present data in a way users can consume it easily? The answer lies in the datacenter, a space where we will continue to see important changes driven by the insatiable need to provide users with a higher quality of experience (QoE) that is efficient, effective and secure while controlling costs.

Meeting this demand requires a new way of thinking, one that is transforming the datacenter from a homogeneous infrastructure to one that adapts to users’ needs, serving them the highest QoE in a scalable, secure and cost-effective manner.

Mix and match

In 2002, AMD made x86 a viable choice in the datacenter with the AMD64 architecture that today is the basis of all 64-bit x86 processors. AMD64 helped democratize the datacenter, lowering costs and increasing performance. AMD64 not only broke through the 4GB memory barrier that for years had a debilitating effect on x86 adoption in the datacenter, it also provided support for 32-bit legacy applications, making the upgrade to 64-bit seamless.

Now similar disruption is poised to occur in the datacenter with 64-bit ARM processors, given increasing IoT application (or workload) demand and the need for visually immersive translation of data. Server silicon based on ARM's 64-bit ARMv8 architecture will encourage innovation in a way not seen since the introduction of AMD64, allowing engineers to serve certain workloads more efficiently.

Alongside the new silicon will come an ARM software stack offering familiar operating systems and applications purpose-built for the architecture. The introduction of ARMv8 processors into the datacenter will allow engineers to pick and choose architectures better suited to particular workloads. While x86 processors remain the best choice for certain workloads, such as relational database servers, ARMv8 will disrupt the scale-out workloads typically found in data analytics and big data.

With the imminent introduction of 64-bit ARMv8 silicon into the datacenter, silicon vendors such as AMD and the entire ARM software ecosystem have been working hard to ensure the software infrastructure will be ready. Base functionality such as UEFI and ACPI, operating systems such as Linux, and application development languages such as Java will help users make the most of this new architecture. In addition, a large number of applications prevalent in the datacenter are now available on 64-bit ARMv8 processors, and the number will continue to grow.

Another shift under way is the growing use of Graphics Processing Units (GPUs) and Accelerated Processing Units (APUs) in the datacenter to implement machine learning. Datacenter engineers can leverage the parallel computing capabilities of GPUs to implement the back end of machine learning, where the rules are learned by analyzing data, while APUs (CPU and GPU cores combined in a single socket) can implement the decision-making front end that applies those rules.
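Conceptually, that back-end/front-end split looks like the sketch below: a model is trained in a parallel, batch-oriented phase (the GPU-friendly work, here simulated with plain NumPy on the CPU), and the learned rules are then applied by a lightweight decision function of the kind an APU's CPU cores would serve. The model and data are illustrative, not from the article.

```python
import numpy as np

# --- Back end: "learn the rules" from data. This batch training loop is
# the parallel, GPU-friendly phase; plain NumPy stands in for GPU kernels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # synthetic labels

w = np.zeros(2)
b = 0.0
for _ in range(500):                          # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid prediction
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

# --- Front end: apply the learned rules to make a decision. This
# latency-sensitive step is cheap and serial.
def decide(sample):
    """Return True if the learned rule classifies the sample positive."""
    return (sample @ w + b) > 0

print(decide(np.array([1.0, 1.0])))   # a clearly positive sample
```

The point of the split is that training touches the whole dataset repeatedly (wide parallelism pays off), while each decision touches only one sample (low latency matters more than throughput).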

GPUs and APUs will enable deeper data analysis for insight, more responsive front ends for action, and a more immersive user experience. This type of heterogeneous compute allows workloads to run on the architecture best suited to them in terms of performance and energy efficiency. It also supports increasingly accurate user verification tools such as face, voice and fingerprint recognition.

Interoperability is the key to a true heterogeneous architecture, one that doesn't require a developer to worry about underlying designs because workloads are automatically assigned to the processors that provide the best performance. Heterogeneous System Architecture (HSA), pioneered by AMD, plays a key role in offering this interoperability. The HSA Foundation includes many of the biggest names in silicon and software, working together to create a standard that allows developers to embrace different architectures without having to learn multiple programming languages and paradigms.
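The core idea, a runtime rather than the developer choosing where work runs, can be illustrated with a toy dispatcher. This is not the HSA runtime API, just a plain-Python sketch of heterogeneity-aware dispatch; the device-selection rule is a deliberately simplistic assumption.

```python
def dispatch(task_is_parallel, n_items):
    """Pick a (simulated) device for a workload.

    Toy rule: large, data-parallel batches go to the GPU; small or
    branchy work stays on the CPU. A real runtime such as an HSA
    implementation weighs far more factors (data placement, queue
    depth, power state) and does so transparently to the developer.
    """
    if task_is_parallel and n_items >= 10_000:
        return "gpu"
    return "cpu"

print(dispatch(True, 1_000_000))   # -> gpu
print(dispatch(False, 1_000_000))  # -> cpu
print(dispatch(True, 100))         # -> cpu
```

The value of a standard like HSA is that this decision logic lives below the programming model, so the same source code runs well on whatever mix of CPU, GPU and other cores a datacenter deploys.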

The datacenter is now required to perform more operations on more bits of data from more sources than ever before, while continuing to improve power efficiency and lower operating costs. IoT will require datacenters to raise the efficiency bar to meet service-level agreements on performance and uptime. To meet such demands, datacenter designers have to make intelligent silicon choices: choices that provide the best performance and power characteristics for each byte of data entering the datacenter, whether it originates from a smartphone in someone's hand or a sensor in a remote field.

Suresh Gopalakrishnan is the general manager and corporate vice president of Server Business Unit at AMD.
