Hewlett Packard Enterprise (HPE) has added an AI and supercomputer-based offering to its HPE GreenLake portfolio.
HPE GreenLake for Large Language Models (LLMs) gives customers the ability to train, tune and deploy large-scale artificial intelligence (AI) using the GreenLake platform.
GreenLake for LLMs will run on HPE Cray XD supercomputers hosted in the cloud. The service is expected to go live in North America by the end of 2023, followed by Europe in early 2024.
“We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,” said Antonio Neri, president and CEO of HPE. “HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers. Now, organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly.”
Unlike many other high-performance computing (HPC) systems, which run multiple parallel workloads in the cloud, GreenLake for LLMs gives users access to an AI-native architecture purpose-built for AI and simulation workloads, running a single job across hundreds or thousands of GPUs and CPUs at once.
GreenLake for LLMs will first be hosted in QScale's Q01 data center, for which HPE became the anchor tenant in May 2023. This will enable the supercomputer to run on almost 100 percent renewable energy.
The service will also give users access to Luminous, a pre-trained LLM developed by Aleph Alpha. Luminous is available in multiple languages and can be combined with a customer's own data to create a customized model.
“By using HPE’s supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals, and law firms to use as a digital assistant to speed up decision-making and save time and resources,” said Jonas Andrulis, founder and CEO of Aleph Alpha. “We are proud to be a launch partner on HPE GreenLake for Large Language Models, and we look forward to expanding our collaboration with HPE to extend Luminous to the cloud and offer it as a service to our end customers to fuel new applications for business and research initiatives.”
HPE GreenLake is a pay-as-you-go on-premises service that brings cloud computing to enterprises. Unlike other GreenLake offerings, however, the LLM service will be hosted out of select colocation facilities, where the supercomputers will be managed by HPE. This is the first of several AI offerings HPE has planned; others will include support for climate modeling, healthcare and life sciences, financial services, manufacturing, and transportation.