Quantum computing is coming to next-gen computational facilities. In this, the International Year of Quantum, we want to empower you with the key insights needed to take action and lead in the quantum era.

Jay Guilmart, Q-CTRL

Quantum computers, a new form of computational accelerator for challenging, high-value problems in optimization, chemistry, and finance, have progressed rapidly. They are now approaching the point where they overtake alternative computing technologies in commercially relevant instances of these key workloads. Recent demonstrations from Google even showed that, for certain narrowly defined applications, its quantum computer could complete an underlying task that was practically impossible for any supercomputer.

Foreseeing the coming demand, data center provider Equinix has been an early leader in the deployment of quantum computing in the data center, seeking to lower the barrier to entry for the end-user customer to explore commercial quantum use cases. Similarly, Oak Ridge National Laboratory and the Pawsey Supercomputing Centre have been integrating quantum processing units (QPUs) into their HPC systems. IBM has even coined the term “quantum-centric supercomputing,” highlighting its vision for the integration of the most advanced quantum and classical computing systems.

The starting gun has been fired, and based on this early movement, we anticipate that a much broader range of data center and HPC providers will begin on-premises and private deployments of quantum computers within their facilities. And with BCG estimating that 90 percent of value capture will accrue to early adopters, you can’t afford to wait.

So, from a practical perspective, how do you actually integrate quantum computing hardware to begin building your own Software-Defined Quantum Data Center? Below, we’ll break this down into three critical steps and analyze the key decisions you’ll need to make to maximize impact.


1. Choose hardware

First, if you’re new to quantum computing, it may come as a surprise that there are many different hardware approaches available to choose from. Unlike conventional computing, which settled decades ago on an architecture based on silicon transistors, there’s a zoo of competing approaches to building the fundamental “qubits” used to store and process quantum information. Each has strengths and weaknesses, and there is little certainty about which approach will ultimately become commoditized, which means the specific qubit modality in use in a data center contributes limited long-term value or differentiation.

Instead, the primary debate facing data center and HPC buyers adopting quantum computing today is whether to buy into the platform of a full-stack provider, accepting that they will be beholden to that vendor’s product and pricing, or to work with a system integrator, whose systems are typically easier on the budget but don’t deliver the same “turnkey” experience as a single vertically integrated vendor.

As cutting-edge tools, quantum computers have historically been built and used by expert teams of hand-selected users. IBM broke this mold, putting the first quantum computer on the cloud in 2016 and enabling broad access. This led to a surge in direct cloud offerings from companies like IBM, Rigetti, Oxford Quantum Circuits, and IonQ, alongside broader multivendor managed services like AWS Braket and Microsoft Azure Quantum. Through these services, users can run their workloads on the public cloud.

With greater emphasis on data sovereignty and lower barriers to entry, on-premises quantum computers have come to the fore. Those same cloud providers are leading this shift: users can now purchase and locally install full-stack platforms that are ready to run out of the box. These full-system deployments offer a straightforward access point and a heavily serviced experience for those new to the quantum space.

In the latest alternative distribution paradigm, various system integrators are focusing on combining offerings from multiple vendors around a “QPU core” to deliver complete solutions, akin to the traditional computer OEM channels we see today – think Lenovo and Dell. These systems offer more in the way of customization, upgradability, and cost effectiveness, while carrying the risks of multi-vendor integration.

In choosing the right hardware path, buyers should understand whether an open or closed system approach works best for them and pay attention to system-level specifications, such as QPU size and algorithmic performance benchmarks, which indicate what use cases can be executed on the device. Finally, buyers should seek clarity on the level of integrability the vendors support; this can be an area where system integrators deliver greater value because of their innate focus on modularity and connectivity rather than on developing a closed ecosystem.

2. Build abstractions

Just as QPU hardware has yet to become commoditized, the quantum computing stack remains in development, with relatively little consistency in how machines are accessed and programmed. Savvy buyers will have an informed opinion on how to leverage software abstraction to accomplish their key goals.

With the right software abstractions, you can begin to transform quantum processors from fragile, research-grade tools into reliable infrastructure for solving real-world problems. Here are three critical layers of abstraction that make this possible.

First, there’s hardware management. Quantum devices need constant tuning to stay in working shape, and achieving that manually takes serious time and expertise. Intelligent autonomy provided by specialist vendors can now handle the heavy lifting – booting, calibrating, and keeping things stable – without someone standing by to babysit the machine.
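
As a purely illustrative sketch, the pattern is essentially a supervision loop: watch a device health metric and trigger recalibration automatically when it drifts. The function names here are hypothetical placeholders, not any vendor’s API.

```python
import time

# Hypothetical supervision loop for a QPU; the helpers below are placeholders
# standing in for vendor-specific tooling, used only to illustrate the idea.
FIDELITY_THRESHOLD = 0.99

def read_gate_fidelity() -> float:
    # Placeholder: a real system would run a quick benchmark on the device.
    return 0.995

def recalibrate() -> None:
    # Placeholder: a real system would re-tune pulses, frequencies, and timings.
    print("recalibration triggered")

def supervise(poll_seconds: int = 600) -> None:
    # Keep the machine healthy without an operator standing by.
    while True:
        if read_gate_fidelity() < FIDELITY_THRESHOLD:
            recalibrate()
        time.sleep(poll_seconds)
```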

Then there’s workload execution. Running a program on a quantum computer isn’t just plug-and-play. You usually have to translate your high-level algorithm into something that works with the quirks of the specific QPU being used, and address errors along the way. Now, software can take care of that translation and optimization behind the scenes, so users can just focus on building quantum algorithms and workloads that address key research or business needs.
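
As a rough illustration of the translation step this layer automates, the sketch below uses the open-source Qiskit SDK (chosen here only as one familiar example, not prescribed by any particular vendor) to rewrite an abstract circuit into the native gates and connectivity of a simulated device; in a managed stack, this happens behind the scenes alongside error suppression.

```python
from qiskit import QuantumCircuit, transpile
from qiskit.providers.fake_provider import GenericBackendV2  # simulated device in recent Qiskit releases

# A small Bell-state circuit written at the abstract, hardware-agnostic level.
circuit = QuantumCircuit(2, 2)
circuit.h(0)
circuit.cx(0, 1)
circuit.measure([0, 1], [0, 1])

# A generic five-qubit simulated backend standing in for a real QPU.
backend = GenericBackendV2(num_qubits=5)

# The transpiler rewrites the circuit into the device's native gate set and
# qubit connectivity; optimization_level=3 applies the heaviest simplification.
native_circuit = transpile(circuit, backend=backend, optimization_level=3)
print(native_circuit.count_ops())
```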

Finally, the most exciting opportunity is abstracting the quantum computer entirely. This is where it starts to feel like magic; instead of thinking in terms of highly technical “quantum circuits,” users can program in familiar tools and languages, like Python, and let infrastructure software handle everything in between. Abstracting from quantum instructions up to problem definitions is a huge step toward making quantum computing feel less like a science experiment and more like a truly useful technology.
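
To make that concrete, here is a deliberately hypothetical sketch: the problem is described with ordinary Python data structures, and a notional problem-level solver (not a real product API) would do all of the quantum work out of sight.

```python
from itertools import product

# A small max-cut instance expressed with plain Python data structures;
# no quantum circuits appear anywhere in the user-facing code.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut_value(assignment):
    # Number of edges whose endpoints land on opposite sides of the cut.
    return sum(assignment[a] != assignment[b] for a, b in edges)

# Classically, the answer we ultimately want is just a partition and a score.
best = max(product([0, 1], repeat=4), key=cut_value)
print(best, cut_value(best))

# A problem-level quantum interface would accept the same plain description
# and hide every quantum detail. 'QuantumSolver' is hypothetical, used purely
# for illustration; real offerings differ in naming and capability.
# solver = QuantumSolver(backend="on-prem-qpu")
# partition, value = solver.solve_max_cut(edges)
```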

Put all three layers together, and you’ve got a world where quantum computers can actually slot into real-world workflows – no quantum PhD required. That’s not just good for researchers; it’s what will make quantum computing truly scalable and impactful. In a future article, we will discuss these abstractions in more detail, along with the solutions available right now to implement them.

3. Integrate and deploy

With the quantum computer now purchased and abstracted, the important final step is to surface the quantum resources effectively to a target user base. The broader software infrastructure for accessing and using quantum computers is still developing, but two paths are emerging in the market.

Lightweight integration: Today’s most common access format is to surface a quantum computer as a standalone asset to run quantum-specific workloads. This requires a front-end portal that manages the users and jobs accessing the device. This approach is suitable for users experimenting with quantum applications and learning to run quantum algorithms.

Full hybrid computing: A newly developing access format is complete integration of the quantum system into an existing IT footprint. In this case, data centers and HPC facilities commonly use resource management systems like Slurm, Load Sharing Facility (LSF), or Portable Batch System (PBS) to manage resource clusters and workloads. These systems are built for programming and executing classical programs, whereas leveraging quantum resources requires a different execution workflow.
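
To see why, consider a minimal sketch of a hybrid workload of the kind such a scheduler would launch: a classical optimization loop that repeatedly calls out to a quantum device and feeds the results back in. The quantum call here is a hypothetical placeholder rather than any particular vendor’s API.

```python
import random

# Minimal sketch of a hybrid quantum-classical job; 'run_on_qpu' is a
# hypothetical stand-in for a vendor SDK or queueing interface.
def run_on_qpu(parameters):
    # Placeholder: a real version would build a parameterized circuit, submit
    # it to the QPU, and estimate a cost from the returned measurements.
    return sum(p * p for p in parameters) + random.gauss(0, 0.01)

params = [random.uniform(-1, 1) for _ in range(4)]
best = run_on_qpu(params)
for _ in range(200):  # classical loop wrapped tightly around quantum calls
    trial = [p + random.gauss(0, 0.05) for p in params]
    cost = run_on_qpu(trial)  # each evaluation is another round trip to the QPU
    if cost < best:
        params, best = trial, cost
print("best cost found:", best)
```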

This presents a gap, as well as an opportunity, for improved, seamless integration between classical workload management systems and software that is designed and built to enable, manage, and optimize the practical performance of hybrid quantum workloads. Fire Opal’s hybrid optimization package is a step in this direction, but it remains an early example in a rapidly developing area.

Nvidia’s CUDA-Q is another prime example: Nvidia is applying its existing software and expertise in workload management across GPU-accelerated resources to create a similar framework for incorporating quantum resources.
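
For a flavor of what this looks like to a developer, the snippet below follows CUDA-Q’s published Python interface (details may vary between releases): the same kernel can be sampled on a local simulator or retargeted to an attached QPU by changing the configured backend.

```python
import cudaq

# A two-qubit Bell-state kernel written with CUDA-Q's Python interface.
@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# By default this samples on a local simulator; cudaq.set_target(...) can
# point the same code at GPU-accelerated simulation or a real QPU backend.
result = cudaq.sample(bell, shots_count=1000)
print(result)
```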

Quantum computing is coming quickly, and now is the time to begin integrating quantum solutions into both enterprise and research computing facilities. With these three steps, you’ll be able to readily assemble a leading platform that delivers real value to your users.

By acting now, you can be first in the race to dominate the market for the Software-Defined Quantum Data Center.

James Guilmart, senior product manager for Boulder Opal at Q-CTRL, also contributed to this article.