Quantum computers were proposed forty years ago, but have yet to make any real impact in the world. Researchers are struggling to get significant amounts of processing power from systems that have to be kept close to absolute zero.
But at the same time, the quantum informatics community is already hard at work on the next step beyond individual quantum computers: an Internet that can connect them to work together.
You can already communicate with plenty of experimental quantum computers on the Internet. For instance, Amazon’s Braket service on AWS lets you build and test quantum algorithms on different quantum processors (from D-Wave, IonQ, or Rigetti) or use a simulator.
Microsoft does something similar on Azure, and IBM also lets users play with a five-qubit quantum system.
Job done? Well no.
Sharing the output of isolated quantum computers is one thing; linking quantum computers up is something else.
The power of quantum computing is in having a large number of qubits, whose value is not determined till the end of the calculation. If you can link those “live” qubits, you’ve linked the internals of your quantum computers, and you’ve effectively created a bigger quantum computer.
Why link up quantum computers?
“It is hard to build quantum computers large enough to solve really big problems,” explains Vint Cerf, one of the Internet’s founders, in an email exchange with DCD.
The problem is harder because the quantum world creates errors: “You need a lot of physical qubits to make a much smaller number of logical qubits - because error correction requires a lot of physical qubits.”
So, we actually do need to make our quantum computers bigger.
But we want to link “live” qubits, rather than just sharing the quantum computer’s output. We want to distribute the internal states of that computer without collapsing the wave function - and that means distributing entanglement (see box).
The quantum Internet means taking quantum effects and distributing them across a network. That turns out to be very complex - but potentially very powerful.
“Scaling of quantum computers is facilitated by the distribution of entanglement,” says Cerf, who works at Google. “In theory, you can make a larger quantum system if you can distribute entanglement to a larger number of distinct quantum machines.”
There’s a problem though.
There are several approaches to building qubits for the development of quantum computers, including superconducting junctions, trapped-ion quantum computers, and others based on photons. One thing they all have in common is they have to be isolated from the world in order to function.
The qubits have to maintain a state of quantum coherence, in which all the quantum states can exist, like the cat in Schrödinger’s famous paradox that is both alive and dead - as long as the box is kept closed. So how do you communicate with a sealed box?
National quantum Internet
That question is being asked at tech companies like Google, and at universities around the world. Europe has the Quantum Internet Alliance, a grouping of multiple universities and bodies including QuTech, and the US has Q-Next, the US national quantum information science program, headed by University of Chicago Professor David Awschalom.
The US Department of Energy thinks the question is important enough to fund, and has supported projects at Q-Next lead partner, the Argonne National Laboratory. Among its successes, Q-Next has shared quantum states over a 52-mile fiber link which has become the nucleus of a possible future national quantum Internet.
Europe’s QIA has notched up some successes too, including the first network to connect three quantum processors, with quantum information passed through an intermediate node, created at the QuTech institute for quantum informatics.
Another QIA member, the Max Planck Institute, used a single photon to share quantum information - something which is important, as we shall see.
Over at Argonne, Martin Suchara, a scientist from the University of Chicago, has DoE funding for his work on quantum communications. He, too, stresses the difficulty of transmitting quantum information.
“The no-cloning theorem says, if you have a quantum state, you cannot copy it,” says Suchara. “This is really a big engineering challenge.”
Send for the standards body
With that much work going on, we are beginning to see the start of a quantum Internet.
But apart from the technical difficulty, there’s another danger. All these bodies could create several, incompatible quantum Internets. And that would betray what the original Internet was all about.
The Internet famously operates by “rough consensus and running code.” Engineers make sure things work, and can be duplicated in multiple systems, before setting them in stone.
For 35 years, the body that has ensured that running code has been the Internet Engineering Task Force (IETF). It’s the body that curates standards for anything added to our global nervous system.
Since the dawn of the Internet, the IETF has published standards known as “RFCs” (requests for comment). These define the network protocols which ensure your emails and your video chats can be received by other people.
If we are going to have a quantum Internet, we’ll need an RFC which sets out how quantum computers communicate.
Right now, that’s too blue-sky for the hands-on engineers and protocol designers of the IETF. So quantum Internet pioneers have taken their ideas to the IETF’s sister group, the forward-looking Internet Research Task Force (IRTF).
The IRTF has a Quantum Internet Research Group (QIRG), with two chairs: Rodney Van Meter, a professor at Keio University, Japan; and Wojciech Kozlowski at QuTech in Delft. QIRG has been quietly looking at developments that will introduce completely new ways of networking.
“It is the transmission of qubits that draws the line between a genuine quantum network and a collection of quantum computers connected over a classical network,” says the QIRG’s document, Architectural Principles for a Quantum Internet. “A quantum network is defined as a collection of nodes that is able to exchange qubits and distribute entangled states amongst themselves.”
The work is creating a buzz. Before Covid-19 made the IETF pause its in-person meetings, QIRG get-togethers drew quite an attendance, says Kozlowski, “but it was more on a kind of curiosity basis.”
Building up from fundamentals
The fundamental principle of quantum networking is distributing entanglement (see box), which can then be used to share the state of a qubit between different locations.
Suchara explains: “The trick is, you don't directly transmit the quantum state that is so precious to you: you distribute entanglement. Two photons are entangled, in a well-defined state that basically ties them together. And you transmit one of the photons of the pair over the network.”
He goes on: “Once you distribute the entanglement between the communicating endpoints, you can use what is called quantum teleportation to transmit the quantum state. It essentially consumes the entanglement and transmits the quantum state from point A to point B.”
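The teleportation recipe Suchara describes can be checked with a few lines of linear algebra. The following Python sketch is illustrative only - it uses plain NumPy rather than any real quantum SDK - and simulates teleporting one qubit through a pre-shared Bell pair. It shows that for every one of the sender’s four possible measurement outcomes, the receiver recovers the original state after a classical correction:

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
X = np.array([[0, 1], [1, 0]])                 # bit flip
Z = np.array([[1, 0], [0, -1]])                # phase flip

def teleport(psi):
    """Teleport the single-qubit state psi (length-2 vector) from Alice to Bob.
    Qubit 0 is Alice's data qubit; qubits 1 and 2 are a pre-shared Bell pair.
    Returns Bob's qubit-2 state for each of Alice's four measurement outcomes."""
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # (|00> + |11>)/sqrt(2)
    state = np.kron(psi, bell)                          # 8-dim, qubit 0 most significant
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    state = np.kron(cnot, I2) @ state                   # CNOT: control q0, target q1
    state = np.kron(H, np.eye(4)) @ state               # Hadamard on q0
    results = {}
    for m0 in (0, 1):                                   # Alice's two measurement bits,
        for m1 in (0, 1):                               # sent over the classical channel
            base = 4 * m0 + 2 * m1
            bob = state[[base, base + 1]]               # collapse q0, q1 to |m0 m1>
            bob = bob / np.linalg.norm(bob)
            if m1:
                bob = X @ bob                           # Bob's classical corrections
            if m0:
                bob = Z @ bob
            results[(m0, m1)] = bob
    return results
```

Note that the entangled pair is consumed in the process, and the two measurement bits must travel over an ordinary classical link - exactly the pattern described above.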
“The quantum data itself never physically enters the network,” explains Kozlowski. “It is teleported directly to the remote end.”
Kozlowski points out that teleporting qubits is exciting, but distributing entanglement is the fundamental thing.
“For example, quantum key distribution can be run on the entanglement based network without any teleportation, and so can a bunch of other applications. Most quantum application protocols start from saying ‘I have a bunch of states,’ and teleportation is just one way of using these states.”
Towards a quantum Internet protocol
In late 2020, Kozlowski co-authored a paper with colleagues at Delft proposing a quantum Internet protocol. It set an important precedent by placing distributed entanglement into a framework similar to the layered stacks - OSI or TCP/IP - which define communication over classical networks.
“Our proposed stack was heavily inspired by TCP/IP, or OSI,” he tells us. “The definitions of each layer are slightly different, but there was this physical layer at the bottom, which was about attempting to generate entanglement.”
That word “attempting” is important, he says: “It fails a lot of the time. Then we have a link responsible for operating on a single link, between two quantum repeaters or an end node and a quantum repeater. The physical layer will say ‘I failed,’ or ‘I succeeded.’ The link layer would be responsible for managing that to eventually say, ‘Hey, I actually created entanglement for you.’”
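That attempt-and-retry division of labor between the physical and link layers can be sketched as a toy simulation. The function names here are ours, not from any real protocol stack, and the success probability is an arbitrary stand-in for real hardware:

```python
import random

def physical_attempt(p_success, rng):
    """One heralded entanglement-generation attempt on a single link.
    The hardware 'heralds' the result, so the layer above knows
    whether this attempt succeeded or failed."""
    return rng.random() < p_success

def link_layer_generate(p_success=0.1, max_attempts=100_000, seed=None):
    """Link layer: keep retrying the physical layer until it reports success,
    then tell the layer above 'I actually created entanglement for you'."""
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        if physical_attempt(p_success, rng):
            return attempt                 # how many tries it took
    raise RuntimeError("link failed to generate entanglement")
```

The layer above never sees the individual failures - only the eventual success, which is the point of the abstraction.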
The protocol has to keep track of which qubits are entangled on the different nodes, and this brings in a parallel network channel: “One important thing about distributing entanglement is the two nodes that end up with an entangled pair must agree which of their qubits are entangled with which qubits. One cannot just randomly use any qubit that's entangled.
"One has to be able to identify in the protocol, which qubit on one node is entangled with which qubit on the other node. If one does not keep track of that information, then these qubits are useless.”
“Let’s say these two nodes generate hundreds of entangled pairs, but only two of them are destined for a particular application, then that application must get the right qubits from the protocol stack. And those two qubits must be entangled with each other, not just any random qubit at the other node.”
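A minimal sketch of that bookkeeping might look like the class below. The names and structure are purely illustrative - real stacks will differ - but the core idea is a per-node table mapping a shared pair identifier to the local qubit holding one half of the pair:

```python
class EntanglementTracker:
    """Per-node bookkeeping: which local qubit slot holds which half
    of which entangled pair. Illustrative only."""

    def __init__(self):
        self._pairs = {}                   # pair_id -> local qubit slot

    def record(self, pair_id, local_slot):
        """Called when the link layer reports a newly created pair."""
        self._pairs[pair_id] = local_slot

    def claim(self, pair_id):
        """Hand a specific pair to an application. Both nodes must claim
        the same pair_id, or the application ends up holding qubits that
        are not actually entangled with each other."""
        return self._pairs.pop(pair_id)
```

Both ends record the same pair ID against their own (generally different) local slots; an application then claims that ID on both nodes to get a matched pair.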
The classical Internet has to transmit coordinating signals like this, but it can put them in a header to each data packet. This can’t happen on the quantum Internet, so “header” information has to go on a parallel channel, over the classical Internet.
“The thing that makes software and network protocols difficult for quantum is that one could imagine a packet which has a qubit as a payload, and has a header. But they never ever travel on the same channel. The qubit goes on the quantum channel, and the header would go on the classical channel.”
It needs a classical channel
“It is a hard requirement to have classical communication channels between all the nodes participating in the quantum network,” says Kozlowski.
In its protocol design, the Delft team took the opportunity to have headers on the classical Internet that don’t correspond to quantum payloads on the quantum Internet, says Kozlowski: “We chose a slightly different approach where the signaling and control messages are not directly coupled. We do have packets that contain control information, just like headers do. What is different, though, is that they do not necessarily have a payload attached to them.”
In practice, this means that the quantum Internet will always need the classical Internet to carry the header information. Every quantum node must also be on the classical Internet.
So a quantum Internet will have quantum information on the quantum plane, and a control plane operating in parallel on the classical Internet, handling the classical data it needs to complete the distribution of qubit entanglement.
Applications on top
The Delft proposal keeps things general, by putting teleportation up at the top, as an application running over the quantum Internet, not as a lower layer service. Kozlowski says earlier ideas suggested including teleportation in the transport layer, but “we have not proposed such a transport layer, because a lot of applications don't even need to teleport qubits.”
Instead, the Delft paper proposes a transport layer that operates directly on entangled pairs, delivering services such as remote operations.
Over at Argonne, Suchara’s project has an eye on standards work, and agrees with the principles: “What should the quantum Internet protocol stack look like?” he asks. “It is likely that it will resemble in some way the OSI model. There will be layers, and some portion of the protocol at the lowest layer will have to control the hardware.”
Like Kozlowski, he sees the lower layer managing photon detectors and quantum memories in repeater nodes.
Above that, he says, “the topmost layer is the application. There are certain actions you have to take for quantum teleportation to succeed. That's the topmost layer.
“And then there's all the stuff in the middle, to route the photons through the network. If you have a complicated topology, with multiple network nodes, you want to go beyond point to point communication; you want multiple users and multiple applications.
Sorting out the middle layers creates a lot of open questions, says Suchara: “How can this be done efficiently? This is what we would like to answer.”
There is little danger of divergence at this stage, but over in Delft, Kozlowski’s colleagues, including quantum network leader Stephanie Wehner, have actually begun to write code towards an eventual quantum Internet. While this article was being written, QuTech researchers published a paper, Experimental Demonstration of Entanglement Delivery Using a Quantum Network Stack. The introduction promises: “Our results mark a clear transition from physics experiments to quantum communication systems, which will enable the development and testing of components of future quantum networks.”
Putting this into practice brings up the next set of problems. Distributing entanglement relies on photons (particles of light), which quantum Internet researchers refer to as “flying qubits,” to contrast them with the stationary qubits in the end system, known as “matter qubits.”
The use of photons sounds reassuring. This is light. It’s the same stuff we send down fiber optic cables in the classical Internet. And one of the major qubit technologies in the lab is based on photons.
But there’s a difference here. For one thing, we’re working on quantum scales. A bit sent over a classical network will be a burst of light, containing many millions of photons. Flying qubits are made up of single (entangled) photons.
“In classical communication, you just encode your bit in thousands of photons, or create extra copies of your bit,” says Suchara. “In the quantum world, you cannot do that. You have to use a single photon to encode a quantum state. And if that single photon is lost, you lose your information.”
Also, the network needs to know whether a flying qubit has been successfully launched. This means knowing whether entanglement has been achieved - and that requires what the quantum pioneers call a “heralded entanglement generation scheme.”
Working with single photons over optical fiber has limits: beyond a few kilometers, photon loss makes it impractical. So quantum Internet researchers have come up with “entanglement swapping.” A series of intermediate systems called “quantum repeaters” is set up, and entanglement over each short segment is swapped together, hop by hop, until the two distant endpoints share an entangled pair.
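A back-of-the-envelope model shows why dividing a long link into repeater segments helps. Assuming nothing but fiber loss at the standard 0.2 dB/km (a toy model - real links have other imperfections), the expected number of attempts to get a single photon through grows exponentially with distance, while each repeater segment only pays for its own short hop and can retry independently:

```python
def expected_attempts(length_km, loss_db_per_km=0.2):
    """Expected number of single-photon transmissions before one survives
    a fiber of the given length. Toy model: attenuation loss only."""
    survival = 10 ** (-loss_db_per_km * length_km / 10)
    return 1 / survival

# A direct 200 km link vs. four 50 km segments joined by swapping:
direct = expected_attempts(200)        # ~10,000 attempts end to end
per_segment = expected_attempts(50)    # ~10 attempts, retried per segment
```

Each segment succeeding after ~10 tries, independently, is vastly cheaper than waiting for one photon to survive the whole 200 km in a single shot.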
This is still not perfect. The fidelity of the shared entanglement degrades along the way, so multiple imperfect pairs are combined and “distilled” into fewer, higher-quality pairs - a process known as entanglement distillation, or purification.
On one level that simply means repeating the process till it works, says Suchara. “You have these entangled photons and you transmit them. If some portion - even a very large portion - of these entangled pairs is lost, that's okay. You just need to get a few of them through.”
Kozlowski agrees: “The reason why distributed entanglement works, and qubit distribution doesn't work, is because when we distribute entanglement, it's in a known form. It's in what we call Bell states. So if one fails, if one is lost, one just creates it again.”
Quantum repeaters will have to handle error correction and distillation, as well as routing and management.
But end nodes will be more complicated, says Kozlowski. “Quantum repeaters have to generate entanglement and do a bit of entanglement swapping. End nodes must have good capacity for a quantum memory that can hold qubits and can actually perform local processing instructions and operations. In addition to generating entanglement, one can then execute an application on it.”
Dealing with time
Quantum networks will also have to deal with other practical issues like timing. They will need incredible time synchronization because of the way entanglement is generated today.
Generating entanglement is more complicated than sending a single photon. Most proposed schemes send a photon from each end of the link to create an entangled pair at the midpoint.
“The two nodes will emit a photon towards a station that's in the middle,” says Kozlowski. “And these two photons have to meet at exactly the same time in the middle. There's very little tolerance, because the further apart they arrive, the lower quality the entanglement is.”
He is not kidding. Entanglement needs synchronization to nanosecond precision, with sub-nanosecond jitter of the clocks.
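A quick calculation shows how tight that is. Light in silica fiber travels at roughly two-thirds the vacuum speed of light - about 20 centimeters per nanosecond - so even a small mismatch in fiber length shows up as a timing error at the midpoint station. (The numbers here are round figures for illustration.)

```python
C_FIBER_M_PER_S = 2.0e8   # approx. speed of light in silica fiber, m/s

def arrival_mismatch_ns(fiber_a_m, fiber_b_m):
    """Difference in photon arrival times at the midpoint station, in
    nanoseconds, given each node's fiber length in meters. Toy model:
    identical emission times, mismatch comes only from fiber length."""
    return abs(fiber_a_m - fiber_b_m) / C_FIBER_M_PER_S * 1e9
```

On a pair of 10 km fibers, a length mismatch of just 20 centimeters already costs a full nanosecond - the entire error budget.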
The best way to deliver timing is to include it in the physical layer, says Kozlowski. But there’s a question: “Should one synchronize just a link? Or should one synchronize the entire network to a nanosecond level? Obviously, synchronizing an entire network to a nanosecond level is difficult. It may be necessary, but I intuitively would say it's not necessary; I want to limit this nanosecond synchronization to each link.”
On a bigger scale, the quantum network has a mundane practical need to operate quickly. Quantum memories will have a short lifetime, as a qubit only lasts as long as it can be kept isolated from the environment.
“In general, one wants to do everything as fast as possible, as well, for the very simple reason that quantum memory is very short-lived,” says Kozlowski.
Networked quantum systems have to produce pairs of Bell states fast enough to get work done before stored qubits decay. But the process of making entangled pairs is still slow. Till that technology improves, quantum networked systems will achieve less, the further they are separated.
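A toy exponential-decay model makes the tradeoff concrete: if producing the next entangled pair takes about as long as the memory lifetime, a qubit already sitting in memory survives the wait only about a third of the time. (Real memories decay in more complicated ways; this is a sketch.)

```python
import math

def still_coherent_probability(wait_s, memory_lifetime_s):
    """Chance a stored qubit is still usable after waiting wait_s seconds,
    assuming simple exponential decay with the given memory lifetime."""
    return math.exp(-wait_s / memory_lifetime_s)

# Waiting one full memory lifetime leaves a survival chance of 1/e, ~37%.
survival = still_coherent_probability(1.0, 1.0)
```

This is why entanglement generation rate and memory lifetime have to improve together: a faster source or a longer-lived memory both raise the odds that two pairs are alive at once.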
“Time is an expensive resource,” says the QIRG document. “Ultimately, it is the lifetime of quantum memories that imposes some of the most difficult conditions for operating an extended network of quantum nodes.”
As Vint Cerf says: “There is good work going on at Google on quantum computers. Others are working on quantum relays that preserve entanglement. The big issues are maintaining coherence long enough to get an answer. Note that the quantum network does have speed of light limitations despite the apparent distance-independent character of quantum entanglement. The photons take time to move on the quantum network. If entanglement dissipates with time, then speed of light delay contributes to that.”
Demands of standards
The Internet wouldn’t be the Internet if it didn’t insist on standards. So the QIRG has marked “homogeneity” as a challenge. The eventual quantum Internet, like today’s classical Internet, should operate just as well, regardless of whose hardware you are using.
Different quantum repeaters should work together, and different kinds of quantum computers should be able to use the network, just as the Internet doesn’t tell you what laptop to use.
At the moment, linking different kinds of quantum systems is a goal for the future, says Kozlowski: “Currently, they have to be the same, because different hardware setups have different requirements for what optical interaction they can sustain. When they go on the fiber, they get converted to the telecom frequency, and then it gets converted back when it's on the other end. And the story gets more complicated when one has to integrate between two different setups.”
“There is ongoing work to implement technology to allow crosstalk between different hardware platforms, but that's ongoing research work. It’s a goal,” he says. “Because this is such an early stage. We just have to live with the constraints we have. Very often, software is written to address near-term constraints, more than to aim for these higher lofty goals of full platform independence and physical layer independence.”
The communication has also got to be secure. On the classical Internet, security was an afterthought, because the original pioneers like Cerf were working in a close-knit community where everyone knew each other.
With 35 years of Internet experience to refer to, the QIRG is building in security from the outset. Fortunately, quantum cryptography is highly secure - and works on quantum networks, almost by definition.
As well as this, the QIRG wants a quantum Internet that is resilient and easy to monitor.
“I think participation in the standardization meetings, by the scientific community and potential users, is really important,” says Suchara. “Because there are some important architectural decisions to be made - and it's not clear how to make them.”
The quantum Internet will start local. While the QIRG is thinking about spanning kilometers, quantum computing startups can see the use in much smaller networks, says Steve Brierley, CEO of Riverlane, a company impatient to hook up quantum computers large enough to do real work.
“The concept is to build well functioning modules, and then network them together,” says Brierley. “For now, this would be in the same room - potentially even in the same fridge.”
That level of networking “could happen over the next five years,” says Brierley. “In fact, there are already demonstrations of that today.”
Apart from anything else, long distance quantum networks will generate latencies. As we noted earlier, that will limit what can be done, because quantum data is very short-lived.
For now, not many people can be involved, says Kozlowski: “The hardware is currently still in the lab, and not really outside.”
But, for the researchers in those labs, Kozlowski says: “There's lots to do,” and the work is “really complicated. Everybody's pushing the limits, but when you want to compare it to the size and scope of the classical Internet, it's still basic.”
Professor Andreas Wallraff at ETH Zurich used an actual pipe to send microwave signals between two fridges.
The Max Planck Institute has shown quantum teleportation, to transmit quantum information from a qubit to a lab 50 meters away.
At QuTech Professor Stephanie Wehner has shown a “link layer” protocol, which provides reliable links over an underlying quantum network. And QuTech has demonstrated a quantum network in which two nodes are linked with an intermediate repeater.
Over in the US at Argonne, Suchara is definitely starting small with his efforts at creating reliable communications. He is working on sharing quantum states, and doesn’t expect to personally link any quantum computers just yet.
For him, the first thing is to get systems well enough synchronized to handle basic quantum communications: “With FPGA boards, we already have a clock synchronization protocol that's working in the laboratory. We would like to achieve this clock synchronization in the network - and we think the first year of our project will focus on this.”
Suchara thinks that long-distance quantum computing is coming: “What's going to happen soon is connecting quantum computers that are in different cities, or really very far away."
Ideally, he thinks long links will use the same protocol suite, but he accepts short links might need different protocols from long distances: “The middle layers that get triggered into communication may be different for short and long distance communication. But I would say it is important to build a protocol suite that's as universal as possible. And I can tell you, one thing that makes the classical Internet so successful is the ability to interconnect heterogeneous networks.”
Suchara is already looking at heterogeneity: “There are different types of encoding of quantum information. One type is continuous variable. The other type is discrete variable. You can have hybrid entanglement between continuous variable and discrete variable systems. We want to explore the theoretical protocols that would allow connecting these systems - and also do a demonstration that would allow transmission of both of these types of entanglement on the Argonne Labs quantum link network.”
The Argonne group has a quantum network simulator to try out ideas: “We have a working prototype of the protocol. And a simulator that allows evaluation of alternative protocol design. It’s an open source tool that's available for the community. And our plan is to keep extending the simulator with new functionality.”
How far can it go?
Making quantum networks extend over long distances will be hard because of the imperfections of entanglement generation and of the repeaters, as well as the fact that single photons on a long fiber will eventually get lost.
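The loss problem is easy to quantify. Standard telecom fiber attenuates at roughly 0.2 dB/km at the usual 1550 nm wavelength, so single-photon survival falls off exponentially with distance:

```python
def photon_survival(length_km, loss_db_per_km=0.2):
    """Probability a single photon makes it through telecom fiber of the
    given length, assuming only standard attenuation (~0.2 dB/km)."""
    return 10 ** (-loss_db_per_km * length_km / 10)
```

At 50 km, one photon in ten gets through; at 500 km, about one in ten billion. That is why long-haul quantum networking needs repeaters rather than simply brighter sources - a single photon cannot be amplified without destroying its quantum state.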
“This is a technological limitation,” says Kozlowski. “Give us enough years and the hardware will be good enough to distribute entanglement over longer and longer distances.”
It’s not clear how quickly we’ll get there. Kozlowski estimates practical entanglement-based quantum networks might exist in 10 to 15 years.
This is actually a fast turnaround, and the quantum Internet will skip past decades of trial and error in the classical world by starting with layered protocol stacks and software-defined networking.
The step beyond, to distributed quantum computing, will be a harder one to take, because at present quantum computers mostly use superconducting or trapped-ion qubits, and these don’t inherently interact with photons.
Why do it?
At this stage, it may all sound too complex. Why do it?
Professor Kozlowski spells out what the quantum Internet is not: “It is not aimed to replace the classical Internet. Because, just as a quantum computer will not replace a classical computer, certain things are done quite well, classically. And there's no need to replace that with quantum.”
One spin-off could be improvements to classical networks: “In my opinion, we should use this as an opportunity to improve certain aspects of the OSI model of classical control protocols,” says Suchara.
“For example, one of the issues with the current Internet is security of the control plane. And I think if you do a redesign, it's a great opportunity to build in more security mechanisms into the control plane to improve robustness. Obviously, the Internet has done great over the decades in almost every respect imaginable, but still one of the points that could be improved is the security of the control plane.”
Kozlowski agrees, pointing out that quantum key distribution already exists, and there are other quantum cryptography primitives that can deliver things like better authentication and more secure links.
The improvements in timing could also have benefits, including the creation of longer baseline radio telescopes, and other giant planetary instruments.
The big payoff could be distributing quantum computing, but Kozlowski sounds a note of caution: “Currently it's not 100 percent clear how one would do the computations. We have to first figure out how do we do computations on 10,000 computers, as opposed to one big computer.”
But Steve Brierley wants to see large, practical quantum computers which take high-performance computing (HPC) far beyond its current impressive achievements.
Thanks to today’s HPC systems, Brierley says: “We no longer ‘discover’ aircraft, we design them using fluid dynamics. We have a model of the underlying physics, and we use HPC to solve those equations.”
If quantum computers reach industrial scales, Brierley believes they could bring that same effect to other sectors like medicine, where we know the physics, but can’t yet solve the equations quickly enough.
“We still ‘discover’ new medicines,” he says. “We've decoded the human DNA sequence, but that's just a parts list.”
Creating a medicine means finding a chemical that locks into a specific site on a protein because of its shape and electrical properties. But predicting those interactions means solving equations for atoms in motion.
“Proteins move over time, and this creates more places for the molecule to bind,” he says. “Quantum mechanics tells us how molecules and proteins, atoms and electrons will move over time. But we don't have the compute to solve the equations. And as Richard Feynman put it, we never will, unless we build a quantum computer.”
A quantum computer that could invent any new medicine would be well worth the effort, he says: “I'd be disappointed if the only thing a quantum computer does is optimize some logistics route, or solve a number theory problem that we already know the answer to.”
To get there, it sounds like what we actually need is a distributed quantum computer. And for that, we need the quantum Internet.
This article first appeared in Issue 43 of DCD Magazine