Were you to have traveled through central Kenya in the early 2000s, you might have come across something highly unusual: a dazzle of zebras sporting strange collars.
The animals were not part of a bizarre fashion show, but rather early pioneers of a technology that could one day span the Solar System, connecting other planets to one giant network.
The connected world we inhabit today is based on instant gratification. “The big issue is that the Internet protocols that are on the TCP/IP stack were designed with a paradigm that ‘I can guarantee that I send the information, and that information will be received, and I will get an acknowledgment during an amount of time that is quite small,’” Professor Vasco N. G. J. Soares explained, over a choppy video call that repeatedly reminded us what happens when that paradigm breaks down.
Nearly fifty years on from its invention, the Transmission Control Protocol (TCP) still serves as the de facto backbone of how our connected age operates.
But there are many places where such a setup is not economically or physically possible.
The plains of Africa are one such locale, especially in 2001, when Kenya had virtually no rural cellular connectivity, and satellite connectivity required bulky, power-hungry, and expensive equipment.
Zebras care not for connectivity; they don’t plan their movements around where to find the best WiFi signal. And that was a problem for an international group of zoologists and technologists who wanted to track them.
Faced with a landscape devoid of connection, the team had to come up with a way to study, track, and collect data on zebras - and get that data back from the field.
To pull this off, the group turned to a technology first conceived in the 1990s - delay- or disruption-tolerant networking (DTN). At its core is the idea of ‘store and forward,’ where information is passed from node to node and stored whenever connectivity falls apart, before being sent on to the next link in the chain. Instead of an end-to-end network, it is a careful hop-by-hop approach enabling asynchronous delivery.
In the case of ZebraNet, each equine served as a node, equipped with a collar featuring solar panels, GPS, a small CPU, flash memory, and radio connectivity.
Instead of communicating with satellites or telecoms infrastructure, each collar stored the zebra’s movement data locally. Then, when the animal came near another electronic equine, it shared the data. This continued until one of the zebras passed a mobile base station - perhaps attached to a Range Rover - and uploaded everything it had collected.
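In rough Python terms, the store-carry-and-forward pattern looks something like the sketch below. The names and data layout here are illustrative, not ZebraNet's actual firmware:

```python
from dataclasses import dataclass, field

@dataclass
class Collar:
    """A DTN node that stores everything it collects until it meets a peer."""
    node_id: str
    carried: set = field(default_factory=set)  # bundles: its own and relayed

    def record(self, seq: int, reading: str):
        """Log a reading locally - there may be no connectivity for days."""
        self.carried.add((self.node_id, seq, reading))

    def meet(self, other: "Collar"):
        """On contact, both peers end up carrying the union of their bundles."""
        merged = self.carried | other.carried
        self.carried, other.carried = set(merged), set(merged)

    def upload(self):
        """At a mobile base station, hand over everything and free the storage."""
        delivered, self.carried = self.carried, set()
        return delivered

# zebra-1 never meets the base station, yet its data still arrives via zebra-2:
z1, z2 = Collar("zebra-1"), Collar("zebra-2")
z1.record(0, "0.5S,36.9E")
z1.meet(z2)
print(z2.upload())  # {('zebra-1', 0, '0.5S,36.9E')}
```

Any chain of encounters that eventually reaches the base station is enough; no individual animal ever needs end-to-end connectivity.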
"It was one base station for about 10-12 collars," project member and Princeton University Professor Margaret Martonosi told DCD. “The main limit on storage capacity had to do with the physical design of what would fit on the circuit board and inside the collar module. Our early papers did some simulation-based estimates regarding storage requirements and likely delivery rates.”
It's an idea that sounds simple on the face of it, but one that requires a surprisingly complex, carefully thought-out approach to DTN, especially in more ambitious deployments.
“How much information you need to store depends on the application,” Soares explained. “So this means that you need to study the application that you're going to enable using this type of connection, and then the amount of storage, and also the technologies that are going to be used to exchange information between the devices.”
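As a back-of-envelope illustration of that sizing exercise - every number below is a hypothetical assumption, not a ZebraNet figure:

```python
# Hypothetical sizing: how much flash might a tracking collar need?
FIX_SIZE_BYTES = 32        # assumed: timestamp + coordinates + status flags
FIXES_PER_HOUR = 20        # assumed: one GPS fix every three minutes
DAYS_BETWEEN_UPLOADS = 7   # assumed: worst-case gap between base-station contacts
PEERS_RELAYED_FOR = 10     # assumed: other collars whose bundles we also carry

own = FIX_SIZE_BYTES * FIXES_PER_HOUR * 24 * DAYS_BETWEEN_UPLOADS
total = own * (1 + PEERS_RELAYED_FOR)
print(f"~{own / 1024:.0f} KiB of own data; ~{total / 1024:.0f} KiB once relaying")
```

The multiplier at the end is the point: in a DTN, a node's storage is dominated not by its own data, but by everyone else's.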
You also need to decide how to get data from A, the initial collection point, to Z, the end-user or the wider network. How do you ensure that it travels an efficient route between moving and disconnecting nodes, without sending it down dead ends or causing a bottleneck somewhere in the middle?
This remains an area of vigorous debate, with multiple competing approaches to operating a DTN still being pitched.
The most basic approach is the single-copy routing protocol, where each node carries the bundle forward to the next node it encounters, until it reaches its final destination. Adding geographic routing could mean that a node only sends the bundle onward when it meets a peer that is physically closer to the destination, or is heading in the right direction.
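In code terms, that handover rule is tiny - a sketch, assuming straight-line distances and ignoring the speed, heading, and link-quality checks real protocols add:

```python
import math

def should_hand_over(my_pos, peer_pos, dest_pos):
    """Single-copy geographic rule: pass the bundle on only if the peer
    is strictly closer to the destination than the current carrier."""
    return math.dist(peer_pos, dest_pos) < math.dist(my_pos, dest_pos)

# The lone copy moves only when it can make geographic progress:
print(should_hand_over((0, 0), (3, 4), (10, 0)))   # True: peer is closer
print(should_hand_over((0, 0), (-3, 0), (10, 0)))  # False: keep carrying it
```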
Then there are multiple-copy routing protocols, which see each node pass the data on to several others. Versions of this approach, like the ‘epidemic protocol,’ spread data across a network rapidly, but risk flooding all the nodes.
"On a scenario that has infinite resources, this will be the best protocol," Soares said. "But in reality, it's not a good choice because it will exhaust the bandwidth and the storage on all the nodes." ‘Spray and Wait’ tries to build on this by adding limits to control the flooding.
Another approach, ‘PRoPHET,’ applies probabilistic routing to nodes that move in non-random patterns. For example, after enough study, it would be possible to predict the general movement patterns of zebras, and build a routing protocol based upon them.
Each time data travels through the network, the encounter history is used to update the probabilistic routing - although this can make the protocol more brittle in the face of sudden, unexpected changes.
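PRoPHET (later specified in RFC 6693) captures that intuition in a per-destination ‘delivery predictability’ score, updated by three small rules. The sketch below uses the constants suggested in the protocol's original paper:

```python
P_INIT, GAMMA, BETA = 0.75, 0.98, 0.25  # constants from the original PRoPHET paper

def on_encounter(p_ab):
    """Meeting a node raises our delivery predictability for it."""
    return p_ab + (1 - p_ab) * P_INIT

def age(p_ab, elapsed_units):
    """Predictability decays while two nodes fail to meet."""
    return p_ab * GAMMA ** elapsed_units

def transitive(p_ac, p_ab, p_bc):
    """If A often meets B, and B often meets C, then A is a useful relay for C."""
    return p_ac + (1 - p_ac) * p_ab * p_bc * BETA

# A bundle is handed over only to peers with a higher predictability for
# its destination - so routes emerge from the herd's observed habits.
print(on_encounter(0.0), round(age(0.75, 10), 3))  # 0.75 0.613
```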
For his work at the Instituto Politécnico de Castelo Branco, Soares combined geographic routing with Spray and Wait to form the routing protocol ‘GeoSpray.’
"My scenario was assuming vehicles moving and data traveling between them, and so I would need the geographic information," he said. "A single copy is the best option if you can guarantee connection to the destination, but sometimes you have to use multiples to ensure that you will find someone that will get it there for you eventually.”
Each approach - along with the amount of storage, and how long nodes store data before deleting it - has to be tailored to the application.
In South Africa, a DTN deployment was used to connect rural areas. Communities used e-kiosks to send emails, but the data was simply stored on the system. When a school bus passed, the kiosk transferred the data to the bus, which carried it to the city and delivered it to the wider Internet. When the bus returned, it brought any replies with it.
But as we connect every inch of the globe, such spaces for DTN are shrinking. “The spread of cell connectivity across so much of the world has certainly been helpful for overall connectivity and does supplant DTN to a degree,” Martonosi admitted.
“On the other hand, the cost of cell connectivity is still high (often prohibitively so) for many people. From a cost perspective, collaborative dynamic DTNs and mesh networks seem like a very helpful technology direction.”
Following ZebraNet, Martonosi worked on C-Link, a DTN system to connect rural parts of Nicaragua, and SignalGuru, which shared data between vehicles. Due to increasing connectivity, such efforts "have not caught on widely," she said.
"But you can see aspects of these techniques still around - for example, the bluetooth-based contact tracing apps for Covid-19 are not dissimilar from some aspects of ZebraNet and C-Link’s design."
Terrestrial DTN proponents now primarily focus on low-power IoT deployments, or on situations where networks have been knocked out - such as in natural disasters, or on battlefields.
Indeed, the US Defense Advanced Research Projects Agency (DARPA) is one of the largest funders of DTN, fearing that the connectivity-reliant US military could be easily disrupted.
"DTN represents a fundamental shift in networking protocols that will result in military networks that function reliably and securely, even in the changing conditions and challenging environments where our troops must succeed now and in the future," BBN Technologies CEO Tad Elmer said after his company received $8.9m from DARPA to explore battlefield DTN.
The agency has published much of its work, but whether all of its research is out in the open remains to be seen. However, DARPA was also instrumental in funding the development of the TCP/IP-based Internet, which was carried out in public.
"The irony is that when Bob [Kahn] and I started to work on the Internet, we published our documentation in 1974," TCP/IP co-creator Vint Cerf told DCD. "Right in the middle of the Cold War, we laid out how it all works.
“And then all of the subsequent work, of course, was done in the open as well. That was based on the belief that if the Defense Department actually wanted to use this technology, it would need to have its allies use it as well, otherwise you wouldn't have interoperability for this command and control infrastructure.”
Then, as the technology developed, “I also came to the conclusion that the general public should have access to this,” Cerf recalled. “And so we opened it up in 1989, and the first commercial services started. The same argument can be made for the Bundle Protocol.”
With the DTN Bundle Protocol (published as RFC 5050), Cerf is not content with ushering in the connected planet. He eyes other worlds entirely.
“In order to effectively support manned and robotic space exploration, you need communications, both for command of the spacecraft and to get the data back,” he said. “And if you can't get the data back, why the hell are we going out there? So my view has always been ‘let's build up a richer capability for communication than point-to-point radio links, and/or bent pipe relays.’
“That's what's driven me since 1998.”
DTN is perfect for space, where delay is inevitable. Planets, satellites, and spacecraft are far apart, always in motion, and their relative distances are constantly in flux.
“When two things are far enough apart, and they are in motion, you have to aim ahead of where it is - it’s like shooting a moving target,” Cerf said. “It has to arrive there at the moment when the spacecraft actually gets to where the signal is propagating.”
Across such vast distances, “the notion of 'now' is very broken in these kinds of large delay environments,” he noted, adding that the harsh conditions of space also meant that disruptions were possible.
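It is easy to put numbers on that broken ‘now.’ A quick calculation of one-way light time between Earth and Mars, using the distance range Cerf cites later in his email to Kleinrock:

```python
# One-way light time between Earth and Mars, whose separation swings
# between roughly 34 million and 249 million miles.
LIGHT_SPEED_MILES_PER_S = 186_282

for label, miles in (("closest approach", 34e6), ("farthest", 249e6)):
    minutes = miles / LIGHT_SPEED_MILES_PER_S / 60
    print(f"{label}: ~{minutes:.0f} minutes one way")
# closest approach: ~3 minutes one way; farthest: ~22 minutes one way -
# far beyond anything TCP's acknowledgment timers were designed to tolerate.
```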
What we use now to connect our few assets across the Solar System relies primarily on line-of-sight communication and a fragile network of overstretched ground stations.
With the Bundle Protocol, Cerf and the InterPlanetary Internet Special Interest Group (IPNSIG) of the Internet Society hope to make a larger and more ambitious network possible in space.
An earlier related protocol, CFDP (the CCSDS File Delivery Protocol), has already been successfully trialed by the Martian rovers Spirit and Opportunity, while the International Space Station tested out the Bundle Protocol in 2016. “We had onboard experiments going on, and we were able to use the interplanetary protocol to move data back and forth - commands up to the experiments, and data back down again,” Cerf said.
With the Artemis Moon program, the Bundle Protocol may prove crucial to connecting the far side of the Moon, as well as nodes blocked from line-of-sight by craters.
“Artemis may be the critical turning point for the interplanetary system, because I believe that will end up being a requirement in order to successfully prosecute that mission,” he added.
DTN could form the backbone of Artemis, LunaNet, and the European Space Agency’s Project Moonlight. As humanity heads into space once again, this time it will expect sufficient communication capabilities.
“We can smell the success of all this; we can see how we can make it work,” Cerf said. “And as we overcome various and sundry barriers, the biggest one right now, in my view, is just getting commercial implementations in place so that there are off-the-shelf implementations available to anyone who wants to design and build a spacecraft.”
There’s still a lot to work out when operating at astronomical distances, of course.
“Because of the variable delay and the very large potential delay, the domain name system (DNS) doesn't work for this kind of operation,” Cerf said. “So we've ended up with kind of a two-step resolution for identifiers. First you have to figure out which planet you are going to, and then after that you can do the mapping from the identifier to an address at that locale, where you can actually send the data.
“In the [terrestrial] Internet protocols, you do a one-step lookup - you take the domain name, you do a lookup in the DNS, you get an IP address back, and then you open a TCP connection to that target. Here, we do two steps before we can figure out where the actual target is.”
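A toy illustration of that two-step resolution - all names and tables here are hypothetical, standing in for Bundle Protocol endpoint identifiers and real routing machinery:

```python
# Step one: coarse routing - which celestial body does the identifier
# live on, and which gateway gets the bundle moving toward it?
REGION_NEXT_HOP = {"earth": "earth-gateway", "mars": "mars-relay"}

# Step two: local resolution - performed only on arrival, because a live
# lookup across interplanetary distances could take many minutes.
LOCAL_DIRECTORY = {"mars": {"rover-7": "mars-net-address-42"}}

def resolve(identifier: str):
    region, name = identifier.split("/", 1)

    def resolve_locally():
        # Deferred: runs only once the bundle reaches the region.
        return LOCAL_DIRECTORY[region][name]

    return REGION_NEXT_HOP[region], resolve_locally

hop, local = resolve("mars/rover-7")
print(hop, local())  # mars-relay mars-net-address-42
```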
Again, as with the zebras, cars, and other DTN deployments, understanding how much storage each space node should have will be crucial to its effective operation.
But working that out is still an open question. “If I know where the nodes are, and I know the physics, and I know what the data rates could be, how do I know I have a network which is capable of supporting the demand?” Cerf asked.
“So I went to the best possible source for this question: Leonard Kleinrock at UCLA.” Kleinrock is a pioneer of queuing theory and packet switching, and one of the key people behind ARPANET.
“He's still very, very active - he's 87, but still blasting on,” said Cerf. “I sent him a note saying, ‘Look, here's the problem: I've got this collection of nodes, and I've got a traffic matrix, and I have this DTN environment - how do I calculate the capacity of the system so that I know I'm not gonna overwhelm it?’”
Two days later, Kleinrock replied with “two pages of dense math saying, ‘okay, here's how you formulate this problem,’” Cerf laughed.
Kleinrock shared with DCD the October 2020 email exchange in which the two Internet pioneers debated what Kleinrock described as an “interesting and reasonably unorthodox question.”
"Here's our situation," Cerf said in the email, outlining the immense difficulty of system design in a network where just the distance of Earth to Mars can vary from 34 million to 249 million miles. “The discrete nature of this problem vs continuous and statistical seems to make it much harder."
Kleinrock provided some calculations, and referenced earlier work with Mario Gerla and Luigi Fratta on a Flow Deviation algorithm. He told DCD: “It suggests the algorithm could be used where the capacities are changing, which means that you constantly run this algorithm as the capacity is either predictably changing or dynamically changing.”
Cerf said that Kleinrock proved immensely helpful. “Now, I didn't get the whole answer. I still don't have the whole answer,” he said. “But, I know I have one of the best minds in the business looking at the problem.”
As with many other aspects of the Interplanetary Internet, “this is not a solved problem,” Cerf said.
“But we're on it.”