Leonard Kleinrock cannot be confined.

I spoke to him earlier this year for a feature on delay-tolerant networks, but it quickly became clear that there was a lot more to talk about.

The DTN feature - charting the development of an Internet protocol designed for disrupted environments like space - is available in the latest issue of the magazine. But we thought it was worth publishing the entire Q&A with Kleinrock, who played a crucial role in the development of the ARPANET, queuing theory, and packet switching.

Given the length of our conversation, we're publishing it in two parts - one today, the second tomorrow. You can also read our full interview with TCP/IP co-creator Vint Cerf, published earlier this week.

In this installment, we talk about DTN, the early days of ARPANET, the centralization of the Internet, and how US research culture is failing.

Leonard Kleinrock – Flickr/Elon University

Sebastian Moss: So Vint Cerf is working on DTN, and he's struggling with the capacity question. To quote: 'What is the capacity of this DTN network, given that I know where the nodes are, I know what the physics are, and I know what the data rates could be? I have a traffic matrix. Do I have a network which is capable of supporting the demand?'

He said he turned to you for help, is that right?

Leonard Kleinrock: Well, you saw the note that I wrote to Vint [pictured below]. Basically, what it's saying is there's been some work that I did very early, and then a couple of my students pursued some of it.

And we wrote a summary paper a few years ago, recalling research from 40 years ago and trying to summarize some current applications. It lays out some of the algorithms that could be used to deal with the changing capacity.

It's a paper from 2014, written by my former student Mario Gerla and Luigi Fratta, about our work from 40 years ago and a bit about today. It lays out some of the algorithms - you're familiar with the flow deviation work I did?

Yes.

So basically there were a number of problems: there was the capacity and flow assignment, there was the topology and flow assignment, etc. And we lay it all out. And the capacity assignment is the one that I described to Vinton [Cerf] in that handwritten note.

The paper suggests an algorithm that could be used where the capacities are changing, which means that you constantly rerun it as the capacity changes, either predictably or dynamically.
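
For readers curious what that capacity assignment looks like in practice, here is a minimal sketch - our own illustration, not code from the paper Kleinrock mentions - of the classical 'square-root' capacity assignment that falls out of his queueing analysis, assuming a simple linear cost per unit of capacity and hypothetical variable names.

```python
import math

def square_root_capacity_assignment(link_flows, link_costs, budget):
    """Classic square-root capacity assignment (after Kleinrock's queueing work).

    link_flows[i]: traffic (e.g. bits/sec) that link i must carry
    link_costs[i]: cost per unit of capacity on link i
    budget:        total spend available, in the same cost units

    Minimizes average delay, proportional to sum_i f_i / (C_i - f_i), subject to
    sum_i d_i * C_i = budget. Each link first gets just enough capacity to carry
    its flow; the leftover budget is spread in proportion to sqrt(f_i * d_i).
    """
    base_cost = sum(d * f for d, f in zip(link_costs, link_flows))
    excess = budget - base_cost
    if excess <= 0:
        raise ValueError("budget cannot even carry the offered traffic")

    weights = [math.sqrt(f * d) for f, d in zip(link_flows, link_costs)]
    total_weight = sum(weights)

    return [f + (excess / d) * w / total_weight
            for f, d, w in zip(link_flows, link_costs, weights)]


# Example: three links with given flows, equal unit costs, and a budget of 100
print(square_root_capacity_assignment([10, 20, 30], [1, 1, 1], 100))
```

In a DTN-style setting, one could imagine rerunning an assignment like this each time the predicted link capacities or costs change.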

I hadn't given it much thought until Vint recently spoke to me. And it seems as though you're looking for the routing in a changing network.

So think of a network where you're assuming the capacities are given as a function of time. In my original problem, we optimized the capacity as well as the routing.

DTN is in some sense simpler, right? You've already got the capacity. So all you can do to optimize is choose the routing that uses the given capacity to handle the traffic matrix.

And that's the nature of the algorithm that could be running: to determine whether there is a feasible flow and, if so, the best flow. Now, I'm not sure this is the best algorithm to use these days. But it's an approach, and that's about where I am in my thinking.
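
To give a concrete, heavily simplified picture of that 'feasible flow' check: the sketch below is our own illustration, with an assumed graph representation, not the flow deviation algorithm from the paper. It takes one snapshot of link capacities plus a traffic matrix, greedily routes each demand on the fewest-hop path that still has room, and reports whether the snapshot can carry the load - the idea being that you rerun it whenever the capacities change.

```python
from collections import deque

def route_demands(capacity, demands):
    """Try to route a traffic matrix over one snapshot of link capacities.

    capacity: dict {(u, v): available rate on the directed link u->v}
    demands:  list of (src, dst, rate) entries from the traffic matrix

    Each demand is routed on the fewest-hop path whose links all have
    enough residual capacity. Returns (feasible, routes).
    """
    residual = dict(capacity)
    routes = {}

    def shortest_feasible_path(src, dst, rate):
        # Breadth-first search restricted to links with enough residual capacity
        parent = {src: None}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path, n = [], dst
                while n is not None:
                    path.append(n)
                    n = parent[n]
                return list(reversed(path))
            for (u, v), cap in residual.items():
                if u == node and v not in parent and cap >= rate:
                    parent[v] = node
                    queue.append(v)
        return None

    for src, dst, rate in demands:
        path = shortest_feasible_path(src, dst, rate)
        if path is None:
            return False, routes          # this snapshot cannot carry the demand
        for u, v in zip(path, path[1:]):
            residual[(u, v)] -= rate      # consume capacity along the chosen path
        routes[(src, dst)] = path
    return True, routes


# Rerun the check every time the (predicted or sensed) capacities change
snapshot = {("A", "B"): 10, ("B", "C"): 10, ("A", "C"): 3}
ok, paths = route_demands(snapshot, [("A", "C", 5), ("A", "B", 4)])
print(ok, paths)
```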

As far as you know, no one else has worked on a more advanced, more up-to-date version?

Not with the changing capacities, no.

Leonard Kleinrock talks to Vint Cerf about DTN – Leonard Kleinrock

What does it demand of the application to have the algorithm running on the network itself?

Well, what we implemented in the Internet was basically a dynamic shortest path algorithm. And it worked. So if you really wanted a quick attack on this, we could take the existing algorithm - it does not know what the capacities are, but it senses them, because as queues build up, that means lines are getting overtaxed, and so you shift to a more available channel.

So it's basically an adaptive system, which is not mathematically structured except to find the shortest path through the distributed algorithm. And there have been many improvements on the basic shortest path algorithm. But that is running right now in the Internet.

So it's the capability to constantly sense - in this case it's traffic, and in your case it's capacity. But by the same metric, when queues are building up you shift load in an intelligent way, where you have a distributed algorithm which tells you what is currently the shortest path to a destination.
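
The adaptive behavior Kleinrock describes - link costs that rise as queues build, with paths recomputed on the fly - can be illustrated in a few lines. This is our own toy sketch, not ARPANET or Internet routing code; the cost model and link names are assumptions for the example.

```python
import heapq

def queue_aware_shortest_path(links, queue_len, src, dst):
    """Toy queue-sensitive shortest-path computation.

    links:     dict {(u, v): base delay of the directed link u->v}
    queue_len: dict {(u, v): packets currently queued on that link}

    A link's cost grows with its queue, so traffic naturally shifts away
    from overtaxed lines when the paths are recomputed.
    """
    def cost(link):
        return links[link] + queue_len.get(link, 0)   # assumed cost model

    dist, best = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for (u, v) in links:
            if u != node:
                continue
            nd = d + cost((u, v))
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                best[v] = u
                heapq.heappush(heap, (nd, v))

    if dst not in dist:
        return None                      # destination unreachable
    path, n = [dst], dst
    while n != src:
        n = best[n]
        path.append(n)
    return list(reversed(path))


links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 1}
# The direct A->C link is congested, so the path shifts to go via B
print(queue_aware_shortest_path(links, {("A", "C"): 10}, "A", "C"))
```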

And that kind of an algorithm, as we've seen in the past and theoretically, can cause problems if there's a mistake or hardware failure or software failure. It can be catastrophic.

I'll give you an example from the early ARPANET. One day, someone at Harvard advertised that he had zero-length paths to every other node in the network - he was sending out zeros in his table. Crash. That can happen with any algorithm.

The difficulty is that if they want its first deployment to be something in space, it's a little bit more expensive. You don't want to have it crash there.

Well, you're not gonna get a guarantee. I can tell you that the Internet as it exists right now has built-in failure modes that have not yet been exposed, because nobody has looked at the full set of control algorithms. They interfere, they conflict. And they can enter a phase where you can cause a calamity. It's happened in the past.

Leonard Kleinrock's note to Vint Cerf – Leonard Kleinrock

You, and everyone involved in ARPANET and TCP/IP, have been open about the fact that you didn't see how big this would become, and didn't build in the kind of security and redundancy features that might now be useful. Do you think with DTN, because we've now had that experience of the Internet becoming so important, there's a bit more circumspection?

So you mentioned two things, you mentioned scalability and security. They're related but different. The title of my thesis proposal was 'Information Flow in Large Communication Nets.'

The eventual thesis had a different title. But I recognized that we had to design scalability into this thing. Building a 20-node network doesn't teach you anything. So there is scalability, and the fact that it's distributed is a strong answer to the scalability question.

Security, you're absolutely right. Not that we didn't realize how big it would get - we didn't, of course; not as big as it is now. But we were dealing with a culture at that time which was much more benevolent. The users were not hostile or malicious. And so it was right not to impose any restrictions on the behavior, or the network wouldn't have grown.

But we should have put in some safeguards, which could have been cranked up as the need arose - that would have helped, but not solved today's critical security questions. So I'm not sure that DTN can benefit much from the experience, except the recognition that yes, security is big, and you're gonna get hacked like mad. Think about the pleasure of hacking a space network.

I guess you'd really have to go further back to the drawing board if you wanted to work out a way to improve things.

Yeah. A tangential comment is that in the whole blockchain world these days, much of the focus of the algorithms is on dealing with malicious and hostile and malevolent players. Otherwise, it would be a lot easier.

Although a lot of the compute itself in crypto is by malicious people...

Yes. Not sure that's going to be the case here, but the focus on overcoming bad behavior is there, and I'm not sure who in the DTN world is thinking about that. My first thought would be that they're probably not. They're looking at just getting around the craziness of changing networks.

I guess the two research avenues with DTN are the Interplanetary Internet/NASA-Cerf approach, and then DARPA funding more military applications. You'd assume that they would at least put some security layer in.

Yes, they do. With security, there are things you don't want to leak. But you also don't want things to be attacked - that's the hard one. And look what's happening in the States right now with SolarWinds and all the rest.

It was an attack. Think about what happened there: One company is providing all the security software for all the major networks. How dumb. A single point of failure built into the network on purpose!

And part of that, I think, is the fact that they didn't bring in people like you and me, or scientists thinking about the more general problem. You don't build in a single point of failure. They were thinking about security and rules and levels of clearance and all the rest. At any rate, I think a lesson we can learn, which may or may not apply here, is that you want to bring in all of the players - not just the military, not just the space guys - but scientists, engineers, even users, to weigh in on how this thing can be used, to be heard, and to offer solutions.

Having seen the Internet grow, and now coalesce around a few cloud providers, and Equinix on interconnection... does it worry you that there's a centralized economy on top of a decentralized network?

Yes, I do, because just from an economic point of view, and an equity point of view, they're abusing it, no question about it.

The government here is trying to do something about that, just a little. It violates our original ethos. I like to make the point that in those early days, we were using words like ethics, free, open, shared, trusted. Those words don't apply very well these days.

And it's that aspect of it that bothers me. In the 1990s, the Internet took a real turn, a sharp turn from its goal of reaching everybody in a fair, equitable way. And it became a money-making machine.

And the capitalists came in. I don't mean that in a political sense, but in the sense of selling to consumers. That's where [Internet gatekeepers decided] our focus is going to be, that's where applications are going to be. That's what development and engineering is going to focus on. And it stopped shooting for the Moon. That's an appropriate phrase here [laughs]. They started looking at the tiny problems of improving profits.

And that's related to your comment. It's those people who are driving the network and driving the architecture and the applications and the structure. And I think it's a bad thing. I wrote an op-ed piece in the LA Times. It was basically titled '50 years ago, I helped invent the Internet. How did it go so wrong?', and it points to that sharp right turn to the profit side.

And my point in that article is what are we going to do about it, with all the spam and the maliciousness, and all of that?

And the answer is, it's going to involve all the stakeholders. It's not just government regulation. It's the government providing a forum for stakeholders to discuss these matters. And who are the stakeholders? Government, industry, scientists, and the users - the users are not complaining enough. They're having their privacy ripped off, they're signing privacy agreements without reading them, understandably. And until there's enough of a complaint, and enough of a way to understand what it is that's happening to the users, and where the scientists get a chance to weigh in, we're probably not going to get a solution.

But it's a major challenge to get all these parties to participate.

And again, I guess there are two layers: the use of the Internet to sell to consumers - and to sell consumers - and then there's the infrastructure side of things.

Yes.

The infrastructure has never been so much in the hands of so few corporate enterprises.

That's right, and they're driving others out.

On an equally depressing topic, I saw you speaking about the short-horizon research culture permeating America and the West in general, and it's something I've been following for some time. When it comes to DTN, or new ways of building the Internet, how has the profit-driven, short-term motive of modern research impacted progress?

It has certainly impacted the research. It's a very disturbing situation. I mean, where is the investment these days in shooting for the Moon? The original ethos of the Internet, thanks to J. C. R. Licklider [who helped foresee and fund much of ARPANET and the computing revolution], was this wonderful funding culture where they gave people freedom and money and flexibility and failure was okay.

Some years later, the director of DARPA came to me proposing that I help them build a network which would deliver two bits per second reliably out of the foxhole - a military application.

He said 'do it in six months, here's a little bit of money, don't fail, and we're going to watch you.' I said, 'Are you kidding? Instead of asking me to build an Internet, you want me to do that?' And that's where it is shifting. That's exactly to your point. That's an example of a very small goal with all the wrong cultural aspects to it. And that's where industry is taking us these days. That's where they've invested their money.

I'm curious, on a personal level - you must have had a lot of offers to join companies, even on just a nominal, 'we want your name on the letterhead' type deal. But you've mostly stayed true to your research roots.

Yep, I have. Many times.

Is it fair to assume that was a deep philosophical decision?

I'm not sure you can give me that much credit. I think the answer really is I enjoy doing research. That's the challenge that I engage in, and it gives me the gratification I want. Yes, I don't want to kowtow to an organization that's doing bad things, certainly.

But as an example, years ago in 1968, three of us formed a company: Irwin Jacobs, Andy Viterbi, and myself. We formed something called Linkabit. And the three of us had equal shares in the company. Irwin was on leave from MIT. Andy and I were faculty members at UCLA.

And then they decided they wanted to move the company down to San Diego; they were going to leave their faculty positions and grow this company. And I said, 'fine, go, I'll stay at UCLA.' And that company grew to be Qualcomm. And I made a little bit of money, but not nearly what could have been. But, instead, I helped make the Internet.

So I didn't know that was going to happen. But that's where I want to spend my effort. Many times I've passed up those kinds of offers. Yes, I started a couple of companies - one still exists as a seminar company, delivering information on engineering and technology. And I have never regretted it, never.

When you speak to your students, where is the priority? Is it ‘I want a nice cushy job at Google or Microsoft’? Or is there still that core love of research that you have?

It's largely [the former], because it is a nice cushy job. Unfortunately, it's not a challenging job.

They want developers, they don't want researchers particularly. But the other thing is that getting an academic position now is very difficult. The field is flooded with applicants, and the jobs are not that numerous. And the competition is so heavy - not only for new graduates joining the faculty, but for existing faculty getting funding. There are so many chasing small amounts of money, and therefore small projects, that the focus has been on the 'two bits per second out of the foxhole' kind of thing.

And the pressure to publish is becoming enormous. So a Ph.D. student or graduate, if he doesn't have half a dozen papers published already, is not even in the market, which is crazy. They're publishing small, relatively insignificant things. So the drive toward really dedicated research is there, but it's much diminished now.

Faculty positions are hard. Corporate positions are not that [intellectually] juicy, if you will. I had a job at MIT Lincoln Lab when I graduated, which I gave up to come to UCLA. That was a great job - that was research, pure research. You don't find too many of those now, and the research labs are disappearing, as you know.

And so it falls more on the university, with more pressure. What I'm seeing as a more fundamental problem, though, is that many of our students are not thinking creatively enough.

An example: A student will get a research result of some sort - let's even say it's an analytical result. Here's the problem statement. Here's the solution. Here's the equation showing the behavior. And then they move on to something else.

I say, 'Wait a minute - what did that solution tell you? Why does it behave that way? Is there something fundamental? What about the parameters of the system - how do they affect the nature of the performance? What if I change a parameter?' They don't ask, 'What is the result I've got? What is it telling me?'

They're not asking the hard questions. They don't ask those deep questions. They get the job done. And they're off. They got another paper.

And that's pretty much true across the board - there are fewer and fewer students who take that view. And I spent a lot of time trying to get my students in that mode. And it's very gratifying once they reach that point of understanding and approach.

Come back Friday for more from Leonard Kleinrock. In the next installment, we talk about the psychological impact of developing something as impactful as the Internet, the promise and peril of blockchain, and the Cold War’s impact on science.
