The Defense Advanced Research Projects Agency expects to launch a Fast Network Interface Cards (FastNICs) initiative, and could soon solicit proposals from private businesses for the development of next-generation network subsystems.

Ahead of the anticipated FastNICs broad agency announcement (BAA), a government notice requesting research proposals from private firms, DARPA plans to hold a Proposers Day.

The event on July 10 will provide information to potential proposers on the objectives of the initiative. Attendance will be limited to the first 160 registrants, with registration closing at 9:00 AM ET on July 8. Only two representatives per organization are allowed.

In the NIC of time

An old, slow NIC – Wikimedia Commons/Jana.Wiki

In a solicitation document on the Federal Business Opportunities website, first reported by DCD, DARPA outlines its reasoning for the as-yet-unconfirmed FastNICs program:

"Current network subsystems are a bottleneck between multiprocessor servers and the network links that connect them. For example, a single fiber can carry about 100 terabits per second of data traffic, and today’s multicore multiprocessors, GPU-equipped servers and similar processing nodes can (in aggregate) process data at a similar rate. Network stacks, however, are limited both by network interface cards and the system software that uses them, to 10-100 gigabits per second.

"This bottleneck has dramatically worsened as parallelism has replaced clock speed increases to achieve high performance computing. This bottleneck will remain unaddressed due to commercial incentives to pursue incremental technology advances in multiple market siloes. The separate evolution of networks and multicore multiprocessors, as well as memory technologies, memory copying, serialized contention for shared resources, and poor application design all contribute to limiting application throughput. The true bottleneck for processor throughput is the network interface used to connect a machine to an external network, such as an Ethernet, therefore severely limiting a processor’s data ingest capability.

"This network interface bottleneck is especially important for distributed computation that requires significant communication between the computation nodes. Training of deep neural networks is an exemplar of this class of computation; a significant fraction of machine learning research investigates ways in which the network interface bottleneck can be minimized.

"FastNICs will speed up applications, such as the distributed training of machine learning classifiers, by 100x, through the development, implementation, integration, and validation of novel, clean-slate network subsystems. The program objective is to overcome the gross mismatches in computing and network subsystem performance."
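The mismatch DARPA describes is easy to see with some back-of-envelope arithmetic. The sketch below (our illustration, not from DARPA's documents; the 1 TB dataset size is an assumed example) compares the time to move training data through today's ~100 Gbps network stacks against a single fiber's ~100 Tbps capacity and FastNICs' stated 100x target:

```python
# Illustration of the bandwidth mismatch described in DARPA's solicitation.
# Figures for fiber and stack throughput come from the quoted text; the
# dataset size is a hypothetical example.

DATASET_BYTES = 1e12             # 1 TB of training data (assumed example size)
FIBER_BPS = 100e12               # ~100 terabits/s that a single fiber can carry
STACK_BPS = 100e9                # ~100 gigabits/s ceiling of today's network stacks
TARGET_BPS = STACK_BPS * 100     # FastNICs' stated 100x goal

def transfer_seconds(num_bytes: float, bits_per_second: float) -> float:
    """Idealized transfer time, ignoring protocol and memory-copy overheads."""
    return num_bytes * 8 / bits_per_second

print(f"today's stack: {transfer_seconds(DATASET_BYTES, STACK_BPS):.2f} s")   # 80.00 s
print(f"100x target:   {transfer_seconds(DATASET_BYTES, TARGET_BPS):.2f} s")  # 0.80 s
print(f"fiber limit:   {transfer_seconds(DATASET_BYTES, FIBER_BPS):.2f} s")   # 0.08 s
```

In other words, a processor that could ingest at the fiber's rate would finish the transfer a thousand times faster than today's network stacks allow, which is the gap FastNICs aims to close.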

The publicly available agenda lists Dr. Jonathan Smith as a program manager. His DARPA page notes that "as a DARPA program manager, Smith seeks to develop and execute programs in cybersecurity, networking, and distributed computing.

"...Smith served as a program manager in DARPA’s Information Processing Technology Office (IPTO) from 2004 to 2006, developing and executing programs including Situation Aware Protocols In Edge Network Technologies (SAPIENT), Adaptive Cognition Enhanced Radio Teams (ACERT), and Brood of Spectrum Supremacy (BOSS)."

DARPA - which famously funded ARPANET, the precursor to the Internet, and numerous crucial supercomputing projects - was founded in the wake of the Soviet Union's surprise launch of the Sputnik satellite, to ensure that the US would never again be bested by another nation's scientific achievements, particularly in the military domain.

Among other programs, DARPA is currently funding the ambitious Electronics Resurgence Initiative, a $1.5 billion project to develop a domestic semiconductor manufacturing sector, exploring novel circuit materials, architectures, and designs.

In the next issue of DCD Magazine, out soon, we look at IBM's efforts to escape the von Neumann bottleneck using phase-change memory for in-memory computing. Subscribe for free today.