Artificial intelligence researchers have celebrated a string of successes with neural networks, computer programs that roughly mimic how our brains are organized. But despite rapid progress, neural networks remain relatively inflexible, with little ability to change on the fly or adjust to unfamiliar circumstances.

In 2020, two researchers at the Massachusetts Institute of Technology led a team that introduced a new kind of neural network based on real-life intelligence — but not our own. Instead, they took inspiration from the tiny roundworm Caenorhabditis elegans to produce what they called liquid neural networks. After a breakthrough last year, the novel networks may now be versatile enough to supplant their traditional counterparts for certain applications.

Liquid neural networks offer “an elegant and compact alternative,” said Ken Goldberg, a roboticist at the University of California, Berkeley. He added that experiments are already showing that these networks can run faster and more accurately than other so-called continuous-time neural networks, which model systems that vary over time.

Ramin Hasani and Mathias Lechner, the driving forces behind the new design, realized years ago that C. elegans could be an ideal organism to use for figuring out how to make resilient neural networks that can accommodate surprise. The millimeter-long bottom feeder is among the few creatures with a fully mapped-out nervous system, and it is capable of a range of advanced behaviors: moving, finding food, sleeping, mating and even learning from experience. “It lives in the real world, where change is always happening, and it can perform well under almost any conditions thrown at it,” Lechner said.

Respect for the lowly worm led him and Hasani to their new liquid networks, where each neuron is governed by an equation that predicts its behavior over time. And just as neurons are linked to each other, these equations depend on each other. The network essentially solves this entire ensemble of linked equations, allowing it to characterize the state of the system at any given moment — a departure from traditional neural networks, which give results only at particular moments in time.

“[They] can only tell you what’s happening at one, two or three seconds,” Lechner said. “But a continuous-time model like ours can describe what’s going on at 0.53 seconds or 2.14 seconds or any other time you pick.”
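To make that concrete, here is a minimal sketch of the continuous-time idea in Python. It is an illustration under assumed dynamics, not Hasani and Lechner's actual equations: each neuron's state follows a simple differential equation coupled to the others through hypothetical weights, and solving the whole system at once lets us read off the state at any instant we choose.

```python
# A toy continuous-time network (illustrative dynamics, not the authors'
# published model): each neuron's state x_i evolves according to an
# ordinary differential equation that depends on the other neurons'
# states -- the "ensemble of linked equations" described above.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 4                                    # a toy network of four neurons
W = rng.normal(scale=0.5, size=(n, n))   # hypothetical coupling weights
tau = 1.0                                # hypothetical time constant

def dxdt(t, x):
    # Each neuron leaks toward zero and is driven by a nonlinear
    # function of the other neurons' states.
    return -x / tau + W @ np.tanh(x)

x0 = rng.normal(size=n)                  # arbitrary initial state
sol = solve_ivp(dxdt, t_span=(0.0, 3.0), y0=x0, dense_output=True)

# The dense solution can be queried at any instant, not just fixed steps:
print(sol.sol(0.53))   # network state at t = 0.53 seconds
print(sol.sol(2.14))   # network state at t = 2.14 seconds
```

Because the solver returns a continuous solution rather than a table of snapshots, the state at 0.53 or 2.14 seconds comes from the same solve, with no need to pick the query times in advance.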

Liquid networks also differ in how they treat synapses, the connections between artificial neurons. The strength of those connections in a standard neural network can be expressed by a single number, its weight. In liquid networks, the exchange of signals between neurons is a probabilistic process governed by a “nonlinear” function, meaning that responses to inputs are not always proportional. A doubling of the input, for instance, could lead to a much bigger or smaller shift in the output. This built-in variability is why the networks are called “liquid”: the way a neuron reacts can vary depending on the input it receives.
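As a rough illustration of that non-proportionality (again a toy model, not the networks' actual synapse equations), the saturating function below responds to a doubled input with something other than double the output, while a standard weighted synapse scales exactly:

```python
# Comparing a proportional synapse with a saturating, nonlinear one.
# The weight and gain values are arbitrary choices for illustration.
import numpy as np

def linear_synapse(x, weight=0.8):
    # A standard synapse: the response is strictly proportional to input.
    return weight * x

def nonlinear_synapse(x, weight=0.8, gain=2.0):
    # A saturating synapse: the response flattens out for large inputs,
    # so doubling the input does not double the output.
    return weight * np.tanh(gain * x)

for x in (0.25, 0.5, 1.0):
    print(f"input {x:.2f}: "
          f"linear {linear_synapse(2*x):.3f} vs 2x {2*linear_synapse(x):.3f}; "
          f"nonlinear {nonlinear_synapse(2*x):.3f} vs 2x {2*nonlinear_synapse(x):.3f}")
```

Running this shows the linear synapse scaling exactly with its input, while the nonlinear one falls progressively short as the input grows, which is the kind of input-dependent behavior the passage describes.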