
I've posted the question on the physics site too.

It is said that

"...the largest Lyapunov exponent, which measures the average exponential rate of divergence or convergence of nearby network states."

Lyapunov exponents (LEs) measure how fast nearby trajectories or flows diverge from each other.
Q1: Why is it the largest LE, rather than the mean of all the LEs, that measures the average divergence rate?

My thought is that the LEs are somehow eigenvalues of a matrix involved in solving the ODEs $$\tau\frac{dh_i}{dt} = -h_i + \sum_{j=1}^N J_{ij} \phi(h_j),$$ so the solutions would presumably look like a linear combination of terms $e^{\lambda_i t}$. Since $e^{a t}\gg e^{b t}$ as $t\to\infty$ when $a > b$, the term with the largest LE will eventually be much larger than all the other terms regardless of the coefficients, and will therefore dominate.
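In the purely linear case this intuition can be made precise (a short check, assuming distinct real exponents and a generic initial condition with $c_1 \neq 0$): if $x(t)=\sum_i c_i e^{\lambda_i t} v_i$ with $\lambda_1 > \lambda_2 \ge \dots$, then

$$\frac{1}{t}\log\|x(t)\| = \lambda_1 + \frac{1}{t}\log\Big\|c_1 v_1 + \sum_{i\ge 2} c_i e^{(\lambda_i-\lambda_1)t} v_i\Big\| \xrightarrow{t\to\infty} \lambda_1,$$

so an arbitrary perturbation of a linear system grows at the rate of the largest exponent, not the mean one. The open question is the non-linear case.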

I have not solved the ODEs, so I am not sure whether my thought is correct.
How can I rigorously solve the equation and answer my question?


This seems to have verified my guess, but the details of the calculation are still unclear. The system is a non-linear ODE, and its solution is likely more complex than that of a linear ODE, for which the monotonicity of $e^{\lambda_i t}$ straightforwardly gives the result.

So perhaps the question can be restated as: how do solutions of non-linear ODEs differ from those of linear ODEs, and can we still use the linear-algebra method of eigenvalues and eigenvectors to solve the former?
One idea is to linearize the ODEs near a fixed point, using the Jacobian (of which the LEs are, roughly speaking, time-averaged eigenvalues).
Even if we can do so, the conclusion seems valid only near the fixed point, while for chaos (instability) we need to consider $t\to\infty$, and then it is almost certain that the trajectory leaves the neighborhood of the fixed point.
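One way around this difficulty is to linearize along the trajectory instead of at a fixed point: integrate the variational equation $\tau\dot v = \left(-I + J\,\operatorname{diag}(\phi'(h))\right)v$ together with the flow and time-average the log-growth of $v$. Here is a minimal numerical sketch of that idea (my own illustration, not from the paper; $\phi=\tanh$ and all parameter values are assumptions, renormalization in the style of Benettin's method):

```python
import numpy as np

# Estimate the largest Lyapunov exponent of
#   tau * dh/dt = -h + J @ phi(h),  phi = tanh (assumed),
# by evolving a tangent vector v under the linearization along the
# trajectory and renormalizing it (Benettin's method).
rng = np.random.default_rng(0)
N, g, tau, dt, T = 200, 1.5, 1.0, 0.01, 200.0

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)            # the paper sets J_ii = 0

h = rng.normal(0.0, 1.0, N)         # network state
v = rng.normal(0.0, 1.0, N)         # tangent (perturbation) vector
v /= np.linalg.norm(v)

log_growth = 0.0
for _ in range(int(T / dt)):
    phi = np.tanh(h)
    dphi = 1.0 - phi**2             # phi'(h) for phi = tanh
    v = v + dt * (-v + J @ (dphi * v)) / tau   # variational equation
    h = h + dt * (-h + J @ phi) / tau          # flow (explicit Euler)
    nv = np.linalg.norm(v)
    log_growth += np.log(nv)        # accumulate expansion factor
    v /= nv                         # renormalize to keep v finite

print(f"largest LE ~ {log_growth / T:.3f}")    # expected > 0 for g > 1
```

A generic initial $v$ quickly aligns with the most expanding direction, which is exactly why such an experiment measures the largest LE rather than the mean one.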


Q2: Why do the other LEs matter for characterizing chaos?

Q3: We know $g$ is proportional to the standard deviation (square root of the variance) of every entry of $J$ ($J_{ij} \sim \mathcal{N}(0, g^2/N)$).
Why is it also positively related to the variance of $h_i$? In other words, why does stronger coupling result in stronger neuronal signals, from a mathematical perspective? (This question is posted here too, with additional references.)


Here is the original paper:

[Fig. 1 and Fig. 2 from the paper]

1 Answer


Q1: "Average" is a time-average. The orthogonal frame that brings the linearization of a time step into eigenvalue or principal components/SVD normal form is not transported "parallel" with the flow of the ODE. Meaning the frame of one time step, transported forward, is slightly askew to the frame of the next time step. Practically this means that if you start with two close solutions whose difference is in the direction of the most contracting basis vector, then in the next time step the difference will have small components in the directions of all other basis vectors. Generally, on average, over a sufficiently long time span, the fastest growing direction will come to dominate. I'm not sure how they incorporated convergence into this picture.

On the interpretation of the LE computation see also https://scicomp.stackexchange.com/questions/36013/numerical-computation-of-lyapunov-exponent and the discussions there.
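That frame-transport picture is exactly what the standard QR (Gram–Schmidt) method for the Lyapunov spectrum implements: transport an orthonormal frame with the linearized flow, re-orthonormalize, and accumulate the per-direction expansion factors from the diagonal of $R$. A sketch under the same assumptions as before ($\phi=\tanh$, illustrative parameters; my illustration, not the paper's code):

```python
import numpy as np

# Leading k Lyapunov exponents via the QR method: the frame Q is carried
# by the linearization, then re-orthonormalized each step; log|R_ii|
# accumulates the expansion rate of the i-th direction.
rng = np.random.default_rng(1)
N, k, g, tau, dt, T = 200, 5, 1.5, 1.0, 0.01, 200.0

J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
np.fill_diagonal(J, 0.0)

h = rng.normal(0.0, 1.0, N)
Q = np.linalg.qr(rng.normal(size=(N, k)))[0]   # orthonormal N x k frame
sums = np.zeros(k)
for _ in range(int(T / dt)):
    phi = np.tanh(h)
    dphi = 1.0 - phi**2
    Q = Q + dt * (-Q + J @ (dphi[:, None] * Q)) / tau  # transport the frame
    h = h + dt * (-h + J @ phi) / tau
    Q, R = np.linalg.qr(Q)                    # frame is slightly askew: fix it
    sums += np.log(np.abs(np.diag(R)))

print("leading LEs ~", np.round(sums / T, 3))  # roughly in descending order
```

The first entry should match the single-vector estimate; the remaining entries are the additional exponents asked about in Q2.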

Q2: Pure expansion is not chaotic. To stay within or return to some fixed volume you also need compression, as provided by negative LEs. Another ingredient of chaos is folding, meaning the flow is not "order preserving" over longer distances as a linear system is. See the horseshoe dynamic.

Q3: At the setup of the system, $J_{ij}=\frac{g}{\sqrt{N}}z_{ij}$ is randomly chosen, with $z_{ij}$ independent standard normal. If $g$, and thus (with large probability) $J$, is small, then the dynamic of the first term dominates and the system converges to zero with rate $\tau^{-1}$. If $h$ stays small, the system is essentially linear, $\tau\dot h=-h+Jh$ (assuming $\phi'(0)=1$, as for $\phi=\tanh$). The coefficient matrix $-I+J$ can have positive eigenvalues if $J$ is large enough; for instance $\pmatrix{-1&a\\a&-1}$ has eigenvalues $-1\pm a$. This gives the possibility of not-almost-zero values in the long run. With more dimensions and the non-linearity of $\phi$, oscillations also become possible. One would have to check whether "variation" refers more to amplitude or to rate of change.
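The threshold for this instability can be checked numerically (my sketch, illustrative sizes): by the circular law the eigenvalues of $J$ fill a disk of radius roughly $g$, so the rightmost eigenvalue of $-I+J$ crosses zero near $g=1$.

```python
import numpy as np

# Rightmost eigenvalue of -I + J for J_ij ~ N(0, g^2/N): the spectrum of J
# is approximately a disk of radius g (circular law), so -I + J becomes
# unstable once g exceeds 1.
rng = np.random.default_rng(2)
N = 500
for g in (0.5, 1.0, 1.5):
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    max_re = np.linalg.eigvals(-np.eye(N) + J).real.max()
    print(f"g = {g}: max Re(eigenvalue) ~ {max_re:+.3f}")
# expected: roughly g - 1 in each case (negative, ~0, positive)
```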

Lutz Lehmann
  • Q1: Yes, it is about a network (of neurons). Each neuron has a signal that is (sometimes) random with a Poisson distribution. $\quad$Q2: I am not familiar with the concepts; I will check. $\quad$Q3: The matrix $J$ is defined to be a random matrix; since $J$ is part of the ODEs, it affects the system. $h_i$ denotes a neuron, and $i$ ranges from $1$ to a very large $N$, so we have statistics here. Yes, $J$ is chosen randomly but fixed per system (at least according to my understanding of the paper). – Charlie Chang May 15 '22 at 15:08
  • Is there a recommended (survey) article about compression, expansion, folding/order-preserving (and the reasons why these components are necessary for producing chaos)? It could be either theoretical or applied. $\quad$ https://en.wikipedia.org/wiki/Horseshoe_map $\quad$ Q2: so other LEs show varied rates of expansion/compression in different directions, which also characterize the chaos. – Charlie Chang May 15 '22 at 15:30
  • I do not know of such, but they surely must exist. // Always compare with a linear system, in the simplest case with a diagonal matrix. You can construct any LE structure there, obviously without the system becoming chaotic. The important part seems to be that the forward dynamic remains restricted to a finite volume that has no stable equilibria that could "vacuum up" the solution space. – Lutz Lehmann May 15 '22 at 15:50
  • I found these concepts mentioned in some textbooks. $\quad$ So the perturbation of the system should stay bounded (i.e., an infinitesimal volume remains infinitesimal?) and the system should not converge to a point or a periodic orbit. Perhaps I need to read something before I understand what you mean. Overall, chaos seems to be more than 'sensitivity to/expansion of perturbations'. – Charlie Chang May 15 '22 at 16:47
  • Stay bounded yes, but not stay infinitesimal. Of course the propagators are continuous functions, but after some time the image of the initial volume will no longer look like the intuitive idea of a "volume"... // It is difficult to make up a catalogue of properties that guarantees "chaos", or even to define what a sufficient deviation from "order" is to call the deterministic process of a dynamical system "chaotic". If the catalogue is incomplete one can often find counter-examples that are not chaotic; if it is too complete it may be too rigid to cover all leading examples. – Lutz Lehmann May 15 '22 at 16:59
  • So there seems to be no strict (universal) definition of chaos; then how does one tell whether a system is chaotic or not? From experiments/observations/simulations? $\quad$ Is it $-I+J$? In your example for Q3, perturbations will be expanded in one direction (eigenvalue $-1+a$) and compressed in the other ($-1-a$). $\quad$ Since $J_{ij}$ is Gaussian distributed around $0$, the off-diagonal entries could be like $a, -a$, which gives complex eigenvalues. (But we can still consider the time evolution in a similar way.) $\quad$ (For self-reference) The author mentions $J_{ii}=0$. – Charlie Chang May 15 '22 at 17:09
  • Yes, but in the second case the real part is still $-1$, suppressing all signals rapidly. Hence the need for higher dimensions for more interesting behavior. Apparently systems that do not shrink to zero exist, else one would not write papers about them. – Lutz Lehmann May 15 '22 at 17:12
  • Yes, the real part determines the change of amplitude. $\quad$ There seems to be a universal proof/calculation showing that large $g$ causes large $\operatorname{Var}(h_i)$ (perhaps I can find clues in the cited papers). But it could be complicated; I am not sure whether a simple explanation exists. – Charlie Chang May 15 '22 at 17:30
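A quick numerical check of this last point (a simulation, not a proof; $\phi=\tanh$ and all parameter values are my illustrative assumptions):

```python
import numpy as np

# Stationary spread of h as a function of g: simulate the network past a
# burn-in period and average the across-neuron variance of h.
rng = np.random.default_rng(3)
N, tau, dt, T_burn, T = 200, 1.0, 0.01, 50.0, 200.0
for g in (0.5, 1.0, 1.5, 2.0):
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    np.fill_diagonal(J, 0.0)
    h = rng.normal(0.0, 1.0, N)
    var_sum, n_samples = 0.0, 0
    for step in range(int((T_burn + T) / dt)):
        h = h + dt * (-h + J @ np.tanh(h)) / tau
        if step * dt >= T_burn:
            var_sum += np.var(h)   # variance across neurons at this time
            n_samples += 1
    print(f"g = {g}: mean Var(h) ~ {var_sum / n_samples:.4f}")
# expected: Var(h) ~ 0 for g < 1 (activity decays), increasing with g above 1
```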