Further requests for resources should be directed to and will be fulfilled by the Lead Contact, Srdjan Ostojic ( srdjan.ostojic@ens.fr ).

The activity therefore lives in an (r+1)-dimensional space determined by the r right-connectivity vectors m^(k) and the input vector I. It is characterized by r overlaps κ^(k), each of which quantifies the amount of activity along the corresponding direction m^(k). Averaging over the population, the DMF theory then leads to a system of r+1 non-linear coupled equations that describe the stationary dynamics.

The DMF theory can be directly extended to connectivity structures of rank r greater than one. The equilibrium mean input to unit i is then given by

μ_i = Σ_{k=1}^{r} m_i^(k) κ^(k) + I_i.

When a linear readout with weights w_i is added to the network, its average output is given by z = (1/N) Σ_{i=1}^{N} w_i [φ_i], i.e., by the projection of the average network firing rate on the readout vector w. This quantity is analogous to κ, except that the vector n is replaced by the vector w, so that, similarly to Equation 14, the average readout can also be expressed as an integral over the joint distribution P(m, w, I), which characterizes the geometric arrangement of the vectors m, w and I.

To describe the transient dynamics and assess the stability of the obtained equilibrium states, we determined the eigenvalue spectrum of the linearized dynamics at the equilibrium fixed points. This spectrum consists of two components: a continuous, random component distributed within a circle in the complex plane, and a single outlier induced by the structured part of the connectivity (Figures S1A and S1D). The radius of the continuous component and the value of the outlier depend on the connectivity parameters. Although the two quantities are in general non-trivially coupled, the value of the radius is mostly controlled by the strength of the disorder, while the value of the outlier increases with the strength of the rank-one structure (Figure S1F). The equilibrium is stable as long as the real part of all eigenvalues is less than unity. For large connectivity-structure strengths, the outlier crosses unity, generating an instability that leads to the appearance of one-dimensional structured activity. Increasing the disorder strength, on the other hand, leads to another instability, corresponding to the radius of the continuous component crossing unity. This instability gives rise to chaotic, fluctuating activity.

Here F and G are two non-linear functions, the specific form of which depends on the geometrical arrangement of the connectivity vectors m and n and the input vector I. For temporally fluctuating, chaotic dynamics, an additional macroscopic quantity (corresponding to the temporal variance) needs to be taken into account. In that case, the full DMF description is given by a system of three non-linear equations for three unknowns. The equilibrium states of the network dynamics are therefore obtained by solving these systems of equations using standard numerical methods for non-linear systems.

The right-hand sides of Equations 13 and 15 show that both the mean μ_i and variance Δ_0^I depend on population-averaged, macroscopic quantities. To fully close the DMF description, the equations for single-unit statistics need to be averaged over the population. For static equilibrium dynamics, this leads to two coupled equations for two macroscopic quantities, the overlap κ and the static, population-averaged variance Δ_0.

The auto-correlation function quantifies the fluctuations of the activation x_i(t) around the expected mean. Computing this auto-correlation function shows that it is identical for all units in the network, i.e., independent of i (see Equation 27). It can be decomposed into a static variance, which quantifies the fluctuations of the equilibrium values of x_i across different realizations of the random component of the connectivity, and an additional temporal variance which is present when the network is in a temporally fluctuating, chaotic state. In a stationary state, the variance can be expressed as Δ_0^I = g² ⟨[φ_i²]⟩, where [φ_i²] is the average of φ²(x_i) over the Gaussian variable x_i.

In the last equation, we adopted the short-hand notation [φ_j] := [φ(x_j)]. Here [φ_j] is the average firing rate of unit j, i.e., φ(x_j) averaged over the Gaussian variable x_j. In a geometrical interpretation, the quantity κ represents the overlap between the left-connectivity vector n and the vector [φ] of average firing rates. Equivalently, it is given by a population average of n_j [φ_j], which can also be expressed as an integral over the joint distribution P(m, n, I) of the components of vectors m, n and I. Here Δ_0 is the variance of x_i (see below).

At equilibrium (i.e., in absence of transient dynamics) the equation for the mean μ_i of x_i is obtained by directly averaging Equation 6 over the random part of the connectivity. For a unit-rank connectivity, it reads μ_i = m_i κ + I_i, where the overlap κ = (1/N) Σ_j n_j [φ_j].

The Gaussian processes η i can in principle have different first and second-order statistics for each unit, but are otherwise statistically independent across different units. As a consequence, the activations x i of different units are also independent Gaussian stochastic processes, coupled only through their first and second-order statistics. The core of DMF theory consists of self-consistent equations for the mean μ i and auto-correlation function Δ i I ( t ) .

DMF theory allows one to derive an effective description of the dynamics by averaging over the disorder originating from the random part of the connectivity. Across different realizations of the random connectivity matrix, the sum of inputs to unit i is approximated by a Gaussian stochastic process η_i(t), so that each unit obeys a Langevin-like equation:

ẋ_i(t) = −x_i(t) + η_i(t). (11)

Our results rely on a mathematical analysis of network dynamics based on Dynamical Mean-Field (DMF) theory. To help navigate the analysis, we first provide a succinct overview of the approach. Full details are given further down in the section Details of Dynamical Mean-Field Theory.

More general connectivity structures of rank r can be written as a sum of unit-rank terms, P_ij = Σ_{k=1}^{r} m_i^(k) n_j^(k) / N, and are therefore specified by r pairs of vectors m^(k) and n^(k), where different m vectors are linearly independent, and similarly for n vectors.

According to our first assumption, the entries of vectors m and n are independent of the random bulk of the connectivity χ i j . Note that the only non-zero eigenvalue of P is given by the scalar product m T n / N , and the corresponding right and left eigenvectors are, respectively, vectors m and n. In the following, we will refer to the eigenvalue m T n / N as the strength of the connectivity structure, and to m and n as the right- and left-connectivity vectors. Here we focus on vectors obtained by generating the components from a joint Gaussian distribution.
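The spectral property just described is easy to verify numerically. The sketch below is illustrative only: it assumes arbitrary Gaussian statistics for m and n, and checks that the single non-zero eigenvalue of P equals m^T n / N with right eigenvector aligned with m.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# Arbitrary Gaussian statistics for the structure vectors (illustrative only)
m = rng.normal(1.0, 1.0, N)
n = rng.normal(0.5, 1.0, N)

P = np.outer(m, n) / N               # rank-one structure, P_ij = m_i n_j / N

eigvals, eigvecs = np.linalg.eig(P)
idx = np.argmax(np.abs(eigvals))
lam = eigvals[idx].real              # the single non-zero eigenvalue

# It equals the scalar product m^T n / N ...
print(lam, m @ n / N)

# ... and the corresponding right eigenvector is aligned with m.
v = eigvecs[:, idx].real
cos = abs(v @ m) / (np.linalg.norm(v) * np.linalg.norm(m))
print(cos)
```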

We first consider the simplest case where P is a rank-one matrix, which can generally be written as the outer product of two N-dimensional vectors m and n:

P_ij = m_i n_j / N.

Similarly to (Sompolinsky et al., 1988), χ is a Gaussian all-to-all random matrix, where every element is drawn from a centered normal distribution with variance 1/N. The parameter g scales the strength of random connections in the network, and we refer to it also as the random strength. The second term P is a low-rank matrix. In this study, we consider the low-rank part of the connectivity fixed, while the random part varies between different realizations of the connectivity. Our results rely on two simplifying assumptions. The first one is that the low-rank term and the random term are statistically uncorrelated. The second one is that, as stated in Equation 8, the structured connectivity is weak in the large-N limit, i.e., its entries scale as 1/N, while the random connectivity components g χ_ij scale as 1/√N.

We considered a particular class of connectivity matrices, which can be written as a sum of two terms:

J_ij = g χ_ij + P_ij.

We study large recurrent networks of rate units. Every unit in the network is characterized by a continuous variable x_i(t), commonly interpreted as the total input current. More generically, we also refer to x_i as the activation variable. The output of each unit is a non-linear function of its inputs, modeled as a sigmoidal function φ(x). In line with previous works (Sompolinsky et al., 1988), we focus on φ(x) = tanh(x), but we show that qualitatively similar dynamical regimes appear in network models with more realistic, positively defined activation functions (Figure S7). The transformed variable φ(x_i) is interpreted as the firing rate of unit i, and is also referred to as the activity variable.
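The model above can be explored numerically with a minimal sketch, assuming standard rate dynamics of the form ẋ_i = −x_i + Σ_j J_ij φ(x_j) with φ = tanh and J = g χ + m n^T / N. The parameter values and the Gaussian statistics of m and n below are illustrative choices, not those of the original study; simple Euler integration is used. Above the structural instability, the equilibrium activations become organized along the vector m.

```python
import numpy as np

def simulate(N=300, g=0.8, M_m=2.0, M_n=1.0, T=60.0, dt=0.1, seed=1):
    """Euler integration of dx_i/dt = -x_i + sum_j J_ij tanh(x_j),
    with J = g*chi + m n^T / N (illustrative parameter values)."""
    rng = np.random.default_rng(seed)
    chi = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # random bulk, variance 1/N
    m = rng.normal(M_m, 1.0, N)                       # right-connectivity vector
    n = rng.normal(M_n, 1.0, N)                       # left-connectivity vector
    J = g * chi + np.outer(m, n) / N
    x = rng.normal(0.0, 1.0, N)                       # random initial activations
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ np.tanh(x))
    return x, m

x, m = simulate()
# With a strong enough structure (<m_i n_i> > 1), the equilibrium
# activations are approximately proportional to m (up to an overall sign).
corr = np.corrcoef(x, m)[0, 1]
print(abs(corr))
```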

Details of Dynamical Mean-Field Theory

Here we provide the full details of the mathematical analysis. We start by examining the activity of a network with a rank-one structure in absence of external inputs (I_i = 0 ∀ i in Equation 6).

Single-unit equations for spontaneous dynamics

We start by determining the statistics of the effective noise η_i to unit i, defined by

η_i(t) = g Σ_{j=1}^{N} χ_ij φ(x_j(t)) + (m_i/N) Σ_{j=1}^{N} n_j φ(x_j(t)). (20)

The DMF theory relies on the hypothesis that a disordered component in the coupling structure, here represented by χ_ij, efficiently decorrelates single-neuron activity when the network is sufficiently large. We will show that this hypothesis of decorrelated activity is self-consistent for the specific network architecture we study. As in standard DMF derivations, we characterize self-consistently the distribution of η_i by averaging over different realizations of the random matrix χ_ij (Sompolinsky et al., 1988; Rajan et al., 2010). In the following, [.] indicates an average over the realizations of the random matrix χ_ij, while ⟨.⟩ stands for an average over different units of the network. Note that the network activity can be equivalently characterized in terms of input current variables x_i(t) or their non-linear transforms φ(x_i(t)). As these two quantities are not independent, the statistics of the distribution of the latter can be written in terms of the statistics of the former. The mean of the effective noise received by unit i is given by:

[η_i(t)] = g Σ_{j=1}^{N} [χ_ij φ(x_j(t))] + (m_i/N) Σ_{j=1}^{N} n_j [φ(x_j(t))]. (21)

Under the hypothesis that in large networks neural activity decorrelates (more specifically, that the activity φ(x_j(t)) of unit j is independent of its outgoing weights), we have:

[η_i(t)] = g Σ_{j=1}^{N} [χ_ij][φ(x_j(t))] + (m_i/N) Σ_{j=1}^{N} n_j [φ(x_j(t))] = m_i κ (22)

as [χ_ij] = 0. Here we introduced

κ := (1/N) Σ_{j=1}^{N} n_j [φ(x_j(t))] = ⟨n_j [φ_j(t)]⟩, (23)

which quantifies the overlap between the mean population activity vector and the left-connectivity vector n. Similarly, the noise correlation function is given by

[η_i(t) η_j(t+τ)] = g² Σ_{k=1}^{N} Σ_{l=1}^{N} [χ_ik χ_jl][φ(x_k(t)) φ(x_l(t+τ))] + (m_i m_j/N²) Σ_{k=1}^{N} Σ_{l=1}^{N} n_k n_l [φ(x_k(t)) φ(x_l(t+τ))]. (24)

Note that every cross-term in the product vanishes since [χ_ij] = 0. Similarly to standard DMF derivations (Sompolinsky et al., 1988), the first term on the r.h.s. vanishes for cross-correlations (i ≠ j) while it survives in the auto-correlation function (i = j), as [χ_ik χ_jl] = δ_ij δ_kl / N. We get:

[η_i(t) η_j(t+τ)] = δ_ij g² ⟨[φ_i(t) φ_i(t+τ)]⟩ + (m_i m_j/N²) Σ_{k=1}^{N} Σ_{l=1}^{N} n_k n_l [φ(x_k(t)) φ(x_l(t+τ))]. (25)

We focus now on the second term on the right-hand side. The corresponding sum contains N terms where k = l. This contribution vanishes in the large-N limit because of the 1/N² scaling. According to our starting hypothesis, when k ≠ l, activity decorrelates: [φ_k(t) φ_l(t+τ)] = [φ_k(t)][φ_l(t+τ)]. To the leading order in N, we get:

[η_i(t) η_j(t+τ)] = δ_ij g² ⟨[φ_i(t) φ_i(t+τ)]⟩ + (m_i m_j/N²) Σ_k n_k [φ(x_k(t))] Σ_{l≠k} n_l [φ(x_l(t+τ))] = δ_ij g² ⟨[φ_i(t) φ_i(t+τ)]⟩ + m_i m_j κ² (26)

so that:

[η_i(t) η_j(t+τ)] − [η_i(t)][η_j(t)] = δ_ij g² ⟨[φ_i(t) φ_i(t+τ)]⟩. (27)

We therefore find that the statistics of the effective input are uncorrelated across different units, so that our initial hypothesis is self-consistent. To conclude, for every unit i, we computed the first- and second-order statistics of the effective input η_i(t). The expressions we obtained show that the individual noise statistics depend on the statistics of the full network activity. In particular, the mean of the effective input depends on the average overlap κ, but varies from unit to unit through the components of the right-connectivity vector m. On the other hand, the auto-correlation of the effective input is identical for all units, and determined by the population-averaged firing rate auto-correlation ⟨[φ_i(t) φ_i(t+τ)]⟩. Once the statistics of η_i(t) have been determined, a self-consistent solution for the activation variable x_i(t) can be derived by solving the Langevin-like stochastic process from Equation 11. As a first step, we look at its stationary solutions, which correspond to the fixed points of the original network dynamics.
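The statistics in Equations 22 and 27 can be checked with a minimal numerical sketch: relax the same network to its fixed point for many realizations of χ_ij while keeping m and n fixed, and compare the across-realization mean and variance of x_i with m_i κ and g² ⟨[φ_i²]⟩. Parameter values below are illustrative; a biased initial condition selects one of the two sign-symmetric branches so that averages across realizations are consistent.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g, n_real = 200, 0.8, 100
m = rng.normal(2.0, 1.0, N)        # structure vectors kept fixed across realizations
n = rng.normal(1.0, 1.0, N)
P = np.outer(m, n) / N

X = np.empty((n_real, N))
for r in range(n_real):
    chi = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # new random bulk each time
    J = g * chi + P
    x = 0.5 * m.copy()             # biased start selects one symmetric branch
    for _ in range(400):
        x = x + 0.1 * (-x + J @ np.tanh(x))
    X[r] = x

phi = np.tanh(X)
kappa = (n * phi).mean()                   # empirical overlap, cf. Equation 23
slope = (X.mean(axis=0) @ m) / (m @ m)     # regression of the mean [x_i] on m_i
delta_emp = X.var(axis=0).mean()           # variance of x_i across realizations
delta_pred = g**2 * (phi**2).mean()        # prediction: Delta_0^I = g^2 <[phi^2]>
print(kappa, slope, delta_emp, delta_pred)
```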

Population-averaged equations for stationary solutions

For any solution that does not depend on time, the mean μ_i and the variance Δ_0^I of the variable x_i with respect to different realizations of the random connectivity coincide with the statistics of the effective noise η_i. From Equations 22 and 27, the mean μ_i and variance Δ_0^I of the input to unit i therefore read

μ_i := [x_i] = m_i κ
Δ_0^I := [x_i²] − [x_i]² = g² ⟨[φ_i²]⟩ (28)

while any other cross-variance [x_i x_j] − [x_i][x_j] vanishes. We conclude that, on average, the structured connectivity P_ij shapes the network activity along the direction specified by its right eigenvector m. Such a heterogeneous stationary state critically relies on a non-vanishing overlap κ between the left eigenvector n and the average population activity vector [φ]. Across different realizations of the random connectivity, the input currents x_i fluctuate around these mean values. The typical size of fluctuations is determined by the individual variance Δ_0^I, equal for every unit in the network.

The r.h.s. of Equation 28 contains two population-averaged quantities, the overlap κ and the second moment of the activity ⟨[φ_i²]⟩. To close the equations, these quantities need to be expressed self-consistently. Averaging Equation 28 over the population, we get expressions for the population-averaged mean μ and variance Δ_0 of the input:

μ := ⟨[x_i]⟩ = ⟨m_i⟩ κ
Δ_0 := ⟨[x_i²]⟩ − ⟨[x_i]⟩² = g² ⟨[φ_i²]⟩ + (⟨m_i²⟩ − ⟨m_i⟩²) κ². (29)

Note that the total population variance Δ_0 is a sum of two terms: the first term, proportional to the strength of the random part of the connectivity, coincides with the individual variability Δ_0^I which emerges from different realizations of χ_ij; the second term, proportional to the variance of the right-connectivity vector m, coincides with the variance induced at the population level by the spread of the mean values μ_i ∝ m_i. When the vector m is homogeneous (m_i = m̄), the input currents x_i are centered around the same mean value μ, and the second variance term vanishes.

We next derive appropriate expressions for the r.h.s. terms κ and ⟨[φ_i²]⟩. To start with, we rewrite [φ_i] by substituting the average over the random connectivity with the equivalent Gaussian integral:

[φ_i] = ∫ Dz φ(μ_i + √(Δ_0^I) z) (30)

where we used the short-hand notation ∫ Dz = ∫_{−∞}^{+∞} dz e^{−z²/2}/√(2π). To obtain κ, [φ_i] needs to be multiplied by n_i and averaged over the population. This average can be expressed by representing the fixed vectors m and n through the joint distribution of their elements over the components:

p(m, n) = (1/N) Σ_{j=1}^{N} δ(m − m_j) δ(n − n_j). (31)

This leads to

κ = ⟨n_i ∫ Dz φ(μ_i + √(Δ_0^I) z)⟩ = ∫ dm ∫ dn p(m, n) n ∫ Dz φ(mκ + √(Δ_0^I) z). (32)

Similarly, a suitable expression for the second-order moment of the firing rate is given by:

⟨[φ_i²]⟩ = ∫ dm p(m) ∫ Dz φ²(mκ + √(Δ_0^I) z). (33)

Equations 32 and 33, combined with Equation 29, provide a closed set of equations for determining κ and Δ_0 once the vectors m and n have been specified. To further simplify the problem, we reduce the full distribution p(m, n) of elements m_i and n_i to their first- and second-order momenta. That is equivalent to substituting the probability density p(m, n) with a bivariate Gaussian distribution. We therefore write:

m = M_m + Σ_m √(1−ρ) x_1 + Σ_m √ρ y
n = M_n + Σ_n √(1−ρ) x_2 + Σ_n √ρ y (34)

where x_1, x_2 and y are three standard Gaussian variables. Here, M_m (resp. M_n) and Σ_m (resp. Σ_n) correspond to the mean and the standard deviation of m (resp. n), while the covariance between m and n is given by ⟨m_i n_i⟩ − M_m M_n = Σ_m Σ_n ρ. Within a geometrical interpretation, M_m and M_n are the projections of the N-dimensional vectors m and n onto the unitary vector u = (1, 1, …, 1)/√N, Σ_m √ρ and Σ_n √ρ are the projections onto a direction orthogonal to u and common to m and n, and Σ_m √(1−ρ) and Σ_n √(1−ρ) scale the parts of m and n that are mutually orthogonal.

The expression for κ becomes:

κ = ∫ Dy ∫ Dx_2 (M_n + Σ_n √(1−ρ) x_2 + Σ_n √ρ y) ∫ Dz ∫ Dx_1 φ(κ (M_m + Σ_m √(1−ρ) x_1 + Σ_m √ρ y) + √(Δ_0^I) z) (35)

which gives rise to three terms when expanding the sum M_n + Σ_n √(1−ρ) x_2 + Σ_n √ρ y. The first term can be rewritten as:

M_n ∫ Dz φ(M_m κ + √(Δ_0^I + Σ_m² κ²) z) = M_n ∫ Dz φ(μ + √(Δ_0) z) = M_n ⟨[φ_i]⟩, (36)

which coincides with the overlap between vectors n and [φ] along the unitary direction u = (1, 1, …, 1)/√N. In the last step, we rewrote our expression for κ in terms of the population-averaged statistics μ and Δ_0 (Equation 29). The second term vanishes, while the third one gives:

Σ_n √ρ ∫ Dy y ∫ Dz ∫ Dx_1 φ(κ (M_m + Σ_m √(1−ρ) x_1 + Σ_m √ρ y) + √(Δ_0^I) z) = κ ρ Σ_m Σ_n ⟨[φ_i']⟩ (37)

which coincides with the overlap between n and [φ] in a direction orthogonal to u. Here we used the equality:

∫ Dz z f(z) = ∫ Dz df(z)/dz (38)

which is obtained by integrating by parts. Through a similar reasoning we obtain:

⟨[φ_i²]⟩ = ∫ Dz φ²(μ + √(Δ_0) z) (39)

as in standard DMF derivations. To conclude, the mean-field description of stationary solutions reduces to the system of three implicit equations for μ, κ and Δ_0:

μ = M_m κ
Δ_0 = g² ⟨[φ_i²]⟩ + Σ_m² κ²
κ = M_n ⟨[φ_i]⟩ + κ ρ Σ_m Σ_n ⟨[φ_i']⟩. (40)

Both averages ⟨[.]⟩ are performed with respect to a Gaussian distribution of mean μ and variance Δ_0. Once μ, Δ_0 and κ have been determined, the single-unit mean μ_i and the individual variance Δ_0^I are obtained from Equation 28. The dynamical mean-field equations given in Equation 40 can be fully solved to determine the stationary solutions. Detailed descriptions of these solutions are provided further down for two particular cases: (i) overlap between m and n only along the unitary direction u (M_m ≠ 0, M_n ≠ 0, ρ = 0); (ii) overlap between m and n only in a direction orthogonal to u (M_m = M_n = 0, ρ ≠ 0).
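A system of this kind can be solved, for instance, by damped fixed-point iteration, with the Gaussian averages ⟨[.]⟩ evaluated by Gauss-Hermite quadrature. The sketch below is one possible implementation, not the authors' code; it assumes φ = tanh and illustrative parameter values, and follows the structure of Equation 40 with the first term of the κ equation given by M_n ⟨[φ_i]⟩ as in Equation 36.

```python
import numpy as np

# Gauss-Hermite quadrature for Gaussian averages <f> = ∫ Dz f(mu + sqrt(D) z)
zs, ws = np.polynomial.hermite_e.hermegauss(101)
ws = ws / ws.sum()                 # normalize the probabilists' Hermite weights

def gauss_avg(f, mu, delta):
    return float(np.sum(ws * f(mu + np.sqrt(max(delta, 0.0)) * zs)))

def solve_dmf(g, M_m, M_n, S_m, S_n, rho, iters=2000, lr=0.1):
    """Damped fixed-point iteration for the system in Equation 40, phi = tanh.
    Returns (mu, Delta_0, kappa)."""
    phi = np.tanh
    dphi = lambda u: 1.0 / np.cosh(u) ** 2
    kappa, delta0 = 1.0, 1.0                      # non-trivial initial guess
    for _ in range(iters):
        mu = M_m * kappa
        avg_phi = gauss_avg(phi, mu, delta0)
        avg_phi2 = gauss_avg(lambda u: phi(u) ** 2, mu, delta0)
        avg_dphi = gauss_avg(dphi, mu, delta0)
        delta0 = (1 - lr) * delta0 + lr * (g**2 * avg_phi2 + S_m**2 * kappa**2)
        kappa = (1 - lr) * kappa + lr * (M_n * avg_phi
                                         + rho * S_m * S_n * kappa * avg_dphi)
    return M_m * kappa, delta0, kappa

# Case (i): overlap along the unitary direction only (rho = 0),
# illustrative parameter values
mu, delta0, kappa = solve_dmf(g=0.8, M_m=2.0, M_n=1.0, S_m=1.0, S_n=1.0, rho=0.0)
print(mu, delta0, kappa)
```

The damping factor `lr` is needed because the undamped map can oscillate; the non-trivial initial guess selects the structured branch rather than the trivial solution κ = 0.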

Transient dynamics and stability of stationary solutions

We now turn to transient dynamics around fixed points, and to the related problem of evaluating whether the stationary solutions found within DMF are stable with respect to the original network dynamics (Equation 6). For any given realization of the connectivity matrix, the network we consider is completely deterministic. We can then study the local, transient dynamics by linearizing the dynamics around any stationary solution. We therefore look at the time evolution of a small displacement away from the fixed point: x_i(t) = x_i^0 + x_i^1(t). For any generic stationary solution {x_i^0} the linearized dynamics are given by the stability matrix S_ij, which reads:

S_ij = φ'(x_j^0) (g χ_ij + m_i n_j / N). (41)

If the real part of every eigenvalue of S_ij is smaller than unity, the perturbation decays in time and thus the stationary solution is stable.

Homogeneous stationary solutions

We first consider homogeneous stationary solutions, for which x_i^0 = x̄ for all units. A particular homogeneous solution is the trivial solution x̄ = 0, which the network admits for all parameter values when the transfer function is φ(x) = tanh(x). Other homogeneous solutions can be obtained when the vector m is homogeneous, i.e., m_i = M_m for all i. For homogeneous solutions, the stability matrix reduces to a scaled version of the connectivity matrix:

S_ij = φ'(x̄) J_ij. (42)

We are thus left with the problem of evaluating the eigenspectrum of the global connectivity matrix J_ij. The matrix J_ij consists of a full-rank component χ_ij, the entries of which are drawn at random, and of a structured component of small dimensionality with fixed entries. We focus on the limit of large networks; in that limit, an analytical prediction for the spectrum of its eigenvalues can be derived. Because of the 1/N scaling, the matrix norm of P_ij is bounded as N increases. We can then apply results from random matrix theory (Tao, 2013) which predict that, in the large-N limit, the eigenspectra of the random and the structured parts do not interact, but sum together. The eigenspectrum of J_ij therefore consists of two separated components, inherited respectively from the random and the structured terms (Figure S1A). Similarly to (Girko, 1985), the random term χ_ij returns a set of N − 1 eigenvalues which lie on the complex plane in a compact circular region of radius g. In addition to this component, the eigenspectrum of J_ij contains the non-zero eigenvalues of P_ij: in the case of a rank-one matrix, one single outlier eigenvalue is centered at the position Σ_i m_i n_i / N = ⟨m_i n_i⟩. In Figure S1B we measure both the outlier position and the radius of the compact circular component. We show that deviations from the theoretical predictions are in general small and decay to zero as the system size is increased.

Going back to the stability matrix S_ij = φ'(x̄) J_ij, we conclude that a homogeneous stationary solution can lose stability in two different ways, when either m^T n / N or g become larger than 1/φ'(x̄). We expect different kinds of instabilities to occur in the two cases. When g crosses the instability line, a large number of random directions become unstable at the same time. As in (Sompolinsky et al., 1988), this instability is expected to lead to the onset of irregular temporal activity. When the instability is led by the outlier, instead, the trivial fixed point becomes unstable in one unique direction given by the corresponding eigenvector. When g = 0, this eigenvector coincides exactly with m. For finite values of the disorder g, the outlier eigenvector fluctuates depending on the random part of the connectivity, but remains strongly correlated with m (Figure S1C), which therefore determines the average direction of the instability. Above the instability, as the network dynamics is completely symmetric with respect to a change of sign of the input variables, we expect the non-linear boundaries to generate two symmetric stationary solutions.
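The decomposition of the eigenspectrum of J_ij into a circular bulk of radius approximately g and a single outlier near ⟨m_i n_i⟩ can be checked directly. The parameter values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(3)
N, g = 1000, 0.6
m = rng.normal(1.5, 1.0, N)
n = rng.normal(1.5, 1.0, N)
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) + np.outer(m, n) / N

ev = np.linalg.eigvals(J)
i_out = np.argmax(ev.real)
outlier = ev[i_out]                       # single outlier eigenvalue
bulk = np.delete(ev, i_out)               # remaining N-1 eigenvalues

# Outlier close to <m_i n_i>; bulk confined to a circle of radius ~ g.
print(outlier.real, np.mean(m * n), np.abs(bulk).max())
```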

Heterogeneous stationary solutions S i j is obtained by multiplying each column of the connectivity matrix J i j by a different gain value (see S i j is not trivially related to the spectrum of J i j . A second type of possible stationary solutions are heterogeneous fixed points, in which different units reach different equilibrium values. For such fixed points, the linearized stability matrixis obtained by multiplying each column of the connectivity matrixby a different gain value (see Equation 41 ), so that the eigenspectrum ofis not trivially related to the spectrum of J i j , the eigenspectrum of S i j consists of two discrete components: one compact set of N − 1 eigenvalues contained in a circle on the complex plane, and a single isolated outlier eigenvalue ( Numerical investigations reveal that, as for, the eigenspectrum ofconsists of two discrete components: one compact set ofeigenvalues contained in a circle on the complex plane, and a single isolated outlier eigenvalue ( Figure S1 D). Harish and Hansel, 2015 Harish O.

Hansel D. Asynchronous rate chaos in spiking neuronal circuits. Rajan and Abbott, 2006 Rajan K.

Abbott L.F. Eigenvalue spectra of random matrices for neural networks. Aljadeff et al., 2015b Aljadeff J.

Stern M.

Sharpee T. Transition to chaos in random networks with cell-type-specific connectivity. S i j . To the leading order in N: r = g 1 N ∑ j = 1 N ϕ ′ 2 ( x j 0 ) (43)

which, in large networks, can be approximated by the mean-field average: r = g 〈 [ ϕ i ′ 2 ] 〉 . (44)

As previously noticed in (), the radius of the circular compact set r can be computed as in () by summing the variances of the distributions in every column of. To the leading order in N:which, in large networks, can be approximated by the mean-field average: P i j , the structured connectivity term does not appear explicitly in the expression for the radius. As the structured part of the connectivity determines the heterogeneous fixed point, the value of r however depends implicitly on the structured connectivity term through 〈 [ ϕ i ' 2 ] 〉 , which is computed as a Gaussian integral over a distribution with mean μ and variance Δ 0 given by r = 1 , the DMF equations predict the onset of temporally fluctuating solutions (see later on in Note that, because of the weak scaling in, the structured connectivity term does not appear explicitly in the expression for the radius. As the structured part of the connectivity determines the heterogeneous fixed point, the value of r however depends implicitly on the structured connectivity term through, which is computed as a Gaussian integral over a distribution with mean μ and variancegiven by Equation 40 . In Figures S1 D–S1F, we show that Equation 44 approximates well the radius of finite-size, numerically computed eigenspectra. Whenever the mean-field theory predicts instabilities led by r, we expect the network dynamics to converge to irregular non-stationary solutions. Consistently, at the critical point, where, the DMF equations predict the onset of temporally fluctuating solutions (see later on in STAR Methods ). S i j are strongly correlated, as they both scale with the multiplicative factor ϕ ′ ( x j 0 ) , which correlates with the particular realization of the random part of the connectivity χ i j . As a consequence, χ i j cannot be considered as a truly random matrix with respect to m i ϕ ′ ( x j 0 ) n j / N , and in contrast to the case of homogeneous fixed points, results from ( Girko, 1985 Girko V.L. 
We now turn to the problem of evaluating the position of the outlier eigenvalue. In the case of heterogeneous fixed points, the structured and the random components of the matrix S i j are strongly correlated, as they both scale with the multiplicative factor ϕ ′ ( x j 0 ) , which correlates with the particular realization of the random part of the connectivity χ i j . As a consequence, χ i j cannot be considered as a truly random matrix with respect to m i ϕ ′ ( x j 0 ) n j / N , and in contrast to the case of homogeneous fixed points, results from ( Girko, 1985 ) do not hold.

We determined numerically the position of the outlier in finite-size eigenspectra ( Figures S1 D–S1F). We found that its value indeed significantly deviates from the only non-zero eigenvalue of the rank-one structure m i ϕ ′ ( x j 0 ) n j / N , which can be computed in the mean-field framework (when ρ = 0 , it corresponds to M m M n 〈 [ ϕ i ′ ] 〉 + M n κ Σ m 2 〈 [ ϕ i ′ ′ ] 〉 ). On the other hand, the value of the outlier coincides exactly with the eigenvalue of m i ϕ ′ ( x j 0 ) n j / N whenever the random component χ i j is shuffled (black dots in Figure S1 F). This observation confirms that the position of the outlier critically depends on the correlations existing between the rank-one structure m i ϕ ′ ( x j 0 ) n j / N and the specific realization of the random bulk χ i j .

Mean-field analysis of transient dynamics and stability of stationary solutions

As for heterogeneous fixed points we were not able to assess the position of the outlying eigenvalue using random matrix theory, we turned to a mean-field analysis to determine transient activity. This analysis allowed us to determine accurately the position of the outlier, and therefore the stability of heterogeneous fixed points. The approach exploited here is based on ( Kadmon and Sompolinsky, 2015 ).

We consider the stability of the single unit activation x i when averaged across different realizations of the random connectivity and its random eigenmodes. Directly averaging across realizations the network dynamics defined in Equation 6 yields the time evolution of the mean activation μ i of unit i:

μ ˙ i ( t ) = − μ i ( t ) + m i κ ( t ) . (45)

We observe that we can write: μ i ( t ) = m i κ ˜ ( t ) , where κ ˜ is the low-pass filtered version of κ: ( 1 + d / d t ) κ ˜ ( t ) = κ ( t ) . Small perturbations around the fixed point solution read: μ i ( t ) = μ i 0 + μ i 1 ( t ) . The equilibrium values μ i 0 correspond to the DMF stationary solution computed from Equations 28 and 40: μ i 0 = m i κ 0 . The first-order perturbations thus obey:

μ ˙ i 1 ( t ) = − μ i 1 ( t ) + m i κ 1 ( t ) , (46)

indicating that the decay timescale of the mean activity is inherited by the decay time constant of κ 1 . An additional equation for the time evolution of κ 1 thus needs to be derived. When activity is perturbed, the firing activity ϕ i of unit i can be evaluated at the first order: ϕ i 0 → ϕ i 0 + ϕ i 1 ( t ) = ϕ ( x i 0 ) + ϕ ′ ( x i 0 ) x i 1 ( t ) . As a consequence, the first-order in κ reads:

κ 1 ( t ) = 〈 n i [ ϕ ′ ( x i 0 ) x i 1 ( t ) ] 〉 . (47)

Summing Equation 47 to its time-derivative, we get:

κ ˙ 1 ( t ) = − κ 1 ( t ) + ( 1 + d / d t ) 〈 n i [ ϕ ′ ( x i 0 ) x i 1 ( t ) ] 〉 . (48)

In order to simplify the r.h.s., we start by considering the average with respect to the random part of the connectivity for a single unit i. In order to compute [ ϕ ′ ( x i 0 ) x i 1 ] , we explicitly build x i 0 and x i t : = x i ( t ) as Gaussian variables centered respectively in μ i 0 and μ i t . We will call Δ 0 I 0 and Δ 0 I t the variances of the two variables, and Δ I , t : 0 their two-times correlation defined by Δ I , t : 0 = [ x i t x i 0 ] − [ x i t ] [ x i 0 ] . We can then write the two variables as

x i 0 = μ i 0 + √( Δ 0 I 0 − Δ I , t : 0 ) x 1 + √( Δ I , t : 0 ) y
x i t = μ i t + √( Δ 0 I t − Δ I , t : 0 ) x 2 + √( Δ I , t : 0 ) y (49)

The first-order response of x i is given by the difference between x i t and x i 0 , and reads:

x i 1 = μ i 1 + √( Δ 0 I t − Δ I , t : 0 ) x 2 − √( Δ 0 I 0 − Δ I , t : 0 ) x 1 . (50)
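The correlated-Gaussian construction of Equation 49 can be verified by direct sampling. The sketch below uses arbitrary test values for the means, variances and two-times correlation (any choice with Δ I , t : 0 ≤ min( Δ 0 I 0 , Δ 0 I t ) works); it is an illustration, not the paper's code.

```python
# Monte Carlo check of Equation 49: with x1, x2, y standard normal, the
# variables x0 and xt have variances D00, D0t and covariance Dt0 by construction.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
mu0, mut = 0.2, 0.5                    # arbitrary test means
D00, D0t, Dt0 = 1.0, 1.4, 0.7          # requires Dt0 <= min(D00, D0t)

x1, x2, y = rng.standard_normal((3, n))
x0 = mu0 + np.sqrt(D00 - Dt0) * x1 + np.sqrt(Dt0) * y
xt = mut + np.sqrt(D0t - Dt0) * x2 + np.sqrt(Dt0) * y

cov = np.mean(x0 * xt) - np.mean(x0) * np.mean(xt)
print(np.var(x0), np.var(xt), cov)     # should approach D00, D0t, Dt0
```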

As in classical DMF derivations ( Sompolinsky et al., 1988 ; Rajan et al., 2010 ; Kadmon and Sompolinsky, 2015 ), x 1 , x 2 and y are standard normal variables. By integrating over their distributions we can write:

[ ϕ ′ ( x i 0 ) x i 1 ] = ∫ D x 1 ∫ D x 2 ( μ i 1 + √( Δ 0 I t − Δ I , t : 0 ) x 2 − √( Δ 0 I 0 − Δ I , t : 0 ) x 1 ) ∫ D y ϕ ′ ( μ i 0 + √( Δ 0 I 0 − Δ I , t : 0 ) x 1 + √( Δ I , t : 0 ) y ) . (51)

Integrating by parts as in Equation 38 we get:

[ ϕ ′ ( x i 0 ) x i 1 ] = μ i 1 [ ϕ i ′ ] + ( Δ I , t : 0 − Δ 0 I 0 ) [ ϕ i ′ ′ ] (52)

where the Gaussian integrals [ ϕ i ′ ] and [ ϕ i ′ ′ ] are evaluated using the fixed point statistics. Note that, at the fixed point, Δ I , t : 0 = Δ 0 I 0 . As a consequence, Δ I , t : 0 − Δ 0 I 0 gives a first-order response:

Δ I , 1 : 0 : = Δ I , t : 0 − Δ 0 I 0 = [ x i 1 x i 0 ] − [ x i 1 ] [ x i 0 ] = [ x i 1 x i 0 ] − μ i 0 μ i 1 (53)

which can be rewritten as a function of the global second-order statistics Δ 1 : 0 = 〈 [ x i 1 x i 0 ] 〉 − 〈 [ x i 1 ] 〉 〈 [ x i 0 ] 〉 as: Δ I , 1 : 0 = Δ 1 : 0 − { 〈 μ i 1 μ i 0 〉 − 〈 μ i 1 〉 〈 μ i 0 〉 } = Δ 1 : 0 − Σ m 2 κ ˜ 0 κ ˜ 1 . (54)
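The Gaussian integration by parts leading from Equation 51 to Equation 52 rests on Stein's identity, ∫ D z z f ( μ + √Δ z ) = √Δ ∫ D z f ′ ( μ + √Δ z ), which can be checked numerically. A minimal sketch with ϕ = tanh and arbitrary test values for μ and Δ (not the paper's code):

```python
# Numerical check of the Gaussian integration-by-parts (Stein) identity:
#   int Dz  z f(mu + sqrt(D) z)  =  sqrt(D) int Dz f'(mu + sqrt(D) z)
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(200)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)           # sum(w * f(z)) approximates int Dz f(z)

mu, D = 0.4, 0.9                       # arbitrary test statistics
f = np.tanh
fprime = lambda x: 1.0 - np.tanh(x) ** 2

lhs = np.sum(w * z * f(mu + np.sqrt(D) * z))
rhs = np.sqrt(D) * np.sum(w * fprime(mu + np.sqrt(D) * z))
print(lhs, rhs)                        # the two sides agree
```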

Equation 54 can be rewritten in terms of the first-order perturbation for the global equal-time variance: Δ 0 1 = Δ 0 t − Δ 0 0 . We consider that, by definition:

Δ 1 : 0 = ∑ j = 1 N x j 1 ∂ Δ t : 0 / ∂ x j t | 0
Δ 0 1 = ∑ j = 1 N x j 1 ∂ Δ 0 t / ∂ x j t | 0 . (55)

We then observe that, when the derivatives are evaluated at the fixed point, we have:

∂ Δ t : 0 / ∂ x j t | 0 = ( 1 / 2 ) ∂ Δ 0 t / ∂ x j t | 0 , (56)

and we conclude that:

Δ 1 : 0 = ( 1 / 2 ) Δ 0 1 . (57)

Equation 52 thus becomes:

[ ϕ ′ ( x i 0 ) x i 1 ] = m i κ ˜ 1 [ ϕ i ′ ] + ( Δ 0 1 / 2 − Σ m 2 κ ˜ 0 κ ˜ 1 ) [ ϕ i ′ ′ ] . (58)

In a second step, we perform the average across different units of the population, by writing m and n as in Equation 34 . After some algebra, we get:

〈 n i [ ϕ ′ ( x i 0 ) x i 1 ( t ) ] 〉 = κ ˜ 1 [ ( M m M n + ρ Σ m Σ n ) 〈 [ ϕ i ′ ] 〉 + ρ κ 0 M m Σ m Σ n 〈 [ ϕ i ′ ′ ] 〉 ] + ( Δ 0 1 / 2 ) [ M n 〈 [ ϕ i ′ ′ ] 〉 + ρ κ 0 Σ m Σ n 〈 [ ϕ i ′ ′ ′ ] 〉 ] : = κ ˜ 1 a + Δ 0 1 b (59)

where constants a and b were defined as:

a = ( M m M n + ρ Σ m Σ n ) 〈 [ ϕ i ′ ] 〉 + ρ κ 0 M m Σ m Σ n 〈 [ ϕ i ′ ′ ] 〉
b = ( 1 / 2 ) { M n 〈 [ ϕ i ′ ′ ] 〉 + ρ κ 0 Σ m Σ n 〈 [ ϕ i ′ ′ ′ ] 〉 } . (60)

The time evolution of κ can be finally rewritten as:

κ ˙ 1 ( t ) = − κ 1 ( t ) + ( 1 + d / d t ) { κ ˜ 1 a + Δ 0 1 b } , (61)

so that the time evolution of the perturbed variance must be considered as well. In order to isolate the evolution law of Δ 0 , we rewrite the activation variable x i ( t ) by separating the uniform and the heterogeneous components: x i ( t ) = μ ( t ) + δ x i ( t ) . The time evolution for the residual δ x i ( t ) is given by:

δ x ˙ i ( t ) = − δ x i ( t ) + g ∑ j = 1 N χ i j ϕ ( x j ( t ) ) + ( m i − M m ) κ ( t ) (62)

so that, squaring:

( d δ x i ( t ) / d t ) 2 + 2 δ x i ( t ) ( d δ x i ( t ) / d t ) + δ x i ( t ) 2 = g 2 ∑ j = 1 N ∑ k = 1 N χ i j χ i k ϕ ( x j ( t ) ) ϕ ( x k ( t ) ) + ( m i − M m ) 2 κ ( t ) 2 + 2 g ( m i − M m ) κ ( t ) ∑ k = 1 N χ i k ϕ ( x k ( t ) ) . (63)

Averaging over i and the realizations of the disorder yields:

d Δ 0 ( t ) / d t = − Δ 0 ( t ) + g 2 〈 [ ϕ i 2 ( t ) ] 〉 + Σ m 2 κ ( t ) 2 − 〈 [ ( d δ x i ( t ) / d t ) 2 ] 〉 : = − Δ 0 ( t ) + G ( μ , Δ 0 , κ ) − 〈 [ ( d δ x i ( t ) / d t ) 2 ] 〉 (64)

as by definition we have: 〈 [ δ x i 2 ( t ) ] 〉 = Δ 0 ( t ) . Expanding the dynamics of Δ 0 to the first order, we get:

Δ ˙ 0 1 ( t ) = − Δ 0 1 ( t ) + μ 1 ∂ G / ∂ μ | 0 + Δ 0 1 ∂ G / ∂ Δ 0 | 0 + κ 1 ∂ G / ∂ κ | 0 . (65)

Note that we could neglect the contributions originating from the last term of Equation 64 because they do not enter at the leading order. Indeed we have:

∂ / ∂ μ 〈 [ ( d δ x i ( t ) / d t ) 2 ] 〉 | 0 = 2 〈 [ ( d δ x i ( t ) / d t ) ( ∂ / ∂ μ ) ( d δ x i ( t ) / d t ) ] 〉 | 0 = 0 (66)

since temporal derivatives for every i vanish when evaluated at the fixed point. A little algebra returns the last three linear coefficients:

∂ G / ∂ μ | 0 = 2 g 2 〈 [ ϕ i ϕ i ′ ] 〉
∂ G / ∂ Δ 0 | 0 = g 2 { 〈 [ ϕ i ′ 2 ] 〉 + 〈 [ ϕ i ϕ i ′ ′ ] 〉 }
∂ G / ∂ κ | 0 = 2 Σ m 2 κ 0 . (67)

Collecting all the results together in Equation 61 we obtain:

κ ˙ 1 ( t ) = − κ 1 ( t ) + a κ 1 ( t ) + b { μ 1 ∂ G / ∂ μ | 0 + Δ 0 1 ∂ G / ∂ Δ 0 | 0 + κ 1 ∂ G / ∂ κ | 0 } . (68)

By averaging Equation 45 we furthermore obtain:

μ ˙ 1 ( t ) = − μ 1 ( t ) + M m κ 1 . (69)

We finally obtained that the perturbation timescale is determined by the population-averaged dynamics:

d / d t ( μ 1 , Δ 0 1 , κ 1 )ᵀ = − ( μ 1 , Δ 0 1 , κ 1 )ᵀ + M ( μ 1 , Δ 0 1 , κ 1 )ᵀ (70)

where the evolution matrix M is defined as:

M = ( 0    0    M m
      2 g 2 〈 [ ϕ i ϕ i ′ ] 〉    g 2 { 〈 [ ϕ i ′ 2 ] 〉 + 〈 [ ϕ i ϕ i ′ ′ ] 〉 }    2 Σ m 2 κ 0
      2 b g 2 〈 [ ϕ i ϕ i ′ ] 〉    b g 2 { 〈 [ ϕ i ′ 2 ] 〉 + 〈 [ ϕ i ϕ i ′ ′ ] 〉 }    2 b Σ m 2 κ 0 + a ) . (71)

Note that one eigenvalue of matrix M , which corresponds to the low-pass filtering between κ and μ, is always fixed to zero. Equations 70 and 71 reveal that, during the relaxation to equilibrium, the transient dynamics of the first- and second-order statistics of the activity are tightly coupled. Diagonalizing M allows us to retrieve the largest decay timescale of the network, which indicates the average, structural stability of stationary states. When an outlier eigenvalue is present in the eigenspectrum of the stability matrix S i j , the largest decay timescale from M predicts its position. The corresponding eigenvector e ˆ indeed contains a structured component along m, which is not washed out by averaging across different realizations of χ i j . The second non-zero eigenvalue of M , which vanishes at g = 0, measures a second and smaller effective timescale, which derives from averaging across the remaining N − 1 random modes. Varying g, we computed the largest eigenvalue of M for corresponding stationary solutions of the mean-field equations. In Figure S1 F we show that, when the stability eigenspectrum includes an outlier eigenvalue, its position is correctly predicted by the largest eigenvalue of M . The mismatch between the two values is small and can be understood as a finite-size effect ( Figure S1 E, gray).
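The structure of M in Equation 71 can be verified directly: its first row is ( 0 , 0 , M m ) and its third row equals b times the second row plus ( 0 , 0 , a ), which forces det M = 0. The sketch below uses arbitrary placeholder values for the Gaussian averages and for the constants a and b (they are not solutions of the DMF equations); the zero eigenvalue is structural and independent of these values.

```python
# Structural check on the evolution matrix M (Equations 70-71):
# row3 = b * row2 + (0, 0, a) and row1 = (0, 0, Mm) imply det(M) = 0,
# i.e. one eigenvalue of M is always exactly zero, as stated in the text.
import numpy as np

Mm, Sigma_m, g, kappa0 = 1.2, 0.8, 1.5, 0.4   # placeholder parameters
a_const, b_const = 0.6, 0.15                  # constants a, b of Eq. 60 (placeholders)
avg_phi_phip = 0.30                           # <[phi_i phi_i']>            (placeholder)
avg_phip2_plus_phi_phipp = 0.25               # <[phi_i'^2]> + <[phi_i phi_i'']> (placeholder)

row2 = np.array([2 * g**2 * avg_phi_phip,
                 g**2 * avg_phip2_plus_phi_phipp,
                 2 * Sigma_m**2 * kappa0])
M = np.vstack([[0.0, 0.0, Mm],
               row2,
               b_const * row2 + np.array([0.0, 0.0, a_const])])

eig = np.linalg.eigvals(M)
print(np.sort(np.abs(eig)))                   # smallest modulus is numerically zero
```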
To conclude, we found that the stability of arbitrary stationary solutions can be assessed by evaluating, with the help of mean-field theory, both the value of the radius ( Equation 44 ) and the value of the outlier ( Equation 71 ) of the stability eigenspectrum. Instabilities led by the two different components are expected to reshape activity into two qualitatively different classes of dynamical regimes, which are discussed in detail further in STAR Methods for two specific classes of structures.

Dynamical Mean Field equations for chaotic solutions

When a stationary state loses stability due to the compact component of the stability eigenspectrum, the network activity starts developing irregular temporal fluctuations. Such temporally fluctuating states can be described within the DMF theory by taking into account the full temporal auto-correlation function of the effective noise η i ( Sompolinsky et al., 1988 ). For the sake of simplicity, here we derive directly the mean-field equations for population-averaged statistics, and we eventually link them back to single unit quantities.

By differentiating twice Equation 11 , and by substituting the appropriate expression for the statistics of the noise η i , we derive that the auto-correlation function Δ ( τ ) = 〈 [ x i ( t + τ ) x i ( t ) ] 〉 − 〈 [ x i ( t ) ] 〉 2 obeys the second-order differential equation:

Δ ¨ ( τ ) = Δ ( τ ) − g 2 〈 [ ϕ i ( t ) ϕ i ( t + τ ) ] 〉 − Σ m 2 κ 2 . (72)

In this context, the activation variance Δ 0 coincides with the peak of the full auto-correlation function: Δ 0 = Δ ( τ = 0 ) . We expect the total variance to include a temporal term, coinciding with the amplitude of chaotic fluctuations, and a quenched one, representing the spread across the population due to the disorder in χ i j and the structure imposed by the right-connectivity vector m. In order to compute the full rate auto-correlation function 〈 [ ϕ i ( t ) ϕ i ( t + τ ) ] 〉 , we need to explicitly build two correlated Gaussian variables x ( t ) and x ( t + τ ) , such that:

〈 [ x i ( t ) ] 〉 = 〈 [ x i ( t + τ ) ] 〉 = μ
〈 [ x i 2 ( t ) ] 〉 − 〈 [ x i ( t ) ] 〉 2 = 〈 [ x i 2 ( t + τ ) ] 〉 − 〈 [ x i ( t + τ ) ] 〉 2 = Δ 0
〈 [ x i ( t + τ ) x i ( t ) ] 〉 − 〈 [ x i ( t ) ] 〉 2 = Δ ( τ ) . (73)

Following previous studies ( Sompolinsky et al., 1988 ; Rajan et al., 2010 ), we obtain:

〈 [ ϕ i ( t ) ϕ i ( t + τ ) ] 〉 = ∫ D z [ ∫ D x ϕ ( μ + √( Δ 0 − Δ ) x + √( Δ ) z ) ] 2 (74)

where we used the short-hand notation Δ : = Δ ( τ ) and we assumed for simplicity Δ > 0 . As we show later, this requirement is satisfied by our final solution. In order to visualize the dynamics of the solutions of Equation 72 , we study the equivalent problem of a classical particle moving in a one-dimensional potential ( Sompolinsky et al., 1988 ; Rajan et al., 2010 ):

Δ ¨ ( τ ) = − ∂ V / ∂ Δ (75)

where the potential V is given by an integration over Δ :

V ( Δ , Δ 0 ) = − Δ 2 / 2 + g 2 〈 [ Φ i ( t ) Φ i ( t + τ ) ] 〉 + Σ m 2 κ 2 Δ (76)

and Φ ( x ) = ∫ − ∞ x ϕ ( x ′ ) d x ′ . As the potential V depends self-consistently on the initial condition Δ 0 , the shape of the auto-correlation function Δ ( τ ) depends parametrically on the value of Δ 0 . Similarly to previous works, we isolate the solutions that decay monotonically from Δ 0 to an asymptotic value Δ ( τ → ∞ ) : = Δ ∞ , where Δ ∞ is determined by d V / d Δ | Δ = Δ ∞ = 0 . This translates into a first condition to be imposed. A second equation comes from the energy conservation condition: V ( Δ 0 , Δ 0 ) = V ( Δ ∞ , Δ 0 ) . Combined with the usual equation for the mean μ and the overlap κ, the system of equations to be solved becomes:

μ = M m κ
κ = M n 〈 [ ϕ i ] 〉 + ρ κ 〈 [ ϕ i ′ ] 〉
( Δ 0 2 − Δ ∞ 2 ) / 2 = g 2 { ∫ D z Φ 2 ( μ + √( Δ 0 ) z ) − ∫ D z [ ∫ D x Φ ( μ + √( Δ 0 − Δ ∞ ) x + √( Δ ∞ ) z ) ] 2 } + Σ m 2 κ 2 ( Δ 0 − Δ ∞ )
Δ ∞ = g 2 ∫ D z [ ∫ D x ϕ ( μ + √( Δ 0 − Δ ∞ ) x + √( Δ ∞ ) z ) ] 2 + Σ m 2 κ 2 . (77)

The temporally fluctuating state is therefore described by a closed set of equations for the mean activity μ, the overlap κ, the zero-lag variance Δ 0 and the long-time variance Δ ∞ . The difference Δ 0 − Δ ∞ represents the amplitude of temporal fluctuations. If temporal fluctuations are absent, Δ 0 = Δ ∞ , and the system of equations we just derived reduces to the DMF description for stationary solutions given in Equation 40 .

A similar set of equations can be derived for single unit activity. As for static stationary states, the mean activity of unit i is given by

μ i = m i κ . (78)

The static variance around this mean activity is identical for all units and given by

Δ ∞ I = g 2 ∫ D z [ ∫ D x ϕ ( μ + √( Δ 0 − Δ ∞ ) x + √( Δ ∞ ) z ) ] 2 = Δ ∞ − Σ m 2 κ 2 (79)

while the temporal component Δ T I of the variance is identical to the population averaged temporal variance:

Δ T I = Δ 0 − Δ ∞ . (80)

To conclude, similarly to static stationary states, the structured connectivity P i j shapes network activity in the direction defined by its right eigenvector m whenever the overlap κ does not vanish. For this reason, the mean-field theory predicts in some parameter regions the existence of more than one chaotic solution. A formal analysis of the stability properties of the different solutions has not been performed. We nevertheless observe from numerical simulations that chaotic solutions tend to inherit the stability properties of the stationary solution they develop from. Specifically, when an homogeneous solution generates two heterogeneous bistable ones, we notice that the former loses stability in favor of the latter.

We finally observe that the critical coupling at which the DMF theory predicts the onset of chaotic fluctuations can be computed by imposing that, at the critical point, the concavity of the potential function V ( Δ ) is inverted ( Sompolinsky et al., 1988 ; Harish and Hansel, 2015 ):

d 2 V ( Δ , Δ 0 ) / d Δ 2 | Δ ∞ = 0 (81)

and the temporal component of the variance vanishes: Δ 0 = Δ ∞ . These two conditions are equivalent to the expression: 1 = g 2 〈 [ ϕ i ′ 2 ] 〉 , where, as we saw, g 2 〈 [ ϕ i ′ 2 ] 〉 coincides with the squared value of the radius of the compact component of the stability eigenspectrum ( Equation 44 ). In the phase diagram of Figure 1 B, we solved this equation for g to derive the position of the instability boundary from stationary to chaotic regimes.

Spontaneous dynamics: structures overlapping on the unitary direction

In this section, we analyze in detail a specific case, in which the connectivity vectors m and n overlap solely along the unitary direction u = ( 1 , 1 , … , 1 ) / √N . Within the statistical description of vector components, in this situation the joint probability density p ( m , n ) can be replaced by the product of two normal distributions (respectively, N ( M m , Σ m 2 ) and N ( M n , Σ n 2 ) ). The mean values M m and M n represent the projections of m and n on the common direction u, and the overlap between m and n is given by M m M n . The components of m and n are otherwise independent, the fluctuations representing the remaining parts of m and n that lie along mutually orthogonal directions. In this situation, the expression for κ simplifies to

κ = 〈 n i [ ϕ i ] 〉 = M n 〈 [ ϕ i ] 〉 (82)

so that a non-zero overlap κ can be obtained only if the mean population activity 〈 [ ϕ i ] 〉 is non-zero. Choosing independently drawn m and n vectors thus slightly simplifies the mean-field network description. The main qualitative features resulting from the interaction between the structured and the random component of the connectivity can however already be observed, and more easily understood, within this simplified setting.

Stationary solutions

The DMF description for stationary solutions reduces to a system of two non-linear equations for the population averaged mean μ and variance Δ 0 :

μ = M m M n 〈 [ ϕ i ] 〉 : = F ( μ , Δ 0 )
Δ 0 = g 2 〈 [ ϕ i 2 ] 〉 + Σ m 2 M n 2 〈 [ ϕ i ] 〉 2 : = G ( μ , Δ 0 ) . (83)

The population averages 〈 [ ϕ i ] 〉 and 〈 [ ϕ i 2 ] 〉 are computed as Gaussian integrals similarly to Equation 39 . Equation 83 can be solved numerically for μ and Δ 0 by iterating the equations up to convergence, which is equivalent to numerically simulating the two-dimensional dynamical system given by

μ ˙ ( t ) = − μ + F ( μ , Δ 0 )
Δ ˙ 0 ( t ) = − Δ 0 + G ( μ , Δ 0 ) , (84)

since the fixed points of this dynamical system correspond to solutions of Equation 83 . Gaussian integrals in the form of 〈 [ ϕ i ] 〉 are evaluated numerically through Gauss-Hermite quadrature with a sampling over 200 points. Unstable solutions can be computed by iterating the same equations after having inverted the sign of the time variable in the first equation.

As the system of equations in Equation 83 is two-dimensional, we can investigate the number and the nature of stationary solutions through a simple graphical approach ( Figure S1 G). We plot on the μ − Δ 0 plane the loci of points where the two individual equations

μ = F ( μ , Δ 0 )
Δ 0 = G ( μ , Δ 0 ) (85)

are satisfied. In analogy with dynamical systems approaches, we refer to the two corresponding curves as the DMF nullclines. The solutions of Equation 83 are then given by the intersections of the two nullclines.

To begin with, we focus on the nullcline defined by the first equation (also referred to as the μ nullcline). With respect to μ, F ( μ , Δ 0 ) is an odd sigmoidal function whose maximal slope depends on the value of Δ 0 and M m M n . When g = 0 and Σ m = 0 , the input variance Δ 0 vanishes. In this case, the points of the μ nullcline trivially reduce to the roots of the equation: μ = M m M n ϕ ( μ ) , which admits either one ( M m M n < 1 ) or three ( M m M n > 1 ) solutions. Non-zero values of g and Σ m imply finite and positive values of Δ 0 . As Δ 0 increases, the solutions to the equation μ = M m M n 〈 [ ϕ i ] 〉 vary smoothly, delineating the full nullcline in the μ − Δ 0 plane. As in the case without disorder ( g = 0 and Σ m = 0 ), for low structure strengths ( M m M n < 1 ), the μ nullcline consists of a unique branch: μ = 0 ∀ Δ 0 . At high structure strengths ( M m M n > 1 ), instead, its shape smoothly transforms into a symmetric pitchfork.

The Δ 0 nullcline is given by the solutions of Δ 0 = G ( μ , Δ 0 ) for Δ 0 as a function of μ. As G ( μ , Δ 0 ) depends quadratically on μ, the Δ 0 nullcline has a symmetric V-shape centered in μ = 0 . The ordinate of its vertex is controlled by the parameter g, as the second term of the second equation in Equation 83 vanishes at μ = 0 . For μ = 0 , the slope of G ( μ , Δ 0 ) in Δ 0 = 0 is equal to g 2 . As a consequence, for g < 1 , the vertex of the Δ 0 nullcline is fixed in ( 0 , 0 ) , while for g > 1 , the vertex is located at Δ 0 > 0 and an isolated point remains at ( 0 , 0 ) .

The stationary solutions of the DMF equations are determined by the intersections between the two nullclines. For all values of the parameters, the nullclines intersect in μ = 0 , Δ 0 = 0 , corresponding to the trivial, homogeneous stationary solution. The existence of other solutions is determined by the qualitative features of the individual nullclines, which depend on whether M m M n and g are smaller or greater than one ( Figure S1 G). The following qualitative situations can be distinguished: (i) for M m M n < 1 and g < 1 , only the trivial solution exists; (ii) for M m M n > 1 , two additional, symmetric solutions exist for non-zero values of μ and Δ 0 , corresponding to symmetric, heterogeneous stationary states; (iii) for g > 1 , an additional solution exists for μ = 0 and Δ 0 > 0 , corresponding to a heterogeneous solution in which individual units have non-zero stationary activity, but the population-average vanishes. For M m M n > 1 , this solution can co-exist with the symmetric heterogeneous ones, but in the limit of large g these solutions disappear ( Figure S1 G).

The next step is to assess the stability of the various solutions. As explained earlier on, the stability of the trivial state μ = 0 , Δ 0 = 0 can be readily assessed using random matrix theory arguments ( Figures S1 A and S1B). This state is stable only for M m M n < 1 and g < 1 . At M m M n = 1 , it loses stability due to the outlying eigenvalue of the stability matrix, leading to the bifurcation already observed at the level of the nullclines. At g = 1 , the instability is due to the radius of the bulk of the spectrum. This leads to a chaotic state, not predicted from the nullclines for the stationary solutions.

The stability of heterogeneous stationary states is assessed by determining separately the radius of the bulk of the spectrum and the position of the outlier ( Figures S1 D–S1F). The radius is determined from Equation 44 . The outlier is instead computed as the leading eigenvalue of the stability matrix given in Equation 71 . Note that in the present framework, where the overlap is defined along the unitary direction, it is possible to show that the latter is equivalent to computing the leading stability eigenvalue of the effective dynamical system introduced in Equation 84 , linearized around the corresponding fixed point. The bifurcation obtained when the outlier crosses unity is equivalent to the bifurcation predicted from the nullclines when the symmetric solutions disappear in favor of the heterogeneous solution of mean zero ( Figure S1 G). For M m M n > 1 , we however find that as g is increased, the radius of the bulk of the spectrum always leads to a chaotic instability before the outlier becomes unstable. Correspondingly, the μ = 0 and Δ 0 > 0 stationary state that exists for large g is never stable.
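The damped fixed-point iteration of Equations 83 and 84 described above can be sketched as follows. This is an illustration, not the paper's code: ϕ = tanh, Gaussian integrals use Gauss-Hermite quadrature over 200 points as in the text, and the parameter values are arbitrary choices with M m M n > 1 so that a heterogeneous ( μ ≠ 0 ) branch exists.

```python
# Damped fixed-point iteration of the stationary DMF system (Equations 83-84),
# with phi = tanh and 200-point Gauss-Hermite quadrature for Gaussian integrals.
import numpy as np

nodes, weights = np.polynomial.hermite.hermgauss(200)
z = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)          # sum(w * f(z)) approximates int Dz f(z)

Mm, Mn, Sigma_m, g = 1.0, 2.0, 0.5, 0.5   # illustrative parameters, Mm*Mn > 1

def F(mu, D0):                        # mu equation: Mm * Mn * <[phi]>
    return Mm * Mn * np.sum(w * np.tanh(mu + np.sqrt(D0) * z))

def G(mu, D0):                        # D0 equation: g^2 <[phi^2]> + Sigma_m^2 Mn^2 <[phi]>^2
    vals = np.tanh(mu + np.sqrt(D0) * z)
    return g**2 * np.sum(w * vals**2) + Sigma_m**2 * Mn**2 * np.sum(w * vals)**2

mu, D0 = 1.0, 0.5                     # initial condition selects the mu > 0 branch
for _ in range(2000):                 # damped iteration of Equation 84
    mu = 0.9 * mu + 0.1 * F(mu, D0)
    D0 = 0.9 * D0 + 0.1 * G(mu, D0)

print(mu, D0)                         # converged heterogeneous stationary solution
```

Starting instead from the symmetric initial condition ( μ , Δ 0 ) = ( −1.0 , 0.5 ) would converge to the mirror solution with μ < 0.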

Chaotic solutions

For large g, the instabilities of the stationary points generated by the bulk of the spectrum are expected to give rise to chaotic dynamics. We therefore turn to the DMF theory for chaotic states, which are described by an additional variable that quantifies temporal fluctuations. For the case studied here of connectivity vectors m and n overlapping only along the unitary direction, Equations 77 become

μ = F ( μ , Δ 0 , Δ ∞ ) = M m M n ∫ D z ϕ ( μ + √( Δ 0 ) z )
Δ 0 = G ( μ , Δ 0 , Δ ∞ ) = [ Δ ∞ 2 + 2 g 2 { ∫ D z Φ 2 ( μ + √( Δ 0 ) z ) − ∫ D z [ ∫ D x Φ ( μ + √( Δ 0 − Δ ∞ ) x + √( Δ ∞ ) z ) ] 2 } + 2 M n 2 Σ m 2 〈 [ ϕ i ] 〉 2 ( Δ 0 − Δ ∞ ) ] ^ { 1 / 2 }
Δ ∞ = H ( μ , Δ 0 , Δ ∞ ) = g 2 ∫ D z [ ∫ D x ϕ ( μ + √( Δ 0 − Δ ∞ ) x + √( Δ ∞ ) z ) ] 2 + M n 2 Σ m 2 〈 [ ϕ i ] 〉 2 . (86)

As the system to be solved is now three-dimensional, graphical approaches have only limited use. Similarly to the stationary state, a practical and stable way to find numerically the solutions is to iterate the dynamical system given by

μ ˙ = − μ + F ( μ , Δ 0 , Δ ∞ )
Δ ˙ 0 = − Δ 0 + G ( μ , Δ 0 , Δ ∞ )
Δ ˙ ∞ = − Δ ∞ + H ( μ , Δ 0 , Δ ∞ ) . (87)

where the double Gaussian integrals from Δ 0 = Δ ∞ . As the system to be solved is now three-dimensional, graphical approaches have only limited use. Similarly to the stationary state, a practical and stable way to find numerically the solutions is to iterate the dynamical system given bywhere the double Gaussian integrals from Equation 86 can be evaluated numerically as two nested Gauss-Hermite quadratures. Note that stationary states simply correspond to solutions for which M m M n and the disorder strength g. If g > 1 and M m M n < 1 , a single chaotic state exists corresponding to μ = 0 and Δ ∞ = 0 , meaning that the temporally averaged activity of all units vanishes, so that fluctuations are only temporal ( M m M n crosses unity, two symmetric states appear with non-zero values of μ and Δ ∞ . These states correspond to bistable heterogeneous chaotic states ( As for stationary solutions, different types of chaotic solutions appear depending on the values of the structure strengthand the disorder strength g. Ifand, a single chaotic state exists corresponding toand, meaning that the temporally averaged activity of all units vanishes, so that fluctuations are only temporal ( Figure 1 B red). Ascrosses unity, two symmetric states appear with non-zero values of μ and. These states correspond to bistable heterogeneous chaotic states ( Figure 1 B orange) that are analogous to bistable heterogeneous stationary states. g B at which heterogeneous chaotic states emerge (gray boundary in the phase diagram of ( 0 , Δ 0 , 0 ) . A long but straightforward algebra reveals that the stability matrix, evaluated in, is simply given by ( M m M n 〈 ϕ ' 〉 0 0 0 g 2 ( 〈 ϕ 2 〉 + 〈 Φ ϕ ' 〉 − 〈 Φ 〉 〈 ϕ ' 〉 ) Δ 0 0 0 0 g 2 〈 ϕ ' 〉 2 ) , (88)

such that g B corresponds to the value of the random strength g for which the largest of its three eigenvalues crosses unity. The critical disorder strengthat which heterogeneous chaotic states emerge (gray boundary in the phase diagram of Figure 1 ) is computed by evaluating the linear stability of the dynamics in 87 around the central solution. A long but straightforward algebra reveals that the stability matrix, evaluated in, is simply given bysuch thatcorresponds to the value of the random strength g for which the largest of its three eigenvalues crosses unity.

Spontaneous dynamics: structures overlapping on an arbitrary direction

In the previous section, we focused on the simplified scenario where the connectivity vectors m and n overlapped only in the unitary direction. Here, we briefly turn to the opposite case where the overlap along the unitary direction u vanishes (i.e., M_m = 0, M_n = 0), but the overlap ρ along a direction orthogonal to u is non-zero. As we will show, although the equations describing the network activity present some formal differences, they lead to qualitatively similar regimes. The same qualitative results apply as well to the general case, where an overlap exists on both the unitary and an orthogonal direction.

The network dynamics can be studied by solving the DMF Equations 40 and 77 with μ = 0. Stationary solutions are now determined by:

κ = ρκΣ_mΣ_n⟨[φ'_i(0, Δ_0)]⟩ := F(κ, Δ_0)
Δ_0 = g²⟨[φ_i²(0, Δ_0)]⟩ + Σ_m²κ² := G(κ, Δ_0). (89)

Note that, in this more general case, the relevant first-order statistics of network activity is the overlap κ, which can now take non-zero values even when the population-averaged activity ⟨[φ_i]⟩ vanishes.

As in the previous case, the stationary solutions can be analyzed in terms of nullclines (Figure S2A). The main difference lies in the κ nullcline, given by κ = ρκΣ_mΣ_n⟨[φ'_i(0, Δ_0)]⟩. As both sides of this equation are linear and homogeneous in κ, two classes of solutions exist: a trivial solution (κ = 0 for any Δ_0), and a non-trivial one (Δ_0 = Δ̃_0 for any κ), with Δ̃_0 determined by:

⟨[φ'_i(0, Δ̃_0)]⟩ = 1/(ρΣ_mΣ_n). (90)

Because 0 < φ'(x) < 1, Equation 90 admits non-trivial solutions only for sufficiently large overlap values: ρ > 1/(Σ_mΣ_n). In consequence, the κ nullcline takes qualitatively different shapes depending on the value of ρ: (i) for ρ < 1/(Σ_mΣ_n), it consists only of a vertical branch κ = 0; (ii) for ρ > 1/(Σ_mΣ_n), an additional horizontal branch Δ_0 = Δ̃_0 appears (Figure S2A). The Δ_0 nullcline is qualitatively similar to the previously studied case of m and n overlapping along the unitary direction, with a qualitative change when the disorder parameter g crosses unity.

The stationary solutions are given by the intersections between the two nullclines. Although the shape of the κ nullcline is distinct from the shape of the μ nullcline studied in the previous case, qualitatively similar regimes are found. The trivial stationary state κ = 0, Δ_0 = 0 exists for all parameter values. When the structure strength ρΣ_mΣ_n exceeds unity, two symmetric heterogeneous states appear with non-zero κ values of opposite signs (but vanishing mean μ). Finally, for large g, an additional state appears with κ = 0, Δ_0 > 0. Similarly to Figure 1, the solutions of Equation 89, which correspond to stationary activity states, are shown in blue in Figures S2B–S2D. In Figure S2B we address their stability properties: again we find that when non-centered stationary solutions exist, the central fixed point becomes unstable.
The instability is led by the outlier eigenvalue of the stability eigenspectrum. Similarly to Figure 1 , furthermore, the DMF theory predicts an instability to chaotic phases for high g values. As for stationary states, both heterogeneous and homogeneous chaotic solutions are admitted ( Figures S2 C and S2D); heterogeneous chaotic states exist in a parameter region where the values of g and ρ are comparable.
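The scalar condition of Equation 90 is easily solved by bisection, since ⟨[φ'_i(0, Δ_0)]⟩ decreases monotonically from 1 as Δ_0 grows. A minimal sketch (our own, with tanh units and assumed parameter values ρΣ_mΣ_n = 2):

```python
import numpy as np

# Sketch (our own, with tanh units and assumed parameters): solving
# Equation 90 for the non-trivial branch D0~ by bisection. For phi = tanh,
# <[phi_i'(0, D0)]> = E[sech^2(sqrt(D0) z)], z ~ N(0, 1), which decreases
# monotonically from 1, so a solution exists only when rho*Sm*Sn > 1.
zs, ws = np.polynomial.hermite.hermgauss(81)
zs, ws = np.sqrt(2.0) * zs, ws / np.sqrt(np.pi)

def mean_phi_prime(D0):
    return np.sum(ws / np.cosh(np.sqrt(D0) * zs) ** 2)

rho, Sm, Sn = 2.0, 1.0, 1.0      # rho * Sm * Sn = 2 > 1
target = 1.0 / (rho * Sm * Sn)

lo, hi = 0.0, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_phi_prime(mid) > target:
        lo = mid
    else:
        hi = mid
D0_tilde = 0.5 * (lo + hi)
print(D0_tilde)
```

For ρΣ_mΣ_n < 1 the target value exceeds the supremum of the left-hand side and the bisection has no root, consistent with the disappearance of the horizontal branch.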

Response to external inputs

In this section, we examine the effect of non-vanishing external inputs on the network dynamics. We consider the situation in which every unit receives a potentially different input I_i, so that the pattern of inputs at the network level is characterized by the N-dimensional vector I = {I_i}. The network dynamics in general depend on the geometrical arrangement of the vector I with respect to the connectivity vectors m and n. Within the statistical description used in DMF theory, the input pattern is therefore characterized by the first- and second-order statistics M_I and Σ_I of its elements, as well as by the value of the correlations Σ_mI and Σ_nI with the vectors m and n. In geometric terms, M_I quantifies the component of I along the unit direction u, while Σ_mI and Σ_nI quantify the overlaps with m and n along directions orthogonal to u. For the sake of simplicity, here we consider two connectivity vectors m and n that overlap solely on the unitary direction (ρ = 0). The two vectors thus read (see Equation 34):

m = M_m + Σ_m x_1
n = M_n + Σ_n x_2. (91)

The input pattern can overlap with the connectivity vectors on the common (u) and on the orthogonal directions (x_1 and x_2). It can moreover include further orthogonal components of strength Σ_⊥. The most general expression for the input vector can thus be written as:

I = M_I + (Σ_mI/Σ_m) x_1 + (Σ_nI/Σ_n) x_2 + Σ_⊥ h, (92)

where h is a standard normal vector. We first focus on the equilibrium response to constant inputs, and then turn to transient dynamics.

The mean-field equations in presence of external inputs can be derived in a straightforward fashion by following the same steps as in the input-free case. We start by considering the statistics of the effective coupling term ξ_i(t) = η_i(t) + I_i(t), with η_i(t) defined as in Equation 20. We can then exploit the statistics of η_i(t), which have been computed in the previous paragraphs, to obtain the equation for the mean activity:

μ_i = [x_i] = m_iκ + I_i. (93)

Equation 93 indicates that the direction of the average network activity is determined by a combination of the structured recurrent connectivity and the external input pattern. The final direction of the activation vector in the N-dimensional population space is controlled by the value of the overlap κ, which depends on the relative orientations of m, n and I. Its value is given by the self-consistent equation:

κ = ⟨n_i[φ_i]⟩ = ⟨ n_i ∫ Dz φ(m_iκ + I_i + √Δ_0 z) ⟩ = M_n⟨[φ_i]⟩ + Σ_nI⟨[φ'_i]⟩, (94)

as both vectors m and I share non-trivial overlap directions with n.

The second-order statistics of the noise are given by:

[ξ_i(t)ξ_j(t + τ)] = δ_ij g²⟨[φ_i(t)φ_i(t + τ)]⟩ + m_im_jκ² + (m_iI_j + m_jI_i)κ + I_iI_j. (95)

Averaging across the population we obtain:

⟨[ξ_i(t)ξ_i(t + τ)]⟩ − ⟨[ξ_i(t)]⟩² = g²⟨[φ_i²]⟩ + Σ_m²κ² + 2Σ_mIκ + Σ_I². (96)

The first term on the r.h.s. represents the quenched variability inherited from the random connectivity matrix, while Σ_μ² = Σ_m²κ² + 2Σ_mIκ + Σ_I² represents the variance induced by the structure, which is inherited from both vectors m and I (Equation 93). From Equation 92, the variance of the input reads:

Σ_I² = Σ_mI²/Σ_m² + Σ_nI²/Σ_n² + Σ_⊥². (97)

The final DMF equations to be solved are given by the following system:

μ = M_mκ + M_I
Δ̈ = Δ − { g²⟨[φ_i(t)φ_i(t + τ)]⟩ + Σ_m²κ² + 2Σ_mIκ + Σ_I² }
κ = M_n⟨[φ_i]⟩ + Σ_nI⟨[φ'_i]⟩, (98)

which, similarly to the cases we examined in detail so far, admits both stationary and chaotic solutions. As for spontaneous dynamics, the instabilities to chaos are computed by evaluating the radius of the eigenspectrum of the stability matrix S_ij (Equation 44). The stability matrix can admit an outlier eigenvalue as well, whose value can be predicted with a mean-field stability analysis. Extending the arguments already presented in the previous paragraphs allows to show that the effective stability matrix M is given by:

M = ( 0                      0                                    M_m
      2g²⟨[φ_iφ'_i]⟩         g²{⟨[φ'_i²]⟩ + ⟨[φ_iφ''_i]⟩}         2Σ_m²κ_0 + 2Σ_mI
      2bg²⟨[φ_iφ'_i]⟩        bg²{⟨[φ'_i²]⟩ + ⟨[φ_iφ''_i]⟩}        b(2Σ_m²κ_0 + 2Σ_mI) + a ), (99)

with:

a = M_mM_n⟨[φ'_i]⟩ + M_mΣ_nI⟨[φ''_i]⟩
b = (1/2){ M_n⟨[φ''_i]⟩ + Σ_nI⟨[φ'''_i]⟩ }. (100)
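Stepping back to Equation 98, the stationary solutions can be found in practice by damped fixed-point iteration. A sketch (our own, with assumed parameter values) for the special case M_m = M_n = M_I = Σ_mI = 0 with an input component along n, so that μ = 0, κ = Σ_nI⟨[φ'_i]⟩ and Δ_0 = g²⟨[φ_i²]⟩ + Σ_m²κ² + Σ_I²:

```python
import numpy as np

# Sketch (our own, assumed parameters) of solving the stationary version of
# Equation 98 by damped fixed-point iteration, in the special case
# Mm = Mn = MI = Sigma_mI = 0 with an input component along n. Then mu = 0,
# kappa = Sigma_nI <[phi']> and D0 = g^2 <[phi^2]> + Sm^2 kappa^2 + SI^2.
zs, ws = np.polynomial.hermite.hermgauss(81)
zs, ws = np.sqrt(2.0) * zs, ws / np.sqrt(np.pi)

g, Sm, SnI, SI = 1.5, 1.0, 0.8, 1.0

kappa, D0 = 0.0, 1.0
for _ in range(500):
    mean_phi2 = np.sum(ws * np.tanh(np.sqrt(D0) * zs) ** 2)
    mean_phip = np.sum(ws / np.cosh(np.sqrt(D0) * zs) ** 2)
    kappa = 0.5 * kappa + 0.5 * SnI * mean_phip
    D0 = 0.5 * D0 + 0.5 * (g ** 2 * mean_phi2 + Sm ** 2 * kappa ** 2 + SI ** 2)

print(kappa, D0)  # a structured response (kappa > 0) emerges
```

Even though the structure itself has null strength here, the overlap Σ_nI between input and left vector generates a non-zero κ, as discussed below for Figure 2D.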

As in the input-free case, when the stability eigenspectrum contains one outlier eigenvalue, its position is well predicted by the largest eigenvalue of M.

In the following, we refer to Figure 2 and analyze in detail the contribution of every input direction to the final network dynamics. In Figure 2D (left), we consider a unit-rank structure whose vectors m and n are orthogonal: M_m = M_n = 0. The input direction is orthogonal to the connectivity vectors: Σ_mI = Σ_nI = 0, so that the input strength is quantified by the amplitude of the component along h (Σ_⊥). In this configuration, because of Equation 94, the amount of structured activity quantified by κ systematically vanishes.
In Figure 2D (center), we consider again orthogonal connectivity vectors, but we take an input pattern which overlaps with n along x_2. We keep Σ_⊥ = 1 fixed and we vary the component of the input along n by increasing Σ_nI. As can be seen from the equation for κ (Equation 98), the overlap Σ_nI between the input and the left vector n has the effect of increasing the value of κ, which would otherwise vanish since the structure has null strength (M_n = 0). In response to the input, a structured state emerges. From the same equation, furthermore, one can notice that the Σ_nI term has the effect of breaking the sign reversal symmetry (x → −x) that characterizes the mean-field equations in the case of spontaneous dynamics.

In Figure 2D (right), we include strong non-vanishing structure strengths (M_mM_n = 3.5). In absence of external activity, the network dynamics thus admit two bistable solutions (Figure 1). We consider an input pattern that correlates with n but is orthogonal to the structure overlap direction (M_I = 0, Σ_nI > 0). In this configuration, the external input has the effect of disrupting the symmetry between the two stable solutions. For sufficiently strong input values, one of the two stable solutions disappears by annihilating with the unstable one.
In Figure S4C, we show that the value of the critical input strength for which one of the two stable solutions disappears can be controlled by an additional external input that overlaps with n on a different, orthogonal direction. Specifically, in Figure S4C, we tune the additional input along the direction of the structure overlap u. This input component can be thought of as a modulatory signal which controls the way the network dynamics process the input stimulus along x_2. In models of computational tasks that employ non-linear input responses (Figure 4), a modulatory input along the structure overlap can regulate the threshold value of the input strength that the network has learnt to detect. Similarly, in Figures 5 and 6, modulatory inputs are used to completely block the response to the non-relevant input stimulus, so that the readout can produce context-dependent outputs.
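The input parameterization of Equations 91 and 92 can be checked empirically; the sketch below (our own, with assumed parameter values) draws a large-N input pattern and verifies its first- and second-order statistics:

```python
import numpy as np

# Empirical check (our own sketch, assumed parameter values) of the input
# parameterization of Equations 91-92: the generated pattern I has mean M_I
# and prescribed correlations Sigma_mI, Sigma_nI with the vectors m and n.
rng = np.random.default_rng(1)
N = 200_000
Mm, Mn, Sm, Sn = 1.0, 1.5, 1.0, 0.5
MI, SmI, SnI, Sperp = 0.3, 0.4, -0.2, 1.0

x1, x2, h = rng.standard_normal((3, N))
m = Mm + Sm * x1                                        # Equation 91
n = Mn + Sn * x2
I = MI + (SmI / Sm) * x1 + (SnI / Sn) * x2 + Sperp * h  # Equation 92

print(I.mean())                      # ~ M_I
print(np.mean((m - Mm) * (I - MI)))  # ~ Sigma_mI
print(np.mean((n - Mn) * (I - MI)))  # ~ Sigma_nI
```

The empirical mean and covariances match the prescribed M_I, Σ_mI and Σ_nI up to finite-size fluctuations of order 1/√N.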

Asymmetric solutions

A major effect of external inputs is that they break the sign reversal symmetry (x → −x) present in the network dynamics without inputs. As a consequence, in the parameter regions where the network dynamics admit bistable structured states, the two stable solutions are characterized by different statistics and stability properties.

To illustrate this effect, we focus on the simple case where the external input pattern I overlaps with the connectivity vectors m and n solely on the unitary direction (M_I ≠ 0, Σ_mI = Σ_nI = 0). The solutions of the system of equations corresponding to stationary states can be visualized with the help of the graphical approach, which unveils the symmetry breaking of network dynamics induced by external inputs (Figure S4D). Similarly to the input-free case, the Δ_0 nullcline consists of a symmetric V-shaped curve. In contrast to before, however, the vertex of the nullcline is no longer fixed in (0, 0), but takes positive ordinate values also at low g values. Indeed, the value of G(0, Δ_0) does not vanish, because of the finite contribution Σ_I² from the input pattern. The nullcline curves of μ are instead strongly asymmetric. For low M_mM_n values, a single μ nullcline exists. In contrast to the input-free case, this nullcline is no longer centered in zero. As a consequence, it intersects the Δ_0 nullcline in one non-zero point, corresponding to a unique heterogeneous stationary solution. As M_mM_n increases, a second, separated branch can appear. In contrast to the input-free case, the structure strength at which the second branch appears is not always equal to unity, but depends on the mean value of the input.
If M_mM_n is strong enough, the negative branch of the nullcline can intersect the Δ_0 nullcline in two different fixed points, while a third solution is built on the positive μ nullcline. As g increases, the two intersections on the negative branch become closer and closer, and they eventually collapse together. At a critical value g_B, the network activity discontinuously jumps from negative to positive mean solutions. As they are no longer symmetrical, the stability of the positive and the negative fixed points has to be assessed separately, and gives rise to different instability boundaries. Computing the position of the outlier reveals that, when more than one solution is admitted by the mean-field system of equations, the centered one is always unstable. As the stability boundaries of different stationary solutions do not necessarily coincide, in presence of external input patterns the phase diagram of the dynamics is in general more complex (Figures S4A–S4C). Specifically, hybrid dynamical regimes, where one static solution co-exists with a chaotic attractor, can be observed.
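The annihilation of one stable solution under strong input can be illustrated with a deliberately simplified one-dimensional caricature of the mean dynamics (our own sketch, not the full DMF system), μ̇ = −μ + M_mM_n φ(μ) + M_I, whose fixed points are counted on a grid:

```python
import numpy as np

# Deliberately simplified 1D caricature (not the full DMF system): with a
# mean input M_I along u, the effective mean dynamics
#   mu' = -mu + Mm*Mn * phi(mu) + M_I,   phi = tanh,
# has two stable fixed points at M_I = 0 (strong structure, Mm*Mn = 3.5)
# and loses one of them for strong enough input. Values are assumptions.
MmMn = 3.5

def fixed_points(MI):
    grid = np.linspace(-10.0, 10.0, 40000)
    f = -grid + MmMn * np.tanh(grid) + MI
    return grid[np.where(f[:-1] * f[1:] < 0)[0]]

n_weak = len(fixed_points(0.0))    # bistable regime: 3 fixed points
n_strong = len(fixed_points(3.0))  # strong input: a single fixed point
print(n_weak, n_strong)
```

At zero input the system has two stable fixed points separated by an unstable one; as the input grows, the stable and unstable fixed points on one side approach each other and annihilate in a saddle-node bifurcation, mirroring the behavior described above.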

Transient dynamics

We now turn to transient dynamics evoked by a temporal step in the external input (Figure 2B). We specifically examine the projection of the activation vector and its average onto the two salient directions spanned by vectors m and I.

The transient dynamics of relaxation to a stationary solution can be assessed by linearizing the mean-field dynamics. We compute the time course of the average activation vector μ_i, and we finally project it onto the two orthogonal directions which are indicated in the small insets of Figure 2B. Similarly to Equation 45, the time evolution of μ_i is governed by:

μ̇_i(t) = −μ_i(t) + m_iκ(t) + I_i(t), (101)

so that, at every point in time:

μ_i(t) = m_iκ̃(t) + Ĩ_i(t), (102)

where κ̃(t) and Ĩ_i(t) coincide with the low-pass filtered versions of κ(t) and I_i(t).

When the network activity is freely decaying back to an equilibrium stationary state, Ĩ_i(t) coincides with a simple exponential relaxation to the pattern I_i. The decay timescale is set by the time constant of the activity dynamics (Equation 6), which is taken here to be equal to unity:

Ĩ_i(t) = I_i + (I_i^ic − I_i) e^(−t). (103)

The timescale of κ̃(t) is instead inherited from the dynamics of κ(t). We thus refer to our mean-field stability analysis, and we compute the relaxation time of the population statistics κ(t) from the largest eigenvalue of the stability matrix M. The eigenvalue predicts a time constant τ_r, which is in general larger than unity. As a consequence, the relaxation of κ(t) obeys, for small displacements:

κ(t) = κ_0 + (κ^ic − κ_0) e^(−t/τ_r), (104)

where the asymptotic value κ_0 is determined from the equilibrium mean-field equations (Equation 98). Finally, the time course of κ̃(t) is derived as the low-pass filtered version of Equation 104 with unit decay timescale.
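The exponential relaxation of Equation 103 can be checked with a quick numerical sketch (our own, with assumed step amplitude), by Euler-integrating the unit-time-constant low-pass filter:

```python
import numpy as np

# Quick check (our own sketch, assumed amplitudes) of Equation 103: the
# low-pass filtered input obeys dI~/dt = -I~ + I with unit time constant,
# so a step input relaxes exponentially from its initial condition.
I_ic, I_step = 0.0, 2.0
dt, T = 1e-3, 5.0

x = I_ic
for _ in range(int(T / dt)):
    x += dt * (-x + I_step)

analytic = I_step + (I_ic - I_step) * np.exp(-T)
print(x, analytic)
```

The integrated value matches the closed-form relaxation to within the Euler discretization error.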

Rank-two connectivity structures

In the following paragraphs, we provide the detailed analysis for network models with rank-two connectivity structures. The structured component of the connectivity can be written as:

P_ij = m_i^(1)n_j^(1)/N + m_i^(2)n_j^(2)/N, (105)

where the vector pairs m^(1) and m^(2), n^(1) and n^(2) are assumed to be linearly independent. As in the case of unit-rank structures, we determine the network statistics by exploiting the link between linear stability analysis and mean-field description. The study of the properties of eigenvalues and eigenvectors of the low-dimensional matrix P_ij helps to predict the complex behavior of activity above the instability and to restrict our attention to the cases of interest.

The mean activity of the network in response to a fixed input pattern I_i is given by:

μ_i = κ_1m_i^(1) + κ_2m_i^(2) + I_i. (106)

The final direction of the population activity is thus determined by the overlap values κ_1 = ⟨n_i^(1)[φ_i]⟩ and κ_2 = ⟨n_i^(2)[φ_i]⟩.

The expression of the mean-field equations for the first- and second-order statistics is determined by the geometrical arrangement of the connectivity and the input vectors. Similarly to the unit-rank case, the simplest mean-field solutions correspond to stationary states, which inherit the structure of the most unstable eigenvectors of the connectivity matrix J_ij. The stability of the heterogeneous stationary states can be assessed as before by evaluating separately the value of the radius (Equation 44) and the position of the outliers of the linear stability matrix S_ij.

Similarly to the unit-rank case, it is possible to compute the position of the outlier eigenvalues by studying the linearized dynamics of the network statistics close to the fixed point, which is given by:

d/dt (μ^1, Δ_0^1, κ_1^1, κ_2^1)^T = −(μ^1, Δ_0^1, κ_1^1, κ_2^1)^T + M (μ^1, Δ_0^1, κ_1^1, κ_2^1)^T. (107)

Note that, in κ_k^l, the subscript k = 1, 2 refers to the left vector n^(k) with which the overlap is computed, while the superscript l = 0, 1 indicates the order of the perturbation away from the fixed point.

In order to compute the elements of the linear stability matrix M, we follow and extend the reasoning discussed in detail for the unit-rank case. We start by considering the time evolution of the linearized activity μ_i^1, which similarly to Equation 45 reads:

μ̇_i^1(t) = −μ_i^1 + m_i^(1)κ_1^1 + m_i^(2)κ_2^1. (108)

At every point in time, we can write μ_i^t = m_i^(1)κ̃_1^t + m_i^(2)κ̃_2^t, where κ̃_k^t is the low-pass filtered version of κ_k^t: (1 + d/dt)κ̃_k^t = κ_k^t. In the case of orthogonal (zero-mean), random connectivity vectors, we get:

μ̇^1(t) = −μ^1, (109)

so that the elements in the first row of M vanish. In analogy with Equation 64, the linearized dynamics of Δ_0 gives instead:

Δ̇_0^1 = −Δ_0^1 + 2g²⟨[φ_iφ'_i]⟩μ^1 + g²{⟨[φ'_i²]⟩ + ⟨[φ_iφ''_i]⟩}Δ_0^1 + 2Σ_m²κ_1^0κ_1^1 + 2Σ_m²κ_2^0κ_2^1. (110)

Similarly to the unit-rank case (Equation 47), in order to determine the linear response of κ_1 we need to compute:

κ_1^1 = ⟨n_i^(1)[x_i^1φ'(x_i^0)]⟩ = ⟨n_i^(1)μ_i^1[φ'_i]⟩ + ( Δ_0^1/2 − ⟨μ_i^1μ_i^0⟩ − ⟨μ_i^1⟩⟨μ_i^0⟩ ) ⟨n_i^(1)[φ''_i]⟩. (111)

A similar expression can be derived for κ_2^1. In general, the integrals on the r.h.s. can be expressed in terms of the perturbations κ̃_1^1, κ̃_2^1 and Δ_0^1, leading to expressions of the form:

κ_1^1 = a_11κ̃_1^1 + a_12κ̃_2^1 + b_1Δ_0^1
κ_2^1 = a_21κ̃_1^1 + a_22κ̃_2^1 + b_2Δ_0^1. (112)

Applying the operator (1 + d/dt) to Equation 111 allows us to reshape the results into the final matrix form:

M = ( 0                       0                                     0                         0
      2g²⟨[φ_iφ'_i]⟩          g²{⟨[φ'_i²]⟩ + ⟨[φ_iφ''_i]⟩}          2Σ_m²κ_1^0                2Σ_m²κ_2^0
      2b_1g²⟨[φ_iφ'_i]⟩       b_1g²{⟨[φ'_i²]⟩ + ⟨[φ_iφ''_i]⟩}       2b_1Σ_m²κ_1^0 + a_11      2b_1Σ_m²κ_2^0 + a_12
      2b_2g²⟨[φ_iφ'_i]⟩       b_2g²{⟨[φ'_i²]⟩ + ⟨[φ_iφ''_i]⟩}       2b_2Σ_m²κ_1^0 + a_21      2b_2Σ_m²κ_2^0 + a_22 ), (113)

where the values of the constants a and b depend on the geometric arrangement of the structure and the input vectors. In the following, we consider several specific cases of interest. Note that the non-linear network dynamics are determined by the relative orientation of the structure and input vectors, but also by the characteristics of the statistical distribution of their elements. In contrast to the cases we analyzed so far, the precise shape of the distribution of the entries in the connectivity vectors can play an important role when the rank of P_ij is larger than unity. In the following, we focus on the case of broadly, normally distributed patterns.

Rank-two structures with null overlap

The simplest case we consider consists of rank-two matrices whose four connectivity vectors m^(1), m^(2), n^(1), and n^(2) are mutually orthogonal. From the point of view of responses to inputs, networks with this structure behave as superpositions of two independent unit-rank structures. Similarly to the unit-rank case, if the connectivity vectors are orthogonal, the network is silent in absence of external inputs: κ_1 = κ_2 = 0. A single homogeneous state – stationary or chaotic – is the unique stable attractor of the dynamics. Consistently, the eigenspectrum of J_ij does not contain any outlier, since every eigenvalue of P_ij vanishes.

In order to compute the eigenspectrum of P_ij, we can rotate the matrix onto a basis defined by an orthonormal set of vectors, and compute its eigenvalues in the transformed basis. For simplicity, we consider an orthonormal set whose first four vectors are built from the connectivity vectors:

u_1 = α_1 m^(1)
u_2 = α_2 m^(2)
u_3 = α_3 n^(1)
u_4 = α_4 n^(2), (114)

where the coefficients α_k (k = 1, …, 4) denote the normalization factors. In this basis, the first four rows and columns of the rotated matrix P'_ij read:

P'_ij = (1/N) ( 0  0  1/(α_1α_3)  0
                0  0  0           1/(α_2α_4)
                0  0  0           0
                0  0  0           0 ), (115)

all the remaining entries being fixed to 0. From the present matrix form, it is easy to verify that all the eigenvalues of P'_ij, and thus all the eigenvalues of P_ij, vanish. Note that rewriting P_ij in an orthonormal basis simplifies the search for its eigenvalues also in more complex cases where the connectivity vectors share several overlap directions. In those cases, a proper basis needs to be built starting from the connectivity vectors through a Gram-Schmidt orthonormalization process.

As a side note, we observe that, even though P'_ij (and thus P_ij) admits only vanishing eigenvalues, its rank is still equal to two. Indeed, the rank can be computed as N minus the dimensionality of the kernel associated to P'_ij, defined by the set of vectors x obeying P'x = 0. As P'_ij contains N − 2 empty rows, this equation imposes two independent constraints on the components of x. As a consequence, the dimensionality of the kernel equals N − 2, and the rank is equal to two.

We turn to the responses that are obtained in presence of external inputs. We examine the network dynamics in response to a normalized input I which partially correlates with one of the left-connectivity vectors, here n^(1):

I = n^(1) Σ_nI/Σ_n² + x √(Σ_I² − Σ_nI²/Σ_n⁴). (116)

We turn to responses that are obtained in presence of external inputs. We examine the network dynamics in response to a normalized input I which partially correlates with one of the left-connectivity vectors, here I − m ( 1 ) . The overlap values are given by: κ 1 = Σ n I 〈 [ ϕ i ′ ] 〉 κ 2 = 0 , (117)

and they can be used to close the mean-field equations together with the equation for the first ( μ = 0 ) and second-order statistics. In the case of stationary states we have: Δ 0 = g 2 〈 [ ϕ i 2 ] 〉 + Σ m 2 ( κ 1 2 + κ 2 2 ) + Σ I 2 . (118)

Similarly to the unit-rank case, we find that I elicits a network response confined to a plane. Similar arguments allow us to derive the two equations needed for the chaotic states.

In order to assess the stability of the stationary states, we evaluate the position of the outliers in the stability eigenspectrum by computing the eigenvalues of the matrix M defined in Equation 113. In the case of orthogonal structures and correlated input patterns I, a little algebra reveals that all the a values vanish, while we have:

b1 = (1/2) Σ_nI ⟨[ϕi″]⟩
b2 = 0. (119)

We conclude that the first and the last rows of M always vanish. Furthermore, the second and the third rows are proportional to one another. As a consequence, the stability analysis predicts at most one outlier eigenvalue, which is indeed observed in the spectrum (not shown). The outlier is negative, as the effect of introducing inputs along the direction of the left vector n(1) is to further stabilize the dynamics. As will be shown below, more than one outlier can be observed when the low-dimensional structure involves overlap directions.

Rank-two structures with internal pairwise overlap

As a second case, we consider structured matrices where the two connectivity pairs m(1), n(1) and m(2), n(2) share two different overlap directions, defined by the vectors y1 and y2. We set:

m(1) = √(Σ² − ρ1²) x1 + ρ1 y1
m(2) = √(Σ² − ρ2²) x2 + ρ2 y2
n(1) = √(Σ² − ρ1²) x3 + ρ1 y1
n(2) = √(Σ² − ρ2²) x4 + ρ2 y2, (120)

where Σ² is the variance of the connectivity vectors, and ρ1² and ρ2² quantify the overlaps along the directions y1 and y2.

By rotating Pij onto the orthonormal basis that can be built from m(1) and m(2) by orthogonalizing the left vectors n(1) and n(2), one can easily check that the two non-zero eigenvalues of Pij are given by λ1 = ρ1² and λ2 = ρ2². They correspond, respectively, to the two right-eigenvectors m(1) and m(2). In the absence of external inputs, an instability is thus likely to occur in the direction of the m(k) vector which corresponds to the strongest overlap.

We specifically focus on the degenerate condition where the two overlaps are equally strong, ρ1 = ρ2 = ρ, so that any combination of m(1) and m(2) is a right-eigenvector. The mean-field equations for the first-order statistics read:

κ1 = ρ² κ1 ⟨[ϕi′]⟩
κ2 = ρ² κ2 ⟨[ϕi′]⟩. (121)

Similarly to Equation 89, the two equations admit a silent state (κ1 = κ2 = 0) and a non-trivial state, determined by two identical conditions which read:

1 = ρ² ⟨[ϕi′(0, Δ0)]⟩. (122)

The equation above determines the value of Δ0. Note that the non-trivial state exists only for ρ > 1.

A second condition is imposed by the equation for the second-order momentum, which reads, for stationary solutions:

Δ0 = g² ⟨[ϕi²]⟩ + Σ² (κ1² + κ2²). (123)

As the value of Δ0 is fixed, the mean-field set of equations fixes only the sum κ1² + κ2², but not each single component. The mean-field theory thus returns a one-dimensional continuum of solutions, which forms a ring of radius √(κ1² + κ2²) in the m(1)–m(2) plane.
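The statement that the non-zero eigenvalues of Pij are λ1 = ρ1² and λ2 = ρ2² can be checked numerically. The sketch below uses illustrative values of Σ, ρ1, ρ2 (not taken from the text), draws the x and y vectors as independent standard Gaussians, and exploits the fact that the non-zero eigenvalues of the N × N matrix P = (m(1) n(1)ᵀ + m(2) n(2)ᵀ)/N coincide with those of the 2 × 2 overlap matrix O_kl = n(k)·m(l)/N.

```python
import numpy as np

# Numerical check of the non-zero eigenvalues of the overlapping
# rank-two structure of Equation 120. Sigma, rho1, rho2 are
# illustrative; x_i and y_i are independent standard Gaussian vectors.
rng = np.random.default_rng(0)
N = 4000
Sigma, rho1, rho2 = 1.5, 1.2, 0.9

x1, x2, x3, x4, y1, y2 = rng.standard_normal((6, N))
m1 = np.sqrt(Sigma**2 - rho1**2) * x1 + rho1 * y1
m2 = np.sqrt(Sigma**2 - rho2**2) * x2 + rho2 * y2
n1 = np.sqrt(Sigma**2 - rho1**2) * x3 + rho1 * y1
n2 = np.sqrt(Sigma**2 - rho2**2) * x4 + rho2 * y2

# Non-zero eigenvalues of P = (m1 n1^T + m2 n2^T)/N equal the
# eigenvalues of the 2x2 overlap matrix O_kl = n(k).m(l)/N.
O = np.array([[n1 @ m1, n1 @ m2],
              [n2 @ m1, n2 @ m2]]) / N
lam = np.sort(np.linalg.eigvals(O).real)[::-1]
print(lam, (rho1**2, rho2**2))  # lam is close to (1.44, 0.81) up to finite-size noise
```

For these parameters the two eigenvalues approach ρ1² and ρ2² as N grows, with fluctuations of order 1/√N.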