Critical Mass, Length, Temperature …

Many systems and processes—whether in nature or society—can appear unchanging until they reach a certain transition point, after which their behavior shifts dramatically. Think of disease outbreaks, the spread of information in a network, or battery failures in electric vehicles. Let’s go through some examples.

Disease Outbreaks

Consider a disease outbreak such as COVID-19 in 2020. The goal of nations worldwide was not just to reduce infections slightly, but to bring the average number of new infections per case—known as the reproduction number, or $R_0$—below the critical threshold of 1. When $R_0$ is less than 1, each infected person, on average, transmits the disease to fewer than one other person, and the outbreak dies out. However, if $R_0$ exceeds 1 even by a small margin, the number of infections can grow rapidly. This illustrates how minor changes in our behavior (such as reducing contacts, wearing masks, or isolating when necessary) can have a significant impact on the course of an outbreak.

Social Networks

Take social networks as another example. On platforms like Facebook, the algorithm often recommends “friends of friends.” This means that when a piece of information is shared, its likelihood of reaching someone outside your immediate circle can suddenly jump from almost zero to almost one once a critical level of connectivity in the network is achieved. This threshold behavior is key to understanding how information, trends, or even misinformation can spread rapidly through a network.

Battery Explosions

As electric vehicles become more common, the safety of their batteries becomes increasingly important. In some incidents a damaged battery remains stable, while in others the damage triggers a runaway process, where one failing cell causes its neighbors to fail as well. This process may eventually lead to a dangerous explosion. Here, a critical point is reached when the damage propagates faster than the battery can dissipate the excess energy, leading to a sudden and dramatic failure.


These examples demonstrate how systems can exhibit qualitatively different behavior depending on one or more parameters. This behavior is known as criticality. In the following sections, we will define and classify such systems and their dependencies more precisely. Although the three examples above might seem similar at first glance, they can be classified into two broad categories of critical behavior. We will describe these two classes, examine the decisive differences between them, and explore further examples to deepen our understanding of criticality and the subtle differences between its various forms. After the necessary definitions and terms, I have included interactive simulations throughout this post to give an intuitive feeling for the described phenomena.

Terminology

To properly describe and analyze phenomena like those in the examples above, we need some terminology in order to wrap our heads around the processes going on. Each system can be described with:

  • An Order Parameter: a quantity that characterizes the state of the system under investigation, or at least a specific aspect of it.
  • A Control Parameter: a parameter that is varied to observe changes in the system’s behavior. When we model a system, we have to be careful about what we can actually influence.

Mathematically, the relationship can be written as:

$$X(t) = f(t; p),$$

where $X(t)$ is the order parameter as a function of time, $p$ is the control parameter, and $f(t;p)$ describes the time dependence of $X$ and the relationship between the order and control parameter.

Although the order parameter is a function of time and is parameterized by the control parameter, it is conceptually easier to think of the order parameter as a function of the control parameter alone, and we often describe what happens to the order parameter over time. Therefore, we may drop the explicit temporal dependence of $X$ in some places.

In systems exhibiting criticality, there exists a critical value of the control parameter, $p_c$, at which the system undergoes a qualitative change.

Two Classes of Systems Exhibiting Criticality

We can differentiate between two types of critical behavior based on the continuity of the order parameter $X$ at the critical point $p_c$ in the limit as $t \to \infty$. Think of it this way: for each value of $p$, we first let $t \to \infty$, and then ask how the limiting value of $X$ behaves as $p$ crosses $p_c$.

Tipping Point Criticality (discontinuous)

In a tipping point scenario, the order parameter changes discontinuously at the critical point, meaning:

$$\lim_{p \to p_c^+} X(p) \neq \lim_{p \to p_c^-} X(p).$$

A canonical example of tipping point criticality is a system exhibiting exponential growth. Here, the order parameter is the population size $N$, and the control parameter is the exponential growth factor $k$ in:

$$N(t) = N_0 \times k^t.$$

The critical value is $k_c = 1$. For any fixed $k < 1$,

$$\lim_{t \to \infty} N(t) = 0,$$

while for any fixed $k > 1$,

$$\lim_{t \to \infty} N(t) = \infty.$$

This abrupt change as $k$ crosses 1 is characteristic of tipping point criticality. In the language of dynamical systems, $k_c$ corresponds to an unstable or half-stable fixed point of the system.
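To see this numerically, here is a minimal sketch in plain Python (the growth factors $0.99$ and $1.01$ are arbitrary illustrative choices, not tied to any particular system):

```python
# Evaluate N(t) = N0 * k**t just below, at, and above the critical value k_c = 1.
N0 = 100.0

for k in (0.99, 1.00, 1.01):  # slightly subcritical, critical, slightly supercritical
    trajectory = [N0 * k**t for t in (0, 100, 1_000, 10_000)]
    print(f"k = {k}: {[round(n, 3) for n in trajectory]}")

# For k = 0.99 the population decays toward 0, for k = 1.00 it stays at N0,
# and for k = 1.01 it blows up: an arbitrarily small change in k around 1
# flips the long-term fate of the system.
```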

Continuous Critical Transition

In a continuous critical transition, the order parameter changes smoothly at $p_c$. Mathematically:

$$ \lim_{p \to p_c^+} X(p) = \lim_{p \to p_c^-} X(p). $$

A classic example is percolation in large networks. Consider a (theoretically infinite) social network where each potential connection (friendship) between two individuals is formed with a certain probability $a$. This probability $a$ is our control parameter. We then look for the emergence of a large (spanning) cluster, meaning a cluster that grows to encompass a finite fraction of the entire network in the limit of infinite network size.

We define the order parameter $P( \text{spanning cluster} ;a)$ as the fraction of all individuals (nodes) that belong to this large spanning cluster—if such a cluster exists. In an infinite system:

  • For $a < a_c$, there is no infinite cluster, so $P(\text{spanning cluster};a) = 0$.
  • For $a > a_c$, a spanning (infinite) cluster emerges, so $P(\text{spanning cluster};a) > 0$.

We can express this succinctly:

$$ P(\text{spanning cluster};a) = \begin{cases} 0, & a < a_c, \\ \text{positive, growing continuously from } 0, & a > a_c. \end{cases} $$

At the critical value $a_c$, the cluster “turns on” but does so continuously, with no sudden jump from zero to a finite value. This smooth onset of large-scale connectivity is a defining feature of continuous critical transitions:

$$ \lim_{a \to a_c^+} P(\text{spanning cluster};a) = \lim_{a \to a_c^-} P(\text{spanning cluster};a) = 0. $$

Such behavior is described as a continuous shift of a stable fixed point in the language of dynamical systems.

Criticality and Power Laws

When one digs deeper into the maths and physics of critical phenomena, one will often read the term power law. A power law can be simply defined as one quantity depending on another raised to some power, like $y \sim x^2$. This is nothing astounding on its own. The most important part of any power law is always what these quantities, $x$ and $y$ in this example, represent. In critical phenomena, power laws occur in two different flavors. The first can be labeled “competing power laws” and can be a potential cause of criticality, while the second, more intricate one is “power-law distributions & scale invariance”, which describes systems at or near the critical value.

Competing Power Laws

In the case of competing power laws, it is often the case that geometric properties of a model cause the quantity of interest to depend on both surface area and volume, for example. Most of the time, the order parameter involves some ratio of these dependencies. Now, if you vary the characteristic scale of the model or object, at some point the volume dependence ($\sim x^3$) will overtake the surface-area dependence ($\sim x^2$). These dependencies aren’t always so straightforward and often include additional constants. However, as you move the control parameter—here, the scale of the model—far enough, you reach a point where one dependence overtakes the other. This can in turn lead to critical phenomena.

Below I provide an interactive simulation of two competing power laws:

$$ y_1 = a_1 \times x^{b_1} \quad\text{and}\quad y_2 = a_2 \times x^{b_2}. $$

Adjust the sliders on the right to change the parameters. The graph on the left shows the curves plotted over fixed $x$– and $y$–axis ranges. In addition, a dashed purple line represents the ratio $y_1 / y_2.$ You can switch between a normal (linear) scale and a log–log scale.
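If you prefer code to sliders, here is a minimal Python sketch (the parameter values are arbitrary) that computes the crossover scale at which one power law overtakes the other:

```python
# Two competing power laws y1 = a1 * x**b1 and y2 = a2 * x**b2.
# If b1 > b2, y1 must eventually overtake y2; setting y1 = y2 gives
# the crossover scale x* = (a2 / a1) ** (1 / (b1 - b2)).
a1, b1 = 0.5, 3.0  # e.g., a volume-like term
a2, b2 = 4.0, 2.0  # e.g., a surface-like term

x_star = (a2 / a1) ** (1.0 / (b1 - b2))
print(f"crossover at x* = {x_star}")  # (4 / 0.5) ** 1 = 8.0

for x in (2, 8, 32):
    ratio = (a1 * x**b1) / (a2 * x**b2)
    print(f"x = {x:>2}: y1/y2 = {ratio:.2f}")  # < 1 below x*, = 1 at x*, > 1 above
```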






Power Law Distributions & Scale Invariance

The second flavor of power laws in critical phenomena emerges only at or near the critical value of the control parameter, $p_c$. In some cases, an order parameter follows a probability distribution with a power-law tail—that is, the probability of obtaining an outlier greater than a given value decays as a power law. For clarity, the cumulative probability distribution of a random variable $X$, which is our order parameter, is defined as

$$G(x) := P(X \leq x) = \int_{-\infty}^x f(y) dy.$$

The function of interest here is the complementary cumulative distribution:

$$P(X > x) = 1 - G(x) \sim x^{-\alpha}.$$
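As a quick illustration, samples with such a tail can be drawn by inverse-transform sampling. The sketch below (plain Python; the exponent $\alpha = 2.5$ is an arbitrary choice) compares the empirical tail against the theoretical CCDF:

```python
import random

# Inverse-transform sampling of a power-law (Pareto) tail:
# if U ~ Uniform(0,1), then X = (1 - U)**(-1/alpha) satisfies
# P(X > x) = x**(-alpha) for x >= 1, i.e. a power-law CCDF.
alpha = 2.5  # arbitrary tail exponent for illustration
random.seed(42)
samples = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(100_000)]

# Empirical check of the tail against the theoretical CCDF.
for x in (1.5, 3.0, 10.0):
    empirical = sum(s > x for s in samples) / len(samples)
    print(f"P(X > {x:>4}) ~= {empirical:.4f}   theory: {x ** -alpha:.4f}")
```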

Here is a small interactive visualization of a power law distribution with a pdf $f(x) \sim x^{-\alpha}$ (for $x \geq 1$) and the complementary cumulative distribution function $1 - G(x) \sim x^{-(\alpha - 1)}$:




Additionally, an order parameter may itself follow a power law near $p_c$:

$$X \sim |p - p_c|^{-\alpha}.$$

This is an example of scale invariance at or near the critical value. Scale invariance means that rescaling the distance from the critical point rescales $X$ by a constant factor, independent of the scale you start from: doubling $|p - p_c|$ always changes $X$ by the factor $2^{-\alpha}$, since

$$\frac{X(p_c + 2\delta)}{X(p_c + \delta)} = \frac{(2\delta)^{-\alpha}}{\delta^{-\alpha}} = 2^{-\alpha} \quad \text{for any } \delta.$$

Real World Examples

Having covered the introductory examples and the basic notation for describing criticality, we will now dive into further examples in more detail.

Thermal Runaway & Nuclear Chain Reaction

The first examples we look at are similar in both outcome and emergence. One of them we have already encountered in the introduction: the phenomenon of thermal runaway, as in damaged batteries. The second, which can also produce an explosion, but of much larger magnitude, is the nuclear chain reaction. I have written about nuclear chain reactions in more detail in my book review of The Los Alamos Primer, and anyone interested in the theoretical and historical events around the discovery of the nuclear chain reaction should check it out or read the book itself (link to the book is provided below).

In both thermal runaway (e.g., in batteries) and nuclear chain reactions, we can think of an exponential state function describing the growth of damage (damaged battery cells) or fission events (fissioned atoms). A simple control parameter is the overall geometry—for example, the characteristic size $s$ of the battery or the bomb. The key reason the exponent in the state function becomes critical can be understood by competing power laws:

$$ N(t; s) = N_0 \times \exp\bigl(f(s) \cdot t \bigr), \quad \text{where} \quad f(s) \sim \frac{a_0 \cdot s^3}{a_1 \cdot s^2}. $$

Here, the energy sources scale with the volume ($\sim s^3$), while the energy sinks (e.g., heat dissipation or neutron leakage) scale with the surface area ($\sim s^2$). If the volume-based source term outgrows the surface-based sink term, $f(s)$ grows large enough to drive runaway growth of the state function—leading either to thermal runaway in a battery or a self-sustaining chain reaction in a nuclear device. If instead the sink term dominates, the energy release dies out.
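To make this concrete, here is a toy Python sketch. It models the net growth rate as source minus sink, $f(s) = c_{\text{source}} \cdot s^3 - c_{\text{sink}} \cdot s^2$, a difference form of the competing power laws above (the constants and the evaluation time are arbitrary illustrative choices):

```python
import math

# Toy thermal-runaway / chain-reaction model: the net growth rate is a
# volume-scaling source term minus a surface-scaling sink term,
#   f(s) = c_source * s**3 - c_sink * s**2.
# The sign of f(s) flips at the critical size s_c = c_sink / c_source.
c_source, c_sink = 1.0, 10.0  # arbitrary illustration values
s_crit = c_sink / c_source
print(f"critical size s_c = {s_crit}")

N0, t = 1.0, 0.01  # initial damage/fission count and a short evaluation time
for s in (8.0, 10.0, 12.0):  # below, at, and above the critical size
    f = c_source * s**3 - c_sink * s**2
    print(f"s = {s:>4}: f(s) = {f:>7.1f},  N(t) = {N0 * math.exp(f * t):.3f}")
```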

Interactive Simulation of a Critical Exponent

Below you see a toy model of a cube (representing a battery) and a sphere (representing an atomic bomb), which can go critical based on the control parameter $s$. The model follows an exponential state function as described above. Note the difference due to geometry—the cube goes critical when its side length is $\sqrt{3}$ times the radius at which the sphere goes critical. In this toy model, the sphere goes critical at a control-parameter value above $10$, and the cube at about $\sqrt{3} \cdot 10 \approx 17.3$.


[Interactive widget: growth exponent $\sim f\left(\frac{\text{volume}}{\text{surface area}}\right)$, displayed live for the cube with side $s$ (exponent $\sim \frac{s^3}{s^2}$) and the sphere with radius $s$ (exponent $\sim \frac{4 \pi s^3}{3 \cdot 4 \pi s^2}$) as $s$ is varied.]

Griffith Crack Length

Imagine a sail under tension or an air-filled balloon. In both cases, small imperfections or micro-cracks may appear in the material. Under stress, these tiny cracks can suddenly start to grow, potentially leading to a failure of the material. This behavior is common in many stressed materials and offers an intuitive way to think about critical behavior. The workings behind this process were first studied by A. A. Griffith, hence the name Griffith Crack Length. The basic reasoning follows the book Structures by J. E. Gordon. (For anyone interested in more insights from structural mechanics, I recommend this book; the link is provided below.)

The basic idea is that when a crack of length $L$ forms, the material can lower its overall energy by relieving stress. The energy released is generally proportional to $L^2$, while the cost to create new surfaces (the crack itself) increases linearly with $L$. We can write this energy balance as

$$\Delta E(L) = \alpha L^2 - \beta L.$$

Here, $\alpha$ represents a factor related to the tension or stress in the material and other material properties, while $\beta$ depends on the material’s resistance to forming new surfaces (its toughness). This energy balance tells us that when the crack is small, the energy cost of forming the crack (proportional to $L$) outweighs the benefit (proportional to $L^2$). But if the crack grows past a certain length, the energy gain from stress relief becomes dominant.

We define the critical crack length $L_c$ by setting $\Delta E(L) = 0$, which gives

$$L_c = \frac{\beta}{\alpha}.$$

For cracks with $L < L_c$, the net energy is negative, and the crack remains stable. However, if $L$ exceeds $L_c$, the net energy becomes positive and the crack tends to grow.

To capture the growth dynamics, we can model the change in crack length with time by the differential equation

$$\frac{dL}{dt} = \lambda \bigl(\alpha L^2 - \beta L\bigr) = \lambda L (\alpha L - \beta),$$

where $\lambda$ is a constant that sets the time scale of the process. This equation shows that if $L < L_c$, then $(\alpha L - \beta)$ is negative and the crack length remains roughly constant or heals. When $L > L_c$, however, $(\alpha L - \beta)$ becomes positive, and the crack grows exponentially over time.
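A minimal forward-Euler sketch of this ODE (with arbitrary parameter values) shows the two regimes: a crack slightly below $L_c$ heals, while one slightly above it runs away:

```python
# Forward-Euler integration of dL/dt = lam * L * (alpha * L - beta)
# for cracks just below and just above the Griffith length L_c = beta / alpha.
alpha, beta, lam = 2.0, 10.0, 0.1  # arbitrary illustration values
L_c = beta / alpha                 # critical crack length = 5.0
dt, steps = 0.01, 2000

for L0 in (4.9, 5.1):  # slightly subcritical and slightly supercritical cracks
    L = L0
    for _ in range(steps):
        L += dt * lam * L * (alpha * L - beta)
        if L > 100.0:  # treat this as catastrophic failure
            break
    status = "failed" if L > 100.0 else "stable"
    print(f"L0 = {L0}: final L = {min(L, 100.0):.2f} ({status})")
```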

This example is particularly interesting because the crack length $L$ plays a dual role. It is both an order parameter (describing the state of the system) and a control parameter (determining whether the system is stable or unstable). Moreover, the parameters $\alpha$ and $\beta$ are not fixed; they depend on the specific material properties as well as on the tension or stress applied to the material. This coupling of factors makes the Griffith crack length an unusual and insightful example of criticality in physical systems.

Interactive Simulation of Crack Length

This interactive simulation visualizes a material held under tensile stress between two clamps. On the left, an initial crack (whose length you can adjust with a slider) represents a small flaw in the material. As the crack grows, tensile energy is released, which is highlighted in green. In this simulation, the released energy increases roughly as $\sim L^2$, while the crack itself grows linearly, $\sim L$. The red overlay shows the level of tensile stress applied to the material. By adjusting the tension and the material toughness (i.e., the energy required to extend the crack), a critical crack length is defined and marked by a dashed line. When the crack exceeds this critical length, the material becomes unstable and fails rapidly.


Percolation

Percolation intuitively refers to something “seeping through” a medium—like water through porous rock or messages through a social network. To study such processes, we often use random graph models. For example, you can use a lattice (e.g., a 2D grid) where only neighboring sites can influence each other, or you can think of a general graph where any two nodes might be directly linked.

In each scenario, edges or nodes are randomly included or excluded with probability $p$. For site percolation on a 2D lattice, we imagine having $n$ sites arranged in a grid, and each site remains “occupied” with probability $p$. Any occupied site can only connect to its similarly occupied neighbors, whereas unoccupied sites are effectively removed. In bond percolation, we again have a 2D lattice of $n$ sites, but now all sites stay in place while each edge between adjacent sites is included with probability $p$ and excluded otherwise. Lastly, in the classic Erdős–Rényi graph $G(n,p)$, there are $n$ nodes, and every possible edge between two distinct nodes appears with probability $p$. Each of these random graph models has its own internal structure and a distinct threshold $p_c$ at which large-scale connectivity emerges.

We can then ask what the control parameter and order parameters are in these models. In all cases, the main control parameter that governs connectivity is $p$, the probability that a node or edge is present. Common choices for an order parameter include the fraction of nodes in the largest (“giant”) connected component, the probability of having a path of open sites or edges between two opposite sides of a lattice, or even the entire distribution of cluster sizes that tells us how many clusters of a given size $s$ form under different values of $p$.

Each model displays a phase transition at a specific $p_c$, although $p_c$ differs from model to model. For 2D site percolation on a square lattice, $p_c \approx 0.5927$. For 2D bond percolation on a square lattice, $p_c = 0.5$. For the Erdős–Rényi graph, the “giant” component emerges at the threshold $p_c \sim \frac{1}{n}$, while the stricter property of full connectivity requires the larger threshold $p \sim \frac{\log(n)}{n}$.

At these critical probabilities $p_c$, several key phenomena occur. Exactly at $p = p_c$ (in an infinite system), the distribution of cluster sizes often follows a power law, $$P(\text{cluster size} = s) \sim s^{-\alpha},$$ with some exponent $\alpha$, reflecting the scale invariance of the critical point. Below $p_c$, no infinite or “giant” connected component exists, and the probability of a spanning path is essentially zero in the infinite limit, while above $p_c$ such a giant component appears, and in many percolation models the fraction of sites in the giant cluster grows continuously from 0 as $p$ increases past $p_c$. In an infinite lattice or graph, the probability of having a spanning (infinite) cluster jumps from 0 for $p < p_c$ to 1 for $p > p_c$, although for finite graphs this transition is smoothed out and only becomes sharp as $n$ tends to infinity.

It is helpful to keep several important observations in mind when comparing these models. First, each model has its own critical probability $p_c$, so one cannot assume that site percolation, bond percolation, and Erdős–Rényi graphs share the same threshold or exponents. Next, while power-law behavior at criticality does appear in all these systems, the specific exponents vary with the model and the dimension. Furthermore, real-world networks or more sophisticated models often relax simple independence assumptions. In those cases, proving power laws and other signatures of criticality can be significantly more challenging.

Even though each percolation or random-graph model has unique geometric constraints, different $p_c$ values, and its own set of critical exponents, they all exhibit the broader phenomenon of a phase transition at some $p_c$. Past this threshold, connected clusters become large enough to span a substantial fraction of the graph, and the cluster-size distribution takes on a power-law form at the critical point.
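As a rough numerical companion to this discussion, here is a minimal site-percolation sketch, assuming NumPy and SciPy are available (scipy.ndimage.label finds the clusters with nearest-neighbor connectivity); it estimates the largest-cluster fraction on a finite lattice for several values of $p$:

```python
import numpy as np
from scipy.ndimage import label  # 4-connectivity by default, matching site percolation

# Fraction of sites in the largest cluster vs. occupation probability p,
# a finite-size approximation of the percolation order parameter.
rng = np.random.default_rng(0)
L = 200

for p in (0.40, 0.55, 0.5927, 0.65, 0.80):
    grid = rng.random((L, L)) < p  # each site occupied with probability p
    labels, num = label(grid)
    if num == 0:
        frac = 0.0
    else:
        sizes = np.bincount(labels.ravel())[1:]  # cluster sizes, skipping background
        frac = sizes.max() / (L * L)
    print(f"p = {p:.4f}: largest-cluster fraction = {frac:.3f}")
```

On a finite lattice the transition is smoothed out, but the fraction should still rise sharply around $p_c \approx 0.5927$.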

Interactive Site-Percolation Simulation on a lattice

Below is a site percolation model on a $100 \times 100$ lattice. Each cell is occupied with probability $p$, meaning water can flow through it. In this simulation, “occupied” means permeable to water (white squares), which may be a little confusing, but I wanted to keep the theoretical threshold from the literature at $59\%$ (with the opposite convention it would read $1 - 0.59 = 41\%$). The slider adjusts $p$, and the “Start” and “Reset” buttons begin or re-initialize the simulation. When the simulation runs, blue cells show water seeping downward from the top. A plot displays the largest cluster fraction, which remains small if $p$ is well below the critical threshold near $59\%$, then rapidly grows as $p$ surpasses this threshold, reflecting the sudden jump in percolation probability.


(Approximate critical threshold $p_c \approx 59$% is set as default.)

The Ising Model of Magnetization

The Ising Model has a physical background and motivation. Real-world ferromagnets exhibit the remarkable ability to become magnetized below a certain critical temperature (the Curie temperature). In the early 20th century, physicists sought a simplified yet insightful way to model how local “spin” variables of atoms or molecules can align to produce macroscopic magnetization. Thus the Ising model was born, originally proposed by Wilhelm Lenz and solved in 1D by his student Ernst Ising, after whom the model was later named.

In a ferromagnetic context, each site on a lattice represents a localized spin that can be “up” $+1$ or “down” $-1$. The model captures how neighboring spins tend to align—mimicking the microscopic origin of ferromagnetism—without delving into the complicated details of electron orbitals or crystalline structure.

Imagine a grid (in two or higher dimensions) where each site $i$ has a spin that can be either $+1$, representing a spin pointing up, or $-1$, representing a spin pointing down. When two neighboring spins are the same, the energy of the system is lowered, and when they differ, the energy is raised. This interaction naturally favors the formation of large regions or clusters where spins are aligned in the same direction. At high temperatures, thermal agitation causes random fluctuations that disrupt this alignment, whereas at low temperatures the tendency for spins to align becomes dominant.

The 2D Ising Model is defined on a square lattice of size $N = L \times L$, where each site $i$ has a spin $s_i \in \{+1, -1\}$. The energy of a configuration $\{s\}$ is usually specified by

$$H(\{s\}) = -J \sum_{(i,j)} s_i s_j - h \sum_{i} s_i,$$

where $J > 0$ is the ferromagnetic coupling constant favoring parallel neighbors, $\sum_{(i,j)}$ denotes a sum over neighbor pairs (each pair counted once), and $h$ is an external magnetic field that is frequently set to zero in theoretical studies. In statistical mechanics, the probability of a configuration $\{s\}$ at thermal equilibrium and temperature $T$ is given by the Boltzmann distribution:

$$p(\{s\}) = \frac{1}{Z}\exp\bigl(-\beta H(\{s\})\bigr), \qquad \text{where } \beta = \frac{1}{k_B T},$$

and $Z = \sum_{\{s\}} \exp(-\beta H(\{s\}))$ is the partition function summing over all $2^N$ spin configurations. Several power laws and power-law distributions emerge in this model. The first notable quantity is the magnetization per site,

$$m = \frac{1}{N}\sum_{i=1}^N s_i,$$

which acts as an order parameter. Below a critical temperature $T_c$, the system spontaneously breaks symmetry so that $m \neq 0$, while above $T_c$, the magnetization in the thermodynamic limit is zero, corresponding to a disordered phase. Another important quantity is the spin–spin correlation function,

$$G(r) = \langle s_is_{i+r}\rangle -\langle s_i\rangle\langle s_{i+r}\rangle,$$

where $r$ measures the distance between two spins and the angle brackets indicate ensemble averages. Exactly at $T_c$, $G(r)$ decays as a power law $G(r)\sim r^{-\eta}$, whereas away from $T_c$ it decays exponentially. One can also analyze clusters of aligned spins. At $T_c$, the size distribution of these clusters often follows a power law

$$P(\text{cluster size} = s) \sim s^{-\alpha}.$$

The essence of the Ising model is that thermal equilibrium at temperature $T$ is governed by a Boltzmann factor $e^{-\beta H}$. Unlike a purely random assignment of spins, the Ising distribution correlates spins: configurations with large aligned regions get weighted more than configurations with many misaligned bonds. Thus, as $T$ changes, the system exhibits a phase transition and critical phenomena.
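For readers who want to tinker beyond the interactive widget below, here is a minimal single-spin-flip Metropolis sketch, assuming NumPy is available (the lattice size and sweep counts are kept small for illustration, so the estimates are noisy and the plain-Python loop is slow):

```python
import numpy as np

# Single-spin-flip Metropolis sampler for the 2D Ising model
# with J = 1, h = 0 and periodic boundary conditions.
rng = np.random.default_rng(1)
L, J = 32, 1.0
T_c = 2.0 / np.log(1.0 + np.sqrt(2.0))  # Onsager's exact 2D critical temperature

def sweep(spins, beta):
    """One Monte Carlo sweep: L*L attempted single-spin flips."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * J * spins[i, j] * nb  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for T in (1.5, T_c, 3.5):  # below, at, and above T_c
    spins = np.ones((L, L), dtype=int)  # start from the fully ordered state
    for _ in range(200):  # short equilibration, for illustration only
        sweep(spins, 1.0 / T)
    print(f"T = {T:.3f}: |m| = {abs(spins.mean()):.3f}")
```

Below $T_c$ the magnetization stays close to 1, above $T_c$ it decays toward 0, and near $T_c$ it fluctuates strongly—exactly the critical behavior the widget visualizes.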

At first glance, the Ising model may seem quite similar to the percolation model discussed earlier, but there are important differences. In a percolation model, edges or sites are chosen at random with some probability $p$, independently of one another. For instance, in bond percolation, each edge is either present with probability $p$ or absent, while in site percolation, each node is occupied with probability $p$. These percolation systems exhibit phase transitions and power-law cluster-size distributions at a critical probability $p_c$.

In contrast, the Ising model is built on a fixed lattice geometry, meaning that all nearest-neighbor edges are present; we do not randomly delete any edges. Instead of using an independent Bernoulli process, the Ising model samples entire spin configurations from the Boltzmann distribution, given by $p(\{s\}) \sim \exp(-\beta H(\{s\}))$. Moreover, in the Ising model the spins interact: adjacent spins influence one another through the coupling constant $J$, so the probability of finding aligned pairs is not independent. This is fundamentally different from percolation, where each bond or site is typically determined independently.

Despite these differences, both percolation and the Ising model display similar emergent properties at criticality—like large connected clusters and scale-invariant (power-law) distributions—because both belong to related universality classes in statistical physics. The precise exponents differ, but the qualitative phenomenon is a common theme in continuous phase transitions.

Interactive Ising-Model Simulation on a lattice

Below is an interactive simulation of the 2D Ising model on a $100 \times 100$ lattice. All values (temperature $T$, magnetization $M$, Hamiltonian $H$, etc.) are dimensionless, and the interaction strength $J$ is fixed to 1. The lattice display shows spins in green (spin $+1$) and white (spin $-1$). We update the spins dynamically rather than sampling the entire configuration space in advance, which would be infeasible for large systems, so it takes a few seconds for the moving-average quantities (magnetization, Hamiltonian) and the three graphs to settle after clicking “Start.” The first plot (log-log) shows how spins correlate with distance, the second plot shows the cluster size distribution, and the third plot compares the simulated magnetization (green dot) to the known theoretical curve (red), illustrating the critical behavior of the Ising model near its critical temperature $T_c$.




Disease Outbreak Revisited – SIR Model

The SIR model is a classical framework used in epidemiology to capture the dynamics of infectious diseases. In its simplest formulation, it divides the population into three groups: susceptible ($S$), infected ($I$), and recovered ($R$). Individuals move from being susceptible to infected, and then from infected to recovered. The rates at which these transitions happen govern how an epidemic unfolds. Mathematically, the basic model is:

$$ \frac{dS}{dt} = -\beta \frac{SI}{N}, \quad \frac{dI}{dt} = \beta \frac{SI}{N} - \gamma I, \quad \frac{dR}{dt} = \gamma I, $$

where $\beta$ is the infection rate, $\gamma$ is the recovery rate, and $N = S + I + R$ is the total population. When $R_0 = \beta/\gamma$, the basic reproduction number, exceeds 1, the disease spreads rapidly, leading to a potential tipping point. Notice that these equations are non-linear because of the $S \cdot I$ terms in the first and second equations.
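A minimal forward-Euler sketch of these equations (plain Python, arbitrary parameter values) shows the threshold at $R_0 = 1$:

```python
# Forward-Euler integration of the SIR equations for R0 just below
# and just above the critical value R0 = beta / gamma = 1.
N, gamma = 1_000_000.0, 0.1
dt, steps = 0.1, 20_000

for R0 in (0.9, 1.5):
    beta = R0 * gamma
    S, I, R = N - 100.0, 100.0, 0.0  # seed the outbreak with 100 infected
    peak = I
    for _ in range(steps):
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        peak = max(peak, I)
    print(f"R0 = {R0}: peak infected = {peak:,.0f}, final recovered = {R:,.0f}")
```

For $R_0 = 0.9$ the outbreak fizzles out with only a small number of total infections, while for $R_0 = 1.5$ a large fraction of the population is eventually infected.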

Viewed through the lens of critical phenomena, the SIR model shows two types of critical behavior:

  1. Tipping Point Criticality

    In this view, the main control parameter is the ratio $\beta/\gamma$. The order parameter can be the total number of infected individuals. When $\beta/\gamma$ crosses 1, the system undergoes a sudden jump from a small outbreak to a large-scale epidemic. This is reminiscent of a discontinuous (first-order) transition, where a small change in the control parameter triggers a major shift in disease dynamics.

  2. Continuous Transitions in Network-Based Models

    If we consider a population as a network where edges represent possible transmission events, additional factors like social distancing, network connectivity, and heterogeneous contact weights become relevant. In these extended SIR models, the outbreak can spread through clusters of connected individuals. As control parameters such as link density or infection probability vary, the size of the infected clusters and the correlation lengths can follow power laws near the critical point. This behavior is characteristic of continuous (second-order) transitions, where there is no abrupt tipping point but rather a gradual change in the scale of cluster formation.

By integrating both perspectives, the SIR model can capture key features of epidemic spread, from rapid tipping points to network-driven cluster effects. This dual viewpoint helps illuminate how diseases may propagate in different settings and under various control measures.

Interactive SIR(S) Model Simulation

Below is a simulation of a SIR(S) Model on a $100 \times 100$ lattice. The second S stands for Re-Susceptible, meaning that a recovered individual can return to the susceptible state with a certain probability. Unlike the mean-field version, each infected cell here can only transmit the infection to its 8 neighboring cells. With the sliders on the right, you can adjust the infection, recovery, and re-susceptibility probabilities during the simulation. On the left, a time-series plot shows the fractions of susceptible, infected, and recovered individuals at each time step, while on the right a histogram of infected-cluster sizes is shown, capped at $100$ for clarity.

Remarks

First, I want to point out that I swept most of the subject of dynamical systems under the rug. Capturing all the interesting and important ideas would not fit in a single book, let alone in one (too long) blog post. The book that really comes close to perfection on this topic, and which I highly recommend, is Nonlinear Dynamics and Chaos by Steven Strogatz (link provided below).

Critical phenomena often arise in dynamical systems and have a large conceptual overlap with unstable and half-stable fixed points and their bifurcations. All can be seen as “sudden qualitative changes in the behavior of a system.” Some types of bifurcations, such as saddle-node or subcritical pitchfork bifurcations, can exhibit a sudden jump and thus fit the tipping-point criticality concept. Other bifurcations, like supercritical pitchforks, involve the appearance or modification of equilibrium solutions without a discontinuous jump and fall in the category of continuous transitions. Hence, criticality can be primarily seen as a subset of bifurcation phenomena extended to probabilistic systems. The purely dynamical-systems literature generally handles randomness only through mean-field approaches.

Regarding continuous-transition criticality, all my examples involved some randomness, but that does not have to be the case. Purely deterministic dynamical systems can also exhibit continuous-transition criticality, like the bifurcations mentioned above. Furthermore, many deterministic processes lead to chaotic or complex behavior that mirrors randomness, and these phenomena can also arise there. Since it does not really matter whether the randomness is intrinsic (as in quantum mechanics) or stems from a deterministic but unpredictable source (as in fluid mechanics), the examples I gave still illustrate the main ideas behind continuous-transition criticality.

Lastly, I noted that systems undergoing continuous critical transitions often follow a power law at or near the critical value, without formally justifying that claim. For some readers, this may be the most intriguing aspect. In reality, deriving power laws in these models can be quite involved and is specific to each case. The main concepts include renormalization group theory, mean-field theory, and advanced probability theory (for example, stochastic differential equations). Providing these derivations goes beyond the scope of this blog post, which focuses on illustrating basic features of criticality. If you would like to explore these ideas further, the Wikipedia pages on these topics are a good starting point; they explain in more detail why scale invariance often leads to power-law behavior near critical points.

Conclusion

The main points to take home are:

  • Many interesting and relevant processes in our world aren’t qualitatively the same across their entire parameter space. Often they exhibit drastic changes of behavior once a certain threshold or combination of parameters is reached. This behavior change is often deeply counterintuitive.

  • In tipping point critical systems, there is a discontinuity at the critical parameter value. In continuous systems, as the name suggests, the behavior change happens continuously.

  • Critical phenomena may exhibit power laws. However, two very different flavors can arise:

    1. Competing power laws (seen in many tipping point phenomena). There, a control parameter drives different power-law terms, and once the exponents make one term overtake the other, the system reaches a critical value.
    2. Power laws in continuous transition phenomena, which describe distributions of system variables (like cluster sizes or correlation functions). These power laws signal scale invariance at the transition, meaning there is no characteristic size or scale in the system right at criticality.

Sources

  • Nonlinear Dynamics and Chaos by Steven H. Strogatz: amazon
  • The Los Alamos Primer by Robert Serber: amazon
  • Structures by J.E. Gordon: amazon
  • Simulation codes are inline JavaScript snippets which can be inspected via browser developer tools.