Emergent Second Law for Non-Equilibrium Steady States
The Gibbs distribution universally characterizes states of thermal equilibrium. In order to extend the Gibbs distribution to non-equilibrium steady states, one must relate the self-information $I(x) = -\log(P_{\rm ss}(x))$ of microstate x to measurable physical quantities. This is a central problem in non-equilibrium statistical physics. By considering open systems described by stochastic dynamics which become deterministic in the macroscopic limit, we show that changes $\Delta I = I(x_t) - I(x_0)$ in steady state self-information along deterministic trajectories can be bounded by the macroscopic entropy production Σ. This bound takes the form of an emergent second law $\Sigma + k_b\,\Delta I \ge 0$, which contains the usual second law Σ ≥ 0 as a corollary, and
is saturated in the linear regime close to equilibrium. We thus obtain a tighter version of the
second law of thermodynamics that provides a link between the deterministic relaxation of a
system and the non-equilibrium fluctuations at steady state. In addition to its fundamental value,
our result leads to novel methods for computing non-equilibrium distributions, providing a
deterministic alternative to Gillespie simulations or spectral methods.
Introduction
When a system is at equilibrium with its environment (i.e., when no energy currents are
exchanged) the probability of a given microstate x is given by the Gibbs distribution1,2,3
$$P_{\rm eq}(x) = e^{-\beta\Phi(x)}/Z,$$
(1)
where Φ(x) is the free energy of microstate x, $\beta = (k_b T)^{-1}$ the inverse temperature, and Z the partition function. Out of equilibrium, no such universal form is available, and one must instead work with the steady state self-information

$$I(x) \equiv -\log(P_{\rm ss}(x)),$$

(2)

defined in terms of the steady state distribution Pss(x).
In this work we prove, for a very general class of open systems displaying a macroscopic limit
where a deterministic dynamics emerges, the following fundamental bound on changes of self-
information:
$$\Sigma_a \equiv \Sigma + k_b\left(I(x_t) - I(x_0)\right) \ge 0,$$
(3)
As an application, we consider a model of low-power electronic memories built with CMOS (complementary metal-oxide-semiconductor) technology. These memories have a non-equilibrium phase transition from a monostable phase to a bistable phase that allows the storage of a bit of information. As we will see, the transition is well captured by Eq. (3), which also allows us to bound the probability of fluctuations around the deterministic fixed points. Finally, we
show that a general coarse-graining procedure generates equivalent models with minimal entropy
production, and that in this way the bound in Eq. (3) becomes tighter. When applied to the
CMOS memory, this improved bound enables the full reconstruction of the steady state
distribution arbitrarily away from equilibrium.
Results
To obtain Eq. (3) we consider stochastic systems described by autonomous Markov jump
processes. Thus, let $\{n \in \mathbb{N}^k\}$ be the set of possible states of the system, and λρ(n) be
the rates at which jumps n → n + Δρ occur, for ρ = ± 1, ± 2, ⋯ and Δ−ρ = −Δρ (ρ indexes a possible
jump and Δρ is the corresponding change in the state). Each state has energy E(n) and internal
entropy S(n). Thermodynamic consistency is introduced by the local detailed balance (LDB)
condition18,19. It relates the forward and backward jump rates of a given transition with the
associated entropy production:
$$\sigma_\rho = \log\frac{\lambda_\rho(n)}{\lambda_{-\rho}(n+\Delta_\rho)} = -\beta\left[\Phi(n+\Delta_\rho) - \Phi(n) - W_\rho(n)\right].$$
(4)
In the previous equation, Φ(n) = E(n) − TS(n) is the free energy of state n, and Wρ(n) is the non-
conservative work provided by external sources during the transition. For simplicity, we have
considered isothermal conditions at inverse temperature $\beta = (k_b T)^{-1}$, and
therefore the system is taken away from equilibrium by the external work sources alone. More
general situations in which a system interacts with several reservoirs at different temperatures
can be treated in the same way, this time in terms of a Massieu potential taking the place
of βΦ(n)18. Important classes of systems admitting the previous description are chemical
reaction networks and electronic circuits, which are powered by chemical or electrostatic
potential differences, respectively. Note that, by energy conservation, the heat provided by the
environment during transition n → n + Δρ is Qρ(n) = E(n + Δρ) − E(n) − Wρ(n), and therefore kbσρ =
− Qρ(n)/T + S(n + Δρ) − S(n).
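To make the setting concrete, the sketch below checks the LDB condition of Eq. (4) numerically for a minimal toy model: a single species X created and annihilated through two channels, one receiving a work W per created particle and one undriven. The model, rates and parameter values are our own illustration (not the CMOS memory treated later), with kb = T = 1 so that β = 1.

```python
import numpy as np
from scipy.special import gammaln

# A minimal sketch, assuming a hypothetical two-channel birth-death model
# "0 <-> X" (our own toy example): channel 1 receives work W per created
# particle, channel 2 is undriven. Units: k_b = T = 1.
Omega, k, r, eps, W = 100.0, 1.0, 0.5, 0.5, 0.3

def Phi(n):
    # Free energy Phi(n) = E(n) - T*S(n), with E(n) = eps*n and
    # mixing entropy S(n) = log(Omega^n / n!)
    return eps * n - (n * np.log(Omega) - gammaln(n + 1))

channels = {  # rho -> (lambda_rho(n), lambda_-rho(m), W_rho)
    1: (lambda n: k * Omega, lambda m: k * m * np.exp(eps - W), W),
    2: (lambda n: r * Omega, lambda m: r * m * np.exp(eps),     0.0),
}

# Local detailed balance, Eq. (4): log(l_rho(n)/l_-rho(n+1)) = -[dPhi - W_rho]
for rho, (fwd, bwd, Wr) in channels.items():
    for n in [0, 10, 200]:
        lhs = np.log(fwd(n) / bwd(n + 1))
        rhs = -(Phi(n + 1) - Phi(n) - Wr)
        assert np.isclose(lhs, rhs), (rho, n)
```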
The probability distribution Pt(n) over the states of the system at time t evolves according to the
master equation
$$\partial_t P_t(n) = \sum_\rho\left[\lambda_\rho(n-\Delta_\rho)\,P_t(n-\Delta_\rho) - \lambda_\rho(n)\,P_t(n)\right].$$
(5)
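The master equation can be explored directly by truncating the state space at some n = N and exponentiating the generator. The sketch below does this for the toy model introduced above (again a hypothetical example; for large state spaces one would instead use sparse solvers or Gillespie sampling):

```python
import numpy as np
from scipy.linalg import expm, null_space

# Sketch: evolve the master equation, Eq. (5), for the hypothetical toy model
# above, truncating the state space at n = N.
Omega, k, r, eps, W, N = 30.0, 1.0, 0.5, 0.5, 0.3, 150
n = np.arange(N + 1)
lam_up   = np.full(N + 1, (k + r) * Omega)              # total rate of n -> n+1
lam_down = n * (k * np.exp(eps - W) + r * np.exp(eps))  # total rate of n -> n-1

# Generator matrix: L[m, j] is the total rate of j -> m; columns sum to zero,
# so that dP/dt = L @ P reproduces Eq. (5) on the truncated state space.
L = np.zeros((N + 1, N + 1))
L[n[:-1] + 1, n[:-1]] = lam_up[:-1]
L[n[1:] - 1, n[1:]]   = lam_down[1:]
L[n, n] = -L.sum(axis=0)

P0 = np.zeros(N + 1); P0[0] = 1.0     # start from the empty state n = 0
Pt = expm(1.0 * L) @ P0               # P_t at t = 1 via the matrix exponential
Pss = null_space(L)[:, 0]             # the steady state spans ker(L)
Pss /= Pss.sum()
```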
From the master equation and the LDB conditions one can derive the energy balance
$$d_t\langle E\rangle = \langle\dot{W}\rangle + \langle\dot{Q}\rangle,$$
(6)
and the usual version of the second law:
$$\dot{\Sigma} = \dot{\Sigma}_e + d_t\langle S\rangle = \frac{k_b}{2}\sum_{\rho,n}\left(j_\rho(n) - j_{-\rho}(n+\Delta_\rho)\right)\log\frac{j_\rho(n)}{j_{-\rho}(n+\Delta_\rho)} \ge 0,$$
(7)
where $j_\rho(n) = \lambda_\rho(n)\,P_t(n)$ is the current associated with transition ρ. In the previous equations, $\langle S\rangle = \sum_n P_t(n)\left(S(n) - k_b\log P_t(n)\right)$ is the entropy of the system, $\langle E\rangle = \sum_n E(n)\,P_t(n)$ is the average energy, and $\dot{\Sigma}_e$ is the entropy flow rate, given by
$$T\dot{\Sigma}_e = -\langle\dot{Q}\rangle = -\sum_{\rho,n} Q_\rho(n)\,j_\rho(n).$$
(8)
The entropy production rate admits the decomposition $\dot{\Sigma} = \dot{\Sigma}_a + \dot{\Sigma}_{na}$, where

$$\dot{\Sigma}_a = \frac{k_b}{2}\sum_{\rho,n}\left(j_\rho(n) - j_{-\rho}(n+\Delta_\rho)\right)\log\frac{j_\rho^{\rm ss}(n)}{j_{-\rho}^{\rm ss}(n+\Delta_\rho)},$$

(10)
and
$$\dot{\Sigma}_{na} = \frac{k_b}{2}\sum_{\rho,n}\left(j_\rho(n) - j_{-\rho}(n+\Delta_\rho)\right)\log\frac{P_t(n)\,P_{\rm ss}(n+\Delta_\rho)}{P_{\rm ss}(n)\,P_t(n+\Delta_\rho)} = -k_b\,d_t D$$
(11)
are the adiabatic and non-adiabatic contributions to the entropy production rate $\dot{\Sigma}$, respectively. In Eq. (10) we have introduced the steady state probability currents $j_\rho^{\rm ss}(n) = \lambda_\rho(n)\,P_{\rm ss}(n)$. The non-adiabatic contribution $\dot{\Sigma}_{na}$ is related to the relaxation of the system towards the steady state, since it vanishes when the steady state is reached. This is further evidenced by the identity in the second line of Eq. (11), where $D = \sum_n P_t(n)\log(P_t(n)/P_{\rm ss}(n))$ is the relative entropy between Pt(n) and Pss(n): a reduction in this relative entropy leads to a positive non-adiabatic entropy production. The adiabatic contribution $\dot{\Sigma}_a$ corresponds to the dissipation of 'housekeeping heat'20,21, and at steady state matches the entropy flow rate $\dot{\Sigma}_e$. An important property of the previous decomposition is that both contributions are individually positive: $\dot{\Sigma}_a \ge 0$ and $\dot{\Sigma}_{na} \ge 0$22,23,24,25. Thus, the last inequality and the second line in Eq. (11) imply that the relative entropy D decreases monotonically, and since D is positive by definition, it is a Lyapunov function for the stochastic dynamics.
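These properties can be verified numerically. The following sketch evolves the hypothetical toy two-channel model introduced above and checks that both contributions remain non-negative while D(Pt∥Pss) decays; for a one-dimensional birth-death chain, Pss follows exactly from the recursion Pss(n+1)/Pss(n) = λ+(n)/λ−(n+1), which avoids numerical noise in the tails:

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import gammaln

# Sketch (hypothetical two-channel toy model, k_b = T = 1): along the master
# equation evolution, the adiabatic and non-adiabatic contributions of
# Eqs. (10)-(11) are individually positive, and D(P_t || P_ss) decays.
Omega, k, r, eps, W, N = 30.0, 1.0, 0.5, 0.5, 0.3, 150
n = np.arange(N + 1)
up   = [np.full(N + 1, k * Omega), np.full(N + 1, r * Omega)]  # n -> n+1, per channel
down = [k * n * np.exp(eps - W),   r * n * np.exp(eps)]        # n -> n-1, per channel
lam_up, lam_down = up[0] + up[1], down[0] + down[1]

L = np.zeros((N + 1, N + 1))                 # generator of Eq. (5)
L[n[:-1] + 1, n[:-1]] = lam_up[:-1]
L[n[1:] - 1, n[1:]]   = lam_down[1:]
L[n, n] = -L.sum(axis=0)

# Exact steady state from the 1D recursion Pss(n+1)/Pss(n) = lam_+(n)/lam_-(n+1)
logP = np.concatenate([[0.0], np.cumsum(np.log(lam_up[:-1] / lam_down[1:]))])
Pss = np.exp(logP - logP.max()); Pss /= Pss.sum()

mu0 = 5.0                                    # smooth (Poisson) initial condition
P = np.exp(-mu0 + n * np.log(mu0) - gammaln(n + 1)); P /= P.sum()

U = expm(0.05 * L)                           # propagator over dt = 0.05
for step in range(100):
    P = U @ P
    Sa = Sna = 0.0
    for c in range(2):                       # sum over both channels
        jf,  jb  = up[c][:-1] * P[:-1],   down[c][1:] * P[1:]
        jfs, jbs = up[c][:-1] * Pss[:-1], down[c][1:] * Pss[1:]
        Sa  += np.sum((jf - jb) * np.log(jfs / jbs))              # Eq. (10)
        Sna += np.sum((jf - jb) * np.log(jf * jbs / (jb * jfs)))  # Eq. (11)
    D = np.sum(P * np.log(P / Pss))          # Lyapunov function: decreases to 0
    assert Sa > -1e-10 and Sna > -1e-10      # both contributions non-negative
```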
Macroscopic limit
In the following we will assume the existence of a scale parameter Ω controlling the size of the
system in question. For example, Ω can be taken to be the volume V of the solution in well-
mixed chemical reaction networks, or the typical value C of capacitance in the case of electronic
circuits (see the example below). In addition, we will assume that for large Ω: (i) the typical values of the density x ≡ n/Ω are intensive, (ii) the internal energy and entropy functions E(Ωx) and S(Ωx) are extensive, and (iii) the transition rates λρ(Ωx) are also
extensive. Under those conditions, the probability distribution Pt(x) satisfies a large deviations
(LD) principle17,26,27:
$$P_t(x) \asymp e^{-\Omega I_t(x)},$$
(12)
which just means that the limit $I_t(x) \equiv \lim_{\Omega\to\infty} -\log(P_t(x))/\Omega$ is well defined. Then, It(x) is a positive, time-dependent 'rate function', since it gives the rate at which the probability of fluctuation x decays with the scale. Note that, by Eq. (12), the steady state self-information introduced in Eq. (2) satisfies $I(x) = \Omega I_{\rm ss}(x)$ to dominant order in the macroscopic limit. In other words, the large deviations principle states that the instantaneous self-information $I_t(x) \equiv -\log(P_t(x))$ is an extensive quantity26, and we
can think of the rate function as the self-information density. Thus, in the following we will
consider the ansatz $P_t(x) = e^{-\Omega I_t(x)}/Z_t$, with $Z_t \equiv \sum_x e^{-\Omega I_t(x)}$, as an approximation to the actual time-dependent
distribution. This amounts to neglecting sub-extensive contributions to the instantaneous self-
information. As explained below, It(x) takes its minimum value It(xt) = 0 at the deterministic
trajectory xt, which is equivalent to Pt(x) = δ(x − xt) for Ω → ∞. Plugging the previous ansatz in
the master equation of Eq. (5) we note
that $\lambda_\rho(x - \Delta_\rho/\Omega)\,P_t(x - \Delta_\rho/\Omega) \simeq \lambda_\rho(x)\,P_t(x)\,e^{\Delta_\rho\cdot\nabla I_t(x)}$ to dominant order in Ω → ∞. Noting also that log(Zt) is sub-
extensive, it is possible to see that It(x) evolves according to
$$\partial_t I_t(x) = \sum_\rho \omega_\rho(x)\left[1 - e^{\Delta_\rho\cdot\nabla I_t(x)}\right],$$

(13)

where $\omega_\rho(x) \equiv \lim_{\Omega\to\infty} \lambda_\rho(\Omega x)/\Omega$ are the scaled jump rates. As a consequence of Eq. (4), they satisfy the macroscopic LDB conditions

$$\log\frac{\omega_\rho(x)}{\omega_{-\rho}(x)} = -\beta\left[\Delta_\rho\cdot\nabla\phi(x) - W_\rho(x)\right],$$

(14)

where $\phi(x) \equiv \lim_{\Omega\to\infty}\Phi(\Omega x)/\Omega$ and Wρ(x) are the scaled free energy and work.
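The content of Eq. (12) is easy to visualise for the toy model introduced above: the self-information density −log(Pss(n))/Ω, plotted against x = n/Ω, collapses onto a single curve as Ω grows. A minimal sketch (hypothetical model and parameters as before):

```python
import numpy as np

# Sketch: the LD principle of Eq. (12) visualised for the hypothetical toy
# two-channel model: the curves -log(Pss(n))/Omega, plotted against
# x = n/Omega, collapse onto a single rate function as Omega grows.
k, r, eps, W = 1.0, 0.5, 0.5, 0.3
cu, cd = k + r, k * np.exp(eps - W) + r * np.exp(eps)  # omega_+ = cu, omega_- = cd*x

for Omega in [10.0, 30.0, 100.0]:
    N = int(4 * Omega)                                 # truncate well above the mean
    n = np.arange(N + 1)
    # exact steady state from the recursion Pss(n+1)/Pss(n) = lam_+(n)/lam_-(n+1)
    logP = np.concatenate([[0.0], np.cumsum(np.log(cu * Omega / (cd * n[1:])))])
    Pss = np.exp(logP - logP.max()); Pss /= Pss.sum()
    I = -np.log(Pss) / Omega; I -= I.min()             # self-information density
    # plotting I against n/Omega for the three values of Omega shows the collapse
```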
From Eq. (12) we see that as Ω is increased, Pt(x) is increasingly localised around the minimum
of the rate function It(x), which is the most probable value. Also, deviations from that typical
state are exponentially suppressed in Ω. Thus, the limit Ω → ∞ is a macroscopic low-noise limit
where a deterministic dynamics emerges. In fact, from Eq. (13) one can show that the evolution of
the minima xt of It(x) is ruled by the closed non-linear differential equations
$$d_t x_t = u(x_t), \quad \text{with} \quad u(x) \equiv \sum_{\rho>0} i_\rho(x)\,\Delta_\rho,$$
(15)
where iρ(x) ≡ ωρ(x) − ω−ρ(x) are the scaled deterministic currents17. The vector field u(x)
corresponds to the deterministic drift in state space. For chemical reaction networks the
dynamical equations in Eq. (15) are the chemical rate equations, while for electronic circuits they
are provided by regular circuit analysis.
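For the one-species toy model used above, Eq. (15) reduces to a single ordinary differential equation. The sketch below (hypothetical model and parameters as before) integrates it with an explicit Euler step and checks the relaxation to the fixed point x* where ω+(x*) = ω−(x*):

```python
import numpy as np

# Sketch: deterministic rate equation, Eq. (15), for the hypothetical toy
# two-channel model. The scaled rates are omega_+(x) = k + r and
# omega_-(x) = x*(k*exp(eps - W) + r*exp(eps)); all jumps have Delta_rho = +-1.
k, r, eps, W = 1.0, 0.5, 0.5, 0.3

def u(x):  # u(x) = sum_{rho>0} i_rho(x) * Delta_rho
    i1 = k * (1.0 - x * np.exp(eps - W))   # driven channel current
    i2 = r * (1.0 - x * np.exp(eps))       # undriven channel current
    return i1 + i2

x, dt = 0.01, 1e-3                         # initial density and Euler time step
for _ in range(20000):                     # integrate d_t x_t = u(x_t) up to t = 20
    x += dt * u(x)

xstar = (k + r) / (k * np.exp(eps - W) + r * np.exp(eps))  # u(x*) = 0
assert np.isclose(x, xstar, rtol=1e-3)     # trajectory relaxes to the fixed point
```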
In the following section we obtain bounds for the steady state rate function Iss(x), which according to Eq. (13) satisfies:
$$0 = \sum_\rho \omega_\rho(x)\left[1 - e^{\Delta_\rho\cdot\nabla I_{\rm ss}(x)}\right].$$
(16)
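For one-dimensional models in which every jump has Δρ = ±1, Eq. (16) can be solved in closed form: writing y = exp(∂xIss), the equation becomes quadratic in y, with trivial root y = 1 and nontrivial root y = ω−(x)/ω+(x). The rate function then follows by quadrature. The sketch below applies this to the toy model and compares the result with −log(Pss)/Ω from the exact steady state; this is our own illustrative construction, not the procedure used for the CMOS memory:

```python
import numpy as np

# Sketch: for the hypothetical 1D toy model every jump has Delta_rho = +-1,
# so Eq. (16) gives dIss/dx = log(omega_-(x)/omega_+(x)); Iss follows by
# quadrature and can be compared with -log(Pss)/Omega, cf. Eq. (12).
Omega, k, r, eps, W, N = 200.0, 1.0, 0.5, 0.5, 0.3, 400
cu, cd = k + r, k * np.exp(eps - W) + r * np.exp(eps)   # omega_+ = cu, omega_- = cd*x

xs = np.linspace(1e-3, 2.0, 2000)
dI = np.log(cd * xs / cu)                               # dIss/dx from Eq. (16)
Iss = np.concatenate([[0.0], np.cumsum(0.5 * (dI[1:] + dI[:-1]) * np.diff(xs))])
Iss -= Iss.min()                                        # fix the constant: min(Iss) = 0

n = np.arange(N + 1)                                    # exact truncated steady state,
lam_up, lam_down = cu * Omega * np.ones(N + 1), cd * n  # via the birth-death recursion
logP = np.concatenate([[0.0], np.cumsum(np.log(lam_up[:-1] / lam_down[1:]))])
Pss = np.exp(logP - logP.max()); Pss /= Pss.sum()
Iss_exact = -np.log(Pss) / Omega
Iss_exact -= Iss_exact.min()
# Iss(xs) and Iss_exact at x = n/Omega agree up to O(log(Omega)/Omega) corrections.
```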
Combining Eqs. (7), (10) and (11), the positivity of the adiabatic contribution can be written as

$$\dot{\Sigma}_a = \dot{\Sigma} - d_t\langle S_{\rm sh}\rangle + k_b\,d_t\langle I\rangle \ge 0,$$

(17)

where $\langle S_{\rm sh}\rangle \equiv -k_b\sum_n P_t(n)\log P_t(n)$ is the Shannon entropy and $\langle I\rangle \equiv \sum_n P_t(n)\,I(n)$ is the average self-information. A similar relation has been obtained before, although it was not explicitly written in terms of the self-information I. It is important to note that ⟨Ssh⟩ is sub-extensive in Ω (according to Eq. (12), it grows as log(Ω)), and therefore can be neglected in the macroscopic limit. Thus, changes in average self-information can be bounded by the entropy production, which can in turn be computed or measured in terms of actual energy and entropy flows (see Eqs. (7) and (8)). However, the result in Eq. (17) is not yet in a useful form, since the average ⟨I⟩ does not depend only on I(n), the unknown quantity we are interested in, but also on the instantaneous distribution Pt(n), which is also typically unknown. This issue is circumvented in the macroscopic limit, since in that case Pt(x) is strongly localised around the deterministic values xt, and therefore $\langle I\rangle \simeq \Omega I_{\rm ss}(x_t)$ to dominant order in Ω → ∞. Thus, in the same limit, Eq. (17) for the adiabatic entropy production rate $\dot{\Sigma}_a$ reduces to
$$\dot{\sigma}_a(x_t) = \dot{\sigma}(x_t) + k_b\,d_t I_{\rm ss}(x_t) \ge 0,$$
(18)
where the intensive entropy production rate $\dot{\sigma} \equiv \lim_{\Omega\to\infty}\dot{\Sigma}/\Omega$, evaluated along the deterministic trajectory, is given by

$$\dot{\sigma}(x_t) = -\dot{q}(x_t)/T + d_t s(x_t) = k_b\sum_{\rho>0}\left(\omega_\rho(x_t) - \omega_{-\rho}(x_t)\right)\log\frac{\omega_\rho(x_t)}{\omega_{-\rho}(x_t)} \ge 0.$$
(20)
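Both sides of Eq. (18) can be evaluated in closed form for the toy model: σ̇(xt) from Eq. (20) and dtIss(xt) = u(xt) ∂xIss(xt), with ∂xIss = log(ω−/ω+) as above. The sketch below (hypothetical model, kb = 1) verifies that σ̇a stays non-negative along the relaxation and approaches the strictly positive housekeeping dissipation at the fixed point:

```python
import numpy as np

# Sketch (hypothetical toy model, k_b = 1): along the deterministic relaxation
# of Eq. (15), evaluate sigma_dot from Eq. (20) and d_t Iss(x_t) = u * dIss/dx,
# with dIss/dx = log(omega_-/omega_+) (the 1D solution of Eq. (16)), and verify
# the emergent second law at the trajectory level, Eq. (18).
k, r, eps, W = 1.0, 0.5, 0.5, 0.3
chans = [(k, np.exp(eps - W)), (r, np.exp(eps))]  # per channel: omega_+ = a, omega_- = a*b*x

def sigma_dot(x):  # Eq. (20): sum over channels, each term non-negative
    return sum((a - a * b * x) * np.log(1.0 / (b * x)) for a, b in chans)

def u(x):          # deterministic drift, Eq. (15)
    return sum(a * (1.0 - b * x) for a, b in chans)

def dIss(x):       # dIss/dx = log(omega_-(x)/omega_+(x))
    return np.log(x * sum(a * b for a, b in chans) / sum(a for a, b in chans))

x, dt = 0.05, 1e-3
for _ in range(10000):
    sdot_a = sigma_dot(x) + u(x) * dIss(x)   # Eq. (18): sigma_dot_a(x_t)
    assert sigma_dot(x) >= 0.0 and sdot_a > -1e-12
    x += dt * u(x)
# At the fixed point sdot_a equals sigma_dot > 0: the 'housekeeping' dissipation
# that maintains the non-equilibrium steady state.
```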
Two comments are in order. Firstly, using the LDB conditions of Eq. (14), the adiabatic contribution can be rewritten as

$$\dot{\sigma}_a(x_t)/k_b = u(x_t)\cdot\nabla\left(I_{\rm ss}(x) - \beta\phi(x)\right)\big|_{x=x_t} + \beta\,\dot{w}(x_t),$$
(21)
where we also used ϕ(x) = ϵ(x) − Ts(x) and that dtF(xt) = u(xt) ⋅ ∇ F(xt) for any function F(x).
Secondly, we note that in detailed-balanced settings (i.e., if Wρ(x) = 0 ∀ ρ, x) the steady state rate function is just Iss(x) = βϕ(x) (up to a constant), in accordance with the Gibbs distribution (this follows from Eqs. (14) and (16)). Thus, the difference g(x) ≡ Iss(x) − βϕ(x) appearing in Eq. (21) quantifies the deviations from thermal equilibrium. Expanding Eq. (16) to first order in Wρ(x) and g(x), it can be shown that
$$u^{(0)}(x)\cdot\nabla g(x) = -\beta\,\dot{w}^{(0)}(x) + O(W_\rho^2),$$
(22)
where $u^{(0)}(x) = \sum_{\rho>0} i_\rho^{(0)}(x)\,\Delta_\rho$ and $\dot{w}^{(0)}(x) = \sum_{\rho>0} i_\rho^{(0)}(x)\,W_\rho(x)$ are the lowest-order deterministic drift and work rate, respectively17. These are defined in terms of the detailed-balanced deterministic currents $i_\rho^{(0)}(x) = \omega_\rho^{(0)}(x) - \omega_{-\rho}^{(0)}(x)$, constructed from the scaled transition rates evaluated at Wρ(x) = 0, which, according to the LDB conditions of Eq. (14), satisfy $\log\left(\omega_\rho^{(0)}(x)/\omega_{-\rho}^{(0)}(x)\right) = -\beta\,\Delta_\rho\cdot\nabla\phi(x)$. Comparing the result of Eq. (22) with Eq. (21), we see that $\dot{\Sigma}_a = 0$ to linear order in Wρ(x). Then, in the linear response regime we can write:
$$I_{\rm ss}(x_t) - I_{\rm ss}(x_0) \simeq -\int_0^t dt'\,\dot{\sigma}^{(0)}(x_{t'})/k_b = \beta\left[\phi(x_t) - \phi(x_0) - \int_0^t dt'\,\dot{w}^{(0)}(x_{t'})\right],$$
(23)
where the integration is performed along trajectories solving the detailed-balanced deterministic dynamics $d_t x_t = u^{(0)}(x_t)$.
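As a consistency check, the sketch below (toy model as before, kb = T = 1, weak driving W = 0.02) integrates the detailed-balanced dynamics with an Euler scheme and compares the accumulated change of Iss with the right-hand side of Eq. (23); the two sides agree up to O(W²) corrections:

```python
import numpy as np

# Sketch (hypothetical toy model, k_b = T = 1): numerical check of Eq. (23).
# For small W, the change of Iss along the detailed-balanced trajectory
# (d_t x_t = u0(x_t), all W_rho = 0) should match phi(x_t) - phi(x_0) minus
# the integrated work rate. For our toy model the scaled entropy is
# s(x) = x - x*log(x), so phi(x) = eps*x - s(x).
k, r, eps, W = 1.0, 0.5, 0.5, 0.02      # weak driving: linear response regime
phi = lambda x: eps * x - (x - x * np.log(x))
u0  = lambda x: (k + r) * (1.0 - x * np.exp(eps))   # drift at W = 0
w0  = lambda x: W * k * (1.0 - x * np.exp(eps))     # work rate: only channel 1 is driven
dIss = lambda x: np.log(x * (k * np.exp(eps - W) + r * np.exp(eps)) / (k + r))

x0, dt, T = 0.05, 1e-4, 10.0
x, work, Iss_change = x0, 0.0, 0.0
for _ in range(int(T / dt)):            # Euler integration of the W = 0 dynamics
    work += dt * w0(x)
    Iss_change += dt * u0(x) * dIss(x)  # exact d_t Iss(x_t) along the trajectory
    x += dt * u0(x)

rhs = phi(x) - phi(x0) - work           # right-hand side of Eq. (23), beta = 1
print(Iss_change, rhs)                  # agree up to O(W^2) corrections
```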