Emergent Second Law for Non-Equilibrium Steady States


Abstract

The Gibbs distribution universally characterizes states of thermal equilibrium. In order to extend
the Gibbs distribution to non-equilibrium steady states, one must relate the self-information
$I(x) = -\log(P_{ss}(x))$ of microstate $x$ to measurable physical quantities. This is a central
problem in non-equilibrium statistical physics. By considering open systems described by
stochastic dynamics which become deterministic in the macroscopic limit, we show that changes
$\Delta I = I(x_t) - I(x_0)$ in steady-state self-information along deterministic trajectories can
be bounded by the macroscopic entropy production $\Sigma$. This bound takes the form of an
emergent second law $\Sigma + k_b \Delta I \geq 0$, which contains the usual second law
$\Sigma \geq 0$ as a corollary, and is saturated in the linear regime close to equilibrium. We thus
obtain a tighter version of the second law of thermodynamics that provides a link between the
deterministic relaxation of a system and the non-equilibrium fluctuations at steady state. In
addition to its fundamental value, our result leads to novel methods for computing
non-equilibrium distributions, providing a deterministic alternative to Gillespie simulations or
spectral methods.

Introduction
When a system is at equilibrium with its environment (i.e., when no energy currents are
exchanged) the probability of a given microstate $x$ is given by the Gibbs distribution1,2,3

$P_{eq}(x) = e^{-\beta \Phi(x)}/Z$,
(1)

where $\beta = (k_b T)^{-1}$ is the inverse temperature of the environment, $\Phi(x)$ is the free
energy of microstate $x$ (for states with no internal entropy, $\Phi(x)$ is just the energy),
and $Z = \sum_x \exp(-\beta \Phi(x))$ is the partition function. This central result
of equilibrium statistical physics has universal validity, and its relevance in most areas of physics
cannot be overstated. A natural question is whether a similar result also holds for
non-equilibrium steady states (NESSs), when the system is maintained out of thermal equilibrium
by external drives and subjected to constant flows of energy or matter. In this case, one can
always write the steady-state distribution over microstates as

$P_{ss}(x) = e^{-I(x)}$
(2)

in terms of the self-information $I(x)$, also known as the fluctuating entropy4,5. In order to
provide a useful generalization of the Gibbs distribution to NESSs, one must relate the self-information
$I(x)$ to measurable physical quantities. This quest has a long history, starting
with the seminal contributions of Lebowitz and MacLennan6,7,8 and followed by other
works9,10,11,12,13,14,15. However, it remains an open problem in non-equilibrium statistical
physics, since previous formal results are not practical for computations, because they involve
averages over stochastic trajectories.

In this work we prove, for a very general class of open systems displaying a macroscopic limit
where a deterministic dynamics emerges, the following fundamental bound on changes of self-information:

$\Sigma_a \equiv \Sigma + k_b\,(I(x_t) - I(x_0)) \geq 0$,
(3)

where $\Sigma = \int_0^t dt'\, \dot{\Sigma}(x_{t'})$ is the entropy production along a
deterministic trajectory from microstate $x_0$ to microstate $x_t$. For example, let us consider the case
of chemical reaction networks. The concentrations x = (x1, x2, ⋯  ) of different chemical species
reacting in a solution are stochastic quantities, and their evolution is therefore described by a
probability distribution Pt(x) at time t. As the volume V of the solution is increased, the
distribution Pt(x) becomes strongly localised around the most probable values xt for the
concentrations at time t, and these values follow a deterministic dynamics that is in general
nonlinear (given in this case by the chemical rate equations). An analogous situation is
encountered in electronic circuits, where the state variables x are now the voltages at the nodes
of a circuit, and the macroscopic limit corresponds to increasing the typical capacitance C of the
nodes (as well as the conductivity of the conduction channels connecting pairs of nodes). The
remarkable feature of the result in Eq. (3) is that it provides a link between the deterministic
dynamics that emerges in the macroscopic limit and the fluctuations observed at steady state. For
example, in an electronic circuit powered by voltage sources and working at temperature T, the
entropy production rate is $\dot{\Sigma} = -\dot{Q}/T$, where $-\dot{Q}$ is the rate of heat
dissipation by the conductive elements of the circuit, which can be easily evaluated at the
deterministic level. Then, by Eq. (3), the quantity
$-\Sigma/k_b = \int_0^t dt'\, \dot{Q}(x_{t'})/(k_b T)$ provides a lower bound to the change of
steady-state self-information $I(x_t) - I(x_0)$ along a trajectory. To arrive at our main result in


Eq. (3) we consider stochastic systems with a well-defined macroscopic limit in which the self-information
$I(x)$ can be shown to be extensive, and Eq. (3) is strictly valid in that limit.
However, as we also show, our results can be applied to micro- or mesoscopic systems whenever
sub-extensive contributions to $I(x)$ can be neglected. We interpret our result as an
emergent second law of thermodynamics that is stronger than the usual second law $\Sigma \geq 0$. This
last inequality is recovered from Eq. (3) by noting that $\Delta I = I(x_t) - I(x_0) \leq 0$
(to dominant order in the macroscopic limit, the steady-state self-information is a Lyapunov
function of the deterministic dynamics16). In addition to its conceptual value, our result offers
a practical tool to approximate or bound non-equilibrium distributions, which can typically only
be accessed via stochastic numerical methods (for example the Gillespie algorithm). In contrast,
Eq. (3) only requires knowledge of the deterministic dynamics of the system, which is directly
given by the well-known network analysis techniques commonly applied in electronic circuits and
chemical reaction networks. Furthermore, the inequality in Eq.
(3) is saturated close to equilibrium, leading to a powerful linear response theory17. As an
example, we apply our results to a realistic model of non-equilibrium electronic memory: the
normal implementation of SRAM (static random access memory) cells in CMOS

(complementary metal-oxide-semiconductor) technology. These memories have a non-
equilibrium phase transition from a monostable phase to a bistable phase that allows the storage
of a bit of information. As we will see, the transition is well captured by Eq. (3), which also
allows us to bound the probability of fluctuations around the deterministic fixed points. Finally, we
show that a general coarse-graining procedure generates equivalent models with minimal entropy
production, and that in this way the bound in Eq. (3) becomes tighter. When applied to the
CMOS memory, this improved bound enables the full reconstruction of the steady state
distribution arbitrarily away from equilibrium.
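To make this workflow concrete, the following minimal sketch (a hypothetical one-species birth-death model with assumed scaled rates, not the CMOS example mentioned above) integrates the deterministic dynamics, accumulates the macroscopic entropy production $\Sigma$ along the trajectory, and reads off $-\Sigma/k_b$ as the lower bound of Eq. (3) on the change of steady-state self-information:

```python
import math

# Sketch of the deterministic workflow behind Eq. (3) for a toy one-species
# model with assumed scaled rates: births at w_plus(x), deaths at w_minus(x).
kb = 1.0                                   # work in units of kb
w_plus = lambda x: 2.0                     # assumed injection rate
w_minus = lambda x: 0.5 * x                # assumed degradation rate

x, dt, Sigma = 0.2, 1e-3, 0.0
for _ in range(10_000):                    # 10 time units of deterministic dynamics
    # macroscopic entropy production rate for a birth-death pair of jumps
    Sigma += dt * kb * (w_plus(x) - w_minus(x)) * math.log(w_plus(x) / w_minus(x))
    x += dt * (w_plus(x) - w_minus(x))     # deterministic drift dx/dt = w+ - w-

# Eq. (3): I(x_t) - I(x_0) >= -Sigma/kb along this trajectory.
print(f"x_t = {x:.3f}, lower bound on Delta I / Omega = {-Sigma / kb:.3f}")
```

Only the deterministic drift and the entropy production rate enter; no stochastic sampling is needed.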

Results
To obtain Eq. (3) we consider stochastic systems described by autonomous Markov jump
processes. Thus, let $\{n \in \mathbb{N}^k\}$ be the set of possible states of the system, and
$\lambda_\rho(n)$ be the rates at which jumps $n \to n + \Delta_\rho$ occur, for
$\rho = \pm 1, \pm 2, \cdots$ and $\Delta_{-\rho} = -\Delta_\rho$ ($\rho$ indexes a possible
jump and $\Delta_\rho$ is the corresponding change in the state). Each state has energy $E(n)$ and internal
entropy $S(n)$. Thermodynamic consistency is introduced by the local detailed balance (LDB)
condition18,19. It relates the forward and backward jump rates of a given transition to the
associated entropy production:

$\sigma_\rho = \log \dfrac{\lambda_\rho(n)}{\lambda_{-\rho}(n + \Delta_\rho)} = -\beta\left[\Phi(n + \Delta_\rho) - \Phi(n) - W_\rho(n)\right]$.
(4)
(4)
In the previous equation, $\Phi(n) = E(n) - T S(n)$ is the free energy of state $n$, and
$W_\rho(n)$ is the non-conservative work provided by external sources during the transition. For
simplicity, we have considered isothermal conditions at inverse temperature
$\beta = (k_b T)^{-1}$, and therefore the system is taken away from equilibrium by the external
work sources alone. More general situations in which a system interacts with several reservoirs
at different temperatures can be treated in the same way, this time in terms of a Massieu
potential taking the place of $\beta\Phi(n)$18. Important classes of systems admitting the
previous description are chemical reaction networks and electronic circuits, which are powered
by chemical or electrostatic potential differences, respectively. Note that, by energy
conservation, the heat provided by the environment during transition $n \to n + \Delta_\rho$ is
$Q_\rho(n) = E(n + \Delta_\rho) - E(n) - W_\rho(n)$, and therefore
$k_b \sigma_\rho = -Q_\rho(n)/T + S(n + \Delta_\rho) - S(n)$.
The probability distribution Pt(n) over the states of the system at time t evolves according to the
master equation

$\partial_t P_t(n) = \sum_\rho \left[\lambda_\rho(n - \Delta_\rho)\, P_t(n - \Delta_\rho) - \lambda_\rho(n)\, P_t(n)\right]$.
(5)
From the master equation and the LDB conditions one can derive the energy balance

$d_t\langle E\rangle = \langle \dot W\rangle + \langle \dot Q\rangle$,
(6)
and the usual version of the second law:

$\dot\Sigma = \dot\Sigma_e + d_t\langle S\rangle = \dfrac{k_b}{2}\sum_{\rho,n}\left(j_\rho(n) - j_{-\rho}(n + \Delta_\rho)\right)\log\dfrac{j_\rho(n)}{j_{-\rho}(n + \Delta_\rho)} \geq 0$,
(7)
where $j_\rho(n) = \lambda_\rho(n)\, P_t(n)$ is the current associated to transition $\rho$. In the previous
equations, $\langle S\rangle = \sum_n P_t(n)\left(S(n) - k_b \log P_t(n)\right)$ is the
entropy of the system, $\langle E\rangle = \sum_n E(n)\, P_t(n)$ is the average energy,
and $\dot\Sigma_e$ is the entropy flow rate, given by

$T\dot\Sigma_e = -\langle \dot Q\rangle = -\sum_{\rho,n} Q_\rho(n)\, j_\rho(n)$
(8)

where we have also defined the heat rate $\langle \dot Q\rangle$ (the work rate is analogously
defined as $\langle \dot W\rangle = \sum_{\rho,n} W_\rho(n)\, j_\rho(n)$). Finally, Eq. (7) can
also be expressed as

$T\dot\Sigma = -d_t\langle F\rangle + \langle \dot W\rangle \geq 0$,
(9)

where $\langle F\rangle = \langle E\rangle - T\langle S\rangle$ is the non-equilibrium free energy.


Adiabatic/non-adiabatic decomposition
If the support of Pt(n) can be restricted to a finite subspace of the state space, the Perron-
Frobenius theorem states that the master equation in Eq. (5) has a unique steady state Pss(n).
Once the steady state is attained, the entropy production rate $\dot\Sigma$ matches the entropy flow
rate $\dot\Sigma_e$. An interesting decomposition of the entropy production rate can be obtained by
considering the relative entropy $D = \sum_n P_t(n)\log\left(P_t(n)/P_{ss}(n)\right)$ between the
instantaneous distribution $P_t(n)$ and the steady-state distribution $P_{ss}(n)$. Then, it is possible to
show that $\dot\Sigma = \dot\Sigma_a + \dot\Sigma_{na}$, where

$\dot\Sigma_a = \dfrac{k_b}{2}\sum_{\rho,n}\left(j_\rho(n) - j_{-\rho}(n + \Delta_\rho)\right)\log\dfrac{j_\rho^{ss}(n)}{j_{-\rho}^{ss}(n + \Delta_\rho)}$,
(10)

and

$\dot\Sigma_{na} = \dfrac{k_b}{2}\sum_{\rho,n}\left(j_\rho(n) - j_{-\rho}(n + \Delta_\rho)\right)\log\dfrac{P_t(n)\, P_{ss}(n + \Delta_\rho)}{P_{ss}(n)\, P_t(n + \Delta_\rho)} = -k_b\, d_t D
(11)

are the adiabatic and non-adiabatic contributions to the entropy production rate $\dot\Sigma$,
respectively. In Eq. (10) we have introduced the steady-state probability currents
$j_\rho^{ss}(n) = \lambda_\rho(n)\, P_{ss}(n)$. The non-adiabatic contribution $\dot\Sigma_{na}$
is related to the relaxation of the system towards the steady state, since it vanishes when the
steady state is reached. This is further evidenced by the identity in the second line of Eq. (11):
a reduction in the relative entropy between $P_t(n)$ and $P_{ss}(n)$ leads to a positive
non-adiabatic entropy production. The adiabatic contribution $\dot\Sigma_a$ corresponds to the
dissipation of 'housekeeping heat'20,21, and at steady state matches the entropy flow rate
$\dot\Sigma_e$. An important property of the previous decomposition is that both contributions
are individually positive: $\dot\Sigma_a \geq 0$ and $\dot\Sigma_{na} \geq 0$22,23,24,25. Thus,
the last inequality and the second line of Eq. (11) imply that the relative entropy $D$ decreases
monotonically, and since $D$ is positive by definition, it is a Lyapunov function for the
stochastic dynamics.
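This Lyapunov property is easy to verify numerically. The sketch below evolves the master equation for an arbitrary three-state jump process (the rate matrix is an assumption for illustration) and checks that $D$ decreases monotonically toward zero:

```python
import numpy as np

# The relative entropy D = sum_n P_t(n) log(P_t(n)/P_ss(n)) is a Lyapunov
# function of the stochastic dynamics. Check on an assumed 3-state process.
W = np.array([[-3.0,  1.0,  2.0],
              [ 2.0, -2.0,  0.5],
              [ 1.0,  1.0, -2.5]])   # W[i, j] = rate of jump j -> i; columns sum to 0

def evolve(p, dt, steps):
    """Explicit Euler integration of the master equation dp/dt = W p."""
    for _ in range(steps):
        p = p + dt * (W @ p)
    return p

# Steady state as the long-time limit of an arbitrary initial condition.
p_ss = evolve(np.array([1.0, 0.0, 0.0]), 1e-3, 50_000)

p = np.array([0.9, 0.05, 0.05])
D_vals = []
for _ in range(40):
    D_vals.append(float(np.sum(p * np.log(p / p_ss))))
    p = evolve(p, 1e-3, 50)

# Monotone decrease of D, i.e. positivity of the non-adiabatic EP, Eq. (11).
print(all(d1 > d2 for d1, d2 in zip(D_vals, D_vals[1:])))
```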
Macroscopic limit
In the following we will assume the existence of a scale parameter Ω controlling the size of the
system in question. For example, Ω can be taken to be the volume V of the solution in well-
mixed chemical reaction networks, or the typical value C of capacitance in the case of electronic
circuits (see the example below). In addition, we will assume that for large Ω: (i) the typical
values of the density $x \equiv n/\Omega$ are intensive, (ii) the internal energy and entropy
functions $E(\Omega x)$ and $S(\Omega x)$ are extensive, and (iii) the transition rates
$\lambda_\rho(\Omega x)$ are also extensive. Under those conditions, the probability distribution
$P_t(x)$ satisfies a large deviations
(LD) principle17,26,27:

$P_t(x) \asymp e^{-\Omega I_t(x)}$,
(12)
which just means that the limit
$I_t(x) \equiv \lim_{\Omega\to\infty} -\log(P_t(x))/\Omega$ is well defined.
Then, $I_t(x)$ is a positive, time-dependent 'rate function', since it gives the rate at which the
probability of fluctuation $x$ decays with the scale. Note that, by Eq. (12), the steady-state
self-information introduced in Eq. (2) satisfies $I(x) = \Omega\, I_{ss}(x)$ to dominant order in
the macroscopic limit. In other words, the large deviations principle states that the instantaneous
self-information $I_t(x) \equiv -\log(P_t(x))$ is an extensive quantity26, and we

can think of the rate function as the self-information density. Thus, in the following we will
consider the ansatz $P_t(x) = e^{-\Omega I_t(x)}/Z_t$,
with $Z_t \equiv \sum_x e^{-\Omega I_t(x)}$, as an approximation to the actual time-dependent
distribution. This amounts to neglecting sub-extensive contributions to the instantaneous self-information.
As explained below, $I_t(x)$ takes its minimum value $I_t(x_t) = 0$ at the deterministic
trajectory $x_t$, which is equivalent to $P_t(x) = \delta(x - x_t)$ for $\Omega \to \infty$.
Plugging the previous ansatz into the master equation of Eq. (5), we note that
$\lambda_\rho(x - \Delta_\rho/\Omega)\, P_t(x - \Delta_\rho/\Omega) \simeq \lambda_\rho(x)\, P_t(x)\, e^{\Delta_\rho \cdot \nabla I_t(x)}$
to dominant order in $\Omega \to \infty$. Noting also that $\log(Z_t)$ is sub-extensive,
it is possible to see that $I_t(x)$ evolves according to

$\partial_t I_t(x) = \sum_\rho \omega_\rho(x)\left[1 - e^{\Delta_\rho\cdot\nabla I_t(x)}\right]$,
(13)

where $\omega_\rho(x) \equiv \lim_{\Omega\to\infty} \lambda_\rho(\Omega x)/\Omega$ are the scaled jump
rates17,28. In a similar way, in the macroscopic limit the LDB conditions in Eq. (4) take the form

$\log\dfrac{\omega_\rho(x)}{\omega_{-\rho}(x)} = -\beta\left[\Delta_\rho\cdot\nabla\phi(x) - W_\rho(x)\right]$,
(14)
in terms of the free energy density
$\phi(x) \equiv \lim_{\Omega\to\infty} \Phi(\Omega x)/\Omega$ (internal energy and entropy
densities $\epsilon(x)$ and $s(x)$ satisfying $\phi(x) = \epsilon(x) - T s(x)$ can be defined in
the same way). For the work contributions in Eq. (14) we are abusing notation by
writing $W_\rho(x) = \lim_{\Omega\to\infty} W_\rho(n = \Omega x)$. Note that we
assume that work contributions are intensive. This is justified since they are given by the product
of two intensive quantities: a thermodynamic force (for example a potential difference), and the
change in a conserved quantity (mass, charge, etc) during a single jump29. However, note also
that the work rate $\langle \dot W\rangle$ will be extensive in general due to the extensivity of the
transition rates.
Many classes of systems satisfy the previous scaling assumptions besides the examples already
mentioned. Additional examples include non-equilibrium many-body problems like the driven
Potts model30,31, reaction-diffusion models32,33, and asymmetric exclusion processes32,34.

From Eq. (12) we see that as Ω is increased, Pt(x) is increasingly localised around the minimum
of the rate function It(x), which is the most probable value. Also, deviations from that typical
state are exponentially suppressed in Ω. Thus, the limit Ω → ∞ is a macroscopic low-noise limit
where a deterministic dynamics emerges. In fact, from Eq. (13) one can show that the evolution of
the minima xt of It(x) is ruled by the closed non-linear differential equations

$d_t x_t = u(x_t), \quad \text{with} \quad u(x) \equiv \sum_{\rho>0} i_\rho(x)\,\Delta_\rho$,
(15)
where iρ(x) ≡ ωρ(x) − ω−ρ(x) are the scaled deterministic currents17. The vector field u(x)
corresponds to the deterministic drift in state space. For chemical reaction networks the
dynamical equations in Eq. (15) are the chemical rate equations, while for electronic circuits they
are provided by regular circuit analysis.
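For illustration, Eq. (15) can be integrated for a toy two-species network. The reactions and rate constants below are assumptions, and each reaction is treated as effectively unidirectional so that its net current $i_\rho(x)$ equals its forward scaled rate:

```python
import numpy as np

# Eq. (15) for an assumed two-species chemical network:
#   0 -> X1 (rate k1),  X1 -> X2 (rate k2*x1),  X2 -> 0 (rate k3*x2).
k1, k2, k3 = 3.0, 1.0, 0.5
reactions = [                        # pairs (Delta_rho, scaled current i_rho(x))
    (np.array([ 1.0, 0.0]), lambda x: k1),
    (np.array([-1.0, 1.0]), lambda x: k2 * x[0]),
    (np.array([ 0.0,-1.0]), lambda x: k3 * x[1]),
]

def u(x):
    """Deterministic drift u(x) = sum_rho i_rho(x) * Delta_rho, Eq. (15)."""
    return sum(i(x) * d for d, i in reactions)

x, dt = np.array([0.0, 0.0]), 0.01
for _ in range(3000):                # explicit Euler over 30 time units
    x = x + dt * u(x)

print(x)   # relaxes to the fixed point (k1/k2, k1/k3) = (3, 6)
```

These are exactly the chemical rate equations that the text refers to, written in the generic form of Eq. (15).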

In the following section we obtain bounds for the steady-state rate function $I_{ss}(x)$, which
according to Eq. (13) satisfies:

$0 = \sum_\rho \omega_\rho(x)\left[1 - e^{\Delta_\rho\cdot\nabla I_{ss}(x)}\right]$.
(16)

Emergent second law


The positivity of the adiabatic and non-adiabatic contributions to the entropy production,
$\dot\Sigma_a \geq 0$ and $\dot\Sigma_{na} \geq 0$, in addition to the usual second law
$\dot\Sigma \geq 0$, have been called the 'three faces of the second law'23. In ref. 28, the
inequality $\dot\Sigma_{na} = -k_b\, d_t D \geq 0$ was put forward as an 'emergent' second
law. There, $F = k_b D$ was interpreted as an alternative non-equilibrium free energy,
with a balance equation $d_t F = \dot\Sigma_a - \dot\Sigma \leq 0$ (note the analogy with Eq. (9)).
Then, the adiabatic contribution $\dot\Sigma_a$ was interpreted as an energy input, which at steady
state balances the dissipation $\dot\Sigma$. Although this point of view is compelling, it is hindered by
the fact that there is no clear interpretation of $\dot\Sigma_a$ away from the steady state that
would allow one to compute this quantity in terms of actual physical currents. In this work we take
the other possible road, and investigate the interpretation and consequences of
$\dot\Sigma_a \geq 0$. We begin by rewriting Eq. (10) using the LDB conditions of Eq. (4) and the
definition of $I$ in Eq. (2), obtaining:

$\dot\Sigma_a = \dot\Sigma + k_b\, d_t\langle I\rangle - d_t\langle S_{sh}\rangle \geq 0$,
(17)

where we have defined $\langle I\rangle = \sum_n I(n)\, P_t(n)$ as the average of the steady-state
self-information $I(n) = -\log(P_{ss}(n))$, and
$\langle S_{sh}\rangle = -k_b \sum_n P_t(n) \log(P_t(n))$ as the Shannon contribution to the
system entropy, computed over the instantaneous distribution. Eq. (17) has already been obtained in22,23,24,

although it was not explicitly written in terms of the self-information $I$. It is important to note
that $\langle S_{sh}\rangle$ is sub-extensive in $\Omega$ (according to Eq. (12), it grows as
$\log(\Omega)$), and therefore can be neglected in the macroscopic limit. Thus, changes in average
self-information can be bounded by the entropy production, which can in turn be computed or
measured in terms of actual energy and entropy flows (see Eqs. (7) and (8)). However, the result
in Eq. (17) is not yet in a useful form, since the average $\langle I\rangle$ does not depend only
on $I(n)$, the unknown quantity we are interested in, but also on the instantaneous distribution
$P_t(n)$, which is also typically unknown. This issue is circumvented in the macroscopic limit,
since in that case $P_t(x)$ is strongly localised around the deterministic values $x_t$, and
therefore $\langle I\rangle \simeq \Omega\, I_{ss}(x_t)$ to dominant order in $\Omega \to \infty$.
Thus, in the same limit, Eq. (17) for the adiabatic entropy production rate $\dot\Sigma_a$ reduces to
$\dot\sigma_a(x_t) = \dot\sigma(x_t) + k_b\, d_t I_{ss}(x_t) \geq 0$,
(18)

where we have defined $\dot\sigma(x_t) = \lim_{\Omega\to\infty} \dot\Sigma/\Omega$ as the scaled
macroscopic limit of the entropy production rate ($\dot\sigma_a(x_t)$ is defined in a similar way).
Eq. (18) is a more rigorous version of our central result in Eq. (3), which is obtained by
integrating Eq. (18) along deterministic trajectories (satisfying Eq. (15)) and multiplying by the
scale factor. It is also useful to write down the first and second laws in the macroscopic limit.
The energy balance in Eq. (6) reduces to
$d_t \epsilon(x_t) = u(x_t)\cdot\nabla\epsilon(x_t) = \dot w(x_t) + \dot q(x_t)$,
(19)
where the scaled heat and work rates for state $x$ are defined as
$\dot q(x) = \sum_{\rho>0} i_\rho(x)\, Q_\rho(x)$ and
$\dot w(x) = \sum_{\rho>0} i_\rho(x)\, W_\rho(x)$, respectively. Finally, again neglecting the
sub-extensive Shannon contribution $\langle S_{sh}\rangle$, the second law in Eq. (7) reduces to

$\dot\sigma(x_t) = -\dot q(x_t)/T + d_t s(x_t) = k_b \sum_{\rho>0}\left(\omega_\rho(x_t) - \omega_{-\rho}(x_t)\right)\log\dfrac{\omega_\rho(x_t)}{\omega_{-\rho}(x_t)} \geq 0$.
(20)

Linear response regime


We will now show that to first order in the work contributions $W_\rho(x)$ the inequality in
Eq. (18) is saturated. First, we rewrite Eq. (18) using the macroscopic first and second laws in
Eqs. (19) and (20):

$\dot\sigma_a(x_t)/k_b = u(x_t)\cdot\nabla\left(I_{ss}(x) - \beta\phi(x)\right)\big|_{x=x_t} + \beta\,\dot w(x_t)$,
(21)
where we also used $\phi(x) = \epsilon(x) - T s(x)$ and the fact that
$d_t F(x_t) = u(x_t)\cdot\nabla F(x_t)$ for any function $F(x)$. Secondly, we note that in
detailed-balanced settings (i.e., if $W_\rho(x) = 0\ \forall \rho, x$) the steady-state rate
function is just $I_{ss}(x) = \beta\phi(x)$ (up to a constant), in accordance with the Gibbs
distribution (this follows from Eqs. (14) and (16)). Thus, the difference
$g(x) \equiv I_{ss}(x) - \beta\phi(x)$ appearing in Eq. (21) quantifies the deviations from
thermal equilibrium. Expanding Eq. (16) to first order in $W_\rho(x)$ and $g(x)$, it can be shown that

$u^{(0)}(x)\cdot\nabla g(x) = -\beta\,\dot w^{(0)}(x) + O(W_\rho^2)$,
(22)

where $u^{(0)}(x) = \sum_{\rho>0} i_\rho^{(0)}(x)\,\Delta_\rho$ and
$\dot w^{(0)}(x) = \sum_{\rho>0} i_\rho^{(0)}(x)\, W_\rho(x)$ are the lowest-order
deterministic drift and work rate, respectively17. These are defined in terms of the
detailed-balanced deterministic currents
$i_\rho^{(0)}(x) = \omega_\rho^{(0)}(x) - \omega_{-\rho}^{(0)}(x)$, constructed from the scaled
transition rates evaluated at $W_\rho(x) = 0$, which, according to the LDB conditions of Eq. (14),
satisfy $\log\left(\omega_\rho^{(0)}(x)/\omega_{-\rho}^{(0)}(x)\right) = -\beta\,\Delta_\rho\cdot\nabla\phi(x)$.
Comparing the result of Eq. (22) with Eq. (21), we see that $\dot\Sigma_a = 0$ to linear order
in $W_\rho(x)$. Then, in the linear response regime we can write:

$I_{ss}(x_t) - I_{ss}(x_0) \simeq -\int_0^t dt'\, \dot\sigma^{(0)}(x_{t'})/k_b = \beta\left[\phi(x_t) - \phi(x_0) - \int_0^t dt'\, \dot w^{(0)}(x_{t'})\right]$,
(23)
where the integration is performed along trajectories solving the detailed-balanced deterministic
dynamics $d_t x_t = u^{(0)}(x_t)$.
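A quick numerical sanity check of Eq. (23) on a hypothetical one-variable model (the free-energy density and the constant drive are assumptions): by Eqs. (14) and (16), in one dimension $I_{ss}'(x) = \log(\omega_-/\omega_+) = \beta(\phi'(x) - W)$, and along the trajectory $\int_0^t dt'\,\dot w^{(0)}(x_{t'}) = W\,(x_t - x_0)$, so both sides of Eq. (23) can be evaluated directly:

```python
import math

# Linear-response check of Eq. (23) for an assumed driven 1D model.
# LDB, Eq. (14): log(w+/w-) = -beta*(phi'(x) - W) with constant drive W, so
# in 1D Eq. (16) gives Iss'(x) = beta*(phi'(x) - W).
beta, W_drive = 1.0, 0.05                 # weak drive: linear regime
phi = lambda x: 0.5 * x**2                # assumed free-energy density
dphi = lambda x: x                        # its derivative

def Iss(x):
    """Integrate Iss'(xi) = beta*(dphi(xi) - W) over [0, x] (midpoint rule)."""
    n, total = 2000, 0.0
    for i in range(n):
        xi = (i + 0.5) * x / n
        total += beta * (dphi(xi) - W_drive) * x / n
    return total

x0, xt = 0.2, 1.5
lhs = Iss(xt) - Iss(x0)                                   # exact change in Iss
rhs = beta * (phi(xt) - phi(x0) - W_drive * (xt - x0))    # rhs of Eq. (23)
print(abs(lhs - rhs))                     # agreement up to quadrature error
```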
