Models of Theoretical Physics Notes


Models Of Theoretical Physics

Baiesi - Maritan

Vincenzo Maria Schimmenti - [email protected]

October 5, 2019
Contents

1 Gaussian Integrals, Generating Functions and Wick's theorem
  1.1 Gaussian Integrals
    1.1.1 Examples
  1.2 Correlation Functions
    1.2.1 Examples
  1.3 Wick's Theorem
    1.3.1 Examples
  1.4 More on Integrals
    1.4.1 Steepest descent
    1.4.2 Gaussian with imaginary mean
    1.4.3 Fresnel integral
    1.4.4 Example: Schrödinger Equation
    1.4.5 Indented Integrals and ε prescription

2 Stochastic Processes and Path Integrals
  2.1 Diffusion Equation
  2.2 Random walk and diffusion equation
  2.3 Wiener Integral
  2.4 Some Calculations
    2.4.1 Identity
    2.4.2 Return to start
    2.4.3 Two-point function
    2.4.4 A first functional
    2.4.5 Potential-like functional

3 Fokker-Planck Equation
  3.1 Master Equation derivation
  3.2 Langevin Equation
  3.3 Stochastic Calculus
    3.3.1 Introduction
    3.3.2 Ito Integrals
    3.3.3 Differentiation rules
    3.3.4 Correlation
    3.3.5 Change Of Variables
    3.3.6 Fokker-Planck derivation from Langevin

4 The Wiener Path Integral in action
  4.1 Harmonic Oscillator
  4.2 Multidimensional Wiener Path Integral
  4.3 The Fokker-Planck equation with velocity

5 The Feynman-Kac formula
  5.1 Feynman-Kac for the Fokker-Planck equation
  5.2 Feynman-Kac for the Bloch equation
    5.2.1 Proof 1
    5.2.2 Proof 2

6 Brownian motion, Polymer Physics and Field Theory
  6.1 Prelude: Green's function
  6.2 Introduction: Polymers
  6.3 Phantom Chain
  6.4 Some remarks
  6.5 Feynman-Kac for Polymer in a potential
  6.6 Gaussian Field Theory
  6.7 Self Interacting Polymers

7 Models Of Phase Transitions
  7.1 Ising Model
  7.2 Mean Field Theory
  7.3 Field Theory

8 Entropy and Information

9 From Wiener to Feynman Path Integrals
  9.1 Imaginary Diffusion
  9.2 Phase Space Feynman Path Integral
  9.3 Finite Temperature Quantum Mechanics
  9.4 Field Theory Example

10 Topics in Stochastic Phenomena
  10.1 Stochastic Amplification
  10.2 Linear Noise Approximation
  10.3 Continued: Volterra model
  10.4 Stochastic Resonance

11 Disordered systems
  11.1 Spin Glasses
  11.2 Replica trick
  11.3 Pure states
  11.4 Clustering
  11.5 Overlaps
  11.6 Overlap distribution
  11.7 p-Spin spherical model
  11.8 Replica Symmetric Solution
  11.9 Replica Trick and Physics
  11.10 Replica Symmetry Breaking
  11.11 Random Field Ising Model

12 Lévy flights
  12.1 Sub and super diffusion
  12.2 Cauchy random walk

13 Instantons
  13.1 Classical Instantons
  13.2 Instanton Examples
  13.3 Quantum and Statistical Mechanics Instantons

14 Field Approach to a Master Equation - NOT RELATED TO COURSE
  14.1 Fock Space and Coherent states

Appendices

A Characteristic Functions and Central Limit Theorem
  A.1 Characteristic Functions
  A.2 Central Limit Theorem

B Circulant matrices

C Disordered systems

D Brownian Bridge

E Nearest neighbour matrix eigenvalues
Chapter 1

Gaussian Integrals, Generating Functions and Wick's theorem

1.1 Gaussian Integrals


The aim of this section is to introduce the formalism of generating functions, which rests almost entirely on the knowledge of Gaussian integrals. We begin by recalling the simplest of these integrals:
$$\int_{-\infty}^{\infty} dx\, e^{-\frac{A x^{2}}{2}} = \sqrt{\frac{2\pi}{A}} \tag{1.1}$$
whose proof is obtained by taking the square of the integral and switching to polar coordinates. Note that the argument of the exponential can be regarded as a quadratic form on $\mathbb{R}$ (as a vector space); we generalize that concept by introducing an $n$-dimensional positive definite matrix $A$ (i.e. all its eigenvalues are positive) that naturally induces a quadratic form $A_2(x)$ on $\mathbb{R}^n$:
$$A_2(x) = \frac{1}{2}\sum_{i,j=1}^{n} x_i A_{i,j} x_j = \frac{1}{2}\, x^T A x \tag{1.2}$$
Now we consider the Gaussian integral:
$$Z[A] = \int_{\mathbb{R}^n} d^n x\, e^{-A_2(x)} = \int_{\mathbb{R}^n} d^n x\, e^{-\frac{1}{2}\sum_{i,j=1}^{n} x_i A_{i,j} x_j} \tag{1.3}$$
The variables are coupled, and the integral is solved by diagonalizing the matrix $A$ (this can be done thanks to the spectral theorem) via an orthogonal matrix $O$, $O O^T = I_n$, and changing the variables: $y = O x$. The Jacobian of this transformation is 1, so, calling $a_i$, $i = 1 \dots n$, the eigenvalues of $A$, we have:
$$Z[A] = \prod_{i=1}^{n} \int_{-\infty}^{\infty} dx_i\, e^{-\frac{a_i}{2} x_i^2} = \frac{(2\pi)^{n/2}}{\sqrt{\det A}} \tag{1.4}$$
since $\prod_{i=1}^{n} a_i = \det A$.
The last result we need is a further generalization of $Z[A]$, obtained by adding a linear term to the argument of the exponential, i.e. $\sum_{i=1}^{n} b_i x_i = b^T x$ with $b \in \mathbb{R}^n$:
$$Z[A, b] = \int_{\mathbb{R}^n} d^n x\, e^{-A_2(x) + b^T x} = Z[A, 0]\, e^{\frac{1}{2} b^T A^{-1} b} \tag{1.5}$$
(we have defined $Z[A, 0] = Z[A]$). This result can be derived either by completing the square or by changing the variables to $y = x - x^*$, where $x^* = A^{-1} b$ is the extremum of the exponential's argument. From now on we call $Z[A, b]$ a generating function.

1.1.1 Examples

Take
$$A = \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix}$$
$A$ has eigenvalues $\lambda_1 = 2$ and $\lambda_2 = 4$, hence determinant 8, obtaining $Z[A, 0] = \frac{2\pi}{\sqrt{8}} = \frac{\pi}{\sqrt{2}}$.
Suppose now that $b = (1, 0)^T$; then, by using
$$A^{-1} = \frac{1}{8}\begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}$$
we get $\frac{1}{2} b^T A^{-1} b = \frac{3}{16}$ and
$$Z[A, b] = \frac{\pi}{\sqrt{2}}\, e^{3/16}$$
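As a numerical sanity check of (1.5) on this example, one can compare the closed form with brute-force quadrature (a minimal Python sketch; the cutoff ±10 is an arbitrary truncation of the plane):

```python
import numpy as np
from scipy import integrate

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
b = np.array([1.0, 0.0])

# Closed form (1.5): Z[A,b] = (2 pi)^{n/2} / sqrt(det A) * exp(b^T A^{-1} b / 2)
Z0 = (2 * np.pi) ** (len(b) / 2) / np.sqrt(np.linalg.det(A))
Z_pred = Z0 * np.exp(0.5 * b @ np.linalg.solve(A, b))

# Brute force: integrate exp(-x^T A x / 2 + b^T x) over [-10, 10]^2
def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + b @ v)

Z_num, _ = integrate.dblquad(integrand, -10.0, 10.0, -10.0, 10.0)
print(Z_pred, Z_num)  # both ~2.68 = (pi/sqrt(2)) * e^{3/16}
```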

1.2 Correlation Functions

Since we calculated $Z[A, 0]$ we can now define a Gaussian probability density over $x \in \mathbb{R}^n$ (a.k.a. the multivariate normal distribution with mean zero and covariance matrix $A^{-1}$):
$$p(x) = \frac{1}{Z[A, 0]}\, e^{-A_2(x)} \tag{1.6}$$
From this distribution we can calculate expected values of products of $l$ of the variables $x_i$ (also called correlation functions):
$$\langle x_{k_1} \cdots x_{k_l} \rangle = \frac{1}{Z[A, 0]} \int_{\mathbb{R}^n} d^n x\; x_{k_1} \cdots x_{k_l}\, e^{-A_2(x)} \tag{1.7}$$
where $k_j = 1 \dots n$. To compute them we exploit the generating-function property of $Z[A, b]$, i.e.:
$$\langle x_{k_1} \cdots x_{k_l} \rangle = \frac{1}{Z[A, 0]} \frac{\partial}{\partial b_{k_1}} \cdots \frac{\partial}{\partial b_{k_l}} \int_{\mathbb{R}^n} d^n x\, e^{-A_2(x) + b^T x} \bigg|_{b=0} = \frac{1}{Z[A, 0]} \frac{\partial}{\partial b_{k_1}} \cdots \frac{\partial}{\partial b_{k_l}} Z[A, b] \bigg|_{b=0} = \frac{\partial}{\partial b_{k_1}} \cdots \frac{\partial}{\partial b_{k_l}}\, e^{\frac{1}{2} b^T A^{-1} b} \bigg|_{b=0} \tag{1.8}$$
So any correlation function with an odd number of variables vanishes identically (this Gaussian has mean zero!); any even correlation function can instead be calculated using Wick's theorem.

1.2.1 Examples

Using the results from the previous example we compute a correlation function:
$$\langle x_1 x_2 \rangle = \partial_{b_1} \partial_{b_2}\, e^{\frac{1}{2} b^T A^{-1} b} \Big|_{b=0} = A^{-1}_{1,2} = \frac{1}{8}$$
Notice that this result could have been read off directly from the inverse of $A$, which is the covariance matrix.
Now, using $A = I_n$, we compute $\langle x_{k_1} \cdots x_{k_l} \rangle$ (supposing every index $k_j$ is different from the others):
$$\langle x_{k_1} \cdots x_{k_l} \rangle = \frac{\partial}{\partial b_{k_1}} \cdots \frac{\partial}{\partial b_{k_l}}\, e^{\frac{1}{2} b^T b} \bigg|_{b=0} = 0$$
since, with all indices distinct, every term produced by the derivatives still carries at least one factor of some $b_{k_j}$, which vanishes at $b = 0$. This result is expected, since in this case the covariance matrix is the identity. The two-point function instead is:
$$\langle x_i x_j \rangle = \delta_{ij}$$

1.3 Wick's Theorem

We discovered that correlation functions can be computed using derivatives of $Z[A, b]$; however there is another way, called Wick's theorem: it states that any even correlation function can be written as the sum of products of two-point correlation functions, e.g. (defining $G_{i,j} = A^{-1}_{i,j}$):
$$\langle x_i x_j x_k x_l \rangle = G_{ij} G_{kl} + G_{ik} G_{jl} + G_{il} G_{jk} \tag{1.9}$$
Here we summed products of $G_{ij}$ over all possible pairings of the indices. The number of terms in this expression, in the case of $2m$ variables, is $(2m - 1)!!$ (double factorial).

1.3.1 Examples

Setting ourselves in one dimension, we can compute the moments of a Gaussian with variance $\sigma^2$ (i.e. the matrix $A$ is the single number $1/\sigma^2$) using Wick's theorem:
$$\langle x^2 \rangle = \sigma^2$$
$$\langle x^4 \rangle = \sigma^2 \sigma^2 + \sigma^2 \sigma^2 + \sigma^2 \sigma^2 = 3\sigma^4 = 3 \langle x^2 \rangle^2$$
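A quick Monte Carlo check of these pairing counts (an illustrative sketch; sample size and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
x = rng.normal(0.0, sigma, size=1_000_000)

# Wick: <x^4> = 3 sigma^4 (three pairings), <x^6> = 15 sigma^6 (fifteen pairings)
print(np.mean(x**4), 3 * sigma**4)    # ~15.2 vs 15.1875
print(np.mean(x**6), 15 * sigma**6)   # ~171  vs 170.86
```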

1.4 More on Integrals

1.4.1 Steepest descent

Here we describe the method of steepest descent to approximate integrals of the form $\int d^n x\, e^{-F(x)/\lambda}$ for $\lambda \to 0$. Indeed we have:
$$I(\lambda) = \int d^n x\, e^{-F(x)/\lambda} \approx \frac{(2\pi\lambda)^{n/2}}{\det\!\left(\partial^2 F(x_c)\right)^{1/2}}\, e^{-F(x_c)/\lambda} \tag{1.10}$$
where $x_c$ is the saddle point (minimum) of $F(x)$ and $\partial^2 F(x_c)$ its Hessian there. To derive this result we just change the variable $x = x_c + \sqrt{\lambda}\, y$ and ignore all factors $O(\lambda^{1/2})$ in the exponent's argument.
In the case of a complex variable:
$$I(s) = \int_C g(z)\, e^{s f(z)}\, dz \approx \frac{(2\pi)^{1/2}\, g(z_c)\, e^{s f(z_c)}}{|s f''(z_c)|^{1/2}} \tag{1.11}$$
where $z_c$ is the saddle point of $f(z)$ reached by deforming the contour $C$; beware that, as written, the formula drops a phase factor set by the direction of steepest descent through $z_c$ (it is exact in modulus when $f$ and $g$ are real on the deformed contour).
1.4.2 Gaussian with imaginary mean

Here we want to make sense of:
$$\int_{\mathbb{R}} e^{-a(x - ib)^2}\, dx \tag{1.12}$$
where $b \in \mathbb{R}$. Since the integrand is analytic, we can continue it to a complex variable $z$ and consider the rectangular contour $\Gamma_R = \gamma_1 \cup \gamma_+ \cup (-\gamma_2) \cup \gamma_-$, where $\gamma_1$ is the interval $[-R, R]$, $\gamma_2$ is $[-R + ib, R + ib]$, $\gamma_+$ is the vertical segment $[R, R + ib]$ and $\gamma_-$ is $[-R + ib, -R]$. The integral over the closed contour $\Gamma_R$ vanishes by Cauchy's theorem, the contributions of $\gamma_\pm$ vanish as $R \to \infty$, and $\gamma_1$, $\gamma_2$ give opposite-sign contributions, hence:
$$\int_{\mathbb{R}} e^{-a(x - ib)^2}\, dx = \int_{\mathbb{R}} e^{-a x^2}\, dx = \sqrt{\frac{\pi}{a}} \tag{1.13}$$

1.4.3 Fresnel integral

Now consider:
$$I_\epsilon(a, b) = \lim_{R \to \infty} \int_{-R}^{R} \frac{dk}{2\pi}\; e^{-a k^2 e^{i(\pi/2 - \epsilon)} - i b k} \tag{1.14}$$
This integral is the regularized version of another integral, the Fresnel integral:
$$\int_{-\infty}^{\infty} \frac{dk}{2\pi}\; e^{-i a k^2 - i b k} = (4\pi a i)^{-1/2}\, e^{\frac{i b^2}{4a}} \tag{1.15}$$
We want to prove that $I(a, b) \equiv \lim_{\epsilon \to 0} I_\epsilon(a, b)$ equals this expression.
Introduce the variable $z = k\, e^{i(\pi/4 - \epsilon/2)}$, so that $-a k^2 e^{i(\pi/2 - \epsilon)} = -a z^2$, and construct a closed contour $\Gamma_R = [-R, R] \cup \gamma_+ \cup \bar\gamma_R \cup \gamma_-$:

• $\gamma_+ = \{z = R e^{i\theta} : \theta \in [0, \pi/4 - \epsilon/2]\}$

• $\gamma_- = \{z = R e^{i\theta} : \theta \in [\pi, 5\pi/4 - \epsilon/2]\}$

• $\bar\gamma_R$ is the rotated diameter $\{|z| \le R,\ \arg z = \pi/4 - \epsilon/2 \text{ or } 5\pi/4 - \epsilon/2\}$, traversed backwards

Consider the integrand on $\gamma_+$, calling $b' = b\, e^{-i(\pi/4 - \epsilon/2)}$; as $R \to \infty$:
$$\left| \int_{\gamma_+} e^{-a z^2 - i b' z}\, dz \right| \le R \int_0^{\pi/4 - \epsilon/2} d\theta\; e^{-a R^2 \cos(2\theta) + R |b'| \sin\theta} \to 0$$
since on this arc $\cos(2\theta) \ge \sin\epsilon > 0$, so the Gaussian damping $e^{-a R^2 \sin\epsilon}$ beats the linear term. The same holds on $\gamma_-$. On $\bar\gamma_R$ we have, as $R \to \infty$:
$$e^{-i(\pi/4 - \epsilon/2)} \int_{-\infty}^{\infty} \frac{dz}{2\pi}\; e^{-a z^2 - i b' z} = e^{-i(\pi/4 - \epsilon/2)}\, \frac{1}{2\pi} \left( \frac{\pi}{a} \right)^{1/2} e^{-\frac{b'^2}{4a}} \xrightarrow{\epsilon \to 0} (4\pi a i)^{-1/2}\, e^{\frac{i b^2}{4a}}$$
(as $\epsilon \to 0$, $b'^2 \to -i b^2$ and $e^{-i\pi/4} = i^{-1/2}$). Since the integral over the closed contour $\Gamma_R$ is zero we conclude that:
$$I(a, b) \equiv \lim_{\epsilon \to 0} I_\epsilon(a, b) = \int_{-\infty}^{\infty} \frac{dk}{2\pi}\; e^{-i a k^2 - i b k} = (4\pi a i)^{-1/2}\, e^{\frac{i b^2}{4a}}$$
for $a > 0$; otherwise use $I(a, b) = I^*(-a, -b)$.

1.4.4 Example: Schrödinger Equation

Consider the free Schrödinger equation:
$$i\hbar\, \partial_t \psi(x, t) = -\frac{\hbar^2}{2m}\, \partial_x^2 \psi(x, t) \tag{1.16}$$
Set $\hbar = 1$ and move to Fourier space:
$$\psi(x, t) = \int \frac{dp}{2\pi}\; \tilde\psi(p, t)\, e^{ipx} \tag{1.17}$$
$$i \partial_t \tilde\psi(p, t) = \frac{p^2}{2m}\, \tilde\psi(p, t) \tag{1.18}$$
$$\tilde\psi(p, t) = \tilde\psi(p, 0)\, e^{-i \frac{p^2 t}{2m}} \tag{1.19}$$
Since we use $\psi(x, 0) = \delta(x)$ we have $\tilde\psi(p, t) = e^{-i \frac{p^2 t}{2m}}$. To find $\psi(x, t)$ we make use of the Fresnel integral with $a = \frac{t}{2m}$ and $b = -x$:
$$\psi(x, t) = \int \frac{dp}{2\pi}\; e^{-i \frac{p^2 t}{2m} + i p x} = \left( 4\pi i\, \frac{t}{2m} \right)^{-1/2} e^{\frac{i x^2}{4\, t/2m}} = \left( \frac{m}{2\pi t i} \right)^{1/2} e^{-\frac{m x^2}{2 i t}} \tag{1.20}$$
Putting back $\hbar$:
$$\psi(x, t) = \left( \frac{m}{2\pi \hbar i t} \right)^{1/2} e^{-\frac{m x^2}{2 \hbar i t}} \tag{1.21}$$
This is called the propagator of the free Schrödinger equation.
1.4.5 Indented Integrals and ε prescription

Prove that:
$$\lim_{\epsilon \to 0} \frac{1}{x - x_0 \mp i\epsilon} = P\, \frac{1}{x - x_0} \pm i\pi\, \delta(x - x_0)$$
We prove here the case of
$$\lim_{\epsilon \to 0} \frac{1}{x - x_0 + i\epsilon}$$
Start by considering the semicircular contour $\Gamma'_R = \{z = R e^{i\theta},\ 0 \le \theta \le \pi\}$; as $R \to \infty$, a direct computation gives:
$$\int_{\Gamma'_R} \frac{dz}{z - x_0} = \int_0^\pi d\theta\; \frac{i R e^{i\theta}}{R e^{i\theta} - x_0} \to \int_0^\pi i\, d\theta = i\pi$$
The pole of $1/(z - x_0 + i\epsilon)$ sits at $z = x_0 - i\epsilon$, below the real axis, so by Cauchy's theorem the integral over the closed contour $\Gamma'_R \cup [-R, R]$ gives no contribution as $R \to \infty$; hence the real-line integral tends to $-i\pi$:
$$\lim_{\epsilon \to 0} \int_{\mathbb{R}} \frac{dx}{x - x_0 + i\epsilon} = -i\pi$$
On the other hand the principal value $P \int_{\mathbb{R}} \frac{dx}{x - x_0} = 0$ by symmetry, while $-i\pi\, \delta(x - x_0)$ integrated on the real line gives exactly the contribution $-i\pi$; we conclude that:
$$\lim_{\epsilon \to 0} \frac{1}{x - x_0 + i\epsilon} = P\, \frac{1}{x - x_0} - i\pi\, \delta(x - x_0)$$
(the case with $-i\epsilon$ follows by closing the contour in the lower half-plane).
Chapter 2

Stochastic Processes and Path Integrals

2.1 Diffusion Equation


Suppose we have a particle in a bath of other particles, and that its inertia is low enough that collisions with the bath produce movement. This process could in principle be described using Newton's laws of motion (we stay for now in the classical framework), but in practice this is impossible: we would have to deal with a number of equations of motion of order $10^{23}$. The solution to this problem is to treat the bath as a random noise acting on the particle (this is possible since we assume that the length and time scales of the motion of the background particles are much smaller than those of the particle itself). To begin, we introduce the particle density $\rho$ at position $x \in \mathbb{R}^d$ and at time $t$, such that the integral of $\rho$ over a region $A \subseteq \mathbb{R}^d$ gives the fraction of particles inside that region:
$$\int_A d^d x\; \rho(x, t) \tag{2.1}$$
Since particles move through the boundary of $A$, i.e. $\partial A$, there exists a current vector $j(x, t)$ whose flux through $\partial A$ describes the change in particle number:
$$\partial_t \int_A d^d x\; \rho(x, t) = -\int_{\partial A} dS\; j(x, t) \cdot \hat n = -\int_A d^d x\; \nabla \cdot j(x, t) \tag{2.2}$$
(the minus sign is due to the fact that when the divergence decreases the density increases: fewer particles are going out). Since the region $A$ is arbitrary:
$$\partial_t \rho(x, t) = -\nabla \cdot j(x, t) \tag{2.3}$$
At this moment we don't have any external field, so the only way to construct the current $j(x, t)$ is from $\rho$ and its derivatives; assuming $\rho$ is small (along with its derivatives) we choose:
$$j(x, t) = -D(x, t)\, \nabla \rho(x, t) \tag{2.4}$$
(the minus sign follows from a similar argument as before: the current flows from high to low density); we arrive at:
$$\partial_t \rho(x, t) = \nabla \cdot \left( D(x, t)\, \nabla \rho(x, t) \right) \tag{2.5}$$
For constant $D$ we obtain the diffusion equation (Fick's law):
$$\partial_t \rho(x, t) = D \nabla^2 \rho(x, t) \tag{2.6}$$
Side note: we can always choose $\rho$ normalized to 1 and interpret it as a probability distribution, something we will do later on.

2.2 Random walk and diffusion equation

We can derive the diffusion equation using a more "microscopic" (not in the strict sense) approach, by means of a discrete Markov process. Consider a particle moving on a one-dimensional lattice of spacing $l$, and take a discrete time with unit $\epsilon$. We call $W_{ij}(\epsilon)$ the transition matrix for jumping from site $j$ to site $i$ in a time step $\epsilon$, and $w_i(t_n)$ the probability of being at site $i$ at time $t_n$; thus, by the definition of the transition matrix:
$$w_i(t_n) = \sum_j W_{ij}\, w_j(t_{n-1}) \tag{2.7}$$
In vector form:
$$w(t_n) = W w(t_{n-1}) = W^n w(0)$$
Suppose that the only possible jumps are those between nearest-neighbour sites, i.e.:
$$W_{ij} = p_+ \delta_{i,j+1} + p_- \delta_{i,j-1} \tag{2.8}$$
where $p_+$ is the probability of a right jump, while $p_-$ of a left one. We now derive $w_i(t_n)$: in $n$ time steps, $n_+$ steps are taken to the right and $n_-$ to the left, such that $n_+ + n_- = n$; the position is $i = n_+ - n_-$, i.e.:
$$n_+ \equiv \frac{n + i}{2} \tag{2.9}$$
$$n_- \equiv \frac{n - i}{2} \tag{2.10}$$
If $n - i$ is odd or $|i| > n$ the probability is zero; otherwise it is binomial, $n_+ \sim B(n, p_+)$:
$$w_i(n) = \binom{n}{n_+} p_+^{n_+}\, p_-^{n_-} = \binom{n}{n_+} p_+^{n_+}\, p_-^{n - n_+} = \binom{n}{n_+} p_+^{\frac{n+i}{2}}\, p_-^{\frac{n-i}{2}} \tag{2.11}$$
To calculate moments we use the generating function:
$$\hat w(z, n) = \sum_{n_+ = 0}^{n} \binom{n}{n_+} z^{n_+} p_+^{n_+}\, p_-^{n - n_+} = (p_+ z + p_-)^n \tag{2.12}$$
$$\langle n_+ \rangle = z \partial_z \hat w \big|_{z=1} = n p_+ \tag{2.13}$$
$$\langle n_+^2 \rangle = \left( z \partial_z \right)^2 \hat w \big|_{z=1} = n p_+ \left( 1 + (n-1) p_+ \right) \tag{2.14}$$
Since $x_i = i l = l (2 n_+ - n)$ we obtain:
$$\langle x_n \rangle = n l (p_+ - p_-) \tag{2.15}$$
$$\mathrm{Var}(x_n) = 4 l^2 p_+ p_-\, n \tag{2.16}$$
Moving to the continuous limit we set $p_+ = p_- = 1/2$ and $n = t/\epsilon$:
$$\mathrm{Var}(x_t) = \frac{l^2}{\epsilon}\, t$$
The limit is taken as $l \to 0$, $\epsilon \to 0$ and $n \to \infty$ keeping $\frac{l^2}{\epsilon} \equiv 2D$ constant:
$$\mathrm{Var}(x_t) = 2 D t \tag{2.17}$$
This is the same result obtained by solving the diffusion equation: as expected, the binomial approaches a Gaussian distribution in the continuous limit. Indeed, using the central limit theorem on the distribution we obtain:
$$w(x, t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-\frac{x^2}{4 D t}}$$
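This derivation is easy to verify by sampling the binomial (2.11) directly (a minimal sketch; the lattice parameters are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
l, eps, n, n_walkers = 0.1, 0.01, 1000, 1_000_000
D = l**2 / (2 * eps)                 # keep l^2/eps = 2D fixed
t = n * eps

# n_+ ~ B(n, 1/2); position x = l (2 n_+ - n), as derived above
n_plus = rng.binomial(n, 0.5, size=n_walkers)
x = l * (2 * n_plus - n)

print(x.mean(), 0.0)                 # <x> = n l (p_+ - p_-) = 0
print(x.var(), 2 * D * t)            # Var = 4 l^2 p_+ p_- n = 2 D t = 10
```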
For further insight, let's take the continuum limit of the discrete master equation itself. In general a master equation reads:
$$w_i(t_{n+1}) = \sum_{j \ne i} r_j(t_n)\, w_j(t_n) \tag{2.18}$$
In our case:
$$w_i(t_{n+1}) = \frac{1}{2} \left( w_{i-1}(t_n) + w_{i+1}(t_n) \right) \tag{2.19}$$
Substitute $t_n = n\epsilon$; since we are looking for a continuum distribution we replace the site index $i$ by the position $x$:
$$w(x, t + \epsilon) = \frac{1}{2} \left( w(x - l, t) + w(x + l, t) \right) \tag{2.20}$$
$$w(x, t + \epsilon) - w(x, t) = \frac{1}{2} \left( w(x - l, t) + w(x + l, t) - 2 w(x, t) \right) \tag{2.21}$$
Now we send $\epsilon \to 0$, $l \to 0$ and $n \to \infty$ keeping $\frac{l^2}{\epsilon} \equiv 2D$ fixed, obtaining:
$$\partial_t w(x, t) = D \nabla^2 w(x, t) \tag{2.22}$$
i.e. we arrive at the diffusion equation 2.6.
Let's now solve the diffusion equation; to do so we look for eigenfunctions of $\nabla^2$:
$$\partial_x^2 \varphi_k = -k^2 \varphi_k \tag{2.23}$$
$$\varphi_k(x) = \frac{1}{\sqrt{2\pi}}\, e^{ikx} \tag{2.24}$$
for $k \in \mathbb{R}$. Decompose $w(x, t)$ w.r.t. $\varphi_k$ (it is a complete basis in $L^2$):
$$w(x, t) = \int_{\mathbb{R}} c_k(t)\, \varphi_k(x)\, dk \tag{2.25}$$
We obtain:
$$\frac{d c_k(t)}{dt} = -D k^2 c_k(t)$$
$$c_k(t) = c_k(0)\, e^{-D k^2 t}$$
$$w(x, t) = \int_{\mathbb{R}} c_k(0)\, e^{-D k^2 t}\, e^{ikx}\, \frac{dk}{\sqrt{2\pi}} \tag{2.26}$$
To impose initial conditions:
$$w(x, 0) = \int_{\mathbb{R}} c_k(0)\, e^{ikx}\, \frac{dk}{\sqrt{2\pi}}$$
$$c_k(0) = \int_{\mathbb{R}} w(x', 0)\, e^{-ikx'}\, \frac{dx'}{\sqrt{2\pi}}$$
For $w(x, 0) = \delta(x - x_0)$ we obtain $c_k(0) = \frac{1}{\sqrt{2\pi}}\, e^{-ikx_0}$ and:
$$w(x, t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-\frac{(x - x_0)^2}{4 D t}} \tag{2.27}$$
We can shift the time as well and consider the solution for $t > t_0$:
$$w(x, t | x_0, t_0) = \frac{\theta(t - t_0)}{\sqrt{4\pi D (t - t_0)}}\, e^{-\frac{(x - x_0)^2}{4 D (t - t_0)}} \tag{2.28}$$
We call this solution the propagator of the Brownian motion and denote it by $W(x, t | x_0, t_0)$; it satisfies:
$$W(x, t | x_0, t_0) = W(x - x_0, t - t_0 | 0, 0)$$
Mathematically speaking, $W$ is the Green function of the diffusion equation, and for an arbitrary initial condition $w(x_0, t_0)$ we can derive, using $W$, the full solution $w(x, t)$:
$$w(x, t) = \int dx_0\; W(x, t | x_0, t_0)\, w(x_0, t_0)$$
Considering now $t_0 < t' < t$:
$$w(x', t') = \int dx_0\; W(x', t' | x_0, t_0)\, w(x_0, t_0)$$
$$w(x, t) = \int dx'\; W(x, t | x', t')\, w(x', t')$$
we obtain, by combining these two equations:
$$w(x, t) = \int dx'\; W(x, t | x', t') \int dx_0\; W(x', t' | x_0, t_0)\, w(x_0, t_0)$$
and by comparison with the previous result:
$$W(x, t | x_0, t_0) = \int dx'\; W(x, t | x', t')\, W(x', t' | x_0, t_0) \tag{2.29}$$
i.e. the propagator satisfies the ESCK (Einstein-Smoluchowski-Chapman-Kolmogorov) relation.
2.3 Wiener Integral

To start we have to define an object $T$, which can be a finite subset of $\mathbb{R}$ or an interval, e.g. $T = [0, \infty)$ or $T = \{t_1, \dots, t_n\}$, and $\mathbb{R}^T$ as the set of all functions having $T$ as domain: in our examples, if $T = [0, \infty)$, $\mathbb{R}^T$ is the set of all functions $x : [0, \infty) \to \mathbb{R}$, while if $T = \{t_1, \dots, t_n\}$, $\mathbb{R}^T$ is the set of all sequences $\{x_1, \dots, x_n\} = \{x(t_1), \dots, x(t_n)\}$.
Using $T$ and $\mathbb{R}^T$, we want to construct a Brownian motion measure $P_w$ for sets $A \subset \mathbb{R}^T$, i.e. assign a probability to these sets (also called ensembles).
Start from finite sets: consider, $\forall n \in \mathbb{N}$, a finite set of time instants $T = \{t_1, \dots, t_n\}$ (where $t_i < t_{i+1}$ and $t_i \in \mathbb{R}$). Define $\Delta t_i = t_i - t_{i-1}$ and $H_i = [a_i, b_i]$, $\forall i = 1 \dots n$ with $a_i < b_i \in \mathbb{R}$: we call $A$ the set $\{x : x(t_1) \in H_1, \dots, x(t_n) \in H_n\}$. We can now, by means of the ESCK relation, define the measure $P_w(A) = P_{t_1, \dots, t_n}$ of $A$-like ensembles as:
$$P_{t_1, \dots, t_n} = \int_{H_1} dx_1 \cdots \int_{H_n} dx_n\; \prod_{i=1}^{n} \frac{1}{(4\pi D \Delta t_i)^{1/2}} \exp\left( -\frac{(x_i - x_{i-1})^2}{4 D \Delta t_i} \right) \tag{2.30}$$
Notice that this relation is valid for any $n \in \mathbb{N}$, and by use of the Kolmogorov extension theorem we are free to extend this result to arbitrary subsets of $\mathbb{R}^T$ (i.e. $n \to \infty$), given a suitable choice of $T$ (e.g. $[0, \infty)$). This way we have constructed a probability space $(\mathbb{R}^T, \mathcal{F}, P_w)$, where $\mathcal{F}$ is the set of measurable subsets of $\mathbb{R}^T$ to which $A$ belongs.
In practice, for any computation we rely on discretization; for example, if we had to find the expected value of a functional of the trajectory $x(\tau)$ such as $F(x(\tau)) = \int_0^t a(\tau) x(\tau)\, d\tau$, we would use $\sum_{i=0}^{N} a(t_i) x(t_i) \Delta t_i$ and take $N \to \infty$.

2.4 Some Calculations

Before continuing we define $C = [0, 0; t]$ as the set of trajectories starting from 0 at time 0 and lasting a time span $t$; $C = [0, 0; x, t]$ is the set of trajectories obtained by also fixing the endpoint: the particle starts at time 0 from position 0, ending up, after a time $t$, at position $x$.
2.4.1 Identity

Using the normalization of the Wiener measure:
$$\langle 1 \rangle_w = \int_{C = [0, 0; t]} dx_w(\tau)\; 1 = 1 \tag{2.31}$$

2.4.2 Return to start

We start from 0 and after a time $t$ we go back there:
$$\langle \delta(x(t)) \rangle_w = \int \prod_{i=1}^{N+1} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=0}^{N} \frac{(x_{i+1} - x_i)^2}{4 D \Delta t_i}}\; \delta(x_{N+1}) \tag{2.32}$$
The delta sets $x_{N+1}$ to zero; use, w.l.o.g., equal steps $\Delta t_i = \epsilon = t/(N+1)$:
$$= \frac{1}{(4\pi D \epsilon)^{\frac{N+1}{2}}} \int d^N x\; e^{-\frac{1}{4 D \epsilon}\, x^T A_N x}$$
$$A_N(i, i) = 2, \qquad A_N(i, j) = -\delta_{i,j+1} - \delta_{i,j-1}$$
We need to compute the determinant of $A_N$: using the Laplace expansion in the last column we obtain:
$$\det A_N = 2 \det A_{N-1} - \det A_{N-2}$$
with initial conditions $\det A_1 = 2$ and $\det A_2 = 3$; we obtain:
$$\det A_N = N + 1$$
The Gaussian integral yields:
$$\langle \delta(x(t)) \rangle_w = \frac{1}{(4\pi D \epsilon)^{\frac{N+1}{2}}}\, (4\pi D \epsilon)^{N/2}\, \frac{1}{\sqrt{N+1}} = \frac{1}{\sqrt{4\pi D \epsilon (N+1)}} = \frac{1}{(4\pi D t)^{1/2}} = W(0, t | 0, 0) \tag{2.33}$$
2.4.3 Two-point function

To compute the two-point correlation function it's straightforward to use the ESCK relation (take $t_1 < t_2$, with $\Delta t_1 = t_1$ and $\Delta t_2 = t_2 - t_1$):
$$\langle x(t_1)\, x(t_2) \rangle_w = \int \frac{dx_1\, dx_2}{\sqrt{4\pi D \Delta t_1}\, \sqrt{4\pi D \Delta t_2}}\; e^{-\frac{x_1^2}{4 D \Delta t_1}}\, e^{-\frac{(x_2 - x_1)^2}{4 D \Delta t_2}}\; x_1 x_2 \tag{2.34}$$
Using $x = x_1$ and $y = x_2 - x_1$:
$$\langle x(t_1)\, x(t_2) \rangle_w = 2 D t_1 = 2 D \min\{t_1, t_2\}$$
In general, for a walk started at $x_0$ at time $t_0$:
$$\langle x(t_1)\, x(t_2) \rangle_w = x_0^2 + 2 D \min\{t_1 - t_0,\, t_2 - t_0\} \tag{2.35}$$

2.4.4 A first functional

We want to compute the expected value of the functional $F\!\left( \int_0^t a(\tau) x(\tau)\, d\tau \right)$. First introduce $A(\tau) = \int_\tau^t a(s)\, ds$, i.e. $\dot A(\tau) = -a(\tau)$; then, integrating by parts (recall $x(0) = 0$ and $A(t) = 0$):
$$\int_0^t a(\tau)\, x(\tau)\, d\tau = \int_0^t A(\tau)\, \dot x(\tau)\, d\tau \tag{2.36}$$
Discretizing:
$$\sum_{i=1}^{N} A(t_i)(x_i - x_{i-1}) = \sum_{i=1}^{N} A_i\, (x_i - x_{i-1})$$
Then the average of the functional $F$ is the limit $N \to \infty$ of (setting $D = 1/4$):
$$I_N = \int \prod_{i=1}^{N} \frac{dx_i}{(\pi \Delta t_i)^{1/2}}\; F\!\left( \sum_{i=1}^{N} A_i (x_i - x_{i-1}) \right) e^{-\sum_{i=1}^{N} \frac{(x_i - x_{i-1})^2}{\Delta t_i}} \tag{2.37}$$
$$= \int \prod_{i=1}^{N} \frac{dy_i}{(\pi \Delta t_i)^{1/2}}\; F\!\left( \sum_{i=1}^{N} A_i y_i \right) e^{-\sum_{i=1}^{N} \frac{y_i^2}{\Delta t_i}} \tag{2.38}$$
We introduce the identity as a delta function, $1 = \int dz\, \delta(z - \sum_i A_i y_i)$, written in Fourier form:
$$I_N = \int dz\, F(z) \int \frac{d\alpha}{2\pi}\, e^{i\alpha z} \int \prod_{i=1}^{N} \frac{dy_i}{(\pi \Delta t_i)^{1/2}}\; e^{-\sum_{i=1}^{N} \left( \frac{y_i^2}{\Delta t_i} + i \alpha A_i y_i \right)}$$
$$= \int \frac{d\alpha}{2\pi} \int dz\, F(z)\, \exp\left\{ -\frac{\alpha^2}{4} \sum_{i=1}^{N} A_i^2 \Delta t_i + i \alpha z \right\} = \frac{1}{\sqrt{\pi \sum_{i=1}^{N} A_i^2 \Delta t_i}} \int dz\, F(z)\, \exp\left\{ -z^2 \Big/ \sum_{i=1}^{N} A_i^2 \Delta t_i \right\}$$
As $N \to \infty$:
$$\lim_{N \to \infty} \sum_{i=1}^{N} A_i^2 \Delta t_i = \int_0^t A^2(\tau)\, d\tau = \int_0^t \left[ \int_\tau^t a(s)\, ds \right]^2 d\tau \equiv R \tag{2.39}$$
$$\left\langle F\!\left( \int_0^t a(\tau) x(\tau)\, d\tau \right) \right\rangle_W = \frac{1}{\sqrt{\pi R}} \int dz\, F(z)\, e^{-z^2 / R} \tag{2.40}$$
For $D \ne 1/4$ send $R \to 4 D R$.
Using $F(z) = e^{hz}$ we obtain the moment generating function of $\int_0^t a(\tau) x(\tau)\, d\tau$:
$$\left\langle e^{h \int_0^t a(\tau) x(\tau)\, d\tau} \right\rangle_W = e^{h^2 R / 4} \tag{2.41}$$
$$\left\langle \left( \int_0^t a(\tau) x(\tau)\, d\tau \right)^{2k+1} \right\rangle_W = 0 \tag{2.42}$$
$$\left\langle \left( \int_0^t a(\tau) x(\tau)\, d\tau \right)^{2k} \right\rangle_W = \left( \frac{R}{2} \right)^{k} \frac{(2k)!}{2^k\, k!} \tag{2.43}$$

2.4.5 Potential-like functional

In this section we want to compute the expected value of the functional:
$$e^{-\int_0^t p(\tau)\, x^2(\tau)\, d\tau} \tag{2.44}$$
Start by discretizing (again with $D = 1/4$ and equal steps $\Delta t_i = \epsilon$):
$$I_4^{(N)} = \int \prod_{i=1}^{N} \frac{dx_i}{\sqrt{\pi \epsilon}}\; e^{-\sum_{i=1}^{N} \left[ \frac{(x_i - x_{i-1})^2}{\epsilon} + p_i x_i^2 \epsilon \right]} \tag{2.45}$$
We can recast the argument of the exponential as a bilinear form $x^T a x$ using the matrix $a$: calling $a_i = p_i \epsilon + \frac{2}{\epsilon}$ for $i = 1 \dots N-1$ and $a_N = p_N \epsilon + \frac{1}{\epsilon}$, $a$ is a tridiagonal matrix with the $a_i$'s as diagonal elements and $-\frac{1}{\epsilon}$ on the two off-diagonals:
$$a = \begin{pmatrix} a_1 & -\frac{1}{\epsilon} & 0 & \cdots & 0 \\ -\frac{1}{\epsilon} & a_2 & -\frac{1}{\epsilon} & \cdots & 0 \\ 0 & -\frac{1}{\epsilon} & a_3 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & -\frac{1}{\epsilon} \\ 0 & \cdots & \cdots & -\frac{1}{\epsilon} & a_N \end{pmatrix} \tag{2.46}$$
So, in terms of $\det a$, we have that:
$$I_4^{(N)} = \left( \epsilon^N \det a \right)^{-1/2} = \left( \det(\epsilon a) \right)^{-1/2}$$
We denote by $D_k^N$ the determinant of the matrix obtained by removing the first $k - 1$ rows and columns from $\epsilon a$ (so that $D_1^N = \det(\epsilon a)$):
$$D_k^N = \det \begin{pmatrix} \epsilon a_k & -1 & 0 & \cdots & 0 \\ -1 & \epsilon a_{k+1} & -1 & \cdots & 0 \\ 0 & -1 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \epsilon a_{N-1} & -1 \\ 0 & \cdots & \cdots & -1 & \epsilon a_N \end{pmatrix} \tag{2.47}$$
By Laplace expansion on the first row of $D_k^N$:
$$D_k^N = \epsilon a_k\, D_{k+1}^N - D_{k+2}^N = (\epsilon^2 p_k + 2)\, D_{k+1}^N - D_{k+2}^N \tag{2.48}$$
$$D_k^N - 2 D_{k+1}^N + D_{k+2}^N = \epsilon^2\, p_k\, D_{k+1}^N \tag{2.49}$$
Calling $\tau = (k - 1)\epsilon$ and taking $\epsilon \to 0$ and $N \to \infty$, the second difference becomes a second derivative and we arrive at:
$$\partial_\tau^2 D(\tau) = p(\tau)\, D(\tau) \tag{2.50}$$
From this we find that $D_1^N \to D(0)$. The "initial" conditions sit at $\tau = t$:
since $D_N^{(N)} = p_N \epsilon^2 + 1$ we find that $D(t) = 1$;
since $D_{N-1}^{(N)} = p_N p_{N-1} \epsilon^4 + 2 p_N \epsilon^2 + p_{N-1} \epsilon^2 + 1$ we have that:
$$\dot D(t) = \lim_{\epsilon \to 0} \frac{D_N^{(N)} - D_{N-1}^{(N)}}{\epsilon} = 0$$
Going back to our integral:
$$I_4 = \frac{1}{\sqrt{D(0)}}$$
For $p(\tau) = k^2$:
$$D(\tau) = A e^{k\tau} + B e^{-k\tau} \tag{2.51}$$
$$\dot D(\tau) = k \left( A e^{k\tau} - B e^{-k\tau} \right) \tag{2.52}$$
Using the previously stated conditions $D(t) = 1$, $\dot D(t) = 0$:
$$D(\tau) = \frac{1}{2} e^{k(t - \tau)} + \frac{1}{2} e^{-k(t - \tau)} = \cosh k(t - \tau) \tag{2.53}$$
$$D(0) = \cosh k t \tag{2.54}$$
From which:
$$\lim_{N \to \infty} I_4^{(N)} = I_4 = \frac{1}{\sqrt{\cosh k t}} \tag{2.55}$$
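Result (2.55) can be checked by brute-force sampling of Brownian paths (an illustrative sketch; path count and step size are arbitrary choices). With $D = 1/4$ the increments have variance $\Delta\tau/2$:

```python
import numpy as np

rng = np.random.default_rng(2)
k, t, n_steps, n_paths = 1.0, 1.0, 200, 50_000
dt = t / n_steps

# Brownian paths with D = 1/4: increments ~ N(0, 2*D*dt) = N(0, dt/2), x(0) = 0
dx = rng.normal(0.0, np.sqrt(0.5 * dt), size=(n_paths, n_steps))
x = np.cumsum(dx, axis=1)

# Riemann sum of int_0^t k^2 x(tau)^2 dtau along each path
S = k**2 * (x**2).sum(axis=1) * dt
print(np.exp(-S).mean(), 1 / np.sqrt(np.cosh(k * t)))   # both ~0.805
```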
A generalization of the previous computation is:
$$\left\langle e^{-\int_0^t p(\tau)\, x^2(\tau)\, d\tau}\; \delta(x(t) - x) \right\rangle \tag{2.56}$$
i.e. the previous expected value but with fixed endpoint. Start by rewriting the delta function:
$$\left\langle e^{-\int_0^t p(\tau) x^2(\tau)\, d\tau}\; \delta(x - x(t)) \right\rangle = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\alpha\; e^{i\alpha x} \left\langle e^{-\int_0^t p(\tau) x^2(\tau)\, d\tau}\; e^{-i\alpha x(t)} \right\rangle \tag{2.57}$$
Discretize the expected value ($a$ is the matrix above):
$$\hat I_4^{(N)} = \int \prod_{i=1}^{N} \frac{dx_i}{\sqrt{\pi\epsilon}}\; e^{-\sum_{i=1}^{N} \left[ \frac{(x_i - x_{i-1})^2}{\epsilon} + p_i x_i^2 \epsilon \right] - i\alpha x_N} = \int \frac{d^N x}{(\pi\epsilon)^{N/2}}\; e^{-x^T a x - i\alpha x_N} = \left( \epsilon^N D_1^{(N)} \right)^{... }\!\!\!\!\; \left( D_1^{(N)} \right)^{-1/2} e^{-\frac{\alpha^2}{4}\, a^{-1}_{N,N}} \tag{2.58, 2.59}$$
The value of $a^{-1}_{N,N}$ can be read off from the structure of the matrix $a$:
$$a^{-1}_{N,N} = \frac{|a'|}{|a|} = \frac{\epsilon^N |a'|}{D_1^{(N)}} = \frac{\tilde D_1^{(N-1)}}{D_1^{(N)}} \tag{2.60}$$
where $a'$ is the matrix obtained by removing the last row and column from $a$, and we introduced $\tilde D_1^{(N-1)} \equiv \epsilon^N |a'|$ (note the extra factor $\epsilon$ relative to the normalization of $D$). Define $\tilde D_k^{(N-1)}$ accordingly by eliminating the first $k - 1$ rows and columns:
$$\tilde D_k^{(N-1)} = \epsilon \cdot \det \begin{pmatrix} \epsilon a_k & -1 & 0 & \cdots & 0 \\ -1 & \epsilon a_{k+1} & -1 & \cdots & 0 \\ 0 & -1 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \epsilon a_{N-2} & -1 \\ 0 & \cdots & \cdots & -1 & \epsilon a_{N-1} \end{pmatrix} \tag{2.61}$$
As $N \to \infty$ we get for $\tilde D$ the same differential equation as the one for $D$:
$$\partial_\tau^2 \tilde D(\tau) = p(\tau)\, \tilde D(\tau) \tag{2.62}$$
but with different conditions at $\tau = t$:
$$\lim_{N \to \infty} \tilde D_{N-1}^{(N-1)} = \epsilon^2 a_{N-1} = 2\epsilon + O(\epsilon^3) \to 0 \tag{2.63}$$
$$\lim_{N \to \infty} \tilde D_{N-2}^{(N-1)} = \epsilon \left( \epsilon^2 a_{N-2}\, a_{N-1} - 1 \right) = 3\epsilon + O(\epsilon^3) \tag{2.64, 2.65}$$
$$\frac{\tilde D_{N-1}^{(N-1)} - \tilde D_{N-2}^{(N-1)}}{\epsilon} = \frac{2\epsilon - 3\epsilon + o(\epsilon)}{\epsilon} \to -1 \tag{2.66}$$
so $\tilde D(t) = 0$ and $\dot{\tilde D}(t) = -1$. So, given $D(\tau)$ and $\tilde D(\tau)$, performing the Gaussian $\alpha$ integral in (2.57):
$$\hat I_4^{(N)} = \frac{1}{\sqrt{\pi \tilde D_1^{(N-1)}}}\; e^{-x^2 \frac{D_1^{(N)}}{\tilde D_1^{(N-1)}}} \;\to\; \hat I_4 = \frac{1}{\sqrt{\pi \tilde D(0)}}\; e^{-x^2 \frac{D(0)}{\tilde D(0)}} \tag{2.67}$$
For the special case $p(\tau) = k^2$:
$$D(\tau) = \cosh k(t - \tau) \tag{2.68}$$
$$\tilde D(\tau) = \frac{1}{k} \sinh k(t - \tau) \tag{2.69}$$
$$\hat I_4(x) = \sqrt{\frac{k}{\pi \sinh kt}}\; e^{-x^2\, k \coth kt} \tag{2.70}$$
Chapter 3

Fokker-Planck Equation

3.1 Master Equation derivation


Suppose we want to describe the motion of a particle via a discrete Markov process; the particle jumps from position $j$ to position $i$ in a time step $\epsilon$ according to a transition matrix $W_{ij}$. The probability vector $w$ at time $t_{n+1}$ is given by:
$$w_i(t_{n+1}) = \sum_j W_{ij}(t_n)\, w_j(t_n) \tag{3.1}$$
The time instants are $t_n = n\epsilon$, while the positions are taken to lie on a one-dimensional lattice of spacing $l$. Let's try to find the continuous version of this equation. Introduce $w(x_i, t_n) = \frac{1}{l} w_i(t_n)$ and use an integral version of the sum above:
$$w(x, t_{n+1}) = \int dz\; W(z | x - z, t_n)\, w(x - z, t_n) \tag{3.2}$$
where $W(z | x - z)$ describes the probability of moving by an amount $z$ starting from $x - z$. For probability conservation, given that the initial state must be normalized, we have to require that $\int dz\, W(z | x, t_n) = 1$; using this property we consider:
$$w(x, t_{n+1}) - w(x, t_n) = \int dz\; \left[ W(z | x - z, t_n)\, w(x - z, t_n) - W(z | x, t_n)\, w(x, t_n) \right] \tag{3.3}$$
Now we expand $W w$ in the argument $x - z$ around small $z$: this comes from a physical argument, since $W$ must be large only for small jumps (the time steps are supposed to be short):
$$w(x, t_{n+1}) - w(x, t_n) = \int dz\; \left[ -z\, \partial_x \left[ W(z | x, t_n)\, w(x, t_n) \right] + \frac{z^2}{2}\, \partial_x^2 \left[ W(z | x, t_n)\, w(x, t_n) \right] - \dots \right] \tag{3.4}$$
Collecting all orders and pulling out the moments in $z$:
$$w(x, t_{n+1}) - w(x, t_n) = \sum_{k=1}^{\infty} \frac{(-1)^k}{k!}\, \partial_x^k \left[ w(x, t_n) \int dz\; z^k\, W(z | x) \right] \tag{3.5}$$
To take a step further we mimic Brownian motion, so we require a mean depending on some "force" and a variance of order $\epsilon$:
$$W(z | x, t_n) = F\!\left( \frac{z - \epsilon f(x, t_n)}{\sqrt{\epsilon \hat D(x, t_n)}} \right) \frac{1}{\sqrt{\epsilon \hat D(x, t_n)}} \tag{3.6}$$
$$F(y) = F(-y), \qquad \int dy\, F(y) = 1$$
for some functions $f$ and $\hat D$. We find that:
$$\int dz\; z\, W(z | x, t_n) = \epsilon f(x, t_n) \tag{3.7}$$
$$\int dz\; z^2\, W(z | x, t_n) = \epsilon \hat D(x, t_n) \int dy\, F(y)\, y^2 + O(\epsilon^2) \tag{3.8}$$
All other moments of $z$ are negligible; in the end, dividing by $\epsilon$, taking the $\epsilon \to 0$ limit and setting $D = \frac{1}{2} \hat D \int dy\, F(y)\, y^2$, we arrive at:
$$\partial_t w(x, t) = \partial_x \left[ -f(x, t)\, w(x, t) + \partial_x \left[ D(x, t)\, w(x, t) \right] \right] \tag{3.9}$$
This is the Fokker-Planck equation; notice that by imposing $f = 0$ we recover the diffusion equation.
3.2 Langevin Equation

Let's go back to the Wiener path integral. In the discretization we found that the probability density of the difference of two subsequent points $x_{i+1}$ and $x_i$ is:
$$dP(x_{i+1} - x_i = z_i) = \frac{dz_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\frac{z_i^2}{4 D \Delta t_i}} \tag{3.10}$$
i.e. (mean zero, variance $2 D \Delta t_i$):
$$z_i \sim \mathcal{N}(0,\; 2 D \Delta t_i) \tag{3.11}$$
from which:
$$x_{i+1} = x_i + z_i \tag{3.12}$$
Introducing another random variable $\Delta B_i$ such that $z_i = \sqrt{2D}\, \Delta B_i$ (i.e. $\Delta B_i \sim \mathcal{N}(0, \Delta t_i)$) we can rewrite the previous expression as:
$$\Delta x = \sqrt{2D}\, \Delta B \tag{3.13}$$
or in a formal way:
$$dX(t) = \sqrt{2D}\, dB(t) \tag{3.14}$$
where $B(t)$ is called pure Brownian motion, which again can be recast in terms of a standard normal variable $\xi_i$ in the following way:
$$\Delta B_i = \xi_i\, \sqrt{\Delta t_i} \tag{3.15}$$
At this point this formalism seems pointless; but consider now a (Brownian) particle in a bath of smaller particles (a liquid, for example), subject to friction and to an external force (composed of a deterministic force and a noise due to collisions with the bath):
$$m \ddot r = -\gamma \dot r + F \tag{3.16}$$
for $F = F_{ext} + F_{noise}$. Now if we look at the system on time scales $t$ for which we can neglect the inertia of the particle, i.e. $t \gg m/\gamma$, we arrive at (in one dimension):
$$\dot x = \frac{1}{\gamma} F = \frac{1}{\gamma} F_{ext} + \frac{1}{\gamma} F_{noise} \tag{3.17}$$
or:
$$x_{i+1} = x_i + \frac{1}{\gamma} F_{ext}\, \Delta t_i + \frac{1}{\gamma} F_{noise}\, \Delta t_i \tag{3.18}$$
Setting $F_{ext} = 0$ we recover the equation $dX(t) = \sqrt{2D}\, dB(t)$ if we identify:
$$\frac{1}{\gamma} F_{noise}\, \Delta t_i = \sqrt{2D}\, \Delta B_i = \sqrt{2D}\, \xi_i \sqrt{\Delta t_i} \tag{3.19}$$
Setting $f = \frac{F_{ext}}{\gamma}$ we arrive at the (formal) Langevin equation:
$$dx(t) = f(x(t), t)\, dt + \sqrt{2D}\, dB(t) \tag{3.20}$$
which is understood in the discrete form (since $B$ is not differentiable):
$$x_{i+1} = x_i + f(x_i, t_i)\, \Delta t_i + \sqrt{2D}\, \Delta B_i \tag{3.21}$$
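The discrete rule (3.21) is exactly the Euler-Maruyama integration scheme; a minimal sketch (the drift $f(x) = -x$ and all parameter values are arbitrary illustrative choices):

```python
import numpy as np

def euler_maruyama(f, x0, D, t, n_steps, rng):
    """Integrate dx = f(x, t) dt + sqrt(2D) dB by the discrete rule (3.21)."""
    dt = t / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))      # Delta B_i ~ N(0, dt)
        x[i + 1] = x[i] + f(x[i], i * dt) * dt + np.sqrt(2 * D) * dB
    return x

rng = np.random.default_rng(3)
path = euler_maruyama(lambda x, t: -x, x0=0.0, D=0.5, t=10.0, n_steps=10_000, rng=rng)
print(path[-1])
```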

3.3 Stochastic Calculus

3.3.1 Introduction

Having introduced the quantity $\Delta B_i$ and, formally, $dB(t)$, let's try to go further and compute integrals containing a Brownian motion, i.e.:
$$S = \int_{t_0}^{t} G(s)\, dB(s) \tag{3.22}$$
This integral must be understood in a mean-squared sense:
$$S = \operatorname*{ms\,lim}_{\max \Delta t_i \to 0}\; \sum_{i=1}^{n} G(\tau_i) \left( B(t_i) - B(t_{i-1}) \right) \tag{3.23}$$
where $\tau_i$ is a point lying in the interval $[t_{i-1}, t_i]$: $\tau_i = \lambda t_i + (1 - \lambda) t_{i-1}$ for $0 \le \lambda \le 1$. The mean-squared limit means, calling $S_n = \sum_{i=1}^{n} G(\tau_i)(B(t_i) - B(t_{i-1}))$:
$$\lim_{n \to \infty} \langle (S - S_n)^2 \rangle = 0 \tag{3.24}$$
As a warm-up, let's calculate the expected value of $S_n$ for $G(s) = B(s)$; using $\langle B(t) B(t') \rangle = \min\{t - t_0,\, t' - t_0\}$:
$$\langle S_n \rangle = \sum_{i=1}^{n} \left[ (\tau_i - t_0) - (t_{i-1} - t_0) \right] = \lambda (t - t_0) \tag{3.25}$$
Notice that the value depends on the discretization (so on $\lambda$); $\lambda$ remains a free parameter, and by fixing it we choose a particular set of rules for integrals containing Brownian motions. Using $\lambda = 0$ we choose the Ito prescription (which preserves causality), using $\lambda = 1/2$ the Stratonovich one: the first sets $\tau_i = t_{i-1}$, the second $\tau_i = \frac{t_i + t_{i-1}}{2}$.
Now set $G(s) = B(s)$ and use the Ito prescription:
$$S_n = \sum_{i=1}^{n} B_{i-1} (B_i - B_{i-1}) = \frac{1}{2} \sum_{i=1}^{n} \left[ (B_{i-1} + \Delta B_i)^2 - B_{i-1}^2 - (\Delta B_i)^2 \right] \tag{3.26}$$
The first part telescopes:
$$\sum_{i=1}^{n} \left[ (B_{i-1} + \Delta B_i)^2 - B_{i-1}^2 \right] = \sum_i \left( B_i^2 - B_{i-1}^2 \right) = B_n^2 - B_0^2 \tag{3.27, 3.28}$$
which leads to:
$$S_n = \frac{B_n^2 - B_0^2}{2} - \frac{1}{2} \sum_{i=1}^{n} (\Delta B_i)^2 \tag{3.29}$$
We have now handled the first part; for the second term we make a guess: suppose $\sum_{i=1}^{n} (\Delta B_i)^2$ gives a contribution of $t - t_0$, i.e.:
$$\lim_{n \to \infty} \left\langle \left( \sum_{i=1}^{n} (\Delta B_i)^2 - (t - t_0) \right)^{\!2} \right\rangle = 0 \tag{3.30}$$
$$\lim_{n \to \infty} \left\langle (t - t_0)^2 + \sum_{i,j} (\Delta B_i)^2 (\Delta B_j)^2 - 2 (t - t_0) \sum_i (\Delta B_i)^2 \right\rangle = 0 \tag{3.31}$$
Now we use the fact that all the $\Delta B_i$ are independent to say that $\langle \sum_i (\Delta B_i)^2 \rangle = \sum_i \Delta t_i = t - t_0$. We are left with:
$$\lim_{n \to \infty} \left\langle \sum_{i,j} (\Delta B_i)^2 (\Delta B_j)^2 - (t - t_0)^2 \right\rangle = 0 \tag{3.32}$$
Since $\langle (\Delta B_i)^2 (\Delta B_j)^2 \rangle = 3 \delta_{i,j} \Delta t_i^2 + (1 - \delta_{ij}) \Delta t_i \Delta t_j$:
$$\sum_{i,j} \langle (\Delta B_i)^2 (\Delta B_j)^2 \rangle - (t - t_0)^2 = 3 \sum_i \Delta t_i^2 + \sum_{i \ne j} \Delta t_i \Delta t_j - (t - t_0)^2 = 2 \sum_i \Delta t_i^2 \le \tag{3.33}$$
$$\le 2 \left( \max_k \Delta t_k \right) \sum_i \Delta t_i = 2 \left( \max_k \Delta t_k \right) (t - t_0) \to 0 \tag{3.34, 3.35}$$
as $n \to \infty$. In the end:
$$S = \frac{B_t^2 - B_0^2}{2} - \frac{t - t_0}{2} \tag{3.36}$$
This is the result using the Ito prescription, the one we'll keep throughout the notes. The Stratonovich one consists in finding the mean-square limit of:
$$S_n = \sum_{i=1}^{n} B\!\left( \frac{t_i + t_{i-1}}{2} \right) \left( B(t_i) - B(t_{i-1}) \right) \tag{3.37}$$
i.e. taking $\tau_i$ as the midpoint of $[t_{i-1}, t_i]$. The full calculation gives:
$$S_{Strat} = \frac{B_t^2 - B_0^2}{2}$$
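A numerical illustration of (3.36) (a minimal sketch; discretization parameters are arbitrary): the left-endpoint (Ito) sums converge in mean square to $(B_t^2 - t)/2$ for $t_0 = 0$, $B_0 = 0$:

```python
import numpy as np

rng = np.random.default_rng(4)
t, n_steps, n_paths = 1.0, 1000, 10_000
dt = t / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

# Ito prescription: evaluate the integrand at the left endpoint B_{i-1}
S_ito = (B[:, :-1] * dB).sum(axis=1)
target = 0.5 * B[:, -1] ** 2 - 0.5 * t    # (B_t^2 - B_0^2)/2 - (t - t_0)/2

print(np.mean((S_ito - target) ** 2))     # ~t*dt/2: mean-square error -> 0 as dt -> 0
```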

3.3.2 Ito Integrals

In general a stochastic integral $\int_{t_0}^{t} G(\tau)\, dB(\tau)$, understood according to the Ito prescription, is well defined if the integrand is a non-anticipating function, i.e. $G(\tau)$ is statistically independent of $B(s) - B(\tau)$ for $s > \tau$ (it may depend only on the history of $B$ up to time $\tau$). Examples of non-anticipating integrands are:

• $B(\tau)$

• $G(\tau) = \int_{t_0}^{\tau} g(s)\, dB(s)$ for $g$ non-anticipating

• $G(\tau) = \int_{t_0}^{\tau} g(s)\, ds$ for $g$ non-anticipating

• $G(\tau) = \int_{t_0}^{\tau} F(B(s))\, dB(s)$

Now we want to prove a very powerful differential relation:
$$(dB(\tau))^2 = d\tau \tag{3.38}$$
Indeed, given a non-anticipating function $G$, let's prove that:
$$\int_{t_0}^{t} G(\tau)\, (dB(\tau))^2 = \int_{t_0}^{t} G(\tau)\, d\tau \tag{3.39}$$
To do that consider:
$$\lim_{N \to \infty} I_N = 0 \tag{3.40}$$
$$I_N \equiv \left\langle \left( \sum_i G_{i-1} \left( \Delta B_i^2 - \Delta t_i \right) \right)^{\!2} \right\rangle \tag{3.41}$$
If we prove that the limit is zero, we prove 3.39.
$$I_N = \sum_{i,j} \left\langle G_{i-1} G_{j-1} (\Delta B_i^2 - \Delta t_i)(\Delta B_j^2 - \Delta t_j) \right\rangle = \sum_{i=1}^{N} \left\langle G_{i-1}^2 \left( (\Delta B_i)^2 - \Delta t_i \right)^2 \right\rangle + 2 \sum_{i > j} \left\langle G_{i-1} G_{j-1} (\Delta B_i^2 - \Delta t_i)(\Delta B_j^2 - \Delta t_j) \right\rangle \tag{3.42, 3.43}$$
Consider the argument of the first sum; since $G_{i-1}$ is non-anticipating it is independent of $\Delta B_i$:
$$\left\langle G_{i-1}^2 \left( (\Delta B_i)^2 - \Delta t_i \right)^2 \right\rangle = \langle G_{i-1}^2 \rangle \Big( \underbrace{\langle \Delta B_i^4 \rangle}_{3 \Delta t_i^2} - 2 \Delta t_i \underbrace{\langle \Delta B_i^2 \rangle}_{\Delta t_i} + (\Delta t_i)^2 \Big) = 2 \Delta t_i^2\, \langle G_{i-1}^2 \rangle \tag{3.44, 3.45}$$
Consider the argument of the second sum for $i > j$:
$$\left\langle G_{i-1} G_{j-1} (\Delta B_i^2 - \Delta t_i)(\Delta B_j^2 - \Delta t_j) \right\rangle \tag{3.46}$$
since $(G_{i-1} G_{j-1})(\Delta B_j^2 - \Delta t_j)$ is independent of $\Delta B_i^2 - \Delta t_i$, it factorizes:
$$\left\langle G_{i-1} G_{j-1} (\Delta B_j^2 - \Delta t_j) \right\rangle \underbrace{\left\langle \Delta B_i^2 - \Delta t_i \right\rangle}_{= 0} \tag{3.47}$$
The second factor is zero since $\langle \Delta B_i^2 \rangle = \Delta t_i$. We are left with:
$$I_N = 2 \sum_{i=1}^{N} \Delta t_i^2\, \langle G_{i-1}^2 \rangle \tag{3.48}$$
If $\sum_i \langle G_{i-1}^2 \rangle \Delta t_i < \infty$ or $\sup \langle G^2 \rangle < \infty$:
$$\lim_{N \to \infty} I_N \le \text{const} \cdot \lim_{N \to \infty} \max_i \{\Delta t_i\} = 0 \tag{3.49}$$
We have proved 3.38, that is $(dB(\tau))^2 = d\tau$. We can generalize this formula; indeed for any $k > 0$ we have:
$$(dB(\tau))^{k+2} = 0 \tag{3.50}$$

3.3.3 Differentiation rules

Fix now a function $f(x(t), t)$; its finite difference is (up to second order):
$$f(x + \Delta x, t + \Delta t) - f(x, t) \approx \partial_t f(x, t)\, \Delta t + \partial_x f(x, t)\, \Delta x + \frac{1}{2} \partial_t^2 f(x, t)\, (\Delta t)^2 + \frac{1}{2} \partial_x^2 f(x, t)\, (\Delta x)^2$$
If $x(t)$ were $B(t)$, i.e. a pure Brownian motion, we would have:
$$\Delta f \approx \partial_t f(B, t)\, \Delta t + \partial_B f(B, t)\, \Delta B + \frac{1}{2} \partial_t^2 f(B, t)\, (\Delta t)^2 + \frac{1}{2} \partial_B^2 f(B, t)\, (\Delta B)^2 \tag{3.51}$$
Sending $\Delta t \to dt$, remembering that $(dB(t))^2 = dt$ and ignoring terms of order higher than 1:
$$df = \left( \partial_t f(B, t) + \frac{1}{2} \partial_B^2 f(B, t) \right) dt + \partial_B f(B, t)\, dB(t) \tag{3.52}$$
i.e. the terms of order $dt$ come from the first derivative w.r.t. time and the second w.r.t. the Brownian motion.
Example. Compute the differential of $B^n(t)$:
$$d(B^n) = (B(t) + dB(t))^n - B^n(t) = \sum_{k=0}^{n} \binom{n}{k} B^{n-k}(t)\, dB^k(t) - B^n(t)$$
Ignoring all terms $(dB(\tau))^{m+2}$ for $m > 0$ (they are zero!):
$$dB^n(t) = \binom{n}{0} B^n(t) + \binom{n}{1} B^{n-1}\, dB(t) + \binom{n}{2} B^{n-2}\, dB^2(t) - B^n(t)$$
$$dB^n(t) = n B^{n-1}\, dB(t) + \frac{n(n-1)}{2}\, B^{n-2}\, dt \tag{3.53}$$
The differential is made up of the regular part ($n x^{n-1}$) coming from standard calculus and the Ito part, $\frac{n(n-1)}{2} x^{n-2}$, related to the second derivative! Now take $n = m + 1$:
$$dB^{m+1}(t) = (m+1)\, B^m\, dB(t) + \frac{m(m+1)}{2}\, B^{m-1}\, dt$$
Divide by $m + 1$, integrate and rearrange:
$$\int_{t_0}^{t} B^m(\tau)\, dB(\tau) = \frac{B^{m+1}(t) - B^{m+1}(t_0)}{m+1} - \frac{m}{2} \int_{t_0}^{t} B^{m-1}(\tau)\, d\tau \tag{3.54}$$

3.3.4 Correlation

Let $G$ and $H$ be non-anticipating functions. We want to prove that:
$$\left\langle \int_{t_0}^{t} G(\tau_1)\, dB(\tau_1) \int_{t_0}^{t} H(\tau_2)\, dB(\tau_2) \right\rangle = \int_{t_0}^{t} \langle G(\tau) H(\tau) \rangle\, d\tau \tag{3.55}$$
This result is somewhat expected, since $\langle dB(\tau)\, dB(\tau') \rangle = \delta(\tau - \tau')\, d\tau\, d\tau'$. However, let's proceed with the discretization:
$$\left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} G_{i-1} H_{j-1}\, \Delta B_i\, \Delta B_j \right\rangle = \sum_{i=1}^{N} \left\langle G_{i-1} H_{i-1}\, (\Delta B_i)^2 \right\rangle + \sum_{i > j} \left\langle (G_{i-1} H_{j-1} + G_{j-1} H_{i-1})\, \Delta B_j\, \Delta B_i \right\rangle \tag{3.56, 3.57}$$
Since $\Delta B_i$ is independent of $(G_{i-1} H_{j-1} + G_{j-1} H_{i-1}) \Delta B_j$ for $i > j$, the second term vanishes:
$$\sum_{i > j} \left\langle (G_{i-1} H_{j-1} + G_{j-1} H_{i-1})\, \Delta B_j\, \Delta B_i \right\rangle = \sum_{i > j} \left\langle (G_{i-1} H_{j-1} + G_{j-1} H_{i-1})\, \Delta B_j \right\rangle \underbrace{\langle \Delta B_i \rangle}_{= 0} = 0 \tag{3.58}$$
The remaining term gives (due to independence of $G_{i-1} H_{i-1}$ and $\Delta B_i$), as $N \to \infty$:
$$\sum_{i=1}^{N} \langle G_{i-1} H_{i-1} \rangle\, \langle (\Delta B_i)^2 \rangle \to \int_{t_0}^{t} \langle G(\tau) H(\tau) \rangle\, d\tau \tag{3.59}$$

3.3.5 Change Of Variables

Start from a Langevin equation with $g(x, t) \equiv \sqrt{2 D(x, t)}$:
$$dx(t) = f(x(t), t)\, dt + g(x(t), t)\, dB(t)$$
We want to understand how this equation transforms when we consider a function $h$ of $x$. Let's write the differential of $h(x(t))$:
$$dh(x(t)) = \frac{\partial h}{\partial x}\, dx(t) + \frac{(dx(t))^2}{2}\, \frac{\partial^2 h}{\partial x^2} + \dots = h'(x(t)) \left[ f\, dt + g\, dB(t) \right] + \frac{h''(x(t))}{2} \left[ f\, dt + g\, dB(t) \right]^2 + \dots \tag{3.60, 3.61}$$
Using $(dB)^2 = dt$ and dropping higher-order terms, in the end we get:
$$dh = dt \left( h' f + \frac{h''}{2} g^2 \right) + h' g\, dB \tag{3.62}$$

3.3.6 Fokker-Planck derivation from Langevin

Consider the function $w(x, t) = \langle \delta(x - x(t)) \rangle_B$ (the average is done over the Brownian motion), where $x(t)$ obeys a Langevin equation. Take $h \in C^2(\mathbb{R})$ non-anticipating. From the change of variables formula 3.62:
$$\langle dh(x(t)) \rangle_B = dt \left\langle h'(x(t))\, f(x(t), t) + \frac{h''(x(t))\, g^2(x(t), t)}{2} \right\rangle_B + \underbrace{\langle h'(x(t))\, g(x(t), t)\, dB(t) \rangle_B}_{\equiv 0}$$
The last term is zero due to independence ($h$ is non-anticipating) and the identity $\langle dB(t) \rangle = 0$:
$$\left\langle \frac{dh}{dt} \right\rangle_B = \left\langle h'(x(t))\, f(x(t), t) + \frac{h''(x(t))\, g^2(x(t), t)}{2} \right\rangle_B \tag{3.63}$$
Introduce now $w(x, t) = \langle \delta(x - x(t)) \rangle_B$ via an integral, and integrate by parts:
$$\frac{d}{dt} \int dx\; w(x, t)\, h(x) = \int dx\; w(x, t) \left( h'(x)\, f(x, t) + \frac{h''(x)\, g^2(x, t)}{2} \right) = \int dx\; h(x) \left( -\partial_x \left[ f(x, t)\, w(x, t) \right] + \frac{1}{2} \partial_x^2 \left[ g^2(x, t)\, w(x, t) \right] \right) \tag{3.64, 3.65}$$
Due to the fact that $h$ is arbitrary, the previous equation yields:
$$\partial_t w(x, t) = \partial_x \left( -f(x, t)\, w(x, t) + \frac{1}{2} \partial_x \left[ g^2(x, t)\, w(x, t) \right] \right) \tag{3.66}$$
i.e. the Fokker-Planck equation 3.9 (since $g^2 = 2D$).
Chapter 4

The Wiener Path Integral in action

4.1 Harmonic Oscillator


Consider a discrete Langevin equation with a linear force term:
$$x_i = x_{i-1} - k x_{i-1} \Delta t_i + \sqrt{2D}\, \Delta B_i, \qquad k = m\omega^2/\gamma \tag{4.1}$$
The Wiener measure for the $B_i$ is known; let's try to derive the one for the $x_i$ by a change of variables. We need the Jacobian of the transformation:
$$J_{ij} = \frac{\partial x_i}{\partial B_j} = \begin{cases} 0 & j > i \\ \sqrt{2D} & i = j \\ \partial x_i / \partial B_j & \text{otherwise} \end{cases} \tag{4.2}$$
$J_{ij}$ is a lower triangular matrix, so the Jacobian determinant is $J = (2D)^{N/2}$ and the probability density picks up a factor $J^{-1}$:
$$dP(x_1, \dots, x_N) = \prod_{i=1}^{N} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; \exp\left( -\sum_{i=1}^{N} \frac{(\Delta x_i + k x_{i-1} \Delta t_i)^2}{4 D \Delta t_i} \right) \tag{4.3, 4.4}$$
Expanding the square in the exponential argument we get:
$$\prod_{i=1}^{N} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; \exp\left( -\sum_{i=1}^{N} \frac{\Delta x_i^2}{4 D \Delta t_i} \right) \exp\left( -\frac{k}{2D} \sum_{i=1}^{N} x_{i-1} \Delta x_i \right) \exp\left( -\frac{k^2}{4D} \sum_{i=1}^{N} x_{i-1}^2\, \Delta t_i \right) \tag{4.5}$$
Taking the $N \to \infty$ limit (in the mean-squared sense):
$$dP = \prod_\tau \frac{dx(\tau)}{\sqrt{4\pi D\, d\tau}}\; e^{-\frac{1}{4D} \int_0^t d\tau\, \dot x^2(\tau)}\; e^{-\frac{k}{2D} \int_0^t x(\tau)\, d_I x(\tau)}\; e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau} \tag{4.6}$$
Let's start by computing the term in the middle using the change of variables 3.62: consider a function $h(x) \in C^3(\mathbb{R})$:
$$\Delta h(x_i) = h'(x_{i-1})\, \Delta x_i + h''(x_{i-1})\, \frac{\Delta x_i^2}{2} + O(\Delta x_i^3) \tag{4.7}$$
Now, making use of the discrete Wiener measure, we can say that, in the mean $L^2$ sense, $\Delta x_i^2 \to 2D \Delta t_i$ while all higher moments go to zero:
$$\Delta h(x_i) = h'(x_{i-1})\, \Delta x_i + h''(x_{i-1})\, D \Delta t_i + O(\Delta t_i^2) \tag{4.8}$$
from which:
$$h(x(t)) - h(x(0)) = \int_0^t h'(x(\tau))\, d_I x(\tau) + D \int_0^t h''(x(\tau))\, d\tau \tag{4.9}$$
Setting $h'(x) = x$:
$$\frac{x^2(t) - x^2(0)}{2} = \int_0^t x(\tau)\, d_I x(\tau) + D t \tag{4.10}$$
$$\int_0^t x(\tau)\, d_I x(\tau) = \frac{x^2(t) - x^2(0)}{2} - D t \tag{4.11}$$
which is, apart from an overall constant, the argument of the middle term's exponent in 4.6:
$$dP = \prod_\tau \frac{dx(\tau)}{\sqrt{4\pi D\, d\tau}}\; e^{-\frac{1}{4D} \int_0^t d\tau\, \dot x^2(\tau)}\; e^{-k \frac{x^2(t) - x^2(0)}{4D} + \frac{kt}{2}}\; e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau} \tag{4.12}$$
Now we are ready to compute the propagator:
$$W(x, t | x_0, 0) = \int dP(\{x(\tau)\})\; \delta(x(t) - x) = \exp\left( -k \frac{x^2 - x_0^2}{4D} + \frac{kt}{2} \right) \left\langle e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau}\; \delta(x(t) - x) \right\rangle_W \tag{4.13, 4.14}$$
No need to do anything else, since we have already computed the last term for $D = 1/4$ in 2.70; we just need to send $t \to 4Dt$ and $k \to k/4D$ (which leaves the combination $kt$ unchanged):
$$\left\langle e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau}\; \delta(x(t) - x) \right\rangle_W = \sqrt{\frac{k}{4\pi D \sinh kt}}\; e^{-\frac{k x^2}{4D} \coth kt} \tag{4.15}$$
Putting everything together and, for simplicity, setting $x_0 = 0$:
$$W(x, t | 0, 0) = e^{-k \frac{x^2}{4D} + \frac{kt}{2}} \sqrt{\frac{k}{4\pi D \sinh kt}}\; e^{-\frac{k x^2}{4D} \coth kt} = \sqrt{\frac{k}{2\pi D (1 - e^{-2kt})}}\; e^{-\frac{k}{2D} \frac{x^2}{1 - e^{-2kt}}} \tag{4.16, 4.17}$$
Another way to derive the expression of the propagator for $D \ne 1/4$ is by using the Fokker-Planck equation associated with a linear force:
$$\partial_t W(x, t | x_0, 0) = \partial_x \left[ k x\, W(x, t | x_0, 0) + D\, \partial_x W(x, t | x_0, 0) \right] \tag{4.18}$$
To warm up, start with the stationary solution, having $x_0 = 0$:
$$W^*(x) = \sqrt{\frac{k}{2\pi D}}\; e^{-\frac{k x^2}{2D}} \tag{4.19}$$
This solution must coincide with the Boltzmann distribution for a particle in a harmonic potential:
$$W^*(x) \propto e^{-\frac{m \omega^2 x^2}{2 k_B T}} \tag{4.20}$$
which yields:
$$\frac{k}{D} = \frac{m \omega^2}{k_B T} \quad \Longrightarrow \quad D = \frac{k_B T}{\gamma} = \frac{k_B T}{6\pi\eta R} \tag{4.21, 4.22}$$
To derive the full solution we lean on the Fourier transform, which poses the Fokker-Planck equation in "momentum" space. In order to do this we need to know how a function like $x f(x)$ transforms:
$$\mathcal{F}\left( x f(x) \right)(p) = \int dx\; x f(x)\, e^{-ipx} = \int dx\; f(x)\, \left( i \partial_p e^{-ipx} \right) = i \partial_p \tilde f(p)$$
The expression of the FP equation is:
$$\partial_t \tilde W(p, t) = -k p\, \partial_p \tilde W(p, t) - D p^2\, \tilde W(p, t) \tag{4.23}$$
To solve this equation we use the method of characteristics with a parameter $s$:
$$\underbrace{1}_{\frac{dt(s)}{ds}}\, \partial_t \tilde W(p, t) + \underbrace{k p}_{\frac{dp(s)}{ds}}\, \partial_p \tilde W(p, t) = \underbrace{-D p^2 \tilde W(p, t)}_{\frac{d\tilde W(s)}{ds}}$$
The first differential equation is $\frac{dt}{ds} = 1$ and gives us $t = s + s_0$; we freely choose $s_0 = 0$. The second ODE is:
$$\frac{dp(s)}{ds} = k p(s)$$
Its solution is straightforward and, since $t = s$, it is:
$$p(t) = p_0\, e^{kt}$$
With the same substitution $s \to t$ the last ODE is:
$$\frac{d\tilde W(s)}{ds} = -D p^2(s)\, \tilde W(s)$$
with solution:
$$\tilde W(t) = \tilde W(0)\; e^{-\frac{D p_0^2 \left( e^{2kt} - 1 \right)}{2k}}$$
Although we solved the differential equation, the dependence on $p$ is not explicit; start by promoting the integration constant to a function of $p_0$, $\tilde W(0) \to \tilde W(p_0, 0)$, and by recognizing it as the initial condition in momentum space, i.e. $\tilde W(p_0, 0) = e^{-i p_0 x_0}$ since $W(x, 0) = \delta(x - x_0)$. Then collect $e^{2kt}$ in the exponent and make the substitution $p_0 = p\, e^{-kt}$:
$$\tilde W(p, t | x_0, 0) = e^{-i p x_0 e^{-kt}}\; e^{-\frac{D p^2 \left( 1 - e^{-2kt} \right)}{2k}} \tag{4.24}$$
To simplify the notation set for now $a(t) = \frac{D (1 - e^{-2kt})}{k}$ and inverse Fourier transform $\tilde W(p, t) = e^{-i p x_0 e^{-kt}}\, e^{-\frac{a(t) p^2}{2}}$ (it's easy, being Gaussian):
$$W(x, t | x_0, 0) = \sqrt{\frac{k}{2\pi D (1 - e^{-2kt})}}\; e^{-\frac{k \left( x - x_0 e^{-kt} \right)^2}{2 D (1 - e^{-2kt})}} \tag{4.25}$$
(note that the initial position relaxes as $x_0 e^{-kt}$; for $x_0 = 0$ this reduces to the path-integral result above). Another way to state the result is:
$$x_t \sim \mathcal{N}\!\left( x_0 e^{-kt},\; \frac{D (1 - e^{-2kt})}{k} \right) \tag{4.26}$$
It's worth studying the limits of the variance:
$$t \to 0: \quad \mathrm{Var}(x) \to 2 D t \qquad \text{(pure diffusion regime)}$$
$$t \to \infty: \quad \mathrm{Var}(x) \to \frac{D}{k} \qquad \text{(stationary regime)}$$
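A quick Langevin simulation of (4.26) (a minimal sketch; all parameter values are arbitrary choices) reproduces the relaxing mean and the variance $D(1 - e^{-2kt})/k$:

```python
import numpy as np

rng = np.random.default_rng(5)
k, D, t, x0, n_steps, n_paths = 1.0, 0.5, 2.0, 1.0, 1000, 100_000
dt = t / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):   # Euler-Maruyama for dx = -k x dt + sqrt(2D) dB
    x += -k * x * dt + rng.normal(0.0, np.sqrt(2 * D * dt), size=n_paths)

print(x.mean(), x0 * np.exp(-k * t))               # mean: ~0.135
print(x.var(), D * (1 - np.exp(-2 * k * t)) / k)   # variance: ~0.491
```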

4.2 Multidimensional Wiener Path Integral

The generalization of the Wiener path integral to more than one dimension is quite straightforward: each dimension is independent, so we just consider a collection of $d$ independent Brownian motions $B^\alpha$, i.e.:
$$dP(B_1, \dots, B_N) = \prod_{\alpha=1}^{d} dP(B_1^\alpha, \dots, B_N^\alpha) \tag{4.27}$$
where:
$$dP(B_1^\alpha, \dots, B_N^\alpha) = \prod_{i=1}^{N} \frac{dB_i^\alpha}{\sqrt{2\pi \Delta t_i}}\; \exp\left( -\sum_{i=1}^{N} \frac{(\Delta B_i^\alpha)^2}{2 \Delta t_i} \right) \tag{4.28}$$
The properties of the measure are easy to derive:
$$\langle dB^\alpha(\tau) \rangle = 0 \tag{4.29}$$
$$dB^\alpha(\tau)\, dB^\beta(\tau) = \delta^{\alpha\beta}\, d\tau \tag{4.30}$$
$$dB^\alpha(\tau)\, d\tau = 0 \tag{4.31}$$
$$dB^{\alpha_1} \cdots dB^{\alpha_k} = 0 \quad \forall k > 2 \tag{4.32}$$
It is also easy to write down a multidimensional Langevin equation:
$$dx^\alpha(t) = f^\alpha(x(t), t)\, dt + \sqrt{2 D_\alpha}\, dB^\alpha \tag{4.33}$$
along with the change of variables equation:
$$dh(x(t), t) = dt\, \partial_t h \Big|_{x=x(t)} + \sum_{\alpha=1}^{d} \partial_\alpha h \Big|_{x=x(t)}\, dx^\alpha + \frac{1}{2} \sum_{\alpha,\beta=1}^{d} \partial_\alpha \partial_\beta h \Big|_{x=x(t)}\, dx^\alpha dx^\beta \tag{4.34}$$
$$= dt \left[ \partial_t h + \sum_{\alpha=1}^{d} \left( f^\alpha\, \partial_\alpha h + D_\alpha\, \partial_\alpha^2 h \right) \right] + \sum_{\alpha=1}^{d} \sqrt{2 D_\alpha}\, \partial_\alpha h\; dB^\alpha(t) \tag{4.35}$$
For $h$ not dependent on time and $D_\alpha = D\ \forall \alpha$:
$$dh(x(t)) = dt \left[ f \cdot \nabla h + D \nabla^2 h \right] + \sqrt{2D}\, \nabla h \cdot dB \tag{4.36}$$
and the Fokker-Planck counterpart is:
$$\partial_t w(x, t | x_0, t_0) = \sum_{\alpha=1}^{d} \left( -\partial_\alpha (f^\alpha w) + D_\alpha\, \partial_\alpha^2 w \right) \tag{4.37}$$
The last thing we need to derive is the multidimensional Wiener measure for $x(t)$; the Jacobian is:
$$J = \left( \prod_{\alpha=1}^{d} 2 D_\alpha \right)^{N/2} \tag{4.38}$$
and the measure:
$$dP(x_1, \dots, x_N) = \prod_{i=1}^{N} \prod_{\alpha=1}^{d} \frac{dx_i^\alpha}{\sqrt{4\pi D_\alpha \Delta t_i}}\; \exp\left( -\sum_{i=1}^{N} \sum_{\alpha=1}^{d} \frac{(\Delta x_i^\alpha - f_{i-1}^\alpha \Delta t_i)^2}{4 D_\alpha \Delta t_i} \right) \tag{4.39}$$
which we can write in a formal form as $N \to \infty$:
$$\prod_\tau \prod_{\alpha=1}^{d} \frac{dx^\alpha(\tau)}{\sqrt{4\pi D_\alpha\, d\tau}}\; \exp\left( -\int_0^t d\tau \sum_{\alpha=1}^{d} \frac{\left( \dot x^\alpha(\tau) - f^\alpha(\tau) \right)^2}{4 D_\alpha} \right) \tag{4.40}$$
4.3 The Fokker-Planck equation with velocity

As an application of the multidimensional Wiener path integral, consider a generalized Langevin equation:
$$m \dot{\mathbf{v}}(t) = -\gamma \mathbf{v} + \mathbf{F}(\mathbf{r}) + \gamma \sqrt{2D}\, \boldsymbol{\xi} \tag{4.41}$$
and cast it in a two-equation differential form:
$$\begin{cases} d\mathbf{v}(t) = \left( -\dfrac{\gamma \mathbf{v}}{m} + \dfrac{\mathbf{F}(\mathbf{r})}{m} \right) dt + \dfrac{\gamma \sqrt{2D}}{m}\, d\mathbf{B} \\ d\mathbf{r}(t) = \mathbf{v}\, dt \end{cases} \tag{4.42}$$
This system is a six-dimensional system of Brownian particles in the variables $x = (v_x, v_y, v_z, x, y, z)^T$ with diffusion constants:
$$D_\alpha = \begin{cases} \dfrac{\gamma^2 D}{m^2} & \alpha = 1, 2, 3 \\ 0 & \alpha = 4, 5, 6 \end{cases} \tag{4.43}$$
Since the diffusion constants for the position variables $(x, y, z)$ go to zero, the Gaussian-like measures involving $(x, y, z)$ collapse into delta functions (the zero-variance limit of a Gaussian distribution is a delta function). The Wiener measure is (with $D_v \equiv \gamma^2 D / m^2$):
$$dP(\{x(\tau)\}) = \prod_\tau \frac{d^3 v\; \delta^3\!\left( \dot{\mathbf{r}}(\tau) - \mathbf{v}(\tau) \right)}{(4\pi D_v\, d\tau)^{3/2}\, (d\tau)^3}\; \exp\left( -\frac{1}{4 D_v} \int_0^t d\tau \left( \dot{\mathbf{v}} + \frac{\gamma \mathbf{v}}{m} - \frac{\mathbf{F}(\mathbf{r})}{m} \right)^{\!2} \right) \tag{4.44}$$
which leads to the Fokker-Planck equation:
$$\partial_t w(\mathbf{v}, \mathbf{r}, t | \mathbf{v}_0, \mathbf{x}_0) = \nabla_v \left[ \left( \frac{\gamma \mathbf{v}}{m} - \frac{\mathbf{F}(\mathbf{r})}{m} \right) w + \frac{\gamma^2 D}{m^2} \nabla_v w \right] + \nabla_r \cdot (-\mathbf{v}\, w) \tag{4.45}$$
Equation (4.45) is known as the Kramers equation; as expected, its stationary solution is the Maxwell-Boltzmann distribution with potential $V(\mathbf{r})$, given that $\mathbf{F}(\mathbf{r}) = -\nabla V(\mathbf{r})$:
$$W^* = \frac{1}{Z^*}\; e^{-\beta \left( \frac{m v^2}{2} + V(\mathbf{r}) \right)} \tag{4.46}$$
which gives us (calling $R$ the radius of the Brownian particle):
$$D = \frac{1}{\gamma \beta} = \frac{k_B T}{6\pi\eta R} \tag{4.47}$$
Chapter 5

The Feynman-Kac formula

5.1 Feynman-Kac for the Fokker-Planck equation
Consider the overdamped stochastic dynamics of a particle in an external potential $U(\mathbf{r})$; setting $\mathbf{f} = -\frac{\nabla U(\mathbf{r})}{\gamma}$, the Langevin equation reads:
$$d\mathbf{r}(t) = \mathbf{f}(\mathbf{r})\, dt + \sqrt{2D}\, d\mathbf{B} \tag{5.1}$$
whose associated Fokker-Planck equation is:
$$\partial_t w(\mathbf{r}, t | \mathbf{r}_0) = \nabla \left[ -\mathbf{f}(\mathbf{r})\, w(\mathbf{r}, t | \mathbf{r}_0) + D \nabla w(\mathbf{r}, t | \mathbf{r}_0) \right] \tag{5.2}$$
The solution of this equation is given by a Wiener path integral after the change of variables 4.36:
$$w(\mathbf{r}, t | \mathbf{r}_0, 0) = \int \prod_\tau \frac{d^d r(\tau)}{(4\pi D\, d\tau)^{d/2}}\; e^{-\frac{1}{4D} \int_0^t d\tau\, \left( \dot{\mathbf{r}}(\tau) - \mathbf{f}(\mathbf{r}(\tau)) \right)^2}\; \delta^d(\mathbf{r}(t) - \mathbf{r}) \tag{5.3}$$
The discretized form is:
$$W^{(N)} = \int \prod_{i=1}^{N} \frac{d^d r_i}{(4\pi D \Delta t_i)^{d/2}}\; e^{-\sum_{i=1}^{N} \frac{(\Delta \mathbf{r}_i)^2}{4 D \Delta t_i}}\; e^{\frac{1}{2D} \sum_{i=1}^{N} \Delta \mathbf{r}_i \cdot \mathbf{f}_{i-1} - \frac{1}{4D} \sum_{i=1}^{N} \mathbf{f}_{i-1}^2 \Delta t_i}\; \delta^d(\mathbf{r}_N - \mathbf{r}) \tag{5.4}$$
$W^{(N)}$ can be seen as an expected value over the discrete Wiener measure:
$$\left\langle e^{\frac{1}{2D} \sum_{i=1}^{N} \Delta \mathbf{r}_i \cdot \mathbf{f}_{i-1} - \frac{1}{4D} \sum_{i=1}^{N} \mathbf{f}_{i-1}^2 \Delta t_i}\; \delta^d(\mathbf{r}_N - \mathbf{r}) \right\rangle_W \tag{5.5}$$
The second term in the exponential's argument is a standard integral; to deal with the first, consider a generic function $h(\mathbf{r})$ and write its increment (as usual, in the Ito sense):
$$\Delta h(\mathbf{r}) = \Delta \mathbf{r} \cdot \nabla h(\mathbf{r}) + D \Delta t\, \nabla^2 h(\mathbf{r}) + O(\Delta t^{3/2}) \tag{5.6}$$
Consider the increment between $\mathbf{r}_N$ and $\mathbf{r}_0$ and apply the previous formula:
$$h(\mathbf{r}_N) - h(\mathbf{r}_0) = \sum_{i=1}^{N} \underbrace{\Delta h(\mathbf{r}_i)}_{= h(\mathbf{r}_i) - h(\mathbf{r}_{i-1})} = \sum_{i=1}^{N} \Delta \mathbf{r}_i \cdot \nabla h(\mathbf{r}_{i-1}) + D \sum_{i=1}^{N} \Delta t_i\, \nabla^2 h(\mathbf{r}_{i-1}) \tag{5.7}$$
In the continuum limit these two equations give:
$$\int_0^t d_I \mathbf{r}(\tau) \cdot \nabla h(\mathbf{r}(\tau)) = h(\mathbf{r}) - h(\mathbf{r}_0) - D \int_0^t d\tau\, \nabla^2 h(\mathbf{r}(\tau)) \tag{5.8}$$
which is a version of the fundamental theorem of stochastic Ito calculus. Setting $h = -\frac{U(\mathbf{r})}{\gamma}$ (so that $\nabla h = \mathbf{f}$) we get:
$$w(\mathbf{r}, t | \mathbf{r}_0, 0) = \left\langle e^{-\frac{U(\mathbf{r}) - U(\mathbf{r}_0)}{2 D \gamma} + \frac{1}{2\gamma} \int_0^t d\tau\, \nabla^2 U - \frac{1}{4 D \gamma^2} \int_0^t d\tau\, (\nabla U)^2}\; \delta^d(\mathbf{r}(t) - \mathbf{r}) \right\rangle_W \tag{5.9}$$
If we define $V(\mathbf{r}) = -\frac{1}{2\gamma} \nabla^2 U(\mathbf{r}) + \frac{1}{4 D \gamma^2} \left( \nabla U(\mathbf{r}) \right)^2$ and recall that $\beta^{-1} = \gamma D$, we arrive at:
$$w(\mathbf{r}, t | \mathbf{r}_0, 0) = e^{-\frac{\beta}{2} \left( U(\mathbf{r}) - U(\mathbf{r}_0) \right)}\; \left\langle e^{-\int_0^t V(\mathbf{r}(\tau))\, d\tau}\; \delta^d(\mathbf{r}(t) - \mathbf{r}) \right\rangle_W \tag{5.10}$$
This is the version of the Feynman-Kac formula for a Brownian particle in a potential $U(\mathbf{r})$.

5.2 Feynman-Kac for the Bloch equation

As one may have noticed, we have had to compute more than once (2.56, 4.13, 5.10) expected values like:
$$W_B(x, t | x_0, t_0) = \left\langle e^{-\int_0^t V(x(\tau))\, d\tau}\; \delta(x(t) - x) \right\rangle_W \tag{5.11}$$
We want to prove, in two different ways, that 5.11 obeys the so-called Bloch equation, i.e.:
$$\partial_t W_B(x, t | x_0, t_0) = D \partial_x^2 W_B(x, t | x_0, t_0) - V(x)\, W_B(x, t | x_0, t_0) \tag{5.12}$$
with initial condition $W_B(x, t_0 | x_0, t_0) = \delta(x - x_0)$.
5.2.1 Proof 1

The first proof starts from the Feynman-Kac formula and derives the Bloch equation. Start by discretizing $W_B$ (with equal steps $\epsilon$ and $V_i = V(x_i)$), defining $\psi_{N+1}$:
$$\psi_{N+1} = \int \prod_{i=1}^{N+1} \frac{dx_i}{\sqrt{4\pi D \epsilon}}\; e^{-\sum_{i=1}^{N+1} \frac{\Delta x_i^2}{4 D \epsilon} - \epsilon \sum_{i=1}^{N+1} V_i}\; \delta(x_{N+1} - x) \tag{5.13}$$
Let's apply the delta function:
$$\psi_{N+1}(x) = \int \frac{dx_N}{\sqrt{4\pi D \epsilon}} \left( \int \prod_{i=1}^{N-1} \frac{dx_i}{\sqrt{4\pi D \epsilon}}\; e^{-\sum_{i=1}^{N} \frac{\Delta x_i^2}{4 D \epsilon} - \epsilon \sum_{i=1}^{N} V_i} \right) e^{-\frac{(x_N - x)^2}{4 D \epsilon} - \epsilon V(x)} = \int \frac{dx_N}{\sqrt{4\pi D \epsilon}}\; \psi_N(x_N)\; e^{-\frac{(x_N - x)^2}{4 D \epsilon} - \epsilon V(x)} \tag{5.14, 5.15}$$
Set $z \equiv \frac{x_N - x}{\sqrt{2 D \epsilon}}$:
$$\psi_{N+1}(x) = e^{-\epsilon V(x)} \int \frac{dz}{\sqrt{2\pi}}\; e^{-z^2/2}\; \psi_N\!\left( x + z\sqrt{2 D \epsilon} \right) \tag{5.16}$$
and expand around $x$ (small $z\sqrt{\epsilon}$):
$$\psi_{N+1}(x) = e^{-\epsilon V(x)} \int \frac{dz}{\sqrt{2\pi}}\; e^{-z^2/2} \left[ \psi_N(x) + \psi_N'(x)\, z \sqrt{2 D \epsilon} + \psi_N''(x)\, z^2 D \epsilon + \dots \right] \tag{5.17}$$
$$= \left( 1 - \epsilon V(x) + O(\epsilon^2) \right) \left( \psi_N(x) + D \epsilon\, \psi_N'' + O(\epsilon^2) \right) = \psi_N(x) + \epsilon \left[ D \psi_N'' - V \psi_N \right] + O(\epsilon^2) \tag{5.18, 5.19}$$
from which:
$$\frac{\psi_{N+1}(x) - \psi_N(x)}{\epsilon} = D \partial_x^2 \psi_N(x) - V(x)\, \psi_N(x) \tag{5.20}$$
For $N \to \infty$ we get the Bloch equation 5.12:
$$\partial_t W_B = \left( D \partial_x^2 - V(x) \right) W_B \tag{5.21}$$

5.2.2 Proof 2

The second proof is based on two observations.
Observation 1: $W_B$ satisfies the ESCK relation; let's prove it. Divide the time interval in which we observe the motion, $(t_0, t)$, into $N + N'$ steps. The discrete propagator from $t_0$ up to the instant $t_{N'}$ is:
$$W^{(N')}(x', t' | x_0, t_0) \equiv \int \prod_{i=1}^{N'} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=1}^{N'} \frac{(\Delta x_i)^2}{4 D \Delta t_i} - \sum_{i=1}^{N'} \Delta t_i\, V(x_i)}\; \delta(x' - x_{N'}) \tag{5.22}$$
The one from $t_{N'}$ to $t_{N+N'}$ is:
$$W^{(N)}(x, t | x', t') \equiv \int \prod_{i=N'+1}^{N+N'} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=N'+1}^{N+N'} \frac{(\Delta x_i)^2}{4 D \Delta t_i} - \sum_{i=N'+1}^{N+N'} \Delta t_i\, V(x_i)}\; \delta(x - x_{N+N'}) \tag{5.23}$$
If $W_B$ satisfied the ESCK relation, we would have that $W^{(N+N')} \to W_B$ as $N, N' \to \infty$, where $W^{(N+N')}$ is the integrated product of 5.22 and 5.23:
$$W^{(N+N')}(x, t | x_0, t_0) = \int dx'\; W^{(N)}(x, t | x', t')\; W^{(N')}(x', t' | x_0, t_0) = \dots = \int \prod_{i=1}^{N+N'} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=1}^{N+N'} \frac{(\Delta x_i)^2}{4 D \Delta t_i} - \sum_{i=1}^{N+N'} \Delta t_i\, V(x_i)}\; \delta(x - x_{N+N'}) \tag{5.24, 5.25}$$
In the limit $N, N' \to \infty$ with $\max \Delta t_i \to 0$ we obtain the ESCK relation:
$$W(x, t | x_0, t_0) = \int dx'\; W(x, t | x', t')\; W(x', t' | x_0, t_0)$$
Observation 2: if we take $h \in C(\mathbb{R})$ and set $u(t) = \exp\left( -\int_{t_0}^{t} h(s)\, ds \right)$ we obtain trivially that:
$$\frac{du}{dt} = -h(t)\, u(t), \qquad u(t_0) = 1$$
Integrating the differential equation and using the previous solution for $u(t)$:
$$u(t) = 1 - \int_{t_0}^{t} h(\tau)\, u(\tau)\, d\tau = 1 - \int_{t_0}^{t} h(\tau)\, \exp\left( -\int_{t_0}^{\tau} h(s)\, ds \right) d\tau \tag{5.26}$$
Given these two observations, set in the previous equation $h(\tau) = V(x(\tau))$ and introduce a delta function $\delta(x - x(t))$:
$$\delta(x - x(t))\; e^{-\int_{t_0}^{t} d\tau\, V(x(\tau))} = \delta(x - x(t)) - \delta(x - x(t)) \int_{t_0}^{t} d\tau\; V(x(\tau))\; e^{-\int_{t_0}^{\tau} ds\, V(x(s))} \tag{5.27}$$
By taking averages over the Wiener path integral we recognize the appearance of $W_B$ as defined in the Feynman-Kac formula 5.11:
$$W_B(x, t | x_0, t_0) = \left\langle \delta(x - x(t))\; e^{-\int_{t_0}^{t} d\tau\, V(x(\tau))} \right\rangle_W = \langle \delta(x - x(t)) \rangle_W - \int_{t_0}^{t} d\tau\; \left\langle V(x(\tau))\; e^{-\int_{t_0}^{\tau} ds\, V(x(s))}\; \delta(x - x(t)) \right\rangle_W \tag{5.28}$$
In the first term, $\langle \delta(x - x(t)) \rangle_W$, we recognize $W(x, t | x_0, t_0)$, i.e. the solution of the diffusion equation with $W(x, t_0 | x_0, t_0) = \delta(x - x_0)$. The second term can be cast (splitting the path at time $\tau$ with intermediate point $x'$) into:
$$\int_{t_0}^{t} d\tau\; \left\langle V(x(\tau))\; e^{-\int_{t_0}^{\tau} ds\, V(x(s))}\; \delta(x - x(t)) \right\rangle_W = \int_{t_0}^{t} d\tau \int dx'\; W_B(x', \tau | x_0, t_0)\; V(x')\; W(x, t | x', \tau) \tag{5.29}$$
To summarize, we arrive at:
$$W_B(x, t | x_0, t_0) = W(x, t | x_0, t_0) - \int_{t_0}^{t} dt' \int dx'\; W(x, t | x', t')\; V(x')\; W_B(x', t' | x_0, t_0) \tag{5.30}$$
Take the time derivative of both sides:
$$\partial_t W_B(x, t | x_0, t_0) = \partial_t W(x, t | x_0, t_0) - \int dx'\; W(x, t | x', t)\; V(x')\; W_B(x', t | x_0, t_0) - \int_{t_0}^{t} dt' \int dx'\; \partial_t W(x, t | x', t')\; V(x')\; W_B(x', t' | x_0, t_0) \tag{5.31}$$
Using $W(x, t | x', t) = \delta(x - x')$ and $\partial_t W = D \partial_x^2 W$:
$$\partial_t W_B(x, t | x_0, t_0) = D \partial_x^2 W(x, t | x_0, t_0) - \int dx'\; \delta(x - x')\; V(x')\; W_B(x', t | x_0, t_0) - \int_{t_0}^{t} dt' \int dx'\; D \partial_x^2 W(x, t | x', t')\; V(x')\; W_B(x', t' | x_0, t_0) \tag{5.32}$$
$$\partial_t W_B(x, t | x_0, t_0) = D \partial_x^2 \underbrace{\left[ W(x, t | x_0, t_0) - \int_{t_0}^{t} dt' \int dx'\; W(x, t | x', t')\; V(x')\; W_B(x', t' | x_0, t_0) \right]}_{W_B(x, t | x_0, t_0) \text{ from } 5.30} - V(x)\, W_B(x, t | x_0, t_0) \tag{5.33}$$
We arrive at the Bloch equation 5.12:
$$\partial_t W_B(x, t | x_0, t_0) = \left( D \partial_x^2 - V(x) \right) W_B(x, t | x_0, t_0)$$
Chapter 6

Brownian motion, Polymer Physics and Field Theory

6.1 Prelude: Green's function


Consider a (scalar, for now) differential operator $L(x)$ for $x \in \mathbb{R}^d$; we define its Green's function $G(x, y)$ by:
$$L(x)\, G(x, y) = \delta^d(x - y) \tag{6.1}$$
Notice that, just by definition, it can be interpreted as an inverse of $L(x)$. Consider a general equation involving $L$:
$$L(x)\, u(x) = f(x) \tag{6.2}$$
for some function $f$. Using the properties of the delta function and 6.1:
$$f(x) = \int \delta^d(x - y)\, f(y)\, d^d y = \int f(y)\, L(x)\, G(x, y)\, d^d y \tag{6.3}$$
Since $f(x) = L(x) u(x)$:
$$L(x)\, u(x) = L(x) \int f(y)\, G(x, y)\, d^d y \tag{6.4}$$
$$u(x) = \int f(y)\, G(x, y)\, d^d y + q(x) \tag{6.5}$$
given any $q$ with $L(x) q(x) = 0$. So the Green's function lets us express the solution of a differential equation as an integral.
Let's try to solve one particular case ($d = 3$):
$$L(\mathbf{r}) = -D \nabla_r^2 + \mu \tag{6.6}$$
Using the Fourier transform:
$$\left( D \mathbf{k}^2 + \mu \right) \tilde G(\mathbf{k}) = 1$$
$$\tilde G(\mathbf{k}) = \frac{1}{D \mathbf{k}^2 + \mu} \tag{6.7}$$
$$G(r) = \frac{1}{D (2\pi)^3} \int d^3 k\; \frac{e^{i \mathbf{k} \cdot \mathbf{r}}}{k^2 + \mu/D} = \frac{1}{D (2\pi)^3} \int_0^{2\pi} d\theta \int_0^\infty dk\; \frac{k^2}{k^2 + \mu/D} \int_0^\pi d\phi\; \sin\phi\; e^{i k r \cos\phi} =$$
$$= \frac{1}{2\pi^2 r D} \int_0^\infty dk\; \frac{k \sin kr}{k^2 + \mu/D} = \frac{1}{4\pi^2 r D} \int_{-\infty}^{\infty} dk\; \frac{k \sin kr}{k^2 + \mu/D}$$
and, closing the contour in the upper half-plane:
$$G(r) = \frac{1}{4\pi r D}\; e^{-\sqrt{\frac{\mu}{D}}\, r} \tag{6.8}$$
In the $d$-dimensional case we have a behaviour like:
$$G(r) \sim \frac{1}{r^{d-2}}\; e^{-r/\xi} \tag{6.9}$$
where $\xi = \sqrt{D/\mu}$.
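A quick numerical check of (6.8) (an illustrative sketch; parameter values are arbitrary): away from the origin, $(-D\nabla^2 + \mu)G$ should vanish, with the Laplacian computed via the 3D radial formula $\nabla^2 G = \frac{1}{r}(rG)''$:

```python
import numpy as np

D, mu = 2.0, 0.5
r = np.linspace(0.1, 5.0, 2001)
dr = r[1] - r[0]
G = np.exp(-np.sqrt(mu / D) * r) / (4 * np.pi * r * D)

# 3D radial Laplacian: lap G = (1/r) d^2(r G)/dr^2
lap = np.gradient(np.gradient(r * G, dr), dr) / r

residual = -D * lap + mu * G          # should vanish for r > 0
print(np.abs(residual[2:-2]).max())   # tiny: only finite-difference error
```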

6.2 Introduction: Polymers

In this chapter we want to study some basic models for polymers; a polymer is a chain of molecular groups connected through chemical bonds, so we can imagine it as a series of building blocks, and we are interested in the probability distribution of its endpoint. The starting point is taken as the origin and, if $\mathbf{r}$ is the vector connecting the origin to the endpoint, $\Omega(\mathbf{r}, N)$ counts the number of configurations of the polymer with $N$ building blocks. So the entropy is:
$$S = k_B \log \Omega(\mathbf{r}, N) \tag{6.10}$$
Defining $w(\mathbf{r}, N)$ as the distribution of the endpoint of the polymer with $N$ blocks, we get:
$$\langle r^2 \rangle = \int r^2\; w(\mathbf{r}, N)\; d^3 r \tag{6.11}$$
It will be useful to introduce the Fourier transform:
$$G(\mathbf{q}, N) = \int d^3 r\; e^{i \mathbf{q} \cdot \mathbf{r}}\; w(\mathbf{r}, N)$$
6.3 Phantom Chain

A phantom chain is a polymer made of non-interacting (= independent) building blocks; this last assumption makes its distribution factorize:
$$w(\mathbf{r}, N) = \int \prod_{i=1}^{N} d^3 r_i\; \delta^3\!\left( \sum_{i=1}^{N} \mathbf{r}_i - \mathbf{r} \right) \prod_{i=1}^{N} w(\mathbf{r}_i, 1) \tag{6.12}$$
In the above equation $w(\mathbf{r}_i, 1)$ is the one-body distribution that fully describes the phantom polymer. Considering the chain with $N + 1$ blocks:
$$w(\mathbf{r}, N+1) = \int \prod_{i=1}^{N+1} d^3 r_i\; \delta^3\!\left( \sum_{i=1}^{N+1} \mathbf{r}_i - \mathbf{r} \right) \prod_{i=1}^{N+1} w(\mathbf{r}_i, 1) = \int d^3 r' \left[ \int \prod_{i=1}^{N} d^3 r_i\; \prod_{i=1}^{N} w(\mathbf{r}_i, 1)\; \delta^3\!\left( \sum_{i=1}^{N} \mathbf{r}_i - \mathbf{r}' \right) \right] \int d^3 r_{N+1}\; w(\mathbf{r}_{N+1}, 1)\; \delta^3(\mathbf{r}_{N+1} + \mathbf{r}' - \mathbf{r}) \tag{6.13, 6.14}$$
Recognizing $w(\mathbf{r}', N)$ in the first part:
$$w(\mathbf{r}, N+1) = \int d^3 r'\; w(\mathbf{r}', N)\; w(\mathbf{r} - \mathbf{r}', 1) \tag{6.15}$$
Assume $w(\mathbf{r}', N)$ is smooth for $N \gg 1$ and expand around $\mathbf{r}$:
$$w(\mathbf{r}, N+1) = \int d^3 r'\; w(\mathbf{r} - \mathbf{r}', 1) \Big[ w(\mathbf{r}, N) + \sum_\alpha (r'_\alpha - r_\alpha)\, \partial_\alpha w(\mathbf{r}, N) + \frac{1}{2} \sum_{\alpha, \beta} (r'_\alpha - r_\alpha)(r'_\beta - r_\beta)\, \partial_\alpha \partial_\beta w(\mathbf{r}, N) + \dots \Big] \tag{6.16}$$
Now we use some facts. Since $w(\mathbf{r}, 1)$ is a p.d.f.:
$$\int d^3 r'\; w(\mathbf{r} - \mathbf{r}', 1) = \int d^3 r'\; w(\mathbf{r}', 1) = 1 \tag{6.17}$$
Thanks to rotational and parity invariance:
$$\int d^3 r'\; w(\mathbf{r} - \mathbf{r}', 1)\, (r'_\alpha - r_\alpha) = -\int d^3 r\; w(\mathbf{r}, 1)\, r_\alpha = 0 \tag{6.18}$$
$$\int d^3 r'\; w(\mathbf{r} - \mathbf{r}', 1)\, (r'_\alpha - r_\alpha)(r'_\beta - r_\beta) = \frac{1}{3}\, \delta_{\alpha\beta} \int d^3 r\; w(\mathbf{r}, 1)\, r^2 \tag{6.19}$$
Putting everything together in 6.16:
$$w(\mathbf{r}, N+1) = w(\mathbf{r}, N) + \frac{l^2}{6}\, \nabla^2 w(\mathbf{r}, N) + O(l^4) \tag{6.20}$$
Here $l$ is a length defined via the one-body distribution:
$$w(\mathbf{r}, 1) = \frac{1}{4\pi l^2}\, \delta(|\mathbf{r}| - l) \tag{6.21}$$
indeed:
$$\int d^3 r\; w(\mathbf{r}, 1)\, r^2 = \frac{4\pi}{4\pi l^2} \int dr\; r^2\, r^2\, \delta(r - l) = l^2 \tag{6.22}$$
For continuous values of $N$, calling $D = \frac{l^2}{2d}$ where $d = 3$:
$$\frac{\partial}{\partial N} w(\mathbf{r}, N) = D \nabla^2 w(\mathbf{r}, N) \tag{6.23}$$
which is just the diffusion equation 2.6, with $N$ playing the role of time. For initial condition $w(\mathbf{r}, 0) = \delta^d(\mathbf{r})$:
$$w(\mathbf{r}, N) = \left( \frac{1}{4\pi D N} \right)^{d/2} e^{-r^2 / 4DN} \tag{6.24}$$
The Fourier transform is:
$$G(\mathbf{q}, N) = e^{-D N q^2} \tag{6.25}$$
and the generating function (Laplace transform) is:
$$\tilde G(\mathbf{q}, \omega) = \int_0^\infty dN\; e^{-\omega N}\, G(\mathbf{q}, N) = \frac{1}{\omega + D q^2} \tag{6.26}$$
The entropy is:
$$S = k_B \left( -\frac{r^2}{4 D N} + N\text{-dependent constant} \right) \tag{6.27}$$
If $r' > r$ then:
$$\Delta S = S(r') - S(r) < 0$$
so stretching a rubber band decreases the entropy (i.e. there are fewer possible configurations).
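The microscopic picture behind this section is easy to simulate (a minimal sketch; chain length and sample size are arbitrary choices): a freely jointed chain with the one-body distribution (6.21) has $\langle r^2 \rangle = 2 d D N = N l^2$, consistent with (6.24):

```python
import numpy as np

rng = np.random.default_rng(6)
N, l, n_chains = 200, 1.0, 20_000

# Phantom chain: N independent bonds of fixed length l, uniform directions
u = rng.normal(size=(n_chains, N, 3))
u /= np.linalg.norm(u, axis=2, keepdims=True)   # unit bond vectors
r = l * u.sum(axis=1)                           # end-to-end vector

# <r^2> = 2 d D N = N l^2 with D = l^2/(2d), d = 3
print((r**2).sum(axis=1).mean(), N * l**2)
```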

6.4 Some remarks


Here we try to define the inverse of the operator -D\nabla_r^2 + \omega. Consider:

A(r, r') = (-D\nabla_r^2 + \omega)\,\delta^d(r - r') \equiv \langle r|(-D\nabla^2 + \omega)|r'\rangle    (6.28)

We introduced a bra and a ket to stress the fact that we are expressing this
operator in position space; however this is, for now, just formal. As in
matrix multiplication take a function f(r'); the action of A on f is:

\int d^dr'\, A(r, r')\, f(r') = -D\nabla_r^2 f(r) + \omega f(r)    (6.29)

Using the Fourier basis:

\int d^dr'\, A(r, r')\, e^{iqr'} = (Dq^2 + \omega)\, e^{iqr}    (6.30)

we can write the inverse of A:

A^{-1}(r, r') = \int \frac{d^dq}{(2\pi)^d}\, \frac{e^{iq(r - r')}}{Dq^2 + \omega} \equiv \langle r|(-D\nabla^2 + \omega)^{-1}|r'\rangle    (6.31)

Indeed:

(-D\nabla_r^2 + \omega)\int \frac{d^dq}{(2\pi)^d}\, \frac{e^{iq(r - r')}}{Dq^2 + \omega} = \int \frac{d^dq}{(2\pi)^d}\, e^{iq(r - r')} = \delta^d(r - r')    (6.32)

Remember one thing: in the chapter about Gaussian integrals we introduced
a quadratic form A in N dimensions and derived the Wick theorem for n-
point correlation functions; if we interpret the above A(r, r') as an infinite-
dimensional matrix we can "use" the Wick theorem to find expected values
of infinite-dimensional Gaussian integrals (path integrals...). Some examples:

A^{-1}(x, x')_{d=1} = \int_{-\infty}^{\infty} \frac{dq}{2\pi}\, \frac{e^{iq(x - x')}}{Dq^2 + \omega} = \frac{1}{2\sqrt{\omega D}}\, e^{-\sqrt{\omega/D}\,|x - x'|}    (6.33)

A^{-1}(x, x')_{d=3} = \frac{1}{4\pi D|x - x'|}\, e^{-\sqrt{\omega/D}\,|x - x'|}    (6.34)
Another thing to point out is that we can recast the diffusion equation in a
Schrödinger-like form. Define H_0 = -D\nabla^2; then:

\partial_N w(r, N) = -H_0\, w(r, N)    (6.35)

from which the solution is:

w(r, N) = e^{-H_0 N}(r_0, r) \equiv \langle r| e^{-H_0 N} |r_0\rangle    (6.36)

w(r, 0) = \delta^d(r - r_0)    (6.37)

e^{-H_0 N}\big|_{N=0} = I    (6.38)

The Laplace transform is:

\tilde w(r, \omega) = \int_0^\infty w(r, N)\, e^{-\omega N}\, dN = \int_0^\infty \langle r| e^{-H_0 N - \omega N} |r_0\rangle\, dN = \langle r|(H_0 + \omega)^{-1}|r_0\rangle    (6.39)

where (H_0 + \omega)^{-1} is defined above.

6.5 Feynman-Kac for Polymer in a potential


If in the previous derivation of the polymer equation 6.15 we make a little mod-
ification, i.e.:

w(r, N+1) = \int d^3r'\, w(r', N)\, w(r - r', 1)\,(1 - V(r))    (6.40)

introducing V(r) as a fraction of excluded volume around r, we arrive at the
Bloch equation:

\frac{\partial}{\partial N} w(r, N) = -V(r)\, w(r, N) + D\nabla^2 w(r, N)    (6.41)

which is solved using the Feynman-Kac formula 5.11:

w(r, N) = \big\langle e^{-\int_0^N d\tau\, V(r(\tau))}\, \delta^d(r(N) - r) \big\rangle_W    (6.42)

The quantity H in this case is:

H = -D\nabla^2 + V

and:

\langle r| H |r'\rangle = (-D\nabla_r^2 + V(r))\,\delta^d(r - r')

The formal solution is:

w(r, N) = \langle r| e^{-HN} |r_0\rangle

and the Laplace transform is:

\tilde w(r, \omega) = \langle r|(\omega - D\nabla^2 + V)^{-1}|r_0\rangle

with the interesting result:

\langle r|(\omega - D\nabla^2 + V)^{-1}|r_0\rangle = \int_0^\infty ds\, e^{-s\omega}\, \big\langle e^{-\int_0^s d\tau\, V(r(\tau))} \big\rangle_W

6.6 Gaussian Field Theory


Consider a scalar "function" (or a distribution) \varphi : \mathbb{R}^d \to \mathbb{R} and the action:

A[\varphi] = \frac{1}{2}\int d^dr\, d^dr'\, \varphi(r)\, A(r, r')\, \varphi(r')    (6.43)

For example, using A(r, r') \equiv (-D\nabla_r^2 + \mu + V(r))\,\delta^d(r - r'):

A[\varphi] = \frac{1}{2}\int d^dr\, \big( -D\varphi(r)\nabla^2\varphi(r) + \mu\varphi^2(r) + V(r)\varphi^2(r) \big)    (6.44)
 = \frac{1}{2}\int d^dr\, \big( D(\nabla\varphi)^2 + \mu\varphi^2 + V(r)\varphi^2 \big)    (6.45)

(the last equality holds for vanishing boundary conditions on \varphi).
Assume now we can apply the Euler-Lagrange equations:

\frac{\partial\mathcal{L}}{\partial\varphi} = \partial_i\frac{\partial\mathcal{L}}{\partial(\partial_i\varphi)}    (6.46)

for \mathcal{L} such that A = \int d^dr\, \mathcal{L}. We obtain:

\mathcal{L} = \frac{1}{2}\big( D(\nabla\varphi)^2 + \mu\varphi^2 + V(r)\varphi^2 \big)    (6.47)

\mu\varphi = -V(r)\varphi + D\nabla^2\varphi    (6.48)

that is the Bloch equation for polymers after applying a Laplace transform
of variable \mu on the variable N (\partial_N\varphi \to \mu\varphi). In the spirit of Wiener path
integrals let's try to compute the partition function for this action, to have a
probabilistic interpretation:

Z = \int \prod_{r\in\mathbb{R}^d} d\varphi(r)\, e^{-A}    (6.49)

Having Z we can compute:

\langle\varphi(r_1)\dots\varphi(r_k)\rangle = \frac{1}{Z}\int \mathcal{D}\varphi\, e^{-A}\, \varphi(r_1)\dots\varphi(r_k)    (6.50)

To justify this let's move to a discrete form; fix a lattice spacing a and put
the system in a volume L^d, hence with N = (L/a)^d sites:

r \in \Lambda = \{ an : n \in \mathbb{Z}^d,\ |n_\mu| < L/a \}

A_a = \frac{a^d}{2}\sum_{r, r'\in\Lambda} \varphi(r)\, A(r, r')\, \varphi(r') = \frac{1}{2}\varphi A\varphi

from which:

Z_a = (\det A)^{-1/2}\,(2\pi)^{N/2}

Now we use the trick of the external field J(r) to compute correlation func-
tions:

\big\langle e^{\sum_{r\in\Lambda} J(r)\varphi(r)} \big\rangle = \frac{1}{Z_a}\int \mathcal{D}\varphi\, e^{-\frac{1}{2}\varphi A\varphi + \varphi J} = e^{\frac{1}{2} J A^{-1} J} \equiv \hat Z_a(J)    (6.51)

Now we can apply Wick's theorem for Gaussian variables and we discover
that the odd-point correlation functions vanish, while for even points:

\langle\varphi(r_1)\varphi(r_2)\rangle = A^{-1}(r_1, r_2)

\langle\varphi(r_1)\dots\varphi(r_{2k})\rangle = \sum_{\text{all pairings of }\varphi} \langle\varphi(r_{i_1})\varphi(r_{j_1})\rangle\dots\langle\varphi(r_{i_k})\varphi(r_{j_k})\rangle

which holds even for a → 0. Now generalize and consider an n-component field
\varphi(r) = (\varphi_1, \dots, \varphi_n), \varphi : \mathbb{R}^d \to \mathbb{R}^n:

A[\varphi] = \frac{1}{2}\int d^dr \int d^dr' \sum_{\alpha=1}^n \varphi_\alpha(r)\, A(r, r')\, \varphi_\alpha(r') = \sum_\alpha A[\varphi_\alpha]

from which:

Z^{(n)} = \int \prod_\alpha \mathcal{D}\varphi_\alpha\, e^{-A[\varphi]} = \Big( \int \mathcal{D}\varphi_1\, e^{-A[\varphi_1]} \Big)^n = Z^n    (6.52)

In the limit:

\lim_{n\to 0} Z^{(n)} = 1

\langle\varphi_1(r_1)\varphi_1(r_2)\rangle = \frac{1}{Z^n}\int \prod_\alpha \mathcal{D}\varphi_\alpha\, e^{-A[\varphi]}\, \varphi_1(r_1)\varphi_1(r_2)    (6.53)
 = \frac{1}{Z^n}\Big( \int \mathcal{D}\varphi_1\, \varphi_1(r_1)\varphi_1(r_2)\, e^{-A[\varphi_1]} \Big) Z^{n-1}    (6.54)
 = \frac{1}{Z}\int \mathcal{D}\varphi_1\, \varphi_1(r_1)\varphi_1(r_2)\, e^{-A[\varphi_1]} = A^{-1}(r_1, r_2)    (6.55)

Thus:

A^{-1}(r_1, r_2) = \lim_{n\to 0}\int \prod_{\alpha=1}^n \mathcal{D}\varphi_\alpha\, e^{-\frac{1}{2}\varphi A\varphi}\, \varphi_1(r_1)\varphi_1(r_2)    (6.56)

where:

\varphi A\varphi = \sum_\alpha \int d^dr\, d^dr'\, \varphi_\alpha(r)\, A(r, r')\, \varphi_\alpha(r')    (6.57)
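The discrete statement ⟨φ(r₁)φ(r₂)⟩ = A⁻¹(r₁, r₂) is easy to verify on a small lattice. A minimal sketch (not from the notes; lattice size, D and ω are illustrative) on a 1d periodic lattice with A = −DΔ + ω:

    import numpy as np

    rng = np.random.default_rng(1)
    N, D, omega = 64, 1.0, 0.5

    lap = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    lap += np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))   # periodic wrap
    A = -D * lap + omega * np.eye(N)

    # sample phi ~ N(0, A^{-1}) via a Cholesky factor of A
    L = np.linalg.cholesky(A)
    phi = np.linalg.solve(L.T, rng.normal(size=(N, 100000)))

    emp = (phi[0] * phi[5]).mean()       # empirical <phi(0) phi(5)>
    print(emp, np.linalg.inv(A)[0, 5])   # should agree within sampling error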

6.7 Self Interacting Polymers


Consider the Wiener integral (obtained via Feynman-Kac):

w(r, N) = \int \prod_\tau \frac{d^dr(\tau)}{(4\pi D\, d\tau)^{d/2}}\, e^{-\int_0^N d\tau\, \frac{\dot r^2}{4D} - g\int_0^N d\tau_1 \int_0^N d\tau_2\, \delta^d(r(\tau_1) - r(\tau_2))}\, \delta^d(r(N) - r)    (6.58)
Define the beads density:

\rho(r) = \int_0^N d\tau_1\, \delta^d(r - r(\tau_1))    (6.59)

from which:

\int_0^N d\tau_1 \int_0^N d\tau_2\, \delta^d(r(\tau_1) - r(\tau_2)) = \int d^dr\, \rho^2(r)    (6.60)

Discretize the space and use Hubbard-Stratonovich:

e^{-g\int d^dr\, \rho^2(r)} = \lim_{a\to 0} e^{-g\sum_{r\in\Lambda} a^d\rho^2(r)}    (6.61)
 = \lim_{a\to 0} \prod_{r\in\Lambda} e^{-g a^d\rho^2(r)} = \lim_{a\to 0} \prod_{r\in\Lambda} \int d\sigma(r)\, e^{-\frac{a^d}{4g}\sigma^2(r) - i a^d\sigma(r)\rho(r)}    (6.62)
 = \lim_{a\to 0} \int \prod_{r\in\Lambda} d\sigma(r)\, e^{-\frac{a^d}{4g}\sum_{r\in\Lambda}\sigma^2(r) - i a^d\sum_{r\in\Lambda}\sigma(r)\rho(r)}    (6.63)
 = \int \mathcal{D}\sigma\, e^{-\frac{1}{4g}\int d^dr\,\sigma^2(r) - i\int d^dr\,\sigma(r)\rho(r)}    (6.64)
 = \int \mathcal{D}\sigma\, e^{-\frac{1}{4g}\int d^dr\,\sigma^2(r) - i\int_0^N d\tau\,\sigma(r(\tau))}    (6.65)

In the end:

w(r, N) = \int \mathcal{D}\sigma\, e^{-\frac{1}{4g}\int d^dr\,\sigma^2(r)}\, \big\langle e^{-i\int_0^N d\tau\,\sigma(r(\tau))}\, \delta^d(r(N) - r) \big\rangle_W    (6.66)

Take the Laplace transform:

\tilde w(r, \mu) = \int \mathcal{D}\sigma\, e^{-\frac{1}{4g}\int d^dr\,\sigma^2(r)} \int_0^\infty dN\, e^{-\mu N}\, \big\langle e^{-i\int_0^N d\tau\,\sigma(r(\tau))}\, \delta^d(r(N) - r) \big\rangle_W    (6.67)
 = \int \mathcal{D}\sigma\, e^{-\frac{1}{4g}\int d^dr\,\sigma^2(r)}\, \langle r|(\mu - D\nabla^2 + i\sigma)^{-1}|r_0\rangle    (6.68)
Using an n-dimensional scalar field we obtain (see previous sections):

\tilde w(r, \mu) = \lim_{n\to 0} \int \mathcal{D}\sigma \prod_{\alpha=1}^n \mathcal{D}\phi_\alpha\,
\exp\Big( -\int d^dr\, \Big[ \frac{\sigma^2(r)}{4g} + \frac{1}{2}\Big( D\sum_{\alpha=1}^n(\nabla\phi_\alpha(r))^2 + \mu\phi^2(r) + i\sigma(r)\phi^2(r) \Big) \Big] \Big)\, \phi_1(r)\phi_1(r')
 = \lim_{n\to 0} \int \prod_{\alpha=1}^n \mathcal{D}\phi_\alpha\, \phi_1(r)\phi_1(r')\, e^{-\mathcal{H}[\phi]}    (6.69)

where

\mathcal{H}[\phi] = \int d^dr\, \Big( \frac{1}{2}D(\nabla\phi)^2 + \frac{1}{2}\mu\phi^2 + g(\phi^2)^2 \Big)    (6.70)

Chapter 7

Models Of Phase Transitions

7.1 Ising Model


Consider the Ising model in d dimensions:

H = -J\sum_{\langle x,y\rangle}\sigma_x\sigma_y - b\sum_x \sigma_x    (7.1)

with partition function (where K = \beta J, h = \beta b):

Z_I = \sum_{\{\sigma_x = \pm 1\}} e^{-\beta H} = \sum_{\{\sigma_x = \pm 1\}} e^{K\sum_{\langle x,y\rangle}\sigma_x\sigma_y + h\sum_x\sigma_x}    (7.2)

The order parameter is the magnetization (|Λ| is the number of sites):

m(T) = \lim_{h\to 0}\lim_{|\Lambda|\to\infty} \frac{\partial}{\partial h}\frac{\log Z}{|\Lambda|}    (7.3)

The susceptibility is:

\chi(h, T) = \lim_{|\Lambda|\to\infty} \frac{\partial^2}{\partial h^2}\frac{\log Z}{|\Lambda|}    (7.4)

The model in d = 1 (with periodic boundary conditions) can be easily solved
using the transfer matrix method; define:

T(\sigma, \sigma') = \exp\Big( K\sigma\sigma' + \frac{h(\sigma + \sigma')}{2} \Big)    (7.5)

and rewrite the partition function 7.2:

Z = \sum_{\sigma_1 = \pm 1}\dots\sum_{\sigma_N = \pm 1} \exp\big( K(\sigma_1\sigma_2 + \dots + \sigma_{N-1}\sigma_N + \sigma_N\sigma_1) + h(\sigma_1 + \dots + \sigma_N) \big)
 = \sum_{\sigma_1 = \pm 1}\dots\sum_{\sigma_N = \pm 1} \exp\Big( K\sum_i\sigma_i\sigma_{i+1} + \frac{h}{2}\sum_i(\sigma_i + \sigma_{i+1}) \Big)
 = \sum_{\sigma_1 = \pm 1}\dots\sum_{\sigma_N = \pm 1} \prod_{i=1}^N \exp\Big( K\sigma_i\sigma_{i+1} + \frac{h}{2}(\sigma_i + \sigma_{i+1}) \Big)

Introducing the bras ⟨σ| and kets |σ⟩ and the matrix T:

|1\rangle \equiv (1, 0)^T, \qquad |-1\rangle \equiv (0, 1)^T

\mathbf{T} = \begin{pmatrix} e^{K+h} & e^{-K} \\ e^{-K} & e^{K-h} \end{pmatrix}    (7.6)

the partition function becomes:

Z^{(N)} = \sum_{\sigma_1 = \pm 1}\dots\sum_{\sigma_N = \pm 1} \langle\sigma_1|\mathbf{T}|\sigma_2\rangle\langle\sigma_2|\mathbf{T}|\sigma_3\rangle\dots\langle\sigma_N|\mathbf{T}|\sigma_1\rangle    (7.7)

Because of the resolution of the identity \sum_{\sigma_i}|\sigma_i\rangle\langle\sigma_i| = I we get:

Z^{(N)} = \mathrm{tr}\,\mathbf{T}^N = \lambda_1^N + \lambda_2^N    (7.8)

where λ₁ and λ₂ are the eigenvalues of T.
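The transfer matrix result (7.8) can be checked against a brute-force enumeration for a small chain. A minimal sketch, with illustrative K, h, N:

    import numpy as np
    from itertools import product

    K, h, N = 0.7, 0.3, 8

    T = np.array([[np.exp(K + h), np.exp(-K)],
                  [np.exp(-K),    np.exp(K - h)]])
    lam = np.linalg.eigvalsh(T)        # T is symmetric
    Z_tm = (lam**N).sum()              # tr T^N = lambda_1^N + lambda_2^N

    Z_bf = 0.0
    for s in product([1, -1], repeat=N):
        E = sum(K * s[i] * s[(i + 1) % N] + h * s[i] for i in range(N))
        Z_bf += np.exp(E)

    print(Z_tm, Z_bf)   # the two should agree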

7.2 Mean Field Theory


Starting from the Ising Model 7.1 we want to develop a mean field model
that, in the end, will be equivalent to the Ginzburg-Landau theory. Let’s
start by remembering the Hubbard-Stratonovich transformation:
! √ Z Y !
1X det A 1 X X
exp xi A−1
ij xj = dϕi exp − ϕi Aij ϕj + xi ϕi
2 i,j (2π)n/2 i
2 i,j i
(7.9)

60
for A positive definite. Apply this transformation to the Ising model; in the
Hamiltonian the spins σi and σj are coupled, in general, through a matrix
Kij that is non zero if i and j are nearest neighbors or i = j:
1X X K0 |Λ|
− βH = Ki,j σi σj + h σi − (7.10)
2 i,j i
2

The additional constant term compensating for Ki,i . Using 7.9 we can write:
! Z Y !
1X 1 1 X X
exp Kx,y σx σy = √ |Λ|/2
dϕx exp − ϕx K−1
x,y ϕy + σx ϕx
2 x,y det K (2π) x
2 x,y∈Λ x∈Λ
(7.11)
The partition function becomes:
K0 |Λ|
!
e− 2 XZ Y 1 X X
ZI = √ |Λ|/2
dϕx exp − ϕx K−1
x,y ϕy + σx (ϕx + h)
det K (2π) σ x∈Λ
2 x,y∈Λ x∈Λ
(7.12)
Z Y !
1 X YX
= const dϕx exp − ϕx K−1 ϕ
x,y y eσx (h+ϕx ) = (7.13)
x∈Λ
2 x,y∈Λ x∈Λ σx ±1
Z Y !
1 X −1
X
= const dϕx exp − ϕx Kx,y ϕy + log cosh(h + ϕx ) (7.14)
x∈Λ
2 x,y∈Λ x∈Λ

Define now:
1 X X
H[ϕ] = ϕx K−1
x,y ϕy − log cosh(h + ϕx ) (7.15)
2 x,y∈Λ x∈Λ

such that:

Z_I = \text{const}\int \prod_x d\varphi_x\, \exp(-H[\varphi])    (7.16)

We are ready to use the saddle point approximation, calling \bar\varphi the stationary
point of the exponential argument:

\partial_{\varphi_x} H[\varphi]\big|_{\bar\varphi} = 0 = \sum_{y\in\Lambda} K^{-1}_{x,y}\bar\varphi_y - \tanh(\bar\varphi_x + h)    (7.17)

\partial_{\varphi_x}\partial_{\varphi_y} H[\varphi]\big|_{\bar\varphi} = K^{-1}_{x,y} - \delta_{x,y}\frac{1}{\cosh^2(\bar\varphi_x + h)}    (7.18)
First we look for uniform solutions, i.e. \bar\varphi_x = \tilde\varphi\ \forall x; since:

K_{x,y} = \begin{cases} K_0 & x = y \\ K & x, y \text{ nearest neighbors} \\ 0 & \text{otherwise} \end{cases}    (7.19)

the sum \sum_y K_{x,y} is independent of x:

\sum_y K_{x,y} = \underbrace{K_0}_{x=y} + \underbrace{2d}_{\text{coord. numb.}} K    (7.20)

From this and the eigenvalues of K:

\sum_y K^{-1}_{x,y} = \frac{1}{K_0 + 2dK}    (7.21)

In the end we get:

\frac{\tilde\varphi}{K_0 + 2dK} - \tanh(\tilde\varphi + h) = 0    (7.22)

This is a self-consistency equation for \tilde\varphi. The saddle point free energy density
is:

f(\tilde\varphi, h) = \frac{\log Z_I}{|\Lambda|} = -\frac{H(\tilde\varphi)}{|\Lambda|} = -\frac{\tilde\varphi^2}{2(K_0 + 2dK)} + \log\cosh(h + \tilde\varphi)    (7.23)

from which we derive the magnetization:

m = \lim_{h\to 0}\lim_{|\Lambda|\to\infty} \frac{1}{|\Lambda|}\partial_h\log Z_I = \frac{\tilde\varphi}{K_0 + 2dK}    (7.24)

If we set \frac{\tilde\varphi}{K_0 + 2dK} = m and \tilde K = K_0 + 2dK, the self-consistency equation for
\tilde\varphi becomes:

m = \tanh(m\tilde K)    (7.25)

which is the self-consistency equation for the magnetization found in the
standard mean field theory. The saddle point free energy density is now:

f(m, h) = \frac{\tilde K m^2}{2} - \log\cosh(\tilde K m + h)    (7.26)
Expanding for small m and h we arrive at:

f(m, h) \approx \frac{\tilde K(1 - \tilde K)}{2}m^2 + \frac{\tilde K^4}{12}m^4 - \tilde K m h    (7.27)

This is the form of the free energy one can obtain using the Landau approach.
Setting K_0 = 2Kd we get:

\tilde K = 4dK = \frac{4dJ}{k_B T} \equiv \frac{T_c}{T}    (7.28)

For zero magnetic field, if \tilde K < 1 (T > T_c) we have only one equilibrium phase,
\bar m = 0; for \tilde K > 1 (T < T_c) we get \bar m = 0 as a metastable state and two Z_2-
symmetric solutions \pm m_0 as stable equilibrium phases; indeed, minimizing f:

\partial_m f(m, 0) = \tilde K m\Big( (1 - \tilde K) + \frac{\tilde K^3 m^2}{3} \Big) + o(m^4) = 0

we get:

• m = 0 for \tilde K < 1

• m = 0 and m = \pm\Big( \frac{3(\tilde K - 1)}{\tilde K^3} \Big)^{1/2} for \tilde K \geq 1

(the positive solution is reached when h → 0⁺, the negative when h → 0⁻).
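The self-consistency equation (7.25) is readily solved by fixed-point iteration. A minimal sketch, with illustrative values of K̃ = T_c/T:

    import numpy as np

    def solve_m(K_tilde, m0=0.5, tol=1e-12):
        m = m0
        for _ in range(10000):
            m_new = np.tanh(K_tilde * m)
            if abs(m_new - m) < tol:
                break
            m = m_new
        return m

    for K_tilde in [0.8, 1.0, 1.2, 2.0]:
        print(K_tilde, solve_m(K_tilde))      # m = 0 above Tc, m != 0 below

    # small-m prediction just below Tc: m ~ sqrt(3(K~ - 1)/K~^3)
    print("predicted, K~=1.2:", np.sqrt(3 * 0.2 / 1.2**3))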

7.3 Field Theory


Start again from K_{x,y} and invert it following the von Neumann prescription
(geometric series; a is the lattice spacing):

K = K_0 I + K\Delta, \qquad \Delta_{x,y} \equiv \delta_{|x-y|,a}

K^{-1} = \frac{1}{K_0}\Big( I + \frac{K}{K_0}\Delta \Big)^{-1} = \frac{1}{K_0} - \frac{K}{K_0^2}\Delta + \dots

Now we approximate H:

\frac{1}{2}\sum_{x,y\in\Lambda}\varphi_x K^{-1}_{x,y}\varphi_y = \frac{1}{2K_0}\sum_{x\in\Lambda}\varphi_x^2 - \frac{K}{2K_0^2}\sum_{x,y\in\Lambda}\varphi_x\Delta_{x,y}\varphi_y + \dots
 = \frac{1}{2K_0}\sum_{x\in\Lambda}\varphi_x^2 - \frac{K}{4K_0^2}\sum_{\langle x,y\rangle}\varphi_x\varphi_y + \dots    (7.29)

\sum_{x\in\Lambda}\log\cosh(h + \varphi_x) = \sum_{x\in\Lambda}\Big( \frac{1}{2}\varphi_x^2 - \frac{1}{12}\varphi_x^4 + \varphi_x h + \dots \Big)    (7.30)

Since \Delta_{x,y} couples only nearest-neighbor fields we can expand the field \varphi_y
around a direction \mu (one among the d dimensions) such that y = x + a\mu,
for a → 0:

\varphi_y = \varphi_x + a\partial_\mu\varphi_x + \frac{a^2}{2}\partial_\mu^2\varphi_x + \dots    (7.31)

(TO FINISH)

Chapter 8

Entropy and Information

Chapter 9

From Wiener to Feynman Path


Integrals

9.1 Imaginary Diffusion


Let’s change "a bit" topic and consider the Schrodinger equation:

i~∂t |ψ(t)i = Ĥ(t) |ψ(t)i (9.1)


2

If Ĥ(t) = 2m + V̂ (x, t), in position space, we would get an equation resembling
the diffusion (Bloch) equation:

~2 2
i~∂t ψ(x, t) = − ∂ ψ(x, t) + V (x, t)ψ(x, t) (9.2)
2m x
∂t w(x, t) = D∂x2 w(x, t) − V (x, t)w(x, t) (9.3)
Stick to the free case (V̂ = 0) and formally set the diffusion coefficent D
i~
equal to 2m ≡ DQM :

i~ 2
∂t w(x, t) = ∂ w(x, t)
2m x
~2 2
i~∂t w(x, t) = − ∂ w(x, t)
2m x

As we can see we get nothing but the Schrödinger equation; this procedure
is then applied to the propagator of the diffusion equation:

W_0(x, t|x_0, t_0) = \frac{1}{\sqrt{4\pi D(t - t_0)}}\, e^{-\frac{(x - x_0)^2}{4D(t - t_0)}} \;\to\;
K_0(x, t|x_0, t_0) = \sqrt{\frac{m}{2\pi\hbar i(t - t_0)}}\, e^{\frac{im(x - x_0)^2}{2\hbar(t - t_0)}}

The function obtained, K_0(x, t|x_0, t_0), is nothing but the propagator of
the free theory; in other words:

K_0(x, t|x_0, t_0) = \langle\psi(x, t)|\, e^{-\frac{i\hat H t}{\hbar}}\, |\psi(x_0, t_0)\rangle    (9.4)

Starting from the fact that W_0(x, t|x_0, t_0) can be expressed using a Wiener
path integral, let's try to find an integral representation for K_0:

K_0(x, t|x_0, t_0) = \int d_w x(\tau)\, \delta(x(t) - x)\Big|_{D = D_{QM}}
 = \lim_{N\to\infty} \int \prod_{j=1}^N dx_j\, \sqrt{\frac{m}{2\pi\hbar i\,\Delta t_j}}\, e^{\frac{im}{2\hbar}\sum_{j=1}^N \frac{(x_j - x_{j-1})^2}{\Delta t_j}}
 = \int \prod_{\tau = t_0}^{t} dx(\tau)\, \sqrt{\frac{m}{2\pi\hbar i\, d\tau}}\, e^{\frac{i}{\hbar}\int_0^t d\tau\, \frac{m}{2}\dot x(\tau)^2}

We are left with one problem: while W_0(x, t|x_0, t_0) \in \mathbb{R}^+ has a probabilistic
interpretation (i.e. is a p.d.f.) we can't say the same for K_0, since it is
complex, and trying to use the probability amplitude interpretation we get
\int_{-\infty}^{\infty} dx\, |K_0(x, t|x_0, t_0)|^2 = \infty. So even if we can make sense of K_0 using a path
integral (called the Feynman path integral) we can't really say that it carries
a rigorous probability measure as in diffusion. Nevertheless let's generalize
this result to a Hamiltonian with a potential:

i\hbar\,\partial_t\psi = -\frac{\hbar^2}{2m}\partial_x^2\psi + V(x, t)\psi    (9.5)

Multiplying both sides by \frac{1}{i\hbar}:

\partial_t\psi = D_{QM}\,\partial_x^2\psi - \frac{i}{\hbar}V(x, t)\psi    (9.6)

we obtain the Bloch equation where the substitutions D → D_{QM} and V →
\frac{i}{\hbar}V(x, t) were made. Thanks to this we can generalize the Feynman-Kac
formula. Calling K(x, t|x_0, t_0) the propagator of the Schrödinger equation
with the potential V, the modified FK formula reads:

K(x, t|x_0, t_0) = \big\langle e^{-\int_{t_0}^{t}\frac{i}{\hbar}V(x(\tau), \tau)\, d\tau}\, \delta(x(t) - x) \big\rangle_F    (9.7)

Writing out \langle\cdot\rangle_F explicitly:

K(x, t|x_0, t_0) = \int \prod_{\tau = t_0}^{t} dx(\tau)\, \sqrt{\frac{m}{2\pi\hbar i\, d\tau}}\, e^{\frac{i}{\hbar}\int_0^t d\tau\,\left(\frac{m}{2}\dot x(\tau)^2 - V(x(\tau), \tau)\right)}\, \delta(x(t) - x)    (9.8)

We recognize in the exponent the classical action:

S[x] = \int_{t_0}^{t} d\tau\, \Big( \frac{m}{2}\dot x(\tau)^2 - V(x(\tau), \tau) \Big)    (9.9)

So:

K(x, t|x_0, t_0) = \int \prod_{\tau = t_0}^{t} dx(\tau)\, \sqrt{\frac{m}{2\pi\hbar i\, d\tau}}\, e^{\frac{i}{\hbar}S[x]}\, \delta(x(t) - x)    (9.10)

9.2 Phase Space Feynman Path Integral


Here we employ a different approach to the Feynman path integral. Let's
begin by considering a time-independent Hamiltonian and the unitary evo-
lution operator U(t, t_0) = e^{-i\frac{H(t - t_0)}{\hbar}}. Set, w.l.o.g., t_0 = 0 and divide the time
interval into N steps of size \epsilon \equiv t/N; then:

U(t, t_0 = 0) = e^{-i\frac{Ht}{\hbar}} = \underbrace{e^{-i\frac{H\epsilon}{\hbar}}\dots e^{-i\frac{H\epsilon}{\hbar}}}_{N\text{ times}}    (9.11)

This expression is valid for any N (even in the limit N → ∞). Let's
go a bit further and consider a wavefunction at time t_0 = 0 and the same
wavefunction at time t; then by the definition of the evolution operator:

|\psi(t)\rangle = U(t, 0)\,|\psi(0)\rangle = e^{-i\frac{Ht}{\hbar}}\,|\psi(0)\rangle    (9.12)

\psi(x, t) = \langle x|\psi(t)\rangle = \int dx_0\, \langle x|\, e^{-i\frac{Ht}{\hbar}}\, |x_0\rangle\,\langle x_0|\psi(0)\rangle \equiv \int dx_0\, K(x, x_0, t)\,\langle x_0|\psi(0)\rangle    (9.13)

where we introduced the time evolution kernel K(x, x_0, t) (notice the similar-
ity with the ESCK relation). Consider just K(x, x_0, t) and use the previous
time-slicing property:

K(x, x_0, t) = \langle x|\, e^{-i\frac{Ht}{\hbar}}\, |x_0\rangle = \langle x|\, \underbrace{e^{-i\frac{H\epsilon}{\hbar}}\dots e^{-i\frac{H\epsilon}{\hbar}}}_{N\text{ times}}\, |x_0\rangle    (9.15)

Inserting between the sliced evolution operators N − 1 resolutions of the iden-
tity \int dx_j\, |x_j\rangle\langle x_j|, j = 1\dots N-1, we get for K(x, x_0, t):

\int dx_1\dots dx_{N-1}\, \langle x|\, e^{-i\frac{H\epsilon}{\hbar}}\, |x_{N-1}\rangle\langle x_{N-1}|\, e^{-i\frac{H\epsilon}{\hbar}}\, |x_{N-2}\rangle\langle x_{N-2}|\dots|x_1\rangle\langle x_1|\, e^{-i\frac{H\epsilon}{\hbar}}\, |x_0\rangle    (9.16)

Again we stress that this expression is exact and still valid for N → ∞.
Calling x ≡ x_N we can write K(x, x_0, t) as:

K(x, x_0, t) = \int dx_1\dots dx_{N-1} \prod_{j=1}^{N} \langle x_j|\, e^{-i\frac{H\epsilon}{\hbar}}\, |x_{j-1}\rangle    (9.17)

Now restrict the Hamiltonian operator to \hat H = \frac{\hat p^2}{2m} + \hat V(x) and use the Trotter
decomposition (Baker-Campbell-Hausdorff formula), i.e. for all self-adjoint
operators A, B and for all s:

e^{s(A+B)} = \lim_{N\to\infty}\big( e^{sA/N} e^{sB/N} \big)^N    (9.18)

U(t, 0) = \lim_{N\to\infty}\Big( e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat V}{\hbar}} \Big)^N    (9.19)

So as \epsilon = t/N \to 0:

e^{-i\frac{H\epsilon}{\hbar}} = e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat V}{\hbar}} + o(\epsilon^2)    (9.20)

from which:

K(x, x_0, t) = \lim_{N\to\infty}\int dx_1\dots dx_{N-1} \prod_{j=1}^{N} \langle x_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat V}{\hbar}}\, |x_{j-1}\rangle    (9.21)

Let's analyse just the inner term:

\langle x_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat V}{\hbar}}\, |x_{j-1}\rangle = \langle x_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, |x_{j-1}\rangle\, e^{-i\epsilon\frac{V(x_{j-1})}{\hbar}}    (9.22)

Insert two identity resolutions in momentum space:

\langle x_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, |x_{j-1}\rangle = \int dp_j\, dp_{j-1}\, \langle x_j|p_j\rangle\langle p_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, |p_{j-1}\rangle\langle p_{j-1}|x_{j-1}\rangle    (9.23)
 = \int \frac{dp_j\, dp_{j-1}}{2\pi\hbar}\, e^{i\frac{p_j x_j}{\hbar}}\, \langle p_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, |p_{j-1}\rangle\, e^{-i\frac{p_{j-1}}{\hbar}x_{j-1}}    (9.24)
 = \int \frac{dp_j\, dp_{j-1}}{2\pi\hbar}\, e^{i\frac{p_j x_j}{\hbar}}\, e^{-i\epsilon\frac{p_{j-1}^2}{2m\hbar}}\, \delta(p_j - p_{j-1})\, e^{-i\frac{p_{j-1}}{\hbar}x_{j-1}}    (9.25)
 = \int \frac{dp_j}{2\pi\hbar}\, e^{i\frac{p_j(x_j - x_{j-1})}{\hbar} - i\epsilon\frac{p_j^2}{2m\hbar}}    (9.26)

Using 1.15 we get:

\langle x_j|\, e^{-i\epsilon\frac{\hat p^2}{2m\hbar}}\, |x_{j-1}\rangle = \frac{1}{\sqrt{2\pi i\frac{\hbar}{m}\epsilon}}\, e^{\frac{im}{2\hbar}\frac{(x_j - x_{j-1})^2}{\epsilon}}    (9.27)

Going back to our previous equation:

K(x, x_0, t) = \lim_{N\to\infty}\int dx_1\dots dx_{N-1} \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi i\frac{\hbar}{m}\epsilon}}\, e^{\frac{im}{2\hbar}\frac{(x_j - x_{j-1})^2}{\epsilon}}\, e^{-i\epsilon\frac{V(x_{j-1})}{\hbar}}    (9.28)
 = \lim_{N\to\infty}\int \frac{dx_1\dots dx_N}{\big(2\pi i\frac{\hbar}{m}\epsilon\big)^{N/2}}\, \delta(x_N - x)\, e^{\frac{i}{\hbar}\sum_{j=1}^N \epsilon\left( \frac{m}{2}\frac{(x_j - x_{j-1})^2}{\epsilon^2} - V(x_{j-1}) \right)}    (9.29)

The term \sum_{j=1}^N \epsilon\big( \frac{m}{2}\frac{(x_j - x_{j-1})^2}{\epsilon^2} - V(x_{j-1}) \big) is nothing but the discretization
of the action:

S[x] = \int_0^t d\tau\, L(x(\tau), \dot x(\tau)) = \int_0^t d\tau\, \Big( \frac{m\dot x^2}{2} - V(x(\tau)) \Big)    (9.30)

So, in a formal way:

K(x, x_0, t) = \int \prod_{\tau=0}^{t} \frac{dx(\tau)}{\sqrt{2\pi i\frac{\hbar}{m}d\tau}}\, e^{\frac{i}{\hbar}S[x(\tau)]}\, \delta(x(t) - x)    (9.31)

Now if we make the substitutions t → −it and \frac{\hbar}{2m} → D:

K(x, x_0, -it) = \int \prod_{\tau=0}^{t} \frac{dx(\tau)}{\sqrt{4\pi D\, d\tau}}\, e^{\frac{i}{\hbar}\int_0^t(-i\, d\tau)\left( -\frac{\hbar}{4D}\dot x^2(\tau) - V(x(\tau)) \right)}    (9.32)
 = \int \prod_{\tau=0}^{t} \frac{dx(\tau)}{\sqrt{4\pi D\, d\tau}}\, e^{-\int_0^t d\tau\,\frac{1}{4D}\dot x^2(\tau)}\, e^{-\int_0^t d\tau\,\frac{1}{\hbar}V(x(\tau))}    (9.33)

we recover the Feynman-Kac formula. This procedure of sending the time to
an imaginary value is called a Wick rotation and the resulting path integral is
referred to as the Euclidean path integral.

9.3 Finite Temperature Quantum Mechanics


For the last section of our chapter about quantum mechanics, consider a
quantum system in contact with a thermal bath (canonical ensemble); its
partition function reads:

Z(\beta) = \mathrm{tr}\, e^{-\beta H} = \sum_{\text{energy eigenvalues }E_n} e^{-\beta E_n}    (9.34)

Often (as we wrote) the trace is taken in the energy representation; here we
want to use the position one:

Z(\beta) = \int Dq\, \langle q|\, e^{-\beta H}\, |q\rangle    (9.35)

Notice that if we take \beta = \frac{it}{\hbar}, i.e. t = -i\hbar\beta, we recover the propagator K(q, q, t)
with equal starting point and endpoint (= periodic trajectory), which we know we
can compute using the Feynman path integral; in other words a quantum
mechanical partition function can be computed using a Euclidean path
integral or, again, a quantum mechanics theory is equivalent to a statistical
mechanics theory (the D accounts for the presence of more than
one particle in the system).
As an example consider a free particle confined on a ring of length L; its
energy eigenvalues are:

E_n = \frac{\hbar^2}{2m}\Big( \frac{2\pi n}{L} \Big)^2, \qquad n \in \mathbb{Z}    (9.36)

So:

Z(\beta) = \sum_{n=-\infty}^{\infty} e^{-\beta\frac{\hbar^2}{2m}\left(\frac{2\pi n}{L}\right)^2} \approx \frac{L}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-\beta\frac{\hbar^2}{2m}k^2} = \frac{L}{\Lambda}    (9.37)

where \Lambda \equiv \frac{h}{\sqrt{2\pi m k_B T}} is the thermal wavelength. Consider now K_0(x, x_0, t)
(that is, the propagator of the free Schrödinger equation) and send t →
−i\hbar\beta:

K_0(x, x_0, t = -i\hbar\beta) = \frac{\sqrt{2\pi m k_B T}}{h}\, e^{-\frac{m(x - x_0)^2}{2\hbar^2\beta}}    (9.38)

Setting x = x_0 (periodic trajectory) and doing the integral over dq (which
gives a factor of L, the volume of the system) we recover Z(\beta):

Z(\beta) = \int dq\, K_0(q, q, t = -i\hbar\beta) = \frac{L}{\Lambda}    (9.39)
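The approximation of the sum (9.37) by L/Λ is easy to check numerically. A minimal sketch, in units ħ = m = k_B = 1 (so Λ = √(2πβ)); L and T are illustrative:

    import numpy as np

    L, T = 10.0, 2.0
    beta = 1.0 / T
    n = np.arange(-2000, 2001)
    E = 0.5 * (2 * np.pi * n / L)**2
    Z_sum = np.exp(-beta * E).sum()

    Lambda = np.sqrt(2 * np.pi * beta)   # thermal wavelength in these units
    print(Z_sum, L / Lambda)             # the two should agree closely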

9.4 Field Theory Example


Let's apply what we discovered above to an actual quantum field theory,
the scalar field. A free scalar field Lagrangian density (also known as the Klein-
Gordon Lagrangian) reads (we use natural units, i.e. c = \hbar = 1):

\mathcal{L} = \frac{1}{2}(\partial_\mu\phi)(\partial^\mu\phi) - \frac{1}{2}m^2\phi^2    (9.40)

This is a relativistic theory so the term (\partial_\mu\phi)(\partial^\mu\phi) is equal to:

(\partial_t\phi)^2 - (\nabla\phi)^2

since we use the mostly-minus metric, i.e. (+, −, −, −). Using the Feynman
path integral we compute the partition function of the theory (that is just
the unconditional path integral):

Z = \int \mathcal{D}\phi\, e^{i\int dt\, d^{D-1}x\, \left[ \frac{1}{2}(\partial_t\phi)^2 - \frac{1}{2}(\nabla\phi)^2 - \frac{1}{2}m^2\phi^2 \right]}    (9.41)

where we took the number of spatial dimensions of the theory equal to D − 1. Now
perform a Wick rotation and send t → −ix_0:

t \to -ix_0, \qquad dt \to -i\, dx_0, \qquad \partial_t \to i\,\partial_0    (9.42-9.44)

We get:

Z = \int \mathcal{D}\phi\, e^{-\int dx_0\, d^{D-1}x\, \left[ \frac{1}{2}(\partial_0\phi)^2 + \frac{1}{2}(\nabla\phi)^2 + \frac{1}{2}m^2\phi^2 \right]}    (9.45)

Notice that now the path integral has the same form as the polymer field
theory one. In the end:

Z = \int \mathcal{D}\phi\, e^{-\int d^Dx\, \left[ \frac{1}{2}(\nabla\phi)^2 + \frac{1}{2}m^2\phi^2 \right]}    (9.46)

where now the gradient is taken in D dimensions. The two-point function is
the inverse of the operator (a.k.a. the Green's function of):

-\nabla^2 + m^2    (9.47)

Chapter 10

Topics in Stochastic Phenomena

10.1 Stochastic Amplification


We are used to considering the noise in a signal as an annoying background,
and we do a lot of work to get rid of it. However, in certain systems, the presence
of noise is crucial to the behavior of the system itself. In the example we'll see,
the noise is the component keeping the oscillations alive in a predator-
prey system. Consider the Volterra model: an "urn" containing a number n
of predators, denoted by A, m prey, denoted by B, and N − n − m empty
slots (denoted by E), where N is the capacity of the urn (n + m ≤ N). At
each instant we can have birth processes, death processes and predator-prey
interactions:
• B-Birth: BE → BB
• A-Death: A → E
• B-Death: B → E
• PP: AB → AE
• PP+Birth: AB → AA
The "reactions" occur with rates: b, d1 , d2 , p1 , p2 . If a configurations space
point is denoted by (n, m) each reaction moves the point by a vector:
• B-Birth: (0, 1)
• A-Death: (−1, 0)

• B-Death: (0, −1)
• PP: (0, −1)
• PP+Birth: (1, −1)
Before proceeding, let's rescale b, p_1, p_2 by N^{-1} and d_1, d_2 by N. The master
equation's rates are:

Predator death: T_1 = T(n-1, m|n, m) = d_1 n
This is because we choose a predator with probability n/N, which we multi-
ply by the death rate (we rescaled!).

Prey birth: T_2 = T(n, m+1|n, m) = 2b\,\frac{m}{N}(N - m - n)
We need to choose a prey and an empty slot (and vice versa), i.e. set
b\big( \frac{m}{N}\frac{N-m-n}{N-1} + \frac{N-n-m}{N}\frac{m}{N-1} \big), and rescale b.

Prey death: T_3 = T(n, m-1|n, m) = 2p_2\,\frac{nm}{N} + d_2 m

Predator birth: T_4 = T(n+1, m-1|n, m) = 2p_1\,\frac{nm}{N}
Choose first a predator and then a prey (\frac{n}{N}\frac{m}{N-1}), and vice versa (\frac{m}{N}\frac{n}{N-1}),
then rescale.

Now we would be ready to write and (try to) solve the master equation.
However, here we take a simpler but very powerful approach, the Linear
Noise Approximation à la Wallace.

10.2 Linear Noise Approximation


Consider a system like the former one, with a certain number of reactions (la-
beled by j) which change the state vector of the system k, and a total size
Ω. We map the previous master equation's rates T_j:

T_j = \Omega\, r_j\!\Big( \frac{k}{\Omega} \Big)    (10.1)

and define a vector v_j such that reaction j brings the state vector from
k to k + v_j. Suppose there exists a time scale τ during which we observe
many transitions but the rates r_j do not vary substantially:

\frac{1}{\Omega r_j} \ll \tau \ll \frac{r_j\,\tau}{\Delta r_j}

The jumps of the system along different channels j are independent, so we
can assume Poisson statistics for the variation of k, which we approximate
by a Gaussian:

k(t + \tau) \approx k(t) + \sum_j \mathcal{N}(\lambda_j, \lambda_j)\, v_j    (10.2)

where \lambda_j = \tau\,\Omega\, r_j\!\big(\frac{k}{\Omega}\big). Let's decompose the Gaussian:

k(t + \tau) \approx k(t) + \tau\Omega\sum_j r_j v_j + \sqrt{\tau\Omega}\sum_j \sqrt{r_j}\,\mathcal{N}(0, 1)\, v_j    (10.3)

(recall: r_j is a function of k/Ω.)
Expand now k in terms of a deterministic and a stochastic part:

k = \Omega x + \sqrt{\Omega}\,\xi    (10.4)

\frac{k}{\Omega} = x + \frac{1}{\sqrt{\Omega}}\,\xi    (10.5)

Now insert this in the previous expansion and isolate the O(1) (deterministic)
and O(1/\sqrt{\Omega}) (stochastic) terms (for simplicity we drop the arrow on vectors):

x(t + \tau) = x(t) + \tau\sum_j r_j(x(t))\, v_j    (10.6)

As τ → 0:

\frac{dx(t)}{dt} = \sum_j r_j(x(t))\, v_j    (10.7)

Calling x_s the stationary solution of this equation, we write the stochastic
part:

\frac{1}{\sqrt{\Omega}}\,\xi(t + \tau) = \frac{1}{\sqrt{\Omega}}\,\xi(t) + \frac{\tau}{\sqrt{\Omega}}\sum_j \langle\nabla_x r_j(x_s), \xi\rangle\, v_j + \sqrt{\frac{\tau}{\Omega}}\sum_j \sqrt{r_j(x_s)}\,\mathcal{N}(0, 1)\, v_j    (10.8)

\frac{d\xi(t)}{dt} = \sum_j \langle\nabla_x r_j(x_s), \xi\rangle\, v_j + \sum_j \sqrt{r_j(x_s)}\,\eta_j\, v_j    (10.9)

where \eta_j is a white noise (\langle\eta(t)\eta(t')\rangle = \delta(t - t')).
10.3 Continued: Volterra model
We apply the linear noise approximation to the previously introduced Volterra
model, with f_1 and f_2 the concentrations of predators and prey:

x = (f_1, f_2)^T = (n/N, m/N)^T

The rates are:

• r1 = d1 f1 v1 = (−1, 0)

• r2 = 2bf2 (1 − f1 − f2 ) v2 = (0, 1)

• r3 = 2p2 f1 f2 + d2 f2 v3 = (0, −1)

• r4 = 2p1 f1 f2 v4 = (1, −1)

The deterministic equations read:

\frac{df_1}{dt} = \sum_j r_j\, v_j^{(1)} = -d_1 f_1 + 2p_1 f_1 f_2    (10.10)

\frac{df_2}{dt} = \sum_j r_j\, v_j^{(2)} = 2bf_2(1 - f_1 - f_2) - 2p_2 f_1 f_2 - d_2 f_2 - 2p_1 f_1 f_2    (10.11)
 = (2b - d_2)f_2 - 2bf_2^2 + f_1 f_2(-2b - 2p_2 - 2p_1)    (10.12)

This would be the Lotka-Volterra model if it didn't include the f_2^2 term. From
this equation we should extract the stationary solutions (f_1^*, f_2^*) and plug
them into the stochastic equation. The stochastic part needs the gradients of
the rates:

• ∇r1 = (d1 , 0)

• ∇r2 = (−2bf2 , 2b(1 − f1 − 2f2 ))

• ∇r3 = (2p2 f2 , 2p2 f1 + d2 )

• ∇r_4 = (2p_1 f_2, 2p_1 f_1)

At the end we end up with two 2×2 matrices A, B connecting \xi_1 and
\xi_2 and the noisy part:

\frac{d\xi}{dt} = A\xi + B\eta

where \eta is a two-dimensional white noise. To get the power spectrum, Fourier
transform (i\omega\tilde\xi = A\tilde\xi + B\tilde\eta):

\tilde\xi_i = \frac{1}{i\omega}\big( A_{ik}\tilde\xi_k + B_{ik}\tilde\eta_k \big)

\tilde\xi_i\tilde\xi_j^* = -\frac{1}{\omega^2}\big( A_{il}\tilde\xi_l + B_{il}\tilde\eta_l \big)\big( A_{jk}\tilde\xi_k^* + B_{jk}\tilde\eta_k^* \big)
 = -\frac{1}{\omega^2}\big( A_{il}A_{jk}\tilde\xi_l\tilde\xi_k^* + B_{il}B_{jk}\tilde\eta_l\tilde\eta_k^* + A_{il}B_{jk}\tilde\xi_l\tilde\eta_k^* + B_{il}A_{jk}\tilde\eta_l\tilde\xi_k^* \big)

\langle\tilde\xi_i\tilde\xi_j^*\rangle = -\frac{1}{\omega^2}\sum_{l,k}\big( A_{il}A_{jk}\langle\tilde\xi_l\tilde\xi_k^*\rangle + B_{il}B_{jk}\delta_{lk} \big)

P = -\frac{1}{\omega^2}\big( A P A^T + B B^T \big)

10.4 Stochastic Resonance


Consider a particle subject to a Mexican hat-like potential with two minima
at x = c and x = −c. Suppose there exists an external driving force
that, over a period of length τ, cyclically raises and lowers the relative
depth of the potential at x = ±c: at time t = 0, V(c) = V(−c); at t = τ/4,
V(c) > V(−c); at t = τ/2, V(c) = V(−c); and at t = 3τ/4, V(c) < V(−c). Since
the potential varies with time, the particle jumps between the two minima
(at x = 0 the potential has a local maximum). We model
the particle dynamics using a Langevin equation:

\dot x = -V_0'(x) + \xi(t), \qquad \langle\xi(t)\xi(t')\rangle = K\delta(t - t')

Here K is the strength of the noise; in the case of a purely thermodynamic
interpretation we would set K = 2k_B T. We can write the typical jump time:

\langle\tau\rangle_0 = \frac{2}{K}\int_{-c}^{c} dy\, \exp\Big( \frac{2V_0(y)}{K} \Big) \int_{-\infty}^{y} dz\, \exp\Big( \frac{-2V_0(z)}{K} \Big)    (10.13)

In the Kramers approximation we find the jumping rate W_0:

W_0 = \langle\tau\rangle_0^{-1} \approx \frac{1}{2\pi}\sqrt{|V''(0)|\,V''(c)}\, \exp\Big( \frac{-2\Delta V}{K} \Big)    (10.14)

where \Delta V = V(0) - V(c). We add a modulation to the potential:

V(x, t) = V_0(x) + V_1\,\frac{x}{c}\,\sin\omega_s t    (10.15)

If \langle\tau\rangle_0 \ll t_s \equiv \frac{2\pi}{\omega_s} and \Delta V \gg V_1, the jump rates are:

W_{1,2} = \frac{1}{2\pi}\sqrt{|V''(0)|\,V''(c)}\, \exp\Big( -\frac{2}{K}(\Delta V \pm V_1\sin\omega_s t) \Big)    (10.16)

W_1 is the rate from −c to +c, W_2 vice versa. We move to a reduced description,
studying only the positions ±c:

\begin{cases} \dot p_1 = -W_1(t)\, p_1 + W_2(t)\, p_2 \\ \dot p_2 = W_1(t)\, p_1 - W_2(t)\, p_2 \end{cases}    (10.17)

where p_1(t) is the probability of being at x = −c and p_2(t) at x = c. Since
p_2(t) = 1 - p_1(t):

\dot p_1 = -\underbrace{(W_1(t) + W_2(t))}_{\equiv W(t)}\, p_1(t) + W_2(t)    (10.18)

This equation approaches a periodic solution:

p_1^{osc}(t) = \frac{1}{1 - e^{-\langle W\rangle t_s}}\int_0^{t_s} dt'\, W_2(t - t')\, \exp\Big( -\int_{t-t'}^{t} dt''\, W(t'') \Big)    (10.19)

where \langle W\rangle is the average of W over t_s. Consider weakly modulated rates:

\begin{cases} W_1(t) = \frac{W}{2} - \epsilon\sin\omega_s t \\ W_2(t) = \frac{W}{2} + \epsilon\sin\omega_s t \end{cases}    (10.20)

Calling p_1 = p:

\dot p(t) = -W p(t) + \frac{W}{2} + \epsilon\sin\omega_s t    (10.21)

If we just consider the deviations of p from 1/2, i.e. \Delta p = p - \frac{1}{2}:

\Delta\dot p = -W\,\Delta p + \epsilon\sin\omega_s t    (10.22)
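Equation (10.22) is a driven linear ODE whose long-time oscillation has amplitude ε/√(W² + ω_s²), which is easy to verify numerically. A minimal sketch, with illustrative W, ε, ω_s:

    import numpy as np

    W, eps, omega_s = 1.0, 0.1, 0.5
    dt, steps = 1e-3, 200000

    dp = 0.0
    traj = np.empty(steps)
    for i in range(steps):
        t = i * dt
        dp += dt * (-W * dp + eps * np.sin(omega_s * t))   # Euler step of (10.22)
        traj[i] = dp

    print("measured amplitude:", traj[steps // 2:].max())
    print("predicted:", eps / np.sqrt(W**2 + omega_s**2))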

Chapter 11

Disordered systems

11.1 Spin Glasses


Start by considering the Ising model:

H = -\sum_{\langle i,j\rangle} J_{ij} S_i S_j    (11.1)

Given that J_{ij} > 0 (ferromagnetic interaction), we know that at low tem-
perature we find "almost" all spins in the up or down state (having a doubly
degenerate ground state), while at high temperature the system is in a dis-
ordered (paramagnetic) phase. As usual, the order parameter is ⟨m⟩, the
magnetization.
Differently from the Ising model, if a Hamiltonian such as 11.1 has quenched
couplings J_{ij}, either positive (ferromagnetic) or negative (antiferro-
magnetic), we say that the system is a spin glass; because of this, in general,
we can't have huge positive or negative domains in the ground state, since it
is almost impossible to satisfy positive and negative interactions simultaneously.
Indeed, an Ising model has a global minimum of the energy, while a spin glass has a lot
of local minima. Take for example 3 spins: the first two interact ferro-
magnetically, the third and the first antiferromagnetically, so one cannot
satisfy equal and opposite orientations at the same time. In a spin glass we
take the interaction term to be picked from a probability distribution so,
in some sense, our system is disordered. This kind of disorder appears even
in machine learning and in binary SAT problems (logical equations). In a
system like this we have to deal with two kinds of averages, the one over the

disorder and the ensemble average. For an observable A, already averaged
over the ensemble, we define:

\bar A = \text{average over disorder} = \int dJ\, p(J)\,\langle A\rangle_J    (11.2)

\langle A\rangle_J = \frac{1}{Z_N(J)}\int \mathcal{D}\sigma\, A(\sigma)\, e^{-\beta H(\sigma; J)}    (11.3)

Again, the first equation is the disorder-averaged A while the second is the
ensemble average of A given a realization of J. Moving to the free energy:

F_N(J) = -\frac{1}{\beta N}\log Z_N(J)    (11.4)

we ask ourselves: how does F change with J? The answer lies in the concept
of self-averaging quantities! We need to compute the relative variance as
N → ∞ for a quantity A: if it goes to zero, A is self-averaging:

\frac{\overline{A^2} - (\bar A)^2}{(\bar A)^2} \to 0 \quad\text{as } N \to \infty

The free energy is a self-averaging quantity (because of the log dependence):

F_N(J) = -\frac{1}{\beta N}\log Z_N(J) \to F_\infty(\beta)    (11.5)

while we don't know about the partition function (exponential dependence
on the disorder). Because of the presence of two averages, we can define two kinds
of free energy, the annealed F_A (usually simpler to compute but gives some
wrong results) and the quenched F_Q (harder but realistic):

F_Q = -\frac{1}{\beta N}\int dJ\, p(J)\log\int \mathcal{D}\sigma\, e^{-\beta H(\sigma; J)} = -\frac{1}{\beta N}\int dJ\, p(J)\log Z_N(J) = -\frac{1}{\beta N}\overline{\log Z_N}    (11.6)

F_A = -\frac{1}{\beta N}\log\int dJ\, p(J)\int \mathcal{D}\sigma\, e^{-\beta H(\sigma; J)} = -\frac{1}{\beta N}\log\overline{Z_N}    (11.7)

In the annealed version we first average the partition function over the disorder
and then take the logarithm; in the quenched version it is vice versa.
11.2 Replica trick
There exists a famous trick to compute the quenched version of the free
energy, called the replica trick; it consists in computing the disorder-averaged
partition function for n replicas of the system under exam and taking a
proper limit:

\overline{\log Z} = \lim_{n\to 0}\frac{\overline{Z^n} - 1}{n}    (11.8)

(indeed \overline{Z^n} = \overline{e^{n\log Z}} \approx 1 + n\,\overline{\log Z} for small n). The n-replica partition function is:

\overline{Z^n} = \overline{\int \mathcal{D}\sigma_1\dots\mathcal{D}\sigma_n\, e^{-\beta H(\sigma_1; J)}\dots e^{-\beta H(\sigma_n; J)}}    (11.9)

An average in this framework is done in the following way:

\langle A\rangle = \lim_{n\to 0}\overline{\int \mathcal{D}\sigma_1\dots\mathcal{D}\sigma_n\, A(\sigma_1)\, e^{-\beta H(\sigma_1; J)}\dots e^{-\beta H(\sigma_n; J)}}    (11.10)

(we assume that the limit doesn't depend on which replica we average over).

11.3 Pure states


At low temperature T, for N → ∞, one can have ergodicity breaking: the
system in equilibrium explores only a part of phase space. This implies that
the Gibbs measure splits into parts, called pure states. If we label a pure
state by α we have:

\langle A\rangle = \sum_\alpha w_\alpha\langle A\rangle_\alpha    (11.11)

with \langle\cdot\rangle_\alpha the average over a pure state and:

\sum_\alpha w_\alpha = 1, \qquad w_\alpha \equiv \frac{Z_\alpha}{Z} = \frac{\int_\alpha \mathcal{D}\sigma\, e^{-\beta H(\sigma)}}{Z}    (11.12)

For the Ising model at T → 0 we get two pure states, α = + and α = −:

w_\pm = 1/2, \qquad \langle\sigma\rangle_+ > 0, \qquad \langle\sigma\rangle_- < 0, \qquad \langle\sigma\rangle = \sum_\alpha w_\alpha\langle\sigma\rangle_\alpha = 0

11.4 Clustering
Take two points i, j such that |i − j| → ∞: we expect that \langle\sigma_i\sigma_j\rangle \to \langle\sigma_i\rangle\langle\sigma_j\rangle
only in pure states. Indeed for the Ising model at T < T_c:

\langle\sigma_i\sigma_j\rangle = \frac{1}{2}\langle\sigma_i\sigma_j\rangle_+ + \frac{1}{2}\langle\sigma_i\sigma_j\rangle_- \approx \frac{1}{2}\langle\sigma\rangle_+^2 + \frac{1}{2}\langle\sigma\rangle_-^2 \neq 0    (11.13)

Here it is easy to study a positive or negative domain, so the magnetization
is a good order parameter. For spin glasses this is not possible: in the end
the magnetization is not so meaningful, since the interaction is either ferro-
magnetic or antiferromagnetic. Moreover, we have to deal with ergodicity
breaking against the dimensionality of the system; in the mean field case d → ∞
we find very different results from d < ∞: in the former case one finds
metastable states that the system cannot possibly leave, since they have infinitely
high barriers as N → ∞ (a metastable state is a local minimum).

11.5 Overlaps
In this section we introduce the concept of an overlap, as a measure of similar-
ity between states. Take a frozen (= quenched) disorder J; at low temperature
we find configurations with definite spin values and no global
magnetization is generated, so we define a new order parameter, called the Edwards-
Anderson parameter:

q_{EA} \equiv \frac{1}{N}\sum_{i=1}^N \langle\sigma_i\rangle^2    (11.14)

Notice that q_{EA} is non-zero whenever the local magnetizations are non-zero,
even if the global magnetization vanishes. For two configurations σ, τ we
introduce the mutual overlap:

q_{\sigma\tau} \equiv \frac{1}{N}\sum_{i=1}^N \sigma_i\tau_i    (11.15)

This quantity compares two configurations: for the Ising model at low tem-
perature we find ±1 for same or opposite states, otherwise close to zero.
The self-overlap is:

q_{\sigma\sigma} = \frac{1}{N}\sum_{i=1}^N \sigma_i\sigma_i    (11.16)

which is 1 for the Ising model. Instead, for a system with pure states α, γ (i.e.
a split Gibbs measure), the overlap is:

q_{\alpha\gamma} = \frac{1}{N}\sum_{i=1}^N \langle\sigma_i\rangle_\alpha\langle\sigma_i\rangle_\gamma    (11.17)
 = \frac{1}{N}\sum_{i=1}^N \frac{1}{Z_\alpha}\int_\alpha \mathcal{D}\sigma\,\sigma_i\, e^{-\beta H(\sigma)}\, \frac{1}{Z_\gamma}\int_\gamma \mathcal{D}\tau\,\tau_i\, e^{-\beta H(\tau)}    (11.18)
 = \frac{1}{Z_\alpha Z_\gamma}\int_\alpha \mathcal{D}\sigma\int_\gamma \mathcal{D}\tau\, e^{-\beta H(\sigma)}\, e^{-\beta H(\tau)}\, q_{\sigma\tau}    (11.19)

We can see it as an average of overlaps of configurations in α and γ. The self-
overlap in this case is:

q_{\alpha\alpha} = \frac{1}{N}\sum_{i=1}^N \langle\sigma_i\rangle_\alpha^2    (11.20)

This quantity is in [0, 1] and measures the variability of the system after a
change in spin configuration. We find a number near 1 if α is made of just
one configuration (or a small number of them), near 0 if α is a big ensemble of
configurations.

11.6 Overlap distribution
In spin glasses we find a huge number of low-temperature pure states; it
is useful to have a distribution of the overlap between replicas:

p(q) = \frac{1}{Z^2}\int \mathcal{D}\sigma\int \mathcal{D}\tau\, e^{-\beta H(\sigma)}\, e^{-\beta H(\tau)}\, \delta(q - q_{\sigma\tau})
 = \sum_{\alpha\gamma} w_\alpha w_\gamma\, \frac{1}{Z_\alpha}\int_\alpha \mathcal{D}\sigma\, \frac{1}{Z_\gamma}\int_\gamma \mathcal{D}\tau\, e^{-\beta H(\sigma)}\, e^{-\beta H(\tau)}\, \delta(q - q_{\sigma\tau})
 = \sum_{\alpha\gamma} w_\alpha w_\gamma\, \delta(q - q_{\alpha\gamma})

For the Ising model we have four possible realizations of q that are equally dis-
tributed (w_+ = w_- = 1/2):

q_{++} = \frac{1}{N}\sum_{i=1}^N \langle\sigma_i\rangle_+^2 = \frac{1}{N}\sum_{i=1}^N m_i^2 = m^2

q_{--} = m^2, \qquad q_{+-} = q_{-+} = -m^2

p(q) = \frac{1}{2}\delta(q - m^2) + \frac{1}{2}\delta(q + m^2)

11.7 p-Spin spherical model


As a spin glass model we define the p-spin spherical model:

H(\sigma) = -\sum_{i_1 > \dots > i_p = 1} J_{i_1, \dots, i_p}\,\sigma_{i_1}\dots\sigma_{i_p}, \qquad p \geq 3    (11.21)

Spins are \sigma_i \in \mathbb{R}. To keep the energy finite one introduces a spherical con-
straint:

\sum_{i=1}^N \sigma_i^2 = N

Notice that q_{\sigma\sigma} = 1. The p-body interaction J runs over all the possible
combinations of p spins: this is a mean field model. Moreover, J is taken
from a Gaussian distribution:

p(J) \propto \exp\Big( -\frac{1}{2}\frac{J^2}{J_v^2} \Big)

with J_v^2 \equiv \frac{p!}{2N^{p-1}}: this gives H \propto N, as needed:

p(J) \propto \exp\Big( -\frac{J^2 N^{p-1}}{p!} \Big)
We now try to solve the system via the so-called replica symmetric approach.
We start with the annealed calculation; we disregard any finite normal-
ization. As a notation we set p = 3, but everything works for p > 3. Remember:

F_A = -\frac{1}{\beta N}\log\bar Z_N

The partition function is:

\bar Z = \int \mathcal{D}\sigma \int \prod_{i<j<k} dJ_{ijk}\, \exp\Big( -J_{ijk}^2\frac{N^{p-1}}{p!} + J_{ijk}\,\beta\,\sigma_i\sigma_j\sigma_k \Big)    (11.22)
 = \int \mathcal{D}\sigma\, \exp\Big( \frac{\beta^2}{4N^{p-1}}\Big(\sum_i\sigma_i^2\Big)^p \Big) = \Omega(N)\,\exp\Big( \frac{N\beta^2}{4} \Big)    (11.23)

where \Omega(N) is the surface of the N-dimensional sphere; we used:

\sum_{i<j<k}^N p! \approx \sum_{ijk}^N    (11.24)

In the end we obtain:

F_A = -\frac{\beta}{4} - \frac{1}{\beta}S_\infty    (11.25)

with S_\infty = \frac{\log\Omega}{N}. This annealed free energy is a good approximation for the
paramagnetic phase, and by Jensen's inequality F_A \leq F_Q. For the quenched
version we need to compute \overline{Z^n}, i.e. use the replica trick. Notation: \sigma_i^a is
the spin of the a-th replica at site i.
\overline{Z^n} = \int \prod_a \mathcal{D}\sigma_i^a \int \prod_{ijk} dJ_{ijk}\, \exp\Big( -J_{ijk}^2\frac{N^{p-1}}{p!} + J_{ijk}\,\beta\sum_{a=1}^n \sigma_i^a\sigma_j^a\sigma_k^a \Big)    (11.26)
 = \int \prod_a \mathcal{D}\sigma_i^a\, \exp\Big( \frac{\beta^2 p!}{4N^{p-1}}\sum_{a,b}\sum_{i<j<k}\sigma_i^a\sigma_j^a\sigma_k^a\,\sigma_i^b\sigma_j^b\sigma_k^b \Big)    (11.27)
 = \int \prod_a \mathcal{D}\sigma_i^a\, \exp\Big( \frac{\beta^2}{4N^{p-1}}\sum_{a,b}\Big( \sum_{i=1}^N \sigma_i^a\sigma_i^b \Big)^p \Big)    (11.28)

We started with a set of coupled spins and uncoupled replicas and we ended
up, after averaging over the disorder, with decoupled sites and coupled
replicas (thanks to the mean field model). Notice the appearance of the overlap
between two replicas, Q_{ab} = \frac{1}{N}\sum_{i=1}^N \sigma_i^a\sigma_i^b. We introduce:

1 = \int dQ_{ab}\, \delta\Big( N Q_{ab} - \sum_i \sigma_i^a\sigma_i^b \Big)    (11.29)

Now we use the exponential representation of the delta function

\delta(f(q)) = \int \frac{dx}{2\pi}\, e^{ixf(q)} = \int \frac{dx}{2\pi}\, e^{ixf(q) + \lambda_0 f(q)} = \int_{\lambda_0 - i\infty}^{\lambda_0 + i\infty} \frac{d\lambda}{2\pi}\, e^{\lambda f(q)}    (11.30)

to get:

\overline{Z^n} = \int \mathcal{D}Q_{ab}\,\mathcal{D}\lambda_{ab}\, \exp[-N S(Q, \lambda)]    (11.31)

S(Q, \lambda) = -\frac{\beta^2}{4}\sum_{a,b} Q_{ab}^p - \sum_{a,b}\lambda_{ab}Q_{ab} + \frac{1}{2}\log\det(2\lambda_{ab})    (11.32)

We can perform the saddle point approximation since N → ∞; however two
problems arise:

• S(Q, λ) doesn't depend on n, so we need to take the thermodynamic limit
first (risky)

• the number of independent pairs for Q_{ab}, n(n − 1)/2, becomes negative as
n → 0

Let's proceed; we use the equality:

\frac{\partial}{\partial M_{ab}}\log\det M = (M^{-1})_{ab}

with M_{ab} = 2\lambda_{ab}. We must have:

\frac{\partial S(Q, \lambda)}{\partial\lambda_{ab}} = -Q_{ab} + (2\lambda)^{-1}_{ab} = 0    (11.33)

Q^{-1}_{ab} = 2\lambda_{ab}    (11.34)

We get for the saddle point free energy:

F = \lim_{n\to 0} -\frac{1}{2\beta n}\Big[ \frac{\beta^2}{2}\sum_{a,b} Q_{ab}^p + \log\det Q_{ab} \Big]    (11.35)

and for the saddle point equation (stability condition):

0 = \frac{\partial F}{\partial Q_{ab}} = \frac{\beta^2 p}{2}Q_{ab}^{p-1} + Q^{-1}_{ab}    (11.36)

Now we need to make an assumption on Q_{ab}.

11.8 Replica Symmetric Solution


The first assumption we make on Q_{ab} is simple; the overlap of a replica with
itself is 1 and the one between different replicas doesn't depend on the
replicas:

Q_{ab} = q_0 + (1 - q_0)\,\delta_{ab}    (11.37)

Q^{-1}_{ab} = \frac{\delta_{ab}}{1 - q_0} - \frac{q_0}{(1 - q_0)[1 + (n - 1)q_0]}    (11.38)

As n → 0 the saddle point equation 11.36 reads:

\frac{\beta^2 p}{2}q_0^{p-1} - \frac{q_0}{(1 - q_0)^2} = 0    (11.39)

The first solution is q_0 = 0, which gives:

F = \lim_{n\to 0} -\frac{1}{2\beta n}\Big[ \frac{\beta^2}{2}\underbrace{\sum_{a,b} Q_{ab}^p}_{=n} + \underbrace{\log\det Q_{ab}}_{=0} \Big] = -\frac{\beta}{4}    (11.40)

which is the annealed solution. For q_0 \neq 0:

q_0^{p-2}(1 - q_0)^2 = \frac{2}{p}T^2    (11.41)

However one can check that one solution of this equation decreases with T,
which is unphysical, and the other one isn't stable (negative eigenvalues of
\partial^2 F). We need to give more structure to Q_{ab}!
11.9 Replica Trick and Physics
11.10 Replica Symmetry Breaking
We make the structure of Q_{ab} a bit more complicated; we break the permu-
tation symmetry of the replicas by saying that:

Q_{aa} = 1
Q_{ab} = q_0 \quad\text{for different pure states}
Q_{ab} = q_1 \quad\text{for the same pure state}    (11.42)

in a particular way: Q_{ab} is a block circulant matrix with m × m blocks
A_{ij} = \delta_{ij} + (1 - \delta_{ij})q_1 on the diagonal and B_{ij} = q_0 elsewhere:

Q = \begin{pmatrix} A & B & \cdots & B \\ B & A & \ddots & \vdots \\ \vdots & \ddots & \ddots & B \\ B & \cdots & B & A \end{pmatrix}    (11.43)

so that each diagonal block has 1 on its diagonal and q_1 off-diagonal, while all
entries outside the diagonal blocks equal q_0 (the case depicted in the original
notes has m = 3). In each row we have f = n/m blocks.
It is important to notice that \sum_a Q_{ab} = 1 + (m - 1)q_1 + (n - m)q_0\ \forall b,
which ensures that all replicas are equivalent. One can compute (look in the
appendix) the eigenvalues of Q_{ab}:

\Lambda_1 = 1 - q_1, \qquad d_1 = n - \frac{n}{m}
\Lambda_2 = m(q_1 - q_0) + (1 - q_1), \qquad d_2 = \frac{n}{m} - 1
\Lambda_3 = nq_0 + m(q_1 - q_0) + (1 - q_1), \qquad d_3 = 1
and can show that:

\frac{1}{n}\sum_{ab} Q_{ab}^p = \sum_a Q_{ab}^p = 1 + (m - 1)q_1^p + (n - m)q_0^p    (11.44)

From these results the free energy as n → 0 is:

-2\beta F = \frac{\beta^2}{2}\big[ 1 + (m - 1)q_1^p - mq_0^p \big] + \frac{m - 1}{m}\log(1 - q_1)
 + \frac{1}{m}\log(m(q_1 - q_0) + 1 - q_1) + \frac{q_0}{m(q_1 - q_0) + 1 - q_1}    (11.45)

For further insights and results we suggest the paper "Spin-glass theory
for pedestrians" by Castellani and Cavagna, from which these notes on
spin glass theory are taken.

11.11 Random Field Ising Model


We study a standard Ising model (spins S_i \in \{-1, 1\}, i = 1\dots N) with
disorder in random magnetic fields h_i; the ferromagnetic coupling is not
random and the interaction is not limited to nearest neighbors:

H = -\frac{J}{N}\sum_{i,j} S_i S_j - \sum_i h_i S_i    (11.46)

\bar F_N = F_Q = -k_B T\int d^N h\, p(h)\,\log Z(T, h, J)    (11.47)

p(h_i) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-h_i^2/2\sigma^2}    (11.48)

Use the replica trick:

F = -k_B T\,\frac{\partial}{\partial n}\overline{Z^{(n)}}\Big|_{n=0}    (11.49)

Use the index a = 1\dots n for the replicas:

\overline{Z^n} = \overline{\sum_{\{S_i^a\}} \exp\Big( \frac{\beta J}{N}\sum_a\sum_{ij} S_i^a S_j^a \Big)\exp\Big( \beta\sum_i\sum_a S_i^a h_i \Big)}    (11.50)
The overline stands for the average over the disorder, which gives:

\overline{Z^n} = \sum_{\{S_i^a\}} \exp\Big( \frac{\beta J}{N}\sum_a\sum_{ij} S_i^a S_j^a + \frac{\beta^2\sigma^2}{2}\sum_i\Big(\sum_a S_i^a\Big)^2 \Big)    (11.51)

We arrive at interacting replicas. Use the Hubbard-Stratonovich transforma-
tion:

e^{\frac{b}{2}z_a^2} = \frac{1}{\sqrt{2\pi b}}\int e^{-\frac{x_a^2}{2b} + z_a x_a}\, dx_a, \qquad z_a = \sqrt{2J\beta}\sum_i S_i^a, \qquad b = \frac{1}{N}

\overline{Z^{(n)}} = \Big(\frac{N}{2\pi}\Big)^{n/2}\sum_{\{S_i^a\}}\int \prod_a dx_a\, \exp\Big( -\frac{N}{2}\sum_a x_a^2 + \sqrt{2J\beta}\sum_a\sum_i S_i^a x_a + \frac{\beta^2\sigma^2}{2}\sum_i\Big(\sum_a S_i^a\Big)^2 \Big)    (11.52)
 = \Big(\frac{N}{2\pi}\Big)^{n/2}\int \prod_a dx_a\, \exp\Big[ N\Big( -\frac{1}{2}\sum_a x_a^2 + \log Z_1(x_a) \Big) \Big]    (11.53)

where the sum over sites factorizes into N identical terms, with:

Z_1(x_a) = \sum_{\{S^a\}} \exp\Big( \sqrt{2\beta J}\sum_a x_a S^a + \frac{\beta^2\sigma^2}{2}\Big(\sum_a S^a\Big)^2 \Big)    (11.54)

We now employ the saddle point approximation; assume for now x_a = x (a
replica symmetric ansatz). The saddle point x_m satisfies:

\frac{d}{dx}\Big( -\frac{nx^2}{2} + \log Z_1(x) \Big) = 0    (11.55)

n x_m = \sqrt{2\beta J}\,\frac{\sum_{\{S^a\}}\sum_a S^a\, e^{A[S, x_m]}}{\sum_{\{S^a\}} e^{A[S, x_m]}}    (11.56)

with:

A[S, x] = \sqrt{2\beta J}\, x\sum_a S^a + \frac{\beta^2\sigma^2}{2}\Big(\sum_a S^a\Big)^2    (11.57)

At the end x_m is, up to a factor, the replica average of the magnetization at a given
site:

x_m = \sqrt{2\beta J}\,\Big\langle \frac{1}{n}\sum_a S^a \Big\rangle = \sqrt{2\beta J}\, m    (11.58)
Rewrite everything using m:

\overline{Z^{(n)}} \propto e^{N[-n\beta J m^2 + \log Z_1(m)]}    (11.59)

Z_1(m) = \sum_{\{S^a\}} e^{A[S, m]}    (11.60)

A[S, m] = 2\beta J m\sum_a S^a + \frac{\beta^2\sigma^2}{2}\Big(\sum_a S^a\Big)^2    (11.61)

This lets us derive a self-consistency equation for m:

m = \frac{1}{Z_1(m)}\sum_{\{S^a\}}\Big( \frac{1}{n}\sum_a S^a \Big) e^{A[S, m]}    (11.62)

Use Hubbard-Stratonovich on the \big(\sum_a S^a\big)^2 term:

e^{A[S, m]} = \int \frac{ds}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2 + (2\beta Jm + \beta\sigma s)\sum_a S^a}    (11.63)

The partition function becomes:

Z_1(m) = \int \frac{ds}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2 + n\log 2\cosh(2\beta Jm + \beta\sigma s)}    (11.64)

In the limit n → 0:

m = \int \frac{ds}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\tanh(2\beta Jm + \beta\sigma s)    (11.65)

i.e.

m = \overline{\tanh(\beta(2Jm + h))}, \qquad h \sim \sigma\,\mathcal{N}(0, 1)    (11.66-11.67)

where the overline, again, stands for an average over the disorder.
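The self-consistency equation (11.65) is easy to solve numerically with Gauss-Hermite quadrature. A minimal sketch, with illustrative β, J and a few disorder strengths σ:

    import numpy as np

    beta, J = 1.0, 1.0
    nodes, weights = np.polynomial.hermite_e.hermegauss(60)  # weight e^{-s^2/2}
    weights = weights / np.sqrt(2 * np.pi)                   # normalize

    def solve_m(sigma, m0=0.9):
        m = m0
        for _ in range(2000):   # fixed-point iteration of (11.65)
            m = np.sum(weights * np.tanh(beta * (2 * J * m + sigma * nodes)))
        return m

    for sigma in [0.5, 1.5, 3.0]:
        print(sigma, solve_m(sigma))   # strong disorder destroys the magnetization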

Chapter 12

Levy flights

12.1 Sub and super diffusion


Recall the properties of the fundamental solution P(x, t) of the diffusion
equation for x_0 = 0 and t_0 = 0:

P(x, t) = \frac{1}{\sqrt{4\pi Dt}}\, e^{-\frac{x^2}{4Dt}}    (12.1)

It's obvious, P(x, t) being a Gaussian distribution, that:

\langle x^2(t)\rangle = 2Dt    (12.2)

Now let's try to generalize and consider a number ζ ≠ 1; introduce a particle
undergoing a diffusion process which is different from standard Brownian
motion, that is, for some D_\zeta:

\langle x^2(t)\rangle = 2D_\zeta t^\zeta    (12.3)

If 0 < ζ < 1 we talk about subdiffusion, which is typical of the transport of
charge carriers in semiconductors and of the diffusion of chemicals and monomers
of polymers; in this case the particle tends to stay in the same state: indeed the
waiting time distribution is proportional to t^{-1-\alpha} for 0 < α < 1, which has no
finite first moment.
If ζ > 1 we talk about superdiffusion or Levy flights; the distribution of the
jump length is proportional to |x|^{-1-\mu} for 0 < µ < 2 and we can associate a
generalized diffusion equation:

\partial_t P(x, t) = D_\mu\,\frac{\partial^\mu}{\partial|x|^\mu} P(x, t)    (12.4)

which is understood in Fourier space:

\partial_t\tilde P(k, t) = -D_\mu|k|^\mu\,\tilde P(k, t)    (12.5)

having the solution, for initial condition P(x, 0) = \rho(x):

\tilde P(k, t) = \tilde\rho(k)\, e^{-D_\mu|k|^\mu t}    (12.6)

12.2 Cauchy random walk


The Cauchy random walk is a Levy flight with µ = 1:

\tilde P(k, t) = \tilde\rho(k)\, e^{-D_1|k|t}    (12.7)

Use the initial condition \rho(x) = \delta(x) and set x_0(t) = D_1 t:

P(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-x_0(t)|k| + ikx}    (12.8)
 = \frac{1}{\pi}\int_0^{\infty} dk\, e^{-x_0(t)k}\cos kx = \frac{1}{\pi x_0}\,\frac{1}{1 + \big(\frac{x}{x_0(t)}\big)^2}    (12.9)
We notice that P(x, t) is the Cauchy distribution, which shows a "strange"
property, i.e. no variance is well defined: indeed the Cauchy distribution
is a fat-tailed one, hence no central limit theorem can be applied (i.e. the
average of many samples from a Cauchy walk doesn't converge to a Gaussian
distribution). For further insight, take the initial condition:

\rho(x) = \begin{cases} \frac{1}{2x_0} & |x| \leq x_0 \\ 0 & \text{otherwise} \end{cases}    (12.10)

\tilde\rho(k) = \frac{1}{2x_0}\int_{-x_0}^{x_0} dx\, e^{-ikx} = \frac{1}{-2ikx_0}\big( e^{-ikx_0} - e^{+ikx_0} \big) = \frac{\sin kx_0}{kx_0}    (12.11)

we get:

P(x, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, \frac{\sin kx_0}{kx_0}\, e^{-D_1 t|k| + ikx}    (12.12)
 = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, \frac{1}{2x_0}\int_{-x_0}^{x_0} dy\, e^{-iky}\, e^{-D_1 t|k| + ikx}    (12.13)
 = \frac{1}{2x_0}\int_{-x_0}^{x_0} dy\, \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-D_1 t|k| + ik(x - y)}    (12.14)
 = \frac{1}{2x_0}\int_{-x_0}^{x_0} dy\, \frac{1}{\pi D_1 t}\,\frac{1}{1 + \big(\frac{x - y}{D_1 t}\big)^2}    (12.15)
 = \frac{1}{2\pi x_0}\Big[ -\mathrm{atan}\Big( \frac{x - y}{D_1 t} \Big) \Big]_{y=-x_0}^{y=x_0}    (12.16)
 = \frac{1}{2\pi x_0}\Big[ \mathrm{atan}\Big( \frac{x + x_0}{D_1 t} \Big) - \mathrm{atan}\Big( \frac{x - x_0}{D_1 t} \Big) \Big]    (12.17)

Another way to solve the problem is via the Green's function formalism, the
Green's function being G(x, t) = \frac{1}{\pi D_1 t}\,\frac{1}{1 + (x/D_1 t)^2}.

(Try, by applying de l'Hôpital, to discover how P(x, t) behaves in the x_0 → 0
limit.)
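The failure of the central limit theorem for Cauchy jumps is easy to see by simulation. A minimal sketch (illustrative sample sizes): the running sample mean of Cauchy-distributed jumps does not settle down as n grows, unlike for finite-variance distributions:

    import numpy as np

    rng = np.random.default_rng(2)
    jumps = rng.standard_cauchy(size=100000)

    for n in [100, 1000, 10000, 100000]:
        print(n, jumps[:n].mean())   # keeps fluctuating wildly with n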

Chapter 13

Instantons

13.1 Classical Instantons


Consider a Langevin equation in the low-noise limit, i.e.:

\dot x(t) = F(x) + \xi(t), \qquad \langle\xi(t)\xi(t')\rangle = \epsilon\,\delta(t - t')    (13.1)

The infinitesimal propagator is:

P(x', t + \Delta t|x, t) = \frac{1}{\sqrt{2\pi\epsilon\Delta t}}\, \exp\Big( -\frac{\Delta t}{2\epsilon}\Big( \frac{x' - x}{\Delta t} - F(x) \Big)^2 \Big)    (13.2)

The propagator can be expressed via an action S[x] = \int_{t_i}^{t_f} L(x, \dot x)\, dt with
L(x, \dot x) = \frac{1}{2}(\dot x - F(x))^2:

P(x_f, t_f|x_i, t_i) = \int \mathcal{D}[x]\, e^{-\frac{S[x]}{\epsilon}}    (13.3)

In the limit ε → 0 we use the saddle point approximation and we look for
trajectories x^* that extremize S[x], i.e. we have to solve the Euler-Lagrange
equation:

\frac{d}{dt}\frac{\partial L}{\partial\dot x} = \frac{\partial L}{\partial x}    (13.4)

\ddot x - \frac{dF(x)}{dx}\dot x = -\frac{dF(x)}{dx}(\dot x - F(x))    (13.5)

\ddot x = \frac{d}{dx}\frac{F(x)^2}{2} = \frac{d}{dx}(-V_{eff}(x))    (13.6)

So, given x^*(t) with x^*(t_i) = x_i and x^*(t_f) = x_f, we get P(x_f, t_f|x_i, t_i) \approx
e^{-S[x^*]/\epsilon} where, indeed, S[x^*] = \min_{x(t_i) = x_i,\ x(t_f) = x_f} S[x]. The solution x^* is
called an instanton.

13.2 Instanton Examples


Now, for an example, set U(x) = \frac{\gamma}{2}x^2 (Ornstein-Uhlenbeck process), from
which we get F(x) = -\gamma x and \frac{dF}{dx} = -\gamma, so the instanton equation is:

\ddot x = \frac{d}{dx}\Big( \frac{\gamma^2 x^2}{2} \Big) = \gamma^2 x    (13.7)

The solution is x(t) = x_0 e^{\gamma t}.
The second example is U(x) = \frac{a}{2}x^2 + \frac{b}{4}x^4, for which F(x) = -ax - bx^3. The
effective potential is V_{eff} = -\frac{a^2x^2 + 2abx^4 + b^2x^6}{2} and the instanton equation is:

\ddot x = a^2 x + 4abx^3 + 3b^2 x^5    (13.8)

This corresponds to a particle with energy E in a potential V_{eff}(x) = -\frac{F(x)^2}{2}:

E = \frac{\dot x^2}{2} - \frac{F(x)^2}{2}    (13.9)

from which:

t - t_0 = \pm\int_{x_0}^{x} \frac{ds}{\sqrt{2E + F^2(s)}}    (13.10)

Set E = 0. Suppose a < 0 and b > 0; hence the force is positive for
0 < x < \sqrt{-a/b} or x < -\sqrt{-a/b}. Pick x_0 > 0 and x \leq \sqrt{-a/b}:

b\,(t - t_0) = \pm\int_{x_0}^{x} \frac{ds}{\frac{a}{b}s + s^3}    (13.11)
 = \pm\frac{b}{2a}\Big[ \log\frac{s^2}{\frac{a}{b} + s^2} \Big]_{x_0}^{x}    (13.12)

The last model we analyze is the sine-Gordon model:

V_{eff}(x) = V_0(\cos x - 1)    (13.13)

\ddot x = -\frac{dV_{eff}(x)}{dx} = V_0\sin x    (13.14)

The energy is:

E = \frac{\dot x^2}{2} + V_0(\cos x - 1)    (13.15)

Inverting this equation we get:

\dot x = \pm\sqrt{2}\,\sqrt{E + V_0(1 - \cos x)}    (13.16)

t - t_0 = \pm\int_{x_0}^{x} \frac{d\alpha}{\sqrt{2}\,\sqrt{E + V_0(1 - \cos\alpha)}}    (13.17)

For E = 0, using 1 - \cos\alpha = 2\sin^2\frac{\alpha}{2}:

t - t_0 = \pm\int_{x_0}^{x} \frac{d\alpha}{\sqrt{2V_0(1 - \cos\alpha)}}    (13.18)
 = \pm\frac{1}{\sqrt{V_0}}\int_{x_0}^{x} \frac{d\alpha}{2\sin\frac{\alpha}{2}}    (13.19)
 = \pm\frac{1}{\sqrt{V_0}}\Big[ \log\tan\frac{\alpha}{4} \Big]_{x_0}^{x}    (13.20)

In the end we get:

t - t_0 = \pm\frac{1}{\sqrt{V_0}}\log\frac{\tan(x/4)}{\tan(x_0/4)}    (13.21)

\tan(x/4) = \tan(x_0/4)\, e^{\pm\sqrt{V_0}(t - t_0)}    (13.22)

x = 4\,\mathrm{atan}\big( e^{\pm\sqrt{V_0}\, t + c} \big)    (13.23)
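The kink (13.23) can be checked against a direct integration of ẍ = V₀ sin x. A minimal sketch, with illustrative V₀ and initial data taken on the analytic solution:

    import numpy as np
    from scipy.integrate import solve_ivp

    V0 = 2.0
    kink = lambda t: 4 * np.arctan(np.exp(np.sqrt(V0) * t))
    dkink = lambda t: 2 * np.sqrt(V0) / np.cosh(np.sqrt(V0) * t)  # d(kink)/dt

    sol = solve_ivp(lambda t, y: [y[1], V0 * np.sin(y[0])],
                    (-5, 5), [kink(-5), dkink(-5)],
                    dense_output=True, rtol=1e-10)

    for t in [-2.0, 0.0, 2.0]:
        print(t, sol.sol(t)[0], kink(t))   # numeric vs analytic kink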

13.3 Quantum and Statistical Mechanics Instan-


tons
Consider a field Lagrangian density:

\mathcal{L} = \frac{1}{2}(\partial_x\phi)^2 - V_0(\phi)    (13.24)

with V_0(\phi) = \frac{m^2}{2}\phi^2 + \frac{g}{4}(\phi^2)^2. If g = 0 we can compute the propagator of the
theory exactly; otherwise we apply the saddle point approximation by solving
the Euler-Lagrange equation:

\partial_x\frac{\partial\mathcal{L}}{\partial(\partial_x\phi)} = \frac{\partial\mathcal{L}}{\partial\phi}    (13.25)

\partial_x^2\phi = -m^2\phi - g\phi^3    (13.26)

The only stationary solution is \phi_0 = 0 (a minimum) if m^2 > 0; otherwise
\phi_0 = 0 is a maximum and we have two minima, \pm\phi_0 = \pm\sqrt{-\frac{m^2}{g}}. Let's stick
to the case \phi_0^2 = -\frac{m^2}{g}. We rewrite the potential as:

V_0(\phi) = \frac{m^2}{2}\phi^2 - \frac{m^2}{4\phi_0^2}\phi^4    (13.27)

Subtract from it the minimum value, i.e. V_c = V_0(\phi_0) = \frac{\phi_0^2 m^2}{4}. At the end we
get:

V = V_0(\phi) - V_c = \frac{m^2}{2}\phi^2 - \frac{m^2}{4\phi_0^2}\phi^4 - \frac{\phi_0^2 m^2}{4} = -\frac{m^2}{4\phi_0^2}(\phi - \phi_0)^2(\phi + \phi_0)^2    (13.28)
If we Wick rotate, i.e. t = ix, we arrive at a system with potential −V(φ).
We solve this system in the case E = 0:

x - x_0 = \pm\int_{\phi_0}^{\phi} \frac{d\tilde\phi}{\sqrt{-\frac{m^2}{4\phi_0^2}\,(\tilde\phi - \phi_0)^2(\tilde\phi + \phi_0)^2}}    (13.29)
 = \pm\frac{1}{m}\int_{\phi_0}^{\phi} \frac{d\tilde\phi}{\phi_0\big(1 - \frac{\tilde\phi^2}{\phi_0^2}\big)} = \pm\frac{1}{m}\tanh^{-1}\frac{\tilde\phi}{\phi_0}\Big|^{\phi}    (13.30)
 = \pm\frac{1}{m}\tanh^{-1}\frac{\phi}{\phi_0}    (13.31)

Inverting, the instantons are:

\phi = \pm\phi_0\tanh[m(x - x_0)]    (13.32)

Chapter 14

Field Approach to a Master


Equation - NOT RELATED TO
COURSE

In a previous chapter we developed a formalism to simplify the treatment
of some models based on master equations, in order to show the role of the
noise; here we do the opposite and "make our life harder", since we apply
the so-called Doi-Peliti formalism to put a master equation process in a field
theory setting.

14.1 Fock Space and Coherent states


Withut entering the details, a Fock Space is the sum of all the Hilbert spaces
useful for representing all the possible particle states: zero particle state,
one particle state and so on. In the context of Quantum Mechanics we have
symmetrized states for bosons while antisymmetric states for fermions while
in what we are about to develop we forget about QM and treat those states
as a pure formal way to express how many particles the system is dealing
with. A fock space, in QM, is populated by the action hermitian conjugate
creation and destruction operators, a† and a; for example:

|1i ∝ a† |0i (14.1)

100
In quantum mechanics one is used to having:

a|n\rangle = \sqrt{n}\,|n - 1\rangle    (14.2)

a^\dagger|n\rangle = \sqrt{n + 1}\,|n + 1\rangle    (14.3)

a^\dagger a|n\rangle = n|n\rangle    (14.4)

[a, a^\dagger] = 1    (14.5)

hence (a)^\dagger = a^\dagger. In our setting we forget about hermitian conjugation
and use:

a|n\rangle = n\,|n - 1\rangle    (14.6)

a^\dagger|n\rangle = |n + 1\rangle    (14.7)

a^\dagger a|n\rangle = n|n\rangle    (14.8)

[a, a^\dagger] = 1    (14.9)
Appendices

Appendix A

Characteristic Functions and


Central Limit Theorem

A.1 Characteristic Functions


Here we define the characteristic function for a random variable and prove
the central limit theorem for the sum of i.i.d. random variables. Let be X a
random variable. Its mean or first moment µ is defined as:

hXi = µ (A.1)

Its variance or central secondo moment σ 2 is:

h(X − hXi)2 i = σ 2 (A.2)

In general we have, for a function f (x), given that the distribution of X is


p(x): Z
hf i = dxf (x)p(x) (A.3)
Z
p(f ) = hδ(f − f (x))i = dxp(x)δ(f − f (x)) (A.4)
Z
hf i = dxf k (x)p(x)
k
(A.5)

If f (x) = x the expected values hxk i for k ≥ 1 ∈ N are called moments of


the random variable X. The moments are easily compute using the fourier

transform of the distribution of X, also known as the characteristic function
of X, which we denote by \varphi(k):

\varphi(k) = \langle e^{ikx}\rangle = \int dx\, e^{ikx}\, p(x)    (A.6)

Indeed, since e^{ikx} = 1 + ikx - \frac{1}{2}k^2x^2 + \dots = \sum_{n=0}^{\infty}\frac{i^n k^n x^n}{n!}, the series represen-
tation of \varphi(k) is:

\varphi(k) = \langle e^{ikx}\rangle = \sum_{n=0}^{\infty}\frac{i^n k^n}{n!}\langle x^n\rangle    (A.7)

from which:

\langle x^n\rangle = (-i)^n\,\frac{\partial^n}{\partial k^n}\varphi(k)\Big|_{k=0}    (A.8)

A.2 Central Limit Theorem


Consider n i.i.d. random variables x_1\dots x_n with mean µ and variance \sigma^2, and
call S_n = \sum_{i=1}^n x_i. From S_n define the random variable:

Y(x) \equiv \frac{S_n - n\mu}{\sqrt{n}\,\sigma}    (A.9)

It's easy to find that \langle Y\rangle = 0 and \mathrm{Var}(Y) = 1. We want to find the distri-
bution of Y in the limit n → ∞:

p(Y(x) = y) = \langle\delta(Y(x) - y)\rangle = \Big\langle \frac{1}{2\pi}\int_{\mathbb{R}} d\alpha\, e^{-i\alpha(Y(x) - y)} \Big\rangle    (A.10)
 = \frac{1}{2\pi}\int_{\mathbb{R}} d\alpha\, e^{i\alpha\left(y - \frac{\sqrt{n}\mu}{\sigma}\right)}\int \prod_{i=1}^n dx_i\, p(x_i)\, e^{\frac{i\alpha}{\sqrt{n}\sigma}\sum_{i=1}^n x_i}    (A.11)
 = \frac{1}{2\pi}\int_{\mathbb{R}} d\alpha\, e^{i\alpha\left(y - \frac{\sqrt{n}\mu}{\sigma}\right)}\Big( \int dx\, p(x)\, e^{\frac{i\alpha}{\sqrt{n}\sigma}x} \Big)^n    (A.12)

The quantity in parentheses is the characteristic function at k = \frac{\alpha}{\sqrt{n}\sigma}, so
for n → ∞:

\varphi\Big( k = \frac{\alpha}{\sqrt{n}\sigma} \Big) = 1 + i\mu\frac{\alpha}{\sqrt{n}\sigma} - \frac{\alpha^2}{2n\sigma^2}(\sigma^2 + \mu^2) + o(n^{-3/2})    (A.13)
 = e^{\frac{i\alpha\mu}{\sqrt{n}\sigma} - \frac{\alpha^2}{2n} + o(n^{-3/2})}    (A.14)

In the end:

p(Y(x) = y) = \frac{1}{2\pi}\int_{\mathbb{R}} d\alpha\, e^{i\alpha\left(y - \frac{\sqrt{n}\mu}{\sigma}\right)}\, e^{\frac{i\alpha\mu\sqrt{n}}{\sigma} - \frac{\alpha^2}{2} + o(n^{-1/2})}    (A.15)
 = \frac{1}{2\pi}\int_{\mathbb{R}} d\alpha\, e^{i\alpha y - \frac{\alpha^2}{2} + o(n^{-1/2})} \to \frac{1}{\sqrt{2\pi}}\, e^{-\frac{y^2}{2}}    (A.16)

Since \lim_{n\to\infty} Y_n(x) \sim \mathcal{N}(0, 1), from the properties of the normal distribution
we find S_n \to \mathcal{N}(n\mu, n\sigma^2) and \frac{1}{n}S_n \to \mathcal{N}(\mu, \frac{\sigma^2}{n}).
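An empirical check of (A.16) takes a few lines. A minimal sketch with exponential variables (µ = σ = 1; sample sizes illustrative): the standardized sum should have mean 0, variance 1 and vanishing skewness:

    import numpy as np

    rng = np.random.default_rng(3)
    n, samples = 2000, 50000
    x = rng.exponential(scale=1.0, size=(samples, n))   # mu = 1, sigma = 1

    Y = (x.sum(axis=1) - n) / np.sqrt(n)
    print("mean:", Y.mean(), " var:", Y.var(),
          " skew:", ((Y - Y.mean())**3).mean())   # ~0, ~1, ~0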

Appendix B

Circulant matrices

A circulant matrix is a special kind of matrix characterized by
the fact that, once the first row is fixed, every following row is a cyclically shifted
version of the previous one. In other words, fixing b_0, b_1, \dots, b_{m-1} (m is the
number of rows of the matrix):

\begin{pmatrix}
b_0 & b_1 & \cdots & \cdots & b_{m-1} \\
b_{m-1} & b_0 & b_1 & \cdots & b_{m-2} \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
\vdots & \cdots & \ddots & \ddots & \vdots \\
b_1 & b_2 & \cdots & b_{m-1} & b_0
\end{pmatrix}    (B.1)

Suppose we want to find the matrix eigenvalues. Start by introducing \rho_j =
e^{i2\pi j/m}, j = 0\dots m-1, the family of m-th roots of unity and, given a
ρ, the vector:

v = (1, \rho, \dots, \rho^{m-1})^T    (B.2)

The product of the first row with this vector is:

\lambda = b_0 + b_1\rho + b_2\rho^2 + \dots + b_{m-1}\rho^{m-1}    (B.3)

It's immediate to show that the product of the i-th matrix row with the vector
v is just λ multiplied by \rho^{i-1}, i.e. v is an eigenvector and λ an eigenvalue;
indeed the i-th row of the matrix, identifying b_m = b_0, is:

(b_{m-i+1}, b_{m-i+2}, \dots, b_{m-1}, b_0, b_1, \dots, b_{m-i})    (B.4)

and the product is:

b_{m-i+1} + b_{m-i+2}\rho + \dots + b_{m-1}\rho^{i-2} + b_0\rho^{i-1} + b_1\rho^i + \dots + b_{m-i}\rho^{m-1}    (B.5)

Just factor out \rho^{i-1} (that is, the i-th element of v):

\rho^{i-1}\big( b_{m-i+1}\rho^{1-i} + b_{m-i+2}\rho^{2-i} + \dots + b_{m-1}\rho^{-1} + b_0 + b_1\rho + \dots + b_{m-i}\rho^{m-i} \big)    (B.6)

and because \rho^m = 1 the sum in parentheses equals λ, which proves the thesis.
Another way to see this is by introducing an m-dimensional translation operator
T such that:

T_{hk} = \delta_{h, k+1}    (B.7)

having used the periodic condition m + 1 \equiv 1. Its action on v = (1, \rho, \dots, \rho^{m-1})^T
is:

T v = \rho^{-1} v = \rho^{m-1} v    (B.8)

since \rho^{-1} = \rho^{m-1}.
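Equivalently, the eigenvalues (B.3) are the discrete Fourier transform of the first row, which is easy to check numerically. A minimal sketch (random first row, m = 6; not from the notes):

    import numpy as np
    from scipy.linalg import circulant

    rng = np.random.default_rng(4)
    m = 6
    b = rng.normal(size=m)

    # scipy's circulant uses b as first column; transpose to get the
    # "rows shifted right" convention of (B.1)
    C = circulant(b).T
    eig = np.sort_complex(np.linalg.eigvals(C))
    dft = np.sort_complex(np.array(
        [sum(b[k] * np.exp(2j * np.pi * j * k / m) for k in range(m))
         for j in range(m)]))
    print(np.allclose(eig, dft))   # True: eigenvalues = DFT of first row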

Appendix C

Disordered systems

Consider an m × m matrix:

\begin{pmatrix}
\alpha & \beta & \cdots & \beta \\
\beta & \alpha & \cdots & \beta \\
\vdots & \vdots & \ddots & \vdots \\
\beta & \beta & \cdots & \alpha
\end{pmatrix}

Its generic element can be written as:

c_{ij} = \alpha\delta_{ij} + \beta(1 - \delta_{ij})    (C.1)

The eigenvalue equation reads:

\sum_{j=1}^m c_{ij}v_j = \sum_{j=1}^m \big( \alpha\delta_{ij}v_j + \beta(1 - \delta_{ij})v_j \big) = \lambda v_i    (C.2)

(\alpha - \beta)v_i + \beta\sum_{j=1}^m v_j = \lambda v_i    (C.3)

If \sum_{j=1}^m v_j = 0 (which can be arranged in m − 1 independent ways by choosing
different values for the v_j):

\lambda = \alpha - \beta

else sum over i (with \sum_{j=1}^m v_j \neq 0):

(\alpha - \beta)\sum_{i=1}^m v_i + \beta\sum_{i=1}^m\sum_{j=1}^m v_j = \lambda\sum_{i=1}^m v_i    (C.4)

(\alpha - \beta)\sum_{i=1}^m v_i + m\beta\sum_{i=1}^m v_i = \lambda\sum_{i=1}^m v_i    (C.5)

Dividing by \sum_{i=1}^m v_i:

\lambda = \alpha + (m - 1)\beta
So we have \lambda_1 = \alpha - \beta with \deg\lambda_1 = m - 1, and \lambda_2 = \alpha + (m - 1)\beta with
\deg\lambda_2 = 1. The replica symmetric matrix is a block circulant matrix whose
first block row is:

(A\ \underbrace{B\ \dots\ B}_{f - 1\text{ times}})

A_{ij} = \delta_{ij} + q_1(1 - \delta_{ij}), \qquad B_{ij} = q_0, \qquad f = n/m

where n is the number of replicas. The eigenvalues of A are:

\lambda_A^1 = 1 - q_1, \quad \deg_1 = m - 1; \qquad \lambda_A^2 = 1 + (m - 1)q_1, \quad \deg_2 = 1

The eigenvalues of B are:

\lambda_B^1 = 0, \quad \deg_1 = m - 1; \qquad \lambda_B^2 = mq_0, \quad \deg_2 = 1

Now, A and B commute and they are of the same circulant kind as the matrix
with first row (\alpha, \beta, \dots, \beta) solved above. The block matrix is the block analogue
of this kind of circulant matrix, and because A and B commute the eigenvalues
of the block matrix have the same form after the substitutions \alpha \to \lambda_A
and \beta \to \lambda_B; there is only one requirement: we can mix only eigenvalues
having the same set of eigenvectors. In the end we get

\lambda_A = 1 - q_1,\ \lambda_B = 0:    (C.6)
\lambda_1 = \lambda_A - \lambda_B = 1 - q_1    (C.7)
\lambda_1 = \lambda_A + (f - 1)\lambda_B = 1 - q_1    (C.8)

and

\lambda_A = 1 + (m - 1)q_1,\ \lambda_B = mq_0:    (C.9)
\lambda_2 = \lambda_A - \lambda_B = 1 + (m - 1)q_1 - mq_0 = 1 - q_1 + m(q_1 - q_0)    (C.10)
\lambda_3 = \lambda_A + (f - 1)\lambda_B = 1 - q_1 + m(q_1 - q_0) + nq_0    (C.11)

The eigenvalue \lambda_2 has degeneracy f - 1 = \frac{n}{m} - 1 and \lambda_3 has 1; since the matrix
is assuredly diagonalizable, the degeneracy of \lambda_1 is n - \frac{n}{m}.

Appendix D

Brownian Bridge

A (generalized) Brownian bridge is the random variable:

B(x, t) \sim W(x(t) = x \mid x(0) = 0,\ x(T) = x_T)

It expresses the position of a Brownian particle at an intermediate time,
having fixed the endpoints of the motion. Start from:

\langle\delta(x(t) - x)\,\delta(x(T) - x_T)\rangle_W = \int \frac{d\alpha}{2\pi}\, e^{-i\alpha x_T}\int \frac{d\beta}{2\pi}\, e^{-i\beta x}\, \langle e^{i\alpha x(T) + i\beta x(t)}\rangle_W

Discretize with time step \epsilon, so that \tau_i = i\epsilon, t = \tau_j = j\epsilon and T = N\epsilon:

\int \frac{d^N x}{(4\pi D\epsilon)^{N/2}}\, \exp\Big( -\frac{1}{4D\epsilon}\, xAx + Jx \Big)

with source J_j = i\beta, J_N = i\alpha (all other components zero) and

A = \begin{pmatrix}
2 & -1 & 0 & \cdots & \cdots & 0 \\
-1 & 2 & -1 & \ddots & & 0 \\
0 & -1 & 2 & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & -1 & 0 \\
\vdots & & \ddots & -1 & 2 & -1 \\
0 & 0 & \cdots & 0 & -1 & 1
\end{pmatrix}

\det A_N = 2\det A_{N-1} - \det A_{N-2}
Since:

\det A_2 = 1

we get:

\det A_N = 1 \quad \forall N

Define:

y = \frac{x}{\sqrt{2D\epsilon}}, \qquad J' = \sqrt{2D\epsilon}\, J

\int \frac{d^N x}{(4\pi D\epsilon)^{N/2}}\, \exp\Big( -\frac{1}{4D\epsilon}xAx + Jx \Big) = \int \frac{d^N y}{(2\pi)^{N/2}}\, \exp\Big( -\frac{1}{2}yAy + J'y \Big)
 = (\det A)^{-1/2}\, e^{\frac{1}{2}J'A^{-1}J'} = e^{D\epsilon\, J A^{-1} J}
 = e^{D\epsilon\left( -\beta^2 A^{-1}_{jj} - 2\alpha\beta A^{-1}_{jN} - \alpha^2 A^{-1}_{NN} \right)}

One discovers:

A^{-1}_{jN} = j, \qquad A^{-1}_{jj} = j, \qquad A^{-1}_{NN} = N

So:

e^{D\epsilon(-j\beta^2 - 2j\alpha\beta - N\alpha^2)} = e^{-Dt\beta^2 - 2Dt\alpha\beta - DT\alpha^2}
If we do the β integration first we obtain just the product of propagators:

\int \frac{d\alpha}{2\pi}\int \frac{d\beta}{2\pi}\, e^{-Dt\beta^2 - 2Dt\alpha\beta - DT\alpha^2 - i\alpha x_T - i\beta x}

\int \frac{d\beta}{2\pi}\, e^{-Dt\beta^2 - 2Dt\alpha\beta - i\beta x} = \frac{1}{\sqrt{4\pi Dt}}\, \exp\Big( \frac{1}{4Dt}(-2Dt\alpha - ix)^2 \Big)
 = \frac{1}{\sqrt{4\pi Dt}}\, \exp\Big( -\frac{x^2}{4Dt} + Dt\alpha^2 + ix\alpha \Big)

Then:

\frac{1}{\sqrt{4\pi Dt}}\, \exp\Big( -\frac{x^2}{4Dt} \Big)\int \frac{d\alpha}{2\pi}\, e^{-(T - t)D\alpha^2 - i\alpha(x_T - x)} =

P[x(t) = x,\ x(T) = x_T] = \frac{1}{\sqrt{4\pi Dt}}\exp\Big( -\frac{x^2}{4Dt} \Big)\, \frac{1}{\sqrt{4\pi D(T - t)}}\exp\Big( -\frac{(x_T - x)^2}{4D(T - t)} \Big)
If we do the α integration first we obtain the Brownian bridge:

\int \frac{d\alpha}{2\pi}\, e^{-DT\alpha^2 - 2Dt\alpha\beta - i\alpha x_T} = \frac{1}{\sqrt{4\pi DT}}\, \exp\Big( \frac{1}{4DT}(-2Dt\beta - ix_T)^2 \Big)
 = \frac{1}{\sqrt{4\pi DT}}\, \exp\Big( \frac{1}{4DT}\big( -x_T^2 + 4Dti\beta x_T + 4D^2t^2\beta^2 \big) \Big)

Then:

\frac{1}{\sqrt{4\pi DT}}\exp\Big( -\frac{x_T^2}{4DT} \Big)\int \frac{d\beta}{2\pi}\, e^{-Dt\left(1 - \frac{t}{T}\right)\beta^2 - i\beta x + i\beta\frac{t}{T}x_T} =

P[x(t) = x,\ x(T) = x_T] = \frac{1}{\sqrt{4\pi DT}}\exp\Big( -\frac{x_T^2}{4DT} \Big)\, \frac{1}{\sqrt{4\pi Dt(1 - \frac{t}{T})}}\exp\Big( -\frac{(x - \frac{t}{T}x_T)^2}{4Dt(1 - \frac{t}{T})} \Big)

We can say, given Z_t \sim \mathcal{N}(0, 1):

X_t = \frac{t}{T}X_T + \sqrt{2D\,\frac{t}{T}(T - t)}\, Z_t
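The bridge formula can be checked by crude conditioning of simulated Brownian paths on the endpoint. A minimal sketch (D, T, x_T and tolerances are illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    D, T, n_steps, n_paths, xT = 1.0, 1.0, 200, 400000, 0.8
    dt = T / n_steps

    incr = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_paths, n_steps))
    paths = np.cumsum(incr, axis=1)

    sel = np.abs(paths[:, -1] - xT) < 0.05   # keep paths with x(T) ~ xT
    t_idx = n_steps // 4                     # look at t = T/4
    xt = paths[sel, t_idx]
    t = (t_idx + 1) * dt

    print("mean:", xt.mean(), " expected:", (t / T) * xT)
    print("var:", xt.var(), " expected:", 2 * D * t * (T - t) / T)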

Appendix E

Nearest neighbour matrix


eigenvalues

Consider an N × N matrix A_{x,y} which depends only on |x − y|, i.e. A_{x,y} =
A(|x - y|). Let's diagonalize it in the limit N → ∞:

\sum_y A_{x,y}\, e^{ipy} = e^{ipx}\sum_y A(|x - y|)\, e^{ip(y - x)}    (E.1)

If A(|x - y|) = \delta_{|x-y|, a}:

\sum_y A_{x,y}\, e^{ipy} = e^{ipx}\sum_y \delta_{|x-y|, a}\, e^{ip(y - x)}    (E.2)

In a d-dimensional cubic lattice we have 2d nearest neighbors, so (calling µ
the directions):

\sum_y A_{x,y}\, e^{ipy} = e^{ipx}\sum_{\mu=1}^d\sum_{y_\mu = \pm a} e^{ip_\mu y_\mu}    (E.3-E.4)
 = e^{ipx}\sum_{\mu=1}^d\big( e^{ip_\mu a} + e^{-ip_\mu a} \big)    (E.5)
 = e^{ipx}\, 2\sum_{\mu=1}^d\cos(p_\mu a)    (E.6)

with

-\frac{\pi}{a} \leq p_\mu \leq \frac{\pi}{a}, \qquad \mu = 1\dots d

Adding a diagonal term \delta_{x,y}, as in the matrix K of the field theory chapter,
simply shifts every eigenvalue by 1, giving 1 + 2\sum_\mu \cos(p_\mu a).
Indeed from the point x ∈ \mathbb{Z}^d we can reach 2d nearest-neighbor points using
the d directions and their opposites.
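This is easy to verify numerically in d = 1. A minimal sketch (lattice spacing a = 1, periodic boundaries, illustrative N) for the matrix I + Δ:

    import numpy as np

    N = 32
    A = np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
    A += np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))   # periodic wrap

    eig = np.sort(np.linalg.eigvalsh(A))
    p = 2 * np.pi * np.arange(N) / N
    pred = np.sort(1 + 2 * np.cos(p))
    print(np.allclose(eig, pred))   # True: eigenvalues are 1 + 2 cos(p)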
