Models of Theoretical Physics Notes
Baiesi - Maritan
October 5, 2019
Contents
3 Fokker-Planck Equation
3.1 Master Equation derivation
3.2 Langevin Equation
3.3 Stochastic Calculus
3.3.1 Introduction
3.3.2 Ito Integrals
3.3.3 Differentiation rules
3.3.4 Correlation
3.3.5 Change Of Variables
3.3.6 Fokker-Planck derivation from Langevin
9.3 Finite Temperature Quantum Mechanics
9.4 Field Theory Example
11 Disordered systems
11.1 Spin Glasses
11.2 Replica trick
11.3 Pure states
11.4 Clustering
11.5 Overlaps
11.6 Overlap distribution
11.7 p-Spin spherical model
11.8 Replica Symmetric Solution
11.9 Replica Trick and Physics
11.10 Replica Symmetry Breaking
11.11 Random Field Ising Model
12 Levy flights
12.1 Sub and super diffusion
12.2 Cauchy random walk
13 Instantons
13.1 Classical Instantons
13.2 Instanton Examples
13.3 Quantum and Statistical Mechanics Instantons
Appendices
A Characteristic Functions and Central Limit Theorem
A.1 Characteristic Functions
A.2 Central Limit Theorem
Chapter 1
The variables are coupled and the integral is solved by diagonalizing the matrix A (this can be done thanks to the spectral theorem) via an orthogonal matrix O, with OO^T = I_n, and changing variables y = Ox. The Jacobian of this transformation is 1, so, calling a_i, i = 1,...,n the eigenvalues of A, we have:

\[ Z[A] = \prod_{i=1}^{n} \int_{-\infty}^{\infty} dx_i\, e^{-\frac{a_i}{2} x_i^2} = \frac{(2\pi)^{n/2}}{\sqrt{\det A}} \quad (1.4) \]

since \prod_{i=1}^n a_i = \det A.
The last result we need is a further generalization of Z[A]; it is obtained by adding a linear term to the argument of the exponential, i.e. \sum_{i=1}^n b_i x_i = b^T x with b \in \mathbb{R}^n:

\[ Z[A,b] = \int_{\mathbb{R}^n} d^n x\, e^{-A_2(x) + b^T x} = Z[A,0]\, e^{\frac{1}{2} b^T A^{-1} b} \quad (1.5) \]

(we have defined Z[A,0] = Z[A]). This result can be derived either by completing the square or by changing variables to y = x - x^*, where x^* = A^{-1} b is the extremum of the exponential's argument. From now on we call Z[A,b] a generating function.
1.1.1 Examples

\[ A = \begin{pmatrix} 3 & -1 \\ -1 & 3 \end{pmatrix} \]

A has eigenvalues \lambda_1 = 2 and \lambda_2 = 4 and determinant 8, giving Z[A,0] = \pi/\sqrt{2}.
Suppose now that b = (1,0)^T; then, using

\[ A^{-1} = \frac{1}{8} \begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}, \]

\[ Z[A,b] = \frac{\pi}{\sqrt{2}}\, e^{3/16} \]
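These closed forms are easy to sanity-check numerically; the sketch below (using NumPy, with the matrix A and vector b of this example) compares them against a brute-force Riemann sum of the Gaussian integral:

```python
import numpy as np

# The 2x2 example above: eigenvalues 2 and 4, det A = 8
A = np.array([[3.0, -1.0], [-1.0, 3.0]])
b = np.array([1.0, 0.0])

# Closed forms: Z[A,0] = (2*pi)^{n/2}/sqrt(det A) = pi/sqrt(2),
# Z[A,b]   = Z[A,0]*exp(b^T A^{-1} b / 2) = (pi/sqrt(2))*exp(3/16)
Z0_exact = (2 * np.pi) ** (len(A) / 2) / np.sqrt(np.linalg.det(A))
Zb_exact = Z0_exact * np.exp(0.5 * b @ np.linalg.inv(A) @ b)

# Brute-force Riemann sum; the integrand decays fast, so [-8, 8]^2 suffices
x = np.linspace(-8.0, 8.0, 801)
X, Y = np.meshgrid(x, x, indexing="ij")
quad = 0.5 * (A[0, 0] * X**2 + 2 * A[0, 1] * X * Y + A[1, 1] * Y**2)
dx = x[1] - x[0]
Z0_num = np.exp(-quad).sum() * dx**2
Zb_num = np.exp(-quad + b[0] * X + b[1] * Y).sum() * dx**2
```

The grid bounds and resolution are arbitrary choices; any window large enough for the Gaussian tails to be negligible works.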
From this distribution we can calculate expected values of products (also called correlation functions) of l of the variables x_i:

\[ \langle x_{k_1} \cdots x_{k_l} \rangle = \frac{1}{Z[A,0]} \int_{\mathbb{R}^n} x_{k_1} \cdots x_{k_l}\, e^{-A_2(x)} \quad (1.7) \]

1.2.1 Examples

Using the results from the previous example we compute a correlation function:

\[ \langle x_1 x_2 \rangle = \partial_{b_1} \partial_{b_2} e^{\frac{1}{2} b^T A^{-1} b} \Big|_{b=0} = A^{-1}_{1,2} = \frac{1}{8} \]

Notice that this result could have been read off directly from the inverse of A, which is the covariance matrix.
Now, using A = I_n, we compute \langle x_{k_1} \cdots x_{k_l} \rangle (supposing every k_j index is different from the others):

\[ \langle x_{k_1} \cdots x_{k_l} \rangle = \frac{\partial}{\partial b_{k_1}} \cdots \frac{\partial}{\partial b_{k_l}}\, e^{\frac{b^T b}{2}} \Big|_{b=0} = b_{k_1} \cdots b_{k_l}\, e^{\frac{b^T b}{2}} \Big|_{b=0} = 0 \]

This result is expected, since in this case the covariance matrix is the identity. The two-point function instead is:

\[ \langle x_i x_j \rangle = \delta_{ij} \]
1.3 Wick's Theorem

We discovered that correlation functions can be computed using derivatives of Z[A,b]; there is, however, another route, Wick's theorem: any correlation function of an even number of variables can be written as the sum over all pairings of products of two-point correlation functions (defining G_{i,j} = A^{-1}_{i,j}).

1.3.1 Examples

Setting ourselves in one dimension, we can compute the moments of a gaussian with variance \sigma^2 (i.e. the matrix A is the number 1/\sigma^2) using Wick's theorem:

\[ \langle x^2 \rangle = \sigma^2 \]
\[ \langle x^4 \rangle = \sigma^2\sigma^2 + \sigma^2\sigma^2 + \sigma^2\sigma^2 = 3\sigma^4 = 3\langle x^2 \rangle^2 \]
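The pairing count is easy to confirm by Monte Carlo; a quick sketch (the value \sigma = 1.5 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
x = rng.normal(0.0, sigma, size=1_000_000)

m2 = np.mean(x**2)   # should approach sigma^2
m4 = np.mean(x**4)   # Wick: three pairings, so 3*sigma^4
```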
\[ I(\lambda) = \int d^n x\, e^{-F(x)/\lambda} \approx \frac{(2\pi\lambda)^{n/2}}{\det(\partial^2 F(x_c))^{1/2}}\, e^{-F(x_c)/\lambda} \quad (1.10) \]

where x_c is the saddle point of F(x). To derive this result we just change variables x = x_c + \sqrt{\lambda}\, y and ignore all factors O(\lambda^{1/2}) in the exponent's argument.
In the case of a complex variable:

\[ I(s) = \int_C g(z)\, e^{s f(z)}\, dz \approx \frac{(2\pi)^{1/2}\, g(z_c)\, e^{s f(z_c)}}{|s f''(z_c)|^{1/2}} \quad (1.11) \]

where z_c is the maximum of f(z) along the contour C; beware of the phases of f and g along the steepest-descent path through z_c.
1.4.2 Gaussian with imaginary mean

Here we want to make sense of:

\[ \int_{\mathbb{R}} e^{-a(x-ib)^2}\, dx \quad (1.12) \]

• \gamma_R = [-R, R]

On the arc:

\[ \le R \int_0^{\pi/4 - \epsilon/2} d\theta\, e^{-aR^2 \cos(2\theta) + R b' \sin\theta} \to 0 \]

as R \to \infty. The same holds on \gamma_-. On \bar\gamma_R we have:

\[ \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-az^2 - i b' z}\, dz = \frac{e^{-i(\pi/4 - \epsilon/2)}}{2\pi} \left( \frac{\pi}{a} \right)^{1/2} e^{-\frac{b'^2}{4a}} \xrightarrow{\;\epsilon \to 0\;} (4\pi a i)^{-1/2}\, e^{\frac{i b^2}{4a}} \]
\[ i\hbar\, \partial_t \psi(x,t) = -\frac{\hbar^2}{2m}\, \partial_x^2 \psi(x,t) \quad (1.16) \]

Set \hbar = 1 and move to Fourier space:

\[ \psi(x,t) = \int \frac{dp}{2\pi}\, \tilde\psi(p,t)\, e^{ipx} \quad (1.17) \]

\[ i \partial_t \tilde\psi(p,t) = \frac{p^2}{2m}\, \tilde\psi(p,t) \quad (1.18) \]

\[ \tilde\psi(p,t) = \tilde\psi(p,0)\, e^{-i \frac{p^2 t}{2m}} \quad (1.19) \]

Since we use \psi(x,0) = \delta(x) we have \tilde\psi(p,t) = e^{-i \frac{p^2 t}{2m}}. To find \psi(x,t) we make use of Fresnel integrals (a = \frac{t}{2m} and b = -ix):

\[ \psi(x,t) = \int \frac{dp}{2\pi}\, e^{-i\frac{p^2 t}{2m} + ipx} = \left( 4\pi \frac{t}{2m}\, i \right)^{-1/2} e^{\frac{(-ix)^2}{4 i \frac{t}{2m}}} = \left( \frac{m}{2\pi t i} \right)^{1/2} e^{-\frac{m x^2}{2 i t}} \quad (1.20) \]

Putting back \hbar:

\[ \psi(x,t) = \left( \frac{m}{2\pi \hbar i t} \right)^{1/2} e^{-\frac{m x^2}{2 \hbar i t}} \quad (1.21) \]

This is called the propagator of the free Schrodinger equation.
1.4.5 Indented integrals and the i\epsilon prescription

Prove that:

\[ \lim_{\epsilon \to 0} \frac{1}{x - x_0 \mp i\epsilon} = P\, \frac{1}{x - x_0} \pm i\pi\, \delta(x - x_0) \]

We here prove:

\[ \lim_{\epsilon \to 0} \frac{1}{x - x_0 + i\epsilon} \]
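The \delta part of this identity can be checked numerically: the imaginary part of 1/(x - x_0 - i\epsilon) is a Lorentzian of width \epsilon that acts as \pi\delta(x - x_0) on smooth test functions. A sketch (the test function cos and all numerical values are arbitrary choices):

```python
import numpy as np

def lorentzian_pairing(f, x0, eps, x):
    # Im 1/(x - x0 - i*eps) = eps/((x-x0)^2 + eps^2)  ->  pi*delta(x - x0)
    w = eps / ((x - x0) ** 2 + eps**2)
    dx = x[1] - x[0]
    return np.sum(f(x) * w) * dx / np.pi

x = np.linspace(-50.0, 50.0, 2_000_001)
val = lorentzian_pairing(np.cos, 0.3, 1e-3, x)   # should approach cos(0.3)
```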
Chapter 2
Since particles move through the boundary of A, i.e. \partial A, there must exist a current vector j(x,t) whose flux through \partial A accounts for the change in particle number:

\[ \partial_t \int_A d^d x\, \rho(x,t) = -\int_{\partial A} dS\, \hat n \cdot j(x,t) = -\int_A d^d x\, \nabla \cdot j(x,t) \quad (2.2) \]

(the minus sign reflects the fact that a positive outward flux decreases the density: particles are leaving the region). Since the region A is arbitrary:
At this moment we don't have any external field, so the only way to construct the current j(x,t) is from \rho and its derivatives; assuming \rho is small (along with its derivatives) we choose:
In vector form:

\[ w(t_n) = W\, w(t_{n-1}) = W^n\, w(0) \]

Suppose that the only possible jumps are the ones between nearest-neighbour sites, i.e.:

\[ W_{ij} = p_+ \delta_{i,j+1} + p_- \delta_{i,j-1} \quad (2.8) \]

where p_+ is the probability of a right jump and p_- of a left one. We now derive w_i(t_n); in n time steps, n_+ steps to the right and n_- to the left are taken, with n_+ + n_- = n; the position i is i = n_+ - n_-, i.e.:

\[ n_+ \equiv \frac{n+i}{2} \quad (2.9) \]
\[ n_- \equiv \frac{n-i}{2} \quad (2.10) \]

If n - i is odd or |i| > n the probability is zero; otherwise n_+ \sim B(p_+, n) is binomial:

\[ w_i(n) = \binom{n}{n_+} p_+^{n_+} p_-^{n_-} = \binom{n}{n_+} p_+^{n_+} p_-^{\,n-n_+} = \binom{n}{n_+} p_+^{\frac{n+i}{2}} p_-^{\frac{n-i}{2}} \quad (2.11) \]

To calculate moments we use the generating function:

\[ \hat w(z,n) = \sum_{n_+=0}^{n} z^{n_+} \binom{n}{n_+} p_+^{n_+} p_-^{\,n-n_+} = (p_+ z + p_-)^n \quad (2.12) \]

\[ \langle n_+ \rangle = z \frac{\partial}{\partial z} \hat w \Big|_{z=1} = n p_+ \quad (2.13) \]
\[ \langle n_+^2 \rangle = \Big( z \frac{\partial}{\partial z} \Big)^2 \hat w \Big|_{z=1} = n p_+ (1 + (n-1) p_+) \quad (2.14) \]

Since x_n = i l = l(2n_+ - n) we obtain:

\[ \langle x_n \rangle = n l (p_+ - p_-) \quad (2.15) \]
\[ \mathrm{Var}(x_n) = 4 l^2 p_+ p_- n \quad (2.16) \]

Moving to the continuum limit we set p_+ = p_- = 1/2 and n = t/\epsilon:

\[ \mathrm{Var}(x_t) = \frac{l^2}{\epsilon}\, t \]

The limit is taken as l \to 0, \epsilon \to 0 and n \to \infty keeping \frac{l^2}{\epsilon} \equiv 2D constant:

\[ \mathrm{Var}(x_t) = 2 D t \quad (2.17) \]
This is the same result obtained by solving the diffusion equation: as expected, the binomial approaches a gaussian distribution in the continuum limit. Indeed, using the central limit theorem on the distribution we obtain:

\[ w(x,t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-\frac{x^2}{4Dt}} \]
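The discrete variance law (2.16) is easy to verify by direct simulation before taking any limit; a sketch (step length, step count and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_walks, l = 1000, 20000, 1.0
# Symmetric walk: p+ = p- = 1/2
steps = rng.choice([-l, l], size=(n_walks, n_steps))
x_final = steps.sum(axis=1)

var_exact = 4 * l**2 * 0.25 * n_steps   # 4 l^2 p+ p- n = l^2 n
var_sample = x_final.var()
```

With l^2/\epsilon held fixed at 2D this is exactly the Var(x_t) = 2Dt statement above.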
For further insight let's take the continuum limit of the discrete master equation itself. In general a master equation reads:

\[ w_i(t_{n+1}) = \sum_{j \ne i} r_j(t_n)\, w_j(t_n) \quad (2.18) \]

In our case:

\[ w_i(t_{n+1}) = \frac{1}{2}\big( w_{i-1}(t_n) + w_{i+1}(t_n) \big) \quad (2.19) \]

Substitute t_n = n\epsilon; since we are looking for a continuum distribution we substitute i with x:

\[ w(x, t+\epsilon) = \frac{1}{2}\big( w(x-l, t) + w(x+l, t) \big) \quad (2.20) \]

\[ w(x, t+\epsilon) - w(x,t) = \frac{1}{2}\big( w(x-l,t) + w(x+l,t) - 2w(x,t) \big) \quad (2.21) \]

Now divide by \epsilon and send \epsilon \to 0, l \to 0 and n \to \infty keeping \frac{l^2}{\epsilon} \equiv 2D fixed, obtaining the diffusion equation \partial_t w = D \partial_x^2 w. Expanding w in Fourier modes, w(x,t) = \int c_k(t) e^{ikx} dk, we obtain:

\[ \frac{d c_k(t)}{dt} = -D k^2\, c_k(t) \]
\[ c_k(t) = c_k(0)\, e^{-D k^2 t} \]
\[ w(x,t) = \int_{\mathbb{R}} c_k(0)\, e^{-D k^2 t}\, e^{ikx}\, dk \quad (2.26) \]
To impose initial conditions:

\[ w(x,0) = \int_{\mathbb{R}} c_k(0)\, e^{ikx}\, \frac{dk}{\sqrt{2\pi}} \]
\[ c_k(0) = \int_{\mathbb{R}} w(x',0)\, e^{-ikx'}\, \frac{dx'}{\sqrt{2\pi}} \]

For w(x,0) = \delta(x - x_0) we obtain c_k(0) = \frac{1}{\sqrt{2\pi}} e^{-ikx_0} and:

\[ w(x,t) = \frac{1}{\sqrt{4\pi D t}}\, e^{-\frac{(x-x_0)^2}{4Dt}} \quad (2.27) \]
We can shift also the time and consider the solution for t > t_0:

\[ w(x,t|x_0,t_0) = \frac{\theta(t-t_0)}{\sqrt{4\pi D (t-t_0)}}\, e^{-\frac{(x-x_0)^2}{4D(t-t_0)}} \quad (2.28) \]

We call this solution the propagator of brownian motion and denote it by W(x,t|x_0,t_0); it satisfies:

\[ W(x,t|x_0,t_0) = W(x-x_0,\, t-t_0\,|\,0,0) \]

Mathematically speaking, W is the Green function of the diffusion equation, and for an arbitrary initial condition w(x_0,t_0) we can derive, using W, the full solution w(x,t):

\[ w(x,t) = \int dx_0\, W(x,t|x_0,t_0)\, w(x_0,t_0) \]
2.3 Wiener Integral
To start we have to define now an object T , which can be a finite subset of
R or an interval, e.g.: T = [0, ∞) or T = {t1 , ..., tn } and RT as the set of
all functions having as domain T : in our example if T = [0, ∞), RT is the
set of all functions x : [0, ∞) → R or if T = {t1 , ..., tn }, RT is the set of all
sequences {x1 , ..., xn } = {x(t1 )...x(tn )} for some xi ’s.
Using T and RT , we want to construct a brownian motion measure Pw for
sets A ⊂ RT , i.e. give a probability to this sets (also called ensembles).
Start from finite sets: consider, \forall n \in \mathbb{N}, a finite set of time instants T = \{t_1,...,t_n\} (where t_i < t_{i+1} and t_i \in \mathbb{R}). Define \Delta t_i = t_i - t_{i-1} and H_i = [a_i, b_i] \forall i = 1...n with a_i < b_i \in \mathbb{R}; we call A the set \{x : x(t_1) \in H_1, ..., x(t_n) \in H_n\}. We can now, by means of the ESCK relation, define the measure P_w(A) = P_{t_1,...,t_n} of A-like ensembles as:

\[ P_{t_1,...,t_n} = \int_{H_1} dx_1 \cdots \int_{H_n} dx_n\, \prod_{i=1}^n \frac{1}{(4\pi D \Delta t_i)^{1/2}} \exp\left( -\frac{(x_i - x_{i-1})^2}{4 D \Delta t_i} \right) \quad (2.30) \]
Notice that this relation is valid for any n \in \mathbb{N}, and by the Kolmogorov extension theorem we are free to extend this result to any subset of R^T (i.e. n \to \infty), given a suitable choice of T (e.g. [0,\infty)). This way we have constructed a probability space (R^T, F, P_w), where F is the set of measurable subsets of R^T to which A belongs.
In practice, for any computation we rely on discretization; for example, if we had to find the expected value of a functional of the trajectory x(\tau), say F(x(\tau)) = \int_0^\infty a(\tau)\, x(\tau)\, d\tau, we would use \sum_{i=0}^N a(t_i)\, x(t_i)\, \Delta t_i and take N \to \infty.
2.4.1 Identity

Using normalization of the Wiener measure:

\[ \langle 1 \rangle_w = \int_{C=[0,0;t]} 1\, dx_w(\tau) = 1 \quad (2.31) \]

\[ A_N(i,i) = 2, \qquad A_N(i,j) = -\delta_{i,j+1} - \delta_{i,j-1} \]

We need to compute the determinant of A_N; using the Laplace expansion on the last column we obtain:

\[ \det A_N = N + 1 \]
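The Laplace-expansion result is quick to confirm numerically for small N (a sketch using NumPy):

```python
import numpy as np

def det_AN(N):
    # A_N: 2 on the diagonal, -1 on the two off-diagonals
    A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    return np.linalg.det(A)

dets = [round(det_AN(N)) for N in range(1, 8)]
# Laplace expansion predicts det A_N = N + 1
```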
2.4.3 Two-point function

To compute the two-point correlation function it's straightforward to use the ESCK relation:

\[ \langle x(t_1)\, x(t_2) \rangle_w = \int \frac{dx_1\, dx_2}{\sqrt{4\pi D \Delta t_1}\, \sqrt{4\pi D \Delta t_2}}\; e^{-\frac{x_1^2}{4D\Delta t_1}}\, e^{-\frac{(x_2-x_1)^2}{4D\Delta t_2}}\; x_1 x_2 \quad (2.34) \]

Using x = x_1 and y = x_2 - x_1 the integral factorizes. In general:

\[ \langle x(t_1)\, x(t_2) \rangle_w = x_0^2 + 2D \min\{t_1 - t_0,\, t_2 - t_0\} \quad (2.35) \]
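Formula (2.35) (here with x_0 = 0, t_0 = 0) can be checked by sampling discretized Brownian paths; a sketch (D and the time grid are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
D, dt, n_steps, n_paths = 0.5, 0.01, 200, 20000
# Increments ~ N(0, sqrt(2*D*dt)); cumulative sums give x(t_i) with x0 = 0
incr = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_paths, n_steps))
x = np.cumsum(incr, axis=1)

i1, i2 = 99, 199
t1, t2 = (i1 + 1) * dt, (i2 + 1) * dt     # t1 = 1.0, t2 = 2.0
cov_sample = np.mean(x[:, i1] * x[:, i2])
cov_exact = 2 * D * min(t1, t2)           # the 2D*min rule of (2.35)
```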
Discretizing:

\[ \sum_{i=1}^N A(t_i)(x_i - x_{i-1}) = \sum_{i=1}^N A_i (x_i - x_{i-1}) \]

\[ = \int \prod_{i=1}^N \frac{dy_i}{(\pi \Delta t_i)^{1/2}}\; F\Big( \sum_{i=1}^N A_i y_i \Big)\, e^{-\sum_{i=1}^N \frac{y_i^2}{\Delta t_i}} \quad (2.38) \]

We introduce the identity as a delta function:

\[ \int \frac{d\alpha}{2\pi} \int dz\, e^{i\alpha z} F(z) \int \prod_{i=1}^N \frac{dy_i}{(\pi \Delta t_i)^{1/2}}\; e^{-\sum_{i=1}^N \left( \frac{y_i^2}{\Delta t_i} + i\alpha A_i y_i \right)} \]
\[ = \int dz\, F(z) \int \frac{d\alpha}{2\pi} \exp\left\{ -\frac{\alpha^2}{4} \sum_{i=1}^N A_i^2 \Delta t_i + i\alpha z \right\} = \]

\[ = \frac{1}{\sqrt{\pi \sum_{i=1}^N A_i^2 \Delta t_i}} \int dz\, F(z) \exp\left\{ -z^2 \Big/ \sum_{i=1}^N A_i^2 \Delta t_i \right\} \]

As N \to \infty:

\[ \lim_{N\to\infty} \sum_{i=1}^N A_i^2 \Delta t_i = \int_0^t A^2(\tau)\, d\tau = \int_0^t \left[ \int_\tau^t a(s)\, ds \right]^2 d\tau \equiv R \quad (2.39) \]

\[ \Big\langle F\Big( \int_0^t a(\tau)\, x(\tau)\, d\tau \Big) \Big\rangle_W = \frac{1}{\sqrt{\pi R}} \int dz\, F(z)\, e^{-z^2/R} \quad (2.40) \]

For D \ne 1/4 send R \to 4DR. Using F(z) = e^{hz} we obtain the moment generating function for \int_0^t a(\tau) x(\tau)\, d\tau:

\[ \big\langle e^{h \int_0^t a(\tau) x(\tau)\, d\tau} \big\rangle_W = e^{h^2 R / 4} \quad (2.41) \]

\[ \Big\langle \Big( \int_0^t a(\tau)\, x(\tau)\, d\tau \Big)^{2k+1} \Big\rangle_W = 0 \quad (2.42) \]

\[ \Big\langle \Big( \int_0^t a(\tau)\, x(\tau)\, d\tau \Big)^{2k} \Big\rangle_W = \Big( \frac{R}{2} \Big)^{k}\, \frac{(2k)!}{2^k\, k!} \quad (2.43) \]
Start by discretizing:

\[ I_4^{(N)} = \int \prod_{i=1}^N \frac{dx_i}{\sqrt{\pi\epsilon}}\; e^{-\sum_{i=1}^N \left[ \frac{(x_i - x_{i-1})^2}{\epsilon} + p_i\, \epsilon\, x_i^2 \right]} \quad (2.45) \]

We can recast the argument of the exponential as a bilinear form using the matrix a: calling a_i = p_i \epsilon^2 + 2 for i = 1 \dots N-1 and a_N = p_N \epsilon^2 + 1, a is the tridiagonal matrix having the a_i's as diagonal elements and -1 on the two off-diagonals:

\[ a = \begin{pmatrix} a_1 & -1 & 0 & \cdots & \cdots & 0 \\ -1 & a_2 & -1 & & & 0 \\ 0 & -1 & a_3 & -1 & & \vdots \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ & & & -1 & a_{N-1} & -1 \\ 0 & \cdots & \cdots & 0 & -1 & a_N \end{pmatrix} \quad (2.46) \]
So, in terms of \det a, we have that:

\[ I_4^{(N)} = (\det a)^{-1/2} \]

We denote the determinant of the matrix a by D_1^N and define D_k^N as the determinant of the matrix obtained by removing the first k-1 rows and columns from a:

\[ D_k^N = \begin{vmatrix} a_k & -1 & 0 & \cdots & \cdots & 0 \\ -1 & a_{k+1} & -1 & 0 & \cdots & 0 \\ 0 & -1 & \ddots & \ddots & & \vdots \\ \vdots & 0 & \ddots & \ddots & -1 & 0 \\ \vdots & & & -1 & a_{N-1} & -1 \\ 0 & 0 & \cdots & 0 & -1 & a_N \end{vmatrix} \quad (2.47) \]
By Laplace expansion on the first row of D_k^N:

\[ D_k^N = a_k D_{k+1}^N - D_{k+2}^N = (p_k \epsilon^2 + 2) D_{k+1}^N - D_{k+2}^N \quad (2.48) \]

\[ \frac{D_k^N - 2 D_{k+1}^N + D_{k+2}^N}{\epsilon^2} = p_k\, D_{k+1}^N \quad (2.49) \]

Calling \tau = (k-1)\epsilon and taking \epsilon \to 0 and N \to \infty we arrive at:

\[ \partial_\tau^2 D(\tau) = p(\tau)\, D(\tau) \quad (2.50) \]

From this we find that D_1^N \to D(0).
Since D_N^{(N)} = p_N \epsilon^2 + 1 we find that D(t) = 1.
Since D_{N-1}^{(N)} = p_N p_{N-1} \epsilon^4 + 2 p_N \epsilon^2 + p_{N-1} \epsilon^2 + 1 we have that:

\[ \dot D(t) = \lim_{\epsilon \to 0} \frac{D_N^{(N)} - D_{N-1}^{(N)}}{\epsilon} = 0 \]
Going back to our integral:

\[ I_4 = \frac{1}{\sqrt{D(0)}} \]

For p(\tau) = k^2, solving \partial_\tau^2 D = k^2 D with D(t) = 1 and \dot D(t) = 0 gives D(\tau) = \cosh k(t-\tau), from which:

\[ \lim_{N\to\infty} I_4^{(N)} = I_4 = \frac{1}{\sqrt{\cosh kt}} \quad (2.55) \]
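The whole chain I_4^{(N)} = (\det a)^{-1/2} \to 1/\sqrt{\cosh kt} can be tested directly by building the tridiagonal matrix and computing its determinant (a sketch; the values of k, t and N are arbitrary choices):

```python
import numpy as np

def I4_discrete(k, t, N):
    # Discretized Gaussian integral: I_4^(N) = (det a)^{-1/2}, with
    # a_i = p*eps^2 + 2 for i < N, a_N = p*eps^2 + 1, and -1 off-diagonal
    eps = t / N
    p = k**2
    diag = np.full(N, p * eps**2 + 2.0)
    diag[-1] = p * eps**2 + 1.0
    a = np.diag(diag) - np.eye(N, k=1) - np.eye(N, k=-1)
    return np.linalg.det(a) ** -0.5

k, t = 1.3, 2.0
exact = 1.0 / np.sqrt(np.cosh(k * t))
approx = I4_discrete(k, t, 1000)
```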
A generalization of the previous computation is:

\[ \big\langle e^{-\int_0^t p(\tau)\, x^2(\tau)\, d\tau}\; \delta(x(t) - x) \big\rangle \quad (2.56) \]

i.e. the previous expected value but with fixed endpoint. Start by rewriting the delta function:

\[ \big\langle e^{-\int_0^t p(\tau) x^2(\tau) d\tau}\, \delta(x - x(t)) \big\rangle = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\alpha\, e^{i\alpha x}\, \big\langle e^{-\int_0^t p(\tau) x^2(\tau) d\tau}\, e^{-i\alpha x(t)} \big\rangle \quad (2.57) \]

\[ a^{-1}_{N,N} = \frac{|a'|}{|a|} = \frac{\tilde D_1^{(N-1)}}{D_1^{(N)}} \quad (2.60) \]
where we introduced a' as the matrix obtained by removing the last row and column from a. Introduce now \tilde D_k^{(N-1)} as the determinant of the matrix obtained from D_1^{(N-1)} \equiv |a'| by eliminating the first k-1 rows and columns:

\[ \tilde D_k^{(N-1)} = \begin{vmatrix} a_k & -1 & 0 & \cdots & \cdots & 0 \\ -1 & a_{k+1} & -1 & 0 & \cdots & 0 \\ 0 & -1 & \ddots & \ddots & & \vdots \\ \vdots & & \ddots & \ddots & -1 & 0 \\ \vdots & & & -1 & a_{N-2} & -1 \\ 0 & 0 & \cdots & 0 & -1 & a_{N-1} \end{vmatrix} \quad (2.61) \]
Chapter 3
Fokker-Planck Equation
The time instants are t_n = n\epsilon, while the position is taken to lie on a one-dimensional lattice of spacing l. Let's try to find the continuous version of this equation. Introduce w(x_i, t_n) = \frac{1}{l} w_i(t_n) and use an integral version of the sum above:

\[ w(x, t_{n+1}) = \int dz\, W(z\,|\,x-z, t_n)\; w(x-z, t_n) \quad (3.2) \]

Now we expand W in the argument x - z for z \to 0: this comes from a physical argument, since W must be large for small jumps (the time steps are supposed to be short):

\[ w(x, t_{n+1}) - w(x, t_n) = \int dz \left[ -z\, \partial_x \big[ W(z|x,t_n)\, w(x,t_n) \big] + \frac{z^2}{2}\, \partial_x^2 \big[ W(z|x,t_n)\, w(x,t_n) \big] - \dots \right] \quad (3.4) \]

Using integration by parts:

\[ w(x,t_{n+1}) - w(x,t_n) = \sum_{k=1}^{\infty} \frac{(-1)^k}{k!}\; \partial_x^k \left[ w(x,t_n) \int dz\, z^k\, W(z|x) \right] \quad (3.5) \]

\[ F(y) = F(-y), \qquad \int dy\, F(y) = 1 \]
3.2 Langevin Equation

Let's go back to the Wiener path integral. In the discretization we find that the probability density of the difference of two subsequent points x_{i+1} and x_i is:

\[ d\mathbb{P}(x_{i+1} - x_i = z_i) = \frac{dz_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\frac{z_i^2}{4 D \Delta t_i}} \quad (3.10) \]

i.e.:

\[ z_i \sim \mathcal{N}\big(0,\, \sqrt{2 D \Delta t_i}\big) \quad (3.11) \]

from which:

\[ x_{i+1} = x_i + z_i \quad (3.12) \]

Introducing another random variable \Delta B_i such that z_i = \sqrt{2D}\, \Delta B_i (i.e. \Delta B_i \sim \mathcal{N}(0, \sqrt{\Delta t_i})) we can rewrite the previous expression as:

\[ \Delta x = \sqrt{2D}\, \Delta B \quad (3.13) \]

or, formally:

\[ dX(t) = \sqrt{2D}\, dB(t) \quad (3.14) \]

where B(t) is called pure Brownian motion, which again can be recast in terms of a standard normal variable \xi_i in the following way:
At this point this formalism seems pointless; consider now a (brownian) particle in a bath of smaller particles (a liquid, for example), subject to friction and to an external force composed of a deterministic part and a noise due to collisions with the bath:

\[ m \ddot r = -\gamma \dot r + F \quad (3.16) \]

with F = F_{ext} + F_{noise}. Now, if we look at the system on time scales t for which we can neglect the inertia of the particle, i.e. t \gg m/\gamma, we arrive at (in one dimension):

\[ \dot x = \frac{1}{\gamma} F = \frac{1}{\gamma} F_{ext} + \frac{1}{\gamma} F_{noise} \quad (3.17) \]

or:

\[ x_{i+1} = x_i + \frac{1}{\gamma} F_{ext}\, \Delta t_i + \frac{1}{\gamma} F_{noise}\, \Delta t_i \quad (3.18) \]

Setting F_{ext} = 0 we recover the equation dX(t) = \sqrt{2D}\, dB(t) if we identify:

\[ \frac{1}{\gamma} F_{noise} = \sqrt{2D}\, \xi_i \quad (3.19) \]

Setting f = \frac{F_{ext}}{\gamma} we arrive at the (formal) Langevin equation:

\[ dx(t) = f(x(t), t)\, dt + \sqrt{2D}\, dB(t) \quad (3.20) \]
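Equation (3.20) is exactly what the Euler–Maruyama scheme discretizes: x_{i+1} = x_i + f \Delta t + \sqrt{2D}\,\Delta B_i. A sketch with a constant drift f = v (all parameter values are arbitrary choices), for which \langle x_t \rangle = vt and Var(x_t) = 2Dt:

```python
import numpy as np

rng = np.random.default_rng(2)
D, v, dt, n_steps, n_paths = 0.25, 1.5, 0.001, 1000, 20000
x = np.zeros(n_paths)
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += v * dt + np.sqrt(2 * D) * dB   # dx = f dt + sqrt(2D) dB, with f = v

t = n_steps * dt                         # t = 1.0
mean_sample, var_sample = x.mean(), x.var()
```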
we choose a particular set of rules for integrals containing brownian motions. Using \lambda = 0 we choose the Ito prescription (which preserves causality); using \lambda = 1/2, the Stratonovich one: the first sets \tau_i = t_{i-1}, the second \tau_i = \frac{t_i + t_{i-1}}{2}.
Now set G(s) = B(s) and use the Ito prescription:

\[ S_n = \sum_{i=1}^n B_{i-1}(B_i - B_{i-1}) = \frac{1}{2} \sum_{i=1}^n \left[ (B_{i-1} + \Delta B_i)^2 - B_{i-1}^2 - (\Delta B_i)^2 \right] \quad (3.26) \]
The first part telescopes:

\[ \sum_{i=1}^n (B_{i-1} + \Delta B_i)^2 - B_{i-1}^2 = \sum_i (B_{i-1} + \Delta B_i + B_{i-1})(B_{i-1} + \Delta B_i - B_{i-1}) = \quad (3.27) \]

\[ = \sum_i (B_i + B_{i-1})\, \Delta B_i = \sum_i B_i^2 - B_{i-1}^2 = B_n^2 - B_0^2 \quad (3.28) \]
\[ \lim_{n\to\infty} \Big\langle \Big( \sum_{i=1}^n (\Delta B_i)^2 - (t - t_0) \Big)^2 \Big\rangle = 0 \quad (3.30) \]

\[ \lim_{n\to\infty} \Big\langle (t-t_0)^2 + \sum_{i,j} (\Delta B_i)^2 (\Delta B_j)^2 - 2(t-t_0) \sum_i (\Delta B_i)^2 \Big\rangle = 0 \quad (3.31) \]

Now we use the fact that all the \Delta B_i are independent to say that \langle \sum_i (\Delta B_i)^2 \rangle = \sum_i \Delta t_i = t - t_0. We are left with:

\[ \lim_{n\to\infty} \Big\langle \sum_{i,j} (\Delta B_i)^2 (\Delta B_j)^2 - (t-t_0)^2 \Big\rangle = 0 \quad (3.32) \]

Since \langle (\Delta B_i)^2 (\Delta B_j)^2 \rangle = 3\delta_{i,j} \Delta t_i^2 + (1 - \delta_{ij}) \Delta t_i \Delta t_j:

\[ \Big\langle \sum_{i,j} (\Delta B_i)^2 (\Delta B_j)^2 \Big\rangle - (t-t_0)^2 = \sum_{i,j} 3\delta_{i,j} \Delta t_i^2 + (1-\delta_{ij}) \Delta t_i \Delta t_j - (t-t_0)^2 \le \quad (3.33) \]

\[ \le 2 \max_k \Delta t_k \sum_i \Delta t_i = \quad (3.34) \]

\[ = 2 \max_k \Delta t_k\, (t - t_0) \to 0 \quad \text{as } N \to \infty \quad (3.35) \]

In the end:

\[ S = \frac{B_t^2 - B_0^2}{2} - \frac{t - t_0}{2} \quad (3.36) \]
This is the result using the Ito prescription, the one we'll keep throughout the notes. The Stratonovich one consists in finding the mean-square limit of:

\[ S_n = \sum_{i=1}^n B\Big( \frac{t_i + t_{i-1}}{2} \Big)\, \big( B(t_i) - B(t_{i-1}) \big) \quad (3.37) \]

\[ S_{Strat} = \frac{B_t^2 - B_0^2}{2} \]
Some examples of non-anticipating functions:

• B(\tau)
• G(\tau) = \int_{t_0}^{\tau} g(s)\, dB(s) for g non-anticipating
• G(\tau) = \int_{t_0}^{\tau} g(s)\, ds for g non-anticipating
• G(\tau) = \int_{t_0}^{\tau} F(B(s))\, dB(s)
Now we want to prove a very powerful differential relation. To do that, consider:

\[ \lim_{N\to\infty} I_N = 0 \quad (3.40) \]

\[ I_N \equiv \Big\langle \Big( \sum_i G_{i-1}\, (\Delta B_i^2 - \Delta t_i) \Big)^2 \Big\rangle \quad (3.41) \]

Consider in the first addend the argument of the sum; since G_{i-1} is non-anticipating it is independent from \Delta B_i:

\[ \big\langle G_{i-1}^2 \big( (\Delta B_i)^2 - \Delta t_i \big)^2 \big\rangle = \langle G_{i-1}^2 \rangle \Big( \underbrace{\langle \Delta B_i^4 \rangle}_{3\Delta t_i^2} - 2\Delta t_i \underbrace{\langle \Delta B_i^2 \rangle}_{\Delta t_i} + (\Delta t_i)^2 \Big) \quad (3.44) \]

Consider in the second term the argument of the sum for i > j: since (G_{i-1} G_{j-1})(\Delta B_j^2 - \Delta t_j) is independent from \Delta B_i^2 - \Delta t_i, the second factor averages to zero, since \langle \Delta B_i^2 \rangle = \Delta t_i. We are left with:

\[ I_N = 2 \sum_{i=1}^N \Delta t_i^2\; \langle G_{i-1}^2 \rangle \quad (3.48) \]

If \sum_i \langle G_{i-1}^2 \rangle \Delta t_i < \infty or \sup \langle G^2 \rangle < \infty:
We have proved 3.38, that is (dB(\tau))^2 = d\tau. We can generalize this formula; indeed for any k > 0 we have:
Ignoring all terms (dB(\tau))^{m+2} for m > 0 (they are zero!):

\[ dB^n(t) = \binom{n}{0} B^n(t) + \binom{n}{1} B^{n-1}\, dB(t) + \binom{n}{2} B^{n-2}\, dB^2(t) - B^n(t) \]

\[ dB^n(t) = n B^{n-1}\, dB(t) + \frac{n(n-1)}{2}\, B^{n-2}\, dt \quad (3.53) \]

The differential is made up of the regular part (n x^{n-1}) coming from standard calculus and the Ito part related to the second derivative (\frac{n(n-1)}{2} x^{n-2}). Now take n = m + 1:

\[ dB^{m+1}(t) = (m+1)\, B^m\, dB(t) + \frac{m(m+1)}{2}\, B^{m-1}\, dt \]

Divide by m + 1, integrate and rearrange:

\[ \int_{t_0}^t B^m(\tau)\, dB(\tau) = \frac{B^{m+1}(t) - B^{m+1}(t_0)}{m+1} - \frac{m}{2} \int_{t_0}^t B^{m-1}(\tau)\, d\tau \quad (3.54) \]
3.3.4 Correlation

Let G and H be non-anticipating functions. We want to prove that:

\[ \Big\langle \int_{t_0}^t G(\tau_1)\, dB(\tau_1) \int_{t_0}^t H(\tau_2)\, dB(\tau_2) \Big\rangle = \int_{t_0}^t \langle G(\tau) H(\tau) \rangle\, d\tau \quad (3.55) \]

The remaining term is (due to independence):

\[ \sum_{i=1}^N \langle G_{i-1} H_{i-1} \rangle\, \langle (\Delta B_i)^2 \rangle \to \int_{t_0}^t \langle G(\tau) H(\tau) \rangle\, d\tau \quad (3.59) \]
\[ dh(x(t)) = \frac{\partial h}{\partial x}\, dx(t) + \frac{(dx(t))^2}{2}\, \frac{\partial^2 h}{\partial x^2} + \dots = h'(x(t)) \big[ f(x(t),t)\, dt + g(x(t),t)\, dB(t) \big] + \quad (3.60) \]

\[ + \frac{h''(x(t))}{2} \big[ f(x(t),t)\, dt + g(x(t),t)\, dB(t) \big]^2 + \dots \quad (3.61) \]

In the end we get:

\[ dh = dt \left( h' f + \frac{h''}{2}\, g^2 \right) + h' g\, dB \quad (3.62) \]

The average of the last term is zero due to independence (h' g is non-anticipating) and the identity \langle dB(t) \rangle = 0.
Introduce now w(x,t) = \langle \delta(x - x(t)) \rangle_B via an integral:
Chapter 4
The Wiener measure for B_i is known; let's try to derive the one for x_i by a change of variables; we need to compute the Jacobian of the transformation:

\[ J_{ij} = \frac{\partial x_i}{\partial B_j} = \begin{cases} 0 & j > i \\ \sqrt{2D} & i = j \\ * & \text{otherwise} \end{cases} \quad (4.2) \]

The entries below the diagonal do not matter: J is lower triangular, so only the diagonal enters the determinant.
Expanding the square in the exponential's argument we get:

\[ \prod_{i=1}^N \frac{dx_i}{\sqrt{4\pi D \Delta t_i}} \exp\left( -\sum_{i=1}^N \frac{\Delta x_i^2}{4 D \Delta t_i} \right) \exp\left( -\sum_{i=1}^N \frac{k\, x_{i-1}\, \Delta x_i}{2D} \right) \exp\left( -\sum_{i=1}^N \frac{k^2 x_{i-1}^2\, \Delta t_i}{4D} \right) \quad (4.5) \]

Taking the N \to \infty limit (in the mean-squared sense):

\[ d\mathbb{P} = \prod_\tau \frac{dx(\tau)}{\sqrt{4\pi D d\tau}}\; e^{-\int_0^t d\tau\, \frac{\dot x^2(\tau)}{4D}}\; e^{-\frac{k}{2D} \int_0^t x(\tau)\, d_I x(\tau)}\; e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau} \quad (4.6) \]

Let's start by computing the middle term using the change of variables 3.62: consider a function h(x) \in C^3(\mathbb{R}):

\[ \Delta h(x_i) = h'(x_{i-1})\, \Delta x_i + h''(x_{i-1})\, \frac{\Delta x_i^2}{2} + O(\Delta x_i^3) \quad (4.7) \]

Now, making use of the discrete Wiener measure, we can say that, in the mean L^2 sense, \Delta x_i^2 \to 2D\Delta t_i while all higher moments go to zero, from which:

\[ h(x(t)) - h(x(0)) = \int_0^t h'(x(\tau))\, d_I x(\tau) + D \int_0^t h''(x(\tau))\, d\tau \quad (4.9) \]

Setting h'(x) = x:

\[ \frac{x^2(t) - x^2(0)}{2} = \int_0^t x(\tau)\, d_I x(\tau) + D t \quad (4.10) \]

\[ \int_0^t x(\tau)\, d_I x(\tau) = \frac{x^2(t) - x^2(0)}{2} - D t \quad (4.11) \]

which is, apart from an overall constant, the argument of the middle term's exponent in 4.6:

\[ d\mathbb{P} = \prod_\tau \frac{dx(\tau)}{\sqrt{4\pi D d\tau}}\; e^{-\int_0^t d\tau\, \frac{\dot x^2(\tau)}{4D}}\; e^{-k \frac{x^2(t) - x^2(0)}{4D} + \frac{kt}{2}}\; e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau} \quad (4.12) \]
Now we are ready to compute the propagator:

\[ W(x,t|x_0,0) = \int d\mathbb{P}(\{x(\tau)\})\, \delta(x(t) - x) = \quad (4.13) \]

\[ = \exp\left( -k \frac{x^2 - x_0^2}{4D} + \frac{kt}{2} \right) \big\langle e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau}\; \delta(x(t) - x) \big\rangle_W \quad (4.14) \]

No need to do anything else, since we have already computed the last term for D = 1/4 in 2.70; we just need to send t \to 4Dt and k \to k/4D:

\[ \big\langle e^{-\frac{k^2}{4D} \int_0^t x^2(\tau)\, d\tau}\; \delta(x(t)-x) \big\rangle_W = \sqrt{\frac{k}{4\pi D \sinh kt}}\; e^{-\frac{k x^2}{4D} \coth kt} \quad (4.15) \]

Putting everything together and, for simplicity, setting x_0 = 0:

\[ W(x,t|0,0) = e^{-\frac{k x^2}{4D} + \frac{kt}{2} - \frac{k x^2}{4D} \coth kt}\; \sqrt{\frac{k}{4\pi D \sinh kt}} = \quad (4.16) \]

\[ = \sqrt{\frac{k}{2\pi D (1 - e^{-2kt})}}\; e^{-\frac{k x^2}{2D(1 - e^{-2kt})}} \quad (4.17) \]

\[ \partial_t W(x,t|x_0,0) = \partial_x \big[ k x\, W(x,t|x_0,0) + D\, \partial_x W(x,t|x_0,0) \big] \quad (4.18) \]

which yields:

\[ \frac{D}{k} = \frac{k_B T}{m\omega^2} \quad (4.21) \]
\[ D = \frac{k_B T}{\gamma} = \frac{k_B T}{6\pi\eta R} \quad (4.22) \]
To derive the full solution we lean on the Fourier transform, which poses the Fokker-Planck equation in "momentum" space. In order to do this we need to know how to transform a function like x f(x):

\[ \mathcal{F}\big( x f(x) \big)(p) = \int dx\, x f(x)\, e^{-ipx} = \int dx\, f(x)\, \big( i\partial_p e^{-ipx} \big) = i\partial_p \tilde f(p) \]

The first differential equation is \frac{dt}{ds} = 1 and gives us t = s + s_0; we freely choose s_0 = 0. The second ODE is:

\[ \frac{dp(s)}{ds} = k\, p(s) \]

Its solution is straightforward, and since t = s it is:

\[ p(t) = p_0\, e^{kt} \]

\[ \tilde W(p,t) = \tilde W(p_0, 0)\; e^{-\frac{D p^2}{2k}\left(1 - e^{-2kt}\right)} \]

To simplify the notation set for now a(t) = \frac{D(1 - e^{-2kt})}{k} and inverse-Fourier transform \tilde W(p, a(t)) = e^{-ipx_0}\, e^{-\frac{a(t) p^2}{2}} (it's easy, being gaussian):

\[ W(x,t|x_0,0) = \sqrt{\frac{k}{2\pi D (1 - e^{-2kt})}}\; e^{-\frac{k (x - x_0)^2}{2D(1 - e^{-2kt})}} \quad (4.25) \]

\[ x_t \sim \mathcal{N}\!\left( x_0,\; \frac{D(1 - e^{-2kt})}{k} \right) \quad (4.26) \]

It's worth studying the limits of the variance:

\[ t \to \infty: \quad \mathrm{Var}(x) \to \frac{D}{k} \qquad \text{(stationary regime)} \]
where:

\[ d\mathbb{P}(B_1^\alpha, ..., B_N^\alpha) = \prod_{i=1}^N \frac{dB_i^\alpha}{\sqrt{2\pi \Delta t_i}} \exp\left( -\sum_{i=1}^N \frac{(\Delta B_i^\alpha)^2}{2\Delta t_i} \right) \quad (4.28) \]

The properties of the measure are easy to derive:

\[ \overline{dB^\alpha(\tau)} = 0 \quad (4.29) \]
\[ \overline{dB^\alpha(\tau)\, dB^\beta(\tau)} = \delta^{\alpha\beta}\, d\tau \quad (4.30) \]
\[ \overline{dB^\alpha(\tau)\, d\tau} = 0 \quad (4.31) \]
\[ \overline{dB^{\alpha_1} \cdots dB^{\alpha_k}} = 0 \quad \forall k > 2 \quad (4.32) \]
It is also easy to write down a multidimensional Langevin equation:

\[ dx^\alpha(t) = f^\alpha(x(t), t)\, dt + \sqrt{2 D^\alpha}\, dB^\alpha \quad (4.33) \]

\[ dh = dt \left[ \partial_t h + \sum_{\alpha=1}^d \left( \partial_\alpha h\, f^\alpha + D^\alpha\, \partial_\alpha^2 h \right) \right] + \sum_{\alpha=1}^d \sqrt{2 D^\alpha}\, \partial_\alpha h\, dB^\alpha(t) \quad (4.35) \]

And for h not depending on time the Fokker-Planck counterpart is:

\[ \partial_t w(x,t|x_0,t_0) = \sum_{\alpha=1}^d \left[ -\partial_\alpha (f^\alpha w) + D^\alpha\, \partial_\alpha^2 w \right] \quad (4.37) \]

The last thing we need to derive is the multidimensional Wiener measure for x(t); the Jacobian is:

\[ J = \prod_{\alpha=1}^d (2 D^\alpha)^{N/2} \quad (4.38) \]
4.3 The Fokker-Planck equation with velocity

As an application of the multidimensional Wiener path integral consider a generalized Langevin equation:

\[ m \dot{\mathbf{v}}(t) = -\gamma \mathbf{v} + \mathbf{F}(\mathbf{r}) + \gamma\sqrt{2D}\, \boldsymbol{\xi} \quad (4.41) \]

and cast it in a two-equation differential form:

\[ \begin{cases} d\mathbf{v}(t) = \left( -\frac{\gamma \mathbf{v}}{m} + \frac{\mathbf{F}(\mathbf{r})}{m} \right) dt + \frac{\gamma\sqrt{2D}}{m}\, d\mathbf{B} \\ d\mathbf{r}(t) = \mathbf{v}\, dt \end{cases} \quad (4.42) \]

This is a six-dimensional system of Brownian particles in the variables x = (v_x, v_y, v_z, x, y, z)^T with diffusion constants:

\[ D^\alpha = \begin{cases} \frac{\gamma^2 D}{m^2} & \alpha = 1,2,3 \\ 0 & \alpha = 4,5,6 \end{cases} \quad (4.43) \]

Since the diffusion constants for the position variables (x,y,z) go to zero, the gaussian-like measures involving (x,y,z) collapse into delta functions (the zero-variance limit of the gaussian distribution is a delta function). The Wiener measure is:

\[ d\mathbb{P}(\{x(\tau)\}) = \prod_\tau \frac{d^3 v\; \delta^3(\dot{\mathbf{r}}(\tau) - \mathbf{v}(\tau))}{(4\pi D d\tau)^{3/2} (d\tau)^3} \exp\left( -\frac{1}{4D} \int_0^t d\tau \left( \dot{\mathbf{v}} + \frac{\gamma \mathbf{v}}{m} - \frac{\mathbf{F}(\mathbf{r})}{m} \right)^2 \right) \quad (4.44) \]

which leads to a Fokker-Planck equation:

\[ \partial_t w(\mathbf{v}, \mathbf{r}, t|\mathbf{v}_0, \mathbf{x}_0) = \nabla_v \cdot \left[ \left( \frac{\gamma \mathbf{v}}{m} - \frac{\mathbf{F}(\mathbf{r})}{m} \right) w + \frac{\gamma^2 D}{m^2} \nabla_v w \right] + \nabla_r \cdot (-\mathbf{v}\, w) \quad (4.45) \]

The stationary version of this equation is called Kramers' equation; as expected, its solution is the Maxwell-Boltzmann distribution with potential V(r), given that F(r) = -\nabla V(r):

\[ W^* = \frac{1}{Z^*}\; e^{-\beta\left( \frac{m v^2}{2} + V(\mathbf{r}) \right)} \quad (4.46) \]

which gives us (calling R the radius of the brownian particle):

\[ D = \frac{1}{\gamma \beta} = \frac{k_B T}{6\pi\eta R} \quad (4.47) \]
Chapter 5
The second term in the exponential's argument is a standard integral; to deal with it consider a generic function h(r) and write its increment (as usual, this is in the Ito sense):

\[ \Delta h(\mathbf{r}) = \Delta \mathbf{r} \cdot \nabla h(\mathbf{r}) + D \Delta t\, \nabla^2 h(\mathbf{r}) + O(\Delta t^{3/2}) \quad (5.6) \]

Consider the increment between r_N and r_0 and apply the previous formula:

\[ h(\mathbf{r}_N) - h(\mathbf{r}_0) = \sum_{i=1}^N \underbrace{\Delta h(\mathbf{r}_i)}_{= h(\mathbf{r}_i) - h(\mathbf{r}_{i-1})} = \sum_{i=1}^N \Delta \mathbf{r}_i \cdot \nabla h(\mathbf{r}_{i-1}) + D \Delta t \sum_{i=1}^N \nabla^2 h(\mathbf{r}_{i-1}) \quad (5.7) \]

In the continuum limit these two equations give:

\[ \int_0^t d_I \mathbf{r}(\tau) \cdot \nabla h(\mathbf{r}(\tau)) = h(\mathbf{r}) - h(\mathbf{r}_0) - D \int_0^t d\tau\, \nabla^2 h(\mathbf{r}(\tau)) \quad (5.8) \]
5.2.1 Proof 1

The first proof starts from the Feynman-Kac formula and derives the Bloch equation. Start by discretizing W_B using \psi_N:

\[ \psi_{N+1} = \int \prod_{i=1}^{N+1} \frac{dx_i}{\sqrt{4\pi D \epsilon}}\; e^{-\sum_{i=1}^{N+1} \frac{\Delta x_i^2}{4D\epsilon} - \epsilon \sum_{i=1}^{N+1} V_i}\; \delta(x_{N+1} - x) \quad (5.13) \]

Set z \equiv \frac{x_N - x}{\sqrt{2D\epsilon}}:

\[ \psi_{N+1}(x) = \int \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}\; \psi_N\big( x + z\sqrt{2D\epsilon} \big)\, e^{-\epsilon V(x)} \quad (5.16) \]

and expand around x (small z):

\[ \psi_{N+1}(x) = e^{-\epsilon V(x)} \int \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2} \left[ \psi_N(x) + \psi_N'(x)\, z\sqrt{2D\epsilon} + \psi_N''(x)\, z^2 D \epsilon + \dots \right] = \quad (5.17) \]

\[ = \big( 1 - \epsilon V(x) + O(\epsilon^2) \big) \big( \psi_N(x) + \epsilon D \psi_N'' \big) + O(\epsilon^2) = \quad (5.18) \]

\[ = \psi_N(x) + \epsilon \left[ D \psi_N'' - V \psi_N \right] + O(\epsilon^2) \quad (5.19) \]

from which:

\[ \frac{\psi_{N+1}(x) - \psi_N(x)}{\epsilon} = D \partial_x^2 \psi_N(x) - V(x)\, \psi_N(x) \quad (5.20) \]

For N \to \infty we get the Bloch equation 5.12:

\[ \partial_t W_B = \left[ D \partial_x^2 - V(x) \right] W_B \quad (5.21) \]
5.2.2 Proof 2

The second proof is based on two observations.
Observation 1: W_B follows the ESCK relation; let's prove it. Divide the time interval in which we observe the motion, (t_0, t), into N + N' steps. The discrete propagator from t_0 up to the instant t_{N'} is:

\[ W^{(N')}(x', t'|x_0, t_0) \equiv \int \prod_{i=1}^{N'} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=1}^{N'} \frac{(\Delta x_i)^2}{4D\Delta t_i} - \sum_{i=1}^{N'} V(x_i)\, \Delta t_i}\; \delta(x' - x_{N'}) \quad (5.22) \]

The one from t_{N'} to t_N is:

\[ W^{(N)}(x, t|x', t') \equiv \int \prod_{i=N'+1}^{N+N'} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=N'+1}^{N+N'} \frac{(\Delta x_i)^2}{4D\Delta t_i} - \sum_{i=N'+1}^{N+N'} V(x_i)\, \Delta t_i}\; \delta(x - x_{N+N'}) \quad (5.23) \]

If W_B satisfied the ESCK relation, we would have that W^{(N+N')} \to W_B as N, N' \to \infty and it would be the integrated product of 5.22 and 5.23:

\[ W^{(N+N')}(x,t|x_0,t_0) = \int dx'\, W^{(N)}(x,t|x',t')\, W^{(N')}(x',t'|x_0,t_0) = \dots = \quad (5.24) \]

\[ = \int \prod_{i=1}^{N+N'} \frac{dx_i}{\sqrt{4\pi D \Delta t_i}}\; e^{-\sum_{i=1}^{N+N'} \frac{(\Delta x_i)^2}{4D\Delta t_i} - \sum_{i=1}^{N+N'} V(x_i)\, \Delta t_i}\; \delta(x - x_{N+N'}) \quad (5.25) \]
Given these two observations, set in the previous equation h(\tau) = V(x(\tau)) and introduce a delta function \delta(x - x(t)):

\[ \delta(x - x(t))\, e^{-\int_{t_0}^t d\tau\, V(x(\tau))} = \delta(x - x(t)) - \delta(x - x(t)) \int_{t_0}^t d\tau\, V(x(\tau))\, e^{-\int_{t_0}^\tau ds\, V(x(s))} \quad (5.27) \]

By taking averages over the Wiener path integral we recognize the appearance of W_B as defined in the Feynman-Kac formula 5.11:

\[ W_B(x,t|x_0,t_0) = \big\langle \delta(x - x(t))\, e^{-\int_{t_0}^t d\tau\, V(x(\tau))} \big\rangle_W = \]
\[ = \langle \delta(x - x(t)) \rangle_W - \int_{t_0}^t d\tau\, \big\langle V(x(\tau))\, e^{-\int_{t_0}^\tau ds\, V(x(s))}\, \delta(x - x(t)) \big\rangle_W \quad (5.28) \]

In the first term, \langle \delta(x - x(t)) \rangle_W, we recognize W(x,t|x_0,t_0), i.e. the solution of the diffusion equation with W(x,t_0|x_0,t_0) = \delta(x - x_0). The second term can be cast to:

\[ \int_{t_0}^t d\tau\, \big\langle V(x(\tau))\, e^{-\int_{t_0}^\tau ds\, V(x(s))}\, \delta(x - x(t)) \big\rangle_W = \int_{t_0}^t d\tau \int dx'\, W_B(x', \tau|x_0, t_0)\, V(x')\, W(x,t|x',\tau) \quad (5.29) \]

To summarize we arrive at:

\[ W_B(x,t|x_0,t_0) = W(x,t|x_0,t_0) - \int_{t_0}^t dt' \int dx'\, W(x,t|x',t')\, V(x')\, W_B(x',t'|x_0,t_0) \quad (5.30) \]

Take the time derivative of both sides:

\[ \partial_t W_B(x,t|x_0,t_0) = \partial_t W(x,t|x_0,t_0) - \int dx'\, W(x,t|x',t)\, V(x')\, W_B(x',t|x_0,t_0) \]
\[ - \int_{t_0}^t dt' \int dx'\, \partial_t W(x,t|x',t')\, V(x')\, W_B(x',t'|x_0,t_0) \quad (5.31) \]

\[ \partial_t W_B(x,t|x_0,t_0) = -\int dx'\, \delta(x-x')\, V(x')\, W_B(x',t|x_0,t_0) + D\partial_x^2 W(x,t|x_0,t_0) \]
\[ - \int_{t_0}^t dt' \int dx'\, D \partial_x^2 W(x,t|x',t')\, V(x')\, W_B(x',t'|x_0,t_0) \quad (5.32) \]
\[ \partial_t W_B(x,t|x_0,t_0) = -V(x)\, W_B(x,t|x_0,t_0) + D\partial_x^2 \underbrace{\left[ W(x,t|x_0,t_0) - \int_{t_0}^t dt' \int dx'\, W(x,t|x',t')\, V(x')\, W_B(x',t'|x_0,t_0) \right]}_{=\, W_B(x,t|x_0,t_0) \text{ from } 5.30} \]
Chapter 6
\[ L(x)\, G(x,y) = \delta^d(x - y) \quad (6.1) \]

for some function f. Using the properties of the delta function and 6.1:

\[ f(x) = \int \delta^d(x-y)\, f(y)\, d^d y = \int f(y)\, L(x)\, G(x,y)\, d^d y \quad (6.3) \]
given L(x)\, q(x) = 0. So the Green's function lets us express the solution of a differential equation as an integral.
Let's try to solve it for one particular case (d = 3):

\[ \big( D k^2 + \mu \big)\, \tilde G(k) = 1 \]
\[ \tilde G(k) = \frac{1}{D k^2 + \mu} \quad (6.7) \]

\[ G(r) = \frac{1}{D(2\pi)^3} \int d^3k\; \frac{e^{i\mathbf{k}\cdot\mathbf{r}}}{k^2 + \mu/D} = \]
\[ = \frac{1}{D(2\pi)^3} \int_0^{2\pi} d\theta \int_0^\infty dk\, \frac{k^2}{k^2 + \mu/D} \int_0^\pi d\phi\, \sin\phi\; e^{ikr\cos\phi} = \]
\[ = \frac{1}{2\pi^2 r D} \int_0^\infty dk\, \frac{k \sin kr}{k^2 + \mu/D} = \]
\[ = \frac{1}{4\pi^2 r D} \int_{-\infty}^\infty dk\, \frac{k\, e^{ikr}}{k^2 + \mu/D} \]

Closing the contour in the upper half plane:

\[ G(r) = \frac{1}{4\pi r D}\; e^{-\sqrt{\frac{\mu}{D}}\, r} \quad (6.8) \]

In the d-dimensional case we have a behaviour like:

\[ G(r) \sim \frac{1}{r^{d-2}}\; e^{-r/\xi} \quad (6.9) \]

where \xi = \sqrt{\frac{D}{\mu}}.
counts the number of configurations of the polymer with N building blocks. So the entropy is:

\[ S = k_B \log \Omega(r, N) \quad (6.10) \]

Defining w(r, N) as the distribution of the endpoint of the polymer with N blocks, we get:

\[ \langle r^2 \rangle = \int r^2\, w(r, N)\, d^3r \quad (6.11) \]

It will be useful to introduce the Fourier transform:

\[ G(q, N) = \int d^3r\; e^{i\mathbf{q}\cdot\mathbf{r}}\, w(r, N) \]

In the above equation w(r_i; 1) is the one-body distribution that fully describes the phantom polymer. Considering the chain with N + 1 blocks:

\[ w(r, N+1) = \int \prod_{i=1}^{N+1} d^3r_i\; \delta^3\Big( \sum_{i=1}^{N+1} \mathbf{r}_i - \mathbf{r} \Big) \prod_{i=1}^{N+1} w(r_i, 1) = \quad (6.13) \]

\[ = \int d^3r' \int \prod_{i=1}^N d^3r_i\, w(r_i, 1)\; \delta^3\Big( \sum_{i=1}^N \mathbf{r}_i - \mathbf{r}' \Big) \int d^3 r_{N+1}\, w(r_{N+1}, 1)\; \delta^3(\mathbf{r}_{N+1} + \mathbf{r}' - \mathbf{r}) \quad (6.14) \]

Recognize in the first part w(r', N):

\[ w(r, N+1) = \int d^3r'\, w(r', N)\, w(r - r', 1) \quad (6.15) \]
Now we use some facts. Since w(r, 1) is a p.d.f.:

\[ \int d^3r'\, w(r - r', 1) = \int d^3r'\, w(r', 1) = 1 \quad (6.17) \]

\[ w(r, N+1) = w(r, N) + \frac{l^2}{6}\, \nabla^2 w(r, N) + O(l^4) \quad (6.20) \]

Here l is a length defined via the one-body distribution:

\[ w(r, 1) = \frac{1}{4\pi l^2}\; \delta(|\mathbf{r}| - l) \quad (6.21) \]

indeed:

\[ \int d^3r\, w(r,1)\, r^2 = \frac{4\pi}{4\pi l^2} \int dr\, r^2\, \delta(r - l)\, r^2 = l^2 \quad (6.22) \]

For continuous values of N, calling D = \frac{l^2}{2d} with d = 3:

\[ \frac{\partial}{\partial N}\, w(r, N) = D \nabla^2 w(r, N) \quad (6.23) \]

which is just the diffusion equation. For initial condition w(r, 0) = \delta^d(r):

\[ w(r, N) = \left( \frac{1}{4\pi D N} \right)^{d/2} e^{-r^2/4DN} \quad (6.24) \]
The entropy is:

\[ S = -k_B\, \frac{r^2}{4DN} + N\text{-dependent constant} \quad (6.27) \]

If r' > r then:

\[ \Delta S = S(r') - S(r) < 0 \]

so stretching a rubber band decreases the entropy (i.e. there are fewer possible configurations).
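The scaling underlying these formulas, \langle r^2 \rangle = 2dDN = N l^2, can be checked by sampling freely jointed chains whose bonds follow the one-body distribution (6.21) (a sketch; chain length and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
N_blocks, n_chains, l = 300, 4000, 1.0

# Isotropic bonds of fixed length l, i.e. w(r,1) = delta(|r| - l)/(4 pi l^2)
u = rng.normal(size=(n_chains, N_blocks, 3))
u /= np.linalg.norm(u, axis=2, keepdims=True)
r_end = l * u.sum(axis=1)            # end-to-end vector of each chain

r2_mean = np.mean(np.sum(r_end**2, axis=1))
r2_exact = N_blocks * l**2           # = 2*d*D*N with D = l^2/6, d = 3
```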
We introduced a bra and a ket to stress the fact that we are expressing this operator in position space; for now, however, this is just formal. As in matrix multiplication, take a function f(r'); the action of A on f is:

\[ \int d^d r'\, A(\mathbf{r}, \mathbf{r}')\, f(\mathbf{r}') = -D \nabla_r^2 f(\mathbf{r}) + \omega f(\mathbf{r}) \quad (6.29) \]
dimensional matrix, we can "use" Wick's theorem to find expected values of infinite-dimensional gaussian integrals (path integrals...). Some examples:

\[ A^{-1}(x, x')_{d=1} = \int_{-\infty}^{\infty} \frac{dq}{2\pi}\; \frac{e^{iq(x-x')}}{D q^2 + \omega} = \frac{1}{2\sqrt{\omega D}}\; e^{-\sqrt{\frac{\omega}{D}}\, |x - x'|} \quad (6.33) \]

\[ A^{-1}(x, x')_{d=3} = \frac{1}{4\pi D |x - x'|}\; e^{-\sqrt{\frac{\omega}{D}}\, |x - x'|} \quad (6.34) \]

Another thing to point out is that we can recast the diffusion equation in a Schroedinger-like form. Define H_0 = -D\nabla^2; then:

\[ H = -D\nabla^2 + V \]

and:

\[ \langle r|\, H\, |r' \rangle = \big( -D\nabla_r^2 + V(r) \big)\, \delta^d(r - r') \]

The formal solution is:
R
for L such that A = dd rL . We obtain:
1
D (∇ϕ)2 + µϕ2 + V (r)ϕ2
L= (6.47)
2
\mu\varphi = -V(r)\varphi + D\nabla^2\varphi   (6.48)

which is the Bloch equation for polymers after a Laplace transform from the variable N to the conjugate variable \mu (so that \partial_N\varphi \to \mu\varphi). In the spirit of Wiener path integrals, let's try to compute the partition function for this action, aiming at a probabilistic interpretation:
Z = \int \prod_{r \in R^d} d\varphi(r)\, e^{-A}   (6.49)
r \in \Lambda = \{ a n : n \in Z^d,\ |n_\mu| < L/a \}

A_a = a^d \sum_{r, r' \in \Lambda} \varphi(r) A(r, r') \varphi(r') / 2 \equiv \frac{1}{2}\varphi A \varphi

from which:

Z_a = (\det A)^{-1/2} (2\pi)^{N/2}
Now we use the trick of the external field J(r) to compute correlation functions:

\Big\langle e^{\sum_{r \in \Lambda} J(r)\varphi(r)} \Big\rangle = \frac{1}{Z_a} \int D\varphi\, e^{-\frac{1}{2}\varphi A \varphi + \varphi J} = e^{\frac{1}{2} J A^{-1} J} \equiv \hat{Z}_a(J)   (6.51)
Now we can apply Wick's theorem for Gaussian variables: the odd-point correlation functions vanish, while for even points:

\langle \varphi(r_1) \ldots \varphi(r_{2k}) \rangle = \sum_{\text{all pairings of }\varphi} \langle \varphi(r_{i_1})\varphi(r_{j_1}) \rangle \ldots \langle \varphi(r_{i_k})\varphi(r_{j_k}) \rangle
which holds even for a \to 0. Now generalize and consider \varphi(r) = (\varphi_1, \ldots, \varphi_n), \varphi: R^d \to R^n:

A[\varphi] = \frac{1}{2} \int d^d r \int d^d r' \sum_{\alpha=1}^{d} \varphi_\alpha(r) A(r, r') \varphi_\alpha(r') = \sum_\alpha A[\varphi_\alpha]
from which:
Z^{(n)} = \int \prod_\alpha D\varphi_\alpha\, e^{-A[\varphi]} = \left( \int D\varphi_1\, e^{-A[\varphi_1]} \right)^n = Z^n   (6.52)

In the limit:

\lim_{n \to 0} Z^{(n)} = 1
\langle \varphi_1(r_1)\varphi_1(r_2) \rangle = \frac{1}{Z^n} \int \prod_\alpha D\varphi_\alpha\, e^{-A[\varphi]}\, \varphi_1(r_1)\varphi_1(r_2) =   (6.53)

= \frac{1}{Z^n} Z^{n-1} \int D\varphi_1\, \varphi_1(r_1)\varphi_1(r_2)\, e^{-A[\varphi_1]} =   (6.54)

= \frac{1}{Z} \int D\varphi_1\, \varphi_1(r_1)\varphi_1(r_2)\, e^{-A[\varphi_1]} = A^{-1}(r_1, r_2)   (6.55)
Thus:

A^{-1}(r_1, r_2) = \lim_{n \to 0} \int \prod_{\alpha=1}^{n} D\varphi_\alpha\, e^{-\frac{1}{2}\varphi A \varphi}\, \varphi_1(r_1)\varphi_1(r_2)   (6.56)

where:

\varphi A \varphi = \sum_\alpha \int d^d r\, d^d r'\, \varphi_\alpha(r) A(r, r') \varphi_\alpha(r')   (6.57)
w(r, N) = \int \prod_\tau \frac{d^d r(\tau)}{(4\pi D\, d\tau)^{d/2}}\, e^{-\int_0^N d\tau\, \frac{\dot{r}^2}{4D} - g \int_0^N d\tau_1 \int_0^N d\tau_2\, \delta^d(r(\tau_1) - r(\tau_2))}\, \delta^d(r(N) - r)   (6.58)
Define the beads density:

\rho(r) = \int_0^N d\tau_1\, \delta^d(r - r(\tau_1))   (6.59)

from which:

\int_0^N d\tau_1 \int_0^N d\tau_2\, \delta^d(r(\tau_1) - r(\tau_2)) = \int d^d r\, \rho^2(r)   (6.60)
In the end:

w(r, N) = \int D\sigma\, e^{-\frac{1}{4g}\int d^d r\, \sigma^2(r)}\, \Big\langle e^{-i\int_0^N d\tau\, \sigma(r(\tau))}\, \delta^d(r(N) - r) \Big\rangle_W   (6.66)
Using an n dimensional scalar field we obtain (see previous sections):

\tilde{w}(r, \mu) = \lim_{n \to 0} \int D\sigma \prod_{\alpha=1}^{n} D\phi_\alpha\, \exp\left( -\frac{1}{2} \int d^d r \left[ \frac{\sigma^2(r)}{2g} + D \sum_{\alpha=1}^{n} (\nabla\phi_\alpha(r))^2 + \mu\phi^2(r) + i\sigma(r)\phi^2(r) \right] \right) \phi_1(r)\phi_2(r') =

= \lim_{n \to 0} \int \prod_{\alpha=1}^{n} D\phi_\alpha\, \phi_1(r)\phi_2(r')\, e^{-H[\phi]}   (6.69)

where

H[\phi] = \int d^d r \left[ \frac{1}{2} D (\nabla\phi)^2 + \frac{1}{2}\mu\phi^2 + \frac{1}{2} g (\phi^2)^2 \right]   (6.70)
Chapter 7
m(T) = \lim_{h \to 0} \lim_{|\Lambda| \to \infty} \frac{\partial}{\partial h} \frac{\log Z}{|\Lambda|}   (7.3)

T(\sigma, \sigma') = \exp\left( K\sigma\sigma' + \frac{h(\sigma + \sigma')}{2} \right)   (7.5)
and rewrite the partition function 7.2:

Z = \sum_{\sigma_1 = \pm 1} \ldots \sum_{\sigma_N = \pm 1} \exp\Big( K(\sigma_1\sigma_2 + \ldots + \sigma_{N-1}\sigma_N + \sigma_N\sigma_1) + h(\sigma_1 + \ldots + \sigma_N) \Big) =

= \sum_{\sigma_1 = \pm 1} \ldots \sum_{\sigma_N = \pm 1} \exp\left( K \sum_i \sigma_i\sigma_{i+1} + \frac{h}{2} \sum_i (\sigma_i + \sigma_{i+1}) \right) =

= \sum_{\sigma_1 = \pm 1} \ldots \sum_{\sigma_N = \pm 1} \prod_{i=1}^{N} \exp\left( K\sigma_i\sigma_{i+1} + \frac{h}{2}(\sigma_i + \sigma_{i+1}) \right)

Introducing the bra's \langle\sigma| and ket's |\sigma\rangle and the matrix T:

Z^{(N)} = \mathrm{tr}\, T^N = \lambda_1^N + \lambda_2^N   (7.8)
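The transfer matrix result (7.8) is easy to verify directly. The sketch below (not part of the notes) compares tr T^N = λ₁^N + λ₂^N against the brute-force sum over all 2^N periodic spin configurations, for arbitrarily chosen values of K, h and a small N.

```python
import numpy as np
from itertools import product

K, h, N = 0.4, 0.2, 8

# Transfer matrix T(s, s') = exp(K s s' + h (s + s')/2), s, s' = ±1
s = np.array([1, -1])
T = np.exp(K * np.outer(s, s) + h * (s[:, None] + s[None, :]) / 2)

lam = np.linalg.eigvalsh(T)           # T is symmetric
Z_tm = (lam ** N).sum()               # tr T^N = λ1^N + λ2^N

# Brute-force sum over all 2^N periodic spin chains
Z_bf = 0.0
for sigma in product([1, -1], repeat=N):
    E = sum(K * sigma[i] * sigma[(i + 1) % N] + h * sigma[i] for i in range(N))
    Z_bf += np.exp(E)
```

The two computations agree to machine precision; the brute-force sum is only feasible for small N, which is exactly why the transfer matrix is useful.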
for A positive definite. Apply this transformation to the Ising model; in the
Hamiltonian the spins σi and σj are coupled, in general, through a matrix
Kij that is non zero if i and j are nearest neighbors or i = j:
-\beta H = \frac{1}{2} \sum_{i,j} K_{i,j}\sigma_i\sigma_j + h \sum_i \sigma_i - \frac{K_0 |\Lambda|}{2}   (7.10)
The additional constant term compensates for the diagonal K_{i,i} contributions. Using 7.9 we can write:
\exp\left( \frac{1}{2} \sum_{x,y} K_{x,y}\sigma_x\sigma_y \right) = \frac{1}{\sqrt{\det K}\,(2\pi)^{|\Lambda|/2}} \int \prod_x d\varphi_x\, \exp\left( -\frac{1}{2} \sum_{x,y \in \Lambda} \varphi_x K^{-1}_{x,y}\varphi_y + \sum_{x \in \Lambda} \sigma_x\varphi_x \right)   (7.11)
The partition function becomes:

Z_I = \frac{e^{-K_0|\Lambda|/2}}{\sqrt{\det K}\,(2\pi)^{|\Lambda|/2}} \sum_\sigma \int \prod_{x \in \Lambda} d\varphi_x\, \exp\left( -\frac{1}{2} \sum_{x,y \in \Lambda} \varphi_x K^{-1}_{x,y}\varphi_y + \sum_{x \in \Lambda} \sigma_x(\varphi_x + h) \right)   (7.12)

= \text{const} \int \prod_{x \in \Lambda} d\varphi_x\, \exp\left( -\frac{1}{2} \sum_{x,y \in \Lambda} \varphi_x K^{-1}_{x,y}\varphi_y \right) \prod_{x \in \Lambda} \sum_{\sigma_x = \pm 1} e^{\sigma_x(h + \varphi_x)} =   (7.13)

= \text{const} \int \prod_{x \in \Lambda} d\varphi_x\, \exp\left( -\frac{1}{2} \sum_{x,y \in \Lambda} \varphi_x K^{-1}_{x,y}\varphi_y + \sum_{x \in \Lambda} \log\cosh(h + \varphi_x) \right)   (7.14)

Define now:

H[\varphi] = \frac{1}{2} \sum_{x,y \in \Lambda} \varphi_x K^{-1}_{x,y}\varphi_y - \sum_{x \in \Lambda} \log\cosh(h + \varphi_x)   (7.15)

such that:

Z_I = \text{const} \int \prod_x d\varphi_x\, \exp(-H[\varphi])   (7.16)
x
We are ready to use the saddle point approximation, calling ϕ̄ the stationary
point of the exponential argument:
\partial_{\varphi_x} H[\varphi] = 0 = \sum_{y \in \Lambda} K^{-1}_{x,y}\bar{\varphi}_y - \tanh(\bar{\varphi}_x + h)   (7.17)

\partial_{\varphi_x}\partial_{\varphi_y} H[\varphi] = K^{-1}_{x,y} - \delta_{x,y}\frac{1}{\cosh^2(\bar{\varphi}_x + h)}   (7.18)
First we look for uniform solutions, i.e. \bar{\varphi}_x = \tilde{\varphi}\ \forall x; since:

K_{x,y} = \begin{cases} K_0 & x = y \\ K & x, y \text{ nearest neighbors} \\ 0 & \text{otherwise} \end{cases}   (7.19)

the sum \sum_y K_{x,y} is independent of x:

\sum_y K_{x,y} = \underbrace{K_0}_{x=y} + \underbrace{2d}_{\text{coord. numb.}} K   (7.20)
f(m, h) = \frac{\tilde{K} m^2}{2} - \log\cosh(\tilde{K} m + h)   (7.26)
Expanding around small m and h we arrive at:

f(m, h) \approx \frac{\tilde{K}(1 - \tilde{K}) m^2}{2} + \frac{\tilde{K}^2 m^4}{12} - \tilde{K} m h   (7.27)

This is the form of the free energy one can obtain using the Landau approach. Setting K_0 = 2Kd we get:

\tilde{K} = 4dK = \frac{4dJ}{k_B T} \equiv \frac{T_c}{T}   (7.28)
For zero magnetic field, if \tilde{K} < 1 (T > T_c) the only equilibrium phase is \bar{m} = 0; for \tilde{K} > 1 (T < T_c), \bar{m} = 0 becomes a metastable state while two Z_2 symmetric solutions \pm m_0 are the stable equilibrium phases; indeed, minimizing f:

\partial_m f(m, 0) = \tilde{K} m \left( (1 - \tilde{K}) + \frac{\tilde{K} m^2}{3} \right) + o(m^4) = 0

we get:

• m = 0 for \tilde{K} < 1

• m = 0 and m = \pm\left( \frac{3(\tilde{K} - 1)}{\tilde{K}} \right)^{1/2} for \tilde{K} \geq 1
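The mean-field self-consistency behind this bifurcation, m = tanh(K̃ m), can be solved by simple fixed-point iteration. The sketch below (not in the notes) checks that slightly above the transition the numerical solution agrees with the small-m Landau prediction m₀ = √(3(K̃−1)/K̃), and that below the transition the iteration collapses to m = 0.

```python
import numpy as np

def magnetization(K_tilde, m0=0.5, iters=10000):
    """Fixed-point iteration of the mean-field equation m = tanh(K_tilde * m)."""
    m = m0
    for _ in range(iters):
        m = np.tanh(K_tilde * m)
    return m

K_tilde = 1.05                                       # just below Tc (K_tilde > 1)
m_num = magnetization(K_tilde)
m_landau = np.sqrt(3 * (K_tilde - 1) / K_tilde)      # small-m expansion result

m_para = magnetization(0.9)                          # K_tilde < 1: paramagnet
```

Close to K̃ = 1 the agreement is at the percent level; deeper in the ordered phase the quartic truncation of f is no longer accurate and the full tanh equation must be used.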
K = K_0 I + K\Delta

\Delta_{x,y} \equiv \delta_{|x-y|, a}

K^{-1} = \frac{1}{K_0}\left( I + \frac{K}{K_0}\Delta \right)^{-1} = \frac{1}{K_0} I - \frac{K}{K_0^2}\Delta + \ldots
Now we approximate H:

\frac{1}{2} \sum_{x,y \in \Lambda} \varphi_x K^{-1}_{x,y}\varphi_y = \frac{1}{2K_0} \sum_{x \in \Lambda} \varphi_x^2 - \frac{K}{2K_0^2} \sum_{x,y \in \Lambda} \varphi_x \Delta_{x,y} \varphi_y + \ldots =

= \frac{1}{2K_0} \sum_{x \in \Lambda} \varphi_x^2 - \frac{K}{K_0^2} \sum_{\langle x,y \rangle} \varphi_x \varphi_y + \ldots   (7.29)

\sum_{x \in \Lambda} \log\cosh(h + \varphi_x) = \sum_{x \in \Lambda} \left[ \frac{1}{2}\varphi_x^2 - \frac{1}{12}\varphi_x^4 + \varphi_x h + \ldots \right]   (7.30)
Since \Delta_{x,y} couples only nearest neighbor fields, we can expand the field \varphi_y around x along a direction \mu (one among the d lattice directions), with y = x + a\mu, for a \to 0:

\varphi_y = \varphi_x + a\partial_\mu\varphi_x + \frac{a^2}{2}\partial_\mu^2\varphi_x + \ldots   (7.31)

Thus: (TO FINISH)
Chapter 8
Chapter 9
i\hbar\partial_t \psi(x,t) = -\frac{\hbar^2}{2m}\partial_x^2\psi(x,t) + V(x,t)\psi(x,t)   (9.2)

\partial_t w(x,t) = D\partial_x^2 w(x,t) - V(x,t) w(x,t)   (9.3)

Stick to the free case (\hat{V} = 0) and formally set the diffusion coefficient D equal to \frac{i\hbar}{2m} \equiv D_{QM}:

\partial_t w(x,t) = \frac{i\hbar}{2m}\partial_x^2 w(x,t)

i\hbar\partial_t w(x,t) = -\frac{\hbar^2}{2m}\partial_x^2 w(x,t)
As we can see we get nothing but the Schrödinger equation; this procedure can also be applied to the propagator of the diffusion equation:

W_0(x,t|x_0,t_0) = \frac{1}{\sqrt{4\pi D(t - t_0)}}\, e^{-\frac{(x-x_0)^2}{4D(t-t_0)}} \to

\to K_0(x,t|x_0,t_0) = \sqrt{\frac{m}{2\pi\hbar i(t - t_0)}}\, e^{\frac{im(x-x_0)^2}{2\hbar(t-t_0)}}
Starting from the fact that W_0(x,t|x_0,t_0) can be expressed using a Wiener path integral, let's try to find an integral representation for K_0:

K_0(x,t|x_0,t_0) = \left. \int d_w x(\tau)\, \delta(x(t) - x) \right|_{D = D_{QM}} =

= \lim_{N\to\infty} \int \prod_{j=1}^{N} dx_j \sqrt{\frac{m}{2\pi\hbar i\,\Delta t_j}}\, e^{\frac{im}{2\hbar}\sum_{j=1}^{N} \frac{(x_j - x_{j-1})^2}{\Delta t_j}} =

= \int \prod_{\tau=t_0}^{t} dx(\tau) \sqrt{\frac{m}{2\pi\hbar i\, d\tau}}\, e^{\frac{i}{\hbar}\int_{t_0}^{t} d\tau\, \frac{m}{2}\dot{x}^2(\tau)}
We are left with one problem: while W_0(x,t|x_0,t_0) \in R^+ has a probabilistic interpretation (i.e. it is a p.d.f.), we can't say the same for K_0, since it is complex, and trying to use the probability amplitude interpretation we get \int_{-\infty}^{\infty} dx\, |K_0(x,t|x_0,t_0)|^2 = \infty. So even if we can make sense of K_0 using a path integral (called the Feynman path integral), we can't really say that it carries a rigorous probability measure as in diffusion. Nevertheless let's generalize this result to a Hamiltonian with a potential:

i\hbar\partial_t\psi = -\frac{\hbar^2}{2m}\partial_x^2\psi + V(x,t)\psi   (9.5)

Multiplying both sides by \frac{1}{i\hbar}:

\partial_t\psi = D_{QM}\partial_x^2\psi - \frac{i}{\hbar}V(x,t)\psi   (9.6)
we obtain the Bloch equation where the substitutions D \to D_{QM} and V \to \frac{i}{\hbar}V(x,t) were made. Thanks to this we can generalize the Feynman-Kac formula. Calling K(x,t|x_0,t_0) the propagator of the Schrödinger equation with the potential V, the modified FK formula reads:

K(x,t|x_0,t_0) = \Big\langle e^{-\int_{t_0}^{t} \frac{i}{\hbar} V(x(\tau),\tau)\, d\tau}\, \delta(x(t) - x) \Big\rangle_F   (9.7)

Writing out \langle\cdot\rangle_F explicitly:

K(x,t|x_0,t_0) = \int \prod_{\tau=t_0}^{t} dx(\tau) \sqrt{\frac{m}{2\pi\hbar i\, d\tau}}\, e^{\frac{i}{\hbar}\int_{t_0}^{t} d\tau \left[ \frac{m}{2}\dot{x}^2(\tau) - V(x(\tau),\tau) \right]}\, \delta(x(t) - x)   (9.8)
This expression is valid for any N (even in the limit N \to \infty). Let's go a bit further and consider a wavefunction at time t_0 = 0 and the same wavefunction at time t; then by the definition of the evolution operator:

|\psi(t)\rangle = U(t,0)|\psi(0)\rangle = e^{-i\frac{Ht}{\hbar}}|\psi(0)\rangle   (9.12)

\psi(x,t) = \langle x|\psi(t)\rangle = \int dx_0\, \langle x|e^{-i\frac{Ht}{\hbar}}|x_0\rangle \langle x_0|\psi(0)\rangle \equiv \int dx_0\, K(x,x_0,t)\, \langle x_0|\psi(0)\rangle   (9.13)
where we introduced the time evolution kernel K(x,x_0,t) (notice the similarity with the ESCK relation). Consider just K(x,x_0,t) and use the time slicing property, with \epsilon = t/N:

K(x,x_0,t) = \langle x|e^{-i\frac{Ht}{\hbar}}|x_0\rangle = \langle x| \underbrace{e^{-i\frac{H\epsilon}{\hbar}} \ldots e^{-i\frac{H\epsilon}{\hbar}}}_{N\ \text{times}} |x_0\rangle   (9.15)

Inserting between the sliced evolution operators N - 1 resolutions of the identity \int dx_j\, |x_j\rangle\langle x_j|, j = 1 \ldots N-1, we get for K(x,x_0,t):

\int dx_1 \ldots dx_{N-1}\, \langle x|e^{-i\frac{H\epsilon}{\hbar}}|x_{N-1}\rangle \langle x_{N-1}|e^{-i\frac{H\epsilon}{\hbar}}|x_{N-2}\rangle \langle x_{N-2}| \ldots |x_1\rangle \langle x_1|e^{-i\frac{H\epsilon}{\hbar}}|x_0\rangle   (9.16)

Again we stress that this expression is exact and still valid for N \to \infty. Calling x \equiv x_N we can write K(x,x_0,t) as:

K(x,x_0,t) = \int dx_1 \ldots dx_{N-1} \prod_{j=1}^{N} \langle x_j|e^{-i\frac{H\epsilon}{\hbar}}|x_{j-1}\rangle   (9.17)
Now restrict the Hamiltonian operator to \hat{H} = \frac{\hat{p}^2}{2m} + \hat{V}(x) and use the Trotter decomposition (related to the Baker-Campbell-Hausdorff formula), i.e. \forall A, B self adjoint operators and \forall s:

e^{s(A+B)} = \lim_{N\to\infty} \left( e^{sA/N} e^{sB/N} \right)^N   (9.18)

U(t,0) = \lim_{N\to\infty} \left( e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat{V}}{\hbar}} \right)^N   (9.19)

So as \epsilon = t/N \to 0:

e^{-i\frac{H\epsilon}{\hbar}} = e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat{V}}{\hbar}} + o(\epsilon^2)   (9.20)

from which:

K(x,x_0,t) = \lim_{N\to\infty} \int dx_1 \ldots dx_{N-1} \prod_{j=1}^{N} \langle x_j|e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}\, e^{-i\epsilon\frac{\hat{V}}{\hbar}}|x_{j-1}\rangle   (9.21)
Insert two identity resolutions in momentum space:

\langle x_j|e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}|x_{j-1}\rangle = \int dp_j\, dp_{j-1}\, \langle x_j|p_j\rangle \langle p_j|e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}|p_{j-1}\rangle \langle p_{j-1}|x_{j-1}\rangle =   (9.23)

= \int \frac{dp_j\, dp_{j-1}}{2\pi\hbar}\, e^{i\frac{p_j x_j}{\hbar}}\, \langle p_j|e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}|p_{j-1}\rangle\, e^{-i\frac{p_{j-1}}{\hbar}x_{j-1}} =   (9.24)

= \int \frac{dp_j\, dp_{j-1}}{2\pi\hbar}\, e^{i\frac{p_j x_j}{\hbar}}\, e^{-i\epsilon\frac{p_{j-1}^2}{2m\hbar}}\, \delta(p_j - p_{j-1})\, e^{-i\frac{p_{j-1}}{\hbar}x_{j-1}} =   (9.25)

= \int \frac{dp_j}{2\pi\hbar}\, e^{i\frac{p_j(x_j - x_{j-1})}{\hbar} - i\epsilon\frac{p_j^2}{2m\hbar}}   (9.26)
Using 1.15 we get:

\langle x_j|e^{-i\epsilon\frac{\hat{p}^2}{2m\hbar}}|x_{j-1}\rangle = \frac{1}{\sqrt{2\pi i\frac{\hbar\epsilon}{m}}}\, e^{\frac{im}{2\hbar\epsilon}(x_j - x_{j-1})^2}   (9.27)
So in a formal way:

K(x,x_0,t) = \int \prod_{\tau=0}^{t} \frac{dx(\tau)}{\sqrt{2\pi i\frac{\hbar}{m}d\tau}}\, e^{\frac{i}{\hbar}S[x(\tau)]}\, \delta(x(t) - x)   (9.31)
Now if we make the substitutions t \to -it and \frac{\hbar}{2m} \to D:

K(x,x_0,-it) = \int \prod_{\tau=0}^{t} \frac{dx(\tau)}{\sqrt{4\pi D\, d\tau}}\, e^{\frac{i}{\hbar}\int_0^t (-i\,d\tau)\left( -\frac{\hbar}{4D}\dot{x}^2(\tau) - V(x(\tau)) \right)}   (9.32)

= \int \prod_{\tau=0}^{t} \frac{dx(\tau)}{\sqrt{4\pi D\, d\tau}}\, e^{-\int_0^t d\tau\, \frac{1}{4D}\dot{x}^2(\tau)}\, e^{-\int_0^t d\tau\, \frac{1}{\hbar}V(x(\tau))}   (9.33)
Often (as we wrote) the trace is taken in the energy representation; here we
want to use the position one:
Z
Z(β) = Dq hq| e−βH |qi (9.35)
So:

Z(\beta) = \sum_{n=-\infty}^{\infty} e^{-\beta\frac{\hbar^2}{2m}\left( \frac{2\pi n}{L} \right)^2} \approx \frac{L}{2\pi} \int_{-\infty}^{\infty} dk\, e^{-\beta\frac{\hbar^2}{2m}k^2} = \frac{L}{\Lambda}   (9.37)
where we took the spatial dimensions of the theory equal to D − 1. Now use
a Wick rotation and send t → −ix0 :
t → −ix0 (9.42)
dt → −idx0 (9.43)
∂t → i∂0 (9.44)
We get:

Z = \int D\phi\, e^{-\int dx_0\, d^{D-1}x \left[ \frac{1}{2}(\partial_0\phi)^2 + \frac{1}{2}(\nabla\phi)^2 + \frac{1}{2}m^2\phi^2 \right]}   (9.45)

Notice that now the path integral has the same form as the polymer field theory one. In the end:

Z = \int D\phi\, e^{-\int d^D x \left[ \frac{1}{2}(\nabla\phi)^2 + \frac{1}{2}m^2\phi^2 \right]}   (9.46)

where now the gradient is taken in D dimensions. The two point function is the inverse of the operator (aka the Green's function):

-\nabla^2 + m^2   (9.47)
Chapter 10
• B-Death: (0, −1)
• PP: (0, −1)
• PP+Birth: (1, −1)
Before proceeding, let's rescale b, p_1, p_2 by N^{-1} and d_1, d_2 by N. The master equation's rates are:

Predator death: T_1 = T(n-1, m|n, m) = d_1 n

This is because we choose a predator with probability n/N, which we multiply by the death rate (we rescaled!).

Prey birth: T_2 = T(n, m+1|n, m) = 2b\frac{m}{N}(N - m - n)

We need to choose a prey and an empty slot, or vice versa, so we set b\left( \frac{m}{N}\frac{N-m-n}{N-1} + \frac{N-n-m}{N}\frac{m}{N-1} \right) and rescale b.

Prey death: T_3 = T(n, m-1|n, m) = 2p_2\frac{nm}{N} + d_2 m

Predator birth: T_4 = T(n+1, m-1|n, m) = 2p_1\frac{nm}{N}

Choose first the predators and then the preys (\frac{n}{N}\frac{m}{N-1}), or vice versa (\frac{m}{N}\frac{n}{N-1}), then rescale.

Now we would be ready to write and (try to) solve the master equation. However, here we take a simpler but very powerful approach, the Linear Noise Approximation à la Wallace.
by a Gaussian:

k(t + \tau) \approx k(t) + \sum_j N(\lambda_j, \lambda_j)\, v_j   (10.2)

where \lambda_j = \tau\Omega\, r_j\!\left( \frac{k}{\Omega} \right). Let's decompose the gaussian:

k(t + \tau) \approx k(t) + \tau\Omega \sum_j r_j v_j + \sqrt{\tau\Omega} \sum_j \sqrt{r_j}\, N(0,1)\, v_j   (10.3)

(recall: r_j is a function of \frac{k}{\Omega})

Expand now k in terms of a deterministic and a stochastic part:

k = \Omega x + \sqrt{\Omega}\,\xi   (10.4)

\frac{k}{\Omega} = x + \frac{1}{\sqrt{\Omega}}\xi   (10.5)

Now insert in the previous expansion and isolate the O(1) (deterministic) and O(\frac{1}{\sqrt{\Omega}}) (stochastic) terms (for simplicity we drop the arrow on vectors):

x(t + \tau) = x(t) + \tau \sum_j r_j(x(t))\, v_j   (10.6)

As \tau \to 0:

\frac{dx(t)}{dt} = \sum_j r_j(x(t))\, v_j   (10.7)
10.3 Continued: Volterra model
We apply the linear noise approximation to the previously introduced Volterra
model with the concentrations f1 and f2 of the predators and preys :
• r1 = d1 f1 v1 = (−1, 0)
• r2 = 2bf2 (1 − f1 − f2 ) v2 = (0, 1)
\frac{df_2}{dt} = \sum_j r_j v_j^{(2)} = 2bf_2(1 - f_1 - f_2) - 2p_2 f_1 f_2 - d_2 f_2 - 2p_1 f_1 f_2   (10.11)

This would be the Lotka-Volterra model if it didn't include the f_2^2 term. From this equation we should extract the stationary solutions (f_1^*, f_2^*) and plug them into the stochastic equation. The stochastic part needs the gradient of the rates:

• \nabla r_1 = (d_1, 0)
At the end we would end up with two 2x2 matrices A, B connecting \xi_1 and \xi_2 and the noisy part:

\frac{d\xi}{dt} = A\xi + B\eta

where \eta is a two dimensional white noise. Try to go for the power spectrum:

\tilde{\xi}_i = \frac{1}{i\omega}\left( A_{ik}\tilde{\xi}_k + B_{ik}\tilde{\eta}_k \right)

\tilde{\xi}_i\tilde{\xi}_j^* = -\frac{1}{\omega^2}\left( A_{il}\tilde{\xi}_l + B_{il}\tilde{\eta}_l \right)\left( A_{jk}\tilde{\xi}_k^* + B_{jk}\tilde{\eta}_k^* \right)

= -\frac{1}{\omega^2}\left( A_{il}A_{jk}\tilde{\xi}_l\tilde{\xi}_k^* + B_{il}B_{jk}\tilde{\eta}_l\tilde{\eta}_k^* + A_{il}B_{jk}\tilde{\xi}_l\tilde{\eta}_k^* + B_{il}A_{jk}\tilde{\eta}_l\tilde{\xi}_k^* \right)

\langle \tilde{\xi}_i\tilde{\xi}_j^* \rangle = -\frac{1}{\omega^2} \sum_{l,k} \left( A_{il}A_{jk}\langle \tilde{\xi}_l\tilde{\xi}_k^* \rangle + B_{il}B_{jk}\delta_{lk} \right)

P = -\frac{1}{\omega^2}\left( A P A^T + B B^T \right)
\langle \xi(t)\xi(t') \rangle = K\delta(t - t')

Here K is the strength of the noise; in case of a pure thermodynamic interpretation we would set K = 2k_B T. We can write the typical jump time:

\langle \tau \rangle_0 = \frac{2}{K} \int_{-c}^{c} dy\, \exp\left( \frac{2V_0(y)}{K} \right) \int_{-\infty}^{y} dz\, \exp\left( -\frac{2V_0(z)}{K} \right)   (10.13)
In the Kramers approximation we find the jumping rate W_0:

W_0 = \langle \tau \rangle_0^{-1} \approx \frac{1}{2\pi}\sqrt{|V''(0)|\,V''(c)}\, \exp\left( \frac{-2\Delta V}{K} \right)   (10.14)
where \Delta V = V(0) - V(c). We add a modulation to the potential:

V(x,t) = V_0(x) + V_1 \frac{x}{c}\sin\omega_s t   (10.15)

If \langle \tau \rangle_0 \gg t_s \equiv \frac{2\pi}{\omega_s} and \Delta V \gg V_1, the jump rates are:

W_{1,2} = \frac{1}{2\pi}\sqrt{|V''(0)|\,V''(c)}\, \exp\left( -\frac{2}{K}\left( \Delta V \pm V_1\sin\omega_s t \right) \right)   (10.16)
W_1 is the rate from -c to +c, W_2 vice versa. We move to a reduced description, studying only the positions \pm c:

\dot{p}_1 = -W_1(t)p_1 + W_2(t)p_2
\dot{p}_2 = W_1(t)p_1 - W_2(t)p_2   (10.17)

where p_1(t) is the probability of being at x = -c, p_2(t) at x = +c. Since p_2(t) = 1 - p_1(t):

\dot{p}_1 = -\underbrace{(W_1(t) + W_2(t))}_{\equiv W(t)}\, p_1(t) + W_2(t)   (10.18)
Chapter 11
Disordered systems
11.1 Spin Glasses

Given that J_{ij} > 0 (ferromagnetic interaction), we know that at low temperature almost all spins are in the up or down state (the ground state is doubly degenerate), while at high temperature the system is in a disordered (paramagnetic) phase. As usual, the order parameter is \langle m \rangle, the magnetization.
Differently from the Ising model, if the hamiltonian of the system, such as 11.1, has J_{ij} quenched and either positive (ferromagnetic) or negative (antiferromagnetic), we say that the system is a spin glass; because of this, in general, we can't have huge positive or negative domains in the ground state, since it is almost impossible to satisfy positive and negative interactions simultaneously. Indeed an Ising model has a global minimum of the energy, while a spin glass has a lot of local minima. Take for example 3 spins: the first two interact ferromagnetically, the third and the first antiferromagnetically, so one cannot satisfy equal and opposite orientations at the same time. In a spin glass we take the interaction term to be picked from a probability distribution, so, in some sense, our system is disordered. This kind of disorder appears even in machine learning and in binary SAT problems (logical equations). In a system like this we have to deal with two kinds of averages, the one over the
disorder and the ensemble average. For an observable A, already averaged over the ensemble, we define:

\bar{A} = \text{average over disorder} = \int dJ\, p(J)\, \langle A \rangle_J   (11.2)

\langle A \rangle_J = \frac{1}{Z_N(J)} \int D\sigma\, A(\sigma)\, e^{-\beta H(\sigma;J)}   (11.3)

Again, the first equation is the disorder-averaged A while the second is the ensemble average of A, given a realization of J. Moving to the free energy:

F_N(J) = -\frac{1}{\beta N}\log Z_N(J)   (11.4)
we ask ourselves: how does F change with J? The answer lies in the concept of self averaging quantities! We need to compute the relative variance as N \to \infty for a quantity A: if it goes to zero, A is self averaging:

\frac{\overline{A^2} - (\bar{A})^2}{(\bar{A})^2} \to 0 \quad \text{as } N \to \infty
The free energy is a self averaging quantity (because of the log dependence):

F_N(J) = -\frac{1}{\beta N}\log Z_N(J) \to F_\infty(\beta)   (11.5)

while we don't know about the partition function (exponential dependence on the noise). Because of the presence of two averages, we can define two kinds of free energies, the annealed F_A (usually simpler to compute but gives some wrong results) and the quenched F_Q (harder but realistic):

F_Q = -\frac{1}{\beta N} \int dJ\, p(J) \log \int d\sigma\, e^{-\beta H(\sigma;J)} = -\frac{1}{\beta N} \int dJ\, p(J) \log Z_N(J) = -\frac{1}{\beta N}\overline{\log Z_N}   (11.6)

F_A = -\frac{1}{\beta N}\log \int dJ\, p(J) \int d\sigma\, e^{-\beta H(\sigma;J)} = -\frac{1}{\beta N}\log \overline{Z_N}   (11.7)

In the annealed version we first average over the disorder and then compute the free energy; in the quenched version it is vice versa.
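The inequality F_A ≤ F_Q (a consequence of Jensen's inequality, since log of an average exceeds the average of the log) can be seen numerically on a toy model that is not in the notes: an open Ising chain with Gaussian random couplings, whose partition function is exactly Z = 2^N ∏_i cosh(βJ_i); chain length and the number of disorder samples are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, beta, n_dis = 6, 1.0, 400

def logZ(J):
    """Exact log Z of an open Ising chain: Z = 2^N * prod_i cosh(beta * J_i)."""
    return N * np.log(2) + np.log(np.cosh(beta * J)).sum()

Js = rng.normal(size=(n_dis, N - 1))                  # quenched Gaussian couplings
logZs = np.array([logZ(J) for J in Js])

F_quenched = -logZs.mean() / (beta * N)               # average of log Z
F_annealed = -np.log(np.mean(np.exp(logZs))) / (beta * N)   # log of average Z
```

The annealed free energy always lies below (or equals) the quenched one, reflecting that the annealed average lets the disorder "equilibrate" together with the spins.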
11.2 Replica trick
There exists a famous trick to compute the quenched version of the free energy, called the replica trick; it consists in computing the disorder-averaged partition function for n replicas of the system under exam and taking a proper limit:

\log Z = \lim_{n \to 0} \frac{Z^n - 1}{n}   (11.8)

The n replicas partition function is:

\overline{Z^n} = \int D\sigma_1 \ldots D\sigma_n\, \overline{e^{-\beta H(\sigma_1;J)} \ldots e^{-\beta H(\sigma_n;J)}}   (11.9)

(we assume that the limit doesn't depend on which replica we average over).
w_\alpha \equiv \frac{Z_\alpha}{Z} = \frac{\int_\alpha d\sigma\, e^{-\beta H(\sigma)}}{Z}   (11.12)
For Ising at T \to 0 we get two pure states \alpha = + and \alpha = -:

w_\pm = 1/2, \qquad \langle\sigma\rangle_+ > 0, \qquad \langle\sigma\rangle_- < 0

\langle\sigma\rangle = \sum_\alpha w_\alpha \langle\sigma\rangle_\alpha = 0
11.4 Clustering
Take two points i, j such that |i - j| \to \infty: we expect \langle\sigma_i\sigma_j\rangle \to \langle\sigma_i\rangle\langle\sigma_j\rangle only for pure states. Indeed for the Ising model at T < T_c:

\langle\sigma_i\sigma_j\rangle = \frac{1}{2}\langle\sigma_i\sigma_j\rangle_+ + \frac{1}{2}\langle\sigma_i\sigma_j\rangle_- \approx \frac{1}{2}\langle\sigma\rangle_+^2 + \frac{1}{2}\langle\sigma\rangle_-^2 \neq 0   (11.13)

Here it is easy to identify a positive or negative domain, so the magnetization is a good order parameter. For spin glasses it is not: the magnetization is not so meaningful, since the interaction is either ferromagnetic or anti-ferromagnetic. Moreover we have to deal with ergodicity breaking, which depends on the dimensionality of the system; in the mean field case d \to \infty we find very different results from d < \infty: in the former case you find metastable states the system cannot possibly leave, since they have infinitely high barriers as N \to \infty (a metastable state is a local minimum).
11.5 Overlaps
In this section we introduce the concept of an overlap, as a measure of similarity between states. Take a frozen (= quenched) disorder J; at low temperature we find configurations with definite spin values but no global magnetization, so we define a new order parameter, called the Edwards-Anderson parameter:

q_{EA} \equiv \frac{1}{N}\sum_{i=1}^{N} \langle\sigma_i\rangle^2   (11.14)
Notice that q_{EA} is non zero whenever the local magnetizations are non zero, even if the global magnetization vanishes. For two configurations \sigma, \tau we introduce the mutual overlap:
q_{\sigma\tau} \equiv \frac{1}{N}\sum_{i=1}^{N} \sigma_i\tau_i   (11.15)

This quantity compares two configurations: for the Ising model at low temperature it is \pm 1 for equal or opposite states, otherwise close to zero. The self overlap is:

q_{\sigma\sigma} = \frac{1}{N}\sum_{i=1}^{N} \sigma_i\sigma_i   (11.16)
which is 1 for the Ising model. Instead, for a system with pure states \alpha, \gamma (i.e. split Gibbs measure), the overlap is:

q_{\alpha\gamma} = \frac{1}{N}\sum_{i=1}^{N} \langle\sigma_i\rangle_\alpha \langle\sigma_i\rangle_\gamma =   (11.17)

= \frac{1}{N}\sum_{i=1}^{N} \frac{1}{Z_\alpha}\int_\alpha D\sigma\, \sigma_i\, e^{-\beta H(\sigma)}\, \frac{1}{Z_\gamma}\int_\gamma D\tau\, \tau_i\, e^{-\beta H(\tau)} =   (11.18)

= \frac{1}{Z_\alpha Z_\gamma}\int_\alpha D\sigma \int_\gamma D\tau\, e^{-\beta H(\sigma)} e^{-\beta H(\tau)}\, q_{\sigma\tau}   (11.19)
11.6 Overlap distribution
In spin glasses we find a huge number of low temperature pure states; it would be useful to have a distribution of the overlap between replicas:

p(q) = \frac{1}{Z^2}\int D\sigma \int D\tau\, e^{-\beta H(\sigma)} e^{-\beta H(\tau)}\, \delta(q - q_{\sigma\tau}) =

= \sum_{\alpha\gamma} w_\alpha w_\gamma \frac{1}{Z_\alpha}\int_\alpha D\sigma\, \frac{1}{Z_\gamma}\int_\gamma D\tau\, e^{-\beta H(\sigma)} e^{-\beta H(\tau)}\, \delta(q - q_{\sigma\tau}) =

= \sum_{\alpha\gamma} w_\alpha w_\gamma\, \delta(q - q_{\alpha\gamma})

For the Ising model we have four possible realizations of q that are equally distributed (w_+ = w_- = 1/2):

q_{++} = \frac{1}{N}\sum_{i=1}^{N} \langle\sigma_i\rangle_+^2 = \frac{1}{N}\sum_{i=1}^{N} m_i^2 = m^2

q_{--} = m^2, \qquad q_{+-} = q_{-+} = -m^2

p(q) = \frac{1}{2}\delta(q - m^2) + \frac{1}{2}\delta(q + m^2)
11.7 p-Spin spherical model

Spins are \sigma_i \in R. To keep the energy finite one introduces a spherical constraint:

\sum_{i=1}^{N} \sigma_i^2 = N

Notice that q_{\sigma\sigma} = 1. The p-body interaction J runs over all the possible combinations of p spins: this is a mean field model. Moreover, J is taken from a gaussian distribution:

p(J) \propto \exp\left( -\frac{1}{2}\frac{J^2}{J_v^2} \right)
for J_v^2 \equiv \frac{p!}{2N^{p-1}}: this gives H \propto N, as needed:

p(J) \propto \exp\left( -\frac{J^2 N^{p-1}}{p!} \right)

We now try to solve the system via the so-called replica symmetric approach. We start with the annealed calculation, disregarding any finite normalization. As a notation set p = 3, but everything carries over to larger p. Remember:

F_A = -\frac{1}{\beta N}\log \bar{Z}_N
The partition function is:

\bar{Z} = \int D\sigma \int \prod_{i<j<k} dJ_{ijk}\, \exp\left( -J_{ijk}^2\frac{N^{p-1}}{p!} + J_{ijk}\beta\sigma_i\sigma_j\sigma_k \right) =   (11.22)

= \int D\sigma\, \exp\left( \frac{\beta^2}{4N^{p-1}}\Big( \sum_i \sigma_i^2 \Big)^p \right) = \Omega(N)\exp\left( \frac{N\beta^2}{4} \right)   (11.23)
We started with coupled spins and uncoupled replicas and we ended up, after averaging over the disorder, with decoupled sites and coupled replicas (thanks to the mean field model). Notice the appearance of the overlap between two replicas, Q_{ab} = \frac{1}{N}\sum_{i=1}^{N} \sigma_i^a\sigma_i^b. We introduce:

1 = \int dQ_{ab}\, \delta\left( NQ_{ab} - \sum_i \sigma_i^a\sigma_i^b \right)   (11.29)
to get:

\overline{Z^n} = \int DQ_{ab}\, D\lambda_{ab}\, \exp[-N S(Q, \lambda)]   (11.31)

S(Q, \lambda) = -\frac{\beta^2}{4}\sum_{a,b} Q_{ab}^p - \sum_{a,b}\lambda_{ab}Q_{ab} + \frac{1}{2}\log\det(2\lambda_{ab})   (11.32)
We can perform the saddle point approximation since N \to \infty; however two problems arise:

• S(Q, \lambda) doesn't depend on n, so we need to take the thermodynamic limit first (risky)

• The number of independent pairs for Q_{ab} is n(n-1)/2, which as n \to 0 becomes negative

Let's proceed; we use the equality:

\frac{\partial}{\partial M_{ab}}\log\det M = (M^{-1})_{ab}

with M_{ab} = 2\lambda_{ab}. We must have:

\frac{\partial S(Q, \lambda)}{\partial\lambda_{ab}} = -Q_{ab} + (2\lambda_{ab})^{-1} = 0   (11.33)

Q_{ab} = (2\lambda_{ab})^{-1}   (11.34)

We get for the saddle point free energy:

F = -\lim_{n \to 0} \frac{1}{2\beta n}\left[ \frac{\beta^2}{2}\sum_{a,b} Q_{ab}^p + \log\det Q_{ab} \right]   (11.35)
and for the saddle point equation (stability condition):

0 = \frac{\partial F}{\partial Q_{ab}} = \frac{\beta^2 p}{2} Q_{ab}^{p-1} + Q^{-1}_{ab}   (11.36)

\frac{\beta^2 p}{2} q_0^{p-1} - \frac{q_0}{(1 - q_0)^2} = 0   (11.39)
11.9 Replica Trick and Physics
11.10 Replica Symmetry Breaking
We make the structure of Q_{ab} a bit more complicated; we break the replica permutation symmetry by saying that:

Q_{aa} = 1
Q_{ab} = q_0 \quad \text{for different pure states}   (11.42)
Q_{ab} = q_1 \quad \text{for the same pure state}
and one can show that:

\frac{1}{n}\sum_{ab} Q_{ab}^p = \sum_a Q_{ab}^p = 1 + (m-1)q_1^p + (n-m)q_0^p   (11.44)

-2\beta F = \frac{\beta^2}{2}\left[ 1 + (m-1)q_1^p - mq_0^p \right] + \frac{m-1}{m}\log(1 - q_1) + \frac{1}{m}\log(m(q_1 - q_0) + 1 - q_1) + \frac{q_0}{m(q_1 - q_0) + 1 - q_1}   (11.45)
To get further insights and results we suggest the paper "Spin-glass theory
for pedestrians" from Castellani and Cavagna, from which these notes on
spin glass theory are taken.
p(h_i) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-h_i^2/2\sigma^2}   (11.48)

Use the replica trick:

F = -k_B T\, \frac{\partial}{\partial n}\overline{Z^{(n)}}\Big|_{n=0}   (11.49)

Use index a for the replicas, a = 1 \ldots n:

Z^n = \sum_{\{S_i^a\}} \exp\left( \frac{\beta J}{N}\sum_a\sum_{ij} S_i^a S_j^a \right) \exp\left( \beta\sum_i\sum_a S_i^a h_i \right)   (11.50)
The overline stands for the average over disorder, which gives:

\overline{Z^n} = \sum_{\{S_i^a\}} \exp\left( \frac{\beta J}{N}\sum_a\sum_{ij} S_i^a S_j^a + \frac{\beta^2\sigma^2}{2}\sum_i \Big( \sum_a S_i^a \Big)^2 \right)   (11.51)
We employ now the saddle point approximation; assume for now x_a = x (a replica symmetric ansatz). The saddle point x_m satisfies:

\frac{d}{dx}\left[ -\frac{n x^2}{2} + \log Z_1(x) \right] = 0   (11.55)

n x_m = \sqrt{2\beta J}\, \frac{\sum_{S^a = \pm 1} \sum_a S^a\, e^{A[S, x_m]}}{\sum_{S^a = \pm 1} e^{A[S, x_m]}}   (11.56)
With:

A[S, x] = \sqrt{2\beta J}\, x \sum_a S^a + \frac{\beta^2\sigma^2}{2}\Big( \sum_a S^a \Big)^2   (11.57)

In the end x_m is an average over the replicas of the magnetization at a given site:

x_m = \sqrt{2\beta J}\, \Big\langle \frac{1}{n}\sum_a S^a \Big\rangle = \sqrt{2\beta J}\, m   (11.58)
Rewrite everything using m. Apply Hubbard-Stratonovich to the \left( \sum_a S^a \right)^2 term:

e^{A[S,m]} = \int \frac{ds}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2 + (2\beta J m + \beta\sigma s)\sum_a S^a}   (11.63)

The partition function becomes:

Z_1(m) = \int \frac{ds}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2 + n\log 2\cosh(2\beta J m + \beta\sigma s)}   (11.64)

In the limit n \to 0:

m = \int \frac{ds}{\sqrt{2\pi}}\, e^{-\frac{1}{2}s^2}\tanh(2\beta J m + \beta\sigma s)   (11.65)
Chapter 12
Levy flights
which is understood in the Fourier space:
we get:

P(x,t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, \frac{\sin kx_0}{kx_0}\, e^{-D_1 t|k| + ikx} =   (12.12)

= \frac{1}{2\pi}\int_{-\infty}^{\infty} dk \int_{-x_0}^{x_0} dy\, \frac{1}{2x_0}\, e^{-iky}\, e^{-D_1 t|k| + ikx} =   (12.13)

= \frac{1}{2x_0}\int_{-x_0}^{x_0} dy\, \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-D_1 t|k| + ik(x-y)} =   (12.14)

= \frac{1}{2x_0}\int_{-x_0}^{x_0} dy\, \frac{1}{\pi D_1 t}\, \frac{1}{1 + \left( \frac{x-y}{D_1 t} \right)^2} =   (12.15)

= \frac{1}{2x_0}\frac{1}{\pi}\left[ -\mathrm{atan}\,\frac{x-y}{D_1 t} \right]_{y=-x_0}^{y=x_0} =   (12.16)

= \frac{1}{2\pi x_0}\left[ \mathrm{atan}\,\frac{x + x_0}{D_1 t} - \mathrm{atan}\,\frac{x - x_0}{D_1 t} \right]   (12.17)
Chapter 13
Instantons
In the limit → 0 we use the saddle point approximation and we look for
trajectories x∗ that extremize S[x], i.e. we have to solve the Euler-Lagrange
equation:

\frac{d}{dt}\frac{\partial L}{\partial\dot{x}} = \frac{\partial L}{\partial x}   (13.4)

\ddot{x} - \frac{dF(x)}{dx}\dot{x} = -\frac{dF(x)}{dx}\left( \dot{x} - F(x) \right)   (13.5)

\ddot{x} = \frac{d}{dx}\frac{F(x)^2}{2} = \frac{d}{dx}\left( -V_{eff}(x) \right)   (13.6)
So given x^*(t) with x^*(t_i) = x_i and x^*(t_f) = x_f we get P(x_f, t_f|x_i, t_i) \approx e^{-S[x^*]/\epsilon} where, indeed, S[x^*] = \min_{x(t_i)=x_i,\, x(t_f)=x_f} S[x]. The solution x^* is called an instanton.

E = \frac{\dot{x}^2}{2} - \frac{F(x)^2}{2}   (13.9)
from which:

t - t_0 = \pm\int_{x_0}^{x} \frac{ds}{\sqrt{2E + F^2(s)}}   (13.10)

Set E = 0. Suppose a < 0 and b > 0; hence the force is positive if x > -\frac{a}{b}:

\sqrt{b}\,(t - t_0) = \pm\int_{x_0}^{x} \frac{ds}{\sqrt{s + \frac{a}{b}s^3}}   (13.11)
= (13.12)
For E = 0 (using 1 - \cos\alpha = 2\sin^2\frac{\alpha}{2}):

t - t_0 = \pm\int_{x_0}^{x} \frac{d\alpha}{\sqrt{2V_0(1 - \cos\alpha)}} =   (13.18)

= \pm\frac{1}{2\sqrt{V_0}}\int_{x_0}^{x} \frac{d\alpha}{\sin\frac{\alpha}{2}} =   (13.19)

= \pm\frac{1}{\sqrt{V_0}}\left[ \log\tan\frac{\alpha}{4} \right]_{x_0}^{x}   (13.20)

In the end we get:

t - t_0 = \pm\frac{1}{\sqrt{V_0}}\log\frac{\tan(x/4)}{\tan(x_0/4)}   (13.21)

\tan(x/4) = \tan(x_0/4)\, e^{\pm\sqrt{V_0}(t - t_0)}   (13.22)

x = 4\,\mathrm{atan}\, e^{\pm\sqrt{V_0}\, t + c}   (13.23)
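Assuming the zero-energy condition ẋ² = 2V₀(1 − cos x) implied by (13.9) and (13.18), the trajectory x(t) = 4 atan(e^{√V₀ t}) can be checked numerically by finite differences (a sketch, not in the notes; the value of V₀ is arbitrary).

```python
import numpy as np

V0 = 2.0
a = np.sqrt(V0)
t = np.linspace(-3, 3, 2001)
x = 4 * np.arctan(np.exp(a * t))       # candidate instanton, eq. (13.23) with c = 0

xdot = np.gradient(x, t)               # numerical time derivative
lhs = xdot ** 2
rhs = 2 * V0 * (1 - np.cos(x))         # zero-energy condition from (13.9)

err = np.max(np.abs(lhs - rhs)[5:-5])  # drop the edge points of the stencil
```

The trajectory interpolates between x = 0 at t → −∞ and x = 2π at t → +∞, i.e. it connects two neighboring minima of the periodic potential.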
\partial_x^2\phi = -m^2\phi - g\phi^3   (13.26)

The only stationary solution is \phi_0 = 0 (a minimum) if m^2 > 0; otherwise \phi_0 = 0 is a maximum and we have two minima, \pm\phi_0 = \pm\sqrt{-\frac{m^2}{g}}. Let's stick to the case \phi_0^2 = -\frac{m^2}{g}. We rewrite the potential as:

V_0(\phi) = \frac{m^2}{2}\phi^2 - \frac{m^2}{4\phi_0^2}\phi^4   (13.27)

Subtract from it the minimum, i.e. V_c = V_0(\phi_0) = \frac{\phi_0^2 m^2}{4}. At the end we get:

V = V_0(\phi) - V_c = \frac{m^2}{2}\phi^2 - \frac{m^2}{4\phi_0^2}\phi^4 - \frac{\phi_0^2 m^2}{4} = -\frac{m^2}{4\phi_0^2}(\phi - \phi_0)^2(\phi + \phi_0)^2   (13.28)

\pm\frac{1}{m}\tanh^{-1}\left( \frac{\phi}{\phi_0} \right)   (13.31)

Inverting, the instantons are:
Chapter 14
In quantum mechanics one is used to having:

a|n\rangle = \sqrt{n}\,|n-1\rangle   (14.2)
a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle   (14.3)
a^\dagger a|n\rangle = n|n\rangle   (14.4)
[a, a^\dagger] = 1   (14.5)

a|n\rangle = n|n-1\rangle   (14.6)
a^\dagger|n\rangle = |n+1\rangle   (14.7)
a^\dagger a|n\rangle = n|n\rangle   (14.8)
[a, a^\dagger] = 1   (14.9)
Appendices
Appendix A
hXi = µ (A.1)
transform of the distribution of X, also known as the characteristic function of X, which we denote by \varphi(k):

\varphi(k) = \langle e^{ikx} \rangle = \int dx\, e^{ikx} p(x)   (A.6)
In the end:

p(Y_n = y) = \frac{1}{2\pi}\int_R d\alpha\, e^{i\alpha\left( y - \frac{\sqrt{n}\mu}{\sigma} \right)}\, e^{\frac{i\alpha\mu\sqrt{n}}{\sigma} - \frac{\alpha^2}{2} + o(n^{-1/2})} =   (A.15)

= \frac{1}{2\pi}\int_R d\alpha\, e^{i\alpha y - \frac{\alpha^2}{2} + o(n^{-1/2})} \to \frac{1}{\sqrt{2\pi}}\, e^{-\frac{y^2}{2}}   (A.16)

Since \lim_{n\to\infty} Y_n(x) \sim N(0,1), from the properties of the normal distribution we find S_n \to N(n\mu, n\sigma^2) and \frac{1}{n}S_n \to N(\mu, \frac{\sigma^2}{n}).
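The central limit statement above is easy to illustrate numerically (a sketch, not in the notes): sum n uniform variables, normalize as Y_n = (S_n − nμ)/(σ√n), and check that the sample mean and variance match the standard normal; the sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 200, 10000
mu, sigma = 0.5, np.sqrt(1 / 12)          # mean and std of Uniform(0, 1)

X = rng.random((samples, n))
Y = (X.sum(axis=1) - n * mu) / (sigma * np.sqrt(n))   # normalized sums Y_n

mean, var = Y.mean(), Y.var()             # should approach 0 and 1
```

Higher moments and the full histogram converge to N(0, 1) as well, at the usual O(n^{-1/2}) rate for the distribution and O(samples^{-1/2}) for the sampling error.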
Appendix B
Circulant matrices
\begin{pmatrix} b_0 & b_1 & \cdots & \cdots & b_{m-1} \\ b_{m-1} & b_0 & b_1 & \cdots & b_{m-2} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \cdots & \ddots & \ddots & \vdots \\ b_1 & b_2 & \cdots & b_{m-1} & b_0 \end{pmatrix}   (B.1)

It's immediate to show that the product of the i-th matrix row and the vector v is just \lambda multiplied by \rho^{i-1}, i.e. v is an eigenvector and \lambda an eigenvalue; indeed the i-th row of the matrix, identifying b_m = b_0, is:
and the product is:

(B.6)

and because of b_m = b_0 we have the thesis. Another way to see this is by introducing an m dimensional translation operator T such that:
Appendix C
Disordered systems
Consider an m \times m matrix:

\begin{pmatrix} \alpha & \beta & \cdots & \beta \\ \beta & \alpha & \cdots & \beta \\ \vdots & \vdots & \ddots & \vdots \\ \beta & \beta & \cdots & \alpha \end{pmatrix}
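The spectrum derived below — eigenvalue α − β with multiplicity m − 1 and eigenvalue α + (m − 1)β with multiplicity 1 — can be confirmed numerically (a sketch, not in the notes; the values of m, α, β are arbitrary):

```python
import numpy as np

m, alpha, beta = 6, 2.0, 0.3
M = beta * np.ones((m, m)) + (alpha - beta) * np.eye(m)

eig = np.linalg.eigvalsh(M)        # ascending order for this symmetric matrix
# m-1 copies of alpha - beta, then the single eigenvalue alpha + (m-1)*beta
```

The degenerate eigenvalue corresponds to vectors with zero component sum; the non-degenerate one to the uniform vector (1, ..., 1).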
else sum over i (assuming \sum_{j=1}^{m} v_j \neq 0):

(\alpha - \beta)\sum_{i=1}^{m} v_i + \beta\sum_{i=1}^{m}\sum_{j=1}^{m} v_j = \lambda\sum_{i=1}^{m} v_i   (C.4)

(\alpha - \beta)\sum_{i=1}^{m} v_i + m\beta\sum_{i=1}^{m} v_i = \lambda\sum_{i=1}^{m} v_i   (C.5)

divide by \sum_{j=1}^{m} v_j:

\lambda = \alpha + (m-1)\beta

So we have \lambda_1 = \alpha - \beta with \deg\lambda_1 = m - 1 and \lambda_2 = \alpha + (m-1)\beta with \deg\lambda_2 = 1. The replica symmetric matrix is a block circulant matrix where the first row block is:

A \underbrace{B \ldots B}_{f-1\ \text{times}}

\lambda_{1A} = 1 - q_1 \qquad \deg_1 = m - 1
\lambda_{1B} = 0 \qquad \deg_1 = m - 1
and \beta \to \lambda_B; there is only one requirement: we can mix only eigenvalues having the same set of eigenvectors. In the end we get:

\lambda_A = 1 - q_1, \qquad \lambda_B = 0   (C.6)

\lambda_1 = \lambda_A - \lambda_B = 1 - q_1   (C.7)

\lambda_2 = \lambda_A + (f-1)\lambda_B = 1 - q_1   (C.8)

and
Appendix D
Brownian Bridge
\int \frac{d^N x}{(4\pi D)^{N/2}}\, \exp\left( -\frac{1}{4D}xAx + Jx \right)

J_j = i\beta, \qquad J_N = i\alpha

A = \begin{pmatrix} 2 & -1 & 0 & \cdots & \cdots & 0 \\ -1 & 2 & -1 & \ddots & & 0 \\ 0 & -1 & 2 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & -1 & 0 \\ \vdots & & \ddots & -1 & 2 & -1 \\ 0 & 0 & \cdots & 0 & -1 & 1 \end{pmatrix}

\det A_N = 2\det A_{N-1} - \det A_{N-2}
Since:

\det A_2 = 1

we get:

\det A_N = 1 \quad \forall N

Define:

y = \frac{x}{\sqrt{2D}}, \qquad J' = J\sqrt{2D}

\int \frac{d^N x}{(4\pi D)^{N/2}}\, \exp\left( -\frac{1}{4D}xAx + Jx \right) = \int \frac{d^N y}{(2\pi)^{N/2}}\, \exp\left( -\frac{1}{2}yAy + J'y \right) =

= (\det A)^{-1/2}\, e^{\frac{1}{2}J'A^{-1}J'} = e^{D J A^{-1} J} = e^{D\left( -\beta^2 A^{-1}_{jj} - 2\alpha\beta A^{-1}_{jN} - \alpha^2 A^{-1}_{NN} \right)}

One discovers:

A^{-1}_{j,N} = j, \qquad A^{-1}_{ii} = i

So:

e^{D\left( -j\beta^2 - 2j\alpha\beta - N\alpha^2 \right)} = e^{-Dt\beta^2 - 2Dt\alpha\beta - TD\alpha^2}
= \frac{1}{\sqrt{4\pi DT}}\, \exp\left( \frac{1}{4DT}\left( -x_T^2 + 4Dti\beta x_T + 4D^2t^2\beta^2 \right) \right)

Then:

\frac{1}{\sqrt{4\pi DT}}\, \exp\left( -\frac{x_T^2}{4DT} \right) \int \frac{d\beta}{2\pi}\, e^{-Dt\left( 1 - \frac{t}{T} \right)\beta^2 - i\beta x + i\beta\frac{t}{T}x_T} =

P[x(t) = x\,|\,x(T) = x_T] = \frac{1}{\sqrt{4\pi DT}}\, \exp\left( -\frac{x_T^2}{4DT} \right) \frac{1}{\sqrt{4\pi Dt(1 - \frac{t}{T})}}\, \exp\left( -\frac{(x - \frac{t}{T}x_T)^2}{4Dt(1 - \frac{t}{T})} \right)
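The Gaussian factor in x is the Brownian bridge distribution: mean (t/T)x_T and variance 2Dt(1 − t/T) (with the convention ⟨x²⟩ = 2Dt used here). The sketch below (not in the notes) checks both by sampling free Brownian paths and pinning the endpoint with the standard bridge construction B(t) = W(t) − (t/T)W(T) + (t/T)x_T; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
D, T, t, xT = 1.0, 1.0, 0.3, 2.0
n_steps, n_paths = 300, 5000
dt = T / n_steps

# Free Brownian paths with <x(t)^2> = 2 D t
W = np.cumsum(rng.normal(0.0, np.sqrt(2 * D * dt), (n_paths, n_steps)), axis=1)
times = np.arange(1, n_steps + 1) * dt

# Pin the endpoint: B(t) = W(t) - (t/T) W(T) + (t/T) xT
bridge = W - (times / T) * W[:, -1:] + (times / T) * xT

j = int(round(t / dt)) - 1                 # grid index with times[j] == t
mean_t = bridge[:, j].mean()
var_t = bridge[:, j].var()
mean_pred = (t / T) * xT                   # predicted mean
var_pred = 2 * D * t * (1 - t / T)         # predicted variance
```

The variance is maximal at t = T/2 and vanishes at both pinned ends, as expected for a bridge.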
Appendix E
-\frac{\pi}{a} \leq p_\mu \leq \frac{\pi}{a}, \qquad \mu = 1 \ldots d

Indeed from the point x \in Z^d we can go to 2d nearest neighbor points using the d directions and their opposites.