Partial Differential Equations
Contents

1 Introduction
  1.1 Examples
  1.2 Equations from variational problems
    1.2.1 Ordinary differential equations
    1.2.2 Partial differential equations
  1.3 Exercises

2 Equations of first order
  2.1 Linear equations
  2.2 Quasilinear equations
    2.2.1 A linearization method
    2.2.2 Initial value problem of Cauchy
  2.3 Nonlinear equations in two variables
    2.3.1 Initial value problem of Cauchy
  2.4 Nonlinear equations in $\mathbb{R}^n$
  2.5 Hamilton-Jacobi theory
  2.6 Exercises

3 Classification
  3.1 Linear equations of second order
    3.1.1 Normal form in two variables
  3.2 Quasilinear equations of second order
    3.2.1 Quasilinear elliptic equations
  3.3 Systems of first order
    3.3.1 Examples
  3.4 Systems of second order
    3.4.1 Examples
  3.5 Theorem of Cauchy-Kovalevskaya
    3.5.1 Appendix: Real analytic functions
  3.6 Exercises

4 Hyperbolic equations
  4.1 One-dimensional wave equation
  4.2 Higher dimensions
    4.2.1 Case n = 3
    4.2.2 Case n = 2
  4.3 Inhomogeneous equation
  4.4 A method of Riemann
  4.5 Initial-boundary value problems
    4.5.1 Oscillation of a string
    4.5.2 Oscillation of a membrane
    4.5.3 Inhomogeneous wave equations
  4.6 Exercises

5 Fourier transform
  5.1 Definition, properties
    5.1.1 Pseudodifferential operators
  5.2 Exercises

6 Parabolic equations
  6.1 Poisson's formula
  6.2 Inhomogeneous heat equation
  6.3 Maximum principle
  6.4 Initial-boundary value problem
    6.4.1 Fourier's method
    6.4.2 Uniqueness
  6.5 Black-Scholes equation
  6.6 Exercises

7 Elliptic equations of second order
  7.1 Fundamental solution
  7.2 Representation formula
    7.2.1 Conclusions from the representation formula
  7.3 Boundary value problems
    7.3.1 Dirichlet problem
    7.3.2 Neumann problem
    7.3.3 Mixed boundary value problem
  7.4 Green's function for $\triangle$
    7.4.1 Green's function for a ball
    7.4.2 Green's function and conformal mapping
  7.5 Inhomogeneous equation
  7.6 Exercises

Preface

These lecture notes are intended as a straightforward introduction to partial differential equations which can serve as a textbook for undergraduate and beginning graduate students. For additional reading we recommend the following books: W. I. Smirnov [21], I. G. Petrowski [17], P. R. Garabedian [8], W. A. Strauss [23], F. John [10], L. C. Evans [5], R. Courant and D. Hilbert [4], and D. Gilbarg and N. S. Trudinger [9]. Some material of these lecture notes was taken from some of these books.

Chapter 1
Introduction

Ordinary and partial differential equations occur in many applications. An ordinary differential equation is a special case of a partial differential equation, but the behaviour of solutions is quite different in general. The situation is much more complicated in the case of partial differential equations, since the functions we are looking for depend on more than one independent variable.

The equation
$$F(x, y(x), y'(x), \ldots, y^{(n)}) = 0$$
is an ordinary differential equation of $n$-th order for the unknown function $y(x)$, where $F$ is given.

An important problem for ordinary differential equations is the initial value problem
$$y'(x) = f(x, y(x)), \qquad y(x_0) = y_0,$$
where $f$ is a given real function of the two variables $x$, $y$ and $x_0$, $y_0$ are given real numbers.

Picard-Lindelöf Theorem. Suppose
(i) $f(x, y)$ is continuous in a rectangle
$$Q = \{(x, y) \in \mathbb{R}^2 : \ |x - x_0| < a, \ |y - y_0| < b\}.$$
(ii) There is a constant $K$ such that $|f(x, y)| \le K$ for all $(x, y) \in Q$.
(iii) Lipschitz condition: There is a constant $L$ such that
$$|f(x, y_2) - f(x, y_1)| \le L\,|y_2 - y_1|$$
for all $(x, y_1), (x, y_2) \in Q$.

[Figure 1.1: Initial value problem]

Then there exists a unique solution $y \in C^1(x_0 - \alpha, x_0 + \alpha)$ of the above initial value problem, where $\alpha = \min(b/K, a)$.

The linear ordinary differential equation
$$y^{(n)} + a_{n-1}(x) y^{(n-1)} + \ldots + a_1(x) y' + a_0(x) y = 0,$$
where the $a_j$ are continuous functions, has exactly $n$ linearly independent solutions. In contrast to this property, the partial differential equation $u_{xx} + u_{yy} = 0$ in $\mathbb{R}^2$ has infinitely many linearly independent solutions in the linear space $C^2(\mathbb{R}^2)$.

The ordinary differential equation of second order $y''(x) = f(x, y(x), y'(x))$ has in general a family of solutions with two free parameters. Thus it is natural to consider the associated initial value problem
$$y''(x) = f(x, y(x), y'(x)), \qquad y(x_0) = y_0, \quad y'(x_0) = y_1,$$
where $y_0$ and $y_1$ are given, or to consider the boundary value problem
$$y''(x) = f(x, y(x), y'(x)), \qquad y(x_0) = y_0, \quad y(x_1) = y_1.$$

[Figure 1.2: Boundary value problem]

Initial and boundary value problems play an important role also in the theory of partial differential equations. A partial differential equation for the unknown function $u(x, y)$ is for example
$$F(x, y, u, u_x, u_y, u_{xx}, u_{xy}, u_{yy}) = 0,$$
where the function $F$ is given. This equation is of second order. An equation is said to be of $n$-th order if the highest derivative which occurs is of order $n$. An equation is said to be linear if the unknown function and its derivatives appear linearly in $F$. For example,
$$a(x, y) u_x + b(x, y) u_y + c(x, y) u = f(x, y),$$
where the functions $a$, $b$, $c$ and $f$ are given, is a linear equation of first order. An equation is said to be quasilinear if it is linear in the highest derivatives. For example,
$$a(x, y, u, u_x, u_y) u_{xx} + b(x, y, u, u_x, u_y) u_{xy} + c(x, y, u, u_x, u_y) u_{yy} = 0$$
is a quasilinear equation of second order.
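Before turning to examples of partial differential equations, here is a minimal numerical sketch of the ordinary initial value problem $y'(x) = f(x, y(x))$, $y(x_0) = y_0$ treated by the Picard-Lindelöf theorem above. It is only an illustration, not part of the original text: the right hand side $f(x, y) = \cos x - y$, the data $x_0 = 0$, $y_0 = 1$ and the use of scipy are assumptions made for this sketch.

```python
# Numerical illustration of the initial value problem y' = f(x, y), y(x0) = y0.
# The right hand side f and the data below are illustrative choices, not from the text.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    # f is continuous and Lipschitz in y (|f(x, y2) - f(x, y1)| <= |y2 - y1|),
    # so the Picard-Lindeloef theorem guarantees a unique local solution.
    return np.cos(x) - y

x0, y0 = 0.0, 1.0
sol = solve_ivp(f, (x0, x0 + 5.0), [y0], dense_output=True, rtol=1e-8, atol=1e-10)

# For this particular linear equation the exact solution is
# y(x) = (cos x + sin x)/2 + (y0 - 1/2) e^{-x}; compare as a consistency check.
x = np.linspace(x0, x0 + 5.0, 6)
exact = 0.5 * (np.cos(x) + np.sin(x)) + (y0 - 0.5) * np.exp(-x)
print(np.max(np.abs(sol.sol(x)[0] - exact)))   # of the order of the integration tolerances
```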
1.1 Examples

1. $u_y = 0$, where $u = u(x, y)$. All functions $u = w(x)$ are solutions.

2. $u_x = u_y$, where $u = u(x, y)$. A change of coordinates transforms this equation into an equation of the first example. Set $\xi = x + y$, $\eta = x - y$, then
$$u(x, y) = u\left(\frac{\xi + \eta}{2}, \frac{\xi - \eta}{2}\right) =: v(\xi, \eta).$$
Assume $u \in C^1$, then
$$v_\eta = \frac{1}{2}\,(u_x - u_y).$$
If $u_x = u_y$, then $v_\eta = 0$ and vice versa; thus $v = w(\xi)$ are solutions for arbitrary $C^1$-functions $w(\xi)$. Consequently, we have a large class of solutions of the original partial differential equation: $u = w(x + y)$ with an arbitrary $C^1$-function $w$.

3. A necessary and sufficient condition such that for given $C^1$-functions $M$, $N$ the integral
$$\int_{P_0}^{P_1} M(x, y)\,dx + N(x, y)\,dy$$
is independent of the curve which connects the points $P_0$ with $P_1$ in a simply connected domain $\Omega \subset \mathbb{R}^2$ is the partial differential equation (condition of integrability)
$$M_y = N_x \quad \text{in } \Omega.$$

[Figure 1.3: Independence of the path]

This is one equation for two functions. A large class of solutions is given by $M = \Phi_x$, $N = \Phi_y$, where $\Phi(x, y)$ is an arbitrary $C^2$-function. It follows from Gauss' theorem that these are all $C^1$-solutions of the above differential equation.

4. Method of an integrating multiplier for an ordinary differential equation. Consider the ordinary differential equation
$$M(x, y)\,dx + N(x, y)\,dy = 0$$
for given $C^1$-functions $M$, $N$. Then we seek a $C^1$-function $\mu(x, y)$ such that $\mu M\,dx + \mu N\,dy$ is a total differential, i. e., that $(\mu M)_y = (\mu N)_x$ is satisfied. This is a linear partial differential equation of first order for $\mu$:
$$M \mu_y - N \mu_x = \mu\,(N_x - M_y).$$

5. Two $C^1$-functions $u(x, y)$ and $v(x, y)$ are said to be functionally dependent if
$$\det\begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix} = 0,$$
which is a linear partial differential equation of first order for $u$ if $v$ is a given $C^1$-function. A large class of solutions is given by $u = H(v(x, y))$, where $H$ is an arbitrary $C^1$-function.

6. Cauchy-Riemann equations. Set $f(z) = u(x, y) + iv(x, y)$, where $z = x + iy$ and $u$, $v$ are given $C^1(\Omega)$-functions. Here $\Omega$ is a domain in $\mathbb{R}^2$. If the function $f(z)$ is differentiable with respect to the complex variable $z$, then $u$, $v$ satisfy the Cauchy-Riemann equations
$$u_x = v_y, \qquad u_y = -v_x.$$
It is known from the theory of functions of one complex variable that the real part $u$ and the imaginary part $v$ of a differentiable function $f(z)$ are solutions of the Laplace equation
$$\triangle u = 0, \qquad \triangle v = 0,$$
where $\triangle u = u_{xx} + u_{yy}$.

7. The Newton potential
$$u = \frac{1}{\sqrt{x^2 + y^2 + z^2}}$$
is a solution of the Laplace equation in $\mathbb{R}^3 \setminus \{(0, 0, 0)\}$, i. e., of
$$u_{xx} + u_{yy} + u_{zz} = 0.$$

8. Heat equation. Let $u(x, t)$ be the temperature of a point $x \in \Omega$ at time $t$, where $\Omega \subset \mathbb{R}^3$ is a domain. Then $u(x, t)$ satisfies in $\Omega \times [0, \infty)$ the heat equation
$$u_t = k \triangle u,$$
where $\triangle u = u_{x_1 x_1} + u_{x_2 x_2} + u_{x_3 x_3}$ and $k$ is a positive constant. The condition
$$u(x, 0) = u_0(x), \quad x \in \Omega,$$
where $u_0(x)$ is given, is an initial condition associated to the above heat equation. The condition
$$u(x, t) = h(x, t), \quad x \in \partial\Omega, \ t \ge 0,$$
where $h(x, t)$ is given, is a boundary condition for the heat equation. If $h(x, t) = g(x)$, that is, $h$ is independent of $t$, then one expects that the solution $u(x, t)$ tends to a function $v(x)$ if $t \to \infty$. Moreover, it turns out that $v$ is the solution of the boundary value problem for the Laplace equation
$$\triangle v = 0 \ \text{in } \Omega, \qquad v = g(x) \ \text{on } \partial\Omega.$$

9. Wave equation.

[Figure 1.4: Oscillating string]

The wave equation
$$u_{tt} = c^2 \triangle u,$$
where $u = u(x, t)$ and $c$ is a positive constant, describes oscillations of membranes or of three dimensional domains, for example. In the one-dimensional case
$$u_{tt} = c^2 u_{xx}$$
describes oscillations of a string. Associated initial conditions are
$$u(x, 0) = u_0(x), \qquad u_t(x, 0) = u_1(x),$$
where $u_0$, $u_1$ are given functions. Thus the initial position and the initial velocity are prescribed. If the string is finite one prescribes additionally boundary conditions, for example
$$u(0, t) = 0, \quad u(l, t) = 0 \quad \text{for all } t \ge 0.$$

1.2 Equations from variational problems
A large class of ordinary and partial differential equations arises from variational problems.

1.2.1 Ordinary differential equations

Set
$$E(v) = \int_a^b f(x, v(x), v'(x))\,dx$$
and, for given $u_a, u_b \in \mathbb{R}$,
$$V = \{v \in C^2[a, b] : \ v(a) = u_a, \ v(b) = u_b\},$$
where $-\infty < a < b < \infty$ and $f$ is sufficiently regular. One of the basic problems in the calculus of variations is

(P) $\quad \min_{v \in V} E(v)$.

Euler equation. Let $u \in V$ be a solution of (P), then
$$\frac{d}{dx} f_{u'}(x, u(x), u'(x)) = f_u(x, u(x), u'(x))$$
in $(a, b)$.

Proof. Exercise. Hints: For fixed $\phi \in C^2[a, b]$ with $\phi(a) = \phi(b) = 0$ and real $\epsilon$, $|\epsilon| < \epsilon_0$, set $g(\epsilon) = E(u + \epsilon\phi)$. Since $g(0) \le g(\epsilon)$ it follows $g'(0) = 0$. Integration by parts in the formula for $g'(0)$ and the following basic lemma in the calculus of variations imply Euler's equation.

[Figure 1.5: Admissible variations]

Basic lemma in the calculus of variations. Let $h \in C(a, b)$ and
$$\int_a^b h(x)\phi(x)\,dx = 0$$
for all $\phi \in C_0^1(a, b)$. Then $h(x) \equiv 0$ on $(a, b)$.

Proof. Assume $h(x_0) > 0$ for an $x_0 \in (a, b)$, then there is a $\delta > 0$ such that $(x_0 - \delta, x_0 + \delta) \subset (a, b)$ and $h(x) \ge h(x_0)/2$ on $(x_0 - \delta, x_0 + \delta)$. Set
$$\phi(x) = \begin{cases} \left(\delta^2 - |x - x_0|^2\right)^2 & \text{if } x \in (x_0 - \delta, x_0 + \delta) \\ 0 & \text{if } x \in (a, b) \setminus [x_0 - \delta, x_0 + \delta]. \end{cases}$$
Thus $\phi \in C_0^1(a, b)$ and
$$\int_a^b h(x)\phi(x)\,dx \ge \frac{h(x_0)}{2} \int_{x_0 - \delta}^{x_0 + \delta} \phi(x)\,dx > 0,$$
which is a contradiction to the assumption of the lemma. □
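The following small numerical sketch, not part of the original text, illustrates the Euler equation for problem (P): for the particular integrand $f(x, v, v') = \frac{1}{2} v'^2$ the Euler equation is $u'' = 0$, so the minimizer with the prescribed boundary values should be the straight line joining them. The discretization, the data and the use of scipy are assumptions made for the illustration.

```python
# Minimize a discretized E(v) = \int_a^b (1/2) v'(x)^2 dx over v with v(a) = ua, v(b) = ub.
# The Euler equation for this integrand is u'' = 0, so the minimizer should be affine.
import numpy as np
from scipy.optimize import minimize

a, b, ua, ub, n = 0.0, 1.0, 1.0, 3.0, 50        # illustrative data and grid size
x = np.linspace(a, b, n + 1)
h = (b - a) / n

def energy(v_inner):
    v = np.concatenate(([ua], v_inner, [ub]))   # boundary values are kept fixed
    return 0.5 * np.sum(np.diff(v)**2) / h      # piecewise linear discretization of the integral

res = minimize(energy, np.zeros(n - 1), method="L-BFGS-B")
u = np.concatenate(([ua], res.x, [ub]))
line = ua + (ub - ua) * (x - a) / (b - a)       # solution of the Euler equation u'' = 0
print(np.max(np.abs(u - line)))                 # close to zero: the minimizer is the straight line
```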
1.2.2 Partial differential equations

The same procedure as above applied to the following multiple integral leads to a second-order quasilinear partial differential equation. Set
$$E(v) = \int_\Omega F(x, v, \nabla v)\,dx,$$
where $\Omega \subset \mathbb{R}^n$ is a domain, $x = (x_1, \ldots, x_n)$, $v = v(x): \Omega \mapsto \mathbb{R}$, and $\nabla v = (v_{x_1}, \ldots, v_{x_n})$. Assume that the function $F$ is sufficiently regular in its arguments. For a given function $h$, defined on $\partial\Omega$, set
$$V = \{v \in C^2(\overline{\Omega}) : \ v = h \ \text{on } \partial\Omega\}.$$

Euler equation. Let $u \in V$ be a solution of (P), then
$$\sum_{i=1}^n \frac{\partial}{\partial x_i} F_{u_{x_i}} - F_u = 0 \quad \text{in } \Omega.$$

Proof. Exercise. Hint: Extend the above fundamental lemma of the calculus of variations to the case of multiple integrals. The interval $(x_0 - \delta, x_0 + \delta)$ in the definition of $\phi$ must be replaced by a ball with center at $x_0$ and radius $\delta$.

Example: Dirichlet integral

In two dimensions the Dirichlet integral is given by
$$D(v) = \int_\Omega \left(v_x^2 + v_y^2\right)\,dxdy$$
and the associated Euler equation is the Laplace equation $\triangle u = 0$ in $\Omega$.

Thus, there is a natural relationship between the boundary value problem
$$\triangle u = 0 \ \text{in } \Omega, \qquad u = h \ \text{on } \partial\Omega$$
and the variational problem
$$\min_{v \in V} D(v).$$
But these problems are not equivalent in general. It can happen that the boundary value problem has a solution but the variational problem has no solution; see for an example Courant and Hilbert [4], Vol. 1, p. 155, where $h$ is a continuous function and the associated solution $u$ of the boundary value problem has no finite Dirichlet integral. The problems are equivalent, provided the given boundary value function $h$ is in the class $H^{1/2}(\partial\Omega)$, see Lions and Magenes [14].

Example: Minimal surface equation

The non-parametric minimal surface problem in two dimensions is to find a minimizer $u = u(x_1, x_2)$ of the problem
$$\min_{v \in V} \int_\Omega \sqrt{1 + v_{x_1}^2 + v_{x_2}^2}\,dx,$$
where for a given function $h$ defined on the boundary of the domain $\Omega$
$$V = \{v \in C^1(\overline{\Omega}) : \ v = h \ \text{on } \partial\Omega\}.$$
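As a numerical aside, not from the text, the relation between the Dirichlet integral and the Laplace equation stated above can be observed on a grid: take boundary values of the harmonic polynomial $x^2 - y^2$ on the unit square, approximate the solution of the boundary value problem by Jacobi iteration for the discrete Laplace equation, and compare discrete Dirichlet integrals. The grid size, the boundary data and the iteration count are assumptions made for this sketch; only numpy is used.

```python
# Discrete Dirichlet integral D(v) and the discrete Laplace equation on the unit square.
# Boundary data are taken from the harmonic polynomial x^2 - y^2 (illustrative choice).
import numpy as np

n = 40
x, y = np.meshgrid(np.linspace(0, 1, n + 1), np.linspace(0, 1, n + 1), indexing="ij")
h = 1.0 / n
u = x**2 - y**2               # correct values on the boundary ...
u[1:-1, 1:-1] = 0.0           # ... arbitrary start values in the interior

def dirichlet_integral(v):
    vx = np.diff(v, axis=0) / h
    vy = np.diff(v, axis=1) / h
    return (np.sum(vx**2) + np.sum(vy**2)) * h**2

# Jacobi iteration for the discrete Laplace equation (discrete mean value property).
for _ in range(2000):
    u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2])

print(np.max(np.abs(u - (x**2 - y**2))))   # small: the discrete solution approximates x^2 - y^2
print(dirichlet_integral(u))               # approximately 8/3, the Dirichlet integral of x^2 - y^2

# A competitor with the same boundary values has a larger Dirichlet integral:
bump = np.zeros_like(u)
bump[1:-1, 1:-1] = np.sin(np.pi * x[1:-1, 1:-1]) * np.sin(np.pi * y[1:-1, 1:-1])
print(dirichlet_integral(u) < dirichlet_integral(u + bump))   # True
```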
[Figure 1.6: Comparison surface]

Suppose that the minimizer satisfies the regularity assumption $u \in C^2(\Omega)$, then $u$ is a solution of the minimal surface equation (Euler equation) in $\Omega$
$$\frac{\partial}{\partial x_1}\left(\frac{u_{x_1}}{\sqrt{1 + |\nabla u|^2}}\right) + \frac{\partial}{\partial x_2}\left(\frac{u_{x_2}}{\sqrt{1 + |\nabla u|^2}}\right) = 0. \qquad (1.1)$$
In fact, the additional assumption $u \in C^2(\Omega)$ is superfluous since it follows from regularity considerations for quasilinear elliptic equations of second order, see for example Gilbarg and Trudinger [9].

Let $\Omega = \mathbb{R}^2$. Each linear function is a solution of the minimal surface equation (1.1). It was shown by Bernstein [2] that there are no other solutions of the minimal surface equation. This is true also for higher dimensions $n \le 7$, see Simons [19]. If $n \ge 8$, then there exist also other solutions which define cones, see Bombieri, De Giorgi and Giusti [3].

The linearized minimal surface equation over $u \equiv 0$ is the Laplace equation $\triangle u = 0$. In $\mathbb{R}^2$ linear functions are solutions, but also many other functions, in contrast to the minimal surface equation. This striking difference is caused by the strong nonlinearity of the minimal surface equation.

More general minimal surfaces are described by using parametric representations. An example is shown in Figure 1.7.¹ See [18], pp. 62, for example, for rotationally symmetric minimal surfaces.

[Figure 1.7: Rotationally symmetric minimal surface]

¹An experiment from Beutelspacher's Mathematikum, Wissenschaftsjahr 2008, Leipzig

Neumann type boundary value problems

Set $V = C^1(\overline{\Omega})$ and
$$E(v) = \int_\Omega F(x, v, \nabla v)\,dx - \int_{\partial\Omega} g(x, v)\,ds,$$
where $F$ and $g$ are given sufficiently regular functions and $\Omega \subset \mathbb{R}^n$ is a bounded and sufficiently regular domain. Assume $u$ is a minimizer of $E(v)$ in $V$, that is,
$$u \in V: \quad E(u) \le E(v) \ \text{for all } v \in V,$$
then
$$\int_\Omega \left(\sum_{i=1}^n F_{u_{x_i}}(x, u, \nabla u)\,\phi_{x_i} + F_u(x, u, \nabla u)\,\phi\right) dx - \int_{\partial\Omega} g_u(x, u)\,\phi\,ds = 0$$
for all $\phi \in C^1(\overline{\Omega})$. Assume additionally $u \in C^2(\Omega)$, then $u$ is a solution of the Neumann type boundary value problem
$$\sum_{i=1}^n \frac{\partial}{\partial x_i} F_{u_{x_i}} - F_u = 0 \quad \text{in } \Omega$$
$$\sum_{i=1}^n F_{u_{x_i}}\,\nu_i - g_u = 0 \quad \text{on } \partial\Omega,$$
where $\nu = (\nu_1, \ldots, \nu_n)$ is the exterior unit normal at the boundary $\partial\Omega$. This follows after integration by parts from the basic lemma of the calculus of variations.

Example: Laplace equation

Set
$$E(v) = \frac{1}{2} \int_\Omega |\nabla v|^2\,dx - \int_{\partial\Omega} h(x)\,v\,ds,$$
then the associated boundary value problem is
$$\triangle u = 0 \ \text{in } \Omega, \qquad \frac{\partial u}{\partial \nu} = h \ \text{on } \partial\Omega.$$

Example: Capillary equation

Let $\Omega \subset \mathbb{R}^2$ and set
$$E(v) = \int_\Omega \sqrt{1 + |\nabla v|^2}\,dx + \frac{\kappa}{2} \int_\Omega v^2\,dx - \cos\gamma \int_{\partial\Omega} v\,ds.$$
Here $\kappa$ is a positive constant (capillarity constant) and $\gamma$ is the (constant) boundary contact angle, i. e., the angle between the container wall and the capillary surface, defined by $v = v(x_1, x_2)$, at the boundary. Then the related boundary value problem is
$$\operatorname{div}(Tu) = \kappa u \ \text{in } \Omega, \qquad \nu \cdot Tu = \cos\gamma \ \text{on } \partial\Omega,$$
where we use the abbreviation
$$Tu = \frac{\nabla u}{\sqrt{1 + |\nabla u|^2}}\,;$$
$\operatorname{div}(Tu)$ is the left hand side of the minimal surface equation (1.1) and it is twice the mean curvature of the surface defined by $z = u(x_1, x_2)$, see an exercise.

The above problem describes the ascent of a liquid, water for example, in a vertical cylinder with cross section $\Omega$. Assume the gravity is directed downwards in the direction of the negative $x_3$-axis. Figure 1.8 shows that liquid can rise along a vertical wedge, which is a consequence of the strong nonlinearity of the underlying equations, see Finn [7]. This photo was taken from [15].

[Figure 1.8: Ascent of liquid in a wedge]

1.3 Exercises

1. Find nontrivial solutions $u$ of
$$u_x y - u_y x = 0.$$

2. Prove: In the linear space $C^2(\mathbb{R}^2)$ there are infinitely many linearly independent solutions of $\triangle u = 0$ in $\mathbb{R}^2$.
Hint: Real and imaginary part of holomorphic functions are solutions of the Laplace equation.

3. Find all radially symmetric functions which satisfy the Laplace equation in $\mathbb{R}^n \setminus \{0\}$ for $n \ge 2$. A function $u$ is said to be radially symmetric if $u(x) = f(r)$, where $r = \left(\sum_i^n x_i^2\right)^{1/2}$.
Hint: Show that a radially symmetric $u$ satisfies $\triangle u = r^{1-n}\left(r^{n-1} f'\right)'$ by using $\nabla u(x) = f'(r)\,\frac{x}{r}$.

4. Prove the basic lemma in the calculus of variations: Let $\Omega \subset \mathbb{R}^n$ be a domain and $f \in C(\Omega)$ such that
$$\int_\Omega f(x) h(x)\,dx = 0$$
for all $h \in C_0^2(\Omega)$. Then $f \equiv 0$ in $\Omega$.

5. Write the minimal surface equation (1.1) as a quasilinear equation of second order.

6. Prove that a sufficiently regular minimizer in $C^1(\overline{\Omega})$ of
$$E(v) = \int_\Omega F(x, v, \nabla v)\,dx - \int_{\partial\Omega} g(x, v)\,ds$$
is a solution of the boundary value problem
$$\sum_{i=1}^n \frac{\partial}{\partial x_i} F_{u_{x_i}} - F_u = 0 \quad \text{in } \Omega$$
$$\sum_{i=1}^n F_{u_{x_i}}\,\nu_i - g_u = 0 \quad \text{on } \partial\Omega,$$
where $\nu = (\nu_1, \ldots, \nu_n)$ is the exterior unit normal at the boundary $\partial\Omega$.

7. Prove that $\nu \cdot Tu = \cos\gamma$ on $\partial\Omega$, where $\gamma$ is the angle between the container wall, which is here a cylinder, and the surface $S$, defined by $z = u(x_1, x_2)$, at the boundary of $S$, and $\nu$ is the exterior normal at $\partial\Omega$.
Hint: The angle between two surfaces is by definition the angle between the two associated normals at the intersection of the surfaces.

8. Let $\Omega$ be bounded and assume $u \in C^2(\overline{\Omega})$ is a solution of
$$\operatorname{div} Tu = C \ \text{in } \Omega, \qquad \nu \cdot \frac{\nabla u}{\sqrt{1 + |\nabla u|^2}} = \cos\gamma \ \text{on } \partial\Omega,$$
where $C$ is a constant. Prove that
$$C = \frac{|\partial\Omega|}{|\Omega|}\,\cos\gamma.$$
Hint: Integrate the differential equation over $\Omega$.

9. Assume $\Omega = B_R(0)$ is a disc with radius $R$ and the center at the origin. Show that radially symmetric solutions $u(x) = w(r)$, $r = \sqrt{x_1^2 + x_2^2}$, of the capillary boundary value problem are solutions of
$$\left(\frac{r w'}{\sqrt{1 + w'^2}}\right)' = \kappa\, r w \ \text{in } 0 < r < R, \qquad \frac{w'}{\sqrt{1 + w'^2}} = \cos\gamma \ \text{if } r = R.$$
Remark. It follows from a maximum principle of Concus and Finn [7] that a solution of the capillary equation over a disc must be radially symmetric.

10. Find all radially symmetric solutions of
$$\left(\frac{r w'}{\sqrt{1 + w'^2}}\right)' = C r \ \text{in } 0 < r < R, \qquad \frac{w'}{\sqrt{1 + w'^2}} = \cos\gamma \ \text{if } r = R.$$
Hint: From an exercise above it follows that $C = \frac{2}{R}\cos\gamma$.

11. Show that $\operatorname{div} Tu$ is twice the mean curvature of the surface defined
by $z = u(x_1, x_2)$.

Chapter 2
Equations of first order

For a given sufficiently regular function $F$ the general equation of first order for the unknown function $u(x)$ is
$$F(x, u, \nabla u) = 0$$
in $\Omega \subset \mathbb{R}^n$. The main tool for studying related problems is the theory of ordinary differential equations. This is quite different for systems of partial differential equations of first order.

The general linear partial differential equation of first order can be written as
$$\sum_{i=1}^n a_i(x) u_{x_i} + c(x) u = f(x)$$
for given functions $a_i$, $c$ and $f$. The general quasilinear partial differential equation of first order is
$$\sum_{i=1}^n a_i(x, u) u_{x_i} + c(x, u) = 0.$$

2.1 Linear equations

Let us begin with the linear homogeneous equation
$$a_1(x, y) u_x + a_2(x, y) u_y = 0. \qquad (2.1)$$
Assume there is a $C^1$-solution $z = u(x, y)$. This function defines a surface $S$ which has at $P = (x, y, u(x, y))$ the normal
$$N = \frac{1}{\sqrt{1 + |\nabla u|^2}}\,(-u_x, -u_y, 1)$$
and the tangential plane defined by
$$\zeta - z = u_x(x, y)(\xi - x) + u_y(x, y)(\eta - y).$$
Set $p = u_x(x, y)$, $q = u_y(x, y)$ and $z = u(x, y)$. The tuple $(x, y, z, p, q)$ is called surface element and the tuple $(x, y, z)$ support of the surface element. The tangential plane is defined by the surface element. On the other hand, differential equation (2.1)
$$a_1(x, y)\,p + a_2(x, y)\,q = 0$$
defines at each support $(x, y, z)$ a bundle of planes if we consider all $(p, q)$ satisfying this equation. For fixed $(x, y)$, this family of planes $\Pi(\lambda) = \Pi(\lambda; x, y)$ is defined by a one parameter family of ascents $p(\lambda) = p(\lambda; x, y)$, $q(\lambda) = q(\lambda; x, y)$. The envelope of these planes is a line since
$$a_1(x, y)\,p(\lambda) + a_2(x, y)\,q(\lambda) = 0,$$
which implies that the normal $N(\lambda)$ on $\Pi(\lambda)$ is perpendicular on $(a_1, a_2, 0)$.

Consider a curve $x(\tau) = (x(\tau), y(\tau), z(\tau))$ on $S$, let $T_{x_0}$ be the tangential plane at $x_0 = (x(0), y(0), z(0))$ of $S$ and consider on $T_{x_0}$ the line
$$L: \quad l(\sigma) = x_0 + \sigma\,x'(0), \quad \sigma \in \mathbb{R},$$
see Figure 2.1. We assume $L$ coincides with the envelope, which is a line here, of the family of planes $\Pi(\lambda)$ at $(x, y, z)$. Assume that $T_{x_0} = \Pi(\lambda_0)$ and consider the two planes
$$\Pi(\lambda_0): \ z - z_0 = (x - x_0)\,p(\lambda_0) + (y - y_0)\,q(\lambda_0)$$
$$\Pi(\lambda_0 + h): \ z - z_0 = (x - x_0)\,p(\lambda_0 + h) + (y - y_0)\,q(\lambda_0 + h).$$
At the intersection $l(\sigma)$ we have
$$(x - x_0)\,p(\lambda_0) + (y - y_0)\,q(\lambda_0) = (x - x_0)\,p(\lambda_0 + h) + (y - y_0)\,q(\lambda_0 + h).$$
Dividing by $h$, letting $h \to 0$ and using that on $L$ the vector $(x - x_0, y - y_0)$ is proportional to $(x'(0), y'(0))$, we obtain
$$x'(0)\,p'(\lambda_0) + y'(0)\,q'(\lambda_0) = 0.$$
[Figure 2.1: Curve on a surface]

From the differential equation
$$a_1(x(0), y(0))\,p(\lambda) + a_2(x(0), y(0))\,q(\lambda) = 0$$
it follows
$$a_1\,p'(\lambda_0) + a_2\,q'(\lambda_0) = 0.$$
Consequently
$$(x'(\tau), y'(\tau)) = \frac{x'(\tau)}{a_1(x(\tau), y(\tau))}\,\big(a_1(x(\tau), y(\tau)),\ a_2(x(\tau), y(\tau))\big),$$
since the parameter value $\tau = 0$ was arbitrary. Here we assume that $x'(\tau) \ne 0$ and $a_1(x(\tau), y(\tau)) \ne 0$.

Then we introduce a new parameter $t$ by the inverse of $\tau = \tau(t)$, where
$$t(\tau) = \int_0^\tau \frac{x'(s)}{a_1(x(s), y(s))}\,ds.$$
It follows $x'(t) = a_1(x, y)$, $y'(t) = a_2(x, y)$. We denote $x(\tau(t))$ by $x(t)$ again.

Now we consider the initial value problem
$$x'(t) = a_1(x, y), \quad y'(t) = a_2(x, y), \quad x(0) = x_0, \quad y(0) = y_0. \qquad (2.2)$$
From the theory of ordinary differential equations it follows (Theorem of Picard-Lindelöf) that there is a unique solution in a neighbourhood of $t = 0$, provided the functions $a_1, a_2$ are in $C^1$. From this definition of the curves $(x(t), y(t))$ it follows that the field of directions $(a_1(x_0, y_0), a_2(x_0, y_0))$ defines the slope of these curves at $(x(0), y(0))$.

Definition. The differential equations in (2.2) are called characteristic equations or characteristic system, and solutions of the associated initial value problem are called characteristic curves.

Definition. A function $\phi(x, y)$ is said to be an integral of the characteristic system if $\phi(x(t), y(t)) = $ const. for each characteristic curve. The constant depends on the characteristic curve considered.

Proposition 2.1. Assume $\phi \in C^1$ is an integral, then $u = \phi(x, y)$ is a solution of (2.1).

Proof. Consider for given $(x_0, y_0)$ the above initial value problem (2.2). Since $\phi(x(t), y(t)) = $ const. it follows
$$\phi_x x' + \phi_y y' = 0$$
for $|t| < t_0$, $t_0 > 0$ and sufficiently small. Thus
$$\phi_x(x_0, y_0)\,a_1(x_0, y_0) + \phi_y(x_0, y_0)\,a_2(x_0, y_0) = 0. \qquad \Box$$

Remark. If $\phi(x, y)$ is a solution of equation (2.1), then also $H(\phi(x, y))$, where $H(s)$ is a given $C^1$-function.

Examples

1. Consider
$$a_1 u_x + a_2 u_y = 0,$$
where $a_1, a_2$ are constants. The system of characteristic equations is
$$x' = a_1, \quad y' = a_2.$$
Thus the characteristic curves are parallel straight lines defined by
$$x = a_1 t + A, \quad y = a_2 t + B,$$
where $A$, $B$ are arbitrary constants. From these equations it follows that $\phi(x, y) := a_2 x - a_1 y$ is constant along each characteristic curve. Consequently, see Proposition 2.1,
$$u = a_2 x - a_1 y$$
is a solution of the differential equation. From an exercise it follows that
$$u = H(a_2 x - a_1 y), \qquad (2.3)$$
where $H(s)$ is an arbitrary $C^1$-function, is also a solution. Since $u$ is constant when $a_2 x - a_1 y$ is constant, equation (2.3) defines cylinder surfaces which are generated by parallel straight lines which are parallel to the $(x, y)$-plane, see Figure 2.2.

[Figure 2.2: Cylinder surfaces]

2. Consider the differential equation
$$x u_x + y u_y = 0.$$
The characteristic equations are
$$x' = x, \quad y' = y,$$
and the characteristic curves are given by
$$x = A e^t, \quad y = B e^t,$$
where $A$, $B$ are arbitrary constants. Thus an integral is $y/x$, $x \ne 0$, and for a given $C^1$-function $H$ the function $u = H(y/x)$ is a solution of the differential equation. If $y/x = $ const., then $u$ is constant. Suppose that $H'(s) > 0$, for example, then $u$ defines right helicoids (in German: Wendelflächen), see Figure 2.3.

[Figure 2.3: Right helicoid, $a^2 < x^2 + y^2 < R^2$ (Museo Ideale Leonardo da Vinci, Italy)]

3. Consider the differential equation
$$y u_x - x u_y = 0.$$
The associated characteristic system is
$$x' = y, \quad y' = -x.$$
It follows
$$x' x + y y' = 0,$$
or, equivalently,
$$\frac{d}{dt}\left(x^2 + y^2\right) = 0,$$
which implies that $x^2 + y^2 = $ const. along each characteristic. Thus, rotationally symmetric surfaces defined by $u = H(x^2 + y^2)$, where $H' \ne 0$, are solutions of the differential equation.

4. The associated characteristic equations to
$$a y u_x + b x u_y = 0,$$
where $a$, $b$ are positive constants, are given by
$$x' = a y, \quad y' = b x.$$
It follows
$$b x x' - a y y' = 0,$$
or equivalently,
$$\frac{d}{dt}\left(b x^2 - a y^2\right) = 0.$$
Solutions of the differential equation are
$$u = H(b x^2 - a y^2),$$
which define surfaces which have a hyperbola as the intersection with planes parallel to the $(x, y)$-plane. Here $H(s)$ is an arbitrary $C^1$-function, $H'(s) \ne 0$.
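A quick numerical check of Example 3, not contained in the original text: integrating the characteristic system $x' = y$, $y' = -x$ and evaluating along the resulting curve shows that $x^2 + y^2$ is indeed an integral, and that $u = H(x^2 + y^2)$ satisfies $y u_x - x u_y = 0$. The choice $H(s) = \sin s$, the initial point and the use of scipy are assumptions made for this sketch.

```python
# Characteristic curves of y*u_x - x*u_y = 0 (Example 3): x' = y, y' = -x.
import numpy as np
from scipy.integrate import solve_ivp

def characteristic(t, xy):
    x, y = xy
    return [y, -x]

sol = solve_ivp(characteristic, (0.0, 10.0), [1.0, 0.5],
                dense_output=True, rtol=1e-10, atol=1e-12)
t = np.linspace(0.0, 10.0, 200)
x, y = sol.sol(t)

# (a) x^2 + y^2 is an integral: it stays constant along the characteristic curve.
print(np.ptp(x**2 + y**2))                 # essentially zero (numerical error only)

# (b) u = H(x^2 + y^2) with the illustrative choice H(s) = sin s, H'(s) = cos s,
#     satisfies the differential equation y*u_x - x*u_y = 0 along the curve:
ux = np.cos(x**2 + y**2) * 2 * x
uy = np.cos(x**2 + y**2) * 2 * y
print(np.max(np.abs(y * ux - x * uy)))     # zero up to rounding
```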
2.2 Quasilinear equations Here we consider the equation a1(x, y, u)ux + a2(x, y, u)uy = a3(x, y, u). (2.4) The inhomogeneous linear equation a1(x, y)ux + a2(x, y)uy = a3(x, y) is a special case of (2.4). One arrives at characteristic equations x0 = a1, y0 = a2, z0 = a3 from (2.4) by the same arguments as in the case of homogeneous linear equations in two variables. The additional equation z0 = a3 follows from z0( ) = p()x0( ) + q()y0( ) = pa1 + qa2 = a3, see also Section 2.3, where the general case of nonlinear equations in two variables is considered. 32 CHAPTER 2. EQUATIONS OF FIRST ORDER 2.2.1 A linearization method We can transform the inhomogeneous equation (2.4) into a homogeneous linear equation for an unknown function of three variables by the following trick. We are looking for a function (x, y, u) such that the solution u = u(x, y) of (2.4) is defined implicitly by (x, y, u) = const. Assume there is such a function and let u be a solution of (2.4), then x + uux = 0, y + uuy = 0. Assume u 6= 0, then ux = x u , uy = y u . From (2.4) we obtain a1(x, y, z)x + a2(x, y, z)y + a3(x, y, z)z = 0, (2.5) where z := u. We consider the associated system of characteristic equations x0(t) = a1(x, y, z) y0(t) = a2(x, y, z) z0(t) = a3(x, y, z). One arrives at this system by the same arguments as in the two-dimensional case above. Proposition 2.2. (i) Assume w 2 C1, w = w(x, y, z), is an integral, i. e., it is constant along each fixed solution of (2.5), then = w(x, y, z) is a solution of (2.5). (ii) The function z = u(x, y), implicitly defined through (x, u, z) = const., is a solution of (2.4), provided that z 6= 0. (iii) Let z = u(x, y) be a solution of (2.4) and let (x(t), y(t)) be a solution of x0(t) = a1(x, y, u(x, y)), y0(t) = a2(x, y, u(x, y)), then z(t) := u(x(t), y(t)) satisfies the third of the above characteristic equations. Proof. Exercise. 2.2. QUASILINEAR EQUATIONS 33 2.2.2 Initial value problem of Cauchy Consider again the quasilinear equation (?) a1(x, y, u)ux + a2(x, y, u)uy = a3(x, y, u). Let : x = x0(s), y = y0(s), z = z0(s), s1 s s2, 1 < s1 < s2 < +1 be a regular curve in R3 and denote by C the orthogonal projection of
onto the (x, y)-plane, i. e., C : x = x0(s), y = y0(s). Initial value problem of Cauchy: Find a C1-solution u = u(x, y) of (?) such that u(x0(s), y0(s)) = z0(s), i. e., we seek a surface S defined by z = u(x, y) which contains the curve . y z x C G Figure 2.4: Cauchy initial value problem Definition. The curve is said to be noncharacteristic if x00(s)a2(x0(s), y0(s)) y00 (s)a1(x0(s), y0(s)) 6= 0. Theorem 2.1. Assume a1, a2, a2 2 C1 in their arguments, the initial data x0, y0, z0 2 C1[s1, s2] and is noncharacteristic. 34 CHAPTER 2. EQUATIONS OF FIRST ORDER Then there is a neighbourhood of C such that there exists exactly one solution u of the Cauchy initial value problem. Proof. (i) Existence. Consider the following initial value problem for the system of characteristic equations to (?): x0(t) = a1(x, y, z) y0(t) = a2(x, y, z) z0(t) = a3(x, y, z) with the initial conditions x(s, 0) = x0(s) y(s, 0) = y0(s) z(s, 0) = z0(s). Let x = x(s, t), y = y(s, t), z = z(s, t) be the solution, s1 s s2, |t| < for an > 0. We will show that this set of strings sticked onto the curve , see Figure 2.4, defines a surface. To show this, we consider the inverse functions s = s(x, y), t = t(x, y) of x = x(s, t), y = y(s, t) and show that z(s(x, y), t(x, y)) is a solution of the initial problem of Cauchy. The inverse functions s and t exist in a neighbourhood of t = 0 since det @(x, y) @(s, t) t=0 = xs xt ys yt t=0 = x00(s)a2 y00 (s)a1 6= 0, and the initial curve is noncharacteristic by assumption. Set u(x, y) := z(s(x, y), t(x, y)), then u satisfies the initial condition since u(x, y)|t=0 = z(s, 0) = z0(s). The following calculation shows that u is also a solution of the differential equation (?). a1ux + a2uy = a1(zssx + zttx) + a2(zssy + ztty) = zs(a1sx + a2sy) + zt(a1tx + a2ty) = zs(sxxt + syyt) + zt(txxt + tyyt) = a3
2.2. QUASILINEAR EQUATIONS 35 since 0 = st = sxxt + syyt and 1 = tt = txxt + tyyt. (ii) Uniqueness. Suppose that v(x, y) is a second solution. Consider a point (x0, y0) in a neighbourhood of the curve (x0(s), y(s)), s1 s s2 + , > 0 small. The inverse parameters are s0 = s(x0, y0), t0 = t(x0, y0), see Figure 2.5. x y (x 0 (s),y0 (s)) (x,y) Figure 2.5: Uniqueness proof Let A : x(t) := x(s0, t), y(t) := y(s0, t), z(t) := z(s0, t) be the solution of the above initial value problem for the characteristic differential equations with the initial data x(s0, 0) = x0(s0), y(s0, 0) = y0(s0), z(s0, 0) = z0(s0). According to its construction this curve is on the surface S defined by u = u(x, y) and u(x0, y0) = z(s0, t0). Set (t) := v(x(t), y(t)) z(t), then 0(t) = vxx0 + vyy0 z0 = xxa1 + vya2 a3 = 0 and (0) = v(x(s0, 0), y(s0, 0)) z(s0, 0) = 0 since v is a solution of the differential equation and satisfies the initial condition by assumption. Thus, (t) 0, i. e., v(x(s0, t), y(s0, t)) z(s0, t) = 0. 36 CHAPTER 2. EQUATIONS OF FIRST ORDER Set t = t0, then v(x0, y0) z(s0, t0) = 0, which shows that v(x0, y0) = u(x0, y0) because of z(s0, t0) = u(x0, y0). 2 Remark. In general, there is no uniqueness if the initial curve is a characteristic curve, see an exercise and Figure 2.6 which illustrates this case. y z x u Sv S Figure 2.6: Multiple solutions Examples 1. Consider the Cauchy initial value problem ux + uy = 0 with the initial data x0(s) = s, y0(s) = 1, z0(s) is a given C1-function. These initial data are noncharacteristic since y00 a1x00a2 = 1. The solution of the associated system of characteristic equations x0(t) = 1, y0(t) = 1, u0(t) = 0 2.2. QUASILINEAR EQUATIONS 37 with the initial conditions x(s, 0) = x0(s), y(s, 0) = y0(s), z(s, 0) = z0(s) is given by x = t + x0(s), y = t + y0(s), z = z0(s), i. e., x = t + s, y = t + 1, z = z0(s).
It follows s = xy +1, t = y 1 and that u = z0(xy +1) is the solution of the Cauchy initial value problem. 2. A problem from kinetics in chemistry. Consider for x 0, y 0 the problem ux + uy = k0ek1x + k2 (1 u) with initial data u(x, 0) = 0, x > 0, and u(0, y) = u0(y), y > 0. Here the constants kj are positive, these constants define the velocity of the reactions in consideration, and the function u0(y) is given. The variable x is the time and y is the hight of a tube, for example, in which the chemical reaction takes place, and u is the concentration of the chemical substance. In contrast to our previous assumptions, the initial data are not in C1. The projection C1 [ C2 of the initial curve onto the (x, y)-plane has a corner at the origin, see Figure 2.7. x y x=y W W2 1 C C 1 2 Figure 2.7: Domains to the chemical kinetics example 38 CHAPTER 2. EQUATIONS OF FIRST ORDER The associated system of characteristic equations is x0(t) = 1, y0(t) = 1, z0(t) = k0ek1x + k2 (1 z). It follows x = t + c1, y = t + c2 with constants cj . Thus the projection of the characteristic curves on the (x, y)-plane are straight lines parallel to y = x. We will solve the initial value problems in the domains 1 and 2, see Figure 2.7, separately. (i)The initial value problem in 1. The initial data are x0(s) = s, y0(s) = 0, z0(0) = 0, s 0. It follows x = x(s, t) = t + s, y = y(s, t) = t. Thus z0(t) = (k0ek1(t+s) + k2)(1 z), z(0) = 0. The solution of this initial value problem is given by z(s, t) = 1 exp k0 k1 ek1(s+t) k2t k0 k1 ek1s
. Consequently u1(x, y) = 1 exp k0 k1 ek1x k2y k0k1ek1(xy) is the solution of the Cauchy initial value problem in 1. If time x tends to 1, we get the limit lim x!1 u1(x, y) = 1 ek2y. (ii) The initial value problem in 2. The initial data are here x0(s) = 0, y0(s) = s, z0(0) = u0(s), s 0. It follows x = x(s, t) = t, y = y(s, t) = t + s. Thus z0(t) = (k0ek1t + k2)(1 z), z(0) = 0. 2.2. QUASILINEAR EQUATIONS 39 The solution of this initial value problem is given by z(s, t) = 1 (1 u0(s)) exp k0 k1 ek1t k2t k0 k1 . Consequently u2(x, y) = 1 (1 u0(y x)) exp k0 k1 ek1x k2x k0 k1 is the solution in 2. If x = y, then u1(x, y) = 1 exp k0 k1 ek1x k2x k0 k1 u2(x, y) = 1 (1 u0(0)) exp k0 k1 ek1x k2x k0 k1
. If u0(0) > 0, then u1 < u2 if x = y, i. e., there is a jump of the concentration of the substrate along its burning front defined by x = y. Remark. Such a problem with discontinuous initial data is called Riemann problem. See an exercise for another Riemann problem. The case that a solution of the equation is known Here we will see that we get immediately a solution of the Cauchy initial value problem if a solution of the homogeneous linear equation a1(x, y)ux + a2(x, y)uy = 0 is known. Let x0(s), y0(s), z0(s), s1 < s < s2 be the initial data and let u = (x, y) be a solution of the differential equation. We assume that x(x0(s), y0(s))x00(s) + y(x0(s), y0(s))y00 (s) 6= 0 is satisfied. Set g(s) = (x0(s), y0(s)) and let s = h(g) be the inverse function. The solution of the Cauchy initial problem is given by u0 (h((x, y))). 40 CHAPTER 2. EQUATIONS OF FIRST ORDER This follows since in the problem considered a composition of a solution is a solution again, see an exercise, and since u0 (h((x0(s), y0(s))) = u0(h(g)) = u0(s). Example: Consider equation ux + uy = 0 with initial data x0(s) = s, y0(s) = 1, u0(s) is a given function. A solution of the differential equation is (x, y) = x y. Thus ((x0(s), y0(s)) = s 1 and u0( + 1) = u0(x y + 1) is the solution of the problem. 2.3 Nonlinear equations in two variables Here we consider equation F(x, y, z, p, q) = 0, (2.6) where z = u(x, y), p = ux(x, y), q = uy(x, y) and F 2 C2 is given such that F2 p + F2 q 6= 0. In contrast to the quasilinear case, this general nonlinear equation is more complicated. Together with (2.6) we will consider the following system of ordinary equations which follow from considerations below as necessary conditions, in particular from the assumption that there is a solution of (2.6). x0(t) = Fp (2.7) y0(t) = Fq (2.8) z0(t) = pFp + qFq (2.9) p0(t) = Fx Fup (2.10) q0(t) = Fy Fuq. (2.11) 2.3. NONLINEAR EQUATIONS IN TWO VARIABLES 41 Definition. Equations (2.7)(2.11) are said to be characteristic equations of equation (2.6) and a solution (x(t), y(t), z(t), p(t), q(t)) of the characteristic equations is called characteristic strip or Monge curve. Figure 2.8: Gaspard Monge (Pantheon, Paris) We will see, as in the quasilinear case, that the strips defined by the characteristic
equations build the solution surface of the Cauchy initial value problem. Let z = u(x, y) be a solution of the general nonlinear differential equation (2.6). Let (x0, y0, z0) be fixed, then equation (2.6) defines a set of planes given by (x0, y0, z0, p, q), i. e., planes given by z = v(x, y) which contain the point (x0, y0, z0) and for which vx = p, vy = q at (x0, y0). In the case of quasilinear equations these set of planes is a bundle of planes which all contain a fixed straight line, see Section 2.1. In the general case of this section the situation is more complicated. Consider the example p2 + q2 = f(x, y, z), (2.12) 42 CHAPTER 2. EQUATIONS OF FIRST ORDER where f is a given positive function. Let E be a plane defined by z = v(x, y) and which contains (x0, y0, z0). Then the normal on the plane E directed downward is N= 1p 1 + |rv|2 (p, q,1), where p = vx(x0, y0), q = vy(x0, y0). It follows from (2.12) that the normal N makes a constant angle with the z-axis, and the z-coordinate of N is constant, see Figure 2.9. y z x N P(l) (l) Figure 2.9: Monge cone in an example Thus the endpoints of the normals fixed at (x0, y0, z0) define a circle parallel to the (x, y)-plane, i. e., there is a cone which is the envelope of all these planes. We assume that the general equation (2.6) defines such a Monge cone at each point in R3. Then we seek a surface S which touches at each point its Monge cone, see Figure 2.10. More precisely, we assume there exists, as in the above example, a one parameter C1-family p() = p(; x, y, z), q() = q(; x, y, z) of solutions of (2.6). These (p(), q()) define a family () of planes. 2.3. NONLINEAR EQUATIONS IN TWO VARIABLES 43 y z x Figure 2.10: Monge cones Let x( ) = (x( ), y( ), z( )) be a curve on the surface S which touches at each point its Monge cone, see Figure 2.11. Thus we assume that at each point of the surface S the associated tangent plane coincides with a plane from the family () at this point. Consider the tangential plane Tx0 of the surface S at x0 = (x(0), y(0), z(0)). The straight line l() = x0 + x0(0), 1 < < 1, is an apothem (in German: Mantellinie) of the cone by assumption and is contained in the tangential plane Tx0 as the tangent of a curve on the surface S. It is defined through
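The geometric picture can also be checked numerically in the example $p^2 + q^2 = f$. The following sketch, not part of the text, fixes a point, parametrizes the admissible ascents by $p(\lambda) = \sqrt{f}\cos\lambda$, $q(\lambda) = \sqrt{f}\sin\lambda$, intersects neighbouring tangent planes and verifies that all contact lines make the same angle with the $z$-axis, i. e. that they generate a cone. The value of $f$ and the numerical parameters are assumptions made for the illustration; only numpy is used.

```python
# Monge cone of p^2 + q^2 = f at a fixed point (illustrative value f = 4).
# A tangent plane z - z0 = (x - x0) p + (y - y0) q has the normal (p, q, -1); the
# contact line of the neighbouring planes Pi(lambda) and Pi(lambda + h) is approximated
# by the intersection of the two planes, i.e. by the cross product of their normals.
import numpy as np

f = 4.0
r = np.sqrt(f)
h = 1e-6                                     # small parameter increment

angles = []
for lam in np.linspace(0.0, 2 * np.pi, 25):
    n1 = np.array([r * np.cos(lam),     r * np.sin(lam),     -1.0])
    n2 = np.array([r * np.cos(lam + h), r * np.sin(lam + h), -1.0])
    d = np.cross(n1, n2)                     # direction of the intersection line
    d /= np.linalg.norm(d)
    angles.append(np.arccos(abs(d[2])))      # angle between the contact line and the z-axis

print(np.ptp(angles))                        # ~0: all contact lines generate one cone
print(np.degrees(angles[0]))                 # the half opening angle, here arctan(1/sqrt(f))
```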
x0(0) = l0(). (2.13) The straight line l() satisfies l3() z0 = (l1() x0)p(0) + (l2() y0)q(0), since it is contained in the tangential plane Tx0 defined by the slope (p, q). It follows l03() = p(0)l01 () + q(0)l02(). 44 CHAPTER 2. EQUATIONS OF FIRST ORDER y z x S Tx0 Figure 2.11: Monge cones along a curve on the surface Together with (2.13) we obtain z0( ) = p(0)x0( ) + q(0)y0( ). (2.14) The above straight line l is the limit of the intersection line of two neighbouring planes which envelopes the Monge cone: z z0 = (x x0)p(0) + (y y0)q(0) z z0 = (x x0)p(0 + h) + (y y0)q(0 + h). On the intersection one has (x x0)p() + (y y0)q(0) = (x x0)p(0 + h) + (y y0)q(0 + h). Let h ! 0, it follows (x x0)p0(0) + (y y0)q0(0) = 0. Since x = l1(), y = l2() in this limit position, we have p0(0)l01 () + q0(0)l02 () = 0, and it follows from (2.13) that p0(0)x0( ) + q0(0)y0( ) = 0. (2.15) 2.3. NONLINEAR EQUATIONS IN TWO VARIABLES 45 From the differential equation F(x0, y0, z0, p(), q()) = 0 we see that Fpp0() + Fqq0() = 0. (2.16) Assume x0(0) 6= 0 and Fp 6= 0, then we obtain from (2.15), (2.16) y0(0) x0(0) = Fq Fp , and from (2.14) (2.16) that z0(0) x0(0) =p+q Fq Fp . It follows, since 0 was an arbitrary fixed parameter, x0( ) = (x0( ), y0( ), z0( )) = x0( ), x0( ) Fq Fp , x0( ) p+q Fq Fp
= x0( ) Fp (Fp, Fq, pFp + qFq), i. e., the tangential vector x0( ) is proportional to (Fp, Fq, pFp + qFq). Set a( ) = x0( ) Fp , where F = F(x( ), y( ), z( ), p(( )), q(( ))). Introducing the new parameter t by the inverse of = (t), where t( ) = Z 0 a(s) ds, we obtain the characteristic equations (2.7)(2.9). Here we denote x( (t)) by x(t) again. From the differential equation (2.6) and from (2.7)(2.9) we get equations (2.10) and (2.11). Assume the surface z = u(x, y) under consideration is in C2, then Fx + Fzp + Fppx + Fqpy = 0, (qx = py) Fx + Fzp + x0(t)px + y0(t)py = 0 Fx + Fzp + p0(t) = 0 46 CHAPTER 2. EQUATIONS OF FIRST ORDER since p = p(x, y) = p(x(t), y(t)) on the curve x(t). Thus equation (2.10) of the characteristic system is shown. Differentiating the differential equation (2.6) with respect to y, we get finally equation (2.11). Remark. In the previous quasilinear case F(x, y, z, p, q) = a1(x, y, z)p + a2(x, y, z)q a3(x, y, z) the first three characteristic equations are the same: x0(t) = a1(x, y, z), y0(t) = a2(x, y, z), z0(t) = a3(x, y, z). The point is that the right hand sides are independent on p or q. It follows from Theorem 2.1 that there exists a solution of the Cauchy initial value problem provided the initial data are noncharacteristic. That is, we do not need the other remaining two characteristic equations. The other two equations (2.10) and (2.11) are satisfied in this quasilinear case automatically if there is a solution of the equation, see the above derivation of these equations. The geometric meaning of the first three characteristic differential equations (2.7)(2.11) is the following one. Each point of the curve A : (x(t), y(t), z(t)) corresponds a tangential plane with the normal direction (p,q, 1) such that z0(t) = p(t)x0(t) + q(t)y0(t). This equation is called strip condition. On the other hand, let z = u(x, y) defines a surface, then z(t) := u(x(t), y(t)) satisfies the strip condition, where p = ux and q = uy, that is, the scales defined by the normals fit together. Proposition 2.3. F(x, y, z, p, q) is an integral, i. e., it is constant along each characteristic curve. Proof. d dt F(x(t), y(t), z(t), p(t), q(t)) = Fxx0 + Fyy0 + Fzz0 + Fpp0 + Fqq0 = FxFp + FyFq + pFzFp + qFzFq Fpfx FpFzp FqFy FqFzq = 0. 2.3. NONLINEAR EQUATIONS IN TWO VARIABLES 47
2 Corollary. Assume F(x0, y0, z0, p0, q0) = 0, then F = 0 along characteristic curves with the initial data (x0, y0, z0, p0, q0). Proposition 2.4. Let z = u(x, y), u 2 C2, be a solution of the nonlinear equation (2.6). Set z0 = u(x0, y0, ) p0 = ux(x0, y0), q0 = uy(x0, y0). Then the associated characteristic strip is in the surface S, defined by z = u(x, y). Thus z(t) = u(x(t), y(t)) p(t) = ux(x(t), y(t)) q(t) = uy(x(t), y(t)), where (x(t), y(t), z(t), p(t), q(t)) is the solution of the characteristic system (2.7)(2.11) with initial data (x0, y0, z0, p0, q0) Proof. Consider the initial value problem x0(t) = Fp(x, y, u(x, y), ux(x, y), uy(x, y)) y0(t) = Fq(x, y, u(x, y), ux(x, y), uy(x, y)) with the initial data x(0) = x0, y(0) = y0. We will show that (x(t), y(t), u(x(t), y(t)), ux(x(t), y(t)), uy(x(t), y(t))) is a solution of the characteristic system. We recall that the solution exists and is uniquely determined. Set z(t) = u(x(t), y(t)), then (x(t), y(t), z(t)) S, and z0(t) = uxx0(t) + uyy0(t) = uxFp + uyFq. Set p(t) = ux(x(t), y(t)), q(t) = uy(x(t), y(t)), then p0(t) = uxxFp + uxyFq q0(t) = uyxFp + uyyFq. Finally, from the differential equation F(x, y, u(x, y), ux(x, y), uy(x, y)) = 0 it follows p0(t) = Fx Fup q0(t) = Fy Fuq. 2 48 CHAPTER 2. EQUATIONS OF FIRST ORDER 2.3.1 Initial value problem of Cauchy Let x = x0(s), y = y0(s), z = z0(s), p = p0(s), q = q0(s), s1 < s < s2, (2.17) be a given initial strip such that the strip condition z00 (s) = p0(s)x00(s) + q0(s)y00 (s) (2.18) is satisfied. Moreover, we assume that the initial strip satisfies the nonlinear equation, that is, F(x0(s), y0(s), z0(s), p0(s), q0(s)) = 0. (2.19) Initial value problem of Cauchy: Find a C2-solution z = u(x, y) of F(x, y, z, p, q) = 0 such that the surface S defined by z = u(x, y) contains the above initial strip. Similar to the quasilinear case we will show that the set of strips defined by the characteristic system which are sticked at the initial strip, see Figure 2.12, fit together and define the surface for which we are looking at. Definition. A strip (x( ), y( ), z( ), p( ), q( )), 1 < < 2, is said to be noncharacteristic if x0( )Fq(x( ), y( ), z( ), p( ), q( ))y0( )Fp(x( ), y( ), z( ), p( ), q( )) 6= 0. Theorem 2.2. For a given noncharacteristic initial strip (2.17), x0, y0, z0 2 C2 and p0, q0 2 C1 which satisfies the strip condition (2.18) and the differential equation (2.19) there exists exactly one solution z = u(x, y) of the Cauchy initial value problem in a neighbourhood of the initial curve (x0(s), y0(s), z0(s)), i. e., z = u(x, y) is the solution of the differential equation (2.6) and u(x0(s), y0(s)) = z0(s), ux(x0(s), y0(s)) = p0(s), uy(x0(s), y0(s)) = q0(s). Proof. Consider the system (2.7)(2.11) with initial data
x(s, 0) = x0(s), y(s, 0) = y0(s), z(s, 0) = z0(s), p(s, 0) = p0(s), q(s, 0) = q0(s). We will show that the surface defined by x = x(s, t), y(s, t) is the surface defined by z = u(x, y), where u is the solution of the Cauchy initial value 2.3. NONLINEAR EQUATIONS IN TWO VARIABLES 49 y z x t=0 t>0 Figure 2.12: Construction of the solution problem. It turns out that u(x, y) = z(s(x, y), t(x, y)), where s = s(x, y), t = t(x, y) is the inverse of x = x(s, t), y = y(s, t) in a neigbourhood of t = 0. This inverse exists since the initial strip is noncharacteristic by assumption: det @(x, y) @(s, t) t=0 = x0Fq y0Fq 6= 0. Set P(x, y) = p(s(x, y), t(x, y)), Q(x, y) = q(s(x, y), t(x, y)). From Proposition 2.3 and Proposition 2.4 it follows F(x, y, u, P,Q) = 0. We will show that P(x, y) = ux(x, y) and Q(x, y) = uy(x, y). To see this, we consider the function h(s, t) = zs pxs qys. One has h(s, 0) = z00(s) p0(s)x00(s) q0(s)y00 (s) = 0 since the initial strip satisfies the strip condition by assumption. In the following we will find that for fixed s the function h satisfies a linear homogeneous ordininary differential equation of first order. Consequently, 50 CHAPTER 2. EQUATIONS OF FIRST ORDER h(s, t) = 0 in a neighbourhood of t = 0. Thus the strip condition is also satisfied along strips transversally to the characteristic strips, see Figure 2.18. Thaen the set of scales fit together and define a surface like the scales of a fish. From the definition of h(s, t) and the characteristic equations we get ht(s, t) = zst ptxs qtys pxst qyst = @ @s (zt pxt qyt) + psxt + qsyt qtys ptxs = (pxs + qys)Fz + Fxxs + Fyzs + Fpps + Fqqs. Since F(x(s, t), y(s, t), z(s, t), p(s, t), q(s, t)) = 0, it follows after differentiation of this equation with respect to s the differential equation ht = Fzh. Hence h(s, t) 0, since h(s, 0) = 0. Thus we have zs = pxs + qys zt = pxt + qyt zs = uxxs + uyys zt = uxyt + uyyt. The first equation was shown above, the second is a characteristic equation and the last two follow from z(s, t) = u(x(s, t), y(s, t)). This system implies (P ux)xs + (Q uy)ys = 0 (P ux)xt + (Q uy)yt = 0. It follows P = ux and Q = uy.
The initial conditions u(x(s, 0), y(s, 0)) = z0(s) ux(x(s, 0), y(s, 0)) = p0(s) uy(x(s, 0), y(s, 0)) = q0(s) are satisfied since u(x(s, t), y(s, t)) = z(s(x, y), t(x, y)) = z(s, t) ux(x(s, t), y(s, t)) = p(s(x, y), t(x, y)) = p(s, t) uy(x(s, t), y(s, t)) = q(s(x, y), t(x, y)) = q(s, t). 2.4. NONLINEAR EQUATIONS IN RN 51 The uniqueness follows as in the proof of Theorem 2.1. 2 Example. A differential equation which occurs in the geometrical optic is u2 x + u2y = f(x, y), where the positive function f(x, y) is the index of refraction. The level sets defined by u(x, y) = const. are called wave fronts. The characteristic curves (x(t), y(t)) are the rays of light. If n is a constant, then the rays of light are straight lines. In R3 the equation is u2 x + u2y + u2z = f(x, y, z). Thus we have to extend the previous theory from R2 to Rn, n 3. 2.4 Nonlinear equations in Rn Here we consider the nonlinear differential equation F(x, z, p) = 0, (2.20) where x = (x1, . . . , xn), z = u(x) : Rn 7! R, p = ru. The following system of 2n+1 ordinary differential equations is called characteristic system. x0(t) = rpF z0(t) = p rpF p0(t) = rxF Fzp. Let x0(s) = (x01(s), . . . , x0n(s)), s = (s1, . . . , sn1), be a given regular (n-1)-dimensional C2-hypersurface in Rn, i. e., we assume rank @x0(s) @s = n 1. Here s 2 D is a parameter from an (n 1)-dimensional parameter domain D. For example, x = x0(s) defines in the three dimensional case a regular surface in R3. 52 CHAPTER 2. EQUATIONS OF FIRST ORDER Assume z0(s) : D 7! R, p0(s) = (p01(s), . . . , p0n(s)) are given sufficiently regular functions. The (2n + 1)-vector (x0(s), z0(s), p0(s)) is called initial strip manifold and the condition @z0 @sl = nX1 i=1
p0i(s) @x0i @sl , l = 1, . . . , n 1, strip condition. The initial strip manifold is said to be noncharacteristic if det 0 BBB@ Fp1 Fp2 Fpn @x01 @s1 @x02 @s1 @x0n @s1 ......................... @x01 @sn1 @x02 @sn1 @x0n @sn1 1 CCCA 6= 0, where the argument of Fpj is the initial strip manifold. Initial value problem of Cauchy. Seek a solution z = u(x) of the differential equation (2.20) such that the initial manifold is a subset of {(x, u(x),ru(x)) : x 2 }. As in the two dimensional case we have under additional regularity assumptions Theorem 2.3. Suppose the initial strip manifold is not characteristic and satisfies differential equation (2.20), that is, F(x0(s), z0(s), p0(s)) = 0. Then there is a neighbourhood of the initial manifold (x0(s), z0(s)) such that there exists a unique solution of the Cauchy initial value problem. Sketch of proof. Let x = x(s, t), z = z(s, t), p = p(s, t) be the solution of the characteristic system and let s = s(x), t = t(x) 2.5. HAMILTON-JACOBI THEORY 53 be the inverse of x = x(s, t) which exists in a neighbourhood of t = 0. Then, it turns out that z = u(x) := z(s1(x1, . . . , xn), . . . , sn1(x1, . . . , xn), t(x1, . . . , xn)) is the solution of the problem. 2.5 Hamilton-Jacobi theory The nonlinear equation (2.20) of previous section in one more dimension is F(x1, . . . , xn, xn+1, z, p1, . . . , pn, pn+1) = 0. The content of the Hamilton1-Jacobi2 theory is the theory of the special case F pn+1 + H(x1, . . . , xn, xn+1, p1, . . . , pn) = 0, (2.21) i. e., the equation is linear in pn+1 and does not depend on z explicitly. Remark. Formally, one can write equation (2.20) F(x1, . . . , xn, u, ux1 , . . . , uxn) = 0 as an equation of type (2.21). Set xn+1 = u and seek u implicitely from (x1, . . . , xn, xn+1) = const., where is a function which is defined by a differential equation. Assume xn+1 6= 0, then 0 = F(x1, . . . , xn, u, ux1 , . . . , uxn)
= F(x1, . . . , xn, xn+1, x1 xn+1 , . . . , xn xn+1 ) = : G(x1, . . . , xn+1, 1, . . . , xn+1). Suppose that Gxn+1 6= 0, then xn+1 = H(x1, . . . , xn, xn+1, x1 , . . . , xn+1). 1Hamilton, William Rowan, 18051865 2Jacobi, Carl Gustav, 18051851 54 CHAPTER 2. EQUATIONS OF FIRST ORDER The associated characteristic equations to (2.21) are x0n+1( ) = Fpn+1 = 1 x0k( ) = Fpk = Hpk , k = 1, . . . , n z0( ) = nX+1 l=1 plFpl = Xn l=1 plHpl + pn+1 = Xn l=1 plHpl H p0n+1( ) = Fxn+1 Fzpn+1 = Fxn+1 p0k( ) = Fxk Fzpk = Fxk , k = 1, . . . , n. Set t := xn+1, then we can write partial differential equation (2.21) as ut + H(x, t,rxu) = 0 (2.22) and 2n of the characteristic equations are x0(t) = rpH(x, t, p) (2.23) p0(t) = rxH(x, t, p). (2.24) Here is x = (x1, . . . , xn), p = (p1, . . . , pn). Let x(t), p(t) be a solution of (2.23) and (2.24), then it follows p0n+1(t) and z0(t) from the characteristic equations p0n+1(t) = Ht z0(t) = p rpH H. Definition. The function H(x, t, p) is called Hamilton function, equation (2.21) Hamilton-Jacobi equation and the system (2.23), (2.24) canonical system to H. There is an interesting interplay between the Hamilton-Jacobi equation and the canonical system. According to the previous theory we can construct a solution of the Hamilton-Jacobi equation by using solutions of the 2.5. HAMILTON-JACOBI THEORY 55 canonical system. On the other hand, one obtains from solutions of the Hamilton-Jacobi equation also solutions of the canonical system of ordinary differential equations. Definition. A solution (a; x, t) of the Hamilton-Jacobi equation, where a = (a1, . . . , an) is an n-tuple of real parameters, is called a complete integral of the Hamilton-Jacobi equation if det(xial)n
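The canonical system $x'(t) = \nabla_p H$, $p'(t) = -\nabla_x H$, which is obtained from the characteristic equations just below, can be integrated numerically. The following sketch is not part of the text; the Hamilton function $H(x, p) = \frac{1}{2}\left(|p|^2 + |x|^2\right)$, the initial data and the use of scipy are assumptions made for the illustration.

```python
# Numerical integration of the canonical system x'(t) = H_p, p'(t) = -H_x
# for the illustrative Hamilton function H(x, p) = (|p|^2 + |x|^2)/2 in R^2.
import numpy as np
from scipy.integrate import solve_ivp

def H(x, p):
    return 0.5 * (np.dot(p, p) + np.dot(x, x))

def canonical(t, xp):
    x, p = xp[:2], xp[2:]
    return np.concatenate((p, -x))            # H_p = p and -H_x = -x for this H

x0, p0 = np.array([1.0, 0.0]), np.array([0.0, 0.5])       # made-up initial data
sol = solve_ivp(canonical, (0.0, 20.0), np.concatenate((x0, p0)),
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 20.0, 200)
vals = sol.sol(t)
energies = [H(vals[:2, k], vals[2:, k]) for k in range(t.size)]
# Since this H has no explicit t-dependence, dH/dt = H_x . x' + H_p . p' = 0 along
# trajectories of the canonical system, so H is a constant of the motion:
print(np.ptp(energies))                       # ~0 up to the integration tolerance
```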
i,l=1 6= 0. Remark. If u is a solution of the Hamilton-Jacobi equation, then also u + const. Theorem 2.4 (Jacobi). Assume u = (a; x, t) + c, c = const., 2 C2 in its arguments, is a complete integral. Then one obtains by solving of bi = ai(a; x, t) with respect to xl = xl(a, b, t), where bi i = 1, . . . , n are given real constants, and then by setting pk = xk (a; x(a, b; t), t) a 2n-parameter family of solutions of the canonical system. Proof. Let xl(a, b; t), l = 1, . . . , n, be the solution of the above system. The solution exists since is a complete integral by assumption. Set pk(a, b; t) = xk (a; x(a, b; t), t), k = 1, . . . , n. We will show that x and p solves the canonical system. Differentiating ai = bi with respect to t and the Hamilton-Jacobi equation t+H(x, t,rx) = 0 with respect to ai, we obtain for i = 1, . . . , n tai + Xn k=1 xkai @xk @t =0 tai + Xn k=1 xkaiHpk = 0. 56 CHAPTER 2. EQUATIONS OF FIRST ORDER Since is a complete integral it follows for k = 1, . . . , n @xk @t = Hpk . Along a trajectory, i. e., where a, b are fixed, it is @xk @t = x0k(t). Thus x0k(t) = Hpk . Now we differentiate pi(a, b; t) with respect to t and t + H(x, t,rx) = 0 with respect to xi, and obtain p0i(t) = xit + Xn k=1 xixkx0k(t) 0 = xit + Xn k=1 xixkHpk + Hxi 0 = xit + Xn k=1 xixkx0k(t) + Hxi It follows finally that p0i(t) = Hxi . 2 Example: Kepler problem The motion of a mass point in a central field takes place in a plane, say the (x, y)-plane, see Figure 2.13, and satisfies the system of ordinary
differential equations of second order x00(t) = Ux, y00(t) = Uy, where U(x, y) = k2 p x2 + y2 . Here we assume that k2 is a positive constant and that the mass point is attracted of the origin. In the case that it is pushed one has to replace U by U. See Landau and Lifschitz [12], Vol 1, for example, for the related physics. Set p = x0, q = y0 and H= 1 2 (p2 + q2) U(x, y), 2.5. HAMILTON-JACOBI THEORY 57 x y (x(t),y(t)) (Ux ,U y ) q Figure 2.13: Motion in a central field then x0(t) = Hp, y0(t) = Hq p0(t) = Hx, q0(t) = Hy. The associated Hamilton-Jacobi equation is t + 1 2 (2 x + 2y )= k2 p x2 + y2 . which is in polar coordinates (r, ) t + 1 2 (2r + 1 r2 2 )= k2 r . (2.25) Now we will seek a complete integral of (2.25) by making the ansatz t = = const. = = const. (2.26) and obtain from (2.25) that =
Zr r0 s 2 + 2k2 2 2 d + c(t, ). From ansatz (2.26) it follows c(t, ) = t . Therefore we have a two parameter family of solutions = (, ; , r, t) 58 CHAPTER 2. EQUATIONS OF FIRST ORDER of the Hamilton-Jacobi equation. This solution is a complete integral, see an exercise. According to the theorem of Jacobi set = t0, = 0. Then t t0 = Zr r0 d q 2 + 2k2 2 2 . The inverse function r = r(t), r(0) = r0, is the r-coordinate depending on time t, and 0 = Zr r0 d 2 q 2 + 2k2 2 2 . Substitution = 1 yields 0 = Z 1/r 1/r0 d p 2 + 2k2 2 2 = arcsin 2 k2 1 r1q 1 + 22 k4 ! + arcsin 2 k2 1
r0 1 q 1 + 22 k4 ! . Set 1 = 0 + arcsin 2 k2 1 r0 1 q 1 + 22 k4 ! and p= 2 k2 , 2 = r 1+ 22 k4 , then 1 = arcsin p r1 2 . It follows r = r() = p 1 2 sin( 1) , which is the polar equation of conic sections. It defines an ellipse if 0 < 1, a parabola if = 1 and a hyperbola if > 1, see Figure 2.14 for the case of an ellipse, where the origin of the coordinate system is one of the focal points of the ellipse. For another application of the Jacobi theorem see Courant and Hilbert [4], Vol. 2, pp. 94, where geodedics on an ellipsoid are studied. 2.6. EXERCISES 59 q1 p p 1+ 1e e 2 2 p Figure 2.14: The case of an ellipse 2.6 Exercises 1. Suppose u : R2 7! R is a solution of a(x, y)ux + b(x, y)uy = 0.
Show that for arbitrary H 2 C1 also H(u) is a solution. 2. Find a solution u 6 const. of ux + uy = 0 such that graph(u) := {(x, y, z) 2 R3 : z = u(x, y), (x, y) 2 R2} contains the straight line (0, 0, 1) + s(1, 1, 0), s 2 R. 3. Let (x, y) be a solution of a1(x, y)ux + a2(x, y)uy = 0 . Prove that level curves SC := {(x, y) : (x, y) = C = const.} are characteristic curves, provided that r 6= 0 and (a1, a2) 6= (0, 0). 60 CHAPTER 2. EQUATIONS OF FIRST ORDER 4. Prove Proposition 2.2. 5. Find two different solutions of the initial value problem ux + uy = 1, where the initial data are x0(s) = s, y0(s) = s, z0(s) = s. Hint: (x0, y0) is a characteristic curve. 6. Solve the initial value problem xux + yuy = u with initial data x0(s) = s, y0(s) = 1, z0(s), where z0 is given. 7. Solve the initial value problem xux + yuy = xu2, x0(s) = s, y0(s) = 1, z0(s) = es. 8. Solve the initial value problem uux + uy = 1, x0(s) = s, y0(s) = s, z0(s) = s/2 if 0 < s < 1. 9. Solve the initial value problem uux + uuy = 2, x0(s) = s, y0(s) = 1, z0(s) = 1 + s if 0 < s < 1. 10. Solve the initial value problem u2 x +u2y = 1+x with given initial data x0(s) = 0, y0(s) = s, u0(s) = 1, p0(s) = 1, q0(s) = 0, 1 < s < 1. 11. Find the solution (x, y) of (x y)ux + 2yuy = 3x such that the surface defined by z = (x, y) contains the curve C : x0(s) = s, y0(s) = 1, z0(s) = 0, s 2 R. 2.6. EXERCISES 61 12. Solve the following initial problem of chemical kinetics. ux + uy = k0ek1x + k2 (1 u)2, x > 0, y > 0 with the initial data u(x, 0) = 0, u(0, y) = u0(y), where u0, 0 < u0 < 1, is given. 13. Solve the Riemann problem ux1 + ux2 = 0 u(x1, 0) = g(x1) in 1 = {(x1, x2) 2 R2 : x1 > x2} and in 2 = {(x1, x2) 2 R2 : x1 < x2}, where g(x1) = ul : x1 < 0 ur : x1 > 0 with constants ul 6= ur. 14. Determine the opening angle of the Monge cone, i. e., the angle between
the axis and the apothem (in German: Mantellinie) of the cone, for equation u2 x + u2y = f(x, y, u), where f > 0. 15. Solve the initial value problem u2 x + u2y = 1, where x0() = a cos , y0() = a sin , z0() = 1, p0() = cos , q0() = sin if 0 < 2, a = const. > 0. 16. Show that the integral (, ; , r, t), see the Kepler problem, is a complete integral. 17. a) Show that S = p x + p1 y + , , 2 R, 0 < < 1, is a complete integral of Sx q 1 S2 y = 0. b) Find the envelope of this family of solutions. 18. Determine the length of the half axis of the ellipse r= p 1 "2 sin( 0) , 0 " < 1. 62 CHAPTER 2. EQUATIONS OF FIRST ORDER 19. Find the Hamilton function H(x, p) of the Hamilton-Jacobi-Bellman differential equation if h = 0 and f = Ax + B, where A, B are constant and real matrices, A : Rm 7! Rn, B is an orthogonal real n n-Matrix and p 2 Rn is given. The set of admissible controls is given by U = { 2 Rn : Xn i=1 2 i 1} . Remark. The Hamilton-Jacobi-Bellman equation is formally the HamiltonJacobi equation ut + H(x,ru) = 0, where the Hamilton function is defined by H(x, p) := min 2U (f(x, ) p + h(x, )) , f(x, ) and h(x, ) are given. See for example, Evans [5], Chapter 10. Chapter 3 Classification Different types of problems in physics, for example, correspond different types of partial differential equations. The methods how to solve these equations differ from type to type. The classification of differential equations follows from one single question: Can we calculate formally the solution if sufficiently many initial data are given? Consider the initial problem for an ordinary differential equation y0(x) = f(x, y(x)), y(x0) = y0. Then one can determine formally the solution, provided the function f(x, y) is sufficiently regular. The solution of the initial value problem is formally given by a power series. This formal solution is a solution of the problem if f(x, y) is real analytic according to a theorem of Cauchy. In the case of partial differential equations the related
theorem is the Theorem of Cauchy-Kowalevskaya. Even in the case of ordinary differential equations the situation is more complicated if y0 is implicitly defined, i. e., the differential equation is F(x, y(x), y0(x)) = 0 for a given function F. 3.1 Linear equations of second order The general nonlinear partial differential equation of second order is F(x, u,Du,D2u) = 0, where x 2 Rn, u : Rn 7! R, Du ru and D2u stands for all second derivatives. The function F is given and sufficiently regular with respect to its 2n + 1 + n2 arguments. 63 64 CHAPTER 3. CLASSIFICATION In this section we consider the case Xn i,k=1 aik(x)uxixk + f(x, u,ru) = 0. (3.1) The equation is linear if f= Xn i=1 bi(x)uxi + c(x)u + d(x). Concerning the classification the main part Xn i,k=1 aik(x)uxixk plays the essential role. Suppose u 2 C2, then we can assume, without restriction of generality, that aik = aki, since Xn i,k=1 aikuxixk = Xn i,k=1 (aik)?uxixk , where (aik)? = 1 2 (aik + aki). Consider a hypersurface S in Rn defined implicitly by (x) = 0, r 6= 0, see Figure 3.1 Assume u and ru are given on S. Problem: Can we calculate all other derivatives of u on S by using differential equation (3.1) and the given data? We will find an answer if we map S onto a hyperplane S0 by a mapping n = (x1, . . . , xn) i = i(x1, . . . , xn), i = 1, . . . , n 1, for functions i such that det @(1, . . . , n) @(x1, . . . , xn) 6= 0 in Rn. It is assumed that and i are sufficiently regular. Such a mapping = (x) exists, see an exercise. 3.1. LINEAR EQUATIONS OF SECOND ORDER 65 x Sx x3
1 2 Figure 3.1: Initial manifold S The above transform maps S onto a subset of the hyperplane defined by n = 0, see Figure 3.2. We will write the differential equation in these new coordinates. Here we use Einsteins convention, i. e., we add terms with repeating indices. Since u(x) = u(x()) =: v() = v((x)), where x = (x1, . . . , xn) and = (1, . . . , n), we get uxj = vi @i @xj , (3.2) uxjxk = vil @i @xj @l @xk + vi @2i @xj@xk . Thus, differential equation (3.1) in the new coordinates is given by ajk(x) @i @xj @l @xk vil + terms known on S0 = 0. Since vk (1, . . . , n1, 0), k = 1, . . . , n, are known, see (3.2), it follows that vkl , l = 1, . . . , n1, are known on S0. Thus we know all second derivatives vij on S0 with the only exception of vnn. 66 CHAPTER 3. CLASSIFICATION 3 1 2 l l l S 0 Figure 3.2: Transformed flat manifold S0 We recall that, provided v is sufficiently regular, vkl(1, . . . , n1, 0) is the limit of vk (1, . . . , l + h, l+1, . . . , n1, 0) vk (1, . . . , l, l+1, . . . , n1, 0) h as h ! 0. Thus the differential equation can be written as Xn j,k=1 ajk(x) @n @xj @n @xk
$v_{\lambda_n\lambda_n}$ = terms known on $S_0$.

It follows that we can calculate $v_{\lambda_n\lambda_n}$ if
$$\sum_{i,j=1}^n a_{ij}(x)\chi_{x_i}\chi_{x_j} \neq 0 \qquad (3.3)$$
on $S$. This is a condition for the given equation and for the given surface $S$.

Definition. The differential equation
$$\sum_{i,j=1}^n a_{ij}(x)\chi_{x_i}\chi_{x_j} = 0$$
is called characteristic differential equation associated with the given differential equation (3.1).

If $\chi$, $\nabla\chi \neq 0$, is a solution of the characteristic differential equation, then the surface defined by $\chi = 0$ is called a characteristic surface.

Remark. Condition (3.3) is satisfied for each $\chi$ with $\nabla\chi \neq 0$ if the matrix $(a_{ij}(x))$ is positive or negative definite for each $x \in \Omega$, which is equivalent to the property that all eigenvalues are different from zero and have the same sign. This follows since there is a $\lambda(x) > 0$ such that, in the case that the matrix $(a_{ij})$ is positive definite,
$$\sum_{i,j=1}^n a_{ij}(x)\zeta_i\zeta_j \ge \lambda(x)|\zeta|^2$$
for all $\zeta \in \mathbb{R}^n$. Here and in the following we assume that the matrix $(a_{ij})$ is real and symmetric.

The characterization of differential equation (3.1) follows from the signs of the eigenvalues of $(a_{ij}(x))$.

Definition. Differential equation (3.1) is said to be of type $(\alpha,\beta,\gamma)$ at $x \in \Omega$ if $\alpha$ eigenvalues of $(a_{ij})(x)$ are positive, $\beta$ eigenvalues are negative and $\gamma$ eigenvalues are zero ($\alpha + \beta + \gamma = n$). In particular, the equation is called

elliptic if it is of type $(n,0,0)$ or of type $(0,n,0)$, i. e., all eigenvalues are different from zero and have the same sign,

parabolic if it is of type $(n-1,0,1)$ or of type $(0,n-1,1)$, i. e., one eigenvalue is zero and all the others are different from zero and have the same sign,

hyperbolic if it is of type $(n-1,1,0)$ or of type $(1,n-1,0)$, i. e., all eigenvalues are different from zero and one eigenvalue has a sign different from all the others.

Remarks:

1. According to this definition there are other types besides elliptic, parabolic or hyperbolic equations.

2. The classification depends in general on $x \in \Omega$. An example is the Tricomi equation, which appears in the theory of transonic flows,
$$yu_{xx} + u_{yy} = 0.$$
This equation is elliptic if $y > 0$, parabolic if $y = 0$ and hyperbolic for $y < 0$.

Examples:

1. The Laplace equation in $\mathbb{R}^3$ is $\triangle u = 0$, where
$$\triangle u := u_{xx} + u_{yy} + u_{zz}.$$
This equation is elliptic. Thus for each manifold $S$ given by $\{(x,y,z):\ \chi(x,y,z) = 0\}$, where $\chi$ is an arbitrary sufficiently regular function such that $\nabla\chi \neq 0$, all derivatives of $u$ are known on $S$, provided $u$ and $\nabla u$ are known on $S$.

2. The wave equation $u_{tt} = u_{xx} + u_{yy} + u_{zz}$, where $u = u(x,y,z,t)$, is hyperbolic. Such a type describes oscillations of mechanical structures, for example.

3. The heat equation $u_t = u_{xx} + u_{yy} + u_{zz}$, where $u = u(x,y,z,t)$, is parabolic. It describes, for example, the propagation of heat in a domain.

4. Consider the case that the (real) coefficients $a_{ij}$ in equation (3.1) are constant. We recall that the matrix $A = (a_{ij})$ is symmetric, i. e., $A^T = A$. In this case the transform to principal axes leads to a normal form from which the classification of the equation is obvious. Let $U$ be the associated orthogonal matrix, then
$$U^TAU = \begin{pmatrix} \lambda_1 & \cdots & 0\\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix}.$$
Here $U = (z_1, \ldots, z_n)$, where $z_l$, $l = 1, \ldots, n$, is an orthonormal system of eigenvectors associated with the eigenvalues $\lambda_l$. Set $y = U^Tx$ and $v(y) = u(Uy)$, then
$$\sum_{i,j=1}^n a_{ij}u_{x_ix_j} = \sum_{i=1}^n \lambda_i v_{y_iy_i}. \qquad (3.4)$$

3.1.1 Normal form in two variables

Consider the differential equation
$$a(x,y)u_{xx} + 2b(x,y)u_{xy} + c(x,y)u_{yy} + \text{terms of lower order} = 0 \qquad (3.5)$$
in $\mathbb{R}^2$. The associated characteristic differential equation is
$$a\chi_x^2 + 2b\chi_x\chi_y + c\chi_y^2 = 0. \qquad (3.6)$$
We show that an appropriate coordinate transform will sometimes simplify equation (3.5) in such a way that we can solve the transformed equation explicitly.

Let $z = \varphi(x,y)$ be a solution of (3.6). Consider the level sets $\{(x,y):\ \varphi(x,y) = \text{const.}\}$ and assume $\varphi_y \neq 0$ at a point $(x_0,y_0)$ of the level set. Then there is a function $y(x)$, defined in a neighbourhood of $x_0$, such that $\varphi(x,y(x)) = \text{const.}$ It follows
$$y'(x) = -\frac{\varphi_x}{\varphi_y},$$
which implies, see the characteristic equation (3.6),
$$ay'^2 - 2by' + c = 0. \qquad (3.7)$$
Then, provided $a \neq 0$, we can calculate $\mu := y'$ from the (known) coefficients $a$, $b$ and $c$:
$$\mu_{1,2} = \frac{1}{a}\left(b \pm \sqrt{b^2 - ac}\right). \qquad (3.8)$$
These solutions are real if and only if $ac - b^2 \le 0$. Equation (3.5) is hyperbolic if $ac - b^2 < 0$, parabolic if $ac - b^2 = 0$ and elliptic if $ac - b^2 > 0$. This follows from an easy discussion of the eigenvalues of the matrix
$$\begin{pmatrix} a & b\\ b & c \end{pmatrix},$$
see an exercise.

Normal form of a hyperbolic equation

Let $\varphi$ and $\psi$ be solutions of the characteristic equation (3.6) such that
$$y_1' \equiv \mu_1 = -\frac{\varphi_x}{\varphi_y}, \qquad y_2' \equiv \mu_2 = -\frac{\psi_x}{\psi_y},$$
where $\mu_1$ and $\mu_2$ are given by (3.8). Thus $\varphi$ and $\psi$ are solutions of the linear homogeneous equations of first order
$$\varphi_x + \mu_1(x,y)\varphi_y = 0 \qquad (3.9)$$
$$\psi_x + \mu_2(x,y)\psi_y = 0. \qquad (3.10)$$
Assume $\varphi(x,y)$, $\psi(x,y)$ are solutions such that $\nabla\varphi \neq 0$ and $\nabla\psi \neq 0$, see an exercise for the existence of such solutions. Consider two families of level sets defined by $\varphi(x,y) = \alpha$ and $\psi(x,y) = \beta$, see Figure 3.3.

Figure 3.3: Level sets

These level sets are characteristic curves of the partial differential equations (3.9) and (3.10), respectively, see an exercise of the previous chapter.

Lemma. (i) Curves from different families cannot touch each other.
(ii) $\varphi_x\psi_y - \varphi_y\psi_x \neq 0$.

Proof. (i):
$$y_2' - y_1' \equiv \mu_2 - \mu_1 = -\frac{2}{a}\sqrt{b^2 - ac} \neq 0.$$
(ii):
$$\mu_2 - \mu_1 = \frac{\varphi_x}{\varphi_y} - \frac{\psi_x}{\psi_y} = \frac{\varphi_x\psi_y - \varphi_y\psi_x}{\varphi_y\psi_y},$$
and the left-hand side is different from zero by (i). $\Box$

Proposition 3.1. The mapping $\xi = \varphi(x,y)$, $\eta = \psi(x,y)$ transforms equation (3.5) into
$$v_{\xi\eta} = \text{lower order terms}, \qquad (3.11)$$
where $v(\xi,\eta) = u(x(\xi,\eta), y(\xi,\eta))$.

Proof. The proof follows from a straightforward calculation.
$$u_x = v_\xi\varphi_x + v_\eta\psi_x$$
$$u_y = v_\xi\varphi_y + v_\eta\psi_y$$
$$u_{xx} = v_{\xi\xi}\varphi_x^2 + 2v_{\xi\eta}\varphi_x\psi_x + v_{\eta\eta}\psi_x^2 + \text{lower order terms}$$
$$u_{xy} = v_{\xi\xi}\varphi_x\varphi_y + v_{\xi\eta}(\varphi_x\psi_y + \varphi_y\psi_x) + v_{\eta\eta}\psi_x\psi_y + \text{lower order terms}$$
$$u_{yy} = v_{\xi\xi}\varphi_y^2 + 2v_{\xi\eta}\varphi_y\psi_y + v_{\eta\eta}\psi_y^2 + \text{lower order terms}.$$
Thus
$$au_{xx} + 2bu_{xy} + cu_{yy} = \alpha v_{\xi\xi} + 2\beta v_{\xi\eta} + \gamma v_{\eta\eta} + \text{l.o.t.},$$
where
$$\alpha := a\varphi_x^2 + 2b\varphi_x\varphi_y + c\varphi_y^2$$
$$\beta := a\varphi_x\psi_x + b(\varphi_x\psi_y + \varphi_y\psi_x) + c\varphi_y\psi_y$$
$$\gamma := a\psi_x^2 + 2b\psi_x\psi_y + c\psi_y^2.$$
The coefficients $\alpha$ and $\gamma$ are zero since $\varphi$ and $\psi$ are solutions of the characteristic equation. Since
$$\alpha\gamma - \beta^2 = (ac - b^2)(\varphi_x\psi_y - \varphi_y\psi_x)^2,$$
it follows from the above lemma that the coefficient $\beta$ is different from zero. $\Box$

Example: Consider the differential equation
$$u_{xx} - u_{yy} = 0.$$
The associated characteristic differential equation is
$$\chi_x^2 - \chi_y^2 = 0.$$
Since $\mu_1 = 1$ and $\mu_2 = -1$, the functions $\varphi$ and $\psi$ satisfy the differential equations
$$\varphi_x + \varphi_y = 0$$
$$\psi_x - \psi_y = 0.$$
Solutions with $\nabla\varphi \neq 0$ and $\nabla\psi \neq 0$ are
$$\varphi = x - y, \qquad \psi = x + y.$$
Thus the mapping
$$\xi = x - y, \qquad \eta = x + y$$
leads to the simple equation
$$v_{\xi\eta}(\xi,\eta) = 0.$$
Assume $v \in C^2$ is a solution, then $v_\xi = f_1(\xi)$ for an arbitrary $C^1$ function $f_1(\xi)$. It follows
$$v(\xi,\eta) = \int_0^\xi f_1(\alpha)\,d\alpha + g(\eta),$$
where g is an arbitrary C2 function. Thus each C2-solution of the differential equation can be written as (?) v(, ) = f() + g(), where f, g 2 C2. On the other hand, for arbitrary C2-functions f, g the function (?) is a solution of the differential equation v = 0. Consequently each C2-solution of the original equation uxx uyy = 0 is given by u(x, y) = f(x y) + g(x + y), where f, g 2 C2. 3.2. QUASILINEAR EQUATIONS OF SECOND ORDER 73 3.2 Quasilinear equations of second order Here we consider the equation Xn i,j=1 aij(x, u,ru)uxixj + b(x, u,ru) = 0 (3.12) in a domain Rn, where u : 7! R. We assume that aij = aji. As in the previous section we can derive the characteristic equation Xn i,j=1 aij(x, u,ru)xixj = 0. In contrast to linear equations, solutions of the characteristic equation depend on the solution considered. 3.2.1 Quasilinear elliptic equations There is a large class of quasilinear equations such that the associated characteristic equation has no solution , r 6= 0. Set U = {(x, z, p) : x 2 , z 2 R, p 2 Rn}. Definition. The quasilinear equation (3.12) is called elliptic if the matrix (aij(x, z, p)) is positive definite for each (x, z, p) 2 U. Assume equation (3.12) is elliptic and let (x, z, p) be the minimum and (x, z, p) the maximum of the eigenvalues of (aij), then 0 < (x, z, p)||2 Xn i,j=1 aij(x, z, p)ij (x, z, p)||2 for all 2 Rn. Definition. Equation (3.12) is called uniformly elliptic if / is uniformly bounded in U. An important class of elliptic equations which are not uniformly elliptic (nonuniformly elliptic) is Xn i=1 @ @xi p uxi 1 + |ru|2 ! + lower order terms = 0. (3.13) 74 CHAPTER 3. CLASSIFICATION The main part is the minimal surface operator (left hand side of the minimal surface equation). The coefficients aij are aij(x, z, p) = 1 + |p|2 1/2
ij pipj 1 + |p|2 , ij denotes the Kronecker delta symbol. It follows that = 1 (1 + |p|2)3/2 ,= 1 (1 + |p|2)1/2 . Thus equation (3.13) is not uniformly elliptic. The behaviour of solutions of uniformly elliptic equations is similar to linear elliptic equations in contrast to the behaviour of solutions of nonuniformly elliptic equations. Typical examples for nonuniformly elliptic equations are the minimal surface equation and the capillary equation. 3.3 Systems of first order Consider the quasilinear system Xn k=1 Ak(x, u)uuk + b(x, u) = 0, (3.14) where Ak are m m-matrices, sufficiently regular with respect to their arguments, and u= 0 B@ u1 ... um 1 CA , uxk = 0 B@ u1,xk ... um,xk 1 CA ,b= 0 B@ b1 ... bm 1 CA . We ask the same question as above: can we calculate all derivatives of u in a neighbourhood of a given hypersurface S in Rn defined by (x) = 0, r 6= 0, provided u(x) is given on S? For an answer we map S onto a flat surface S0 by using the mapping = (x) of Section 3.1 and write equation (3.14) in new coordinates. Set v() = u(x()), then
Xn k=1 Ak(x, u)xkvn = terms known on S0. 3.3. SYSTEMS OF FIRST ORDER 75 We can solve this system with respect to vn, provided that det Xn k=1 Ak(x, u)xk ! 6= 0 on S. Definition. Equation det Xn k=1 Ak(x, u)xk ! =0 is called characteristic equation associated to equation (3.14) and a surface S: (x) = 0, defined by a solution , r 6= 0, of this characteristic equation is said to be characteristic surface. Set C(x, u, ) = det Xn k=1 Ak(x, u)k ! for 2 Rn. Definition. (i) The system (3.14) is hyperbolic at (x, u(x)) if there is a regular linear mapping = Q, where = (1, . . . , n1, ), such that there exists m real roots k = k(x, u(x), 1, . . . , n1), k = 1, . . . ,m, of D(x, u(x), 1, . . . , n1, ) = 0 for all (1, . . . , n1), where D(x, u(x), 1, . . . , n1, ) = C(x, u(x), x,Q). (ii) System (3.14) is parabolic if there exists a regular linear mapping = Q such that D is independent of , i. e., D depends on less than n parameters. (iii) System (3.14) is elliptic if C(x, u, ) = 0 only if = 0. Remark. In the elliptic case all derivatives of the solution can be calculated from the given data and the given equation. 76 CHAPTER 3. CLASSIFICATION 3.3.1 Examples 1. Beltrami equations Wux bvx cvy = 0 (3.15) Wuy + avx + bvy = 0, (3.16) where W, a, b, c are given functions depending of (x, y), W 6= 0 and the matrix ab bc is positive definite. The Beltrami system is a generalization of Cauchy-Riemann equations. The function f(z) = u(x, y) + iv(x, y), where z = x + iy, is called a quasiconform
mapping, see for example [9], Chapter 12, for an application to partial differential equations. Set A1 = W b 0a , A2 = 0 c Wb . Then the system (3.15), (3.16) can be written as A1 ux vx + A2 uy vy = 0 0 . Thus, C(x, y, ) = W1 b1 c2 W2 a1 + b2 = W(a2 1 + 2b12 + c2 2 ), which is different from zero if 6= 0 according to the above assumptions. Thus the Beltrami system is elliptic. 2. Maxwell equations The Maxwell equations in the isotropic case are c rotx H = E + Et (3.17) c rotx E = Ht, (3.18) 3.3. SYSTEMS OF FIRST ORDER 77 where E = (e1, e2, e3)T electric field strength, ei = ei(x, t), x = (x1, x2, x3), H = (h1, h2, h3)T magnetic field strength, hi = hi(x, t), c speed of light, specific conductivity, dielectricity constant, magnetic permeability. Here c, , and are positive constants. Set p0 = t, pi = xi , i = 1, . . . 3, then the characteristic differential equation
is p0/c 0 0 0 p3 p2 0 p0/c 0 p3 0 p1 0 0 p0/c p2 p1 0 0 p3 p2 p0/c 0 0 p3 0 p1 0 p0/c 0 p2 p1 0 0 0 p0/c = 0. The following manipulations simplifies this equation: (i) multiply the first three columns with p0/c, (ii) multiply the 5th column with p3 and the the 6th column with p2 and add the sum to the 1st column, (iii) multiply the 4th column with p3 and the 6th column with p1 and add the sum to the 2th column, (iv) multiply the 4th column with p2 and the 5th column with p1 and add the sum to the 3th column, (v) expand the resulting determinant with respect to the elements of the 6th, 5th and 4th row. We obtain q + p21 p1p2 p1p3 p1p2 q + p22 p2p3 p1p3 p2p3 q + p23 = 0, where q := c2 p20 g2 with g2 := p21 + p22 + p23. The evaluation of the above equation leads to q2(q + g2) = 0, i. e., 2t c2 2t |rx|2 = 0. 78 CHAPTER 3. CLASSIFICATION It follows immediately that Maxwell equations are a hyperbolic system, see an exercise. There are two solutions of this characteristic equation. The first one are characteristic surfaces S(t), defined by (x, t) = 0, which satisfy t = 0. These surfaces are called stationary waves. The second type of characteristic surfaces are defined by solutions of c2 2t = |rx|2. Functions defined by = f(nxV t) are solutions of this equation. Here is f(s) an arbitrary function with f0(s) 6= 0, n is a unit vector and V = c/p. The associated characteristic surfaces S(t) are defined by (x, t) f(n x V t) = 0, here we assume that 0 is in he range of f : R 7! R. Thus, S(t) is defined by n xV t = c, where c is a fixed constant. It follows that the planes S(t)
with normal n move with speed V in direction of n, see Figure 3.4 x x2 n1 S(t) S(0) d(t) Figure 3.4: d0(t) is the speed of plane waves V is called speed of the plane wave S(t). Remark. According to the previous discussions, singularities of a solution of Maxwell equations are located at most on characteristic surfaces. A special case of Maxwell equations are the telegraph equations, which follow from Maxwell equations if div E = 0 and div H = 0, i. e., E and 3.3. SYSTEMS OF FIRST ORDER 79 H are fields free of sources. In fact, it is sufficient to assume that this assumption is satisfied at a fixed time t0 only, see an exercise. Since rotx rotx A = gradx divx A 4xA for each C2-vector field A, it follows from Maxwell equations the uncoupled system 4xE = c2 Ett + c2 Et 4xH = c2 Htt + c2 Ht. 3. Equations of gas dynamics Consider the following quasilinear equations of first order. vt + (v rx) v + 1 rxp = f (Euler equations). Here is v = (v1, v2, v3) the vector of speed, vi = vi(x, t), x = (x1, x2, x3), p pressure, p = (x, t), density, = (x, t), f = (f1, f2, f3) density of the external force, fi = fi(x, t), (v rx)v (v rxv1, v rxv2, v rxv3))T . The second equation is t + v rx + divx v = 0 (conservation of mass). Assume the gas is compressible and that there is a function (state equation) p = p(), where p0() > 0 if > 0. Then the above system of four equations is vt + (v r)v + 1 p0()r = f (3.19) t + div v + v r = 0, (3.20) where r rx and div divx, i. e., these operators apply on the spatial variables only. 80 CHAPTER 3. CLASSIFICATION The characteristic differential equation is here
d dt 0 0 1 p0x1 0 d dt 0 1 p0x2 0 0 d dt 1 p0x3 x1 x2 x3 d dt = 0, where d dt := t + (rx) v. Evaluating the determinant, we get the characteristic differential equation d dt 2 d dt 2 p0()|rx|2 ! = 0. (3.21) This equation implies consequences for the speed of the characteristic surfaces as the following consideration shows. Consider a family S(t) of surfaces in R3 defined by (x, t) = c, where x 2 R3 and c is a fixed constant. As usually, we assume that rx 6= 0. One of the two normals on S(t) at a point of the surface S(t) is given by, see an exercise, n = rx |rx| . (3.22) Let Q0 2 S(t0) and let Q1 2 S(t1) be a point on the line defined by Q0+sn, where n is the normal (3.22 on S(t0) at Q0 and t0 < t1, t1 t0 small, see Figure 3.5. S(t 0) S(t 1) n Q Q1 0 Figure 3.5: Definition of the speed of a surface 3.3. SYSTEMS OF FIRST ORDER 81 Definition. The limit P = lim t1!t0 |Q1 Q0| t1 t0
is called speed of the surface S(t). Proposition 3.2. The speed of the surface S(t) is P= t |rx| . (3.23) Proof. The proof follows from (Q0, t0) = 0 and (Q0 + dn, t0 + 4t) = 0, where d = |Q1 Q0| and 4t = t1 t0. 2 Set vn := v n which is the component of the velocity vector in direction n. From (3.22) we get vn = 1 |rx| v rx. Definition. V := P vn, the difference of the speed of the surface and the speed of liquid particles, is called relative speed. n v S Figure 3.6: Definition of relative speed Using the above formulas for P and vn it follows V = P vn = t |rx| v rx |rx| = 1 |rx| d dt . Then, we obtain from the characteristic equation (3.21) that V 2|rx|2 V 2|rx|2 p0()|rx|2 = 0. 82 CHAPTER 3. CLASSIFICATION An interesting conclusion is that there are two relative speeds: V = 0 or V 2 = p0(). Definition. p p0() is called speed of sound . 3.4 Systems of second order Here we consider the system Xn k,l=1 Akl(x, u,ru)uxkxl + lower order terms = 0, (3.24) where Akl are (m m) matrices and u = (u1, . . . , um)T . We assume Akl = Alk, which is no restriction of generality provided u 2 C2 is satisfied. As in the previous sections, the classification follows from the question whether or not we can calculate formally the solution from the differential equations, if sufficiently many data are given on an initial manifold. Let the initial manifold S be given by (x) = 0 and assume that r 6= 0. The mapping x = x(), see previous sections, leads to Xn
k,l=1 Aklxkxlvnn = terms known on S, where v() = u(x()). The characteristic equation is here det 0 @ Xn k,l=1 Aklxkxl 1 A = 0. If there is a solution with r 6= 0, then it is possible that second derivatives are not continuous in a neighbourhood of S. Definition. The system is called elliptic if det 0 @ Xn k,l=1 Aklkl 1 A 6= 0 for all 2 Rn, 6= 0. 3.4. SYSTEMS OF SECOND ORDER 83 3.4.1 Examples 1. Navier-Stokes equations The Navier-Stokes system for a viscous incompressible liquid is vt + (v rx)v = 1 rxp + 4xv divx v = 0, where is the (constant and positive) density of liquid, is the (constant and positive) viscosity of liquid, v = v(x, t) velocity vector of liquid particles, x 2 R3 or in R2, p = p(x, t) pressure. The problem is to find solutions v, p of the above system. 2. Linear elasticity Consider the system @2u @t2 = 4xu + ( + )rx(divx u) + f. (3.25) Here is, in the case of an elastic body in R3, u(x, t) = (u1(x, t), u2(x, t), u3(x, t)) displacement vector, f(x, t) density of external force, (constant) density, , (positive) Lame constants. The characteristic equation is detC = 0, where the entries of the matrix C are given by cij = ( + )xixj + ij |rx|2 2t . The characteristic equation is
( + 2)|rx|2 2t |rx|2 2t 2 = 0. It follows that two different speeds P of characteristic surfaces S(t), defined by (x, t) = const., are possible, namely P1 = s + 2 , and P2 = r . We recall that P = t/|rx|. 84 CHAPTER 3. CLASSIFICATION 3.5 Theorem of Cauchy-Kovalevskaya Consider the quasilinear system of first order (3.14) of Section 3.3. Assume an initial manifolds S is given by (x) = 0, r 6= 0, and suppose that is not characteristic. Then, see Section 3.3, the system (3.14) can be written as uxn = nX1 i=1 ai(x, u)uxi + b(x, u) (3.26) u(x1, . . . , xn1, 0) = f(x1, . . . , xn1) (3.27) Here is u = (u1, . . . , um)T , b = (b1, . . . , bn)T and ai are (mm)-matrices. We assume ai, b and f are in C1 with respect to their arguments. From (3.26) and (3.27) it follows that we can calculate formally all derivatives Du in a neigbourhood of the plane {x : xn = 0}, in particular in a neighbourhood of 0 2 Rn. Thus we have a formal power series of u(x) at x = 0: u(x) X1 ! Du(0)x. For notations and definitions used here and in the following see the appendix to this section. Then, as usually, two questions arise: (i) Does the power series converge in a neighbourhood of 0 2 Rn? (ii) Is a convergent power series a solution of the initial value problem (3.26), (3.27)? Remark. Quite different to this power series method is the method of asymptotic expansions. Here one is interested in a good approximation of an unknown solution of an equation by a finite sum PN i=0 i(x) of functions i. In general, the infinite sum P 1i=0 i(x) does not converge, in contrast to the power series method of this section. See [15] for some asymptotic formulas in capillarity. Theorem 3.1 (Cauchy-Kovalevskaya). There is a neighbourhood of 0 2 Rn such there is a real analytic solution of the initial value problem (3.26),
(3.27). This solution is unique in the class of real analytic functions. 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 85 Proof. The proof is taken from F. John [10]. We introduce uf as the new solution for which we are looking at and we add a new coordinate u? to the solution vector by setting u?(x) = xn. Then u? xn = 1, u? xk = 0, k = 1, . . . , n 1, u?(x1, . . . , xn1, 0) = 0 and the extended system (3.26), (3.27) is 0 BBB@ u1,xn ... um,xn u? xn 1 CCCA = nX1 i=1 ai 0 00 0 BBB@ u1,xi ... um,xi u? xi 1 CCCA + 0 BBB@ b1 ... bm 1 1 CCCA , where the associated initial condition is u(x1, . . . , xn1, 0) = 0. The new u is u = (u1, . . . , um)T , the new ai are ai(x1, . . . , xn1, u1, . . . , um, u?) and the new b is b = (x1, . . . , xn1, u1, . . . , um, u?)T . Thus we are led to an initial value problem of the type uj,xn = nX1 i=1 XN k=1 ai jk(z)uk,xi + bj(z), j = 1, . . . ,N (3.28) uj(x) = 0 if xn = 0, (3.29)
where j = 1, . . . ,N and z = (x1, . . . , xn1, u1, . . . , uN). The point here is that ai jk and bj are independent of xn. This fact simplifies the proof of the theorem. From (3.28) and (3.29) we can calculate formally all Duj . Then we have formal power series for uj : uj(x) X c(j) x, where c(j) = 1 ! Duj(0). We will show that these power series are (absolutely) convergent in a neighbourhood of 0 2 Rn, i. e., they are real analytic functions, see the appendix for the definition of real analytic functions. Inserting these functions into the left and into the right hand side of (3.28) we obtain on the right and on the left hand side real analytic functions. This follows since compositions of real analytic functions are real analytic again, see Proposition A7 of the appendix to this section. The resulting power series on the left and on the 86 CHAPTER 3. CLASSIFICATION right have the same coefficients caused by the calculation of the derivatives Duj(0) from (3.28). It follows that uj(x), j = 1, . . . , n, defined by its formal power series are solutions of the initial value problem (3.28), (3.29). Set d= @ @z1 ,..., @ @zN+n1 Lemma A. Assume u 2 C1 in a neighbourhood of 0 2 Rn. Then Duj(0) = P dai jk(0), dbj(0) , where ||, || || and P are polynomials in the indicated arguments with nonnegative integers as coefficients which are independent of ai and of b. Proof. It follows from equation (3.28) that DnDuj(0) = P(dai jk(0), dbj(0),Duk(0)). (3.30) Here is Dn = @/@xn and , , , satisfy the inequalities ||, || ||, || || + 1, and, which is essential in the proof, the last coordinates in the multi-indices = (1, . . . , n), = (1, . . . , n) satisfy n n since the right hand side of (3.28) is independent of xn. Moreover, it follows from (3.28) that the polynomials P have integers as coefficients. The initial condition (3.29)
implies Duj(0) = 0, (3.31) where = (1, . . . , n1, 0), that is, n = 0. Then, the proof is by induction with respect to n. The induction starts with n = 0, then we replace Duk(0) in the right hand side of (3.30) by (3.31), that is by zero. Then it follows from (3.30) that Duj(0) = P(dai jk(0), dbj(0),Duk(0)), where = (1, . . . , n1, 1). 2 Definition. Let f = (f1, . . . , fm), F = (F1, . . . , Fm), fi = fi(x), Fi = Fi(x), and f, F 2 C1. We say f is majorized by F if |Dfk(0)| DFk(0), k = 1, . . . ,m 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 87 for all . We write f << F, if f is majorized by F. Definition. The initial value problem Uj,xn = nX1 i=1 XN k=1 Ai jk(z)Uk,xi + Bj(z) (3.32) Uj(x) = 0 if xn = 0, (3.33) j = 1, . . . ,N, Ai jk, Bj real analytic, ist called majorizing problem to (3.28), (3.29) if ai jk << Ai jk and bj << Bj . Lemma B. The formal power series X 1 ! Duj(0)x, where Duj(0) are defined in Lemma A, is convergent in a neighbourhood of 0 2 Rn if there exists a majorizing problem which has a real analytic solution U in x = 0, and |Duj(0)| DUj(0). Proof. It follows from Lemma A and from the assumption of Lemma B that |Duj(0)| P |dai jk(0)|, |dbj(0)| P |dAi jk(0)|, |dBj(0)| DUj(0). The formal power series X 1 ! Duj(0)x,
is convergent since
$$\sum_\alpha \frac{1}{\alpha!}\left|D^\alpha u_j(0)x^\alpha\right| \le \sum_\alpha \frac{1}{\alpha!}\,D^\alpha U_j(0)\,|x^\alpha|.$$
The right hand side is convergent in a neighbourhood of $x = 0 \in \mathbb{R}^n$ by assumption. $\Box$

Lemma C. There is a majorizing problem which has a real analytic solution.

Proof. Since $a^i_{jk}(z)$, $b_j(z)$ are real analytic in a neighbourhood of $z = 0$, it follows from Proposition A5 of the appendix to this section that there are positive constants $M$ and $r$ such that all these functions are majorized by
$$\frac{Mr}{r - z_1 - \ldots - z_{N+n-1}}.$$
Thus a majorizing problem is
$$U_{j,x_n} = \frac{Mr}{r - x_1 - \ldots - x_{n-1} - U_1 - \ldots - U_N}\left(1 + \sum_{i=1}^{n-1}\sum_{k=1}^N U_{k,x_i}\right)$$
$$U_j(x) = 0\ \text{ if } x_n = 0,$$
$j = 1, \ldots, N$. The solution of this problem is
$$U_j(x_1, \ldots, x_{n-1}, x_n) = V(x_1 + \ldots + x_{n-1},\ x_n), \quad j = 1, \ldots, N,$$
where $V(s,t)$, $s = x_1 + \ldots + x_{n-1}$, $t = x_n$, is the solution of the Cauchy initial value problem
$$V_t = \frac{Mr}{r - s - NV}\left(1 + N(n-1)V_s\right), \qquad V(s,0) = 0,$$
which has the solution, see an exercise,
$$V(s,t) = \frac{1}{Nn}\left(r - s - \sqrt{(r-s)^2 - 2nMNrt}\right).$$
This function is real analytic in $(s,t)$ at $(0,0)$. It follows that the $U_j(x)$ are real analytic functions as well. Thus the Cauchy-Kovalevskaya theorem is shown. $\Box$
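To see the mechanism of the proof in the simplest setting, the following Python sketch (an illustration of mine, not part of the original notes; it uses the sympy library, and the helper name series_solution is an ad-hoc choice) computes the formal power series solution of the scalar problem $y'(x) = F(x,y)$, $y(x_0) = y_0$ of Example 1 below: each Taylor coefficient at $x_0$ is obtained by differentiating the equation and inserting the coefficients already known, which is exactly the computation behind Lemma A.

import sympy as sp

x, Y = sp.symbols('x Y')   # Y plays the role of y(x)

def series_solution(F, x0, y0, order):
    """Taylor polynomial at x0 of the solution of y' = F(x, y), y(x0) = y0.

    F is a sympy expression in x and Y. The k-th derivative of y is obtained
    from the (k-1)-st by the chain rule d/dx g(x, y(x)) = g_x + g_Y * F, so
    every Taylor coefficient is a polynomial in derivatives of the right hand
    side evaluated at (x0, y0), as in Lemma A.
    """
    g = F                                      # expression for y'
    derivs = [sp.sympify(y0), g.subs({x: x0, Y: y0})]
    for _ in range(2, order + 1):
        g = sp.diff(g, x) + sp.diff(g, Y) * F  # expression for the next derivative
        derivs.append(g.subs({x: x0, Y: y0}))
    return sum(d * (x - x0)**k / sp.factorial(k) for k, d in enumerate(derivs))

# Example: y' = y, y(0) = 1 reproduces the Taylor polynomial of exp(x).
print(sp.expand(series_solution(Y, 0, 1, 5)))

For real analytic $F$, Example 1 below (together with the Picard-Lindelöf theorem) guarantees that the polynomial produced in this way is a truncation of a convergent series which represents the solution.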
3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 89 Examples: 1. Ordinary differential equations Consider the initial value problem y0(x) = f(x, y(x)) y(x0) = y0, where x0 2 R and y0 2 Rn are given. Assume f(x, y) is real analytic in a neighbourhood of (x0, y0) 2 RRn. Then it follows from the above theorem that there exists an analytic solution y(x) of the initial value problem in a neighbourhood of x0. This solution is unique in the class of analytic functions according to the theorem of Cauchy-Kovalevskaya. From the PicardLindelof theorem it follows that this analytic solution is unique even in the class of C1-functions. 2. Partial differential equations of second order Consider the boundary value problem for two variables uyy = f(x, y, u, ux, uy, uxx, uxy) u(x, 0) = (x) uy(x, 0) = (x). We assume that , are analytic in a neighbourhood of x = 0 and that f is real analytic in a neighbourhood of (0, 0, (0), 0(0), (0), 0(0)). There exists a real analytic solution in a neigbourhood of 0 2 R2 of the above initial value problem. In particular, there is a real analytic solution in a neigbourhood of 0 2 R2 of the initial value problem 4u = 1 u(x, 0) = 0 uy(x, 0) = 0. 90 CHAPTER 3. CLASSIFICATION The proof follows by writing the above problem as a system. Set p = ux, q = uy, r = uxx, s = uxy, t = uyy, then t = f(x, y, u, p, q, r, s). Set U = (u, p, q, r, s, t)T , b = (q, 0, t, 0, 0, fy + fuq + fqt)T and A= 0 BBBBBB@ 000000 001000 000000 000010 000001 0 0 fp 0 fr fs 1 CCCCCCA . Then the rewritten differential equation is the system Uy = AUx + b with the initial condition U(x, 0) = (x), 0(x), (x), 00(x), 0(x), f0(x) , where f0(x) = f(x, 0, (x), 0(x), (x), 00(x), 0(x)). 3.5.1 Appendix: Real analytic functions Multi-index notation The following multi-index notation simplifies many presentations of formulas.
Let x = (x1, . . . , xn) and u : Rn 7! R (or Rm for systems). The n-tuple of nonnegative integers (including zero) = (1, . . . , n) is called multi-index. Set || = 1 + . . . + n ! = 1!2! . . . n! x = x1 1 x2 2 . . . xn n (for a monom) Dk = @ @xk D = (D1, . . . ,Dn) Du = (D1u, . . . ,Dnu) ru grad u D = D1 1 D2 2 . . . Dn n @|| @x1 1 @x2 2 . . . @xn n . 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 91 Define a partial order by if and only if i i for all i. Sometimes we use the notations 0 = (0, 0 . . . , 0), 1 = (1, 1 . . . , 1), where 0, 1 2 Rn. Using this multi-index notion, we have 1. (x + y) = X , += ! !! xy, where x, y 2 Rn and , , are multi-indices. 2. Taylor expansion for a polynomial f(x) of degree m: f(x) = X ||m 1 ! (Df(0)) x, here is Df(0) := (Df(x)) |x=0. 3. Let x = (x1, . . . , xn) and m 0 an integer, then (x1 + . . . + xn)m = X ||=m m! !
x. 4. ! ||! n||!. 5. Leibnizs rule: D(fg) = X , += ! !! (Df)(Dg). 92 CHAPTER 3. CLASSIFICATION 6. Dx = ! ( )! x if , Dx = 0 otherwise. 7. Directional derivative: dm dtmf(x + ty) = X ||=m ||! ! (Df(x + ty)) y, where x, y 2 Rn and t 2 R. 8. Taylors theorem: Let u 2 Cm+1 in a neighbourhood N(y) of y, then, if x 2 N(y), u(x) = X ||m 1 ! (Du(y)) (x y) + Rm, where Rm = X ||=m+1 1 ! (Du(y + (x y))) x, 0 < < 1, = (u,m, x, y), or Rm = 1 m! Z1 0 (1 t)m(m+1)(t) dt, where (t) = u(y + t(x y)). It follows from 7. that Rm = (m + 1) X ||=m+1 1 ! Z 1
0 (1 t)Du(y + t(x y)) dt (x y). 9. Using multi-index notation, the general linear partial differential equation of order m can be written as X ||m a(x)Du = f(x) in Rn. 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 93 Power series Here we collect some definitions and results for power series in Rn. Definition. Let c 2 R (or 2 Rm). The series X c 1X m=0 0 @ X ||=m c 1 A is said to be convergent if X |c| 1X m=0 0 @ X ||=m |c| 1 A is convergent. Remark. According to the above definition, a convergent series is absolutely convergent. It follows that we can rearrange the order of summation. Using the above multi-index notation and keeping in mind that we can rearrange convergent series, we have 10. Let x 2 Rn, then X x = Yn i=1 1X i=0 xi i ! =
1 (1 x1)(1 x2) . . . (1 xn) = 1 (1 x)1 , provided |xi| < 1 is satisfied for each i. 11. Assume x 2 Rn and |x1| + |x2| + . . . + |xn| < 1, then X ||! ! x = 1X j=0 X ||=j ||! ! x = 1X j=0 (x1 + . . . + xn)j = 1 1 (x1 + . . . + xn) . 94 CHAPTER 3. CLASSIFICATION 12. Let x 2 Rn, |xi| < 1 for all i, and is a given multi-index. Then X ! ( )! x = D 1 (1 x)1 = ! (1 x)1+ . 13. Let x 2 Rn and |x1| + . . . + |xn| < 1. Then X ||! ( )! x = D 1 1 x1 . . . xn = ||! (1 x1 . . . xn)1+|| . Consider the power series X cx (3.34) and assume this series is convergent for a z 2 Rn. Then, by definition, := X
|c||z| < 1 and the series (3.34) is uniformly convergent for all x 2 Q(z), where Q(z) : |xi| |zi| for all i. Thus the power series (3.34) defines a continuous function defined on Q(z), according to a theorem of Weierstrass. The interior of Q(z) is not empty if and only if zi 6= 0 for all i, see Figure 3.7. For given x in a fixed compact subset D of Q(z) there is a q, 0 < q < 1, such that |xi| q|zi| for all i. Set f(x) = X cx. 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 95 z Q(z) D Figure 3.7: Definition of D 2 Q(z) Proposition A1. (i) In every compact subset D of Q(z) one has f 2 C1(D) and the formal differentiate series, that is P Dcx, is uniformly convergent on the closure of D and is equal to Df. (ii) |Df(x)| M||!r|| in D, where M= (1 q)n , r = (1 q)min i |zi|. Proof. See F. John [10], p. 64. Or an exercise. Hint: Use formula 12. where x is replaced by (q, . . . , q). Remark. From the proposition above it follows c = 1 ! Df(0). Definition. Assume f is defined on a domain Rn, then f is said to be real analytic in y 2 if there are c 2 R and if there is a neighbourhood N(y) of y such that f(x) = X c(x y) 96 CHAPTER 3. CLASSIFICATION for all x 2 N(y), and the series converges (absolutely) for each x 2 N(y). A function f is called real analytic in if it is real analytic for each y 2 . We will write f 2 C!() in the case that f is real analytic in the domain . A vector valued function f(x) = (f1(x), . . . , fm) is called real analytic if each coordinate is real analytic. Proposition A2. (i) Let f 2 C!(). Then f 2 C1(). (ii) Assume f 2 C!(). Then for each y 2 there exists a neighbourhood N(y) and positive constants M, r such that f(x) = X
1 ! (Df(y))(x y) for all x 2 N(y), and the series converges (absolutely) for each x 2 N(y), and |Df(x)| M||!r||. The proof follows from Proposition A1. An open set 2 Rn is called connected if is not a union of two nonempty open sets with empty intersection. An open set 2 Rn is connected if and only if its path connected, see [11], pp. 38, for example. We say that is path connected if for any x, y 2 there is a continuous curve (t) 2 , 0 t 1, with (0) = x and (1) = y. From the theory of one complex variable we know that a continuation of an analytic function is uniquely determined. The same is true for real analytic functions. Proposition A3. Assume f 2 C!() and is connected. Then f is uniquely determined if for one z 2 all Df(z) are known. Proof. See F. John [10], p. 65. Suppose g, h 2 C!() and Dg(z) = Dh(z) for every . Set f = g h and 1 = {x 2 : Df(x) = 0 for all }, 2 = {x 2 : Df(x) 6= 0 for at least one }. The set 2 is open since Df are continuous in . The set 1 is also open since f(x) = 0 in a neighbourhood of y 2 1. This follows from f(x) = X 1 ! (Df(y))(x y). 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 97 Since z 2 1, i. e., 1 6= ;, it follows 2 = ;. 2 It was shown in Proposition A2 that derivatives of a real analytic function satisfy estimates. On the other hand it follows, see the next proposition, that a function f 2 C1 is real analytic if these estimates are satisfied. Definition. Let y 2 and M, r positive constants. Then f is said to be in the class CM,r(y) if f 2 C1 in a neighbourhood of y and if |Df(y)| M||!r|| for all . Proposition A4. f 2 C!() if and only if f 2 C1() and for every compact subset S there are positive constants M, r such that f 2 CM,r(y) for all y 2 S. Proof. See F. John [10], pp. 65-66. We will prove the local version of the proposition, that is, we show it for each fixed y 2 . The general version follows from Heine-Borel theorem. Because of Proposition A3 it remains to show that the Taylor series X 1 ! Df(y)(x y) converges (absolutely) in a neighbourhood of y and that this series is equal to f(x). Define a neighbourhood of y by Nd(y) = {x 2 : |x1 y1| + . . . + |xn yn| < d}, where d is a sufficiently small positive constant. Set (t) = f(y +t(xy)). The one-dimensional Taylor theorem says
f(x) = (1) = Xj1 k=0 1 k! (k)(0) + rj , where rj = 1 (j 1)! Z1 0 (1 t)j1(j)(t) dt. 98 CHAPTER 3. CLASSIFICATION From formula 7. for directional derivatives it follows for x 2 Nd(y) that 1 j! dj dtj (t) = X ||=j 1 ! Df(y + t(x y))(x y). From the assumption and the multinomial formula 3. we get for 0 t 1 1 j! dj dtj (t) M X ||=j ||! ! r|| |(x y)| = Mrj (|x1 y1| + . . . + |xn yn|)j M d r j . Choose d > 0 such that d < r, then the Taylor series converges (absolutely) in Nd(y) and it is equal to f(x) since the remainder satisfies, see the above estimate, |rj | = 1 (j 1)! Z1 0 (1 t)j1j(t) dt M
d r j . 2 We recall that the notation f << F (f is majorized by F) was defined in the previous section. Proposition A5. (i) f = (f1, . . . , fm) 2 CM,r(0) if and only if f << (, . . . , ), where (x) = Mr r x1 . . . xn . (ii) f 2 CM,r(0) and f(0) = 0 if and only if f << ( M, . . . , M), where (x) = M(x1 + . . . + xn) r x1 . . . xn . Proof. D(0) = M||!r||. 3.5. THEOREM OF CAUCHY-KOVALEVSKAYA 99 2 Remark. The definition of f << F implies, trivially, that Df << DF. The next proposition shows that compositions majorize if the involved functions majorize. More precisely, we have Proposition A6. Let f, F : Rn 7! Rm and g, G maps a neighbourhood of 0 2 Rm into Rp. Assume all functions f(x), F(x), g(u), G(u) are in C1, f(0) = F(0) = 0, f << F and g << G. Then g(f(x)) << G(F(x)). Proof. See F. John [10], p. 68. Set h(x) = g(f(x)), H(x) = G(F(x)). For each coordinate hk of h we have, according to the chain rule, Dhk(0) = P(gl(0),Dfj(0)), where P are polynomials with nonnegative integers as coefficients, P are independent on g or f and := (@/@u1, . . . , @/@um). Thus, |Dhk(0)| P(|gl(0)|, |Dfj(0)|) P(Gl(0),DFj(0)) = DHk(0). 2 Using this result and Proposition A4, which characterizes real analytic functions, it follows that compositions of real analytic functions are real analytic functions again. Proposition A7. Assume f(x) and g(u) are real analytic, then g(f(x)) is real analytic if f(x) is in the domain of definition of g. Proof. See F. John [10], p. 68. Assume that f maps a neighbourhood of y 2 Rn in Rm and g maps a neighbourhood of v = f(y) in Rm. Then f 2 CM,r(y) and g 2 C,(v) implies h(x) := g(f(x)) 2 C,r/(mM+)(y). Once one has shown this inclusion, the proposition follows from Proposition A4. To show the inclusion, we set h(y + x) := g(f(y + x)) g(v + f(y + x) f(x)) =: g(f(x)), 100 CHAPTER 3. CLASSIFICATION where v = f(y) and g(u) : = g(v + u) 2 C,(0)
f(x) : = f(y + x) f(y) 2 CM,r(0). In the above formulas v, y are considered as fixed parameters. From Proposition A5 it follows f(x) << ( M, . . . , M) =: F g(u) << (, . . . , ) =: G, where (x) = Mr r x1 x2 . . . xn (u) = x1 x2 . . . xn . From Proposition A6 we get h(y + x) << ((x), . . . , (x)) G(F), where (x) = m((x) M) = (r x1 . . . xn) r ( + mM)(x1 + . . . + xn) << r r ( + mM)(x1 + . . . + xn) = r/( + mM) r/( + mM) (x1 + . . . xn) . See an exercise for the <<-inequality. 2 3.6. EXERCISES 101 3.6 Exercises 1. Let : Rn ! R in C1, r 6= 0. Show that for given x0 2 Rn there is in a neighbourhood of x0 a local diffeomorphism = (x), : (x1, . . . , xn) 7! (1, . . . , n), such that n = (x). 2. Show that the differential equation a(x, y)uxx + 2b(x, y)uxy + c(x, y)uyy + lower order terms = 0 is elliptic if ac b2 > 0, parabolic if ac b2 = 0 and hyperbolic if ac b2 < 0. 3. Show that in the hyperbolic case there exists a solution of x+1y = 0, see equation (3.9), such that r 6= 0. Hint: Consider an appropriate Cauchy initial value problem. 4. Show equation (3.4). 5. Find the type of Lu := 2uxx + 2uxy + 2uyy = 0 and transform this equation into an equation with vanishing mixed derivatives by using the orthogonal mapping (transform to principal axis) x = Uy, U orthogonal. 6. Determine the type of the following equation at (x, y) = (1, 1/2). Lu := xuxx + 2yuxy + 2xyuyy = 0. 7. Find all C2-solutions of uxx 4uxy + uyy = 0. Hint: Transform to principal axis and stretching of axis lead to the wave equation. 8. Oscillations of a beam are described by wx
1 E t = 0 x wt = 0, 102 CHAPTER 3. CLASSIFICATION where stresses, w deflection of the beam and E, are positive constants. a) Determine the type of the system. b) Transform the system into two uncoupled equations, that is, w, occur only in one equation, respectively. c) Find non-zero solutions. 9. Find nontrivial solutions (r 6= 0) of the characteristic equation to x2uxx uyy = f(x, y, u,ru), where f is given. 10. Determine the type of uxx xuyx + uyy + 3ux = 2x, where u = u(x, y). 11. Transform equation uxx + (1 y2)uxy = 0, u = u(x, y), into its normal form. 12. Transform the Tricomi-equation yuxx + uyy = 0, u = u(x, y), where y < 0, into its normal form. 13. Transform equation x2uxx y2uyy = 0, u = u(x, y), into its normal form. 14. Show that = 1 (1 + |p|2)3/2 ,= 1 (1 + |p|2)1/2 . are the minimum and maximum of eigenvalues of the matrix (aij), where aij = 1 + |p|2 1/2 ij pipj 1 + |p|2 . 15. Show that Maxwell equations are a hyperbolic system. 3.6. EXERCISES 103 16. Consider Maxwell equations and prove that div E = 0 and div H = 0 for all t if these equations are satisfied for a fixed time t0. Hint. div rot A = 0 for each C2-vector field A = (A1,A2,A3). 17. Assume a characteristic surface S(t) in R3 is defined by (x, y, z, t) = const. such that t = 0 and z 6= 0. Show that S(t) has a nonparametric representation z = u(x, y, t) with ut = 0, that is S(t) is independent of t. 18. Prove formula (3.22) for the normal on a surface. 19. Prove formula (3.23) for the speed of the surface S(t).
20. Write the Navier-Stokes system as a system of type (3.24). 21. Show that the following system (linear elasticity, stationary case of (3.25) in the two dimensional case) is elliptic 4u + ( + ) grad(div u) + f = 0, where u = (u1, u2). The vector f = (f1, f2) is given and , are positive constants. 22. Discuss the type of the following system in stationary gas dynamics (isentrop flow) in R2. uux + vuy + a2x = 0 uvx + vvy + a2y = 0 (ux + vy) + ux + vy = 0. Here are (u, v) velocity vector, density and a = p p0() the sound velocity. 23. Show formula 7. (directional derivative). Hint: Induction with respect to m. 24. Let y = y(x) be the solution of: y0(x) = f(x, y(x)) y(x0) = y0, 104 CHAPTER 3. CLASSIFICATION where f is real analytic in a neighbourhood of (x0, y0) 2 R2. Find the polynomial P of degree 2 such that y(x) = P(x x0) + O(|x x0|3) as x ! x0. 25. Let u be the solution of 4u = 1 u(x, 0) = uy(x, 0) = 0. Find the polynomial P of degree 2 such that u(x, y) = P(x, y) + O((x2 + y2)3/2) as (x, y) ! (0, 0). 26. Solve the Cauchy initial value problem Vt = Mr r s NV (1 + N(n 1)Vs) V (s, 0) = 0. Hint: Multiply the differential equation with (r s NV ). 27. Write 42u = u as a system of first order. Hint: 42u 4(4u). 28. Write the minimal surface equation @ @x 0 @q ux 1 + u2 x + u2y 1 A+ @ @y 0 @q uy 1 + u2 x + u2y 1
= 0 as a system of first order. Hint: $v_1 := u_x\big/\sqrt{1 + u_x^2 + u_y^2}$, $\ v_2 := u_y\big/\sqrt{1 + u_x^2 + u_y^2}$.

29. Let $f: \mathbb{R}\times\mathbb{R}^m \to \mathbb{R}^m$ be real analytic in $(x_0, y_0)$. Show that a real analytic solution in a neighbourhood of $x_0$ of the problem
$$y'(x) = f(x, y)$$
$$y(x_0) = y_0$$
exists and is equal to the unique $C^1[x_0 - \delta,\ x_0 + \delta]$-solution from the Picard-Lindelöf theorem, $\delta > 0$ sufficiently small.

30. Show (see the proof of Proposition A7)
$$\frac{\mu\rho\,(r - x_1 - \ldots - x_n)}{\rho r - (\rho + mM)(x_1 + \ldots + x_n)} \;<<\; \frac{\mu\rho\, r}{\rho r - (\rho + mM)(x_1 + \ldots + x_n)}.$$
Hint: Leibniz's rule.

Chapter 4

Hyperbolic equations

Here we consider hyperbolic equations of second order, mainly wave equations.

4.1 One-dimensional wave equation

The one-dimensional wave equation is given by
$$\frac{1}{c^2}u_{tt} - u_{xx} = 0, \qquad (4.1)$$
where $u = u(x,t)$ is a scalar function of two variables and $c$ is a positive constant. According to previous considerations, all $C^2$-solutions of the wave equation are
$$u(x,t) = f(x + ct) + g(x - ct), \qquad (4.2)$$
with arbitrary $C^2$-functions $f$ and $g$.

The Cauchy initial value problem for the wave equation is to find a $C^2$-solution of
$$\frac{1}{c^2}u_{tt} - u_{xx} = 0$$
$$u(x,0) = \alpha(x)$$
$$u_t(x,0) = \beta(x),$$
where $\alpha,\ \beta \in C^2(-\infty,\infty)$ are given.

Theorem 4.1. There exists a unique $C^2(\mathbb{R}\times\mathbb{R})$-solution of the Cauchy initial value problem, and this solution is given by d'Alembert's formula
$$u(x,t) = \frac{\alpha(x + ct) + \alpha(x - ct)}{2} + \frac{1}{2c}\int_{x-ct}^{x+ct}\beta(s)\,ds. \qquad (4.3)$$

Proof. Assume there is a solution $u(x,t)$ of the Cauchy initial value problem, then it follows from (4.2) that
$$u(x,0) = f(x) + g(x) = \alpha(x) \qquad (4.4)$$
$$u_t(x,0) = cf'(x) - cg'(x) = \beta(x). \qquad (4.5)$$
From (4.4) we obtain
$$f'(x) + g'(x) = \alpha'(x),$$
which implies, together with (4.5), that
$$f'(x) = \frac{\alpha'(x) + \beta(x)/c}{2}$$
$$g'(x) = \frac{\alpha'(x) - \beta(x)/c}{2}.$$
Then
$$f(x) = \frac{\alpha(x)}{2} + \frac{1}{2c}\int_0^x \beta(s)\,ds + C_1$$
$$g(x) = \frac{\alpha(x)}{2} - \frac{1}{2c}\int_0^x \beta(s)\,ds + C_2.$$
The constants $C_1$, $C_2$ satisfy
$$C_1 + C_2 = f(x) + g(x) - \alpha(x) = 0,$$
see (4.4). Thus each $C^2$-solution of the Cauchy initial value problem is given by d'Alembert's formula. On the other hand, the function $u(x,t)$ defined by the right hand side of (4.3) is a solution of the initial value problem. $\Box$

Corollaries. 1. The solution $u(x,t)$ of the initial value problem depends on the values of $\alpha$ at the endpoints of the interval $[x-ct,\ x+ct]$ and on the values of $\beta$ on this interval only, see Figure 4.1. The interval $[x - ct,\ x + ct]$ is called domain of dependence.

(d'Alembert, Jean Baptiste le Rond, 1717-1783)

Figure 4.1: Interval of dependence

2. Let $P$ be a point on the $x$-axis. Then we ask which points $(x,t)$ need values of $\alpha$ or $\beta$ at $P$ in order to calculate $u(x,t)$? From the d'Alembert formula it follows that this domain is a cone, see Figure 4.2. This set is called domain of influence.
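The following short numerical check is my own addition and not part of the text; the data alpha, beta and the step size h are arbitrary choices. It evaluates d'Alembert's formula (4.3) with scipy's quadrature and verifies by finite differences that the resulting function satisfies the wave equation (4.1) and both initial conditions.

import numpy as np
from scipy.integrate import quad

c = 2.0
alpha = lambda x: np.exp(-x**2)   # initial displacement u(x, 0)
beta  = lambda x: np.cos(x)       # initial velocity u_t(x, 0)

def u(x, t):
    # d'Alembert's formula (4.3)
    integral, _ = quad(beta, x - c*t, x + c*t, epsabs=1e-12, epsrel=1e-12)
    return 0.5*(alpha(x + c*t) + alpha(x - c*t)) + integral/(2.0*c)

x0, t0, h = 0.3, 0.7, 1e-3
u_tt = (u(x0, t0 + h) - 2*u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2
print(u_tt/c**2 - u_xx)                            # ~ 0: equation (4.1)
print(u(x0, 0.0) - alpha(x0))                      # = 0: u(x, 0) = alpha(x)
print((u(x0, h) - u(x0, -h))/(2*h) - beta(x0))     # ~ 0: u_t(x, 0) = beta(x)

The printed residuals are of the size of the finite-difference truncation error, which illustrates Theorem 4.1 for this particular choice of data.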
t x P xct=const. Figure 4.2: Domain of influence 4.2 Higher dimensions Set 2u = utt c24u, 4 4x = @2/@x21 + . . . + @2/@x2 n, 110 CHAPTER 4. HYPERBOLIC EQUATIONS and consider the initial value problem 2u = 0 in Rn R (4.6) u(x, 0) = f(x) (4.7) ut(x, 0) = g(x), (4.8) where f and g are given C2(R2)-functions. By using spherical means and the above dAlembert formula we will derive a formula for the solution of this initial value problem. Method of spherical means Define the spherical mean for a C2-solution u(x, t) of the initial value problem by M(r, t) = 1 !nrn1 Z @Br(x) u(y, t) dSy, (4.9) where !n = (2)n/2/(n/2) is the area of the n-dimensional sphere, !nrn1 is the area of a sphere with radius r. From the mean value theorem of the integral calculus we obtain the function u(x, t) for which we are looking at by u(x, t) = lim r!0 M(r, t). (4.10) Using the initial data, we have M(r, 0) = 1 !nrn1 Z @Br(x) f(y) dSy =: F(r) (4.11) Mt(r, 0) = 1 !nrn1 Z @Br(x) g(y) dSy =: G(r), (4.12) which are the spherical means of f and g. The next step is to derive a partial differential equation for the spherical mean. From definition (4.9) of the spherical mean we obtain, after the mapping = (y x)/r, where x and r are fixed, M(r, t) = 1 !n
Z @B1(0) u(x + r, t) dS. 4.2. HIGHER DIMENSIONS 111 It follows Mr(r, t) = 1 !n Z @B1(0) Xn i=1 uyi(x + r, t)i dS = 1 !nrn1 Z @Br(x) Xn i=1 uyi(y, t)i dSy. Integration by parts yields 1 !nrn1 Z Br(x) Xn i=1 uyiyi(y, t) dy since (yx)/r is the exterior normal at @Br(x). Assume u is a solution of the wave equation, then rn1Mr = 1 c2!n Z Br(x) utt(y, t) dy = 1 c2!n Zr 0 Z @Bc(x) utt(y, t) dSydc. The previous equation follows by using spherical coordinates. Consequently (rn1Mr)r = 1 c2!n Z @Br(x) utt(y, t) dSy = rn1 c2 @2
@t2 1 !nrn1 Z @Br(x) u(y, t) dSy ! = rn1 c2 Mtt. Thus we arrive at the differential equation (rn1Mr)r = c2rn1Mtt, which can be written as Mrr + n1 r Mr = c2Mtt. (4.13) This equation (4.13) is called Euler-Poisson-Darboux equation. 112 CHAPTER 4. HYPERBOLIC EQUATIONS 4.2.1 Case n=3 The Euler-Poisson-Darboux equation in this case is (rM)rr = c2(rM)tt. Thus rM is the solution of the one-dimensional wave equation with initial data (rM)(r, 0) = rF(r) (rM)t(r, 0) = rG(r). (4.14) From the dAlembert formula we get formally M(r, t) = (r + ct)F(r + ct) + (r ct)F(r ct) 2r + 1 2cr Z r+ct rct G() d. (4.15) The right hand side of the previous formula is well defined if the domain of dependence [x ct, x + ct] is a subset of (0,1). We can extend F and G to F0 and G0 which are defined on (1,1) such that rF0 and rG0 are C2(R)-functions as follows. Set F0(r) = 8< : F(r) : r > 0 f(x) : r = 0 F(r) : r < 0 The function G0(r) is given by the same definition where F and f are replaced by G and g, respectively. Lemma. rF0(r), rG0(r) 2 C2(R2). Proof. From definition of F(r) and G(r), r > 0, it follows from the mean value theorem lim r!+0 F(r) = f(x), lim r!+0 G(r) = g(x).
Thus rF0(r) and rG0(r) are C(R)-functions. These functions are also in 4.2. HIGHER DIMENSIONS 113 C1(R). This follows since F0 and G0 are in C1(R). We have, for example, F0(r) = 1 !n Z @B1(0) Xn j=1 fyj (x + r)j dS F0(+0) = 1 !n Z @B1(0) Xn j=1 fyj (x)j dS = 1 !n Xn j=1 fyj (x) Z @B1(0) nj dS = 0. Then, rF0(r) and rG0(r) are in C2(R), provided F00 and G00 are bounded as r ! +0. This property follows from F00(r) = 1 !n Z @B1(0) Xn i,j=1 fyiyj (x + r)ij dS. Thus F00(+0) = 1 !n Xn i,j=1 fyiyj (x) Z @B1(0) ninj dS. We recall that f, g 2 C2(R2) by assumption. 2 The solution of the above initial value problem, where F and G are replaced by F0 and G0, respectively, is M0(r, t) = (r + ct)F0(r + ct) + (r ct)F0(r ct) 2r +
1 2cr Z r+ct rct G0() d. Since F0 and G0 are even functions, we have Z ctr rct G0() d = 0. Thus M0(r, t) = (r + ct)F0(r + ct) (ct r)F0(ct r) 2r + 1 2cr Z ct+r ctr G0() d, (4.16) 114 CHAPTER 4. HYPERBOLIC EQUATIONS rct ctr ct+r Figure 4.3: Changed domain of integration see Figure 4.3. For fixed t > 0 and 0 < r < ct it follows that M0(r, t) is the solution of the initial value problem with given initially data (4.14) since F0(s) = F(s), G0(s) = G(s) if s > 0. Since for fixed t > 0 u(x, t) = lim r!0 M0(r, t), it follows from dHospitals rule that u(x, t) = ctF0(ct) + F(ct) + tG(ct) = d dt (tF(ct)) + tG(ct). Theorem 4.2. Assume f 2 C3(R3) and g 2 C2(R3) are given. Then there exists a unique solution u 2 C2(R3 [0,1)) of the initial value problem (4.6)-(4.7), where n = 3, and the solution is given by the Poissons formula u(x, t) = 1 4c2 @ @t 1 t Z @Bct(x) f(y) dSy ! + 1 4c2t Z @Bct(x) g(y) dSy. (4.17)
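A quick plausibility check of formula (4.17), added as an aside and assuming the usual normalization in which the prefactors read $\frac{1}{4\pi c^2}$ and $\frac{1}{4\pi c^2 t}$: choose $f \equiv 0$ and $g \equiv 1$. Then the first term vanishes and the remaining integral is the surface area of $\partial B_{ct}(x)$, so

$$u(x,t) = \frac{1}{4\pi c^2 t}\int_{\partial B_{ct}(x)} 1\, dS_y = \frac{4\pi (ct)^2}{4\pi c^2 t} = t,$$

which satisfies $u_{tt} = c^2\triangle u$, $u(x,0) = 0$ and $u_t(x,0) = 1 = g$, as it must.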
Proof. Above we have shown that a C2-solution is given by Poissons formula. Under the additional assumption f 2 C3 it follows from Poissons 4.2. HIGHER DIMENSIONS 115 formula that this formula defines a solution which is in C2, see F. John [10], p. 129. 2 Corollary. From Poissons formula we see that the domain of dependence for u(x, t0) is the intersection of the cone defined by |y x| = c|t t0| with the hyperplane defined by t = 0, see Figure 4.4 t x (x,t ) 0 |yx|=c| tt | 0 Figure 4.4: Domain of dependence, case n = 3 4.2.2 Case n = 2 Consider the initial value problem vxx + vyy = c2vtt (4.18) v(x, y, 0) = f(x, y) (4.19) vt(x, y, 0) = g(x, y), (4.20) where f 2 C3, g 2 C2. Using the formula for the solution of the three-dimensional initial value problem we will derive a formula for the two-dimensional case. The following consideration is called Hadamards method of decent. Let v(x, y, t) be a solution of (4.18)-(4.20), then u(x, y, z, t) := v(x, y, t) 116 CHAPTER 4. HYPERBOLIC EQUATIONS is a solution of the three-dimensional initial value problem with initial data f(x, y), g(x, y), independent of z, since u satisfies (4.18)-(4.20). Hence, since u(x, y, z, t) = u(x, y, 0, t) + uz(x, y, z, t)z, 0 < < 1, and uz = 0, we have v(x, y, t) = u(x, y, 0, t). Poissons formula in the three-dimensional case implies v(x, y, t) = 1 4c2 @ @t 1 t Z @Bct(x,y,0) f(, ) dS ! + 1 4c2t Z @Bct(x,y,0) g(, ) dS. (4.21) n dS S _ + S r x
h z dxdh Figure 4.5: Domains of integration The integrands are independent on . The surface S is defined by (, , ) := ( x)2 +( y)2 +2 c2t2 = 0. Then the exterior normal n at S is n = r/|r| and the surface element is given by dS = (1/|n3|)dd, where the third coordinate of n is n3 = p c2t2 ( x)2 ( y)2 ct . 4.3. INHOMOGENEOUS EQUATION 117 The positive sign applies on S+, where > 0 and the sign is negative on S where < 0, see Figure 4.5. We have S = S+ [ S. Set = p ( x)2 + ( y)2. Then it follows from (4.21) Theorem 4.3. The solution of the Cauchy initial value problem (4.18)(4.20) is given by v(x, y, t) = 1 2c @ @t Z Bct(x,y) f(, ) p c2t2 2 dd + 1 2c Z Bct(x,y) g(, ) p c2t2 2 dd. Corollary. In contrast to the three dimensional case, the domain of dependence is here the disk Bcto(x0, y0) and not the boundary only. Therefore, see formula of Theorem 4.3, if f, g have supports in a compact domain D R2, then these functions have influence on the value v(x, y, t) for all time t > T, T sufficiently large. 4.3 Inhomogeneous equation Here we consider the initial value problem 2u = w(x, t) on x 2 Rn, t 2 R (4.22) u(x, 0) = f(x) (4.23) ut(x, 0) = g(x), (4.24) where 2u := utt c24u. We assume f 2 C3, g 2 C2 and w 2 C1, which are given. Set u = u1 + u2, where u1 is a solution of problem (4.22)-(4.24) with w := 0 and u2 is the solution where f = 0 and g = 0 in (4.22)-(4.24). Since we have explicit solutions u1 in the cases n = 1, n = 2 and n = 3, it remains to solve 2u = w(x, t) on x 2 Rn, t 2 R (4.25)
u(x, 0) = 0 (4.26) ut(x, 0) = 0. (4.27) The following method is called Duhamels principle which can be considered as a generalization of the method of variations of constants in the theory of ordinary differential equations. 118 CHAPTER 4. HYPERBOLIC EQUATIONS To solve this problem, we make the ansatz u(x, t) = Zt 0 v(x, t, s) ds, (4.28) where v is a function satisfying 2v = 0 for all s (4.29) and v(x, s, s) = 0. (4.30) From ansatz (4.28) and assumption (4.30) we get ut = v(x, t, t) + Zt 0 vt(x, t, s) ds, = Zt 0 vt(x, t, s). (4.31) It follows ut(x, 0) = 0. The initial condition u(x, t) = 0 is satisfied because of the ansatz (4.28). From (4.31) and ansatz (4.28) we see that utt = vt(x, t, t) + Zt 0 vtt(x, t, s) ds, 4xu = Zt 0 4xv(x, t, s) ds. Therefore, since u is an ansatz for (4.25)-(4.27), utt c24xu = vt(x, t, t) + Zt 0 (2v)(x, t, s) ds = w(x, t). Thus necessarily vt(x, t, t) = w(x, t), see (4.29). We have seen that the ansatz provides a solution of (4.25)-(4.27) if for all s 2v = 0, v(x, s, s) = 0, vt(x, s, s) = w(x, s). (4.32) Let v(x, t, s) be a solution of 2v = 0, v(x, 0, s) = 0, vt(x, 0, s) = w(x, s), (4.33) then v(x, t, s) := v(x, t s, s) 4.3. INHOMOGENEOUS EQUATION 119 is a solution of (4.32). In the case n = 3, where v is given by, see Theorem 4.2, v(x, t, s) = 1 4c2t Z @Bct(x) w(, s) dS. Then
v(x, t, s) = v(x, t s, s) = 1 4c2(t s) Z @Bc(ts)(x) w(, s) dS. from ansatz (4.28) it follows u(x, t) = Zt 0 v(x, t, s) ds = 1 4c2 Zt 0 Z @Bc(ts)(x) w(, s) ts dSds. Changing variables by = c(t s) yields u(x, t) = 1 4c2 Z ct 0 Z @B (x) w(, t /c) dSd = 1 4c2 Z Bct(x) w(, t r/c) r d, where r = |x |. Formulas for the cases n = 1 and n = 2 follow from formulas for the associated homogeneous equation with inhomogeneous initial values for these cases. Theorem 4.4. The solution of 2u = w(x, t), u(x, 0) = 0, ut(x, 0) = 0, where w 2 C1, is given by: Case n = 3: u(x, t) = 1 4c2 Z Bct(x) w(, t r/c) r
d, where r = |x |, x = (x1, x2, x3), = (1, 2, 3). 120 CHAPTER 4. HYPERBOLIC EQUATIONS Case n = 2: u(x, t) = 1 4c Zt 0 Z Bc(t)(x) w(, ) p c2(t )2 r2 d ! d, x = (x1, x2), = (1, 2). Case n = 1: u(x, t) = 1 2c Zt 0 Z x+c(t) xc(t) w(, ) d ! d. Remark. The integrand on the right in formula for n = 3 is called retarded potential. The integrand is taken not at t, it is taken at an earlier time t r/c. 4.4 A method of Riemann Riemanns method provides a formula for the solution of the following Cauchy initial value problem for a hyperbolic equation of second order in two variables. Let S : x = x(t), y = y(t), t1 t t2, be a regular curve in R2, that is, we assume x, y 2 C1[t1, t2] and x02+y02 6= 0. Set Lu := uxy + a(x, y)ux + b(x, y)uy + c(x, y)u, where a, b 2 C1 and c, f 2 C in a neighbourhood of S. Consider the initial value problem Lu = f(x, y) (4.34) u0(t) = u(x(t), y(t)) (4.35) p0(t) = ux(x(t), y(t)) (4.36) q0(t) = uy(x(t), y(t)), (4.37) where f 2 C in a neighbourhood of S and u0, p0, q0 2 C1 are given. We assume: 4.4. A METHOD OF RIEMANN 121 (i) u00(t) = p0(t)x0(t) + q0(t)y0(t) (strip condition), (ii) S is not a characteristic curve. Moreover assume that the characteristic curves, which are lines here and are defined by x = const. and y = const., have at most one point of intersection with S, and such a point is not a touching point, i. e., tangents of the characteristic and S are different at this point. We recall that the characteristic equation to (4.34) is xy = 0 which is satisfied if x(x, y) = 0 or y(x, y) = 0. One family of characteristics
associated to these first partial differential of first order is defined by x0(t) = 1, y0(t) = 0, see Chapter 2. Assume u, v 2 C1 and that uxy, vxy exist and are continuous. Define the adjoint differential expression by Mv = vxy (av)x (bv)y + cv. We have 2(vLu uMv) = (uxv vxu + 2buv)y + (uyv vyu + 2auv)x. (4.38) Set P = (uxv xxu + 2buv) Q = uyv vyu + 2auv. From (4.38) it follows for a domain 2 R2 2 Z (vLu uMv) dxdy = Z (Py + Qx) dxdy = I Pdx + Qdy, (4.39) where integration in the line integral is anticlockwise. The previous equation follows from Gauss theorem or after integration by parts: Z (Py + Qx) dxdy = Z @ (Pn2 + Qn1) ds, where n = (dy/ds,dx/ds), s arc length, (x(s), y(s)) represents @. Assume u is a solution of the initial value problem (4.34)-(4.37) and suppose that v satisfies Mv = 0 in . 122 CHAPTER 4. HYPERBOLIC EQUATIONS A B S W x y 0 P=(x ,y ) 0 Figure 4.6: Riemanns method, domain of integration Then, if we integrate over a domain as shown in Figure 4.6, it follows from (4.39) that 2 Z vf dxdy = Z BA Pdx+Qdy+ Z AP Pdx+Qdy+ Z PB Pdx+Qdy. (4.40) The line integral from B to A is known from initial data, see the definition of P and Q. Since
uxv vxu + 2buv = (uv)x + 2u(bv vx), it follows Z AP Pdx + Qdy = Z AP ((uv)x + 2u(bv vx)) dx = (uv)(P) + (uv)(A) Z AP 2u(bv vx) dx. By the same reasoning we obtain for the third line integral Z PB Pdx + Qdy = Z PB ((uv)y + 2u(av vy)) dy = (uv)(B) (uv)(P) + Z PB 2u(av vy) dy. 4.4. A METHOD OF RIEMANN 123 Combining these equations with (4.39), we get 2v(P)u(P) = Z BA (uxv vx + 2buv) dx (uyv vyu + 2auv) dy +u(A)v(A) + u(B)v(B) + 2 Z AP u(bv vx) dx +2 Z PB u(av vy) dy 2 Z fv dxdy. (4.41) Let v be a solution of the initial value problem, see Figure 4.7 for the definition of domain D(P), x y 0 P=(x ,y ) 0 C C2 1 D(P) Figure 4.7: Definition of Riemanns function Mv = 0 in D(P) (4.42) bv vx = 0 on C1 (4.43) av vy = 0 on C2 (4.44) v(P) = 1. (4.45) Assume v satisfies (4.42)-(4.45), then 2u(P) = u(A)v(A) + u(B)v(B) 2 Z fv dxdy
= Z BA (uxv vx + 2buv) dx (uyv vyu + 2auv) dy, 124 CHAPTER 4. HYPERBOLIC EQUATIONS where the right hand side is known from given data. A function v = v(x, y; x0, y0) satisfying (4.42)-(4.45) is called Riemanns function. Remark. Set w(x, y) = v(x, y; x0, y0) for fixed x0, y0. Then (4.42)-(4.45) imply w(x, y0) = exp Z x x0 b(, y0) d on C1, w(x0, y) = exp Z y y0 a(x0, ) d on C2. Examples 1. uxy = f(x, y), then a Riemann function is v(x, y) 1. 2. Consider the telegraph equation of Chapter 3 "utt = c24xu ut, where u stands for one coordinate of electric or magnetic field. Introducing u = w(x, t)et, where = /(2"), we arrive at wtt = c2 "4xw 2 42 . Stretching the axis and transform the equation to the normal form we get finally the following equation, the new function is denoted by u and the new variables are denoted by x, y again, uxy + cu = 0, with a positive constant c. We make the ansatz for a Riemann function v(x, y; x0, y0) = w(s), s = (x x0)(y y0) and obtain sw00 + w0 + cw = 0. 4.5. INITIAL-BOUNDARY VALUE PROBLEMS 125 Substitution = p4cs leads to Bessels differential equation 2z00() + z0() + 2z() = 0, where z() = w(2/(4c)). A solution is J0() = J0 p 4c(x x0)(y y0) which defines a Riemann function since J0(0) = 1. Remark. Bessels differential equation is x2y00(x) + xy0(x) + (x2 n2)y(x) = 0, where n 2 R. If n 2 N [ {0}, then solutions are given by Bessel functions. One of the two linearly independent solutions is bounded at 0. This bounded solution is the Bessel function Jn(x) of first kind and of order n, see [1], for
example. 4.5 Initial-boundary value problems In previous sections we looked at solutions defined for all x 2 Rn and t 2 R. In this and in the following section we seek solutions u(x, t) defined in a bounded domain Rn and for all t 2 R and which satisfy additional boundary conditions on @. 4.5.1 Oscillation of a string Let u(x, t), x 2 [a, b], t 2 R, be the deflection of a string, see Figure 1.4 from Chapter 1. Assume the deflection occurs in the (x, u)-plane. This problem is governed by the initial-boundary value problem utt(x, t) = uxx(x, t) on (0, l) (4.46) u(x, 0) = f(x) (4.47) ut(x, 0) = g(x) (4.48) u(0, t) = u(l, t) = 0. (4.49) Assume the initial data f, g are sufficiently regular. This implies compatibility conditions f(0) = f(l) = 0 and g(0) = g(l). 126 CHAPTER 4. HYPERBOLIC EQUATIONS Fouriers method To find solutions of differential equation (4.46) we make the separation of variables ansatz u(x, t) = v(x)w(t). Inserting the ansatz into (4.46) we obtain v(x)w00(t) = v00(x)w(t), or, if v(x)w(t) 6= 0, w00(t) w(t) = v00(x) v(x) . It follows, provided v(x)w(t) is a solution of differential equation (4.46) and v(x)w(t) 6= 0, w00(t) w(t) = const. =: and v00(x) v(x) = since x, t are independent variables. Assume v(0) = v(l) = 0, then v(x)w(t) satisfies the boundary condition (4.49). Thus we look for solutions of the eigenvalue problem v00(x) = v(x) in (0, l) (4.50) v(0) = v(l) = 0, (4.51) which has the eigenvalues n = l n 2 , n = 1, 2, . . . , and associated eigenfunctions are vn = sin l nx
. Solutions of $w''(t)=-\lambda_n w(t)$ are $\sin(\sqrt{\lambda_n}\,t)$, $\cos(\sqrt{\lambda_n}\,t)$. Set

$w_n(t)=\alpha_n\cos(\sqrt{\lambda_n}\,t)+\beta_n\sin(\sqrt{\lambda_n}\,t),$

where $\alpha_n,\ \beta_n\in\mathbb{R}$. It is easily seen that $w_n(t)v_n(x)$ is a solution of differential equation (4.46), and, since (4.46) is linear and homogeneous, so is (principle of superposition)

$u_N=\sum_{n=1}^N w_n(t)v_n(x),$

which satisfies the differential equation (4.46) and the boundary conditions (4.49). Consider the formal solution of (4.46), (4.49)

$u(x,t)=\sum_{n=1}^{\infty}\big(\alpha_n\cos(\sqrt{\lambda_n}\,t)+\beta_n\sin(\sqrt{\lambda_n}\,t)\big)\sin(\sqrt{\lambda_n}\,x).$   (4.52)

"Formal" means that we know here neither that the right hand side converges nor that it is a solution of the initial-boundary value problem. Formally, the unknown coefficients can be calculated from the initial conditions (4.47), (4.48) as follows. We have

$u(x,0)=\sum_{n=1}^{\infty}\alpha_n\sin(\sqrt{\lambda_n}\,x)=f(x).$

Multiplying this equation by $\sin(\sqrt{\lambda_k}\,x)$ and integrating over $(0,l)$, we get
$\alpha_k\int_0^l\sin^2(\sqrt{\lambda_k}\,x)\,dx=\int_0^l f(x)\sin(\sqrt{\lambda_k}\,x)\,dx.$

We recall that

$\int_0^l\sin(\sqrt{\lambda_n}\,x)\sin(\sqrt{\lambda_k}\,x)\,dx=\frac{l}{2}\,\delta_{nk}.$

Then

$\alpha_k=\frac{2}{l}\int_0^l f(x)\sin\Big(\frac{k\pi}{l}x\Big)\,dx.$   (4.53)

By the same argument it follows from

$u_t(x,0)=\sum_{n=1}^{\infty}\beta_n\sqrt{\lambda_n}\,\sin(\sqrt{\lambda_n}\,x)=g(x)$

that

$\beta_k=\frac{2}{k\pi}\int_0^l g(x)\sin\Big(\frac{k\pi}{l}x\Big)\,dx.$   (4.54)

Under the additional assumptions $f\in C_0^4(0,l)$, $g\in C_0^3(0,l)$ it follows that the right hand side of (4.52), where $\alpha_n$, $\beta_n$ are given by (4.53) and (4.54), respectively, defines a classical solution of (4.46)-(4.49), since under these assumptions the series for $u$ and the formally differentiated series for $u_t$, $u_{tt}$, $u_x$, $u_{xx}$ converge uniformly on $0\le x\le l$, $0\le t\le T$, $0<T<\infty$ fixed, see an exercise.
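The coefficients (4.53), (4.54) and the partial sums of (4.52) are easy to compute. The following sketch is not part of the text; the string length, the initial data and the truncation at N modes are choices made only for this illustration.

```python
import numpy as np

l, N = 1.0, 50                        # string length and number of modes (demo choices)
f = lambda x: x * (l - x)             # initial deflection, f(0) = f(l) = 0
g = lambda x: np.zeros_like(x)        # initial velocity

xq = np.linspace(0.0, l, 2001)        # quadrature grid for (4.53), (4.54)
k = np.arange(1, N + 1)
S = np.sin(np.pi * np.outer(k, xq) / l)                       # S[k-1, j] = sin(k pi x_j / l)
alpha = (2.0 / l) * np.trapz(f(xq) * S, xq, axis=1)           # (4.53)
beta = (2.0 / (np.pi * k)) * np.trapz(g(xq) * S, xq, axis=1)  # (4.54)

def u(x, t):
    """Partial sum of (4.52); here sqrt(lambda_k) = k pi / l."""
    w = np.pi * k / l
    modes = (alpha * np.cos(w * t) + beta * np.sin(w * t))[:, None] \
            * np.sin(np.pi * np.outer(k, x) / l)
    return modes.sum(axis=0)

x = np.linspace(0.0, l, 9)
print(np.max(np.abs(u(x, 0.0) - f(x))))   # small: the truncated series recovers the initial data
print(u(x, 0.25))                          # deflection at a later time
```

For smooth data vanishing at the endpoints the coefficients decay rapidly, which is exactly the mechanism behind the classical-solution statement above.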
4.5.2 Oscillation of a membrane

Let $\Omega\subset\mathbb{R}^2$ be a bounded domain. We consider the initial-boundary value problem

$u_{tt}(x,t)=\triangle_x u$ in $\Omega\times\mathbb{R}$,   (4.55)
$u(x,0)=f(x)$, $x\in\Omega$,   (4.56)
$u_t(x,0)=g(x)$, $x\in\Omega$,   (4.57)
$u(x,t)=0$ on $\partial\Omega\times\mathbb{R}$.   (4.58)

As in the previous subsection for the string, we make the separation of variables ansatz $u(x,t)=w(t)v(x)$, which leads to the eigenvalue problem

$-\triangle v=\lambda v$ in $\Omega$,   (4.59)
$v=0$ on $\partial\Omega$.   (4.60)

Let $\lambda_n$ be the eigenvalues of (4.59), (4.60) and $v_n$ a complete associated orthonormal system of eigenfunctions. We assume $\Omega$ is sufficiently regular such that the eigenvalues are countable, which is satisfied in the following examples. Then the formal solution of the above initial-boundary value problem is

$u(x,t)=\sum_{n=1}^{\infty}\big(\alpha_n\cos(\sqrt{\lambda_n}\,t)+\beta_n\sin(\sqrt{\lambda_n}\,t)\big)v_n(x),$

where

$\alpha_n=\int_\Omega f(x)v_n(x)\,dx,\qquad \beta_n=\frac{1}{\sqrt{\lambda_n}}\int_\Omega g(x)v_n(x)\,dx.$

Remark. In general, the eigenvalues of (4.59), (4.60) are not known explicitly. There are numerical methods to calculate these values. In some special cases, see the next examples, these values are known.

Examples

1. Rectangle membrane. Let $\Omega=(0,a)\times(0,b)$. Using the method of separation of variables, we find all eigenvalues of (4.59), (4.60), which are given by $\lambda_{kl}=\pi^2\big(\frac{k^2}{a^2}+$
l2 b2 , k, l = 1, 2, . . . and associated eigenfunctions, not normalized, are ukl(x) = sin k a x1 sin l b x2 . 2. Disk membrane. Set = {x 2 R2 : x21 + x22 < R2}. In polar coordinates, the eigenvalue problem (4.59), (4.60) is given by 1 r (rur)r + 1 r u = u (4.61) u(R, ) = 0, (4.62) here is u = u(r, ) := v(r cos , r sin ). We will find eigenvalues and eigenfunctions by separation of variables u(r, ) = v(r)q(), where v(R) = 0 and q() is periodic with period 2 since u(r, ) is single valued. This leads to 1 r (rv0)0q + 1 r vq00 = vq. Dividing by vq, provided vq 6= 0, we obtain 1 r (rv0(r))0 v(r) + 1
r q00() q() = , (4.63) which implies q00() q() = const. =: . 130 CHAPTER 4. HYPERBOLIC EQUATIONS Thus, we arrive at the eigenvalue problem q00() = q() q() = q( + 2). It follows that eigenvalues are real and nonnegative. All solutions of the differential equation are given by q() = Asin(p) + B cos(p), where A, B are arbitrary real constants. From the periodicity requirement Asin(p) + B cos(p) = Asin(p( + 2)) + B cos(p( + 2)) it follows2 sin(p) (Acos(p + p) B sin(p + p)) = 0, which implies, since A, B are not zero simultaneously, because we are looking for q not identically zero, sin(p) sin(p + ) = 0 for all and a = (A,B, ). Consequently the eigenvalues are n = n2, n = 0, 1, . . . . Inserting q00()/q() = n2 into (4.63), we obtain the boundary value problem r2v00(r) + rv0(r) + (r2 n2)v = 0 on (0,R) (4.64) v(R) = 0 (4.65) sup r2(0,R) |v(r)| < 1. (4.66) Set z = pr and v(r) = v(z/p) =: y(z), then, see (4.64), z2y00(z) + zy0(z) + (z2 n2)y(z) = 0, 2 sin x sin y = 2 cos x+y 2 sin xy 2 cos x cos y = 2 sin x+y 2 sin xy 2 4.5. INITIAL-BOUNDARY VALUE PROBLEMS 131 where z > 0. Solutions of this differential equations which are bounded at zero are Bessel functions of first kind and n-th order Jn(z). The eigenvalues follows from boundary condition (4.65), i. e., from Jn(pR) = 0. Denote by nk the zeros of Jn(z), then the eigenvalues of (4.61)-(4.61) are nk = nk R 2 and the associated eigenfunctions are Jn(
p nkr) sin(n), n = 1, 2, . . . Jn( p nkr) cos(n), n = 0, 1, 2, . . . . Thus the eigenvalues 0k are simple and nk, n 1, are double eigenvalues. Remark. For tables with zeros of Jn(x) and for much more properties of Bessel functions see [25]. One has, in particular, the asymptotic formula Jn(x) = 2 x 1/2 cos(x n/2 /5) + O 1 x as x ! 1. It follows from this formula that there are infinitely many zeros of Jn(x). 4.5.3 Inhomogeneous wave equations Let Rn be a bounded and sufficiently regular domain. In this section we consider the initial-boundary value problem utt = Lu + f(x, t) in R (4.67) u(x, 0) = (x) x 2 (4.68) ut(x, 0) = (x) x 2 (4.69) u(x, t) = 0 for x 2 @ and t 2 Rn, (4.70) where u = u(x, t), x = (x1, . . . , xn), f, , are given and L is an elliptic differential operator. Examples for L are: 1. L = @2/@x2, oscillating string. 2. L = 4x, oscillating membrane. 132 CHAPTER 4. HYPERBOLIC EQUATIONS 3. Lu = Xn i,j=1 @ @xj aij(x)uxi , where aij = aji are given sufficiently regular functions defined on . We assume L is uniformly elliptic, that is, there is a constant > 0 such that Xn i,j=1 aijij ||2 for all x 2 and 2 Rn. 4. Let u = (u1, . . . , um) and Lu = Xn i,j=1 @ @xj Aij(x)uxi
, where Aij = Aji are given sufficiently regular (m m)-matrices on . We assume that L defines an elliptic system. An example for this case is the linear elasticity. Consider the eigenvalue problem Lv = v in (4.71) v = 0 on @. (4.72) Assume there are infinitely many eigenvalues 0 < 1 2 . . . ! 1 and a system of associated eigenfunctions v1, v2, . . . which is complete and orthonormal in L2(). This assumption is satisfied if is bounded and if @ is sufficiently regular. For the solution of (4.67)-(4.70) we make the ansatz u(x, t) = 1X k=1 vk(x)wk(t), (4.73) with functions wk(t) which will be determined later. It is assumed that all series are convergent and that following calculations make sense. Let f(x, t) = 1X k=1 ck(t)vk(x) (4.74) 4.5. INITIAL-BOUNDARY VALUE PROBLEMS 133 be Fouriers decomposition of f with respect to the eigenfunctions vk. We have ck(t) = Z f(x, t)vk(x) dx, (4.75) which follows from (4.74) after multiplying with vl(x) and integrating over . Set h, vki = Z (x)vk(x) dx, then (x) = 1X k=1 h, vkivk(x) (x) = 1X k=1 h, vkivk(x) are Fouriers decomposition of and , respectively. In the following we will determine wk(t), which occurs in ansatz (4.73), from the requirement that u = vk(x)wk(t) is a solution of utt = Lu + ck(t)vk(x) and that the initial conditions wk(0) = h, vki, w0k (0) = h, vki are satisfied. From the above differential equation it follows w00 k(t) = kwk(t) + ck(t). Thus wk(t) = ak cos( p
kt) + bk sin( p kt) (4.76) + 1 pk Zt 0 ck( ) sin( p k(t )) d, where ak = h, vki, bk = 1 pk h, vki. Summarizing, we have 134 CHAPTER 4. HYPERBOLIC EQUATIONS Proposition 4.2. The (formal) solution of the initial-boundary value problem (4.67)-(4.70) is given by u(x, t) = 1X k=1 vk(x)wk(t), where vk is a complete orthonormal system of eigenfunctions of (4.71), (4.72) and the functions wk are defined by (4.76). The resonance phenomenon Set in (4.67)-(4.70) = 0, = 0 and assume that the external force f is periodic and is given by f(x, t) = Asin(!t)vn(x), where A, ! are real constants and vn is one of the eigenfunctions of (4.71), (4.72). It follows ck(t) = Z f(x, t)vk(x) dx = Ank sin(!t). Then the solution of the initial value problem (4.67)-(4.70) is u(x, t) = Avn(x) pn Zt 0 sin(! ) sin( p n(t )) d = Avn(x) 1 !2 n ! pn sin( p kt) sin(!t) , provided ! 6= pn. It follows u(x, t) !
A 2pn vn(x) sin(pnt) pn t cos( p nt) if ! ! pn. The right hand side is also the solution of the initial-boundary value problem if ! = pn. Consequently |u| can be arbitrarily large at some points x and at some times t if ! = pn. The frequencies pn are called critical frequencies at which resonance occurs. A uniqueness result The solution of of the initial-boundary value problem (4.67)-(4.70) is unique in the class C2( R). 4.5. INITIAL-BOUNDARY VALUE PROBLEMS 135 Proof. Let u1, u2 are two solutions, then u = u2 u1 satisfies utt = Lu in R u(x, 0) = 0 x 2 ut(x, 0) = 0 x 2 u(x, t) = 0 for x 2 @ and t 2 Rn. As an example we consider Example 3 from above and set E(t) = Z ( Xn i,j=1 aij(x)uxiuxj + utut) dx. Then E0(t) = 2 Z ( Xn i,j=1 aij(x)uxiuxj t + ututt) dx =2 Z @ ( Xn i,j=1 aij(x)uxiutnj) dS +2 Z ut(Lu + utt) dx = 0. It follows E(t) = const. From ut(x, 0) = 0 and u(x, 0) = 0 we get E(0) = 0. Consequently E(t) = 0 for all t, which implies, since L is elliptic, that u(x, t) = const. on R. Finally, the homogeneous initial and boundary value conditions lead to u(x, t) = 0 on R. 2 136 CHAPTER 4. HYPERBOLIC EQUATIONS 4.6 Exercises 1. Show that u(x, t) 2 C2(R2) is a solution of the one-dimensional wave equation
utt = c2uxx if and only if u(A) + u(C) = u(B) + u(D) holds for all parallelograms ABCD in the (x, t)-plane, which are bounded by characteristic lines, see Figure 4.8. x t B C A D Figure 4.8: Figure to the exercise 2. Method of separation of variables: Let vk(x) be an eigenfunction to the eigenvalue of the eigenvalue problem v00(x) = v(x) in (0, l), v(0) = v(l) = 0 and let wk(t) be a solution of differential equation w00(t) = kw(t). Prove that vk(x)wk(t) is a solution of the partial differential equation (wave equation) utt = uxx. 3. Solve for given f(x) and 2 R the initial value problem ut + ux + uxxx = 0 in R R+ u(x, 0) = f(x) . 4. Let S := {(x, t); t = x} be spacelike, i. e., || < 1/c2) in (x, t)-space, x = (x1, x2, x3). Show that the Cauchy initial value problem 2u = 0 4.6. EXERCISES 137 with data for u on S can be transformed using the Lorentz-transform x1 = x1 c2t p 1 2c2 , x02 = x2, x03 = x3, t0 = pt x1 1 2c2 into the initial value problem, in new coordinates, 2u = 0 u(x0, 0) = f(x0) ut0(x0, 0) = g(x0) . Here we denote the transformed function by u again. 5. (i) Show that u(x, t) := 1X n=1 n cos n l t sin n l x is a C2-solution of the wave equation utt = uxx if |n| c/n4, where the constant c is independent of n. (ii) Set n := Zl 0 f(x) sin
n l x dx. Prove |n| c/n4, provided f 2 C4 0 (0, l). 6. Let be the rectangle (0, a) (0, b). Find all eigenvalues and associated eigenfunctions of 4u = u in , u = 0 on @. Hint: Separation of variables. 7. Find a solution of Schrodingers equation i~t = ~2 2m4x + V (x) in Rn R, which satisfies the side condition Zn R |(x, t)|2dx = 1 , provided E 2 R is an (eigenvalue) of the elliptic equation 4u + 2m ~2 (E V (x))u = 0 in Rn 138 CHAPTER 4. HYPERBOLIC EQUATIONS under the side condition Rn R |u|2dx = 1, u : Rn 7! C. Here is : Rn R 7! C, ~ Plancks constant (a small positive constant), V (x) a given potential. Remark. In the case of a hydrogen atom the potential is V (x) = e/|x|, e is here a positive constant. Then eigenvalues are given by En = me4/(2~2n2), n 2 N, see [22], pp. 202. 8. Find nonzero solutions by using separation of variables of utt = 4xu in (0,1), u(x, t) = 0 on @, where is the circular cylinder = {(x1, x2, x3) 2 Rn : x21 + x22 < R2, 0 < x3 < h}. 9. Solve the initial value problem 3utt 4uxx = 0 u(x, 0) = sin x ut(x, 0) = 1 . 10. Solve the initial value problem utt c2uxx = x2, t > 0, x 2 R u(x, 0) = x ut(x, 0) = 0 . Hint: Find a solution of the differential equation independent on t, and transform the above problem into an initial value problem with homogeneous differential equation by using this solution. 11. Find with the method of separation of variables nonzero solutions u(x, t), 0 x 1, 0 t < 1, of utt uxx + u = 0 , such that u(0, t) = 0, and u(1, t) = 0 for all t 2 [0,1). 12. Find solutions of the equation utt c2uxx = 2u, = const. which can be written as u(x, t) = f(x2 c2t2) = f(s), s := x2 c2t2 4.6. EXERCISES 139 with f(0) = K, K a constant.
Hint: Transform equation for f(s) by using the substitution s := z2/A with an appropriate constant A into Bessels differential equation z2f00(z) + zf0(z) + (z2 n2)f = 0, z > 0 with n = 0. Remark. The above differential equation for u is the transformed telegraph equation (see Section 4.4). 13. Find the formula for the solution of the following Cauchy initial value problem uxy = f(x, y), where S: y = ax + b, a > 0, and the initial conditions on S are given by u = x + y + , ux = , uy = , a, b, , , constants. 14. Find all eigenvalues of q00() = q() q() = q( + 2) . 140 CHAPTER 4. HYPERBOLIC EQUATIONS Chapter 5 Fourier transform Fouriers transform is an integral transform which can simplify investigations for linear differential or integral equations since it transforms a differential operator into an algebraic equation. 5.1 Definition, properties Definition. Let f 2 Cs 0(Rn), s = 0, 1, . . . . The function f defined by b f() = (2)n/2 Z Rn eixf(x) dx, (5.1) where 2 Rn, is called Fourier transform of f, and the function eg given by eg(x) = (2)n/2 Z Rn eixg() d (5.2) is called inverse Fourier transform, provided the integrals on the right hand side exist. From (5.1) it follows by integration by parts that differentiation of a function is changed to multiplication of its Fourier transforms, or an analytical operation is converted into an algebraic operation. More precisely, we have Proposition 5.1. dDf() = i|| b f(), where || s. 141 142 CHAPTER 5. FOURIER TRANSFORM The following proposition shows that the Fourier transform of f decreases rapidly for || ! 1, provided f 2 Cs 0(Rn). In particular, the right hand side of (5.2) exists for g := f if f 2 Cn+1 0 (Rn). Proposition 5.2. Assume g 2 Cs 0(Rn), then there is a constant M = M(n, s, g) such that |bg()| M (1 + ||)s . Proof. Let = (1, . . . , n) be fixed and let j be an index such that |j | =
maxk |k|. Then || = Xn k=1 2 k !1/2 pn|j | which implies (1 + ||)s = Xs k=0 s k ||k 2s Xs k=0 nk/2|j |k 2sns/2 X ||s ||. This inequality and Proposition 5.1 imply (1 + ||)s|bg()| 2sns/2 X ||s |(i)bg()| 2sns/2 X ||s Z Rn |Dg(x)| dx =: M. 2 The notation inverse Fourier transform for (5.2) is justified by Theorem 5.1. eb f = f and be f = f. Proof. See [27], for example. We will prove the first assertion (2)n/2 Z Rn eix b f() d = f(x) (5.3) 5.1. DEFINITION, PROPERTIES 143 here. The proof of the other relation is left as an exercise. All integrals appearing in the following exist, see Proposition 5.2 and the special choice of g. (i) Formula Z Rn g() b f()eix d = Z Rn
bg(y)f(x + y) dy (5.4) follows by direct calculation: Z Rn g() (2)n/2 Z Rn eixyf(y) dy eix d = (2)n/2 Z Rn Z Rn g()ei(yx) d f(y) dy = Z Rn bg(y x)f(y) dy = Z Rn bg(y)f(x + y) dy. (ii) Formula (2)n/2 Z Rn eiyg(") d = "nbg(y/") (5.5) for each " > 0 follows after substitution z = " in the left hand side of (5.1). (iii) Equation Z Rn g(") b f()eix d = Z Rn bg(y)f(x + "y) dy (5.6) follows from (5.4) and (5.5). Set G() := g("), then (5.4) implies Z Rn G() b f()eix d = Z Rn bG (y)f(x + y) dy. Since, see (5.5), bG (y) = (2)n/2 Z Rn eiyg(") d = "nbg(y/"),
144 CHAPTER 5. FOURIER TRANSFORM we arrive at Z Rn g(") b f() = Z Rn "nbg(y/")f(x + y) dy = Z Rn bg(z)f(x + "z) dz. Letting " ! 0, we get g(0) Z Rn b f()eix d = f(x) Z Rn bg(y) dy. (5.7) Set g(x) := e|x|2/2, then Z Rn bg(y) dy = (2)n/2. (5.8) Since g(0) = 1, the first assertion of Theorem 5.1 follows from (5.7) and (5.8). It remains to show (5.8). (iv) Proof of (5.8). We will show bg(y) : = (2)n/2 Z Rn e|x|2/2eixx dx = e|y|2/2. The proof of Z Rn e|y|2/2 dy = (2)n/2 is left as an exercise. Since x p2 +i y p2 x p2 +i y p2 = |x|2
2 + ix y |y|2 2 it follows Z Rn e|x|2/2eixy dx = Z Rn e2 e|y|2/2 dx = e|y|2/2 Z Rn e2 dx = 2n/2e|y|2/2 Z Rn e2 d 5.1. DEFINITION, PROPERTIES 145 where := x p2 +i y p2 . Consider first the one-dimensional case. According to Cauchys theorem we have I C e2 d = 0, where the integration is along the curve C which is the union of four curves as indicated in Figure 5.1. Re Im C C C iC y 2 h h 4 3 1 2 R R Figure 5.1: Proof of (5.8) Consequently Z
C3 e2 d = 1 p2 ZR R ex2/2 dx Z C2 e2 d Z C4 e2 d. It follows lim R!1 Z C3 e2 d = p since lim R!1 Z Ck e2 d = 0, k = 2, 4. The case n > 1 can be reduced to the one-dimensional case as follows. Set = x p2 +i y p2 = (1, . . . , n), where l = xl p2 +i yl p2 . 146 CHAPTER 5. FOURIER TRANSFORM From d = d1 . . . dl and e2 = e Pn l=1 2 l= Yn l=1 e2 l it follows Z
Rn e2 d = Yn l=1 Z l e2 l dl, where for fixed y l = {z 2 C : z = xl p2 +i yl p2 ,1 < xl < +1}. 2 There is a useful class of functions for which the integrals in the definition of b f and e f exist. For u 2 C1(Rn) we set qj,k(u) := max : ||k sup Rn (1 + |x|2)j/2|Du(x)| . Definition. The Schwartz class of rapidly degreasing functions is S(Rn) = {u 2 C1(Rn) : qj,k(u) < 1 for any j, k 2 N [ {0}} . This space is a Frechet space. Proposition 5.3. Assume u 2 S(Rn), then bu and eu 2 S(Rn). Proof. See [24], Chapter 1.2, for example, or an exercise. 5.1.1 Pseudodifferential operators The properties of Fourier transform lead to a general theory for linear partial differential or integral equations. In this subsection we define Dk = 1 i @ @xk , k = 1, . . . , n, and for each multi-index as in Subsection 3.5.1 D = D1 1 . . .Dn n. 5.1. DEFINITION, PROPERTIES 147 Thus D = 1 i|| @|| @x1 1 . . . @xn n .
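By Proposition 5.1, applying $D^\alpha$ on the $x$-side corresponds to multiplying the Fourier transform by powers of $\xi$; this is the observation behind the operators defined next. A tiny numerical illustration (not from the text; the periodic grid, the Gaussian test function and numpy's FFT normalization, which differs from the constant in (5.1), are assumptions made only for this sketch):

```python
import numpy as np

# Periodic grid on [-L, L); for a rapidly decreasing function the periodization error is negligible.
L, n = 10.0, 256
x = -L + 2 * L * np.arange(n) / n
xi = 2 * np.pi * np.fft.fftfreq(n, d=2 * L / n)       # discrete frequencies "xi"

u = np.exp(-x**2 / 2)                                 # a Schwartz function
u_hat = np.fft.fft(u)
du_spectral = np.real(np.fft.ifft(1j * xi * u_hat))   # apply the symbol i*xi, then invert
du_exact = -x * np.exp(-x**2 / 2)

print(np.max(np.abs(du_spectral - du_exact)))         # close to machine precision
```

Differentiation has become multiplication by $i\xi$ on the transform side; replacing $i\xi$ by a more general symbol $p(x,\xi)$ is exactly the step taken in the definition below.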
Let p(x,D) := X ||m a(x)D, be a linear partial differential of order m, where a are given sufficiently regular functions. According to Theorem 5.1 and Proposition 5.3, we have, at least for u 2 S(Rn), u(x) = (2)n/2 Z Rn eixbu() d, which implies Du(x) = (2)n/2 Z Rn eixbu() d. Consequently p(x,D)u(x) = (2)n/2 Z Rn eixp(x, )bu() d, (5.9) where p(x, ) = X ||m a(x). The right hand side of (5.9) makes sense also for more general functions p(x, ), not only for polynomials. Definition. The function p(x, ) is called symbol and (Pu)(x) := (2)n/2 Z Rn eixp(x, )bu() d is said to be pseudodifferential operator. An important class of symbols for which the right hand side in this definition of a pseudodifferential operator is defined is Sm which is the subset of p(x, ) 2 C1( Rn) such that |D xD p(x, )| CK,,(p) (1 + ||)m|| for each compact K . 148 CHAPTER 5. FOURIER TRANSFORM Above we have seen that linear differential operators define a class of pseudodifferential operators. Even integral operators can be written (formally) as pseudodifferential operators. Let (Pu)(x) = Z Rn K(x, y)u(y) dy be an integral operator. Then (Pu)(x) = (2)n/2 Z Rn K(x, y)
Z Rn eixbu() d = (2)n/2 Z Rn eix Z Rn ei(yx)K(x, y) dy bu(). Then the symbol associated to the above integral operator is p(x, ) = Z Rn ei(yx)K(x, y) dy. 5.2. EXERCISES 149 5.2 Exercises 1. Show Z Rn e|y|2/2 dy = (2)n/2. 2. Show that u 2 S(Rn) implies u, eu 2 S(Rn). 3. Give examples for functions p(x, ) which satisfy p(x, ) 2 Sm. 4. Find a formal solution of Cauchys initial value problem for the wave equation by using Fouriers transform. 150 CHAPTER 5. FOURIER TRANSFORM Chapter 6 Parabolic equations Here we consider linear parabolic equations of second order. An example is the heat equation ut = a24u, where u = u(x, t), x 2 R3, t 0, and a2 is a positive constant called conductivity coefficient. The heat equation has its origin in physics where u(x, t) is the temperature at x at time t, see [20], p. 394, for example. Remark 1. After scaling of axis we can assume a = 1. Remark 2. By setting t := t, the heat equation changes to an equation which is called backward equation. This is the reason for the fact that the heat equation describes irreversible processes in contrast to the wave equation 2u = 0 which is invariant with respect the mapping t 7! t. Mathematically, it means that it is not possible, in general, to find the distribution of temperature at an earlier time t < t0 if the distribution is given at t0. Consider the initial value problem for u = u(x, t), u 2 C1(Rn R+), ut = 4u in x 2 Rn, t 0, (6.1) u(x, 0) = (x), (6.2) where 2 C(Rn) is given and 4 4x. 151 152 CHAPTER 6. PARABOLIC EQUATIONS 6.1 Poissons formula Assume u is a solution of (6.1), then, since Fourier transform is a linear mapping, u\t 4u = 0. From properties of the Fourier transform, see Proposition 5.1, we have d4u = Xn
$_{k=1}\widehat{\partial^2u/\partial x_k^2}=\sum_{k=1}^n i^2\xi_k^2\,\widehat{u}(\xi)=-|\xi|^2\widehat{u}(\xi),$

provided the transforms exist. Thus we arrive at the ordinary differential equation for the Fourier transform of $u$,

$\frac{d\widehat{u}}{dt}+|\xi|^2\widehat{u}=0,$

where $\xi$ is considered as a parameter. The solution is

$\widehat{u}(\xi,t)=\widehat{\varphi}(\xi)\,e^{-|\xi|^2t}$

since $\widehat{u}(\xi,0)=\widehat{\varphi}(\xi)$. From Theorem 5.1 it follows

$u(x,t)=(2\pi)^{-n/2}\int_{\mathbb{R}^n}\widehat{\varphi}(\xi)e^{-|\xi|^2t}e^{i\xi\cdot x}\,d\xi=(2\pi)^{-n}\int_{\mathbb{R}^n}\varphi(y)\left(\int_{\mathbb{R}^n}e^{i\xi\cdot(x-y)-|\xi|^2t}\,d\xi\right)dy.$

Set

$K(x,y,t)=(2\pi)^{-n}\int_{\mathbb{R}^n}e^{i\xi\cdot(x-y)-|\xi|^2t}\,d\xi.$

By the same calculations as in the proof of Theorem 5.1, step (vi), we find

$K(x,y,t)=(4\pi t)^{-n/2}e^{-|x-y|^2/(4t)}.$   (6.3)

Thus we have

$u(x,t)=\Big(\frac{1}{2\sqrt{\pi t}}\Big)^n\int_{\mathbb{R}^n}\varphi(z)e^{-|x-z|^2/(4t)}\,dz.$   (6.4)

Definition. Formula (6.4) is called Poisson's formula, and the function $K$ defined by (6.3) is called heat kernel or fundamental solution of the heat equation.

Figure 6.1: Kernel $K(x,y,t)$, $\rho=|x-y|$, $t_1<t_2$
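As a quick illustration of (6.4) in one space dimension (a sketch, not part of the text): for a Gaussian initial datum the integral can be evaluated in closed form, so a direct quadrature of Poisson's formula can be compared with the exact value. The datum, the truncation of the integral to $[-L,L]$ and the trapezoidal rule are choices made only here.

```python
import numpy as np

sigma = 0.5
phi = lambda x: np.exp(-x**2 / (2 * sigma**2))     # Gaussian initial datum (exact solution known)

def u_poisson(x, t, L=20.0, n=20001):
    """u(x,t) = (4 pi t)^(-1/2) * integral of phi(z) exp(-(x-z)^2/(4t)) dz, by the trapezoidal rule."""
    z = np.linspace(-L, L, n)
    K = (4 * np.pi * t) ** (-0.5) * np.exp(-(x - z) ** 2 / (4 * t))
    return np.trapz(K * phi(z), z)

def u_exact(x, t):
    # convolving two Gaussians: the variances add, sigma^2 -> sigma^2 + 2t
    return sigma / np.sqrt(sigma**2 + 2 * t) * np.exp(-x**2 / (2 * (sigma**2 + 2 * t)))

for (x, t) in [(0.0, 0.1), (1.0, 0.5), (-2.0, 2.0)]:
    print(u_poisson(x, t), u_exact(x, t))          # the columns should agree to high accuracy
```

The computed values also stay between the infimum and the supremum of the initial datum, in line with exercise 1 of this chapter.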
Proposition 6.1. The kernel $K$ has the following properties:

(i) $K(x,y,t)\in C^\infty(\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}_+)$,
(ii) $(\partial/\partial t-\triangle)K(x,y,t)=0$, $t>0$,
(iii) $K(x,y,t)>0$, $t>0$,
(iv) $\int_{\mathbb{R}^n}K(x,y,t)\,dy=1$, $x\in\mathbb{R}^n$, $t>0$,
(v) for each fixed $\delta>0$:
$\lim_{t\to0,\ t>0}\int_{\mathbb{R}^n\setminus B_\delta(x)}K(x,y,t)\,dy=0$
uniformly for $x\in\mathbb{R}^n$.

Proof. (i) and (iii) are obvious, and (ii) follows from the definition of $K$. Equations (iv) and (v) hold since

$\int_{\mathbb{R}^n\setminus B_\delta(x)}K(x,y,t)\,dy=\int_{\mathbb{R}^n\setminus B_\delta(x)}(4\pi t)^{-n/2}e^{-|x-y|^2/(4t)}\,dy=\pi^{-n/2}\int_{\mathbb{R}^n\setminus B_{\delta/\sqrt{4t}}(0)}e^{-|\eta|^2}\,d\eta$

by using the substitution $y=x+(4t)^{1/2}\eta$. For fixed $\delta>0$ it follows (v), and for $\delta:=0$ we obtain (iv). □

Theorem 6.1. Assume $\varphi\in C(\mathbb{R}^n)$ and $\sup_{\mathbb{R}^n}|\varphi(x)|<\infty$. Then $u(x,t)$ given by Poisson's formula (6.4) is in $C^\infty(\mathbb{R}^n\times\mathbb{R}_+)$, continuous on $\mathbb{R}^n\times[0,\infty)$ and a solution of the initial value problem (6.1), (6.2).

Proof. It remains to show

$\lim_{x\to\xi,\ t\to0}u(x,t)=\varphi(\xi).$

Since $\varphi$ is continuous, there exists for given $\varepsilon>0$ a $\delta=\delta(\varepsilon)$ such that $|\varphi(y)-\varphi(\xi)|<\varepsilon$ if $|y-\xi|<2\delta$. Set $M:=\sup_{\mathbb{R}^n}|\varphi(y)|$. Then, see Proposition 6.1,

Figure 6.2: Figure to the proof of Theorem 6.1

$u(x,t)-\varphi(\xi)=\int_{\mathbb{R}^n}K(x,y,t)\big(\varphi(y)-\varphi(\xi)\big)\,dy.$

It follows, if $|x-\xi|<\delta$ and $t>0$, that

$|u(x,t)-\varphi(\xi)|\le\int_{B_\delta(x)}K(x,y,t)\,|\varphi(y)-\varphi(\xi)|\,dy+\int_$
Rn\B(x) K(x, y, t) |(y) ()| dy Z B2(x) K(x, y, t) |(y) ()| dy +2M Z Rn\B(x) K(x, y, t) dy " Z Rn K(x, y, t) dy + 2M Z Rn\B(x) K(x, y, t) dy < 2" if 0 < t t0, t0 sufficiently small. 2 Remarks. 1. Uniqueness follows under the additional growth assumption |u(x, t)| Mea|x|2 in DT , where M and a are positive constants, see Proposition 6.2 below. In the one-dimensional case, one has uniqueness in the class u(x, t) 0 in DT , see [10], pp. 222. 2. u(x, t) defined by Poissons formula depends on all values (y), y 2 Rn. That means, a perturbation of , even far from a fixed x, has influence to the value u(x, t). In physical terms, this means that heat travels with infinite speed, in contrast to the experience. 6.2 Inhomogeneous heat equation Here we consider the initial value problem for u = u(x, t), u 2 C1(RnR+), ut 4u = f(x, t) in x 2 Rn, t 0, u(x, 0) = (x), where and f are given. From u\t 4u =\f(x, t) 156 CHAPTER 6. PARABOLIC EQUATIONS we obtain an initial value problem for an ordinary differential equation: dbu dt + ||2bu = b f(, t) bu(, 0) = b(). The solution is given by bu(, t) = e||2t b() + Zt 0 e||2(t) b f(, ) d. Applying the inverse Fourier transform and a calculation as in the proof of Theorem 5.1, step (vi), we get u(x, t) = (2)n/2 Z Rn eix e||2t b() + Zt
0 e||2(t) b f(, ) d d. From the above calculation for the homogeneous problem and calculation as in the proof of Theorem 5.1, step (vi), we obtain the formula u(x, t) = 1 (2pt)n Z Rn (y)e|yx|2/(4t) dy + Zt 0 Z Rn f(y, ) 1 2 p (t ) n e|yx|2/(4(t)) dy d. This function u(x, t) is a solution of the above inhomogeneous initial value problem provided 2 C(Rn), sup Rn |(x)| < 1 and if f 2 C(Rn [0,1)), M( ) := sup Rn |f(y, )| < 1, 0 < 1. 6.3 Maximum principle Let Rn be a bounded domain. Set DT = (0, T), T > 0, ST = {(x, t) : (x, t) 2 {0} or (x, t) 2 @ [0, T]}, 6.3. MAXIMUM PRINCIPLE 157 T D ST x t T dW dW T e Figure 6.3: Notations to the maximum principle see Figure 6.3 Theorem 6.2. Assume u 2 C(DT ), that ut, uxixk exist and are continuous in DT , and ut 4u 0 in DT . Then max DT u(x, t) = max ST u. Proof. Assume initially ut 4u < 0 in DT . Let " > 0 be small and 0 < " < T. Since u 2 C(DT"), there is an (x0, t0) 2 DT" such that u(x0, t0) = max
DT" u(x, t). Case (i). Let (x0, t0) 2 DT". Hence, since DT" is open, ut(x0, t0) = 0, uxl(x0, t0) = 0, l = 1, . . . , n and Xn l,k=1 uxlxk (x0, t0)lk 0 for all 2 Rn. The previous inequality implies that uxkxk (x0, t0) 0 for each k. Thus we arrived at a contradiction to ut 4u < 0 in DT . 158 CHAPTER 6. PARABOLIC EQUATIONS Case (ii). Assume (x0, t0) 2 {T "}. Then it follows as above 4u 0 in (x0, t0), and from u(x0, t0) u(x0, t), t t0, one concludes that ut(x0, t0) 0. We arrived at a contradiction to ut 4u < 0 in DT again. Summarizing, we have shown that max DT" u(x, t) = max T" u(x, t). Thus there is an (x", t") 2 ST" such that u(x", t") = max DT" u(x, t). Since u is continuous on DT , we have lim "!0 max DT" u(x, t) = max DT u(x, t). It follows that there is (x, t) 2 ST such that u(x, t) = max DT u(x, t) since ST" ST and ST is compact. Thus, theorem is shown under the assumption ut 4u < 0 in DT . Now assume ut 4u 0 in DT . Set v(x, t) := u(x, t) kt, where k is a positive constant. Then vt 4v = ut 4u k < 0. From above we have max DT u(x, t) = max DT (v(x, t) + kt) max DT v(x, t) + kT = max ST v(x, t) + kT max ST u(x, t) + kT, Letting k ! 0, we obtain
max DT u(x, t) max ST u(x, t). 6.3. MAXIMUM PRINCIPLE 159 Since ST DT , the theorem is shown. 2 If we replace in the above theorem the bounded domain by Rn, then the result remains true provided we assume an additional growth assumption for u. More precisely, we have the following result which is a corollary of the previous theorem. Set for a fixed T, 0 < T < 1, DT = {(x, t) : x 2 Rn, 0 < t < T}. Proposition 6.2. Assume u 2 C(DT ), that ut, uxixk exist and are continuous in DT , ut 4u 0 in DT , and additionally that u satisfies the growth condition u(x, t) Mea|x|2 , where M and a are positive constants. Then max DT u(x, t) = max ST u. It follows immediately the Corollary. The initial value problem ut 4u = 0 in DT , u(x, 0) = f(x), x 2 Rn, has a unique solution in the class defined by u 2 C(DT ), ut, uxixk exist and are continuous in DT and |u(x, t)| Mea|x|2 . Proof of Proposition 6.2. See [10], pp. 217. We can assume that 4aT < 1, since the finite interval can be divided into finite many intervals of equal length with 4a < 1. Then we conclude successively for k that u(x, t) sup y2Rn u(y, k ) sup y2Rn u(y, 0) for k t (k + 1) , k = 0, . . . ,N 1, where N = T/ . There is an > 0 such that 4a(T + ) < 1. Consider the comparison function v(x, t) : = u(x, t) (4(T + t))n/2 e|xy|2/(4(T+t)) = u(x, t) K(ix, iy, T + t) 160 CHAPTER 6. PARABOLIC EQUATIONS for fixed y 2 Rn and for a constant > 0. Since the heat kernel K(ix, iy, t) satisfies Kt = 4Kx, we obtain @ @t v 4v = ut 4u 0. Set for a constant > 0 DT, = {(x, t) : |x y| < , 0 < t < T}. Then we obtain from Theorem 6.2 that v(y, t) max ST, v, where ST, ST of Theorem 6.2 with = B(y), see Figure 6.3. On the bottom of ST, we have, since K > 0,
v(x, 0) u(x, 0) sup z2Rn f(z). On the cylinder part |x y| = , 0 t T, of ST, it is v(x, t) Mea|x|2 (4(T + t))n/2 e2/(4(T+t)) Mea(|y|+)2 (4(T + ))n/2 e2/(4(T+)) sup z2Rn f(z) for all > 0(), 0 sufficiently large. We recall that 4a(T + ) < 1. Summarizing, we have max ST, v(x, t) sup z2Rn f(z) if > 0(). Thus v(y, t) max ST, v(x, t) sup z2Rn f(z) if > 0(). Since v(y, t) = u(y, t) (4(T + t))n/2 it follows u(y, t) (4(T + t))n/2 sup z2Rn f(z). 6.3. MAXIMUM PRINCIPLE 161 Letting ! 0, we obtain the assertion of the proposition. 2 The above maximum principle of Theorem 6.2 holds for a large class of parabolic differential operators, even for degenerate equations. Set Lu = Xn i,j=1 aij(x, t)uxixj , where aij 2 C(DT ) are real, aij = aji, and the matrix (aij) is nonnegative, that is, Xn i,j=1 aij(x, t)ij 0 for all 2 Rn, and (x, t) 2 DT . Theorem 6.3. Assume u 2 C(DT ), that ut, uxixk exist and are continuous in DT , and ut Lu 0 in DT . Then max DT u(x, t) = max ST u. Proof. (i) One proof is a consequence of the following lemma: Let A, B real, symmetric and nonnegative matrices. Nonnegative means that all eigenvalues are nonnegative. Then trace (AB)
Pn i,j=1 aijbij 0, see an exercise. (ii) Another proof exploits transform to principle axis directly: Let U = (z1, . . . , zn), where zl is an orthonormal system of eigenvectors to the eigenvalues l of the matrix A = (ai,j(x0, t0)). Set = U, x = UT (xx0)y and v(y) = u(x0 + Uy, t0), then 0 Xn i,j=1 aij(x0, t0)ij = Xn i=1 i2 i 0 Xn i,j=1 uxixj ij = Xn i=1 vyiyi2 i. It follows i 0 and vyiyi 0 for all i. Consequently Xn i,j=1 aij(x0, t0)uxixj (x0, t0) = Xn i=1 ivyiyi 0. 2 162 CHAPTER 6. PARABOLIC EQUATIONS 6.4 Initial-boundary value problem Consider the initial-boundary value problem for c = c(x, t) ct = D4c in (0,1) (6.5) c(x, 0) = c0(x) x 2 (6.6) @c @n = 0 on @ (0,1). (6.7) Here is Rn, n the exterior unit normal at the smooth parts of @, D a positive constant and c0(x) a given function. Remark. In application to diffusion problems, c(x, t) is the concentration of a substance in a solution, c0(x) its initial concentration and D the coefficient of diffusion. First Ficks rule says that w = D@c/@n, where w is the flow of the substance through the boundary @. Thus according to the Neumann boundary condition (6.7), we assume that there is no flow through the boundary. 6.4.1 Fouriers method Separation of variables ansatz c(x, t) = v(x)w(t) leads to the eigenvalue problem, see the arguments of Section 4.5, 4v = v in (6.8) @v @n = 0 on @, (6.9) and to the ordinary differential equation w0(t) + Dw(t) = 0. (6.10) Assume is bounded and @ sufficiently regular, then the eigenvalues
of (6.8), (6.9) are countable and 0 = 0 < 1 2 . . . ! 1. Let vj(x) be a complete system of orthonormal (in L2()) eigenfunctions. Solutions of (6.10) are wj(t) = CjeDj t, where Cj are arbitrary constants. 6.4. INITIAL-BOUNDARY VALUE PROBLEM 163 According to the superposition principle, cN(x, t) := XN j=0 CjeDj tvj(x) is a solution of the differential equation (6.8) and c(x, t) := 1X j=0 CjeDj tvj(x), with Cj = Z c0(x)vj(x) dx, is a formal solution of the initial-boundary value problem (6.5)-(6.7). Diffusion in a tube Consider a solution in a tube, see Figure 6.4. Assume the initial concentraW l Q x x x 3 1 2 Figure 6.4: Diffusion in a tube tion c0(x1, x2, x3) of the substrate in a solution is constant if x3 = const. It follows from a uniqueness result below that the solution of the initialboundary value problem c(x1, x2, x3, t) is independent of x1 and x2. 164 CHAPTER 6. PARABOLIC EQUATIONS Set z = x3, then the above initial-boundary value problem reduces to ct = Dczz c(z, 0) = c0(z) cz = 0, z = 0, z = l. The (formal) solution is c(z, t) = 1X n=0 CneD( l n)2 t cos l nz , where C0 =
1 l Zl 0 c0(z) dz Cn = 2 l Zl 0 c0(z) cos l nz dz, n 1. 6.4.2 Uniqueness Sufficiently regular solutions of the initial-boundary value problem (6.5)(6.7) are uniquely determined since from ct = D4c in (0,1) c(x, 0) = 0 @c @n = 0 on @ (0,1). it follows that for each > 0 0= Z 0 Z (ctc D(4c)c) dxdt = Z Z 0 1 2 @ @t (c2) dtdx + D Z Z 0 |rxc|2 dxdt = 1 2 Z c2(x, ) dx + D Z Z 0 |rxc|2 dxdt. 6.5 Black-Scholes equation Solutions of the Black-Scholes equation define the value of a derivative, for example of a call or put option, which is based on an asset. An asset 6.5. BLACK-SCHOLES EQUATION 165 can be a stock or a derivative again, for example. In principle, there are infinitely many such products, for example n-th derivatives. The Black-
Scholes equation for the value V (S, t) of a derivative is Vt + 1 2 2S2VSS + rSVS rV = 0 in , (6.11) where for a fixed T, 0 < T < 1, = {(S, t) 2 R2 : 0 < S < 1, 0 < t < T}, and , r are positive constants. More precisely, is the volatility of the underlying asset S, r is the guaranteed interest rate of a risk-free investment. If S(t) is the value of an asset at time t, then V (S(t), t) is the value of the derivative at time t, where V (S, t) is the solution of an appropriate initial-boundary value problem for the Black-Scholes equation, see below. The Black-Scholes equation follows from Itos Lemma under some assumptions on the random function associated to S(t), see [26], for example. Call option Here is V (S, t) := C(S, t), where C(S, t) is the value of the (European) call option. In this case we have following side conditions to (6.11): C(S, T) = max{S E, 0} (6.12) C(0, t) = 0 (6.13) C(S, t) = S + o(S) as S ! 1, uniformly in t, (6.14) where E and T are positive constants, E is the exercise price and T the expiry. Side condition (6.12) means that the value of the option has no value at time T if S(T) E, condition (6.13) says that it makes no sense to buy assets if the value of the asset is zero, condition (6.14) means that we buy assets if its value becomes large, see Figure 6.5, where the side conditions are indicated. Theorem 6.4 (Black-Scholes formula for European call options). The solution C(S, t), 0 S < 1, 0 t T, of the initial-boundary value problem (6.11)-(6.14) is explicitly known and is given by C(S, t) = SN(d1) Eer(Tt)N(d2), 166 CHAPTER 6. PARABOLIC EQUATIONS S t T C=max{SE,0} C=0 C~S Figure 6.5: Side conditions for a call option where N(x) = 1 p2 Zx 1 ey2/2 dy, d1 = ln(S/E) + (r + 2/2)(T t) pT t , d2 = ln(S/E) + (r 2/2)(T t) pT t . Proof. Substitutions
S = Eex, t = T 2/2 , C = Ev(x, ) change equation (6.11) to v = vxx + (k 1)vx kv, (6.15) where k= r 2/2 . Initial condition (6.15) implies v(x, 0) = max{ex 1, 0}. (6.16) For a solution of (6.15) we make the ansatz v = ex+u(x, ), 6.5. BLACK-SCHOLES EQUATION 167 where and are constants which we will determine as follows. Inserting the ansatz into differential equation (6.15), we get u + u = 2u + 2ux + uxx + (k 1)(u + ux) ku. Set = 2 + (k 1) k and choose such that 0 = 2 + (k 1), then u = uxx. Thus v(x, ) = e(k1)x/2(k+1)2/4u(x, ), (6.17) where u(x, ) is a solution of the initial value problem u = uxx, 1 < x < 1, > 0 u(x, 0) = u0(x), with u0(x) = max n e(k+1)x/2 e(k1)x/2, 0 o . A solution of this initial value problem is given by Poissons formula u(x, ) = 1 2p Z +1 1 u0(s)e(xs)2/(4) ds. Changing variable by q = (s x)/(p2 ), we get u(x, ) = 1 p2 Z +1 1 u0(qp2 + x)eq2/2 dq = I1 I2, where I1 = 1 p2 Z 1 x/(p2) e(k+1)(x+qp2)eq2/2 dq I2 = 1
$/\sqrt{2\pi}\,\int_{-x/\sqrt{2\tau}}^{\infty}e^{\frac{1}{2}(k-1)(x+q\sqrt{2\tau})}e^{-q^2/2}\,dq.$

An elementary calculation shows that

$I_1=e^{\frac{1}{2}(k+1)x+\frac{1}{4}(k+1)^2\tau}N(d_1),\qquad I_2=e^{\frac{1}{2}(k-1)x+\frac{1}{4}(k-1)^2\tau}N(d_2),$

where

$d_1=\frac{x}{\sqrt{2\tau}}+\frac{1}{2}(k+1)\sqrt{2\tau},\qquad d_2=\frac{x}{\sqrt{2\tau}}+\frac{1}{2}(k-1)\sqrt{2\tau},$

$N(d_i)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{d_i}e^{-s^2/2}\,ds,\quad i=1,2.$

Combining the formula for $u(x,\tau)$, definition (6.17) of $v(x,\tau)$ and the previous settings $x=\ln(S/E)$, $\tau=\sigma^2(T-t)/2$ and $C=Ev(x,\tau)$, we finally get the formula of Theorem 6.4.

In general, the solution $u$ of the initial value problem for the heat equation is not uniquely defined, see for example [10], pp. 206.

Uniqueness. The uniqueness follows from the growth assumption (6.14). Assume there are two solutions of (6.11), (6.12)-(6.14); then the difference $W(S,t)$ satisfies the differential equation (6.11) and the side conditions $W(S,T)=0$, $W(0,t)=0$, and $W(S,t)=O(S)$ as $S\to\infty$, uniformly in $0\le t\le T$. From a maximum principle consideration, see an exercise, it follows that $|W(S,t)|\le cS$ on $S\ge0$, $0\le t\le T$. The constant $c$ is independent of $S$ and $t$. From the definition of $u$, see (6.17), we get

$u(x,\tau)=\frac{1}{E}\,e^{\frac{1}{2}(k-1)x+\frac{1}{4}(k+1)^2\tau}\,W(S,t),$

where $S=Ee^{x}$, $t=T-2\tau/\sigma^2$. Thus we have the growth property

$|u(x,\tau)|\le Me^{a|x|},\quad x\in\mathbb{R},$   (6.18)

with positive constants $M$ and $a$. Then the solution of $u_\tau=u_{xx}$ in $-\infty<x<\infty$, $0\le\tau\le\sigma^2T/2$, with the initial condition $u(x,0)=0$ is uniquely defined in the class of functions satisfying the growth condition (6.18), see Proposition 6.2 of this chapter. That is, $u(x,\tau)\equiv0$. □
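The closed-form price of Theorem 6.4 is easy to evaluate and to test. The sketch below is not part of the text; the parameter values are invented, and the independent Monte Carlo check (a lognormal $S_T$ under the risk-neutral measure, price $=e^{-r(T-t)}\,\mathbb{E}[\max\{S_T-E,0\}]$) rests on the standard model assumptions behind (6.11), not on anything proved here.

```python
import numpy as np
from scipy.stats import norm

def call_price(S, t, E, T, r, sigma):
    """Black-Scholes value of a European call, Theorem 6.4."""
    tau = T - t
    d1 = (np.log(S / E) + (r + sigma**2 / 2) * tau) / (sigma * np.sqrt(tau))
    d2 = (np.log(S / E) + (r - sigma**2 / 2) * tau) / (sigma * np.sqrt(tau))
    return S * norm.cdf(d1) - E * np.exp(-r * tau) * norm.cdf(d2)

S, t, E, T, r, sigma = 100.0, 0.0, 95.0, 1.0, 0.03, 0.2     # sample data (assumptions)
C = call_price(S, t, E, T, r, sigma)

# Monte Carlo check: S_T = S exp((r - sigma^2/2)(T-t) + sigma sqrt(T-t) Z), Z standard normal
rng = np.random.default_rng(0)
Z = rng.standard_normal(2_000_000)
ST = S * np.exp((r - sigma**2 / 2) * (T - t) + sigma * np.sqrt(T - t) * Z)
C_mc = np.exp(-r * (T - t)) * np.maximum(ST - E, 0.0).mean()

print(C, C_mc)                                 # should agree to a few cents
print(C - S + E * np.exp(-r * (T - t)))        # put value via the put-call parity used below
```

The last line anticipates the put-call parity appearing in the next subsection: once $C$ is known, the put price follows without any further PDE work.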
option. In this case we have the following side conditions to (6.11):

$P(S,T)=\max\{E-S,\,0\}$   (6.19)
$P(0,t)=Ee^{-r(T-t)}$   (6.20)
$P(S,t)=o(S)$ as $S\to\infty$, uniformly in $0\le t\le T$.   (6.21)

Here $E$ is the exercise price and $T$ the expiry. Side condition (6.19) means that the option has no value at time $T$ if $S(T)\ge E$, condition (6.20) says that it makes no sense to sell assets if the value of the asset is zero, and condition (6.21) means that it makes no sense to sell assets if their value becomes large.

Theorem 6.5 (Black-Scholes formula for European put options). The solution $P(S,t)$, $0<S<\infty$, $t<T$, of the initial-boundary value problem (6.11), (6.19)-(6.21) is explicitly known and is given by

$P(S,t)=Ee^{-r(T-t)}N(-d_2)-SN(-d_1),$

where $N(x)$, $d_1$, $d_2$ are the same as in Theorem 6.4.

Proof. The formula for the put option follows by the same calculations as in the case of a call option, or from the put-call parity

$C(S,t)-P(S,t)=S-Ee^{-r(T-t)}$

and from $N(x)+N(-x)=1$. Concerning the put-call parity see an exercise. See also [26], pp. 40, for a heuristic argument which leads to the formula for the put-call parity. □

6.6 Exercises

1. Show that the solution $u(x,t)$ given by Poisson's formula satisfies
$\inf_{z\in\mathbb{R}^n}\varphi(z)\le u(x,t)\le\sup_{z\in\mathbb{R}^n}\varphi(z),$
provided $\varphi(x)$ is continuous and bounded on $\mathbb{R}^n$.

2. Solve for given $f(x)$ and $\mu\in\mathbb{R}$ the initial value problem
$u_t+u_x+\mu u_{xxx}=0$ in $\mathbb{R}\times\mathbb{R}_+$,
$u(x,0)=f(x)$.

3. Show by using Poisson's formula:
(i) Each function $f\in C([a,b])$ can be approximated uniformly by a sequence $f_n\in C^\infty[a,b]$.
(ii) In (i) we can choose polynomials $f_n$ (Weierstrass's approximation theorem).
Hint: Concerning (ii), replace the kernel $K=\exp(-|y-x|^2/(4t))$ by a sequence of Taylor polynomials in the variable $z=|y-x|^2/(4t)$.

4. Let $u(x,t)$ be a positive solution of
$u_t=\mu u_{xx}$, $t>0$,
where $\mu$ is a constant. Show that $\theta:=-2\mu u_x/u$ is a solution of Burgers equation
$\theta_t+\theta\theta_x=\mu\theta_{xx}$, $t>0$.

5. Assume $u_1(s,t),\ldots,u_n(s,t)$ are solutions of $u_t=u_{ss}$. Show that $\prod_{k=1}^n u_k(x_k,t)$ is a solution of the heat equation $u_t-\triangle u=0$ in $\mathbb{R}^n\times(0,\infty)$.

6. Let $A$, $B$ be real, symmetric and nonnegative matrices. Nonnegative means that all eigenvalues are nonnegative. Prove that $\mathrm{trace}\,(AB)\equiv\sum_{i,j=1}^n a_{ij}b_{ij}\ge0$.
Hint: (i) Let U = (z1, . . . , zn), where zl is an orthonormal system of eigenvectors to the eigenvalues l of the matrix B. Then X=U 0 BB@ p1 0 0 0 p2 0 ...................... 0 0 pn 1 CCA UT 6.6. EXERCISES 171 is a square root of B. We recall that UTBU = 0 BB@ 1 0 0 0 2 0 ................ 0 0 n 1 CCA . (ii) trace (QR) =trace (RQ). (iii) Let 1(C), . . . n(C) are the eigenvalues of a real symmetric nnmatrix. Then trace C = Pn l=1 l(C), which follows from the fundamental lemma of algebra: det (I C) = n (c11 + . . . + cnn)n1 + . . . ( 1) . . . ( n) = n (1 + . . . + n)n+1 + . . . 7. Assume is bounded, u is a solution of the heat equation and u satisfies the regularity assumptions of the maximum principle (Theorem 6.2). Show that u achieves its maximum and its minimum on ST . 8. Prove the following comparison principle: Assume is bounded and u, v satisfy the regularity assumptions of the maximum principle. Then ut 4u vt 4v in DT u v on ST imply that u v in DT . 9. Show that the comparison principle implies the maximum principle. 10. Consider the boundary-initial value problem ut 4u = f(x, t) in DT u(x, t) = (x, t) on ST , where f, are given. Prove uniqueness in the class u, ut, uxixk 2 C(DT ). 11. Assume u, v1, v2 2 C2(DT ) \ C(DT ), and u is a solution of the previous boundary-initial value problem and v1, v2 satisfy (v1)t 4v1 f(x, t) (v2)t 4v2 in DT v1 v2 on ST . 172 CHAPTER 6. PARABOLIC EQUATIONS Show that (inclusion theorem) v1(x, t) u(x, t) v2(x, t) on DT . 12. Show by using the comparison principle: let u be a sufficiently regular solution of
ut 4u = 1 in DT u = 0 on ST , then 0 u(x, t) t in DT . 13. Discuss the result of Theorem 6.3 for the case Lu = Xn i,j=1 aij(x, t)uxixj + Xn i bi(x, t)uxi + c(x, t)u(x, t). 14. Show that u(x, t) = 1X n=1 cnen2t sin(nx), where cn = 2 Z 0 f(x) sin(nx) dx, is a solution of the initial-boundary value problem ut = uxx, x 2 (0, ), t > 0, u(x, 0) = f(x), u(0, t) = 0, u(, t) = 0, if f 2 C4(R) is odd with respect to 0 and 2-periodic. 15. (i) Find the solution of the diffusion problem ct = Dczz in 0 z l, 0 t < 1, D = const. > 0, under the boundary conditions cz(z, t) = 0 if z = 0 and z = l and with the given initial concentration c(z, 0) = c0(z) := c0 = const. if 0 z h 0 if h < z l. (ii) Calculate limt!1 c(z, t). 6.6. EXERCISES 173 16. Prove the Black-Scholes Formel for an European put option. Hint: Put-call parity. 17. Prove the put-call parity for European options C(S, t) P(S, t) = S Eer(Tt) by using the following uniqueness result: Assume W is a solution of (6.11) under the side conditions W(S, T) = 0, W(0, t) = 0 and W(S, t) = O(S) as S ! 1, uniformly on 0 t T. Then W(S, t) 0. 18. Prove that a solution V (S, t) of the initial-boundary value problem (6.11) in under the side conditions (i) V (S, T) = 0, S 0, (ii) V (0, t) = 0, 0 t T, (iii) limS!1 V (S, t) = 0 uniformly in 0 t T, is uniquely determined in the class C2() \ C(). 19. Prove that a solution V (S, t) of the initial-boundary value problem (6.11) in , under the side conditions (i) V (S, T) = 0, S 0, (ii) V (0, t) = 0, 0 t T, (iii) V (S, t) = S+o(S) as S ! 1, uniformly on 0 t T, satisfies |V (S, t)| cS for all S 0 and 0 t T. 174 CHAPTER 6. PARABOLIC EQUATIONS Chapter 7
Elliptic equations of second order Here we consider linear elliptic equations of second order, mainly the Laplace equation 4u = 0. Solutions of the Laplace equation are called potential functions or harmonic functions. The Laplace equation is called also potential equation. The general elliptic equation for a scalar function u(x), x 2 Rn, is Lu := Xn i,j=1 aij(x)uxixj + Xn j=1 bj(x)uxj + c(x)u = f(x), where the matrix A = (aij) is real, symmetric and positive definite. If A is a constant matrix, then a transform to principal axis and stretching of axis leads to Xn i,j=1 aijuxixj = 4v, where v(y) := u(Ty), T stands for the above composition of mappings. 7.1 Fundamental solution Here we consider particular solutions of the Laplace equation in Rn of the type u(x) = f(|x y|), 175 176 CHAPTER 7. ELLIPTIC EQUATIONS OF SECOND ORDER where y 2 Rn is fixed and f is a function which we will determine such that u defines a solution if the Laplace equation. Set r = |x y|, then uxi = f0(r) xi yi r uxixi = f00(r) (xi yi)2 r2 + f0(r) 1 r (xi yi)2 r3 4u = f00(r) + n1 r f0(r). Thus a solution of 4u = 0 is given by f(r) = c1 ln r + c2 : n = 2 c1r2n + c2 : n 3 with constants c1, c2. Definition. Set r = |x y|. The function s(r) := (
1 2 ln r : n = 2 r2n (n2)!n :n3 is called singularity function associated to the Laplace equation. Here is !n the area of the n-dimensional unit sphere which is given by !n = 2n/2/(n/2), where (t) := Z 1 0 et1 d, t > 0, is the Gamma function. Definition. A function (x, y) = s(r) + (x, y) is called fundamental solution associated to the Laplace equation if 2 C2() and 4x = 0 for each fixed y 2 . Remark. The fundamental solution satisfies for each fixed y 2 the relation Z (x, y)4x(x) dx = (y) for all 2 C2 0 (), 7.2. REPRESENTATION FORMULA 177 see an exercise. This formula follows from considerations similar to the next section. In the language of distribution, this relation can be written by definition as 4x(x, y) = (x y), where is the Dirac distribution, which is called -function. 7.2 Representation formula In the following we assume that , the function which appears in the definition of the fundamental solution and the potential function u considered are sufficiently regular such that the following calculations make sense, see [6] for generalizations. This is the case if is bounded, @ is in C1, 2 C2() for each fixed y 2 and u 2 C2(). x y nx W dW Figure 7.1: Notations to Greens identity Theorem 7.1. Let u be a potential function and a fundamental solution, then for each fixed y 2 u(y) = Z @ (x, y) @u(x) @nx u(x) @(x, y) @nx dSx.
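Before the proof, here is a small numerical sanity check of this formula (not part of the text): it takes $n=2$, $\Omega$ the unit disk, the fundamental solution equal to the singularity function $s$ (harmonic part set to zero), and $u$ a harmonic polynomial. The quadrature rule and the test function are choices made only for this sketch.

```python
import numpy as np

# Check u(y) = int_{dOmega} ( s(r) du/dn - u ds/dn ) dS on the unit disk, s(r) = -(1/(2 pi)) ln r.
u = lambda x1, x2: x1**2 - x2**2               # harmonic in R^2

m = 4000
th = 2 * np.pi * (np.arange(m) + 0.5) / m
bx, by = np.cos(th), np.sin(th)                # boundary points of the unit circle
nx, ny = bx, by                                # exterior unit normal there
dS = 2 * np.pi / m

def representation(y):
    r = np.sqrt((bx - y[0])**2 + (by - y[1])**2)
    s = -np.log(r) / (2 * np.pi)
    ds_dn = -((bx - y[0]) * nx + (by - y[1]) * ny) / (2 * np.pi * r**2)   # d/dn_x of s(|x-y|)
    du_dn = 2 * bx * nx - 2 * by * ny                                     # grad u . n
    return np.sum(s * du_dn - u(bx, by) * ds_dn) * dS

y = np.array([0.3, -0.2])
print(representation(y), u(y[0], y[1]))        # should agree to several decimals
```

The interior value is recovered from boundary data of $u$ and of its normal derivative only, which is exactly the content of the theorem.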
Proof. Let B(y) be a ball. Set (y) = \ B(y). See Figure 7.2 for notations. From Greens formula, for u, v 2 C2(), Z (y) (v4u u4v) dx = Z @(y) v @u @n u @v @n dS 178 CHAPTER 7. ELLIPTIC EQUATIONS OF SECOND ORDER y n W n r(y) r Figure 7.2: Notations to Theorem 7.1 we obtain, if v is a fundamental solution and u a potential function, Z @(y) v @u @n u @v @n dS = 0. Thus we have to consider Z @(y) v @u @n dS = Z @ v @u @n dS + Z @B(y) v @u @n dS Z @(y) u
@v @n dS = Z @ u @v @n dS + Z @B(y) u @v @n dS. We estimate the integrals over @B(y): (i) Z @B(y) v @u @n dS M Z @B(y) |v| dS M Z @B(y) s() dS + C!nn1 ! , where M = M(y) = sup B0 (y) |@u/@n|, 0, C = C(y) = sup x2B0 (y) |(x, y)|. 7.2. REPRESENTATION FORMULA 179 From the definition of s() we get the estimate as ! 0 Z @B(y) v @u @n dS = O(| ln |) : n = 2 O() : n 3. (7.1) (ii) Consider the case n 3, then Z @B(y) u @v
@n dS = 1 !n Z @B(y) u 1 n1 dS + Z @B(y) u @ @n dS = 1 !nn1 Z @B(y) u dS + O(n1) = 1 !nn1 u(x0) Z @B(y) dS + O(n1), = u(x0) + O(n1). for an x0 2 @B(y). Combining this estimate and (7.1), we obtain the representation formula of the theorem. 2 Corollary. Set 0 and r = |x y| in the representation formula of Theorem 7.1, then u(y) = 1 2 Z @ ln r @u @nx u @(ln r) @nx dSx, n = 2, (7.2) u(y) = 1 (n 2)!n Z @ 1 rn2 @u @nx u
@(r2n) @nx dSx, n 3. (7.3) 7.2.1 Conclusions from the representation formula Similar to the theory of functions of one complex variable, we obtain here results for harmonic functions from the representation formula, in particular from (7.2), (7.3). We recall that a function u is called harmonic if u 2 C2() and 4u = 0 in . Proposition 7.1. Assume u is harmonic in . Then u 2 C1(). Proof. Let 0 be a domain such that y 2 0. It follows from representation formulas (7.2), (7.3), where := 0, that Dlu(y) exist and 180 CHAPTER 7. ELLIPTIC EQUATIONS OF SECOND ORDER are continuous for all l since one can change differentiation with integration in right hand sides of the representation formulae. 2 Remark. In fact, a function which is harmonic in is even real analytic in , see an exercise. Proposition 7.2 (Mean value formula for harmonic functions). Assume u is harmonic in . Then for each B(x) u(x) = 1 !nn1 Z @B(x) u(y) dSy. Proof. Consider the case n 3. The assertion follows from (7.3) where := B(x) since r = and Z @B(x) 1 rn2 @u @ny dSy = 1 n2 Z @B(x) @u @ny dSy = 1 n2 Z B(x) 4u dy = 0. 2 We recall that a domain 2 Rn is called connected if is not the union of two nonempty open subsets 1, 2 such that 1 \ 2 = ;. A domain in Rn is connected if and only if its path connected. Proposition 7.3 (Maximum principle). Assume u is harmonic in a connected domain and achieves its supremum or infimum in . Then u const. in . Proof. Consider the case of the supremum. Let x0 2 such that u(x0) = sup
u(x) =: M. Set 1 := {x 2 : u(x) = M} and 2 := {x 2 : u(x) < M}. The set 1 is not empty since x0 2 1. The set 2 is open since u 2 C2(). Consequently, 2 is empty if we can show that 1 is open. Let x 2 1, then there is a 0 > 0 such that B0(x) and u(x) = M for all x 2 B0(x). 7.3. BOUNDARY VALUE PROBLEMS 181 If not, then there exists > 0 and bx such that |bx x| = , 0 < < 0 and u(bx) < M. From the mean value formula, see Proposition 7.2, it follows M= 1 !nn1 Z @B(x) u(x) dS < M !nn1 Z @B(x) dS = M, which is a contradiction. Thus, the set 2 is empty since 1 is open. 2 Corollary. Assume is connected and bounded, and u 2 C2() \ C() is harmonic in . Then u achieves its minimum and its maximum on the boundary @. Remark. The previous corollary fails if is not bounded as simple counterexamples show. 7.3 Boundary value problems Assume Rn is a connected domain. 7.3.1 Dirichlet problem The Dirichlet problem (first boundary value problem) is to find a solution u 2 C2() \ C() of 4u = 0 in (7.4) u = on @, (7.5) where is given and continuous on @. Proposition 7.4. Assume is bounded, then a solution to the Dirichlet problem is uniquely determined. Proof. Maximum principle. Remark. The previous result fails if we take away in the boundary condition (7.5) one point from the the boundary as the following example shows. Let R2 be the domain = {x 2 B1(0) : x2 > 0}, 182 CHAPTER 7. ELLIPTIC EQUATIONS OF SECOND ORDER x2 1 x1 Figure 7.3: Counterexample Assume u 2 C2() \ C( \ {0}) is a solution of 4u = 0 in u = 0 on @ \ {0}. This problem has solutions u 0 and u = Im(z +z1), where z = x1 +ix2. Another example see an exercise. In contrast to this behaviour of the Laplace equation, one has uniqueness if 4u = 0 is replaced by the minimal surface equation @ @x1 p ux1 1 + |ru|2
! + @ @x2 p ux2 1 + |ru|2 ! = 0. 7.3.2 Neumann problem The Neumann problem (second boundary value problem) is to find a solution u 2 C2() \ C1() of 4u = 0 in (7.6) @u @n = on @, (7.7) where is given and continuous on @. Proposition 7.5. Assume is bounded, then a solution to the Dirichlet problem is in the class u 2 C2() uniquely determined up to a constant. Proof. Exercise. Hint: Multiply the differential equation 4w = 0 by w and integrate the result over . 7.4. GREENS FUNCTION FOR 4 183 Another proof under the weaker assumption u 2 C1() \ C2() follows from the Hopf boundary point lemma, see Lecture Notes: Linear Elliptic Equations of Second Order, for example. 7.3.3 Mixed boundary value problem The Mixed boundary value problem (third boundary value problem) is to find a solution u 2 C2() \ C1() of 4u = 0 in (7.8) @u @n + hu = on @, (7.9) where and h are given and continuous on @.e and h are given and continuous on @. Proposition 7.6. Assume is bounded and sufficiently regular, then a solution to the mixed problem is uniquely determined in the class u 2 C2() provided h(x) 0 on @ and h(x) > 0 for at least one point x 2 @. Proof. Exercise. Hint: Multiply the differential equation 4w = 0 by w and integrate the result over . 7.4 Greens function for 4 Theorem 7.1 says that each harmonic function satisfies u(x) = Z @ (y, x) @u(y) @ny u(y) @(y, x) @ny dSy, (7.10) where (y, x) is a fundamental solution. In general, u does not satisfies the boundary condition in the above boundary value problems. Since = s+, see Section 7.2, where is an arbitrary harmonic function for each fixed x, we try to find a such that u satisfies also the boundary condition.
Another proof, under the weaker assumption $u\in C^1(\overline{\Omega})\cap C^2(\Omega)$, follows from the Hopf boundary point lemma, see, for example, Lecture Notes: Linear Elliptic Equations of Second Order.

7.3.3 Mixed boundary value problem

The mixed boundary value problem (third boundary value problem) is to find a solution $u\in C^2(\Omega)\cap C^1(\overline{\Omega})$ of
$$\triangle u=0\quad\text{in }\Omega,\qquad (7.8)$$
$$\frac{\partial u}{\partial n}+hu=\Phi\quad\text{on }\partial\Omega,\qquad (7.9)$$
where $\Phi$ and $h$ are given and continuous on $\partial\Omega$.

Proposition 7.6. Assume $\Omega$ is bounded and sufficiently regular; then a solution to the mixed problem is uniquely determined in the class $u\in C^2(\overline{\Omega})$, provided $h(x)\ge 0$ on $\partial\Omega$ and $h(x)>0$ for at least one point $x\in\partial\Omega$.

Proof. Exercise. Hint: Multiply the differential equation $\triangle w=0$ by $w$ and integrate the result over $\Omega$.

7.4 Green's function for $\triangle$

Theorem 7.1 says that each harmonic function satisfies
$$u(x)=\int_{\partial\Omega}\left(\gamma(y,x)\frac{\partial u(y)}{\partial n_y}-u(y)\frac{\partial\gamma(y,x)}{\partial n_y}\right)dS_y,\qquad (7.10)$$
where $\gamma(y,x)$ is a fundamental solution. In general, $u$ does not satisfy the boundary condition in the above boundary value problems. Since $\gamma=s+\phi$, see Section 7.2, where $\phi$ is an arbitrary harmonic function for each fixed $x$, we try to find a $\phi$ such that $u$ satisfies also the boundary condition.

Consider the Dirichlet problem; then we look for a $\phi$ such that
$$\gamma(y,x)=0,\qquad y\in\partial\Omega,\ x\in\Omega.\qquad (7.11)$$
Then
$$u(x)=-\int_{\partial\Omega}\frac{\partial\gamma(y,x)}{\partial n_y}\,u(y)\,dS_y,\qquad x\in\Omega.$$
Suppose that $u$ achieves its boundary values $\Phi$ of the Dirichlet problem; then
$$u(x)=-\int_{\partial\Omega}\frac{\partial\gamma(y,x)}{\partial n_y}\,\Phi(y)\,dS_y.\qquad (7.12)$$
We claim that this function solves the Dirichlet problem (7.4), (7.5). A function $\gamma(y,x)$ which satisfies (7.11), and some additional assumptions, is called a Green function. More precisely, we define a Green function as follows.

Definition. A function $G(y,x)$, $y,x\in\overline{\Omega}$, $x\ne y$, is called a Green function associated to $\Omega$ and to the Dirichlet problem (7.4), (7.5) if for fixed $x\in\Omega$, that is, we consider $G(y,x)$ as a function of $y$, the following properties hold:

(i) $G(y,x)\in C^2(\Omega\setminus\{x\})\cap C(\overline{\Omega}\setminus\{x\})$, $\triangle_y G(y,x)=0$, $x\ne y$.

(ii) $G(y,x)-s(|x-y|)\in C^2(\Omega)\cap C(\overline{\Omega})$.

(iii) $G(y,x)=0$ if $y\in\partial\Omega$, $x\ne y$.

Remark. We will see in the next section that a Green function exists at least for some domains of simple geometry. Concerning the existence of a Green function for more general domains, see [13].

It is an interesting fact that we get from (i)-(iii) of the above definition two further important properties. We assume that $\Omega$ is bounded, sufficiently regular and connected.

Proposition 7.7. A Green function has the following properties. In the case $n=2$ we assume $\mathrm{diam}\,\Omega<1$.

(A) $G(x,y)=G(y,x)$ (symmetry).

(B) $0<G(x,y)<s(|x-y|)$, $x,y\in\Omega$, $x\ne y$.

Proof. (A) Let $x^{(1)},x^{(2)}\in\Omega$. Set $B_i=B_\rho(x^{(i)})$, $i=1,2$. We assume $\overline{B_i}\subset\Omega$ and $\overline{B_1}\cap\overline{B_2}=\emptyset$. Since $G(y,x^{(1)})$ and $G(y,x^{(2)})$ are harmonic in $\Omega\setminus(\overline{B_1}\cup\overline{B_2})$, we obtain from Green's identity, see Figure 7.4 for notations,

Figure 7.4: Proof of Proposition 7.7

$$0=\int_{\partial(\Omega\setminus(B_1\cup B_2))}\left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right)dS_y$$
$$=\int_{\partial\Omega}\left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right)dS_y$$
$$\quad+\int_{\partial B_1}\left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right)dS_y$$
$$\quad+\int_{\partial B_2}\left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right)dS_y.$$
The integral over $\partial\Omega$ is zero because of property (iii) of a Green function, and
$$\int_{\partial B_1}\left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right)dS_y\ \to\ -G(x^{(1)},x^{(2)}),$$
$$\int_{\partial B_2}\left(G(y,x^{(1)})\frac{\partial}{\partial n_y}G(y,x^{(2)})-G(y,x^{(2)})\frac{\partial}{\partial n_y}G(y,x^{(1)})\right)dS_y\ \to\ G(x^{(2)},x^{(1)})$$
as $\rho\to 0$. This follows by considerations as in the proof of Theorem 7.1. Combining the above relations yields $G(x^{(1)},x^{(2)})=G(x^{(2)},x^{(1)})$.

(B) Since
$$G(y,x)=s(|x-y|)+\phi(y,x)$$
and $G(y,x)=0$ if $y\in\partial\Omega$ and $x\in\Omega$, we have for $y\in\partial\Omega$
$$\phi(y,x)=-s(|x-y|).$$
From the definition of $s(|x-y|)$ it follows that $\phi(y,x)<0$ if $y\in\partial\Omega$. Thus, since $\triangle_y\phi=0$ in $\Omega$, the maximum-minimum principle implies that $\phi(y,x)<0$ for all $y,x\in\Omega$. Consequently
$$G(y,x)<s(|x-y|),\qquad x,y\in\Omega,\ x\ne y.$$
It remains to show that
$$G(y,x)>0,\qquad x,y\in\Omega,\ x\ne y.$$
Fix $x\in\Omega$ and let $B_\rho(x)$ be a ball such that $B_\rho(x)\subset\Omega$ for all $0<\rho<\rho_0$. There is a sufficiently small $\rho_0>0$ such that for each $\rho$, $0<\rho<\rho_0$,
$$G(y,x)>0\quad\text{for all }y\in B_\rho(x),\ x\ne y,$$
see property (ii) of a Green function (near $x$ the singularity of $s$ dominates the bounded function $\phi$). Since
$$\triangle_y G(y,x)=0\ \text{in }\Omega\setminus B_\rho(x),\qquad G(y,x)>0\ \text{if }y\in\partial B_\rho(x),\qquad G(y,x)=0\ \text{if }y\in\partial\Omega,$$
it follows from the maximum-minimum principle that $G(y,x)>0$ on $\Omega\setminus B_\rho(x)$. $\Box$

7.4.1 Green's function for a ball

If $\Omega=B_R(0)$ is a ball, then Green's function is explicitly known. Let $\Omega=B_R(0)$ be a ball in $\mathbb{R}^n$ with radius $R$ and center at the origin. Let $x,y\in B_R(0)$ and let $y'$ be the point obtained by reflecting $y$ at the sphere $\partial B_R(0)$, that is, in particular, $|y||y'|=R^2$, see Figure 7.5 for notations. Set
$$G(x,y)=s(r)-s\!\left(\frac{\rho}{R}\,r_1\right),$$
where $s$ is the singularity function of Section 7.1, $r=|x-y|$ and
$$\rho^2=\sum_{i=1}^n y_i^2,\qquad r_1^2=\sum_{i=1}^n\left(x_i-\frac{R^2}{\rho^2}\,y_i\right)^2.$$

Figure 7.5: Reflection on $\partial B_R(0)$
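As a small numerical sanity check of this formula (an added illustration, not part of the original notes), the following script evaluates $G$ for the unit ball in $\mathbb{R}^3$, where $s(r)=1/(4\pi r)$ by Section 7.1, and confirms the symmetry $G(x,y)=G(y,x)$ and the boundary property $G(x,y)=0$ for $|y|=R$. The helper name G_ball and the sample points are arbitrary choices made here.

    import numpy as np

    def s(r):                        # singularity function for n = 3
        return 1.0 / (4.0 * np.pi * r)

    def G_ball(x, y, R=1.0):         # Green function of B_R(0) in R^3
        x, y = np.asarray(x, float), np.asarray(y, float)
        rho = np.linalg.norm(y)
        r = np.linalg.norm(x - y)
        r1 = np.linalg.norm(x - (R**2 / rho**2) * y)
        return s(r) - s(rho * r1 / R)

    x = np.array([0.3, -0.1, 0.2])
    y = np.array([-0.2, 0.4, 0.1])
    print(G_ball(x, y) - G_ball(y, x))           # ~ 0 (symmetry)
    print(G_ball(x, np.array([0.6, 0.8, 0.0])))  # ~ 0 (here |y| = R)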
This function $G(x,y)$ satisfies (i)-(iii) of the definition of a Green function. We claim that
$$u(x)=-\int_{\partial B_R(0)}\frac{\partial}{\partial n_y}G(x,y)\,\Phi(y)\,dS_y$$
is a solution of the Dirichlet problem (7.4), (7.5). This formula is also true for a large class of domains $\Omega\subset\mathbb{R}^n$, see [13].

Lemma.
$$-\frac{\partial}{\partial n_y}G(x,y)\Big|_{|y|=R}=\frac{1}{R\,\omega_n}\,\frac{R^2-|x|^2}{|y-x|^n}.$$

Proof. Exercise.

Set
$$H(x,y)=\frac{1}{R\,\omega_n}\,\frac{R^2-|x|^2}{|y-x|^n},\qquad (7.13)$$
which is called Poisson's kernel.

Theorem 7.2. Assume $\Phi\in C(\partial\Omega)$. Then
$$u(x)=\int_{\partial B_R(0)}H(x,y)\,\Phi(y)\,dS_y$$
is the solution of the first boundary value problem (7.4), (7.5) in the class $C^2(\Omega)\cap C(\overline{\Omega})$.
Proof. The proof follows from the following properties of $H(x,y)$:

(i) $H(x,y)\in C^\infty$, $|y|=R$, $|x|<R$, $x\ne y$,

(ii) $\triangle_x H(x,y)=0$, $|x|<R$, $|y|=R$,

(iii) $\int_{\partial B_R(0)}H(x,y)\,dS_y=1$, $|x|<R$,

(iv) $H(x,y)>0$, $|y|=R$, $|x|<R$,

(v) fix $\zeta\in\partial B_R(0)$ and $\delta>0$; then $\lim_{x\to\zeta,\,|x|<R}H(x,y)=0$ uniformly in $y\in\partial B_R(0)$, $|y-\zeta|>\delta$.

Properties (i), (iv) and (v) follow from the definition (7.13) of $H$, and (ii) from (7.13) or from
$$H=-\frac{\partial G(x,y)}{\partial n_y}\Big|_{y\in\partial B_R(0)},$$
$G$ harmonic and $G(x,y)=G(y,x)$. Property (iii) is a consequence of the formula
$$u(x)=\int_{\partial B_R(0)}H(x,y)\,u(y)\,dS_y,$$
valid for each harmonic function $u$, see the calculations leading to the representation formula above. We obtain (iii) if we set $u\equiv 1$.

It remains to show that $u$, given by Poisson's formula, is in $C(\overline{B_R(0)})$ and that $u$ achieves the prescribed boundary values. Fix $\zeta\in\partial B_R(0)$ and let $x\in B_R(0)$. Then
$$u(x)-\Phi(\zeta)=\int_{\partial B_R(0)}H(x,y)\left(\Phi(y)-\Phi(\zeta)\right)dS_y=I_1+I_2,$$
where
$$I_1=\int_{\partial B_R(0),\ |y-\zeta|<\delta}H(x,y)\left(\Phi(y)-\Phi(\zeta)\right)dS_y,$$
$$I_2=\int_{\partial B_R(0),\ |y-\zeta|\ge\delta}H(x,y)\left(\Phi(y)-\Phi(\zeta)\right)dS_y.$$
For given (small) $\epsilon>0$ there is a $\delta=\delta(\epsilon)>0$ such that
$$|\Phi(y)-\Phi(\zeta)|<\epsilon\quad\text{for all }y\in\partial B_R(0)\text{ with }|y-\zeta|<\delta.$$
It follows that $|I_1|\le\epsilon$ because of (iii) and (iv). Set $M=\max_{\partial B_R(0)}|\Phi|$. From (v) we conclude that there is a $\delta_0>0$ such that
$$H(x,y)<\frac{\epsilon}{2M\omega_nR^{n-1}}$$
if $x$ and $y$ satisfy $|x-\zeta|<\delta_0$, $|y-\zeta|>\delta$, see Figure 7.6 for notations.

Figure 7.6: Proof of Theorem 7.2

Thus $|I_2|<\epsilon$, and the inequality
$$|u(x)-\Phi(\zeta)|<2\epsilon\quad\text{for }x\in B_R(0)\text{ such that }|x-\zeta|<\delta_0$$
is shown. $\Box$
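Poisson's formula can also be checked numerically. The following sketch is an illustration added here (not part of the original notes): it approximates the integral of Theorem 7.2 for the unit disk ($n=2$, $\omega_2=2\pi$, $dS_y=R\,d\theta$) with the boundary data $\Phi(y)=y_1^2-y_2^2$, whose harmonic extension is $x_1^2-x_2^2$, and compares the two values at an arbitrarily chosen interior point.

    import numpy as np

    R = 1.0
    theta = np.linspace(0.0, 2.0*np.pi, 2000, endpoint=False)
    y = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)   # boundary points
    Phi = y[:, 0]**2 - y[:, 1]**2                              # boundary data

    def u(x):                                     # Poisson integral, n = 2
        d2 = np.sum((y - x)**2, axis=1)           # |y - x|^2
        H = (R**2 - x @ x) / (2.0*np.pi*R*d2)     # Poisson kernel (7.13)
        return np.sum(H * Phi) * 2.0*np.pi*R / theta.size      # dS_y = R dtheta

    x = np.array([0.3, -0.4])
    print(u(x), x[0]**2 - x[1]**2)                # both approximately -0.07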
Remark. Define $\theta\in[0,\pi]$ through $\cos\theta=x\cdot y/(|x||y|)$; then we can write Poisson's formula of Theorem 7.2 as
$$u(x)=\frac{R^2-|x|^2}{\omega_nR}\int_{\partial B_R(0)}\frac{\Phi(y)}{\left(|x|^2+R^2-2|x|R\cos\theta\right)^{n/2}}\,dS_y.$$
In the case $n=2$ we can expand this integral in a power series with respect to $\sigma:=|x|/R$ if $|x|<R$, since
$$\frac{R^2-|x|^2}{|x|^2+R^2-2|x|R\cos\theta}=\frac{1-\sigma^2}{\sigma^2-2\sigma\cos\theta+1}=1+2\sum_{n=1}^\infty\sigma^n\cos(n\theta),$$
see [16], pp. 18, for an easy proof of this formula, or [4], Vol. II, p. 246.

7.4.2 Green's function and conformal mapping

For two-dimensional domains there is a beautiful connection between conformal mapping and Green's function. Let $w=f(z)$ be a conformal mapping from a sufficiently regular connected domain $\Omega\subset\mathbb{R}^2$ onto the interior of the unit circle, see Figure 7.7.

Figure 7.7: Conformal mapping

Then the Green function of $\Omega$ is, see for example [16] or other textbooks on the theory of functions of one complex variable,
$$G(z,z_0)=\frac{1}{2\pi}\ln\left|\frac{1-f(z)\overline{f(z_0)}}{f(z)-f(z_0)}\right|,$$
where $z=x_1+ix_2$, $z_0=y_1+iy_2$.
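As an added worked example (not contained in the original notes), take for $\Omega$ the upper half-plane and the conformal map $f(z)=(z-i)/(z+i)$ onto the unit disk. A short computation gives
$$f(z)-f(z_0)=\frac{2i(z-z_0)}{(z+i)(z_0+i)},\qquad 1-f(z)\overline{f(z_0)}=\frac{-2i(z-\overline{z_0})}{(z+i)(\overline{z_0}-i)},$$
and since $|\overline{z_0}-i|=|z_0+i|$, the formula above yields
$$G(z,z_0)=\frac{1}{2\pi}\ln\frac{|z-\overline{z_0}|}{|z-z_0|},$$
which is the Green function of the half-plane obtained by reflecting the source point at the real axis.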
7.5 Inhomogeneous equation

Here we consider solutions $u\in C^2(\Omega)\cap C(\overline{\Omega})$ of
$$-\triangle u=f(x)\quad\text{in }\Omega,\qquad (7.14)$$
$$u=0\quad\text{on }\partial\Omega,\qquad (7.15)$$
where $f$ is given. We need the following lemma concerning volume potentials. We assume that $\Omega$ is bounded and sufficiently regular such that all the following integrals exist; see [6] for generalizations concerning these assumptions. Let, for $x\in\mathbb{R}^n$, $n\ge 3$,
$$V(x)=\int_\Omega f(y)\frac{1}{|x-y|^{n-2}}\,dy,$$
and set in the two-dimensional case
$$V(x)=\int_\Omega f(y)\ln\frac{1}{|x-y|}\,dy.$$
We recall that $\omega_n=|\partial B_1(0)|$.

Lemma. (i) Assume $f\in C(\Omega)$. Then $V\in C^1(\mathbb{R}^n)$ and
$$V_{x_i}(x)=\int_\Omega f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|^{n-2}}\right)dy\quad\text{if }n\ge 3,$$
$$V_{x_i}(x)=\int_\Omega f(y)\frac{\partial}{\partial x_i}\left(\ln\frac{1}{|x-y|}\right)dy\quad\text{if }n=2.$$
(ii) If $f\in C^1(\Omega)$, then $V\in C^2(\Omega)$ and
$$\triangle V=-(n-2)\,\omega_n f(x),\quad x\in\Omega,\ n\ge 3,$$
$$\triangle V=-2\pi f(x),\quad x\in\Omega,\ n=2.$$

Proof. To simplify the presentation, we consider the case $n=3$.

(i) The first assertion follows since we can interchange differentiation and integration, because the differentiated integrand is only weakly singular, see an exercise.

(ii) We will differentiate at $x\in\Omega$. Let $B$ be a fixed ball such that $x\in B$, with $B$ sufficiently small such that $\overline{B}\subset\Omega$. Then, according to (i), and since we have the identity
$$\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)=-\frac{\partial}{\partial y_i}\left(\frac{1}{|x-y|}\right),$$
which implies that
$$f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)=-\frac{\partial}{\partial y_i}\left(f(y)\frac{1}{|x-y|}\right)+f_{y_i}(y)\frac{1}{|x-y|},$$
we obtain
$$V_{x_i}(x)=\int_\Omega f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)dy$$
$$=\int_{\Omega\setminus B}f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)dy+\int_B f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)dy$$
$$=\int_{\Omega\setminus B}f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)dy+\int_B\left(-\frac{\partial}{\partial y_i}\left(f(y)\frac{1}{|x-y|}\right)+f_{y_i}(y)\frac{1}{|x-y|}\right)dy$$
$$=\int_{\Omega\setminus B}f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)dy+\int_B f_{y_i}(y)\frac{1}{|x-y|}\,dy-\int_{\partial B}f(y)\frac{1}{|x-y|}\,n_i\,dS_y,$$
where $n$ is the exterior unit normal at $\partial B$. The first and the third integral on the right-hand side are continuously differentiable with respect to $x\in B$, since $x$ stays away from the associated domains of integration; the second integral is in $C^1(B)$ according to (i), since $f\in C^1(\Omega)$ by assumption. Because of $\triangle_x(|x-y|^{-1})=0$, $x\ne y$, it follows that
$$\triangle V=\int_B\sum_{i=1}^n f_{y_i}(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)dy-\int_{\partial B}f(y)\sum_{i=1}^n\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|}\right)n_i\,dS_y.$$
Now we choose for $B$ a ball $B_\rho(x)$ with center at $x$; then
$$\triangle V=I_1+I_2,$$
where
$$I_1=\int_{B_\rho(x)}\sum_{i=1}^n f_{y_i}(y)\frac{y_i-x_i}{|x-y|^3}\,dy,\qquad I_2=-\int_{\partial B_\rho(x)}f(y)\frac{1}{\rho^2}\,dS_y.$$
We recall that $n\cdot(y-x)=\rho$ if $y\in\partial B_\rho(x)$. We have $I_1=O(\rho)$ as $\rho\to 0$, and for $I_2$ we obtain from the mean value theorem of integral calculus that, for a $\overline{y}\in\partial B_\rho(x)$,
$$I_2=-\frac{1}{\rho^2}\,f(\overline{y})\int_{\partial B_\rho(x)}dS_y=-\omega_n f(\overline{y}),$$
which implies that $\lim_{\rho\to 0}I_2=-\omega_n f(x)$. $\Box$
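To illustrate assertion (ii) of the lemma (an added example, not from the original notes), take $n=3$, $\Omega=B_R(0)$ and $f\equiv 1$. A classical computation gives the Newtonian potential of the ball,
$$V(x)=\int_{B_R(0)}\frac{dy}{|x-y|}=2\pi\left(R^2-\frac{|x|^2}{3}\right),\qquad |x|\le R,$$
and hence
$$\triangle V=-\frac{2\pi}{3}\,\triangle|x|^2=-4\pi=-(n-2)\,\omega_n f(x),$$
since $\triangle|x|^2=2n=6$ and $\omega_3=4\pi$, in agreement with the lemma.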
In the following we assume that a Green function exists for the domain $\Omega$, which is the case if $\Omega$ is a ball.

Theorem 7.3. Assume $f\in C^1(\Omega)\cap C(\overline{\Omega})$. Then
$$u(x)=\int_\Omega G(x,y)f(y)\,dy$$
is the solution of the inhomogeneous problem (7.14), (7.15).

Proof. For simplicity of the presentation let $n=3$. We will show that
$$u(x):=\int_\Omega G(x,y)f(y)\,dy$$
is a solution of (7.14), (7.15). Since
$$G(x,y)=\frac{1}{4\pi|x-y|}+\phi(x,y),$$
where $\phi(x,y)$ is a potential function, that is, harmonic with respect to $x$ and to $y$, we obtain from the above lemma that
$$\triangle u=\frac{1}{4\pi}\,\triangle\int_\Omega f(y)\frac{1}{|x-y|}\,dy+\int_\Omega\triangle_x\phi(x,y)f(y)\,dy=-f(x),$$
where $x\in\Omega$. It remains to show that $u$ achieves its boundary values. That is, for fixed $x_0\in\partial\Omega$ we will prove that
$$\lim_{x\to x_0,\ x\in\Omega}u(x)=0.$$
Set
$$u(x)=I_1+I_2,$$
where
$$I_1(x)=\int_{\Omega\setminus B_\rho(x_0)}G(x,y)f(y)\,dy,\qquad I_2(x)=\int_{\Omega\cap B_\rho(x_0)}G(x,y)f(y)\,dy.$$
Let $M=\max_{\overline{\Omega}}|f(x)|$. Since
$$G(x,y)=\frac{1}{4\pi}\,\frac{1}{|x-y|}+\phi(x,y),$$
we obtain, if $x\in B_\rho(x_0)\cap\Omega$,
$$|I_2|\le\frac{M}{4\pi}\int_{\Omega\cap B_\rho(x_0)}\frac{dy}{|x-y|}+O(\rho^2)\le\frac{M}{4\pi}\int_{B_{2\rho}(x)}\frac{dy}{|x-y|}+O(\rho^2)=O(\rho^2)$$
as $\rho\to 0$. Consequently, for given $\epsilon>0$ there is a $\rho_0=\rho_0(\epsilon)>0$ such that
$$|I_2|<\frac{\epsilon}{2}\quad\text{for all }0<\rho\le\rho_0.$$
For each fixed $\rho$, $0<\rho\le\rho_0$, we have
$$\lim_{x\to x_0,\ x\in\Omega}I_1(x)=0,$$
since $G(x_0,y)=0$ if $y\in\Omega\setminus B_\rho(x_0)$ and $G(x,y)$ is uniformly continuous in $x\in B_{\rho/2}(x_0)\cap\overline{\Omega}$ and $y\in\Omega\setminus B_\rho(x_0)$, see Figure 7.8 for notations. $\Box$

Figure 7.8: Proof of Theorem 7.3

Remark. For the proof of (ii) in the above lemma it is sufficient to assume that $f$ is Hölder continuous. More precisely, let $f\in C^\lambda(\Omega)$, $0<\lambda<1$; then $V\in C^{2,\lambda}(\Omega)$, see for example [9].
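As a quick illustration of Theorem 7.3 (an added example, not from the original notes), consider $\Omega=B_R(0)\subset\mathbb{R}^n$ and $f\equiv 1$. The function
$$u(x)=\frac{R^2-|x|^2}{2n}$$
satisfies $-\triangle u=1$ in $B_R(0)$ and $u=0$ on $\partial B_R(0)$; by the uniqueness for the Dirichlet problem it is therefore exactly the function $\int_{B_R(0)}G(x,y)\,dy$ produced by the theorem.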
7.6 Exercises

1. Let $\gamma(x,y)$ be a fundamental solution to $\triangle$, $y\in\Omega$. Show that
$$\int_\Omega\gamma(x,y)\,\triangle_x\Phi(x)\,dx=-\Phi(y)\quad\text{for all }\Phi\in C_0^2(\Omega).$$
Hint: See the proof of the representation formula.

2. Show that $|x|^{-1}\sin(k|x|)$ is a solution of the Helmholtz equation
$$\triangle u+k^2u=0\quad\text{in }\mathbb{R}^3\setminus\{0\}.$$

3. Assume $u\in C^2(\overline{\Omega})$, $\Omega$ bounded and sufficiently regular, is a solution of
$$\triangle u=u^3\quad\text{in }\Omega,\qquad u=0\quad\text{on }\partial\Omega.$$
Show that $u\equiv 0$ in $\Omega$.

4. Let $\Omega=\{x\in\mathbb{R}^2:\ x_1>0,\ 0<x_2<x_1\tan\alpha\}$, $0<\alpha<\pi/2$. Show that
$$u(x)=r^{k\pi/\alpha}\sin\left(\frac{k\pi\theta}{\alpha}\right)$$
is a harmonic function in $\Omega$ satisfying $u=0$ on $\partial\Omega$, provided $k$ is an integer. Here $(r,\theta)$ are polar coordinates with center at $(0,0)$.

5. Let $u\in C^2(\overline{\Omega})$ be a solution of $\triangle u=0$ on the quadrangle $\Omega=(0,1)\times(0,1)$ satisfying the boundary conditions $u(0,y)=u(1,y)=0$ for all $y\in[0,1]$ and $u_y(x,0)=u_y(x,1)=0$ for all $x\in[0,1]$. Prove that $u\equiv 0$ in $\Omega$.

6. Let $u\in C^2(\mathbb{R}^n)$ be a solution of $\triangle u=0$ in $\mathbb{R}^n$ satisfying $u\in L^2(\mathbb{R}^n)$, i.e., $\int_{\mathbb{R}^n}u^2(x)\,dx<\infty$. Show that $u\equiv 0$ in $\mathbb{R}^n$.
Hint: Prove
$$\int_{B_R(0)}|\nabla u|^2\,dx\le\frac{\mathrm{const.}}{R^2}\int_{B_{2R}(0)}|u|^2\,dx,$$
where the constant is independent of $R$. To show this inequality, multiply the differential equation by $\zeta:=\eta^2u$, where $\eta\in C^1$ is a cut-off function with the properties: $\eta\equiv 1$ in $B_R(0)$, $\eta\equiv 0$ in the exterior of $B_{2R}(0)$, $0\le\eta\le 1$, $|\nabla\eta|\le C/R$. Integrate the product, apply integration by parts and use the inequality $2ab\le\epsilon a^2+\frac{1}{\epsilon}b^2$, $\epsilon>0$.

7. Show that a bounded harmonic function defined on $\mathbb{R}^n$ must be a constant (a theorem of Liouville).

8. Assume $u\in C^2(B_1(0))\cap C(\overline{B_1(0)}\setminus\{(1,0)\})$ is a solution of
$$\triangle u=0\quad\text{in }B_1(0),\qquad u=0\quad\text{on }\partial B_1(0)\setminus\{(1,0)\}.$$
Show that there are at least two solutions.
Hint: Consider
$$u(x,y)=\frac{1-(x^2+y^2)}{(1-x)^2+y^2}.$$

9. Assume $\Omega\subset\mathbb{R}^n$ is bounded and $u,v\in C^2(\Omega)\cap C(\overline{\Omega})$ satisfy $\triangle u=\triangle v$ and $\max_{\partial\Omega}|u-v|\le\epsilon$ for given $\epsilon>0$. Show that $\max_{\overline{\Omega}}|u-v|\le\epsilon$.

10. Set $\Omega=\mathbb{R}^n\setminus\overline{B_1(0)}$ and let $u\in C^2(\overline{\Omega})$ be a harmonic function in $\Omega$ satisfying $\lim_{|x|\to\infty}u(x)=0$. Prove that
$$\max_{\overline{\Omega}}|u|=\max_{\partial\Omega}|u|.$$
Hint: Apply the maximum principle to $\Omega\cap B_R(0)$, $R$ large.

11. Let $\Omega_\alpha=\{x\in\mathbb{R}^2:\ x_1>0,\ 0<x_2<x_1\tan\alpha\}$, $0<\alpha<\pi/2$, $\Omega_{\alpha,R}=\Omega_\alpha\cap B_R(0)$, and assume $f$ is given and bounded on $\Omega_{\alpha,R}$. Show that for each solution $u\in C^1(\overline{\Omega_{\alpha,R}})\cap C^2(\Omega_{\alpha,R})$ of $\triangle u=f$ in $\Omega_{\alpha,R}$ satisfying $u=0$ on $\partial\Omega_{\alpha,R}\cap B_R(0)$, the following holds: for given $\epsilon>0$ there is a constant $C(\epsilon)$ such that
$$|u(x)|\le C(\epsilon)\,|x|^{2-\epsilon}\quad\text{in }\Omega_{\alpha,R}.$$
Hint: (a) Comparison principle (a consequence of the maximum principle): assume $\Omega$ is bounded and $u,v\in C^2(\Omega)\cap C(\overline{\Omega})$ satisfy $\triangle u\ge\triangle v$ in $\Omega$ and $u\le v$ on $\partial\Omega$; then $u\le v$ in $\Omega$.
(b) An appropriate comparison function is
$$v=A\,r^{\lambda}\sin\bigl(B(\theta+\eta)\bigr),$$
with $A$, $B$, $\lambda$, $\eta$ appropriate constants, $B$, $\eta$ positive.

12. Let $\Omega$ be the quadrangle $(-1,1)\times(-1,1)$ and $u\in C^2(\Omega)\cap C(\overline{\Omega})$ a solution of the boundary value problem $-\triangle u=1$ in $\Omega$, $u=0$ on $\partial\Omega$. Find a lower and an upper bound for $u(0,0)$.
Hint: Consider the comparison function $v=A(x^2+y^2)$, $A=\mathrm{const.}$

13. Let $u\in C^2(B_a(0))\cap C(\overline{B_a(0)})$ satisfy $u\ge 0$ and $\triangle u=0$ in $B_a(0)$. Prove Harnack's inequality:
$$\frac{a^{n-2}(a-|\zeta|)}{(a+|\zeta|)^{n-1}}\,u(0)\ \le\ u(\zeta)\ \le\ \frac{a^{n-2}(a+|\zeta|)}{(a-|\zeta|)^{n-1}}\,u(0).$$
Hint: Use the formula (see Theorem 7.2)
$$u(y)=\frac{a^2-|y|^2}{a\,\omega_n}\int_{|x|=a}\frac{u(x)}{|x-y|^n}\,dS_x$$
for $y=\zeta$ and $y=0$.

14. Let $\Phi(\theta)$ be a $2\pi$-periodic $C^4$-function with the Fourier series
$$\Phi(\theta)=\sum_{n=0}^\infty\bigl(a_n\cos(n\theta)+b_n\sin(n\theta)\bigr).$$
Show that
$$u=\sum_{n=0}^\infty\bigl(a_n\cos(n\theta)+b_n\sin(n\theta)\bigr)r^n$$
solves the Dirichlet problem in $B_1(0)$.
15. Assume $u\in C^2(\Omega)$ satisfies $\triangle u=0$ in $\Omega$. Let $B_a(\zeta)$ be a ball such that its closure is in $\Omega$. Show that
$$|D^\alpha u(\zeta)|\le M\left(\frac{|\alpha|\,\kappa_n}{a}\right)^{|\alpha|},$$
where $M=\sup_{x\in B_a(\zeta)}|u(x)|$ and $\kappa_n=2n\,\omega_{n-1}/\bigl((n-1)\,\omega_n\bigr)$.
Hint: Use the formula of Theorem 7.2 successively for the $k$-th derivatives in balls with radius $a(m-k)/m$, $k=0,1,\dots,m-1$, where $m=|\alpha|$.

16. Use the result of the previous exercise to show that $u\in C^2(\Omega)$ satisfying $\triangle u=0$ in $\Omega$ is real analytic in $\Omega$.
Hint: Use Stirling's formula
$$n!=n^n e^{-n}\sqrt{2\pi n}\left(1+O\left(\frac{1}{\sqrt{n}}\right)\right)$$
as $n\to\infty$ to show that $u$ is in the class $C_{K,r}(\zeta)$, where $K=cM$ and $r=a/(e\,\kappa_n)$. The constant $c$ is the constant in the estimate $n^n\le c\,e^n n!$, which follows from Stirling's formula. See Section 3.5 for the definition of a real analytic function.

17. Assume $\Omega$ is connected and $u\in C^2(\Omega)$ is a solution of $\triangle u=0$ in $\Omega$. Prove that $u\equiv 0$ in $\Omega$ if $D^\alpha u(\zeta)=0$ for all $\alpha$, for a point $\zeta\in\Omega$. In particular, $u\equiv 0$ in $\Omega$ if $u\equiv 0$ in an open subset of $\Omega$.

18. Let $\Omega=\{(x_1,x_2,x_3)\in\mathbb{R}^3:\ x_3>0\}$, which is a half-space of $\mathbb{R}^3$. Show that
$$G(x,y)=\frac{1}{4\pi|x-y|}-\frac{1}{4\pi|x-\overline{y}|},$$
where $\overline{y}=(y_1,y_2,-y_3)$, is the Green function to $\Omega$.

19. Let $\Omega=\{(x_1,x_2,x_3)\in\mathbb{R}^3:\ x_1^2+x_2^2+x_3^2<R^2,\ x_3>0\}$, which is half of a ball in $\mathbb{R}^3$. Show that
$$G(x,y)=\frac{1}{4\pi|x-y|}-\frac{R}{4\pi|y|\,|x-y^\star|}-\frac{1}{4\pi|x-\overline{y}|}+\frac{R}{4\pi|\overline{y}|\,|x-\overline{y}^\star|},$$
where $\overline{y}=(y_1,y_2,-y_3)$, $y^\star=R^2y/|y|^2$ and $\overline{y}^\star=R^2\overline{y}/|\overline{y}|^2$, is the Green function to $\Omega$.
20. Let $\Omega=\{(x_1,x_2,x_3)\in\mathbb{R}^3:\ x_2>0,\ x_3>0\}$, which is a wedge in $\mathbb{R}^3$. Show that
$$G(x,y)=\frac{1}{4\pi|x-y|}-\frac{1}{4\pi|x-\overline{y}|}-\frac{1}{4\pi|x-y'|}+\frac{1}{4\pi|x-\overline{y}'|},$$
where $\overline{y}=(y_1,y_2,-y_3)$, $y'=(y_1,-y_2,y_3)$ and $\overline{y}'=(y_1,-y_2,-y_3)$, is the Green function to $\Omega$.

21. Find Green's function for the exterior of a disk, i.e., of the domain $\Omega=\{x\in\mathbb{R}^2:\ |x|>R\}$.

22. Find Green's function for the angle domain $\Omega=\{z\in\mathbb{C}:\ 0<\arg z<\alpha\}$, $0<\alpha<\pi$.

23. Find Green's function for the slit domain $\Omega=\{z\in\mathbb{C}:\ 0<\arg z<2\pi\}$.

24. Let, for a sufficiently regular domain $\Omega\subset\mathbb{R}^n$ (a ball or a quadrangle, for example),
$$F(x)=\int_\Omega K(x,y)\,dy,$$
where $K(x,y)$ is continuous on $\overline{\Omega}\times\overline{\Omega}$ for $x\ne y$ and satisfies
$$|K(x,y)|\le\frac{c}{|x-y|^\alpha}$$
with constants $c$ and $\alpha$, $\alpha<n$. Show that $F(x)$ is continuous on $\overline{\Omega}$.

25. Prove (i) of the lemma of Section 7.5.
Hint: Consider the case $n\ge 3$. Fix a function $\eta\in C^1(\mathbb{R})$ satisfying $0\le\eta\le 1$, $0\le\eta'\le 2$, $\eta(t)=0$ for $t\le 1$, $\eta(t)=1$ for $t\ge 2$, and consider for $\epsilon>0$ the regularized integral
$$V_\epsilon(x):=\int_\Omega f(y)\,\eta_\epsilon\,\frac{dy}{|x-y|^{n-2}},$$
where $\eta_\epsilon=\eta(|x-y|/\epsilon)$. Show that $V_\epsilon$ converges uniformly to $V$ on compact subsets of $\mathbb{R}^n$ as $\epsilon\to 0$, and that $\partial V_\epsilon(x)/\partial x_i$ converges uniformly on compact subsets of $\mathbb{R}^n$ to
$$\int_\Omega f(y)\frac{\partial}{\partial x_i}\left(\frac{1}{|x-y|^{n-2}}\right)dy$$
as $\epsilon\to 0$.

26. Consider the inhomogeneous Dirichlet problem $-\triangle u=f$ in $\Omega$, $u=\Phi$ on $\partial\Omega$. Transform this problem into a Dirichlet problem for the Laplace equation.
Hint: Set $u=w+v$, where
$$w(x):=\int_\Omega s(|x-y|)f(y)\,dy.$$
Bibliography

[1] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Vol. 55, National Bureau of Standards Applied Mathematics Series, U.S. Government Printing Office, Washington, DC, 1964. Reprinted by Dover, New York, 1972.

[2] S. Bernstein, Sur un théorème de géométrie et son application aux dérivées partielles du type elliptique. Comm. Soc. Math. de Kharkov (2) 15 (1915-1917), 38-45. German translation: Math. Z. 26 (1927), 551-558.

[3] E. Bombieri, E. De Giorgi and E. Giusti, Minimal cones and the Bernstein problem. Inv. Math. 7 (1969), 243-268.

[4] R. Courant und D. Hilbert, Methoden der Mathematischen Physik. Band 1 und Band 2. Springer-Verlag, Berlin, 1968. English translation: Methods of Mathematical Physics. Vol. 1 and Vol. 2, Wiley-Interscience, 1962.

[5] L. C. Evans, Partial Differential Equations. Graduate Studies in Mathematics, Vol. 19, AMS, Providence, 1991.

[6] L. C. Evans and R. F. Gariepy, Measure Theory and Fine Properties of Functions. Studies in Advanced Mathematics, CRC Press, Boca Raton, 1992.

[7] R. Finn, Equilibrium Capillary Surfaces. Grundlehren, Vol. 284, Springer-Verlag, New York, 1986.

[8] P. R. Garabedian, Partial Differential Equations. Chelsea Publishing Company, New York, 1986.

[9] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order. Grundlehren, Vol. 224, Springer-Verlag, Berlin, 1983.

[10] F. John, Partial Differential Equations. Springer-Verlag, New York, 1982.

[11] K. Königsberger, Analysis 2. Springer-Verlag, Berlin, 1993.

[12] L. D. Landau and E. M. Lifschitz, Lehrbuch der Theoretischen Physik. Vol. 1, Akademie-Verlag, Berlin, 1964. German translation from Russian. English translation: Course of Theoretical Physics. Vol. 1, Pergamon Press, Oxford, 1976.

[13] R. Leis, Vorlesungen über partielle Differentialgleichungen zweiter Ordnung. B. I.-Hochschultaschenbücher 165/165a, Mannheim, 1967.

[14] J.-L. Lions and E. Magenes, Problèmes aux limites non homogènes et applications. Dunod, Paris, 1968.

[15] E. Miersemann, Kapillarflächen. Ber. Verh. Sächs. Akad. Wiss. Leipzig, Math.-Natur. Kl. 130 (2008), Heft 4, S. Hirzel, Leipzig, 2008.

[16] Z. Nehari, Conformal Mapping. Reprinted by Dover, New York, 1975.

[17] I. G. Petrowski, Vorlesungen über Partielle Differentialgleichungen. Teubner, Leipzig, 1955. Translation from Russian. English translation: Lectures on Partial Differential Equations. Wiley-Interscience, 1954.

[18] H. Sagan, Introduction to the Calculus of Variations. Dover, New York, 1992.

[19] J. Simons, Minimal varieties in riemannian manifolds. Ann. of Math. (2) 88 (1968), 62-105.

[20] W. I. Smirnow, Lehrgang der Höheren Mathematik, Teil II. VEB Verlag der Wiss., Berlin, 1975. Translation from Russian. English translation: Course of Higher Mathematics, Vol. 2, Elsevier, 1964.

[21] W. I. Smirnow, Lehrgang der Höheren Mathematik, Teil IV. VEB Verlag der Wiss., Berlin, 1975. Translation from Russian. English translation: Course of Higher Mathematics, Vol. 4, Elsevier, 1964.
[22] A. Sommerfeld, Partielle Differentialgleichungen. Geest & Portig, Leipzig, 1954.

[23] W. A. Strauss, Partial Differential Equations. An Introduction. Second edition, Wiley-Interscience, 2008. German translation: Partielle Differentialgleichungen. Vieweg, 1995.

[24] M. E. Taylor, Pseudodifferential Operators. Princeton, New Jersey, 1981.

[25] G. N. Watson, A Treatise on the Theory of Bessel Functions. Cambridge, 1952.

[26] P. Wilmott, S. Howison and J. Dewynne, The Mathematics of Financial Derivatives, A Student Introduction. Cambridge University Press, 1996.

[27] K. Yosida, Functional Analysis. Grundlehren, Vol. 123, Springer-Verlag, Berlin, 1965.

Index

d'Alembert formula 108
asymptotic expansion 84
basic lemma 16
Beltrami equations
Black-Scholes equation 164
Black-Scholes formulae 165, 169
boundary condition 14, 15
capillary equation 21
Cauchy-Kovalevskaya theorem 63, 84
Cauchy-Riemann equations 13
characteristic equation 28, 33, 41, 47, 74
characteristic curve 28
characteristic strip 47
classification
  linear equations of second order 63
  quasilinear equations of second order 73
  systems of first order 74
cylinder surface 29
diffusion 163
Dirac distribution 177
Dirichlet integral 17
Dirichlet problem 181
domain of dependence 108, 115
domain of influence 109
elliptic 73, 75
  nonuniformly elliptic 73
  second order 175
  system 75, 82
  uniformly elliptic 73
Euler-Poisson-Darboux equation 111
Euler equation 15, 17
first order equations 25
  two variables 40
  Rn 51
Fourier transform 141
  inverse Fourier transform 142
Fourier's method 126, 162
functionally dependent 13
fundamental solution 175, 176
Gamma function 176
gas dynamics 79
Green's function 183
  ball 186
  conformal mapping 190
Hamilton function 54
Hamilton-Jacobi theory 53
harmonic function 179
heat equation 14
  inhomogeneous 155
heat kernel 152, 153
helicoid 30
hyperbolic equation 107
  inhomogeneous equation 117
  one dimensional 107
  higher dimension 109
  system 74
initial conditions 15
initial-boundary value problem
  uniqueness 134
  string 125
  membrane 128, 129
initial value problem of Cauchy 33, 48
integral of a system 28
Jacobi theorem 55
Kepler 56
Laplace equation 13, 20
linear elasticity 83
linear equation 11, 25
  second order 63
maximum principle
  heat equation 156
  parabolic 161
  harmonic function 180
Maxwell equations 76
mean value formula 180
minimal surface equation 18
Monge cone 42, 43
multi-index 90
multiplier 12
Navier-Stokes 83
Neumann problem 20, 182
Newton potential 13
normal form 69
noncharacteristic curve 33
option
  call 165
  put 169
parabolic equation 151
  system 75
Picard-Lindelöf theorem 9
Poisson's formula 152
Poisson's kernel 187
pseudodifferential operators 146, 147
quasiconform mapping 76
quasilinear equation 11, 31
real analytic function 90
resonance 134
Riemann's method 120
Riemann problem 61
Schrödinger equation 137
separation of variables 126
singularity function 176
speed
  plane 78
  relative 81
  surface 81, 83
  sound 82
spherical mean 110
strip condition 46
telegraph equation 78
wave equation 14, 107, 131
wave front 51
volume potential 191