STAT 243 Autumn 2024


STAT 24300 - Numerical Linear Algebra

Assignment 1: Solving linear systems of equations

Question 1: Gaussian Elimination and Back-Substitution


Consider the linear system:
5x1 + 3x2 + 4x3 − x4 = 0
−10x1 − 2x2 − 6x3 + 2x4 = 4
−2x2 + 2x3 − 3x4 = 1
5x1 + 3x2 − 2x3 = 4
Solve the linear system via elimination. That is, complete the following steps:

1. Write the linear system in the form Ax = b and construct the augmented matrix [A|b].

2. Apply row operations (add multiples of the rows together) to the augmented matrix until it is upper
triangular (this is row-reduction). Write down the operations used in each step.

3. Solve the reduced system via back-substitution.

4. Compute Ax to check your solution.
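
After working the steps by hand, you can check your answer in code. Below is a minimal elimination-and-back-substitution sketch in Python/NumPy; the helper name solve_via_elimination is our own, and the sketch omits pivoting, which suffices here because this system never produces a zero pivot:

```python
import numpy as np

def solve_via_elimination(A, b):
    """Row-reduce the augmented matrix [A|b] to upper-triangular form,
    then solve by back-substitution. Sketch only: no pivoting,
    assumes every pivot encountered is nonzero."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for j in range(n):                     # eliminate entries below pivot (j, j)
        for i in range(j + 1, n):
            M[i] -= (M[i, j] / M[j, j]) * M[j]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # back-substitution, bottom row up
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:n]) / M[i, i]
    return x

# The system from Question 1
A = np.array([[5, 3, 4, -1],
              [-10, -2, -6, 2],
              [0, -2, 2, -3],
              [5, 3, -2, 0]])
b = np.array([0.0, 4.0, 1.0, 4.0])
x = solve_via_elimination(A, b)
print(np.allclose(A @ x, b))   # step 4: verify Ax = b; prints True
```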

Question 2: Properties of the Products.


This is a proof question. Don’t panic. These proofs are simple. When lost, start by writing the equality
that needs to hold, then expand each side according to its definition. Continue expanding until the
two sides match. Note that these proofs are much easier if you use the properties established for each
simpler operation to prove the corresponding property for more complicated products. (E.g., think of
matrix-vector multiplication as a sequence of inner products, or as a linear combination of columns. Then,
whatever you know about inner products and linear combinations carries over to matrix-vector products.)
You must do these proofs in general. Do not check specific special cases unless stuck.
Use the standard properties of scalar addition and multiplication to prove the following statements:

1. Commutativity of Inner Products: For any compatible vectors v and w, show that the inner
product commutes, i.e. v ⊺ w = w ⊺ v.

2. Distributivity of Products: For any compatible vectors v, w, z, show that v ⊺ (w + z) distributes.


Show that, for any compatible A, the product A(v + w) distributes.

3. Associativity of Scalar Products: For any scalar λ, and compatible vectors v, w, show that
(λv)⊺ w = v ⊺ (λw) = λ(v ⊺ w). For any compatible matrix A show that (λA)v = A(λv) = λ(Av).
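
As a template for the expansion strategy described above, the simplest case (part 1) can be written out as a chain of equalities, each justified by a definition or a scalar property:

```latex
v^{\top} w
  = \sum_{i=1}^{n} v_i w_i   % definition of the inner product
  = \sum_{i=1}^{n} w_i v_i   % commutativity of scalar multiplication
  = w^{\top} v               % definition of the inner product, read in reverse
```

The remaining proofs follow the same pattern, with distributivity or associativity of scalar operations supplying the middle step.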

Question 3: Applications of Linear Systems (Polynomial Interpolation)


A maxim to start: linear systems are everywhere in practical problems - even when the original problems
are far from linear.
Consider the following interpolation problem. An unknown function f (x) is sampled at x1 = −1, x2 = 0,
x3 = 1, and x4 = 2. The function returns f (x1 ) = −5, f (x2 ) = 1, f (x3 ) = 5 and f (x4 ) = 25. We
aim to find an interpolating function, that is, some other function g(x) such that g(xi ) = f (xi ) at each
i = 1, 2, 3, 4. In other words, we want g to match f at each sample.

To constrain the problem, we will restrict the family of allowed g. Suppose, for example, that g is a cubic
function. Then:
g(x) = c0 + c1 x + c2 x^2 + c3 x^3 (1)
for some choice of the coefficients c = [c0 , c1 , c2 , c3 ]. Can we find a coefficient vector c such that g
interpolates f at the samples? How can we guarantee that such a set of coefficients exists? We can
answer both questions by translating the cubic interpolation problem into a linear system. We’ll focus on
the first.

1. Set up a linear system relating the coefficients to the samples. (Hint: plug xi into g(xi ) = f (xi ) for
each i. This will produce a system of equations which is linear in the coefficients c.)

2. Solve the system using Gaussian elimination.

3. Check that your interpolant g(x) correctly interpolates the samples (check g(xi ) = f (xi )).
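
Once you have done steps 1-3 by hand, a NumPy sketch of the same pipeline can serve as a check (np.vander with increasing=True builds the matrix whose row i is [1, xi, xi^2, xi^3], exactly the system from step 1):

```python
import numpy as np

# Sample points and values from Question 3
xs = np.array([-1.0, 0.0, 1.0, 2.0])
fs = np.array([-5.0, 1.0, 5.0, 25.0])

# Each condition g(x_i) = f(x_i) is one linear equation in c = [c0, c1, c2, c3];
# row i of the Vandermonde matrix is [1, x_i, x_i^2, x_i^3].
V = np.vander(xs, increasing=True)
c = np.linalg.solve(V, fs)       # coefficients of the interpolating cubic

print(np.allclose(V @ c, fs))    # step 3: g(x_i) = f(x_i) at every sample; True
```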

A note on this problem: interpolation is an important method for estimating intermediate values of a
function that is only known up to samples. At first glance, polynomial interpolation is far from a linear
problem. Here, we are trying to mold a cubic function to a set of fixed samples. Yet, with a little
work, the problem is linear. In fact, all polynomial interpolation is linear. *How does your work in
step 1 extend to general polynomials and general sample sets? Even more generally, if a function is
of the form g(x) = c1 a1 (x) + c2 a2 (x) + · · · + cn an (x) for some fixed set of functions {a1 (x), . . . , an (x)}, then the
corresponding interpolation problem will be linear in the coefficients c. The fundamental idea here is that
the set of all functions of this kind is a vector space. In this case, a vector space of functions. Then, g
is a linear combination of a set of basis functions (analogous to a linear combination of a collection of
vectors). In the standard setting, we attempt to express a vector b as a linear combination of a collection
of column vectors {a1 , . . . , an }. Here, we aim to express an interpolant g (constrained by the samples) as a
linear combination of the functions {a1 (x), . . . , an (x)}.
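
The general point can be made concrete in code. The basis functions below (constant, sine, exponential) and the sample values are illustrative choices, not part of the assignment; the structure of the linear system is identical to the polynomial case:

```python
import numpy as np

# Interpolation with non-polynomial basis functions:
# g(x) = c1*a1(x) + c2*a2(x) + c3*a3(x), linear in the coefficients c.
basis = [np.ones_like, np.sin, np.exp]   # illustrative choice of {a_i(x)}
xs = np.array([0.0, 1.0, 2.0])
fs = np.array([1.0, 3.0, -2.0])          # arbitrary sample values

# Column j holds a_j evaluated at the samples; row i is one equation
# g(x_i) = f(x_i), so the system is linear in c regardless of the a_j.
M = np.column_stack([a(xs) for a in basis])
c = np.linalg.solve(M, fs)

print(np.allclose(M @ c, fs))            # g matches f at every sample; True
```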

Question 4: Singular Systems from the Row Perspective.

1. Explain how to test whether a system is singular during row reduction (you may assume the system
has as many equations as unknowns). If a system is singular, how would you test whether a solution
exists, or whether there are infinitely many? *Using the row perspective, explain why singularity is
a property of A, not of b, in the linear system Ax = b.

2. Solve for x such that Ax = b, where


        [  1  1  2  0 ]         [  3 ]
    A = [  2  1  3  1 ] ,   b = [  4 ]
        [ -1  0 -1 -1 ]         [ -1 ]
        [  1  1  2  0 ]         [  3 ]
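
For intuition only (the assignment asks for the row-reduction argument, not a library call), a NumPy rank comparison sketches both tests at once:

```python
import numpy as np

# The system from part 2 (note that rows 1 and 4 of A are identical).
A = np.array([[ 1., 1.,  2.,  0.],
              [ 2., 1.,  3.,  1.],
              [-1., 0., -1., -1.],
              [ 1., 1.,  2.,  0.]])
b = np.array([3., 4., -1., 3.])

# Singularity is a property of A alone: rank(A) < n, i.e. row reduction
# produces an all-zero row on the left side of the augmented matrix.
rank_A = np.linalg.matrix_rank(A)
print(rank_A < 4)           # True: A is singular

# Whether solutions exist depends on b: compare rank([A|b]) with rank(A).
rank_aug = np.linalg.matrix_rank(np.hstack([A, b[:, None]]))
print(rank_aug == rank_A)   # True here: consistent, so infinitely many solutions
```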

Question 5: Nullspace
Let:
        [  1  0  3 -2 ]
    A = [ -2  1  0  2 ]                                               (2)
        [  1  1  9 -4 ]
        [ -1  1  3  0 ].

1. Solve the homogeneous equation Ax = 0 where 0 is the 4-entry vector of all zeros.

2. What is the dimension of the nullspace (nullity) of A? (Hint: how many free variables did you
need?)

3. Let:
        [  -6 ]
    b = [   2 ]
        [ -16 ]
        [  -4 ].
Find all solutions to the system Ax = b and express your solution as a linear combination of a
particular solution and the nullspace of A.
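
To check parts 1-3 numerically, here is an SVD-based sketch (not the hand method the assignment expects; the SVD and least-squares calls are just a convenient way to produce a nullspace basis and one particular solution):

```python
import numpy as np

A = np.array([[ 1., 0., 3., -2.],
              [-2., 1., 0.,  2.],
              [ 1., 1., 9., -4.],
              [-1., 1., 3.,  0.]])
b = np.array([-6., 2., -16., -4.])

# Nullspace basis from the SVD: the right singular vectors paired with
# (numerically) zero singular values span null(A).
U, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10 * s[0]).sum())
N = Vt[rank:].T              # columns form a basis of null(A)
print(N.shape[1])            # nullity of A

# One particular solution (least squares returns an exact one here,
# since the system is consistent).
xp, *_ = np.linalg.lstsq(A, b, rcond=None)

# Every solution has the form xp + N z for some coefficient vector z.
z = np.array([1.0, -2.0])
print(np.allclose(A @ (xp + N @ z), b))   # True
```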

4. A⊤ is defined as the transpose of A. If A is an m × n matrix with columns {a1 , a2 , · · · , an },
with each ai ∈ Rm for i = 1, · · · , n, then A⊤ is the n × m matrix with rows a1⊤ , a2⊤ , · · · , an⊤ .
Show that all vectors in null(A) are orthogonal to all vectors in range(A⊤ ).
Hint: First, prove that (Ax)⊤ = x⊤ A⊤ .
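
A numeric sanity check of the claim (not a substitute for the proof). The example matrix here is a hypothetical one with a forced dependent row, so its nullspace is nontrivial:

```python
import numpy as np

# Build a singular matrix: forcing row 4 = row 1 + row 2 makes null(A) nontrivial.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A[3] = A[0] + A[1]

# The right singular vector for the smallest singular value spans null(A) here.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]
print(np.allclose(A @ x, 0))        # x lies in null(A)

# Any vector A.T @ w lies in range(A.T); the hint gives
# x @ (A.T @ w) = (A @ x) @ w = 0, i.e. orthogonality.
w = rng.standard_normal(4)
print(abs(x @ (A.T @ w)) < 1e-10)   # True
```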
