## Numerical Analysis

### MATH-UA0252-1, Spring 2014

Lectures: Tuesdays and Thursdays, 12:30 - 1:45 pm. CIWWH 201

Instructor:
Olof Widlund

Office hours: 30 minutes before and after each class or by appointment.

### Recitation sessions: Wednesdays, 8:00 - 9:15 am. CIWWH 201.

Instructor: Juan Calvo

Office hours: Wednesdays 10-11am, WWH605.

**Coordinates of Olof Widlund**

Office: WWH 612

Telephone: 998-3110

Email: widlund at cims.nyu.edu

**Coordinates of Juan Calvo**

Office: WWH 607

Telephone: 998-3179

Email: calvo at cims.nyu.edu

**Required Text:**
An Introduction to Numerical Analysis by Endre Süli and David Mayers.
Cambridge University Press.

**Exams**

There will be a midterm exam, a final exam, and relatively frequent
short quizzes. Scores will be made available on NYU Classes.
The final exam will be held on May 15, 12 noon to 1:50 pm,
in the same classroom, WWH 201.
**To the final you may bring two regular-sized sheets of paper
covered on both sides with notes written by yourself.**
The textbook and other notes will not be allowed.
Here is a collection of nine questions from earlier finals.
The midterm exam was held on March 6, 12:30-1:45 pm.
Here it is, with solutions.
Here are five sample questions that could be given on a midterm exam.
You may bring one regular-sized sheet of paper covered on both sides with
notes written by yourself to the midterm exam. The textbook and other notes
will not be allowed. To understand what topics will be covered by the midterm
and final exams, have a look at the Course Content section of this home page.
The first quiz was held on Thursday 2/13. Here it is, with solutions.
The second quiz was held on Thursday 4/3. Here it is, with solutions.
The third quiz was held on Thursday 4/24. Here it is, with solutions.
**Homework**

There will be regular homework assignments, including programming
using Matlab, Octave, or any reasonable alternative.
Scores will be made available on NYU Classes.
When turning in programming assignments, please provide a listing of
your program and output from runs with actual data that shows that
your program works as intended.
It is important that you do the homework yourself, but if you get stuck,
I encourage you to consult with other students.
However, when you get help,
it's important to acknowledge it in writing. Passing off other people's
work as your own is called plagiarism and is not acceptable.
Please staple everything together and arrange the problems in the
order in which they were assigned.
The best way of turning in homework is to give it to me
personally, in class or in my office. If I am not in my office, you can
slide it under my door. Do not put homework in my mailbox.

**Homework Assignments:**

Homework set 1
Assigned January 31, due at 12:00 noon on Friday February 14.
Homework set 2
Assigned February 17, due at 12:00 noon on Friday February 28.
Homework set 3
Assigned March 11, due at 12:30 pm on Friday March 28.
Homework set 4
Assigned March 31, due at 12:30 pm on Friday April 11.
Homework set 5
Assigned April 13, due at 12:30 pm on Friday April 25.
Homework set 6
Assigned April 24, due at 12:30 pm on May 6.

**Course Content. All page and section references are to the textbook.**

For each week, there will be a short description of the content
of the lectures.
Note that you should make sure that you obtain and save
all handouts; they should be an important part of your reading.
Request copies if you miss any of them.
Week 1: Chapter 1 of textbook. Finding roots of one nonlinear
equation of one real variable. Brouwer's fixed point theorem. Contractions
and the contraction mapping theorem. Rates of convergence. Linear and
super-linear convergence. The bisection algorithm and its rate of convergence.
Newton's method and its convergence rate in case of a simple root and a
double root. The secant method. The Illinois method; this method will be
further described in the first homework set. Sections not covered: 1.7 and 1.8.
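
As an illustration, here is a minimal sketch of the bisection algorithm and Newton's method applied to f(x) = x^2 - 2, written in Python (one of the "reasonable alternatives" to Matlab allowed by the homework section). The function names and tolerances are my own choices, not from the course materials.

```python
# Sketch: bisection vs. Newton's method for solving f(x) = 0,
# here f(x) = x^2 - 2, whose positive root is sqrt(2).

def bisection(f, a, b, tol=1e-12):
    """Halve the bracket [a, b] until it is shorter than tol."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:      # root lies in [a, m]
            b = m
        else:                   # root lies in [m, b]
            a, fa = m, f(m)
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, maxit=50):
    """Iterate x <- x - f(x)/f'(x); quadratic convergence at a simple root."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
print(bisection(f, 1.0, 2.0))   # ~1.41421356...
print(newton(f, df, 1.0))       # ~1.41421356...
```

Bisection gains one binary digit per step (linear convergence with rate 1/2), while Newton roughly doubles the number of correct digits per step near a simple root.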
Week 2: Review of linear systems of algebraic equations for the case
of n unknowns and n equations; linear independence of the columns (and the
rows) of the matrix guarantees that there is a unique solution. Gaussian
elimination. What operations can be done at the same time? Expressing the
process in terms of multiplication of the coefficient matrix from the left
by (n-1) special unit lower triangular matrices. How to find the elements
of the unit lower triangular matrix in the factorization PA=LU. The need
for pivoting. Partial pivoting and an example of a matrix of order 2 for
which partial pivoting is required in order to avoid very poor accuracy of
the solution. Sections not covered: 2.8 and 2.10.
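
The factorization with row interchanges can be sketched as follows; this is my own plain-Python illustration (no NumPy), not a course handout.

```python
# Sketch: Gaussian elimination with partial pivoting, producing a row
# permutation perm (encoding P), a unit lower triangular L, and an upper
# triangular U with P A = L U.

def lu_partial_pivot(A):
    n = len(A)
    U = [row[:] for row in A]           # working copy, becomes upper triangular
    L = [[0.0] * n for _ in range(n)]   # stores the multipliers
    perm = list(range(n))               # row permutation
    for k in range(n - 1):
        # choose the largest pivot in column k to keep multipliers <= 1
        p = max(range(k, n), key=lambda i: abs(U[i][k]))
        U[k], U[p] = U[p], U[k]
        L[k], L[p] = L[p], L[k]
        perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    for i in range(n):
        L[i][i] = 1.0                   # unit diagonal
    return perm, L, U

perm, L, U = lu_partial_pivot([[2.0, 1.0], [4.0, 3.0]])
# rows are swapped so the pivot is 4; multiplier 0.5; U = [[4,3],[0,-0.5]]
```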
Week 3: How to decide if we can compute the values of a function with
good relative accuracy. What matters is x f'(x)/f(x), known as the
condition number of the problem. Norms of vectors in R^n. A family of ell_p
norms for 1 \leq p \leq infinity. A proof that all these norms satisfy
the three requirements; to prove the triangle inequality is the hardest task.
While doing these proofs, we develop the Cauchy-Schwarz and Hoelder
inequalities. Norms of matrices for the case of p=1, p=2, and p=infinity.
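
These norms are easy to compute directly; here is a small Python sketch of my own (function names are not from the course) of the ell_p vector norms and the induced matrix norms for p = 1 (maximum absolute column sum) and p = infinity (maximum absolute row sum).

```python
# Sketch: ell_p vector norms and two induced matrix norms.

def norm_p(x, p):
    if p == float('inf'):
        return max(abs(t) for t in x)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def matrix_norm_1(A):
    # induced by the ell_1 vector norm: maximum absolute column sum
    n_cols = len(A[0])
    return max(sum(abs(row[j]) for row in A) for j in range(n_cols))

def matrix_norm_inf(A):
    # induced by the ell_infinity vector norm: maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

x = [3.0, -4.0]
print(norm_p(x, 1), norm_p(x, 2), norm_p(x, float('inf')))  # 7.0 5.0 4.0
A = [[1.0, -2.0], [3.0, 4.0]]
print(matrix_norm_1(A), matrix_norm_inf(A))                 # 6.0 7.0
```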
Week 4: More on the norm of matrices defined by vector norms. The norm
of a product of matrices can be bounded by the product of their norms. The
condition number of the solution of a linear system of equations: the relative
change of the norm of the solution as a function of the relative change of
the right hand side and of the relative change of the coefficient matrix.
Symmetric, positive definite matrices; no pivoting is necessary when factoring
them by Cholesky's algorithm. This algorithm can also be used to decide
if a symmetric matrix is positive definite. How to save arithmetic and
storage when using Cholesky. Diagonally dominant matrices; another family
for which pivoting is not needed. Linear least squares problems when we
have more linear equations than unknowns. The normal equations. How
to fit data to a model defined by a polynomial. Why one should try to
develop an algorithm that does not require the computation of the matrix
of the normal equations; to be continued. Sections not covered: 3.4 and 3.5.
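
Cholesky's algorithm mentioned above can be sketched in a few lines; this is my own Python illustration, and the failure of the square root doubles as the test for positive definiteness described in the lecture.

```python
# Sketch: Cholesky factorization A = L L^T of a symmetric positive
# definite matrix. A nonpositive value under the square root proves
# that A is not positive definite.
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(d)
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

L = cholesky([[4.0, 2.0], [2.0, 5.0]])
# L = [[2, 0], [1, 2]]: check [[2,0],[1,2]] times its transpose = [[4,2],[2,5]]
```

Only the lower triangle of A is ever used, which is the source of the savings in arithmetic and storage mentioned above.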
Week 5: Linear independence and the uniqueness of the solutions of
least squares problems. Householder transformations as in Section 5.5.
The orthogonality of any Householder matrix; it can most conveniently be stored
by just one vector. The basic geometry of Householder transformations.
The QR factorization of an m-by-n matrix in terms of a product of
(n-1) Householder matrices and an m-by-n matrix with a lead-off n-by-n
upper triangular matrix; this matrix is nonsingular if A has linearly
independent columns. How to solve a linear least squares problem using
a QR factorization. How Householder matrices can be used to compute
a symmetric tridiagonal matrix with the same eigenvalues as a given
symmetric m-by-m matrix. Further comments on Householder transformations
and on manipulating block matrices. Solving systems of non-linear equations
where the number of unknowns and the number of equations are the same.
Contraction mapping and a derivation of Newton's method; a linear system
of equations must be solved in each iteration where the matrix elements
are partial derivatives of the different functions with respect to the
different variables. This matrix is known as the Jacobian. If it is well
behaved close to the solution, we will have fast convergence from any
starting approximation close to the solution. A review of the algebraic
eigenvalue problem, in particular for symmetric matrices.
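
Newton's method for systems, derived above, can be illustrated on a 2-by-2 example; the system, the starting point, and the use of Cramer's rule for the 2-by-2 Jacobian solve are my own choices for this sketch.

```python
# Sketch: Newton's method for the system F(x, y) = 0 with
# F(x, y) = (x^2 + y^2 - 1, x - y); the first-quadrant solution is
# x = y = 1/sqrt(2). Each iteration solves J s = -F, here by
# Cramer's rule since the Jacobian J is only 2-by-2.

def newton_system(x, y, maxit=20, tol=1e-12):
    for _ in range(maxit):
        f1, f2 = x * x + y * y - 1.0, x - y
        # Jacobian: rows of partial derivatives of f1 and f2
        j11, j12 = 2.0 * x, 2.0 * y
        j21, j22 = 1.0, -1.0
        det = j11 * j22 - j12 * j21
        sx = (-f1 * j22 + f2 * j12) / det   # Cramer's rule for J s = -F
        sy = (j21 * f1 - j11 * f2) / det
        x, y = x + sx, y + sy
        if abs(sx) + abs(sy) < tol:
            break
    return x, y

x, y = newton_system(1.0, 0.5)
```

For larger systems the step would instead be computed by Gaussian elimination applied to the Jacobian, one linear solve per iteration, exactly as described above.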
Week 6: Using eigenvalues and eigenvectors to solve initial value problems
for simple linear systems of ordinary differential equations. The danger of
using the roots of the characteristic equation to compute the eigenvalues
of a matrix. Two theorems by Gerschgorin. Determinants of symmetric tridiagonal
matrices. Cauchy's interlace theorem and how it can be combined with a
bisection algorithm to locate the eigenvalues of a tridiagonal matrix.
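
Gerschgorin's first theorem is simple enough to compute directly; here is a small Python sketch of my own that returns the disks (center, radius) for a given matrix.

```python
# Sketch: Gerschgorin disks. Every eigenvalue of A lies in at least one
# disk centered at a diagonal entry a_ii with radius equal to the sum of
# the absolute values of the off-diagonal entries in row i.

def gerschgorin_disks(A):
    disks = []
    for i, row in enumerate(A):
        radius = sum(abs(a) for j, a in enumerate(row) if j != i)
        disks.append((row[i], radius))   # (center, radius)
    return disks

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(gerschgorin_disks(A))  # [(4.0, 1.0), (3.0, 2.0), (2.0, 1.0)]
```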
Week 7: Sturm sequences and how we can use Cauchy's theorem to determine
the number of eigenvalues of a tridiagonal matrix that are below a given number.
Combining this idea with a bisection algorithm. An example of Wilkinson, which
shows that the roots of a polynomial can be very poorly determined by its
coefficients; this rules out the use of the characteristic polynomial to
find the eigenvalues of a matrix numerically. Inverse iterations and the use
of eigenvector expansions. Two perturbation theorems including that of Bauer
and Fike. Rayleigh quotients and how to improve the approximation of an
eigenvalue using an approximation of its eigenvector. Jacobi's algorithm to
find all the eigenvalues of a symmetric matrix.
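
The Sturm-sequence count described above can be sketched as follows; this is my own Python illustration, and the convention of counting a zero minor as a sign change is one common choice, not necessarily the one used in lecture.

```python
# Sketch: Sturm-sequence count for a symmetric tridiagonal matrix with
# diagonal a and off-diagonal b. The number of sign changes in the
# sequence of leading principal minors of (T - lambda I), computed by the
# three-term recurrence, equals the number of eigenvalues below lambda.

def count_below(a, b, lam):
    d_prev, d = 1.0, a[0] - lam
    changes = 1 if d < 0 else 0
    for k in range(1, len(a)):
        d_prev, d = d, (a[k] - lam) * d - b[k - 1] ** 2 * d_prev
        if d * d_prev < 0 or d == 0:   # treat a zero minor as a sign change
            changes += 1
    return changes

# T = tridiag(-1, 2, -1) of order 3 has eigenvalues 2-sqrt(2), 2, 2+sqrt(2)
a, b = [2.0, 2.0, 2.0], [-1.0, -1.0]
print(count_below(a, b, 2.5))  # 2
```

Bisection on lambda, using this count, brackets any single eigenvalue to arbitrary accuracy.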
Week 8: Remarks on the midterm exam.
A short introduction to the QR algorithm to find eigenvalues of
matrices. Polynomial interpolation; there is a unique polynomial of degree
at most n that matches any set of given values y_k at (n+1) distinct points
x_k, k = 0, 1, ..., n. The Lagrange formula. Barycentric interpolation formulas;
see **handout marked Chapter 5.**
These interpolation formulas greatly reduce
the amount of arithmetic. Interpolating at equidistant points and at Chebyshev
points. Runge's example and a discussion of Lebesgue constants;
see **handout marked Chapter 15.**
Hermite interpolation and a formula similar to, but more complicated than,
the Lagrange formula.
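
The arithmetic savings of the barycentric formulas can be seen in a short sketch; this is my own Python code (not the Chapter 5 handout), using the second barycentric form.

```python
# Sketch: barycentric Lagrange interpolation. The weights w_k cost O(n^2)
# once; each subsequent evaluation then costs only O(n).

def bary_weights(xs):
    w = []
    for k, xk in enumerate(xs):
        prod = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                prod *= (xk - xj)
        w.append(1.0 / prod)
    return w

def bary_eval(xs, ys, w, x):
    num = den = 0.0
    for xk, yk, wk in zip(xs, ys, w):
        if x == xk:
            return yk            # evaluation exactly at an interpolation node
        t = wk / (x - xk)
        num += t * yk
        den += t
    return num / den

xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]             # values of 1 + x^2 at the nodes
w = bary_weights(xs)
print(bary_eval(xs, ys, w, 3.0)) # value of 1 + x^2 at 3, up to rounding
```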
Week 9: Numerical differentiation using polynomial interpolation. Error
bounds for the resulting formula. Numerical quadrature, in particular the
trapezoidal rule and Simpson's formula. Composite numerical quadrature.
Error bounds, in particular for Simpson's formula. Adaptive Simpson and
how to avoid repeating function evaluations at the same points; see
**handout on adaptive Simpson.**
The trapezoidal
formula with end point corrections, which can be obtained by integrating
the Hermite interpolation polynomial using two points. If we use it for
a composite formula and use the same subinterval lengths, only two values
of the derivative are required. Extrapolation using the composite
trapezoidal rule with different numbers of points. Spline interpolation, which
provides twice continuously differentiable piecewise cubics matching the given
data; computing the spline leads to a tridiagonal linear system of equations
with a symmetric positive definite matrix, which allows the use of Cholesky's
algorithm and results in a linear-time algorithm.
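
The composite rules discussed above can be sketched in a few lines of Python (my own code, not a handout); the printed errors illustrate the O(h^2) accuracy of the trapezoidal rule versus the O(h^4) accuracy of Simpson's rule.

```python
# Sketch: composite trapezoidal and Simpson rules on [a, b] with n
# subintervals (n must be even for Simpson).
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

def simpson(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)  # weights 4, 2, 4, 2, ..., 4
    return h * s / 3.0

exact = 2.0  # integral of sin over [0, pi]
print(abs(trapezoid(math.sin, 0.0, math.pi, 64) - exact))
print(abs(simpson(math.sin, 0.0, math.pi, 64) - exact))
```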
Week 10: More on linear and cubic spline interpolation; they can also be
characterized by minimizing certain integrals subject to the interpolation
conditions at the prescribed points. Inner products of functions defined
by an integral and a weight function. Cauchy-Schwarz inequality for these
inner products; the proof is almost identical to the proof for the inner
products in R^n. Chebyshev polynomials defined by using cosine and arccos
functions. A proof that we obtain polynomials of the correct degree. A
three-term recurrence. The orthogonality of the Chebyshev polynomials
with respect to a specific inner product. How to define a norm from an
inner product. The best approximation of a function f(x) by polynomials
and with respect to norms defined by weighted integrals; the use of
orthogonal polynomials is very convenient. Three term recurrences for
all sets of orthogonal polynomials. Legendre and Laguerre polynomials.
A proof that all roots of the orthogonal polynomials are simple and
located in the interval (a,b).
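
The three-term recurrence for the Chebyshev polynomials can be checked directly against the cosine definition; this short Python sketch is my own.

```python
# Sketch: Chebyshev three-term recurrence
#   T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x),
# verified against the defining property T_n(cos t) = cos(n t).
import math

def chebyshev_T(n, x):
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

x = 0.3
for n in range(6):
    assert abs(chebyshev_T(n, x) - math.cos(n * math.acos(x))) < 1e-12
```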
Week 11: Gaussian quadrature; there are different numerical quadrature
rules for each set of orthogonal polynomials. A proof that by using
n+1 quadrature nodes and a good choice of weights, we can integrate
all polynomials of degree 2n+1 exactly. The quadrature nodes must be chosen
as the zeros of the orthogonal polynomial of degree n+1. Details about the
Laguerre case: The Gauss quadrature rule and the Radau numerical quadrature
rule.
A **handout on Floating Point Arithmetic**
and a few first comments on
different ways of representing real numbers on modern computers.
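
The exactness claim for Gaussian quadrature can be checked for n = 1 with a small Python sketch of my own: the 2-point Gauss-Legendre rule on [-1, 1] has nodes at the zeros of the Legendre polynomial P_2 and integrates all cubics exactly.

```python
# Sketch: 2-point Gauss-Legendre rule on [-1, 1]; nodes +-1/sqrt(3),
# both weights equal to 1. With n+1 = 2 nodes it is exact for all
# polynomials of degree up to 2n+1 = 3, but not degree 4.
import math

def gauss2(f):
    node = 1.0 / math.sqrt(3.0)
    return f(-node) + f(node)

print(gauss2(lambda x: x ** 3 + x ** 2))  # exact value 2/3, up to rounding
print(gauss2(lambda x: x ** 4))           # not exact: the true integral is 2/5
```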
Week 12: Representing integers using 32 bits and the 2's complement
idea. More on IEEE floating point numbers.
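
The 2's complement idea can be demonstrated in a few lines of Python (my own sketch; the function names are not from the lectures): a negative value x is stored as the unsigned bit pattern of 2^w + x, so the leading bit acts as a sign bit.

```python
# Sketch: two's complement encoding and decoding of signed w-bit integers.

def to_twos_complement(x, w=32):
    assert -(1 << (w - 1)) <= x < (1 << (w - 1)), "out of range"
    return x & ((1 << w) - 1)        # the w-bit pattern, as an unsigned value

def from_twos_complement(u, w=32):
    # patterns with the leading bit set represent negative numbers
    return u - (1 << w) if u >= (1 << (w - 1)) else u

print(format(to_twos_complement(-1, 8), '08b'))   # 11111111
print(from_twos_complement(0b11111110, 8))        # -2
```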
Week 13: More about IEEE floating point numbers. The outcome of
simple arithmetic operations. A recollection of how a choice is made
when designing Householder transformations. The subnormal numbers and
how they can prevent underflow. Norms on function spaces, in particular
the maximum norm of continuous functions on finite intervals. Existence
of best "min-max" polynomials. Two theorems characterizing the min-max
polynomials. Uniqueness of the min-max polynomial. Best approximation
of x^{n+1} by a polynomial of degree n.
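
Several of these floating-point facts can be verified directly in Python, which uses IEEE double precision; this sketch is my own, not a course handout.

```python
# Sketch: probing IEEE double precision. Machine epsilon is the gap
# between 1 and the next representable number; below the smallest normal
# number 2^-1022, the subnormal numbers provide gradual underflow.
import sys

eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0
print(eps == 2.0 ** -52, eps == sys.float_info.epsilon)  # True True

smallest_normal = 2.0 ** -1022
subnormal = smallest_normal / 2.0   # representable thanks to subnormals
print(subnormal > 0.0)              # True: gradual underflow, not a flush to zero
print(2.0 ** -1074 / 2.0 == 0.0)    # True: below the smallest subnormal
```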
Week 14: Review and problem solving including problems from old
exams.