Numerical Analysis
MATH-UA0252-1,
Spring 2013,
Lectures: Tuesdays and Thursdays, 12:30 - 1:45 pm, WWH 312
Instructor:
Olof Widlund
Office hours: 30 minutes before and after each class or by appointment.
Recitation sessions: Wednesdays, 8:00 - 9:15 am, Goddard Hall B03
Instructor: Juan Calvo
Office hours: WWH 605, Wednesdays 1:00 - 1:45 pm
and Thursdays 3:00 - 3:45 pm.
Coordinates of Olof Widlund
Office: WWH 612
Telephone: 998-3110
Email: widlund at cims.nyu.edu
Coordinates of Juan Calvo
Office: WWH 607
Telephone: 998-3179
Email: calvo at cims.nyu.edu
Exams
The first quiz was given on 2/21. Here
it is with answers.
The second quiz was given on 3/12. Here
it is with answers.
The midterm exam was given on Thursday March 28. It covered material
presented in class before the Spring break.
Here are some questions from old exams.
And here is the midterm with solutions.
The third quiz was given on 4/16. Here
it is with answers.
The fourth quiz was given on 5/2; it covered floating point arithmetic
and Chapter 8.
Here it is with answers.
The final exam, which covers material from the whole semester,
will be held
on Thursday May 16 between 12 noon and 1:50 pm in WWH 312. To the final you
may bring two pages of notes, written by you and covered on both sides.
Here are some questions from old exams.
Homework
There will be regular homework assignments, including programming
using Matlab or any reasonable alternative. Scores will be made
available on NYU Classes.
When turning in programming assignments, please provide a listing of
your program and output from runs with actual data that shows that
your program works as intended.
It is important that you do the homework yourself, but if you get stuck,
I encourage you to consult with other students.
However, when you get help,
it's important to acknowledge it in writing. Passing off other people's
work as your own is called plagiarism and is not acceptable.
Please staple everything together and arrange the problems in the
order in which they were assigned.
The best way of turning in homework is to give it to me
personally, in class or in my office. If I am not in my office, you can
slide it under my door. Do not put homework in my mailbox.
Homework Assignments:
Homework set 1
Assigned February 6, due at 12:30 pm on Tuesday February 19.
Homework set 2
Assigned February 18, due at 12:30 pm on Tuesday March 5.
Homework set 3
Assigned February 28, due at 12:30 pm on Thursday March 14.
Homework set 4
Assigned March 26, due at 12:30 pm on Tuesday April 9.
Homework set 5
Assigned April 9, due at 12:30 pm on Thursday April 18.
Homework set 6
Assigned April 19, due at 12:30 pm on Thursday May 2.
Required Text:
An Introduction to Numerical Analysis by Endre Suli and David Mayers.
Cambridge University Press.
Course content. All page and section
references are to the
textbook.
Make sure to obtain and save all handouts; they are an important part of
your reading. Request copies if you miss any.
Week 1: Chapter 1 of the textbook. Finding roots of one nonlinear
equation of one real variable. Brouwer's fixed point theorem. Contractions
and the contraction mapping theorem. Rates of convergence. Linear and
super-linear convergence. The bisection algorithm and its rate of convergence.
Newton's method and its convergence rate in the case of a simple root and
of a double root. The secant method. The Illinois method (handout).
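For concreteness, here is a minimal Matlab sketch of the bisection
algorithm, not taken from the course materials; the function handle and
tolerance in the example below are illustrative choices.

  % Bisection: find a root of f in [a,b], assuming f(a)*f(b) < 0.
  % Each step halves the bracket, which gives linear convergence.
  function c = bisect(f, a, b, tol)
    fa = f(a);
    while (b - a)/2 > tol
      c = (a + b)/2;
      fc = f(c);
      if fc == 0, return; end   % lucky hit: c is an exact root
      if fa*fc < 0              % the root lies in [a,c]
        b = c;
      else                      % the root lies in [c,b]
        a = c; fa = fc;
      end
    end
    c = (a + b)/2;              % midpoint of the final bracket
  end

For example, bisect(@(x) x^2 - 2, 1, 2, 1e-10) approximates sqrt(2);
roughly one correct binary digit is gained per step.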
Week 2: Further comments on the convergence of Newton's method,
the secant method, and the Illinois method. Chapter 2: Solving linear
systems of algebraic equations by Gaussian elimination. LU factorization
for the case when pivoting is not required. An example of a two-by-two
linear system for which pivoting greatly improves the quality of the
computed solution.
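A minimal Matlab sketch of LU factorization without pivoting (the
function name lu_nopivot is mine, not from the textbook):

  % LU factorization without pivoting: A = L*U, assuming all pivots
  % are nonzero. L is unit lower triangular, U upper triangular.
  function [L, U] = lu_nopivot(A)
    n = size(A, 1);
    L = eye(n); U = A;
    for k = 1:n-1
      for i = k+1:n
        L(i,k) = U(i,k) / U(k,k);          % multiplier
        U(i,:) = U(i,:) - L(i,k)*U(k,:);   % eliminate below the pivot
      end
    end
  end

Trying it on a matrix such as [0 1; 1 1], whose first pivot vanishes,
shows why pivoting is needed in general.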
Week 3: LU factorization of non-singular matrices in the general case
when pivoting is necessary or recommended. Partial pivoting. The conditioning
of function evaluation: how the relative error of the function value
compares with the relative accuracy of the function's argument. An introduction
to vector norms. Proofs that the proposed ell-1 and ell-infinity norms are vector norms.
The Cauchy-Schwarz inequality and the triangle inequality for the ell-2 norm. The
case of general p, 1 < p < infinity: to establish the triangle inequality
for those values of p, we formulate and prove Young's, Hölder's, and
Minkowski's inequalities. Those inequalities are also useful in many other
contexts. Norms of matrices generated by different vector norms.
Explicit formulas for p=1 and p=infinity in terms of the elements of the
matrix. The case of p=2: the matrix norm is given by the square root of the
largest eigenvalue of A^{T}A, a quantity much harder to compute.
Conditioning of matrix-vector multiplication: ||A||*||A^{-1}|| appears.
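These formulas are easy to check numerically; the lines below are an
illustrative experiment, with an arbitrary example matrix, comparing
them against Matlab's built-in norm:

  A = [4 -2; 1 3];
  max(sum(abs(A), 1)) - norm(A, 1)     % max absolute column sum: 0
  max(sum(abs(A), 2)) - norm(A, inf)   % max absolute row sum: 0
  sqrt(max(eig(A'*A))) - norm(A, 2)    % zero up to rounding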
Week 4: What linear systems of equations are difficult to solve?
Those that are ill-conditioned, i.e., those for which ||A||*||A^{-1}||
is very large. Linear least squares problem; motivation and mathematical
formulation. The normal equations. Fitting quadratic polynomials to
given experimental data. How to solve a linear least squares problem
if we have a QR factorization of the system matrix. Householder
transformations; see subsection 5.5 of the textbook. How to choose the
vector v in the Householder transformation so that we eliminate the
non-zeros below the first element in the transformed matrix? A choice of
sign. Note the similarities and differences compared to Gaussian
elimination. Solving tridiagonal systems of linear equations with and
without pivoting. An appropriate storage scheme for this fast special
version of Gaussian elimination. A few words on positive definite,
symmetric matrices and how we can exploit these properties when developing
Cholesky's algorithm, which is a variant of Gaussian elimination.
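Returning to the least squares discussion above: here is a hedged Matlab
sketch that fits a quadratic polynomial via a QR factorization. It uses
Matlab's built-in qr rather than our own Householder code, and the data
are made up:

  m = 6; n = 3;
  t = (1:m)'; b = sin(t);            % illustrative data to be fitted
  A = [ones(m,1) t t.^2];            % basis 1, t, t^2
  [Q, R] = qr(A);                    % A = Q*R with Q orthogonal
  c = Q' * b;                        % multiplying by Q' preserves norms
  x = R(1:n,1:n) \ c(1:n);           % back substitution
  norm(A*x - b)                      % residual; equals norm(c(n+1:m))

Since Q is orthogonal, ||Ax - b|| = ||Rx - Q'b||, so minimizing reduces
to the triangular solve above; the leftover part of Q'b is the residual.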
Week 5: Solving linear systems of equations with special matrices.
Cholesky's algorithm; we save half the work and half the storage and
do not need to pivot. We can also decide if a symmetric matrix is positive
definite by attempting to use Cholesky's algorithm. Diagonally row dominant
matrices; again no pivoting is needed. A proof that in the tridiagonal case,
the matrices that remain to be factored are diagonally row dominant. Solving
nonlinear systems of equations. Derivation of Newton's method in the case
of two equations and two unknowns. The Jacobian. Brouwer's fixed point theorem
and the method of simultaneous iterations. Eigenvalues and eigenvectors
of symmetric real matrices; a review of basic results. First ideas of
Jacobi's eigenvalue algorithm. Comments on the tridiagonal case; we can
compute the eigenvalues of a symmetric matrix by first applying a similarity
transform that turns it into tridiagonal form and then solving the
tridiagonal eigenvalue problem.
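Returning to Cholesky's algorithm: a minimal Matlab sketch (the function
name chol_sketch is mine); note how a failed square root doubles as a
test for positive definiteness:

  % Cholesky factorization A = L*L' of a positive definite, symmetric A.
  function L = chol_sketch(A)
    n = size(A, 1);
    L = zeros(n);
    for j = 1:n
      d = A(j,j) - L(j,1:j-1)*L(j,1:j-1)';
      if d <= 0, error('not positive definite'); end
      L(j,j) = sqrt(d);
      for i = j+1:n
        L(i,j) = (A(i,j) - L(i,1:j-1)*L(j,1:j-1)') / L(j,j);
      end
    end
  end

Only the lower triangle of A is read, which reflects the savings of half
the work and half the storage.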
Week 6: Details on Jacobi's algorithm and a proof of its convergence.
The Frobenius norm of a matrix. This norm does not change if a matrix is
multiplied on the left or the right by an orthogonal matrix. Gerschgorin's
circle theorem. Using a diagonal similarity transformation and Gerschgorin's
theorem to locate eigenvalues of matrices with small off-diagonal elements
such as those that arise after running the Jacobi method for sufficiently
many steps. Using Householder transformations to reduce a real symmetric matrix
to a tridiagonal matrix with the same eigenvalues. Cauchy's interlace theorem;
cf. pp. 157-158 in the textbook.
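Returning to Jacobi's algorithm: one step can be sketched in a few lines
of Matlab. The rotation formulas below follow the standard references,
and the function name is mine:

  % Annihilate the off-diagonal pair (p,q) of a symmetric A by a plane
  % rotation; the similarity transform preserves the eigenvalues and
  % reduces the Frobenius norm of the off-diagonal part.
  function A = jacobi_step(A, p, q)
    if A(p,q) ~= 0
      theta = (A(q,q) - A(p,p)) / (2*A(p,q));
      if theta == 0
        t = 1;                 % 45 degree rotation when diagonals agree
      else
        t = sign(theta) / (abs(theta) + sqrt(1 + theta^2));
      end
      c = 1 / sqrt(1 + t^2);  s = t*c;
      J = eye(size(A,1)); J([p q],[p q]) = [c s; -s c];
      A = J' * A * J;          % A(p,q) and A(q,p) become zero
    end
  end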
Week 7: Inverse iteration to find an eigenvector of a symmetric matrix given
an approximation of the corresponding eigenvalue. Rayleigh quotients and how to compute an
accurate eigenvalue approximation from an approximation of its eigenvector.
A bound on the distance from \mu to an eigenvalue in terms of the Euclidean
norm of w = (A - \mu I)u, where u is a unit vector. The Bauer-Fike theorem.
Sturm sequences based on
Cauchy's interlace theorem and the bisection algorithm. A short introduction
to the QR algorithm.
Week 8: Preparations for the midterm; some of the sample questions from
older exams. A few words on polynomial interpolation. The midterm exam.
Week 9: Polynomial interpolation a la Lagrange and a la Newton. Handout
on polynomial interpolation and numerical quadrature. The Newton table. A
comparison with Taylor series. The coefficients in the Newton table in terms
of derivatives of the function being interpolated. An error bound for polynomial
interpolation. Runge's example of divergence of polynomial interpolation
when using an increasing number of equidistant interpolation points.
Further comments on Runge's example and on the use of an alternative
set of points. Hermite interpolation: explicit formulas for the Hermite
interpolation polynomials and how we also can use Newton's interpolation
algorithm. Numerical differentiation using interpolation polynomials.
Two numerical integration formulas which are exact for all linear functions;
one uses the value at the midpoint of the interval and the other uses values
at the two endpoints of the integration interval.
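The Newton table is short to code; here is an illustrative Matlab sketch
with made-up data, computing the divided differences in place and
evaluating the interpolant by nested multiplication:

  x = [0 1 2 4];  y = [1 1 2 5];   % illustrative interpolation data
  n = length(x);  c = y;           % c will hold the Newton coefficients
  for j = 2:n
    for i = n:-1:j
      c(i) = (c(i) - c(i-1)) / (x(i) - x(i-j+1));  % divided differences
    end
  end
  t = 3;  p = c(n);                % evaluate at t, Horner-like
  for i = n-1:-1:1
    p = p*(t - x(i)) + c(i);
  end
  p                                % value of the cubic interpolant: 3.5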
Week 10: Additional numerical quadrature rules, in particular Simpson's
rule. How to determine the weights by using the Lagrange polynomial interpolation
formula and by the method of undetermined coefficients. Error bounds for
numerical quadrature using different error formulas for polynomial interpolation.
The basis for adaptive Simpson. How to organize adaptive Simpson so that
no function values are computed more than once. Comments on the examples
in the homework. Using Hermite cubic interpolation to derive a numerical
quadrature formula; the weights for the values of the first derivative
are the same except for sign. In a composite scheme using subintervals of the
same length, values of the first derivative are therefore only required at
the endpoints of the overall interval. The basic idea underlying Romberg
integration.
Approximation using piece-wise polynomials; see handout and Chapter 11 of
the textbook. Piece-wise linear and piece-wise Hermite interpolation. The
basic idea behind cubic spline interpolation.
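As a small illustration of a composite scheme, here is composite
Simpson's rule in Matlab; the integrand and the number of subintervals
are arbitrary choices:

  f = @(x) exp(-x.^2);
  a = 0; b = 1; n = 10;            % n must be even
  h = (b - a)/n;  x = a + (0:n)*h;
  w = 2*ones(1, n+1);              % weight pattern 1 4 2 4 ... 4 1
  w(2:2:n) = 4;  w(1) = 1;  w(n+1) = 1;
  S = (h/3) * (w * f(x)')          % about 0.7468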
Week 11: Handout on computer arithmetic. More about linear and cubic
spline interpolation. The special norms defined in terms of integrals of
squares of the first and second derivatives of functions; they lead to
alternative minimization problems whose solutions are the same piece-wise
linear and cubic interpolants. Solving the cubic spline interpolation problem
requires the solution of a linear system with a positive definite, symmetric,
tridiagonal matrix.
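For equally spaced knots, the interior equations of the natural cubic
spline take a particularly simple form; the Matlab lines below are a
sketch under that assumption, with illustrative data:

  % Second derivatives M at the knots of the natural cubic spline:
  % M(i-1) + 4*M(i) + M(i+1) = 6*(y(i-1) - 2*y(i) + y(i+1))/h^2.
  x = linspace(0, 1, 7)';  y = sin(2*pi*x);  h = x(2) - x(1);
  n = length(x) - 1;                 % number of subintervals
  d2 = 6*(y(1:n-1) - 2*y(2:n) + y(3:n+1)) / h^2;
  T = diag(4*ones(n-1,1)) + diag(ones(n-2,1),1) + diag(ones(n-2,1),-1);
  M = [0; T \ d2; 0];                % natural end conditions

The matrix T is exactly of the positive definite, symmetric, tridiagonal
kind just mentioned, so the fast tridiagonal solver applies.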
Real numbers: integers, rational numbers, and irrational numbers. Base two
systems. Bytes, words, and double words. Fixed point systems and their
disadvantages. The Toy number system and the introduction of subnormal numbers.
IEEE Single Format. The special meaning of the first and last lines of the
table. Representing +/-infinity and NaN.
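Several of these features can be observed directly from Matlab, which
works in IEEE double precision by default; a few illustrative
experiments (the handout remains the authoritative reference):

  eps(1)            % 2^(-52), the gap between 1 and the next double
  realmin           % smallest normalized positive double, 2^(-1022)
  realmin/2^52      % smallest subnormal, 2^(-1074); halve it to get 0
  realmax           % largest finite double; 2*realmax gives Inf
  1/0, 0/0          % Inf and NaN
  single(realmax)   % too large for IEEE Single Format: Inf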
Week 12: More on IEEE floating point systems, in particular double
precision, which is used in most numerical computations including Matlab.
Rounding to nearest. Subnormal numbers. Computing Fibonacci numbers and
trying to recover the initial values: why it works up to a point, even with
relatively many steps of the recursion, when we get overflow, and why the
results in between are so extremely poor.
Normed linear spaces of infinite dimension; a pair of norms is generally not
comparable. The best approximation in the maximum norm, over a bounded
interval, of a continuous function by polynomials of degree n. Weierstrass'
theorem. Existence of a min-max polynomial. Theorems 8.3 and 8.4 from the
textbook.
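A hedged sketch of the Fibonacci experiment in Matlab; the value of N is
an illustrative choice:

  N = 70;
  F = zeros(1, N);  F(1) = 1;  F(2) = 1;
  for k = 3:N, F(k) = F(k-1) + F(k-2); end       % forward recursion
  G = F;
  for k = N-2:-1:1, G(k) = G(k+2) - G(k+1); end  % backward recursion
  [G(1) G(2)]                                    % recovered initial values

With N = 70 all the values are integers below 2^53 and the recovery is
exact; for larger N the forward values are rounded and the backward
recursion amplifies those errors, and near N = 1476 the forward
recursion overflows.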
Week 13: The best approximation using polynomials of degree n,
in the maximum norm, of x^{n+1}. Chebyshev polynomials. A three term recurrence.
The error of polynomial
interpolation revisited; why the set of zeros of a Chebyshev polynomial is a
good choice. Orthogonality of the Chebyshev polynomials in an inner product
with weight (1-x^2)^{-1/2}. Inner product spaces in particular those defined
by a weight function and an interval of integration. Orthogonal polynomials
with respect to these inner products. Three term recursion, which decreases
the effort when computing these sets of polynomials. The Legendre polynomials
and an alternative way of defining them, up to normalization, as
(d/dx)^n (x^2-1)^n.
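The three term recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) for the
Chebyshev polynomials is easy to run in Matlab; here is a sketch using
coefficient vectors in Matlab's descending-powers convention:

  N = 5;
  T = cell(N+1, 1);
  T{1} = 1;                 % T_0 = 1
  T{2} = [1 0];             % T_1 = x
  for n = 2:N               % multiply by 2x, then subtract T_{n-1}
    T{n+1} = [2*T{n} 0] - [0 0 T{n-1}];
  end
  T{6}                      % T_5: [16 0 -20 0 5 0], i.e. 16x^5-20x^3+5x

Since T{n} has length n, the padding [0 0 T{n-1}] always lines the
coefficients up correctly.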
Week 14: Examples of different sets of orthogonal polynomials. How to
compute Laguerre polynomials by taking higher derivatives of certain functions;
cf. problem 9.5 of the textbook. A proof that all roots of orthogonal
polynomials are real, simple, and lie in the open interval (a,b). Gaussian
quadrature: counting the number of parameters and how to find the n+1 nodes
that define the quadrature rule; they must be the zeros of the (n+1)th
orthogonal polynomial for the relevant inner product. How to compute the
weights for Gaussian quadrature. A discussion of one recent homework problem
and some of the problems from previous exams; see the final line of "Exams".
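As a small numerical postscript, the simplest nontrivial Gaussian rule
checks out in a few lines of Matlab: the two-point Gauss-Legendre rule
on [-1,1] has its nodes at the zeros +-1/sqrt(3) of the Legendre
polynomial P_2(x) = (3x^2-1)/2, weights 1 and 1, and is exact for all
polynomials of degree at most 3. The integrand below is an illustrative
choice:

  f = @(x) x.^3 + x.^2;             % exact integral over [-1,1]: 2/3
  nodes = [-1 1]/sqrt(3);  weights = [1 1];
  weights * f(nodes)' - 2/3         % zero up to rounding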