CS510-Fall 2018. WHAT TO READ BEFORE CLASS
The following entries show what I intend to cover during each
class. After the class, I will revise the entry to record what was
A C T U A L L Y covered. This web-page will thus become a lecture diary.
Reading about this material in advance should make it easier
to follow the class. I will offer some course notes (handouts page)
that could reduce note-taking.
1. Sept. 5, 2018: Introduction to the course and how it will be
run. Taylor's theorem; k-digit normalized-floating-point representation of reals,
arithmetic and algebra in this system, errors under rounding and chopping.
Began the Nonlinear Equations topic and the bisection method.
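As a concrete illustration (my own sketch, not taken from the course handouts), the bisection method repeatedly halves an interval on which f changes sign:

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    # Bisection: keep the half-interval on which f changes sign.
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f must change sign on [a, b]"
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    return (a + b) / 2.0

# Example: the root of f(x) = x^2 - 2 on [1, 2] is sqrt(2).
root = bisect(lambda x: x * x - 2.0, 1.0, 2.0)
```

Each step halves the bracketing interval, so the error bound shrinks by a factor of 2 per iteration (linear convergence with rate 1/2).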
2. Sept. 12, 2018: Reprise. Began Regula-Falsi (false position) and gave
(without proof) facts about when it converges, and how fast. Described Newton's
method, the chord method, and the secant method, and discussed (again,
no proofs) their convergence. In preparation for upcoming proofs, began the topic
of fixed-point iteration (FPI) and the contraction mapping principle,
a theorem that will be used to show when Newton's method converges.
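A minimal sketch of Newton's method and the secant method (my own illustration, not from the handouts); the chord method is the variant of Newton's method that reuses a fixed slope in place of f'(x):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    # Newton's method: x <- x - f(x)/f'(x); quadratic convergence near simple roots.
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=60):
    # Secant method: replace f'(x) by a difference quotient; no derivative needed.
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

# Both find sqrt(2) as the root of f(x) = x^2 - 2.
r1 = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
r2 = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```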
3. Sept. 26, 2018: (The Sept. 19 class was cancelled due to illness.)
To study the convergence, stated and proved the Contraction Mapping
Principle. Began the topic of rates of convergence in FPI and showed
that the chord method has linear convergence, that Newton's method has
quadratic (or better) convergence at non-tangency roots, and began to
study the secant method. Gave some facts for the secant iterations
from which we derived the relation that if e_j denotes the error when we
stop the secant iterations at step j, then |e_{n+1}|/|e_n|^t -> K,
where t=(1+sqrt(5))/2. Described the idea of acceleration of convergence
(Aitken, Steffensen) for linearly converging methods
(e.g. the chord method). Mentioned the multivariate context and
systems of non-linear equations via a small example and the
possibility we return to this topic later. Finally we began the new
topic Linear Systems and Numerical Linear Algebra. Suggest
downloading and reading Notes on Linear Equations available from the
Handouts page.
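The Aitken delta-squared acceleration mentioned above can be sketched as follows (my own illustration, applied to the classic linearly convergent iteration x_{n+1} = cos(x_n), which is not an example from the lecture):

```python
import math

def aitken(seq):
    # Aitken's delta-squared transform: from three consecutive terms of a
    # linearly convergent sequence, extrapolate a (usually) better estimate.
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2.0 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / denom if denom != 0.0 else x2)
    return out

# A linearly convergent FPI: x_{n+1} = cos(x_n), fixed point ~ 0.7390851.
xs = [1.0]
for _ in range(10):
    xs.append(math.cos(xs[-1]))
accel = aitken(xs)
```

The accelerated terms approach the fixed point noticeably faster than the raw iterates; Steffensen's method folds this transform back into the iteration itself.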
4. Oct. 3, 2018: Lightning review of matrices, vectors and linear
algebra facts, moving on to linear equation systems and the leading algorithm
for them - Gaussian elimination with backsolving (GEBS) in gory
detail, followed by Gauss-Jordan (G-J) reduction. Observed that G-J
will do roughly 50% more numerical computation than GEBS. Next, by concrete example we
observed the phenomenon of roundoff errors and their propagation
during Gaussian elimination in fixed, finite precision. This motivated
the idea of "pivoting strategies" (partial pivoting and scaled partial
pivoting) as attempts to reduce roundoff errors.
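A compact sketch of Gaussian elimination with partial pivoting and back-solving (my own illustration of GEBS, not the course's code):

```python
def solve_gepp(A, b):
    # Gaussian elimination with partial pivoting, then back-substitution.
    # A is a list of rows, b the right-hand side; inputs are copied.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: bring up the row with the largest |entry| in column k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiplier
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # Back-solving.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

# 2x + y = 3, x + 3y = 5 has solution x = 0.8, y = 1.4.
x = solve_gepp([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

Scaled partial pivoting would instead compare |entry| divided by the largest |entry| in that row, which guards against badly scaled rows.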
5. Oct. 10, 2018: Showed how to compute A^{-1}: follow
Gauss-Jordan reduction on an invertible matrix A, reducing it to the
identity. The same sequence of row operations on I produces A^{-1}.
Next, explained the LU factorization of an invertible matrix A: U is the result of
Gaussian elimination on A (with no row interchanges!) and L is a
unit-lower triangular matrix appropriately recording the pivot values
used. Ax=b can then be solved using (LU)x=b: FIRST let y=Ux and solve
Ly=b for y (this is forward-substitution); THEN solve Ux=y for x (this
is back-solving). The computational cost of this process is exactly
the same as Gaussian elimination and backsolving for the system Ax=b.
Not all invertible matrices have LU factorizations but there is ALWAYS
a permutation p of the n rows of A so that A(p) can be LU
factored.
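A sketch of the LU factorization and the two-triangular-solve process described above (my own illustration; no row interchanges, so it assumes a matrix for which plain elimination succeeds):

```python
def lu(A):
    # Doolittle-style LU: A = L U with L unit lower triangular (the
    # multipliers from Gaussian elimination) and U upper triangular.
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    # FIRST: forward-substitution, solve L y = b.
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    # THEN: back-solving, solve U x = y.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L, U = lu([[4.0, 3.0], [6.0, 3.0]])   # L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
x = lu_solve(L, U, [10.0, 12.0])      # solves Ax = b; x = [1, 2]
```

Once L and U are stored, each new right-hand side b costs only the two triangular solves, which is the practical payoff of the factorization.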
6. Oct. 17, 2018: Described the compact notation for the LUp
algorithm, with or without use of a pivot strategy. Described
the n by n Hilbert matrix H_n and showed the curious property it seems to
exhibit as a very ill-conditioned coefficient matrix in a linear
system, solved using fixed-precision computation: (i) the computed solution
vector x is very likely to be VERY far from the true solution;
nevertheless (ii)
H_n maps that erroneous solution very close to the right-hand-side
vector of the system. Described the iterative-improvement algorithm
for computed solutions (fixed precision) of linear systems, an
introduction to the topic of iterative methods for Ax=b.
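A small sketch of one step of iterative improvement on a Hilbert system (my own illustration; the solver is plain Gaussian elimination, with no pivoting, which is acceptable here since H_n is positive definite):

```python
def ge_solve(A, b):
    # Plain Gaussian elimination with back-solving (no pivoting).
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def improve(A, b, x):
    # One step of iterative improvement: r = b - A x, solve A d = r, x <- x + d.
    n = len(b)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    d = ge_solve(A, r)
    return [xi + di for xi, di in zip(x, d)]

# Hilbert matrix H_4 with exact solution of all ones.
n = 4
H = [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]
b = [sum(row) for row in H]
x = ge_solve(H, b)
x = improve(H, b, x)
```

For larger n the ill-conditioning becomes severe; in practice the residual r should be accumulated in higher precision for the correction to help.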
7. Oct. 24, 2018: A brief review of matrix norms, then began the topic
of iterative methods for Ax=b - the idea in general (multivariate
analogue of fixed-point-iteration), and then the specifics for
Jacobi's method. We gave conditions under which these
iterations must converge to the solution. Next we described the
Gauss-Seidel FPI method and gave an example.
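A minimal sketch of Jacobi's method (my own illustration; the example matrix is strictly diagonally dominant, one standard sufficient condition for convergence):

```python
def jacobi(A, b, iters=50):
    # Jacobi iteration: each sweep computes, for every i,
    #   x_i <- (b_i - sum_{j != i} a_ij * x_j) / a_ii,
    # using only the PREVIOUS sweep's values on the right-hand side.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant system; solution is [1, 2].
x = jacobi([[4.0, 1.0], [2.0, 5.0]], [6.0, 12.0])
```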
8. Oct. 31, 2018: Showed how to execute the Gauss-Seidel iterations
by using the Jacobi method ingredients in a special way, and began
discussing the computation of eigenvalues and eigenvectors,
starting with the power method.
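Sketches of the two items above (my own illustrations): Gauss-Seidel, which differs from Jacobi only in immediately reusing the freshly updated components within a sweep, and the basic power method:

```python
def gauss_seidel(A, b, iters=50):
    # Gauss-Seidel: same update as Jacobi, but x[j] for j < i has already
    # been overwritten in this sweep, so the newest values are used at once.
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

def power_method(A, x0, iters=100):
    # Power method: repeated multiplication by A rotates x toward the
    # eigenvector of the dominant eigenvalue; the normalizing factor
    # converges to that eigenvalue.
    n = len(A)
    x = x0[:]
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)          # scale by the largest-magnitude component
        x = [yi / lam for yi in y]
    return lam, x

xg = gauss_seidel([[4.0, 1.0], [2.0, 5.0]], [6.0, 12.0])   # solution [1, 2]
# Dominant eigenvalue of [[2, 1], [1, 2]] is 3, with eigenvector [1, 1].
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The convergence rate of the power method is governed by the ratio |lambda_2/lambda_1| of the two largest eigenvalues in magnitude.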
9. Nov 7, 2018: Finished (for now) the topic of eigenvalues/eigenvectors
- but may return later. Began the topic of approximation of functions via
polynomials, reviewing Taylor polynomials and Taylor's thm. Then began
the idea of interpolating polynomials and proving that given a
continuous function f and n+1 distinct collocation points u_i,
i=0,...,n, there is a unique polynomial I_n of degree at most n that
interpolates f at the u_i, i.e., f(u_i)=I_n(u_i), i=0,...,n (at most
one by the fundamental thm. of algebra; at least one via construction
of Lagrange's form of I_n). Gave the error thm. for interpolation - no
proof. Began to study the computational cost to find and to evaluate
Lagrange's form of I_n compared to the cost of determining I_n as
a_0+a_1*x+...+a_n*x^n and then using Horner's method to evaluate it cheaply at
a given x.
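Sketches of both evaluation routes (my own illustrations): the Lagrange form evaluated directly, and Horner's nested multiplication for the standard form:

```python
def lagrange_eval(us, fs, x):
    # Evaluate the Lagrange form of I_n at x:
    #   I_n(x) = sum_i f(u_i) * prod_{j != i} (x - u_j) / (u_i - u_j).
    total = 0.0
    for i, (ui, fi) in enumerate(zip(us, fs)):
        li = 1.0
        for j, uj in enumerate(us):
            if j != i:
                li *= (x - uj) / (ui - uj)
        total += fi * li
    return total

def horner(coeffs, x):
    # Evaluate a_0 + a_1*x + ... + a_n*x^n with only n multiplications.
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# Interpolating f(x) = x^2 at u = 0, 1, 2 reproduces f exactly, so
# I_2(1.5) = 1.5^2 = 2.25.
val = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```

Direct Lagrange evaluation costs O(n^2) operations per point x, while Horner's method costs O(n) once the coefficients a_i are known; the trade-off is the cost of finding those coefficients.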
10. Nov. 14, 2018: Begin study of Newton's form of the
interpolating polynomial - much easier to find than the standard form
and as easy to evaluate. Begin the topic of "best" or minimax
approximation and Tchebycheff's characterization of some of its
properties, with some examples.
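A sketch of the Newton form via the divided-difference table (my own illustration, computed in place), with a Horner-like nested evaluation:

```python
def divided_differences(us, fs):
    # Newton's divided-difference coefficients c_0, ..., c_n for
    #   I_n(x) = c_0 + c_1 (x - u_0) + ... + c_n (x - u_0)...(x - u_{n-1}),
    # computed in place column by column.
    n = len(us)
    c = list(fs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (us[i] - us[i - k])
    return c

def newton_eval(us, c, x):
    # Nested (Horner-like) evaluation of the Newton form.
    result = c[-1]
    for ci, ui in zip(reversed(c[:-1]), reversed(us[:-1])):
        result = result * (x - ui) + ci
    return result

# f(x) = x^2 sampled at u = 0, 1, 2: coefficients come out as [0, 1, 1],
# i.e. I_2(x) = x + x(x - 1) = x^2, so I_2(1.5) = 2.25.
c = divided_differences([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
val = newton_eval([0.0, 1.0, 2.0], c, 1.5)
```

Adding a new data point only appends one coefficient to c, which is the main practical advantage of the Newton form over recomputing the Lagrange form.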
** Nov. 21, 2018 was FRIDAY AT RUTGERS ... so NO CLASS
11. Nov. 28, 2018: Finish minimax polynomial approximation and begin LSQ
(least-squares) polynomial approximation: first the continuous version,
then the discrete version.
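A minimal sketch of the discrete version for degree 1 (my own illustration): fitting a line a + b*x by solving the 2x2 normal equations.

```python
def lsq_line(xs, ys):
    # Discrete least-squares fit of a line a + b*x: minimize
    # sum_i (ys[i] - a - b*xs[i])^2 via the 2x2 normal equations.
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    a = (sxx * sy - sx * sxy) / det
    b = (n * sxy - sx * sy) / det
    return a, b

# Data lying exactly on y = 1 + 2x is recovered exactly.
a, b = lsq_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Higher-degree discrete LSQ works the same way with an (m+1)x(m+1) normal system, though for larger degrees orthogonal polynomials are preferred for conditioning.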