As mentioned in the notes/lecture,
the term "order of convergence" is used in (at least) two ways:
(a) to describe the speed at which a sequence converges, for example, the sequence of errors arising from an iterative method to solve a nonlinear equation, and
(b) to describe the speed at which the error
of a discretization method converges to zero
as the discretization is refined; for example,
in A1 we had an approximation to the first derivative of a function
using nearby values of the function, and we said it was of second order (O(h^2)).
Although the two definitions have similarities, they are different
and should not be confused.
In the (a) case, the index of the sequence element is increased by one,
and the new error is compared to the previous error raised to some power.
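As a concrete illustration of the (a) case, here is a minimal Python sketch (my own toy example, not from the notes or A1): it applies Newton's method to x^2 - 2 = 0 and estimates the order from three consecutive errors.

    import math

    root = math.sqrt(2.0)                # exact root of f(x) = x^2 - 2
    x = 1.0                              # starting guess (an arbitrary choice)
    errors = [abs(x - root)]
    for _ in range(4):                   # a few Newton steps are enough here
        x = x - (x * x - 2.0) / (2.0 * x)    # Newton step for f(x) = x^2 - 2
        errors.append(abs(x - root))

    # Order p satisfies e_{k+1} ~ C * e_k^p, so three consecutive errors give
    # the estimate p ~ log(e_{k+1}/e_k) / log(e_k/e_{k-1}).
    for k in range(1, len(errors) - 1):
        p = math.log(errors[k + 1] / errors[k]) / math.log(errors[k] / errors[k - 1])
        print(f"k = {k}: estimated order p = {p:.2f}")

The estimates settle near 2, the quadratic order of convergence of Newton's method.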
In the (b) case, the discretization is refined (h becomes smaller, n becomes larger), and the order (rate) is calculated from the rate at which the error is reduced divided by the rate at which the discretization is refined.
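In symbols (my own notation, just to make the contrast explicit):
in the (a) case, order p means e_{k+1} ≈ C * e_k^p as k grows, so
    p ≈ log(e_{k+1}/e_k) / log(e_k/e_{k-1});
in the (b) case, order p means E(h) ≈ C * h^p as h -> 0, so from two step sizes h_1 > h_2
    p ≈ log(E(h_1)/E(h_2)) / log(h_1/h_2).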
I will talk a little more about the (b) case in the last class.
You may also want to take a look at tutorial 8.
Then, you may ask, why don’t we refine the discretization only a little,
for example from n to n+1, and try to calculate the order of convergence?
The reason we do not do that is that the reduction of the error of a discretization method when we refine from n to n+1
will most likely be so small that the order of convergence calculated in this way is not reliable.
We usually halve h (or double n) and calculate the order of convergence of discretization methods from that. This is what we do in Q4.
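To make the halving computation concrete, here is a minimal Python sketch (again my own toy example, not the actual Q4; I assume a central-difference formula, which is one second-order approximation to the first derivative of the kind mentioned above):

    import math

    def dfdx(f, x, h):
        # central-difference approximation to f'(x); second order, O(h^2)
        return (f(x + h) - f(x - h)) / (2.0 * h)

    f     = math.sin          # sample function (an arbitrary choice)
    x0    = 1.0
    exact = math.cos(x0)      # exact derivative of sin at x0

    h  = 0.1
    E1 = abs(dfdx(f, x0, h)       - exact)   # error with step h
    E2 = abs(dfdx(f, x0, h / 2.0) - exact)   # error with step h/2

    # E(h) ~ C * h^p, so halving h gives p ~ log(E(h)/E(h/2)) / log(2)
    p = math.log(E1 / E2) / math.log(2.0)
    print(f"E(h) = {E1:.3e}, E(h/2) = {E2:.3e}, estimated order p = {p:.2f}")

The estimate comes out very close to 2. If we refined only from n to n+1 instead, both the error ratio and the step-size ratio in the formula would be very close to 1, and the estimate would be much more sensitive to the neglected higher-order terms and to rounding.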