Chapter 5 Taylor Series

One of the most useful tools for solving problems in mathematics is the ability to approximate an arbitrary function using polynomials. There are a number of different methods for approximating functions that you will find out about in your degree, including:

- Taylor series
- Fourier series
- interpolation
- wavelets (maybe)
- radial basis functions (Jeremy's research area)
- neural networks (Alexander Gorban's and Ivan Tyukin's research areas).

5.1 Taylor polynomials

We use polynomials because they can be computed on a machine using only multiplication and addition.

We have already met one Taylor polynomial, in Section 3.2, when we were looking at approximating a function by its tangent in Newton's method. The idea of Taylor series is that we match each derivative of a polynomial with the corresponding derivative of the function at a particular point $b$ (this is called "Hermite interpolation" in the trade).

Approximation with tangent

We are going to start with the simple case $b=0$. Let us try a degree 2 polynomial and an arbitrary function $f$. Let
$$p_2(x)=a_0+a_1x+a_2x^2, \tag{5.1}$$
and we want
$$p_2(0)=f(0),\quad p_2'(0)=f'(0),\quad\text{and}\quad p_2''(0)=f''(0).$$
If we let $x=0$ in (5.1) we get $a_0=f(0)$. Let us now differentiate: $p_2'(x)=a_1+2a_2x$. If we substitute $x=0$ we get $p_2'(0)=a_1=f'(0)$. Differentiating a second time we get $p_2''(x)=2a_2$. Substituting in $x=0$ gives $p_2''(0)=2a_2=f''(0)$, so that $a_2=\frac{f''(0)}{2}$. Therefore our approximating polynomial is
$$p_2(x)=f(0)+f'(0)x+\frac{f''(0)}{2}x^2. \tag{5.2}$$

Example 5.1 Let us test the idea out on a particular function, $f(x)=\sqrt{1+x}$. Then
$$f(0)=\sqrt{1}=1,\qquad f'(x)=\frac{1}{2\sqrt{1+x}},\quad f'(0)=\frac{1}{2},\qquad f''(x)=-\frac{1}{4\sqrt{(1+x)^3}},\quad f''(0)=-\frac{1}{4}.$$
Substituting into (5.2) we get
$$p_2(x)=1+\frac{x}{2}-\frac{x^2}{8}.$$

Now let us see if this is a good approximation. Let us use the last equation to approximate $\sqrt{1.1}=1.0488088$ to 7 decimal places. Our approximation is
$$p_2(0.1)=1+\frac{0.1}{2}-\frac{(0.1)^2}{8}=1.04875.$$
Hence our estimate is out by approximately 0.00006.

Suppose we use $p_2$ to approximate $\sqrt{1.2}=1.0954451$ to 7 decimal places. Our approximation is
$$p_2(0.2)=1+\frac{0.2}{2}-\frac{(0.2)^2}{8}=1.095.$$
Hence our estimate is out by approximately 0.0004, so the error has increased almost 10 times. We will return to theoretical error estimates for Taylor series next semester.
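As a quick numerical check, we can code up the degree 2 polynomial from Example 5.1 and compare it with R's built-in `sqrt` (a sketch; `p2` is my own helper name, not from the notes):

```r
# Degree 2 Taylor polynomial of sqrt(1 + x) at b = 0, from Example 5.1
p2 <- function(x) {
  1 + x / 2 - x^2 / 8
}

c(p2(0.1), sqrt(1.1)) # 1.04875 vs 1.0488088...
c(p2(0.2), sqrt(1.2)) # 1.095   vs 1.0954451...

# Ratio of the two errors: how much worse the approximation gets at 0.2
abs(p2(0.2) - sqrt(1.2)) / abs(p2(0.1) - sqrt(1.1))
```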

As we will see, the higher the degree of the polynomial approximation, the better it gets. At this link

(http://mathfaculty.fullerton.edu/mathews/a2001/Animations/Interpolation/Series/Series.html)

you can see examples of Taylor series approximations for a variety of functions and observe how they improve as you increase the degree of the polynomial.

More generally, we can look at the Taylor series at a point $b$. In order to compute this it is more convenient to use a different basis for the polynomials. A basis is an idea that you will see more of in linear algebra; it is a set of objects which we use to represent our information. For the point $b$ I would like my polynomial to be written in the form
$$p_n(x)=a_0+a_1(x-b)+a_2(x-b)^2+\cdots+a_n(x-b)^n=\sum_{i=0}^n a_i(x-b)^i.$$
It will become clear in a second why this is a good idea. We will find $a_i$ by matching the $i$th derivative of $p_n$ with the $i$th derivative of our target function $f$, which we are trying to approximate. If we differentiate $p_n$ $j$ times we get
$$p_n^{(j)}(x)=j!\,a_j+(j+1)!\,a_{j+1}(x-b)+\frac{(j+2)!}{2}a_{j+2}(x-b)^2+\cdots+a_n\frac{n!}{(n-j)!}(x-b)^{n-j}.$$
If we put $x=b$ in this we see that $p_n^{(j)}(b)=j!\,a_j$. Since we wish to make $p_n^{(j)}(b)=f^{(j)}(b)$, we obtain
$$a_j=\frac{f^{(j)}(b)}{j!}.$$
If we had used $p_n(x)=a_0+a_1x+a_2x^2+\cdots+a_nx^n$, then all of the terms beyond the $j$th would not have vanished at $x=b$ and we would have got a horrible mess. Thus we have the following definition for the Taylor polynomial: the degree $n$ Taylor polynomial of $f$ at the point $b$ is
$$p_n(x)=\sum_{j=0}^n \frac{f^{(j)}(b)}{j!}(x-b)^j.$$
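The recipe $a_j=f^{(j)}(b)/j!$ can be automated. The helper below is my own sketch (not part of the notes); it uses R's symbolic differentiation function `D()` to differentiate repeatedly and evaluate at $b$:

```r
# Sketch: Taylor coefficients a_j = f^(j)(b) / j! via repeated symbolic
# differentiation with D(). `expr` is an unevaluated expression in x.
taylor_coeffs <- function(expr, b, n) {
  a <- numeric(n + 1)
  d <- expr
  for (j in 0:n) {
    a[j + 1] <- eval(d, list(x = b)) / factorial(j) # a_j = f^(j)(b) / j!
    d <- D(d, "x") # differentiate once more for the next coefficient
  }
  a
}

taylor_coeffs(quote(sqrt(1 + x)), b = 0, n = 2) # 1, 1/2, -1/8 as in Example 5.1
```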

In the next code we plot the degree 1, 2 and 3 Taylor polynomials of $\sin x$, together with the function itself.

        fun <- function(x){
            sin(x) # The function we are approximating
        }

        dfun <- function(x){
            cos(x) # the derivative of the function
        }

        d2fun <- function(x){

            -sin(x) # second derivative
        }

        d3fun <- function(x){

            -cos(x) # third derivative
        }

        line <- function(x,b,fb,dfb){ # This is the equation of a line of gradient dfb through (b,fb)

            fb+dfb*(x-b)

        }

        quad <- function(x,b,fb,dfb,d2fb){ # inputs are the derivatives at the point b

            fb+(x-b)*dfb+(x-b)^2/2*d2fb # degree 2 Taylor polynomial

        }

        cubic <- function(x,b,fb,dfb,d2fb,d3fb){ # inputs are the derivatives at the point b

            fb+(x-b)*dfb+(x-b)^2/2*d2fb+(x-b)^3/6*d3fb # degree 3 Taylor polynomial

        }

        b <- pi/6 # the point at which we compute the Taylor polynomials

        # Plot the function and the Taylor polynomials on an interval around the point

        x <- seq(b-2,b+2,0.01)
        plot(x,line(x,b,fun(b),dfun(b)),main="Degree 1, 2 and 3 Taylor polynomials",
             ylab="f(x)",
             type="l",
             col="blue")
        lines(x,fun(x),col="green")
        lines(x,quad(x,b,fun(b),dfun(b),d2fun(b)),col="red")
        lines(x,cubic(x,b,fun(b),dfun(b),d2fun(b),d3fun(b)),col="black")

Remark. The selection of a good basis is one of the most important mathematical skills to develop. A bad choice of basis can make the problem almost impossible to solve. A good choice can make it trivial.

Example 5.2 The Taylor series for $\cos x$ and $\sin x$ at $b=0$ are
$$\cos x=\sum_{j=0}^{\infty}\frac{(-1)^j}{(2j)!}x^{2j},\qquad \sin x=\sum_{j=0}^{\infty}\frac{(-1)^j}{(2j+1)!}x^{2j+1}.$$
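We can check these series numerically by summing the first few terms; the two helper functions below are my own sketch, not part of the notes:

```r
# Partial sums of the Taylor series of cos and sin at b = 0,
# summing the terms for j = 0, 1, ..., n
cos_approx <- function(x, n) {
  j <- 0:n
  sum((-1)^j / factorial(2 * j) * x^(2 * j))
}
sin_approx <- function(x, n) {
  j <- 0:n
  sum((-1)^j / factorial(2 * j + 1) * x^(2 * j + 1))
}

c(cos_approx(1, 5), cos(1)) # both about 0.5403023
c(sin_approx(1, 5), sin(1)) # both about 0.8414710
```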

Example 5.3 Compute the Taylor series for $f(x)=\frac{1}{1+x}$ at $x=1$. How many terms of the series do you need to get 2 decimal places of accuracy when estimating $f(1.1)$?

The formula for the Taylor series at $x=1$ is
$$f(x)=\sum_{j=0}^{\infty}\frac{f^{(j)}(1)}{j!}(x-1)^j.$$
The $j$th derivative of $f$ in this case is (it is easy to check)
$$f^{(j)}(x)=\frac{(-1)^j j!}{(1+x)^{j+1}}.$$
Hence
$$f^{(j)}(1)=\frac{(-1)^j j!}{2^{j+1}}.$$
Substituting this into the formula for the Taylor series we have
$$f(x)=\sum_{j=0}^{\infty}\frac{(-1)^j}{2^{j+1}}(x-1)^j.$$
Thus
$$f(1.1)=\sum_{j=0}^{\infty}\frac{(-1)^j}{2^{j+1}}(0.1)^j.$$
In order to approximate the value at $x=1.1$ to 2 decimal places we look for the first term in the expansion which is less than 0.005 in size, as this will not change the calculation to 2 d.p. When $j=2$ the term in the expansion is
$$\frac{(-1)^2}{2^3}(0.1)^2=1/800.$$
This is less than 0.005 in size. Thus we need only two terms:
$$f(1.1)\approx\frac{1}{2}-\frac{1}{40}=0.475.$$
The actual value is $10/21=0.476190\ldots$ and so we have two decimal places of accuracy.
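The partial sums of this series are easy to inspect in R (a quick sketch of the calculation in Example 5.3):

```r
# Terms (-1)^j / 2^(j+1) * (0.1)^j of the series for f(1.1) in Example 5.3
j <- 0:5
terms <- (-1)^j / 2^(j + 1) * 0.1^j

cumsum(terms) # partial sums; the two-term sum is 0.5 - 1/40 = 0.475
1 / 2.1       # true value f(1.1) = 10/21 = 0.4761905...
```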

5.1.1 Test yourself

5.1.1.1 Functions of the form $f(x)=(A+Bx)^{m/n}$.

5.1.1.2 Functions of the form $y=A\exp(mx)$.

5.2 Polar form for complex numbers

Richard Feynman, the famous physicist, called the following equation "our jewel" and "the most remarkable formula in mathematics":
$$\exp(i\theta)=\cos\theta+i\sin\theta.$$

Richard Feynman

Thus $\exp(i\theta)$ is a complex number with real part $\cos\theta$ and imaginary part $\sin\theta$. If we plot this we can see that $\exp(i\theta)$ is a complex number of length 1 at angle $\theta$ to the real axis.
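Since R handles complex numbers natively, we can verify this claim numerically (a quick sketch; the angle chosen is arbitrary):

```r
# Check Euler's formula: exp(i*theta) has real part cos(theta),
# imaginary part sin(theta), and modulus 1
theta <- 2 * pi / 7      # an arbitrary angle
z <- exp(1i * theta)

c(Re(z), cos(theta))     # real part equals cos(theta)
c(Im(z), sin(theta))     # imaginary part equals sin(theta)
Mod(z)                   # length 1, as claimed
```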

Polar coordinates

Example 5.4 Write $z=2+4i$ in the form $z=r\exp(i\theta)$.

The modulus of $z$ is $r=\sqrt{2^2+4^2}=\sqrt{20}$. The argument is $\theta=\arg z=\arctan(4/2)\approx 1.11$ radians.

In this code you can change the real and imaginary parts of the input complex number and see what the modulus and argument are. The code is written so that $-\pi<\theta\le\pi$.

x <- -2
y <- -3

r <- (x^2+y^2)^(1/2)
theta <- atan(y/x)
if (x<0) { # if x is negative we will have computed the wrong angle and need to add pi.
  theta <- theta+pi
}

if (theta>=pi) { # Put theta in the range -pi to pi
  theta<-theta-2*pi
}

cat('Modulus = ',r,'   Argument = ',theta)
## Modulus =  3.60555127546    Argument =  -2.15879893034
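R's built-in complex functions give the same answer without the case analysis: `atan2(y, x)` returns the angle in $(-\pi,\pi]$ directly, and `Mod()` and `Arg()` work on complex values. A quick check against the code above:

```r
# Same point as above, using R's built-in complex arithmetic
x <- -2
y <- -3
z <- complex(real = x, imaginary = y)

c(Mod(z), sqrt(x^2 + y^2)) # modulus, both 3.6055513
c(Arg(z), atan2(y, x))     # argument in (-pi, pi], both -2.1587989
```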

5.2.1 Test yourself

Test your knowledge of polar and cartesian forms of complex numbers.

5.3 Challenge yourself

  1. Use Taylor’s theorem to prove the binomial expansion $(1+x)^{\alpha}=1+\alpha x+\frac{\alpha(\alpha-1)}{2}x^2+\cdots$. Test this formula out for $\alpha=1/2$ and $x=0, 1/2, 1, -1/2, -1$. Is there any place that the formula does not work? Can you explain this?
  2. Can you use the Taylor series for $\exp(x)$ and the fact that $\log x$ is the inverse function of $\exp(x)$ to compute the Taylor series for $\log(1+x)$? This question is exploratory.
  3. Use Euler’s formula $\exp(i\theta)=\cos\theta+i\sin\theta$ to show the two formulae $\cos(\theta+\phi)=\cos\theta\cos\phi-\sin\theta\sin\phi$ and $\sin(\theta+\phi)=\sin\theta\cos\phi+\cos\theta\sin\phi$. Try to prove these formulae another way.
  4. How many cube roots of 1 are there if we allow complex numbers? Plot these roots on the complex plane. What shape do these roots make? How about fourth roots and fifth roots?