Chapter 4 The Exponential Function
Since the dawn of commerce humans have been charging interest on loans and deposits.
Simple interest is when you charge interest only on the original amount. So on a loan of £200 at 5% interest you would pay only £10 per year. Thus, with no repayments, the amount that you owe after the \(n\)th year would be £\(200+10n\).
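As a quick check, a couple of lines of R (the names simple and Amount are just illustrative choices) tabulate the simple-interest amounts for the first few years:
simple <- data.frame(Year = 0:4, Amount = 200 + 10 * (0:4)) # amount owed (£) under simple interest
simple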
Compound interest is when the interest from the previous year becomes included in the amount on which you pay interest. So, at the end of the first year you would owe £210, exactly as with simple interest. However, you then pay 5% interest on this, which gives £210(1+0.05) = £220.50 at the end of the second year. In the table below you can see how quickly the amount you owe increases:
compound <- data.frame(Year = 0:4, Amount = "200", stringsAsFactors = FALSE) # amount owed (£) at the end of each year
compound[2, "Amount"] <- "210 = 200(1+0.05)"
compound[3, "Amount"] <- "220.50 = 210(1+0.05)"
compound[4, "Amount"] <- "231.525 = 220.50(1+0.05)"
compound[5, "Amount"] <- "243.101 = 231.525(1+0.05)"
compound
Year | Amount owed (£) |
---|---|
0 | 200 |
1 | 210 = 200(1+0.05) |
2 | 220.50 = 210(1+0.05) |
3 | 231.525 = 220.50(1+0.05) |
4 | 243.101 = 231.525(1+0.05) |
So, after \(n\) years you would owe £\(200(1+0.05)^n\).
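We can check that this closed form reproduces the table above with a single line of R:
200 * (1 + 0.05)^(0:4) # amount owed (£) after 0, 1, 2, 3, 4 years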
Suppose that the lender decides to compound the interest monthly rather than yearly. With an annual interest rate \(r\), written as a fraction so that 5% corresponds to \(r=0.05\), on a loan of \(L\), after one year we then owe \[ V_{12} = L \left (1+{r \over 12} \right )^{12}. \] Will this amount be more or less than the amount you owe when compounding a single time?
Suppose instead we decide to compound weekly, or even daily. After one year we would then owe, respectively, \[ V_{52} = L \left (1+{r \over 52} \right )^{52}, \] and \[ V_{365} = L \left (1+{r \over 365} \right )^{365}. \]
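To get a feeling for the answer, here is a short R sketch (the variable names are illustrative) comparing the amount owed after one year on the £200 loan at 5% from above, so \(L=200\) and \(r=0.05\), for each compounding frequency:
L <- 200; r <- 0.05                       # the loan and annual rate from the example above
n <- c(1, 12, 52, 365)                    # number of compounding periods per year
data.frame(n = n, V_n = L * (1 + r/n)^n)  # amount owed (£) after one year
Compounding more frequently always increases the amount owed, although for this loan only by a fraction of a pound.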
In order to investigate this numerically let us consider the case when \(L=1\) and \(r=1\). \[ V_{n} = \left (1+{1 \over n} \right )^{n}. \]
n <- c(1, 10, 100, 1000)                         # values of n to try
interest <- data.frame(n = n, V_n = (1 + 1/n)^n) # compute V_n = (1+1/n)^n directly
n | V_n |
---|---|
1 | 2.00000000000 |
10 | 2.59374246010 |
100 | 2.70481382942 |
1000 | 2.71692393224 |
Can you guess what the limit is going to be yet?
4.1 The number \(e\)
Now let us explore this number by looking at the binomial expansion of \(\left (1+{1 \over n} \right )^n\). \[\begin{array}{c|l} n & V_n \\ \hline 1 & 1+1 \\ 2 & (1+1/2)^2=1+1+{1 \over 4} \\ 3 & (1+1/3)^3=1+1+{1 \over 3}+{1 \over 27} \\ 4 & (1+1/4)^4=1+1+{3 \over 8}+{1 \over 16} + {1 \over 256} \\ 5 & (1+1/5)^5=1+1+{2 \over 5}+{2 \over 25} + {1 \over 125}+{1 \over 3125} \\ \end{array}\] Each of the columns above looks like a monotone increasing Cauchy sequence. Using the formula for the binomial expansion we can explore the limit of each of these sequences.
Let us look at the third column above: 1/4, 1/3, 3/8, 2/5. The number here is the third term in the binomial expansion \[ \binom{n}{2} \left ( {1 \over n} \right )^2 = {n(n-1) \over (2!)n^2} = {1 \over 2!} - {1 \over (2!)n}. \] Does this agree with our calculations above? The limit as \(n \rightarrow \infty\) of this sequence is \({1 \over 2}\), so we can write \[ \lim_{n \rightarrow \infty} \binom{n}{2} \left ( {1 \over n} \right )^2 = {1 \over 2}. \]
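A quick numerical check in R confirms that this formula does indeed reproduce the entries 1/4, 1/3, 3/8, 2/5:
n <- 2:5
data.frame(n = n, term = choose(n, 2) / n^2, formula = 1/2 - 1/(2 * n)) # third binomial term vs the simplified formula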
Let us explore the fourth column. The number there is \[ \binom{n}{3} \left ( {1 \over n} \right )^3 = {n(n-1)(n-2) \over (3!)n^3} = {1 \over 3!} - {3 \over (3!)n}+{2 \over (3!)n^2}. \] Hence \[ \lim_{n \rightarrow \infty} \binom{n}{3} \left ( {1 \over n} \right )^3 = {1 \over 3!}. \]
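Although not spelled out above, exactly the same calculation works for every column: the \((k+1)\)th term of the expansion is \[ \binom{n}{k} \left ( {1 \over n} \right )^k = {n(n-1)\cdots(n-k+1) \over (k!)\,n^k} = {1 \over k!}\left (1-{1 \over n} \right )\left (1-{2 \over n} \right ) \cdots \left (1-{k-1 \over n} \right ), \] and every bracket tends to 1, so \[ \lim_{n \rightarrow \infty} \binom{n}{k} \left ( {1 \over n} \right )^k = {1 \over k!}. \]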
So we have the following theorem.
Now, it may be that the limit above is infinity, as it would be if \(a_i=i\).
In the next code window we will see that the partial sums \(\sum_{i=0}^{n} {1 \over i!}\) converge to \(e\) much faster than if we calculate \(e\) using the sequence \(\left (1+{1 \over n} \right )^n\) from the definition.
eapproximation <- data.frame(nupper = c(1, 2, 5, 10), s_n = 0) # upper limits and a column for the partial sums
for (row in 1:nrow(eapproximation)){    # loop over each upper limit in nupper
ntop <- eapproximation[row, "nupper"]   # take the number of terms from the data frame
eapprox <- 0                            # initialise the sum for this value of ntop
for (i in 0:ntop){
eapprox <- eapprox + 1/factorial(i)     # add the next term 1/i! to the sum
}
eapproximation[row, "s_n"] <- eapprox   # put the approximation in the data frame
}
eapproximation
nupper | s_n |
---|---|
1 | 2.00000000000 |
2 | 2.50000000000 |
5 | 2.71666666667 |
10 | 2.71828180115 |
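The following short comparison (reusing the same values of nupper) shows side by side the partial sums of the series and the values \(\left (1+{1 \over n} \right )^n\) from the definition:
n <- c(1, 2, 5, 10)
s_n <- sapply(n, function(k) sum(1/factorial(0:k))) # partial sums of the series
V_n <- (1 + 1/n)^n                                  # truncating the limit definition at n
data.frame(n = n, series = s_n, definition = V_n, e = exp(1))
With only ten terms the series already agrees with \(e\) to seven decimal places, while \((1+1/10)^{10} \approx 2.59\) is not even correct in the first decimal place.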
4.2 The exponential function
Having defined the number \(e\) we can now define arguably the most important function in mathematics, the exponential function \[ \exp(x) = e^x. \] From Definition 4.1 we have \[ e=\lim_{n \rightarrow \infty} \left ( 1 + {1 \over n} \right )^n, \] and so \[ \exp(x) = \lim_{n \rightarrow \infty} \left ( 1 + {1 \over n}\right )^{nx}. \] We have a little bit of a problem here, because it is not completely clear how we take a real-valued power (we will get to fractional powers later, because they also cause problems). However, let us continue as though this does not matter - by the end of the course we will hopefully tie up these loose ends. Suppose we write \(nx=m\). Then as \(n \rightarrow \infty\) we also have \(m \rightarrow \infty\), as long as \(x>0\). Substituting \(n=m/x\) into the last equation we get \[ \exp(x) = \lim_{m \rightarrow \infty} \left ( 1 + {x \over m} \right )^{m}. \]
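As a sanity check of this representation we can evaluate \(\left (1+{x \over m} \right )^m\) for increasing \(m\); the choice \(x=2\) below is purely illustrative:
x <- 2
m <- c(10, 100, 1000, 10000)
data.frame(m = m, approximation = (1 + x/m)^m, exp_x = exp(x)) # compare with e^2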
Now we can use exactly the same reasoning as for Theorem 4.2 to get the following important representation of the exponential function: \[ \exp(x) = \sum_{i=0}^\infty {x^i \over i!}. \]
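In the same spirit as before, a few lines of R show the partial sums of this series approaching \(\exp(x)\); again \(x=2\) is just an illustrative choice:
x <- 2
terms <- x^(0:10) / factorial(0:10)                       # the terms x^i / i!
data.frame(k = 0:10, partial_sum = cumsum(terms), exp_x = exp(x))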
4.3 Differentiating the exponential
To find the derivative of the function \(\exp(x)\) we are going to use the result of Theorem 4.5, \[ \exp(x) = \sum_{i=0}^\infty {x^i \over i!}, \] and differentiate it. We are assuming that we can swap the order of the limit and the differentiation; later in the course we will justify this step. If we differentiate \(\exp(mx)\), for some \(m \in {\mathbb N}\), term by term, we get \[ {d \over dx} \exp(mx) = \sum_{i=1}^\infty {i\, m (mx)^{i-1} \over i!} = \sum_{i=1}^\infty {m^i x^{i-1} \over (i-1)!} = m \sum_{i=0}^\infty {(mx)^{i} \over i!} = m\exp(mx). \] Thus we have the result:
We have the amazing result that \(\exp(mx)\) is a function which, when you differentiate it, gives you back a multiple of itself. This introduces a fundamental idea in mathematics: the **eigenvector**, or in this setting the **eigenfunction**. You will see more of this idea in linear algebra, but the point is that we can solve equations involving an operator (in this case differentiation) more easily if we know the **eigenfunctions** of that operator. This will be used in Chapter 6.
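We can also check the formula \({d \over dx}\exp(mx) = m\exp(mx)\) numerically with a central difference approximation to the derivative; the values \(m=3\), \(x=0.7\) and the step size \(h\) below are arbitrary illustrations:
m <- 3; x <- 0.7; h <- 1e-6
numerical <- (exp(m * (x + h)) - exp(m * (x - h))) / (2 * h) # central difference estimate of the derivative
c(numerical = numerical, exact = m * exp(m * x))             # the two values should agree closely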