# 1 Introduction to Tidy Finance

The main aim of this chapter is to familiarize you with the tidyverse. We start by downloading and visualizing stock data before we move on to a simple portfolio choice problem. These examples introduce our approach to Tidy Finance.

## 1.1 Working with stock market data

Exercises:

1. Download daily prices for one stock market ticker of your choice (e.g. AAPL) from Yahoo!Finance. Plot the time series of adjusted closing prices. To download the data, you can use the command tq_get from the tidyquant package. If you do not know how to use it, make sure you read the help file by calling ?tq_get. We especially recommend taking a look at the examples section of the documentation.
2. Compute daily net returns for the asset and visualize the distribution of daily returns in a histogram. Also, use geom_vline() to add a dashed red line that indicates the 5% quantile of the daily returns within the histogram.
3. Compute summary statistics (mean, standard deviation, minimum, and maximum) for the daily returns.

Solutions: At the start of the session, we load the required packages. You can use the convenient tidyquant package to download price data. If you have trouble using tidyquant, check out the documentation. We load the packages tidyverse and tidyquant, but also show the code to install the packages in case you do not have them yet.

library(tidyverse)
library(tidyquant)

We first download daily prices for one stock market ticker, e.g., AAPL, directly from the data provider Yahoo!Finance. To download the data, you can use the command tq_get. If you do not know how to use it, make sure you read the help file by calling ?tq_get. We especially recommend taking a look at the documentation’s examples section.

prices <- tq_get("AAPL", get = "stock.prices")
prices
## # A tibble: 2,588 x 8
##   symbol date        open  high   low close    volume adjusted
##   <chr>  <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>    <dbl>
## 1 AAPL   2012-01-03  14.6  14.7  14.6  14.7 302220800     12.6
## 2 AAPL   2012-01-04  14.6  14.8  14.6  14.8 260022000     12.6
## 3 AAPL   2012-01-05  14.8  14.9  14.7  14.9 271269600     12.8
## 4 AAPL   2012-01-06  15.0  15.1  15.0  15.1 318292800     12.9
## 5 AAPL   2012-01-09  15.2  15.3  15.0  15.1 394024400     12.9
## # ... with 2,583 more rows

tq_get downloads stock market data from Yahoo!Finance if you do not specify another data source. The function returns a tibble with eight quite self-explanatory columns: symbol, date, the market prices at the open, high, low and close, the daily volume (in number of traded shares), and the adjusted price in USD. The adjusted prices are corrected for anything that might affect the stock price after the market closes, e.g., stock splits and dividends. These actions do affect the quoted prices, but they have no direct impact on the investors who hold the stock.
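A minimal sketch with made-up numbers shows why the adjustment matters: a 2-for-1 split produces an artificial -50% "return" in the raw close prices, while the adjusted series is unaffected:

```r
close <- c(100, 102, 51, 52) # made-up quoted closes; 2-for-1 split after day 2
adjustment <- c(0.5, 0.5, 1, 1) # cumulative split adjustment factor
adjusted <- close * adjustment
raw_ret <- diff(close) / head(close, -1) # contains an artificial -50% "return"
adj_ret <- diff(adjusted) / head(adjusted, -1) # economically meaningful returns
rbind(raw_ret, adj_ret)
```

The split-day raw return is -50% even though nothing happened to the value of an investor's position; the adjusted return on that day is zero.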

Next, we use ggplot2 to visualize the time series of adjusted prices.

prices %>%
  ggplot(aes(x = date, y = adjusted)) +
  geom_line() +
  labs(
    x = NULL,
    y = NULL,
    title = "AAPL stock prices",
    subtitle = "Prices in USD, adjusted for dividend payments and stock splits"
  )

Instead of analyzing prices, we compute daily net returns defined as $$(p_t - p_{t-1}) / p_{t-1} = p_t/p_{t-1} - 1$$, where $$p_t$$ is the adjusted price on day $$t$$. The function lag returns the previous value in a vector.

returns <- prices %>%
  arrange(date) %>%
  mutate(ret = adjusted / lag(adjusted) - 1) %>%
  select(symbol, date, ret)
returns
## # A tibble: 2,588 x 3
##   symbol date            ret
##   <chr>  <date>        <dbl>
## 1 AAPL   2012-01-03 NA
## 2 AAPL   2012-01-04  0.00537
## 3 AAPL   2012-01-05  0.0111
## 4 AAPL   2012-01-06  0.0105
## 5 AAPL   2012-01-09 -0.00159
## # ... with 2,583 more rows

The resulting tibble contains three columns, the last of which holds the daily returns. Note that the first entry is naturally NA because there is no previous price. Additionally, the computation requires that the time series is ordered by date; otherwise, lag would be meaningless.
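To see the lag logic in isolation, here is a toy example with made-up prices; the shifted vector below mimics what lag(p) returns:

```r
p <- c(100, 105, 103.95) # made-up prices
p_lagged <- c(NA, head(p, -1)) # what lag(p) returns
ret <- p / p_lagged - 1
ret # NA on the first day, then 5% and -1%
```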

For the upcoming examples, we remove missing values as these would require separate treatment when computing, e.g., sample averages. In general, however, make sure you understand why NA values occur and carefully examine if you can simply get rid of these observations.

returns <- returns %>%
  drop_na(ret)

Next, we visualize the distribution of daily returns in a histogram. For convenience, we multiply the returns by 100 to get returns in percent for the visualizations. Additionally, we add a dashed red line that indicates the 5% quantile of the daily returns to the histogram, which is a (crude) proxy for the worst return of the stock with a probability of at least 5%.

quantile_05 <- quantile(returns %>% pull(ret) * 100, 0.05)

returns %>%
  ggplot(aes(x = ret * 100)) +
  geom_histogram(bins = 100) +
  geom_vline(aes(xintercept = quantile_05),
    color = "red",
    linetype = "dashed"
  ) +
  labs(
    x = NULL,
    y = NULL,
    title = "Distribution of daily AAPL returns (in percent)",
    subtitle = "The dashed vertical line indicates the historical 5% quantile"
  )

Here, bins = 100 determines the number of bins and hence implicitly their width. Before proceeding, make sure you understand how to use the geom geom_vline() to add the dashed red line that indicates the 5% quantile of the daily returns. A typical task before proceeding with any data is to compute summary statistics for the main variables of interest.
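Before moving on to the summary statistics, it may help to see concretely what such a quantile captures. A small base R sketch on simulated data (not the actual AAPL returns):

```r
set.seed(1)
x <- rnorm(1000) # simulated stand-in for daily returns (in percent)
q05 <- quantile(x, probs = 0.05)
mean(x <= q05) # share of observations at or below the 5% quantile
```

By construction, roughly 5% of all observations fall at or below this threshold.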

returns %>%
  mutate(ret = ret * 100) %>%
  summarise(across(
    ret,
    list(
      daily_mean = mean,
      daily_sd = sd,
      daily_min = min,
      daily_max = max
    )
  ))
## # A tibble: 1 x 4
##   ret_daily_mean ret_daily_sd ret_daily_min ret_daily_max
##            <dbl>        <dbl>         <dbl>         <dbl>
## 1          0.117         1.79         -12.9          12.0

We see that the maximum daily return was around 11.981 percent. You can also compute these summary statistics for each year by adding group_by(year = year(date)), where the call year(date) extracts the year from a date.

returns %>%
  mutate(ret = ret * 100) %>%
  group_by(year = year(date)) %>%
  summarise(across(
    ret,
    list(
      daily_mean = mean,
      daily_sd = sd,
      daily_min = min,
      daily_max = max
    ),
    .names = "{.fn}"
  ))
## # A tibble: 11 x 5
##    year daily_mean daily_sd daily_min daily_max
##   <dbl>      <dbl>    <dbl>     <dbl>     <dbl>
## 1  2012    0.124       1.86     -6.44      8.87
## 2  2013    0.0472      1.80    -12.4       5.14
## 3  2014    0.145       1.36     -7.99      8.20
## 4  2015    0.00199     1.68     -6.12      5.74
## 5  2016    0.0575      1.47     -6.57      6.50
## # ... with 6 more rows

In case you wonder: the additional argument .names = "{.fn}" in across() determines how the output columns are named. The specification is rather flexible and allows almost arbitrary column names, which can be useful for reporting.

## 1.2 Scaling up the analysis

As a next step, we generalize the code from before such that all the computations can handle an arbitrary vector of tickers (e.g., all constituents of an index). Following tidy principles, it is quite easy to download the data, plot the price time series, and tabulate the summary statistics for an arbitrary number of assets.

Exercises:

1. Now the tidyverse magic starts: Take your code from before and generalize it such that all the computations are performed for an arbitrary vector of tickers (e.g., ticker <- c("AAPL", "MMM", "BA")). You can also call ticker <- tq_index("DOW") to get daily data for all constituents of the Dow Jones index. Automate the download and the plot of the price time series, and create a table of return summary statistics for this arbitrary number of assets.
2. Consider the research question Are days with high aggregate trading volume often followed by high aggregate trading volume days? How would you investigate this question? Hint: Compute the aggregate daily trading volume (in USD) and find an appropriate visualization to analyze the question.

Solutions: This is where the tidyverse magic starts: tidy data makes it extremely easy to generalize the computations from before to as many assets as you like. The following code takes any vector of tickers, e.g., ticker <- c("AAPL", "MMM", "BA"), and automates the download as well as the plot of the price time series. In the end, we create the table of summary statistics for an arbitrary number of assets. We perform the analysis with data from all current constituents of the Dow Jones Industrial Average index.

### 1.2.1 Tidyverse magic

ticker <- tq_index("DOW")
index_prices <- tq_get(ticker,
  get = "stock.prices",
  from = "2000-01-01"
) %>%
  filter(symbol != "DOW") # Exclude the index itself

The resulting file contains 159,418 daily observations for a total of 29 different corporations. The figure below illustrates the time series of downloaded adjusted prices for each constituent of the Dow Jones index. Make sure you understand every single line of code! (What are the arguments of aes()? Which alternative geoms could you use to visualize the time series? Hint: if you do not know the answers, try to change the code to see what difference your intervention causes.)

index_prices %>%
  ggplot(aes(
    x = date,
    y = adjusted,
    color = symbol
  )) +
  geom_line() +
  labs(
    x = NULL,
    y = NULL,
    color = NULL,
    title = "DOW index stock prices",
    subtitle = "Prices in USD, adjusted for dividend payments and stock splits"
  ) +
  theme(legend.position = "none")

Do you notice the small differences relative to the code we used before? tq_get(ticker) returns a tibble for several symbols as well. All we need to do to illustrate all tickers simultaneously is to include color = symbol in the ggplot2 aesthetics. In this way, we generate a separate line for each ticker. Of course, there are simply too many lines on this graph to properly identify the individual stocks, but it illustrates the point well.

The same holds for stock returns. Before computing the returns, we use group_by(symbol) such that the mutate command is performed for each symbol individually. The same logic applies to the computation of summary statistics: group_by(symbol) is the key to aggregating the time series into ticker-specific variables of interest.

all_returns <- index_prices %>%
  group_by(symbol) %>%
  mutate(ret = adjusted / lag(adjusted) - 1) %>%
  select(symbol, date, ret) %>%
  drop_na(ret)

all_returns %>%
  mutate(ret = ret * 100) %>%
  group_by(symbol) %>%
  summarise(across(
    ret,
    list(
      daily_mean = mean,
      daily_sd = sd,
      daily_min = min,
      daily_max = max
    ),
    .names = "{.fn}"
  ))
## # A tibble: 29 x 5
##   symbol daily_mean daily_sd daily_min daily_max
##   <chr>       <dbl>    <dbl>     <dbl>     <dbl>
## 1 AMGN       0.0491     1.98     -13.4      15.1
## 2 AXP        0.0561     2.30     -17.6      21.9
## 3 BA         0.0592     2.20     -23.8      24.3
## 4 CAT        0.0704     2.03     -14.5      14.7
## 5 CRM        0.121      2.68     -27.1      26.0
## # ... with 24 more rows

Note that you are now also equipped with all tools to download price data for each ticker listed in the S&P 500 index with the same number of lines of code. Just use ticker <- tq_index("SP500"), which provides you with a tibble that contains each symbol that is (currently) part of the S&P 500. However, don’t try this if you are not prepared to wait for a couple of minutes because this is quite some data to download!

### 1.2.2 Are days with high aggregate trading volume often followed by high aggregate trading volume days?

Of course, aggregation across variables other than symbol can make sense as well. For instance, suppose you are interested in answering the question: are days with high aggregate trading volume likely followed by days with high aggregate trading volume? To provide some initial analysis of this question, we take the downloaded prices and compute aggregate daily trading volume for all Dow Jones constituents in USD. Recall that the column volume is measured in the number of traded shares. Thus, we multiply the trading volume by the daily closing price to get a proxy for the aggregate trading volume in USD. Scaling by 1e9 expresses the daily trading volume in billion USD.

volume <- index_prices %>%
  mutate(volume_usd = volume * close / 1e9) %>%
  group_by(date) %>%
  summarise(volume = sum(volume_usd))

volume %>%
  ggplot(aes(x = date, y = volume)) +
  geom_line() +
  labs(
    x = NULL, y = NULL,
    title = "Aggregate daily trading volume (billion USD)"
  )

One way to illustrate the persistence of trading volume is to plot volume on day $$t$$ against volume on day $$t-1$$, as in the example below. We add a 45°-line via geom_abline to indicate a hypothetical one-to-one relation (note that the two axes may be scaled differently, so the line serves only as a reference).

volume %>%
  ggplot(aes(x = lag(volume), y = volume)) +
  geom_point() +
  geom_abline(aes(intercept = 0, slope = 1),
    color = "red",
    linetype = "dotted"
  ) +
  labs(
    x = "Previous day aggregate trading volume (billion USD)",
    y = "Aggregate trading volume (billion USD)",
    title = "Persistence of trading volume"
  )
## Warning: Removed 1 rows containing missing values (geom_point).

Do you understand where the warning ## Warning: Removed 1 rows containing missing values (geom_point). comes from and what it means? (The first observation drops out because lag(volume) is NA on the very first trading day.) Purely eyeballing the figure reveals that days with high trading volume are often followed by similarly high trading-volume days.
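The visual impression can be complemented with a single number, the first-order autocorrelation of the volume series. A base R sketch on a simulated persistent series (not the actual Dow Jones volume):

```r
set.seed(123)
v <- abs(cumsum(rnorm(500))) + 10 # made-up series with strong persistence
cor(v[-1], v[-length(v)]) # lag-1 autocorrelation; close to one for a persistent series
```

A value close to one confirms what the scatter plot suggests: today's level is highly informative about tomorrow's.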

## 1.3 Portfolio choice problems

Exercises:

1. Compute monthly returns from the adjusted daily Dow Jones constituent prices.
2. Compute the vector of historical average returns and the sample variance-covariance matrix
3. Compute the minimum variance portfolio weights and the portfolio volatility and average returns
4. Visualize the mean-variance efficient frontier. For that, first compute the efficient portfolio for an arbitrary risk-free rate (below the average return of the minimum variance portfolio). Then use the two-fund separation theorem to compute the location of a grid of combinations of the minimum variance portfolio and the efficient portfolio on the mean-volatility diagram which has (annualized) volatility on the $$x$$ axis and (annualized) expected returns on the $$y$$ axis. Indicate the location of the individual assets on the mean-volatility diagram as well.

Solutions: In the previous part, we showed how to download stock market data and inspect it with graphs and summary statistics. Now, we move to a typical question in Finance, namely, how to optimally allocate wealth across different assets. The standard framework for optimal portfolio selection considers investors who prefer higher future returns but dislike future return volatility (defined as the square root of the return variance): the mean-variance investor.

### 1.3.1 Monthly returns

An essential tool to evaluate portfolios in the mean-variance context is the efficient frontier, the set of portfolios which satisfy the condition that no other portfolio exists with a higher expected return but with the same volatility (i.e., the risk). We compute and visualize the efficient frontier for several stocks. First, we extract each asset’s monthly returns. In order to keep things simple we work with a balanced panel and exclude tickers for which we do not observe a price on every single trading day since 2000.

index_prices <- index_prices %>%
  group_by(symbol) %>%
  mutate(n = n()) %>%
  ungroup() %>%
  filter(n == max(n)) %>%
  select(-n)

returns <- index_prices %>%
  mutate(month = floor_date(date, "month")) %>%
  group_by(symbol, month) %>%
  summarise(price = last(adjusted), .groups = "drop_last") %>%
  mutate(ret = price / lag(price) - 1) %>%
  drop_na(ret) %>%
  select(-price)

Next, we transform the returns from a tidy tibble into a $$(T \times N)$$ matrix with one column for each of the $$N$$ tickers in order to compute the covariance matrix $$\Sigma$$ and the expected return vector $$\mu$$. We achieve this with pivot_wider(), taking the new column names from the column symbol and the values from ret. We compute the vector of sample average returns and the sample variance-covariance matrix, which we treat as proxies for the parameters of the future return distribution.

returns_matrix <- returns %>%
  pivot_wider(
    names_from = symbol,
    values_from = ret
  ) %>%
  select(-month)

sigma <- cov(returns_matrix)
mu <- colMeans(returns_matrix)

Then, we compute the minimum variance portfolio weights $$\omega_\text{mvp}$$ as well as the expected return $$\omega_\text{mvp}'\mu$$ and volatility $$\sqrt{\omega_\text{mvp}'\Sigma\omega_\text{mvp}}$$ of this portfolio. Recall that the minimum variance portfolio is the vector of portfolio weights that solves $\omega_\text{mvp} = \arg\min_\omega \omega'\Sigma \omega \text{ s.t. } \sum\limits_{i=1}^N\omega_i = 1.$ It is easy to show analytically that $$\omega_\text{mvp} = \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota}$$, where $$\iota$$ is a vector of ones.

N <- ncol(returns_matrix)
iota <- rep(1, N)
mvp_weights <- solve(sigma) %*% iota
mvp_weights <- mvp_weights / sum(mvp_weights)

tibble(
  expected_ret = t(mvp_weights) %*% mu,
  volatility = sqrt(t(mvp_weights) %*% sigma %*% mvp_weights)
)
## # A tibble: 1 x 2
##   expected_ret[,1] volatility[,1]
##              <dbl>          <dbl>
## 1          0.00848         0.0314

Note that the monthly volatility of the minimum variance portfolio is of the same order of magnitude as the daily standard deviation of the individual components. Thus, the diversification benefits in terms of risk reduction are tremendous!

Next, we set out to find the weights of a portfolio that achieves three times the expected return of the minimum variance portfolio. However, mean-variance investors are not interested in just any portfolio that achieves the required return, but rather in the efficient portfolio, i.e., the one with the lowest standard deviation. If you wonder where the solution $$\omega_\text{eff}$$ comes from: the efficient portfolio is chosen by an investor who aims to achieve minimum variance given a minimum acceptable expected return $$\bar{\mu}$$. Hence, their objective is to choose $$\omega_\text{eff}$$ as the solution to $\omega_\text{eff}(\bar{\mu}) = \arg\min_\omega \omega'\Sigma \omega \text{ s.t. } \omega'\iota = 1 \text{ and } \omega'\mu \geq \bar{\mu}.$ The code below implements the analytic solution to this optimization problem for a benchmark return $$\bar\mu$$, which we set to three times the expected return of the minimum variance portfolio. We encourage you to verify that it is correct.

mu_bar <- 3 * t(mvp_weights) %*% mu

C <- as.numeric(t(iota) %*% solve(sigma) %*% iota)
D <- as.numeric(t(iota) %*% solve(sigma) %*% mu)
E <- as.numeric(t(mu) %*% solve(sigma) %*% mu)

lambda_tilde <- as.numeric(2 * (mu_bar - D / C) / (E - D^2 / C))
efp_weights <- mvp_weights +
  lambda_tilde / 2 * (solve(sigma) %*% mu - D / C * solve(sigma) %*% iota)
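As a sanity check, you can verify the closed-form solution on simulated inputs; sigma_toy and mu_toy below are made up. The resulting weights must sum to one and hit the target return exactly:

```r
set.seed(42)
n <- 3
sigma_toy <- crossprod(matrix(rnorm(n * n), n, n)) / 100 # made-up positive definite covariance
mu_toy <- c(0.010, 0.012, 0.008) # made-up expected returns
iota_toy <- rep(1, n)
sigma_inv <- solve(sigma_toy)
mvp_toy <- drop(sigma_inv %*% iota_toy)
mvp_toy <- mvp_toy / sum(mvp_toy)
mu_bar_toy <- 3 * sum(mvp_toy * mu_toy) # target return
C_toy <- drop(t(iota_toy) %*% sigma_inv %*% iota_toy)
D_toy <- drop(t(iota_toy) %*% sigma_inv %*% mu_toy)
E_toy <- drop(t(mu_toy) %*% sigma_inv %*% mu_toy)
lambda_toy <- 2 * (mu_bar_toy - D_toy / C_toy) / (E_toy - D_toy^2 / C_toy)
efp_toy <- mvp_toy +
  lambda_toy / 2 * drop(sigma_inv %*% mu_toy - D_toy / C_toy * sigma_inv %*% iota_toy)
c(sum(efp_toy), sum(efp_toy * mu_toy) - mu_bar_toy) # 1 and (numerically) 0
```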

### 1.3.2 The efficient frontier

The two-fund separation theorem states that as soon as we have two efficient portfolios (such as the minimum variance portfolio and the efficient portfolio for another required level of expected returns, as above), we can characterize the entire efficient frontier by combining these two portfolios. The code below implements the construction of the efficient frontier, which characterizes the highest expected return achievable at each level of risk. To understand the code better, make sure to familiarize yourself with the inner workings of the for loop.

c <- seq(from = -0.4, to = 1.9, by = 0.01)
res <- tibble(
  c = c,
  mu = NA,
  sd = NA
)

for (i in seq_along(c)) {
  w <- (1 - c[i]) * mvp_weights + c[i] * efp_weights
  res$mu[i] <- 12 * 100 * t(w) %*% mu
  res$sd[i] <- 12 * sqrt(100) * sqrt(t(w) %*% sigma %*% w)
}

Finally, it is simple to visualize the efficient frontier alongside the two efficient portfolios in one powerful figure using ggplot2. We also add the individual stocks in the same call.

res %>%
  ggplot(aes(x = sd, y = mu)) +
  geom_point() +
  geom_point( # locate the minimum variance and efficient portfolio
    data = res %>% filter(c %in% c(0, 1)),
    color = "red",
    size = 4
  ) +
  geom_point( # locate the individual assets
    data = tibble(mu = 12 * 100 * mu, sd = 12 * 10 * sqrt(diag(sigma))),
    aes(y = mu, x = sd), color = "blue", size = 1
  ) +
  labs(
    x = "Annualized standard deviation (in percent)",
    y = "Annualized expected return (in percent)",
    title = "Dow Jones asset returns and efficient frontier",
    subtitle = "Red dots indicate the location of the minimum variance and efficient portfolio"
  )

The black line indicates the efficient frontier: the set of portfolios a mean-variance efficient investor would choose from. Compare the performance relative to the individual assets (the blue dots); it should become clear that diversifying yields massive performance gains (at least as long as we take the parameters $$\Sigma$$ and $$\mu$$ as given).

## 1.4 Simple portfolio backtesting

In this exercise, you will familiarize yourself with functions, numerical optimization, and loops. You will compute efficient portfolio weights (that is, portfolios with the lowest possible volatility for a given level of expected return). To make things more interesting, we will also look at potential frictions such as short-selling constraints, which prevent investors from choosing the optimal portfolio. After that, we implement backtesting procedures to analyze the performance of different allocation strategies. More advanced techniques that reflect, for instance, transaction costs are presented in a separate chapter later in the course.

Exercises:

1. Use the monthly Dow Jones constituent return data from before (returns_matrix), the sample variance-covariance matrix $$\hat{\Sigma}$$ (sigma) of the returns based on the entire sample, and the sample mean return $$\hat{\mu}$$ (mu).
2. Use $$\hat{\Sigma}$$ to compute the minimum variance portfolio weights. That is, given the number of assets $$N$$, choose a weight vector $$w$$ of length $$N$$ that minimizes the portfolio volatility $\arg\min_{w} w'\hat\Sigma w\text{, such that } w'\iota = 1$ where $$\iota$$ is a vector of ones. Use numerical optimization instead of the analytic solution with the package quadprog.
3. Suppose now short-selling is prohibited. Compute the restricted minimum variance portfolio weights as a solution to the minimization problem $\arg\min_{w} w'\hat\Sigma w\text{, such that } w'\iota = 1 \text{ and } w_i\geq0 \forall i=1,\ldots,N.$ You can use the package quadprog for the purpose of numerical optimization for quadratic programming problems.
4. What is the average monthly (in-sample) return and standard deviation of the minimum variance portfolio, $$\text{Var}(r_p) =\text{Var}(w'r)$$?
5. Implement a simple backtesting strategy: Adjust your code from before and re-estimate the weights of the no-short-selling minimum variance portfolio based on a rolling window of 100 observations using a for loop. Before performing the out-of-sample exercise: What are your expectations? Which of the two strategies should deliver the lowest return volatility?
6. What is the out-of-sample portfolio return standard deviation of this portfolio? What about a naive portfolio that invests the same amount in every asset (thus, $$w_\text{Naive} = 1/N\iota$$)? Do the results reflect your initial thoughts? Why could there be deviations between the theoretically optimal portfolio and the best performing out-of-sample strategy?

Solutions: The estimated variance-covariance matrix $$\hat\Sigma$$ is a symmetric $$N \times N$$ matrix with full rank. It can be easily shown that the minimum variance portfolio weights are of the form $$w\propto \hat\Sigma^{-1}\iota$$. Therefore, we compute the inverse of $$\hat\Sigma$$ (which is done in R with solve()) and normalize the weight vector $$w$$ such that it sums up to 1.

N <- ncol(returns_matrix)
wmvp <- solve(sigma) %*% rep(1, N)
wmvp <- wmvp / sum(wmvp) 

The weights above are the minimum variance portfolio weights for an investment universe which consists of the chosen assets only.
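You can convince yourself that the closed form is indeed the minimizer by comparing it against random weight vectors on simulated data; the toy covariance matrix below is made up and unrelated to the Dow Jones sigma:

```r
set.seed(7)
S_toy <- crossprod(matrix(rnorm(16), 4, 4)) # made-up positive definite covariance
w_star <- drop(solve(S_toy) %*% rep(1, 4))
w_star <- w_star / sum(w_star) # closed-form minimum variance weights
var_star <- drop(t(w_star) %*% S_toy %*% w_star)
random_vars <- replicate(1000, {
  w <- rnorm(4)
  w <- w / sum(w) # random candidate weights that also sum to one
  drop(t(w) %*% S_toy %*% w)
})
var_star <= min(random_vars) # TRUE: no random candidate beats the closed form
```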

To compute the optimal portfolio weights in the presence of, e.g., short-selling constraints, analytic solutions to the optimization problem above are no longer feasible. However, note that we face a quadratic optimization problem with one equality constraint ($$w'\iota = 1$$) and $$N$$ inequality constraints ($$w_i\geq 0$$). Numerical optimization of such problems is well-established, and we can rely on routines readily available in R, for instance in the package quadprog.

# install.packages("quadprog")
library(quadprog)

Make sure to familiarize yourself with the documentation of the package's main function, ?solve.QP, so that you understand the following notation. In general, quadratic programming problems are of the form $\arg\min_w -\mu'w + 1/2\,w'Dw \text{ such that } A'w \geq b_0,$ where $$A$$ is an ($$\tilde N\times k$$) matrix whose first $$m$$ columns denote equality constraints and whose remaining $$k-m$$ columns denote inequality constraints. To get started, the following code computes the minimum variance portfolio weights using numerical procedures instead of the analytic solution above.

wmvp_numerical <- solve.QP(
  Dmat = sigma,
  dvec = rep(0, N), # no vector of expected returns for the MVP
  Amat = cbind(rep(1, N)), # matrix A has one column: a vector of ones
  bvec = 1, # enforces the constraint that weights sum up to one
  meq = 1 # there is one (out of one) equality constraint
)

# Check that wmvp and the numerical solution are the same (up to numerical instabilities)
cbind(wmvp, wmvp_numerical$solution)
##          [,1]     [,2]
## AMGN  0.03560  0.03560
## AXP  -0.06277 -0.06277
## BA    0.00791  0.00791
## CAT  -0.05176 -0.05176
## CSCO -0.01738 -0.01738
## CVX   0.08897  0.08897
## DIS   0.03354  0.03354
## GS    0.00871  0.00871
## HD   -0.00771 -0.00771
## HON  -0.08534 -0.08534
## IBM   0.04455  0.04455
## INTC  0.04304  0.04304
## JNJ  -0.00284 -0.00284
## JPM  -0.00463 -0.00463
## KO    0.08758  0.08758
## MCD   0.05989  0.05989
## MMM   0.10026  0.10026
## MRK   0.01417  0.01417
## MSFT  0.03638  0.03638
## NKE   0.01236  0.01236
## PG    0.26911  0.26911
## TRV   0.02035  0.02035
## UNH   0.08142  0.08142
## VZ    0.04437  0.04437
## WBA   0.04085  0.04085
## WMT   0.19217  0.19217
## AAPL  0.01121  0.01121

We see that the optimizer backs out the correct portfolio weights (that is, they are the same as the ones from the analytic solution). Now we extend the code above to include short-sale constraints.

# Make sure you understand what the matrix t(A) is doing:
A <- cbind(1, diag(N)) # matrix of constraints t(A) %*% w >= bvec = c(1, rep(0, N))

# Introduce the short-selling constraint: no element of w is allowed to be negative
w_no_short_sale <- solve.QP(
  Dmat = 2 * sigma, # efficient portfolio with risk aversion gamma = 2
  dvec = mu,
  Amat = A,
  bvec = c(1, rep(0, N)),
  meq = 1
)$solution

It seems like not much changed in the code: we added $$N$$ inequality constraints that make sure no weight is negative. This condition ensures no short-selling. The equality constraint makes sure that the weights sum up to one. The output above also illustrates everything that is returned when calling solve.QP. Familiarize yourself with the documentation, but what is crucial to understand is that you can access the vector that minimizes the objective function with w$solution after running w <- solve.QP(...), as shown above. Similarly, w$value returns the value of the objective function evaluated at the minimum. To be precise, the output of solve.QP is a list with 6 elements. List elements can be accessed with $. If in doubt, read Chapter 20.5 of R for Data Science. By changing A, you can impose additional or other restrictions. For instance, how would you compute portfolio weights that minimize the portfolio volatility under the constraint that you cannot invest more than 30% of your wealth in any individual asset?

r_p <- as.matrix(returns_matrix) %*% wmvp # portfolio returns
list(
  sd = 100 * sqrt(12) * sd(r_p), # annualized volatility (in percent)
  mean = 100 * 12 * mean(r_p) # annualized mean return (in percent)
)
## $sd
## [1] 10.9
##
## $mean
## [1] 10.2

The code snippet above shows how to compute the realized portfolio return volatility and how to generate summary statistics that can be used for backtesting purposes. However, as $$\hat\Sigma$$ is computed using the entire history of the assets, we benefit from information that no real investor could actually base her decisions on. Instead, in the following we focus on out-of-sample computation of the portfolio weights based on a rolling window. Every period (here, every month) we update the available set of information, recompute the concurrent estimate $$\hat\Sigma_t$$, and rebalance the portfolio to hold the efficient minimum variance portfolio. In the next period, we then compute the realized return of this portfolio. That way, we never use information that is only available in the future to choose the optimal portfolio.

# Small function that computes the no-short-sale minimum variance
# portfolio weights for a given matrix of returns
mvp_weights <- function(tmp) {
  tmp_sigma <- cov(tmp)
  w_no_short_sale <- solve.QP(
    Dmat = tmp_sigma, # minimum variance optimization with short-sale constraint
    dvec = rep(0, N),
    Amat = A,
    bvec = c(1, rep(0, N)),
    meq = 1
  )$solution
  return(w_no_short_sale)
}
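To attack the question of a 30% cap on individual weights raised in this section, only the constraint inputs of solve.QP change: the routine enforces t(Amat) %*% w >= bvec (with the first meq rows treated as equalities), so a cap $$w_i \leq 0.3$$ enters as $$-w_i \geq -0.3$$. A minimal sketch of the constraint construction (the n below is just a hypothetical asset count; the solve.QP call itself is omitted):

```r
n <- 5 # hypothetical number of assets for illustration
Amat_cap <- cbind(1, diag(n), -diag(n)) # full investment, w_i >= 0, -w_i >= -0.3
bvec_cap <- c(1, rep(0, n), rep(-0.3, n))
dim(Amat_cap) # n rows and 1 + 2n constraint columns
```

With these inputs and meq = 1, only the first column of Amat_cap is treated as an equality constraint.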

# Define out-of-sample periods
window_length <- 100
periods <- nrow(returns_matrix) - window_length # total number of out-of-sample periods

all_returns <- matrix(NA, nrow = periods, ncol = 2) # a matrix to collect all returns
colnames(all_returns) <- c("mvp", "naive") # we implement two strategies

for (i in 1:periods) { # rolling window
  return_window <- returns_matrix[i:(i + window_length - 1), ] # the window_length most recent returns available up to date t

  # Two portfolio strategies
  mvp <- mvp_weights(return_window)
  mvp_naive <- rep(1 / N, N) # naive strategy simply invests the same in each asset

  # Store realized returns w'r_t+1 in the matrix all_returns
  all_returns[i, 1] <- as.matrix(returns_matrix[i + window_length, ]) %*% mvp # realized mvp return
  all_returns[i, 2] <- as.matrix(returns_matrix[i + window_length, ]) %*% mvp_naive # realized naive return
}

The framework above allows you to add additional portfolio strategies and to compare their out-of-sample performance to the benchmarks above. How do the portfolios actually perform? To make the values easier to evaluate, we convert everything to annualized measures.

all_returns %>%
  as_tibble() %>%
  pivot_longer(everything()) %>% # tidy up the tibble for easier summary statistics
  group_by(name) %>%
  summarise(across(
    value,
    list(
      Mean = ~ 12 * mean(.),
      Volatility = ~ sqrt(12) * sd(.),
      Sharpe = ~ sqrt(12) * mean(.) / sd(.)
    )
  ))
## # A tibble: 2 x 4
##   name  value_Mean value_Volatility value_Sharpe
##   <chr>      <dbl>            <dbl>        <dbl>
## 1 mvp        0.110            0.124        0.890
## 2 naive      0.141            0.149        0.945

## 1.5 Equivalence between certainty equivalent maximization and minimum variance optimization

Slide 42 (Parameter uncertainty) argues that an investor with a quadratic utility function who maximizes the certainty equivalent $\max_\omega CE(\omega) = \omega'\mu - \frac{\gamma}{2} \omega'\Sigma \omega \text{ s.t. } \iota'\omega = 1$ faces an optimization problem equivalent to a framework where portfolio weights are chosen with the aim to minimize volatility given a pre-specified level of expected returns $\min_\omega \omega'\Sigma \omega \text{ s.t. } \omega'\mu = \bar\mu \text{ and } \iota'\omega = 1.$

Note the differences: In the first case, the investor has a (known) risk aversion $$\gamma$$ which determines her optimal balance between risk ($$\omega'\Sigma\omega$$) and return ($$\omega'\mu$$). In the second case, the investor has a target return she wants to achieve while minimizing the volatility. Intuitively, both approaches are closely connected if we consider that the risk aversion $$\gamma$$ determines the desired return $$\bar\mu$$: more risk-averse investors (higher $$\gamma$$) will choose a lower target return to keep their volatility down. The efficient frontier then spans all possible portfolios depending on the risk aversion $$\gamma$$, starting from the minimum variance portfolio ($$\gamma = \infty$$).

Exercises: Prove that there is an equivalence between the optimal portfolio weights in both cases.

Solutions: In the following, we solve for the optimal portfolio weights of a certainty equivalent maximizing investor. The first-order condition reads \begin{aligned} \mu - \lambda \iota &= \gamma \Sigma \omega \\ \Leftrightarrow \omega &= \frac{1}{\gamma}\Sigma^{-1}\left(\mu - \lambda\iota\right). \end{aligned}

Next, we make use of the constraint $$\iota'\omega = 1$$: \begin{aligned} \iota'\omega &= 1 = \frac{1}{\gamma}\left(\iota'\Sigma^{-1}\mu - \lambda\iota'\Sigma^{-1}\iota\right)\\ \Rightarrow \lambda &= \frac{1}{\iota'\Sigma^{-1}\iota}\left(\iota'\Sigma^{-1}\mu - \gamma \right). \end{aligned}

Plugging in the value of $$\lambda$$ reveals \begin{aligned} \omega &= \frac{1}{\gamma}\Sigma^{-1}\left(\mu - \frac{1}{\iota'\Sigma^{-1}\iota}\left(\iota'\Sigma^{-1}\mu - \gamma \right)\iota\right) \\ \Rightarrow \omega &= \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota} + \frac{1}{\gamma}\left(\Sigma^{-1} - \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota}\iota'\Sigma^{-1}\right)\mu\\ &= \omega_\text{mvp} + \frac{1}{\gamma}\left(\Sigma^{-1}\mu - \frac{\iota'\Sigma^{-1}\mu}{\iota'\Sigma^{-1}\iota}\Sigma^{-1}\iota\right). \end{aligned}

As shown in the slides, this corresponds to the efficient portfolio with desired return $$\bar\mu$$ such that (in the notation of the slides) $\frac{1}{\gamma} = \frac{\tilde\lambda}{2} = \frac{\bar\mu - D/C}{E - D^2/C},$ which implies that the desired return is just $\bar\mu = \frac{D}{C} + \frac{1}{\gamma}\left({E - D^2/C}\right),$ which equals $$\frac{D}{C} = \omega_\text{mvp}'\mu$$ for $$\gamma\rightarrow \infty$$, as expected.
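The algebra above can be double-checked numerically; the covariance matrix and mean vector below are made up for illustration. The certainty equivalent maximizer's weights coincide with the efficient portfolio for the implied target return:

```r
set.seed(1)
n <- 4
Sigma_toy <- crossprod(matrix(rnorm(n * n), n, n)) / 100 # made-up covariance matrix
mu_toy <- seq(0.005, 0.02, length.out = n) # made-up mean returns
ones <- rep(1, n)
gamma <- 4 # risk aversion
S_inv <- solve(Sigma_toy)
C_toy <- drop(t(ones) %*% S_inv %*% ones)
D_toy <- drop(t(ones) %*% S_inv %*% mu_toy)
E_toy <- drop(t(mu_toy) %*% S_inv %*% mu_toy)
w_mvp_toy <- drop(S_inv %*% ones) / C_toy
# Certainty equivalent maximizer's weights from the derivation above
w_ce <- w_mvp_toy + (1 / gamma) * drop(S_inv %*% mu_toy - D_toy / C_toy * S_inv %*% ones)
# Efficient portfolio for the implied target return mu_bar
mu_bar_toy <- D_toy / C_toy + (1 / gamma) * (E_toy - D_toy^2 / C_toy)
lambda_toy <- 2 * (mu_bar_toy - D_toy / C_toy) / (E_toy - D_toy^2 / C_toy)
w_eff <- w_mvp_toy + lambda_toy / 2 * drop(S_inv %*% mu_toy - D_toy / C_toy * S_inv %*% ones)
max(abs(w_ce - w_eff)) # numerically zero
```

Since $$\tilde\lambda/2 = 1/\gamma$$ by construction, the two weight vectors agree up to floating point precision.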