This vignette illustrates the design of risk parity portfolios, which are widely used by practitioners in the financial industry, with the package riskParityPortfolio. It describes the underlying algorithms and compares their performance against existing packages.

Quick Start

Let’s start by loading the package:

library(riskParityPortfolio)
?riskParityPortfolio  # to get help for the function

The simplest use is for the vanilla risk parity portfolio:

rpp_vanilla <- riskParityPortfolio(Sigma)
names(rpp_vanilla)
#> [1] "w"                          "relative_risk_contribution" "obj_fun"                    "is_feasible"
round(rpp_vanilla$w, digits = 3)
#>  AAPL   AMD   ADI  ABBV  AEZS     A   APD    AA    CF 
#> 0.157 0.068 0.125 0.133 0.045 0.129 0.158 0.085 0.101

To obtain the naive diagonal solution, also known as the inverse volatility portfolio, make use of the formulation argument:

rpp_naive <- riskParityPortfolio(Sigma, formulation = "diag")

For a more realistic formulation including the expected return in the objective and box constraints: \[\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & \sum_{i,j=1}^{N}\left(\dfrac{w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{i}}{b_i}-\dfrac{w_{j}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{j}}{b_j}\right)^{2} \color{blue}{- \lambda \;\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\mu}}\\ \textsf{subject to} & \boldsymbol{w} \ge \boldsymbol{0}, \quad\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1, \quad\color{blue}{\boldsymbol{l}\le\boldsymbol{w}\le\boldsymbol{u}}. \end{array}\]

rpp_mu <- riskParityPortfolio(Sigma, formulation = "rc-over-b-double-index",
                              mu = mu, lmd_mu = 1e-3, # for expected return term
                              w_ub = 0.16)            # for box upper bound constraints

To plot and compare the results:

w_all <- cbind("EWP"           = rep(1/nrow(Sigma), nrow(Sigma)),
               "RPP (naive)"   = rpp_naive$w,
               "RPP (vanilla)" = rpp_vanilla$w,
               "RPP + mu"      = rpp_mu$w)
barplotPortfolioRisk(w_all, Sigma)

What is a Risk Parity Portfolio?

Signal model

Let us denote by \(\boldsymbol{r}_{t}\) the vector of the returns of \(N\) assets at time \(t\) and suppose they follow an i.i.d. distribution (which is not a totally accurate assumption, but it is widely adopted, nonetheless) with mean \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\).

The portfolio vector \(\boldsymbol{w}\) denotes the normalized dollar weights allocated to the \(N\) assets, such that \(\boldsymbol{1}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{w}=1\), and the portfolio return is then \(r_{t}^{\sf portf} = \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{r}_{t}\), with expected return \(\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\mu}\) and variance \(\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}\).

Modern Portfolio Theory

In 1952, Markowitz proposed in his seminal paper (Markowitz, 1952) to find a trade-off between the portfolio expected return and its risk measured by the variance: \[\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{maximize}} & \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\mu}-\lambda\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}\\ \textsf{subject to} & \boldsymbol{w} \ge \boldsymbol{0}, \quad\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1, \end{array}\] where \(\lambda\) is a parameter that controls how risk-averse the investor is.
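This mean-variance problem is a quadratic program, so it can be sketched numerically with the quadprog solver. The snippet below is a minimal illustration with synthetic data; the risk-aversion value lmd is an arbitrary choice for the example:

```r
library(quadprog)

# synthetic return data (for illustration only)
set.seed(42)
N <- 5
X <- matrix(rnorm(100 * N), nrow = 100)  # 100 observations of N assets
Sigma <- cov(X)
mu <- colMeans(X)
lmd <- 2  # risk-aversion parameter (arbitrary)

# maximize w'mu - lmd w'Sigma w  <=>  minimize (1/2) w'(2 lmd Sigma) w - mu'w
# solve.QP() minimizes (1/2) w'Dw - d'w  s.t.  A'w >= b (first meq rows as equalities)
res <- solve.QP(Dmat = 2 * lmd * Sigma,
                dvec = mu,
                Amat = cbind(rep(1, N), diag(N)),  # 1'w = 1 and w >= 0
                bvec = c(1, rep(0, N)),
                meq  = 1)
w_mvp <- res$solution
round(w_mvp, 3)
```

The weights sum to one and are nonnegative by construction; sweeping lmd traces out the mean-variance trade-off.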

Markowitz’s portfolio has been heavily criticized for over half a century and has never been fully embraced by practitioners, among other reasons because:

  • it only considers the risk of the portfolio as a whole and ignores risk diversification (i.e., it may concentrate too much risk in a few assets, as was observed in the 2008 financial crisis);
  • it is highly sensitive to estimation errors in the parameters (i.e., small estimation errors may change the designed portfolio drastically).

Although portfolio management did not change much during the 40 years after the seminal works of Markowitz and Sharpe, the development of risk budgeting techniques marked an important milestone in deepening the relationship between risk and asset management.

From “dollar” to risk diversification

Since the global financial crisis in 2008, risk management has become particularly important, arguably more so than performance management, in portfolio optimization. Indeed, risk parity became a popular financial model in the aftermath of that crisis (Asness et al., 2012; E. Qian, 2005).

The alternative risk parity portfolio design has been receiving significant attention from both the theoretical and practical sides (E. E. Qian, 2016; Roncalli, 2013) because it: (1) diversifies the risk, instead of the capital, among the assets and (2) is less sensitive to parameter estimation errors.

Nowadays, pension funds and institutional investors are using this approach in the development of smart indexing and the redefinition of long-term investment policies.

Risk parity is an approach to portfolio management that focuses on allocation of risk rather than allocation of capital. The risk parity approach asserts that when asset allocations are adjusted to the same risk level, the portfolio can achieve a higher Sharpe ratio and can be more resistant to market downturns.

While the minimum variance portfolio tries to minimize the variance (with the disadvantage that a few assets may be the ones contributing most to the risk), the risk parity portfolio tries to constrain each asset (or asset class, such as bonds, stocks, real estate, etc.) to contribute equally to the portfolio overall volatility.

The term “risk parity” was coined by Edward Qian from PanAgora Asset Management (E. Qian, 2005) and was then adopted by the asset management industry. Some of its theoretical components were developed in the 1950s and 1960s, but the first risk parity fund, called the “All Weather” fund, was pioneered by Bridgewater Associates LP in 1996. The interest in the risk parity approach has increased since the financial crisis in the late 2000s as the risk parity approach fared better than portfolios designed in traditional fashions.

Some portfolio managers have expressed skepticism about risk parity, while others point to its performance during the financial crisis of 2007-2008 as an indication of its potential success.

The idea of the risk parity portfolio (RPP), aka equal risk portfolio (ERP), is to “equalize” the risk so that the risk contribution of every asset is equal, rather than simply having an equal capital allocation like the equally weighted portfolio (EWP):

Risk parity portfolio

From Euler’s theorem, the volatility of the portfolio \(\sigma\left(\boldsymbol{w}\right)=\sqrt{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}\) can be decomposed as \[\sigma\left(\boldsymbol{w}\right)=\sum_{i=1}^{N}w_i\frac{\partial\sigma}{\partial w_i} = \sum_{i=1}^N\frac{w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{i}}{\sqrt{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}}.\]

The risk contribution (RC) from the \(i\)th asset to the total risk \(\sigma(\boldsymbol{w})\) is defined as \[{\sf RC}_i =\frac{w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{\sqrt{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}}\] which satisfies \(\sum_{i=1}^{N}{\sf RC}_i=\sigma\left(\boldsymbol{w}\right)\).

The relative risk contribution (RRC) is a normalized version: \[{\sf RRC}_i = \frac{w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}\] so that \(\sum_{i=1}^{N}{\sf RRC}_i=1\).

The risk parity portfolio (RPP) attempts to “equalize” the risk contributions: \[{\sf RC}_i = \frac{1}{N}\sigma(\boldsymbol{w})\quad\text{or}\quad{\sf RRC}_i = \frac{1}{N}.\]

More generally, the risk budgeting portfolio (RBP) attempts to allocate the risk according to the risk profile determined by the weights \(\boldsymbol{b}\) (with \(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{b}=1\) and \(\boldsymbol{b}\ge \boldsymbol{0}\)): \[{\sf RC}_i = b_i \sigma(\boldsymbol{w})\quad\text{or}\quad{\sf RRC}_i = b_i.\]

In practice, one can express the condition \({\sf RC}_i = \frac{1}{N}\sigma(\boldsymbol{w})\) in different equivalent ways, such as \[w_i(\boldsymbol{\Sigma}\boldsymbol{w})_{i} = w_j(\boldsymbol{\Sigma}\boldsymbol{w})_{j}, \quad\forall i, j.\] The budget condition \({\sf RC}_i = b_i \sigma(\boldsymbol{w})\) can also be expressed as \[w_i (\boldsymbol{\Sigma}\boldsymbol{w})_i = b_i \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}, \quad\forall i.\]
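These definitions are easy to verify numerically. The snippet below, with a made-up covariance matrix, checks that the risk contributions sum to the portfolio volatility (Euler decomposition) and the relative risk contributions sum to one:

```r
# numerical check of the Euler decomposition (synthetic data)
set.seed(42)
N <- 5
Sigma <- cov(matrix(rnorm(100 * N), nrow = 100))
w <- rep(1/N, N)  # any feasible portfolio; here the equally weighted one

sigma <- sqrt(drop(t(w) %*% Sigma %*% w))   # portfolio volatility
RC  <- w * drop(Sigma %*% w) / sigma        # risk contributions
RRC <- w * drop(Sigma %*% w) / sigma^2      # relative risk contributions

all.equal(sum(RC), sigma)  # sums to the portfolio volatility
all.equal(sum(RRC), 1)     # sums to one
```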

Solving the Risk Parity Portfolio (RPP)

Naive diagonal formulation

Assuming that the assets are uncorrelated, i.e., that \(\boldsymbol{\Sigma}\) is diagonal, and simply using the volatilities \(\boldsymbol{\sigma} = \sqrt{{\sf diag}(\boldsymbol{\Sigma})}\), one obtains \[\boldsymbol{w} = \frac{\boldsymbol{\sigma}^{-1}}{\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\sigma}^{-1}}\] or, more generally, \[\boldsymbol{w} = \frac{\sqrt{\boldsymbol{b}}\odot\boldsymbol{\sigma}^{-1}}{\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\left(\sqrt{\boldsymbol{b}}\odot\boldsymbol{\sigma}^{-1}\right)}.\] However, for a non-diagonal \(\boldsymbol{\Sigma}\) or in the presence of additional constraints or objective terms, a closed-form solution does not exist and an optimization procedure has to be employed. The diagonal solution above can always be used and is called the naive risk budgeting portfolio.
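The closed-form naive solutions above are one-liners in R; here is a sketch with synthetic data:

```r
# naive (inverse volatility) solution in closed form (synthetic data)
set.seed(42)
N <- 5
Sigma <- cov(matrix(rnorm(100 * N), nrow = 100))
sigma <- sqrt(diag(Sigma))   # asset volatilities
b <- rep(1/N, N)             # equal risk budgets

w_iv    <- (1/sigma) / sum(1/sigma)              # inverse volatility portfolio
w_naive <- (sqrt(b)/sigma) / sum(sqrt(b)/sigma)  # general naive risk budgeting

all.equal(w_iv, w_naive)  # the two coincide for equal budgets b = 1/N
```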

Vanilla convex formulation

Suppose we only have the constraints \(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1\) and \(\boldsymbol{w} \ge \boldsymbol{0}\). Then, after the change of variable \(\boldsymbol{x}=\boldsymbol{w}/\sqrt{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}\), the equations \(w_i (\boldsymbol{\Sigma}\boldsymbol{w})_i = b_i \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}\) become \(x_i\left(\boldsymbol{\Sigma}\boldsymbol{x}\right)_i = b_i\) or, more compactly in vector form, \[\boldsymbol{\Sigma}\boldsymbol{x} = \boldsymbol{b}/\boldsymbol{x}\] with \(\boldsymbol{x} \ge \boldsymbol{0}\), and we can always recover the portfolio by normalizing: \(\boldsymbol{w} = \boldsymbol{x}/(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{x})\).

At this point, one could use a nonlinear multivariate root finder for \(\boldsymbol{\Sigma}\boldsymbol{x} = \boldsymbol{b}/\boldsymbol{x}\). For example, in R we can use the package rootSolve.
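For instance, a sketch with rootSolve::multiroot on synthetic data (the starting point below is just a reasonable guess, namely the naive inverse-volatility point):

```r
library(rootSolve)

set.seed(42)
N <- 5
Sigma <- cov(matrix(rnorm(100 * N), nrow = 100))
b <- rep(1/N, N)

# risk budgeting equations: Sigma x - b/x = 0
risk_budget_eqs <- function(x) drop(Sigma %*% x) - b/x

x <- multiroot(f = risk_budget_eqs, start = 1/sqrt(diag(Sigma)))$root
w <- x / sum(x)  # normalize to obtain the portfolio

# the relative risk contributions should all be (approximately) b_i = 1/N
round(drop(w * (Sigma %*% w)) / drop(t(w) %*% Sigma %*% w), 3)
```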

With the goal of designing risk budget portfolios, Spinu proposed in (Spinu, 2013) to solve the following convex optimization problem: \[\underset{\boldsymbol{x}\ge\boldsymbol{0}}{\textsf{minimize}} \quad \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \sum_{i=1}^{N}b_i\log(x_i),\] where the portfolio can be recovered as \(\boldsymbol{w} = \boldsymbol{x}/(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{x})\).

Indeed, Spinu realized in (Spinu, 2013) that the risk budgeting equation \(\boldsymbol{\Sigma}\boldsymbol{x} = \boldsymbol{b}/\boldsymbol{x}\) corresponds precisely to the gradient of the convex function \(f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\) set to zero: \[\nabla f(\boldsymbol{x}) = \boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}/\boldsymbol{x} = \boldsymbol{0}.\]

Thus, solving the convex optimization problem \[\underset{\boldsymbol{x}\ge\boldsymbol{0}}{\textsf{minimize}} \quad \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\] is a convenient way to compute the risk budgeting portfolio, since its optimality condition is precisely \(\boldsymbol{\Sigma}\boldsymbol{x} = \boldsymbol{b}/\boldsymbol{x}\).

Such a solution can be computed using a general-purpose convex optimization package, but faster algorithms, such as the Newton method and the cyclical coordinate descent method employed in (Spinu, 2013) and (Griveau-Billion et al., 2013), are implemented in this package. (Yet another convex formulation was proposed in (Kaya and Lee, 2012).)
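For intuition, a damped Newton iteration on this convex function can be written in a few lines, since the gradient is \(\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}/\boldsymbol{x}\) and the Hessian is \(\boldsymbol{\Sigma} + {\sf Diag}(\boldsymbol{b}/\boldsymbol{x}^2)\). This is an illustrative toy with synthetic data, not the package's internal implementation:

```r
set.seed(42)
N <- 5
Sigma <- cov(matrix(rnorm(100 * N), nrow = 100))
b <- rep(1/N, N)

x <- 1 / sqrt(diag(Sigma))  # naive starting point
for (k in 1:100) {
  grad <- drop(Sigma %*% x) - b/x             # gradient of f
  if (max(abs(grad)) < 1e-10) break
  d <- solve(Sigma + diag(b/x^2), grad)       # Newton direction (Hessian solve)
  t_step <- 1
  while (any(x - t_step * d <= 0)) t_step <- t_step/2  # stay in the positive orthant
  x <- x - t_step * d
}
w <- x / sum(x)
round(drop(w * (Sigma %*% w)) / drop(t(w) %*% Sigma %*% w), 3)  # ~ b
```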

General nonconvex formulation

The previous methods are based on a convex reformulation of the problem so they are guaranteed to converge to the optimal risk budgeting solution. However, they can only be employed for the simplest risk budgeting formulation with a simplex constraint set (i.e., \(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1\) and \(\boldsymbol{w} \ge \boldsymbol{0}\)). They cannot be used if

  • we have additional constraints, such as allowing shortselling or box constraints: \(l_i \le w_i \le u_i\)
  • on top of the risk budgeting constraints \(w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i = b_i \;\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}\) we have other objectives like maximizing the expected return \(\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\mu}\) or minimizing the overall variance \(\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}\).

For those more general cases, we need more sophisticated formulations, which unfortunately are not convex. The idea is to try to achieve equal risk contributions \({\sf RC}_i = \frac{w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{\sqrt{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}}\) by penalizing the differences between the terms \(w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{i}\).

There are many reformulations possible. For illustrative purposes, one such formulation is \[\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & \sum_{i,j=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{i}-w_{j}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{j}\right)^{2} \color{blue}{- \;F(\boldsymbol{w})}\\ \textsf{subject to} & \boldsymbol{w} \ge \boldsymbol{0}, \quad\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1, \quad\color{blue}{\boldsymbol{w}\in\cal{W}} \end{array}\] where \(F(\boldsymbol{w})\) denotes some additional objective function and \(\cal{W}\) denotes an arbitrary convex set of constraints. More expressions for the risk concentration terms are listed in Appendix I.
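To make the objective concrete, the double-index risk concentration term above can be evaluated directly in R (synthetic data; the term vanishes exactly at a risk parity solution and is generally nonzero otherwise):

```r
# double-index risk concentration: sum_{i,j} (w_i (Sigma w)_i - w_j (Sigma w)_j)^2
risk_concentration <- function(w, Sigma) {
  r <- w * drop(Sigma %*% w)   # the terms w_i (Sigma w)_i
  sum(outer(r, r, "-")^2)
}

set.seed(42)
N <- 5
Sigma <- cov(matrix(rnorm(100 * N), nrow = 100))

risk_concentration(rep(1/N, N), Sigma)  # generally nonzero for the EWP
```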

The way to solve this general problem is derived in (Feng and Palomar, 2015, 2016) and is based on a powerful optimization framework named successive convex approximation (SCA) (Scutari et al., 2014). See Appendix II for a general idea of the method.

Using the Package riskParityPortfolio

A simple code example on how to design a risk parity portfolio is as follows:

library(riskParityPortfolio)

# generate synthetic data
set.seed(42)
N <- 10
V <- matrix(rnorm(N^2), nrow = N)
Sigma <- cov(V)

# compute risk parity portfolio
rpp <- riskParityPortfolio(Sigma)

# plot
barplotPortfolioRisk(rpp$w, Sigma, col = "#A29BFE")

As presented earlier, risk parity portfolios are designed so as to ensure equal risk contributions from the assets, which can be observed in the chart above.

Modern Risk Parity Portfolio

The design of risk parity portfolios as solved by (Spinu, 2013) and (Griveau-Billion et al., 2013) is of paramount importance both for academia and industry. However, practitioners would like the ability to include additional constraints and objective terms desired in practice, such as the mean return, box constraints, etc. In such cases, the risk-contribution constraints cannot be met exactly due to the trade-off among different objectives or additional constraints.

RPP with additional expected return term

Let us explore, for instance, the effect of including the expected return as an additional objective in the optimization problem. The problem can be formulated as \[\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & R(\boldsymbol{w}) - \lambda_{\mu} \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\mu}\\ \textsf{subject to} & \boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1, \boldsymbol{w} \ge \boldsymbol{0}, \end{array}\] where \(R(\boldsymbol{w}) = \sum_{i=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i - b_i\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}\right)^{2}\) is the risk concentration function or risk parity function, \(\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\mu}\) is the expected return, and \(\lambda_{\mu}\) is a trade-off parameter.

library(ggplot2)
library(riskParityPortfolio)

N <- 10
V <- matrix(rnorm(N^2), nrow = N)
Sigma <- cov(V)
mu <- runif(N)

lmd_sweep <- 10^seq(-5, 2, .25)
mean_return <- c()
risk_concentration <- c()
for (lmd_mu in lmd_sweep) {
  rpp <- riskParityPortfolio(Sigma, mu = mu, lmd_mu = lmd_mu,
                             formulation = "rc-over-sd vs b-times-sd")
  mean_return <- c(mean_return, rpp$mean_return)
  risk_concentration <- c(risk_concentration, rpp$risk_concentration)
}

ggplot(data.frame(risk_concentration, mean_return),
       aes(x = risk_concentration, y = mean_return)) +
  geom_line() + geom_point() +
  labs(title = "Expected Return vs Risk Concentration", x = "Risk Concentration", y = "Expected Return")

RPP with additional variance term

Similarly, the riskParityPortfolio package allows us to include the variance \(\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}\) as an objective term: \[\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & R(\boldsymbol{w}) + \lambda_{\sf var} \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}\\ \textsf{subject to} & \boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1, \boldsymbol{w} \ge \boldsymbol{0}, \end{array}\] where \(\lambda_{\sf var}\) is a trade-off parameter.

In the code, this can be done by passing a positive value to the parameter lmd_var. Let’s check the following illustrative example, which depicts the trade-off between volatility and risk parity:

library(ggplot2)
library(riskParityPortfolio)

load("Sigma_mu.RData")

lmd_sweep <- 10^seq(-5, 5, .25)
variance <- c()
risk_concentration <- c()
for (lmd_var in lmd_sweep) {
  rpp <- riskParityPortfolio(Sigma, lmd_var = lmd_var,
                             formulation = "rc-over-sd vs b-times-sd")
  variance <- c(variance, rpp$variance)
  risk_concentration <- c(risk_concentration, rpp$risk_concentration)
}
volatility <- sqrt(variance)

ggplot(data.frame(risk_concentration, volatility),
       aes(x = risk_concentration, y = volatility)) +
  geom_line() + geom_point() +
  labs(title = "Volatility vs Risk Concentration", x = "Risk Concentration", y = "Volatility")

RPP with general linear constraints

In version 2.0, we added support for general linear constraints, i.e., riskParityPortfolio is now able to solve the following problem: \[\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & R(\boldsymbol{w}) + \lambda F(\boldsymbol{w})\\ \textsf{subject to} & \boldsymbol{C}\boldsymbol{w} = \boldsymbol{c},~~\boldsymbol{D}\boldsymbol{w} \leq \boldsymbol{d}. \end{array}\]

Users interested in the details of the algorithm used to solve this problem are referred to Section V (Advanced Solving Approaches) of (Feng and Palomar, 2015). In summary, the algorithm fits well within the SCA framework, while preserving speed and scalability.

It was recently mentioned by (Richard and Roncalli, 2019) that the problem of designing risk parity portfolios with general constraints is harder than it seems. Indeed, (Richard and Roncalli, 2019) shows that, after imposing general linear constraints, the property of equal risk contributions (ERC) is unlikely to be preserved among the assets affected by the constraints.

Let’s check out a numerical example from (Richard and Roncalli, 2019). Suppose we have a universe of eight assets and we would like to design a risk parity portfolio \(\boldsymbol{w}\) satisfying the following constraints: \[ w_5 + w_6 + w_7 + w_8 \geq 30\%, \] \[ w_2 + w_6 \geq w_1 + w_5 + 5\%, \] and \[ \sum_i w_i = 100\% . \]

# compute the covariance matrix
vol <- c(0.05, 0.05, 0.07, 0.1, 0.15, 0.15, 0.15, 0.18)
Corr <- rbind(c(100,  80,  60, -20, -10, -20, -20, -20),
              c( 80, 100,  40, -20, -20, -10, -20, -20),
              c( 60,  40, 100,  50,  30,  20,  20,  30),
              c(-20, -20,  50, 100,  60,  60,  50,  60),
              c(-10, -20,  30,  60, 100,  90,  70,  70),
              c(-20, -10,  20,  60,  90, 100,  60,  70),
              c(-20, -20,  20,  50,  70,  60, 100,  70),
              c(-20, -20,  30,  60,  70,  70,  70, 100)) / 100
Sigma <- Corr * (vol %o% vol)

# define linear constraints
Dmat <- matrix(0, 2, 8)
Dmat[1, ] <- c(rep(0, 4), rep(-1, 4))
Dmat[2, ] <- c(1, -1, 0, 0, 1, -1, 0, 0)
dvec <- c(-0.30, -0.05)

# design portfolio
rpp <- riskParityPortfolio(Sigma, Dmat = Dmat, dvec = dvec)

# plot portfolio weights
barplotPortfolioRisk(rpp$w, Sigma)

As we can observe, the risk contributions are somewhat clustered according to the relationship among assets defined by the linear constraints, as mentioned by (Richard and Roncalli, 2019).

The linear constraints are obviously satisfied:

# equality constraints
print(sum(rpp$w))
#> [1] 1

# inequality constraints
print(Dmat %*% rpp$w)
#>       [,1]
#> [1,] -0.30
#> [2,] -0.05

The results obtained by our implementation agree with those reported by (Richard and Roncalli, 2019).

A practical example using FAANG price data

In (Souza, 2019), the author showed how to build a risk parity portfolio for FAANG companies (Facebook, Apple, Amazon, Netflix, and Google) using riskParityPortfolio. Here, we will attempt to replicate their backtest results, but using the package portfolioBacktest instead.

library(xts)
library(portfolioBacktest)
library(riskParityPortfolio)

# download price data
faang_data <- stockDataDownload(c("GOOG", "NFLX", "AAPL", "AMZN", "FB"),
                                from = "2014-01-01", to = "2019-06-25")

# define portfolios to be backtested
# risk parity portfolio
risk_parity <- function(dataset, w_current) {
  prices <- dataset$adjusted
  log_returns <- diff(log(prices))[-1]
  return(riskParityPortfolio(cov(log_returns))$w)
}

# tangency portfolio (maximum sharpe ratio)
library(quadprog)
max_sharpe_ratio <- function(dataset, w_current) {
    prices <- dataset$adjusted
    log_returns <- diff(log(prices))[-1]
    N <- ncol(prices)
    Sigma <- cov(log_returns)
    mu <- colMeans(log_returns)
    if (all(mu <= 1e-8))
        return(rep(0, N))
    Dmat <- 2 * Sigma
    Amat <- diag(N)
    Amat <- cbind(mu, Amat)
    bvec <- c(1, rep(0, N))
    dvec <- rep(0, N)
    res <- solve.QP(Dmat = Dmat, dvec = dvec, Amat = Amat, bvec = bvec, meq = 1)
    w <- res$solution
    return(w/sum(w))
}

# call portfolioBacktest and benchmark against the max. sharpe ratio portfolio
bt <- portfolioBacktest(list("risk parity portfolio" = risk_parity,
                             "tangency portfolio"    = max_sharpe_ratio),
                        list(faang_data),
                        lookback = 12*20,
                        optimize_every = 3*20,
                        rebalance_every = 3*20)

# dates of the designed portfolios
index(bt$tangency$data1$w_designed)
#> integer(0)

# check performance summary
backtestSummary(bt)$performance
#>                    risk parity portfolio tangency portfolio
#> Sharpe ratio                1.212936e+00       8.742348e-01
#> max drawdown                3.038298e-01       3.475521e-01
#> annual return               2.969885e-01       2.535384e-01
#> annual volatility           2.448509e-01       2.900118e-01
#> Sortino ratio               1.727319e+00       1.235928e+00
#> downside deviation          1.719361e-01       2.051401e-01
#> Sterling ratio              9.774831e-01       7.294977e-01
#> Omega ratio                 1.251229e+00       1.176488e+00
#> VaR (0.95)                  2.280477e-02       2.769914e-02
#> CVaR (0.95)                 3.730691e-02       4.459333e-02
#> rebalancing period          5.989474e+01       5.989474e+01
#> turnover                    1.375223e-03       1.185713e-02
#> ROT (bps)                   7.754604e+03       7.819831e+02
#> cpu time                    1.121053e-02       4.263158e-03
#> failure rate                0.000000e+00       0.000000e+00

# plot cumulative returns chart
backtestChartCumReturn(bt)

# plot max drawdown chart
backtestChartDrawdown(bt)

# plot assets exposures over time
backtestChartStackedBar(bt, portfolio = "risk parity portfolio", legend = TRUE)
#> Warning: position_stack requires non-overlapping x intervals

backtestChartStackedBar(bt, portfolio = "tangency portfolio" , legend = TRUE)
#> Warning: position_stack requires non-overlapping x intervals
#> position_stack requires non-overlapping x intervals

Indeed, the charts are quite similar to those reported in (Souza, 2019). Note that the weight evolution of Markowitz’s tangency portfolio is quite sensitive to minute changes in the rebalance/optimization dates, whereas no significant difference is noticed in the risk parity portfolio weights.

Comparison with other Packages

Other R packages with the goal of designing risk parity portfolios do exist, such as FinCovRegularization, cccp, and RiskPortfolios. Let’s check how they perform against riskParityPortfolio. (Note that other packages, such as FRAPO, use cccp under the hood.)

library(FinCovRegularization)
library(cccp)
library(RiskPortfolios)

# generate synthetic data
set.seed(42)
N <- 10
V <- matrix(rnorm(N*(N+N/5)), N+N/5, N)
Sigma <- cov(V)

# uniform initial guess for the portfolio weights
w0 <- rep(1/N, N)

# compute risk parity portfolios using different methods
rpp <- riskParityPortfolio(Sigma, w0 = w0, formulation = "rc-double-index")
#> Warning in riskParityPortfolio(Sigma, w0 = w0, formulation = "rc-double-index"): The problem is a vanilla risk parity
#> portofolio, but a nonconvex formulation has been chosen. Consider not specifying the formulation argument in order to
#> use the convex formulation and get a guaranteed global solution.
fincov_w <- FinCovRegularization::RiskParity(Sigma)
riskport_w <- optimalPortfolio(Sigma = Sigma,
                               control = list(type = "erc",
                                              constraint = "lo"))
cccp_w <- c(getx(cccp::rp(w0, Sigma, mrc = w0, optctrl = ctrl(trace = FALSE))))

# plot
w_all <- cbind("riskParityPortfolio"  = rpp$w,
               "FinCovRegularization" = fincov_w,
               "cccp"                 = cccp_w,
               "RiskPortfolios"       = riskport_w)
barplotPortfolioRisk(w_all, Sigma)

Depending on the condition number of the covariance matrix, we found that the packages FinCovRegularization and RiskPortfolios may fail unexpectedly. Apart from that, all functions perform similarly.

Now, let’s see a comparison, in terms of computational time, of our cyclical coordinate descent implementation against the rp() function from the cccp package and the optimalPortfolio() function from the RiskPortfolios package. (For a fair comparison, instead of calling our function riskParityPortfolio(), we directly call the core internal function risk_parity_portfolio_ccd_choi(), which only computes the risk parity weights, just like rp() and optimalPortfolio().)

library(microbenchmark)
library(ggplot2)
library(cccp)
library(RiskPortfolios)
library(riskParityPortfolio)

N <- 100
V <- matrix(rnorm(N^2), ncol = N)
Sigma <- cov(V)
b <- rep(1/N, N)

op <- microbenchmark(
          cccp = rp(b, Sigma, b, optctrl = ctrl(trace = FALSE)),
          riskParityPortfolio =
            riskParityPortfolio:::risk_parity_portfolio_ccd_choi(Sigma, b, 1e-6, 50),
          RiskPortfolios =
            optimalPortfolio(Sigma = Sigma,
                             control = list(type = 'erc', constraint = 'lo')),
          times = 10L)
print(op)
#> Unit: microseconds
#>                 expr        min         lq        mean     median         uq        max neval
#>                 cccp  15356.032  16072.666  17332.4621  16886.948  18107.322  21380.671    10
#>  riskParityPortfolio     36.346     47.425    112.7878     64.454    138.667    339.693    10
#>       RiskPortfolios 324381.602 332015.965 355327.0154 354580.632 366506.938 408736.297    10
autoplot(op)
#> Coordinate system already present. Adding new coordinate system, which will replace the existing one.

As can be observed, our implementation is orders of magnitude faster than the interior-point method used by cccp and the formulation used by RiskPortfolios.

Appendix I - Risk concentration formulations

In general, with different constraints and objective functions, exact parity cannot be achieved and one needs to define a risk term to be minimized: \(R(\boldsymbol{w}) = \sum_{i=1}^{N}\left(g_{i}\left(\boldsymbol{w}\right)\right)^{2}\), where the \(g_{i}\)’s denote the different risk contribution errors, e.g., \(g_{i} = w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i - b_i\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}\). A double-index summation can also be used: \(R(\boldsymbol{w}) = \sum_{i,j=1}^{N}\left(g_{ij}\left(\boldsymbol{w}\right)\right)^{2}\).

We consider the risk formulations as presented in (Feng and Palomar, 2015, 2016). They can be passed through the keyword argument formulation in the function riskParityPortfolio().

The name of the formulations and their mathematical expressions are presented as follows.

Formulation “rc-double-index”: \[R(\boldsymbol{w}) = \sum_{i,j=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{i}-w_{j}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_{j}\right)^{2}\]

Formulation “rc-vs-theta”: \[ R(\boldsymbol{w},\theta) = \sum_{i=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i - \theta \right)^{2} \]

Formulation “rc-over-var-vs-b”: \[ R(\boldsymbol{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}}-b_i\right)^{2} \]

Formulation “rc-over-b-double-index”: \[ R(\boldsymbol{w}) = \sum_{i,j=1}^{N}\left(\frac{w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{b_i} - \frac{w_j\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_j}{b_j}\right)^{2} \]

Formulation “rc-vs-b-times-var”: \[ R(\boldsymbol{w}) = \sum_{i=1}^{N}\left(w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i - b_i\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}\right)^{2} \]

Formulation “rc-over-sd vs b-times-sd”: \[ R(\boldsymbol{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{\sqrt{\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}}}-b_i\sqrt{\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}}\right)^{2} \]

Formulation “rc-over-b vs theta”: \[ R(\boldsymbol{w},\theta) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{b_i} - \theta \right)^{2} \]

Formulation “rc-over-var”: \[ R(\boldsymbol{w}) = \sum_{i=1}^{N}\left(\frac{w_{i}\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i}{\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}}\right)^{2} \]

Appendix II - Numerical algorithms for the risk parity portfolio

In this appendix we describe the algorithms implemented for both the vanilla risk parity portfolio and the modern risk parity portfolio that may contain additional objective terms and constraints.

Algorithms for the vanilla risk parity formulation

We now describe the implementation of the Newton method and the cyclical (coordinate) descent algorithm for the vanilla risk parity formulations presented in (Spinu, 2013) and (Griveau-Billion et al., 2013).

Consider the risk budgeting equations \[w_i\left(\boldsymbol{\Sigma}\boldsymbol{w}\right)_i = b_i \;\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}, \qquad i=1,\ldots,N\] with \(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{w}=1\) and \(\boldsymbol{w} \ge \boldsymbol{0}\).

If we define \(\boldsymbol{x}=\boldsymbol{w}/\sqrt{\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{w}}\), then we can rewrite the risk budgeting equations compactly as \[\boldsymbol{\Sigma}\boldsymbol{x} = \boldsymbol{b}/\boldsymbol{x}\] with \(\boldsymbol{x} \ge \boldsymbol{0}\) and we can always recover the portfolio by normalizing: \(\boldsymbol{w} = \boldsymbol{x}/(\boldsymbol{1}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{x})\).
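
This change of variable can be checked numerically (an illustrative sketch, not package code): at the solution, \(\boldsymbol{x}=\boldsymbol{w}/\sqrt{\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}}\) should satisfy \(\boldsymbol{\Sigma}\boldsymbol{x} = \boldsymbol{b}/\boldsymbol{x}\) elementwise:

```r
library(riskParityPortfolio)

set.seed(1)
N <- 5
V <- matrix(rnorm(100 * N), nrow = N)
Sigma <- V %*% t(V)
b <- rep(1 / N, N)                    # default equal risk budgets

w <- riskParityPortfolio(Sigma)$w
x <- w / sqrt(sum(w * (Sigma %*% w))) # rescale the portfolio
max(abs(Sigma %*% x - b / x))         # near zero, up to solver tolerance
```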

Spinu (Spinu, 2013) observed that this equation is precisely the gradient of the function \(f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\) set to zero, i.e., the optimality condition for its minimization.

So we can finally formulate the risk budgeting problem as the following convex optimization problem: \[\underset{\boldsymbol{x}\ge\boldsymbol{0}}{\textsf{minimize}} \quad \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x}).\]

Roncalli et al. (Griveau-Billion et al., 2013) proposed a slightly different formulation (also convex): \[\underset{\boldsymbol{x}\ge\boldsymbol{0}}{\textsf{minimize}} \quad \sqrt{\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x}).\]

Unfortunately, even though these two problems are convex, they do not belong to the standard problem classes that most solvers handle (i.e., LP, QP, QCQP, SOCP, SDP, GP, etc.).

Nevertheless, there are several simple iterative algorithms that can be used, like the Newton method and the cyclical coordinate descent algorithm.

Newton method

The Newton method obtains the iterates based on the gradient \(\nabla f\) and the Hessian \({\sf H}\) of the objective function \(f(\boldsymbol{x})\) as follows: \[\boldsymbol{x}^{(k+1)} = \boldsymbol{x}^{(k)} - {\sf H}^{-1}(\boldsymbol{x}^{(k)})\nabla f(\boldsymbol{x}^{(k)})\]

In practice, one may need to use the backtracking method to properly adjust the step size of each iteration (Boyd and Vandenberghe, 2004).

  • For the function \(f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\), the gradient and Hessian are given by \[\begin{array}{ll} \nabla f(\boldsymbol{x}) &= \boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}/\boldsymbol{x}\\ {\sf H}(\boldsymbol{x}) &= \boldsymbol{\Sigma} + {\sf Diag}(\boldsymbol{b}/\boldsymbol{x}^2). \end{array}\]

  • For the function \(f(\boldsymbol{x}) = \sqrt{\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\), the gradient and Hessian are given by \[\begin{array}{ll} \nabla f(\boldsymbol{x}) &= \boldsymbol{\Sigma}\boldsymbol{x}/\sqrt{\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}} - \boldsymbol{b}/\boldsymbol{x}\\ {\sf H}(\boldsymbol{x}) &= \left(\boldsymbol{\Sigma} - \boldsymbol{\Sigma}\boldsymbol{x}\boldsymbol{x}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}/\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}\right) / \sqrt{\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}} + {\sf Diag}(\boldsymbol{b}/\boldsymbol{x}^2). \end{array}\]
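
The Newton iteration for the first function can be sketched in a few lines of R. This is an illustrative implementation under simplifying assumptions (the backtracking below only enforces positivity of the iterates, not a sufficient decrease condition), not the package's internal code; `newton_rpp` is a hypothetical name:

```r
# Damped Newton sketch for f(x) = 0.5 * x' Sigma x - b' log(x)
newton_rpp <- function(Sigma, b, tol = 1e-8, maxiter = 50) {
  N <- nrow(Sigma)
  x <- rep(1 / sqrt(N), N)                   # feasible starting point, x > 0
  for (k in seq_len(maxiter)) {
    grad <- as.vector(Sigma %*% x - b / x)   # gradient of f
    if (max(abs(grad)) < tol) break
    H <- Sigma + diag(b / x^2)               # Hessian of f
    dx <- as.vector(solve(H, grad))          # Newton direction
    t <- 1                                   # halve the step to keep x > 0
    while (any(x - t * dx <= 0)) t <- t / 2
    x <- x - t * dx
  }
  x / sum(x)                                 # normalize to recover w
}
```

Calling `newton_rpp(Sigma, rep(1/N, N))` should produce weights close to the package's vanilla solution.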

Cyclical coordinate descent algorithm

This method simply minimizes in a cyclical manner with respect to each element of the variable \(\boldsymbol{x}\) (denote \(\boldsymbol{x}_{-i}=[x_1,\ldots,x_{i-1},0,x_{i+1},\ldots,x_N]^{\mkern-2mu\raise-1mu\textsf{T}}\)), while holding the other elements fixed.

  • For the function \(f(\boldsymbol{x}) = \frac{1}{2}\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\), the minimization w.r.t. \(x_i\) is \[\underset{x_i>0}{\textsf{minimize}} \quad \frac{1}{2}x_i^2\boldsymbol{\Sigma}_{ii} + x_i(\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i}) - b_i\log{x_i}\] with gradient \(\nabla_i f = x_i\boldsymbol{\Sigma}_{ii} + (\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i}) - b_i/x_i\). Setting the gradient to zero gives us the second order equation \[x_i^2\boldsymbol{\Sigma}_{ii} + x_i(\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i}) - b_i = 0\] with positive solution given by \[x_i^\star = \frac{-(\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i})+\sqrt{(\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i})^2+ 4\boldsymbol{\Sigma}_{ii} b_i}}{2\boldsymbol{\Sigma}_{ii}}.\]

  • The derivation for the function \(f(\boldsymbol{x}) = \sqrt{\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}} - \boldsymbol{b}^{\mkern-2mu\raise-1mu\textsf{T}}\log(\boldsymbol{x})\) follows similarly. The update for \(x_i\) is given by \[x_i^\star = \frac{-(\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i})+\sqrt{(\boldsymbol{x}_{-i}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}_{\cdot,i})^2+ 4\boldsymbol{\Sigma}_{ii} b_i \sqrt{\boldsymbol{x}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{\Sigma}\boldsymbol{x}}}}{2\boldsymbol{\Sigma}_{ii}}.\]
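
The cyclical update for the first function translates directly into R. Again, this is an illustrative sketch (the name `ccd_rpp` and the stopping rule are assumptions, not the package's internals):

```r
# Cyclical coordinate descent sketch for f(x) = 0.5 * x' Sigma x - b' log(x)
ccd_rpp <- function(Sigma, b, tol = 1e-10, maxiter = 1000) {
  N <- nrow(Sigma)
  x <- rep(1 / sqrt(N), N)
  for (k in seq_len(maxiter)) {
    x_prev <- x
    for (i in seq_len(N)) {
      # c_i = x_{-i}' Sigma[, i]  (sum excluding the i-th term)
      c_i <- sum(Sigma[, i] * x) - Sigma[i, i] * x[i]
      # positive root of Sigma_ii * x_i^2 + c_i * x_i - b_i = 0
      x[i] <- (-c_i + sqrt(c_i^2 + 4 * Sigma[i, i] * b[i])) / (2 * Sigma[i, i])
    }
    if (max(abs(x - x_prev)) < tol) break    # converged over a full sweep
  }
  x / sum(x)                                 # normalize to recover w
}
```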

Successive convex approximation algorithm for the modern risk parity formulation

Many practical formulations deployed to design risk parity portfolios lead to nonconvex problems, especially when additional objective terms, such as mean return or variance, or additional constraints, such as short selling, are taken into account. To circumvent the complications that arise in such formulations, Feng & Palomar (Feng and Palomar, 2015) proposed a method called successive convex approximation (SCA). The SCA method works by convexifying the risk concentration term around a pre-defined point, casting the nonconvex problem into a much simpler strongly convex optimization problem. This procedure is then iterated until convergence is achieved. It is important to highlight that the SCA method always converges to a stationary point.

At the \(k\)-th iteration, the SCA method aims to solve \[\begin{align}\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & \sum_{i=1}^{n}\left(g_i(\boldsymbol{w}^k) + (\nabla g_i(\boldsymbol{w}^{k}))^{{\mkern-2mu\raise-1mu\textsf{T}}}(\boldsymbol{w} - \boldsymbol{w}^{k})\right)^2 + \tau ||\boldsymbol{w} - \boldsymbol{w}^{k}||^{2}_{2} + \lambda F(\boldsymbol{w})\\ \textsf{subject to} & \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{1} = 1, \boldsymbol{w} \in \mathcal{W}, \end{array}\end{align}\] where the first-order Taylor expansion of \(g_i(\boldsymbol{w})\) has been used.

After some mathematical manipulations described in detail in (Feng and Palomar, 2015), the optimization problem above can be rewritten as \[\begin{align}\begin{array}{ll} \underset{\boldsymbol{w}}{\textsf{minimize}} & \dfrac{1}{2}\boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{Q}^{k}\boldsymbol{w} + \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{q}^{k} + \lambda F(\boldsymbol{w})\\ \textsf{subject to} & \boldsymbol{w}^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{1} = 1, \boldsymbol{w} \in \mathcal{W}, \end{array}\end{align}\] where \[\begin{align} \boldsymbol{Q}^{k} & \triangleq 2(\boldsymbol{A}^{k})^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{A}^{k} + \tau \boldsymbol{I},\\ \boldsymbol{q}^{k} & \triangleq 2(\boldsymbol{A}^{k})^{{\mkern-2mu\raise-1mu\textsf{T}}}\boldsymbol{g}(\boldsymbol{w}^{k}) - \boldsymbol{Q}^{k}\boldsymbol{w}^{k}, \end{align}\] and \[\begin{align} \boldsymbol{A}^{k} & \triangleq \left[\nabla_{\boldsymbol{w}} g_{1}\left(\boldsymbol{w}^{k}\right), ..., \nabla_{\boldsymbol{w}} g_{n}\left(\boldsymbol{w}^{k}\right)\right]^{{\mkern-2mu\raise-1mu\textsf{T}}} \\ \boldsymbol{g}\left(\boldsymbol{w}^{k}\right) & \triangleq \left[g_{1}\left(\boldsymbol{w}^{k}\right), ..., g_{n}\left(\boldsymbol{w}^{k}\right)\right]^{{\mkern-2mu\raise-1mu\textsf{T}}}. \end{align}\]

The above problem is a quadratic program (QP), which can be efficiently solved by standard R libraries. Furthermore, adding mean return or variance terms keeps the QP structure of the problem intact.
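
One SCA iteration can be sketched with the quadprog solver. This is an assumption-laden illustration, not the package's internal implementation: it takes \(\lambda F(\boldsymbol{w})=0\), restricts \(\mathcal{W}\) to \(\boldsymbol{w}\ge\boldsymbol{0}\), and uses \(g_i(\boldsymbol{w}) = w_i(\boldsymbol{\Sigma}\boldsymbol{w})_i - b_i\boldsymbol{w}^{\mkern-2mu\raise-1mu\textsf{T}}\boldsymbol{\Sigma}\boldsymbol{w}\) from the “rc-vs-b-times-var” formulation; `sca_step` is a hypothetical name:

```r
library(quadprog)

# One SCA iteration: build Q^k and q^k at w^k and solve the resulting QP
sca_step <- function(Sigma, b, w, tau = 1e-4) {
  N <- length(w)
  Sw <- as.vector(Sigma %*% w)
  g  <- w * Sw - b * sum(w * Sw)            # g(w^k)
  # A^k: row i is the gradient of g_i at w^k,
  # (grad g_i)_j = delta_ij * Sw_i + w_i * Sigma_ij - 2 * b_i * Sw_j
  A <- diag(Sw) + w * Sigma - 2 * outer(b, Sw)
  Q <- 2 * crossprod(A) + tau * diag(N)     # strongly convex quadratic
  q <- as.vector(2 * crossprod(A, g) - Q %*% w)
  # solve.QP minimizes 0.5 w'Qw - d'w s.t. Amat' w >= bvec (first meq equalities)
  solve.QP(Dmat = Q, dvec = -q,
           Amat = cbind(rep(1, N), diag(N)),
           bvec = c(1, rep(0, N)), meq = 1)$solution
}
```

Iterating `w <- sca_step(Sigma, b, w)` from a feasible starting point until the change in `w` is negligible mimics the overall SCA loop.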

Appendix III - Computational time

In order to efficiently design high-dimensional portfolios that follow the risk parity criterion, we implement the cyclical coordinate descent algorithm proposed by (Griveau-Billion et al., 2013). In brief, this algorithm optimizes one portfolio weight at a time while keeping the rest fixed.

The plot below illustrates the computational scaling of both Newton and cyclical algorithms. Note that the cyclical algorithm is implemented for two different formulations used by (Spinu, 2013) and (Griveau-Billion et al., 2013), respectively. Nonetheless, they output the same solution, as they should.

library(microbenchmark)
library(riskParityPortfolio)
library(ggplot2)

set.seed(42)  # for reproducibility of the random covariance matrices
size_seq <- c(10, 50, 100, 200, 300, 400, 500, 600, 700)
df <- data.frame()
for (i in seq_along(size_seq)) {
  # generate a random positive-definite covariance matrix of size N x N
  V <- matrix(rnorm(1000 * size_seq[i]), nrow = size_seq[i])
  Sigma <- V %*% t(V)
  bench <- microbenchmark(
            "Newton"            = riskParityPortfolio(Sigma, method_init = "newton"),
            "Cyclical Spinu"    = riskParityPortfolio(Sigma, method_init = "cyclical-spinu"),
            "Cyclical Roncalli" = riskParityPortfolio(Sigma, method_init = "cyclical-roncalli"),
            times = 10L, unit = "milliseconds", control = list(order = "inorder", warmup = 4))
  # keep the median time (in nanoseconds) per method for this portfolio size
  df <- rbind(df, data.frame("size" = size_seq[i], aggregate(. ~ expr, data = bench, FUN = median)))
}

ggplot(df, aes(x = size, y = time/1e6, col = expr)) +
  geom_line() + geom_point() +
  theme(legend.title = element_blank()) +
  scale_color_manual(values = c("#2d3436", "#6c5ce7", "#74b9ff")) +
  labs(title = "Convergence time for the convex formulation", x = "Portfolio size N", y = "CPU time [ms]")

References

Asness, C. S., Frazzini, A., and Pedersen, L. H. (2012). Leverage aversion and risk parity. Financial Analysts Journal, 68(1), 47–59.
Boyd, S. P., and Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.
Feng, Y., and Palomar, D. P. (2015). SCRIP: Successive convex optimization methods for risk parity portfolios design. IEEE Trans. Signal Processing, 63(19), 5285–5300.
Feng, Y., and Palomar, D. P. (2016). A Signal Processing Perspective on Financial Engineering. Foundations; Trends in Signal Processing, Now Publishers.
Griveau-Billion, T., Richard, J.-C., and Roncalli, T. (2013). A fast algorithm for computing high-dimensional risk parity portfolios. SSRN.
Kaya, H., and Lee, W. (2012). Demystifying risk parity. Neuberger Berman.
Markowitz, H. (1952). Portfolio selection. J. Financ., 7(1), 77–91.
Qian, E. (2005). Risk parity portfolios: Efficient portfolios through true diversification. PanAgora Asset Management.
Qian, E. E. (2016). Risk parity fundamentals. CRC Press.
Richard, J.-C., and Roncalli, T. (2019). Constrained risk budgeting portfolios: Theory, algorithms, applications, & puzzles. SSRN.
Roncalli, T. (2013). Introduction to risk parity and budgeting. CRC Press.
Scutari, G., Facchinei, F., Song, P., Palomar, D. P., and Pang, J.-S. (2014). Decomposition by partial linearization: Parallel optimization of multi-agent systems. IEEE Trans. Signal Processing, 62(3), 641–656.
Souza, T. (2019). DIY ray dalio ETF: How to build your own hedge fund strategy with risk parity portfolios. Medium: https://towardsdatascience.com/ray-dalio-etf-900edfe64b05.
Spinu, F. (2013). An algorithm for computing risk parity weights. SSRN.