
Time Varying Volatility And Risk

Summary

• The definition of risk can take various forms. One of the most commonly used is the standard deviation, or portfolio volatility.
• The evolution of the conditional variance may be parameterized by many different specifications. Here, I consider three models: the rolling window approach, JPMorgan's RiskMetrics and the GARCH(1,1).
• The rolling window and the RiskMetrics approach share similar features and the same drawback: they do not account for the fact that volatility is a stationary process.
• GARCH(1,1) is a better method, since it takes today's variance as a starting point but converges to the unconditional variance in the long run.

In my previous research, I claimed that choosing the optimal portfolio strategy is critical in order to achieve extra return, and I provided the reader with a review of existing strategies, from the most naïve ones, such as 1/N, to the most sophisticated, such as the Bayesian strategies. In addition to portfolio construction, risk management is another essential topic that should be discussed. Indeed, risk is ubiquitous, and the intelligent investor has to be able to manage it.

The definition of risk can take various forms. One of the most commonly used is the standard deviation, or portfolio volatility, which measures the spread of the distribution of returns around its mean. Volatility has several characteristics: it is not directly observable, it evolves over time in a continuous manner, it reacts differently to positive and negative price changes and, last but not least, volatility is a stationary process. Bear in mind this last feature, as it will be critical in the analysis below.

Conditional and unconditional volatility

A key distinction is between conditional and unconditional volatility. The unconditional volatility (σ) is just the standard measure of volatility, whereas the conditional volatility (h_t^{1/2}) is the measure of uncertainty about a variable given a model and an information set. Consider the return r_t at time t decomposed in its location and scale representation:

r_t = μ_t + ε_t

where μ_t is the conditional mean of r_t and may be parameterized by a time series model like an ARMA(p,q), while ε_t may be defined as:

ε_t = h_t^{1/2} z_t,   z_t ~ i.i.d.(0,1)

Here h_t = Var(r_t | F_{t-1}) is the conditional variance (volatility²) of r_t, which depends on the information set F available at time t-1, and σ² = Var(r_t) is the unconditional variance (volatility²) of r_t, which does not depend on previous information.

The focus of this research is on time-varying volatility and risk, and therefore on the conditional variance. The evolution of the conditional variance may be parameterized by many different specifications. Note that by "evolution" I mean how the conditional volatility evolves over time as new information becomes available. Here, I consider three models: the rolling window approach, JPMorgan's RiskMetrics and the GARCH(1,1).

The rolling window approach

The rolling window approach relies on a particular stylized fact: the best guess of future volatility is an equally weighted average of the volatility of the past m periods. To capture this feature, let tomorrow's variance be equal to the sample variance computed over the last m observations:

h_{t+1} = (1/m) Σ_{τ=1}^{m} ε²_{t+1-τ}

This specification implies that if volatility is high today, it is also likely to be high tomorrow.
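To make the rolling window estimator concrete, here is a minimal Python sketch (not from the article); the simulated return series is a placeholder, and demeaned returns stand in for the residuals ε_t:

```python
import numpy as np
import pandas as pd

def rolling_window_variance(returns: pd.Series, m: int) -> pd.Series:
    """Rolling window forecast: tomorrow's variance is the average of the last m squared residuals."""
    eps = returns - returns.mean()              # crude proxy for the residuals ε_t
    return (eps ** 2).rolling(window=m).mean()  # h_{t+1} = (1/m) Σ ε²_{t+1-τ}

# Hypothetical usage with simulated returns (placeholders for real data)
rng = np.random.default_rng(0)
r = pd.Series(rng.normal(0.0, 0.01, 1000))
h_smooth = rolling_window_variance(r, m=120)  # long window: smooth, slow-evolving
h_jagged = rolling_window_variance(r, m=20)   # short window: jagged, fast-moving
```

Running it with m=120 and m=20 reproduces the smooth versus jagged behaviour discussed next.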
Naturally, the choice of m is critical:

• If it is too high, h_{t+1} turns out excessively smooth and slow-evolving (Exhibit 1).
• If it is too low, h_{t+1} presents excessively jagged patterns over time (Exhibit 2).

Exhibit 1 – Rolling window approach (m=120)

Exhibit 2 – Rolling window approach (m=20)

Note that in both cases the forecast of future volatility is flat: it equals today's estimate at every horizon. If volatility is high today, it is also predicted to be high tomorrow, but, as we will see later, this is not always the case. In addition, the farthest observation within the past m periods receives the same weight as the most recent one.

JPMorgan's RiskMetrics

The RiskMetrics approach can be seen as a generalization of the rolling window. All we have to do is:

• Replace the equal weights 1/m with exponentially decaying weights (1-λ)λ^(τ-1)
• Replace the average over the past m periods with an infinite summation

The result is as follows:

h_{t+1} = (1-λ) Σ_{τ=1}^{∞} λ^(τ-1) ε²_{t+1-τ}

Or equivalently:

h_{t+1} = λ h_t + (1-λ) ε²_t

According to RiskMetrics, the forecast of tomorrow's volatility is a weighted average of today's volatility h_t and today's squared residual ε²_t. This method is slightly better than the rolling window, since it gives more importance to recent observations than to older ones. In other words, it does not use an equally weighted average of the past m observations, but exponentially decaying weights. However, it shares the same drawback: the forecast of future volatility is constant over time and the unconditional volatility is completely ignored, as the graph below shows.

Exhibit 3 – RiskMetrics

If today is a low (high) variance day, RiskMetrics predicts low (high) variance for all future days. This gives a false sense of calmness (activity) of the market in the future.

GARCH(1,1)

Compared to the previous two methods, the GARCH model is the best way to estimate future conditional volatility. In particular:

h_{t+1} = ω + α ε²_t + β h_t,   with ω > 0, α ≥ 0, β ≥ 0

Given the unconditional variance σ² = ω / (1 - α - β), solving for ω and substituting into the GARCH equation, we obtain:

h_{t+1} = (1 - α - β) σ² + α ε²_t + β h_t

meaning that the future variance (volatility²) is a weighted average of:

• the long-run (unconditional) variance;
• today's squared innovation;
• today's variance.

The further ahead you forecast volatility, the more the forecast depends on the long-run variance rather than on today's variance, while the latter matters when you forecast volatility in the near future. In other words, if today is a low (high) variance day, the GARCH(1,1) predicts low (high) variance in the near future, and the long-run variance far in the future. To grasp the meaning of these words, Exhibit 4 shows the results of GARCH.

Exhibit 4 – GARCH(1,1)

As the reader may understand, GARCH accounts for the fact that volatility is a stationary process, whereas the previous two methods treat it as non-stationary. Thus, it is reasonable that tomorrow's variance is similar to today's variance, but volatility far in the future cannot be constant (as the rolling window and RiskMetrics predict): it reverts to its mean, the unconditional (long-run) variance. In the end, volatility remains a stationary process.
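The contrast between the flat RiskMetrics forecast and the mean-reverting GARCH forecast can be illustrated with a short Python sketch; the parameter values (ω, α, β, today's variance h_t and residual ε_t) are illustrative assumptions, not estimates from the article:

```python
import numpy as np

def riskmetrics_forecast(h_t: float, horizon: int) -> np.ndarray:
    """RiskMetrics/EWMA: the k-step-ahead variance forecast is flat at today's level."""
    return np.full(horizon, h_t)

def garch_forecast(h_t: float, eps_t: float, omega: float, alpha: float,
                   beta: float, horizon: int) -> np.ndarray:
    """k-step-ahead GARCH(1,1) variance forecasts.

    E[h_{t+k}] = sigma2 + (alpha + beta)**(k-1) * (h_{t+1} - sigma2),
    which decays geometrically towards the unconditional variance sigma2.
    """
    sigma2 = omega / (1.0 - alpha - beta)            # long-run (unconditional) variance
    h_next = omega + alpha * eps_t**2 + beta * h_t   # one-step-ahead forecast
    k = np.arange(horizon)                           # exponents 0, 1, ..., horizon-1
    return sigma2 + (alpha + beta)**k * (h_next - sigma2)

# Illustrative parameters: today's variance sits well above the long-run level
omega, alpha, beta = 0.00001, 0.08, 0.90             # implies sigma2 = 0.0005
path = garch_forecast(h_t=0.002, eps_t=0.05, omega=omega, alpha=alpha,
                      beta=beta, horizon=250)
flat = riskmetrics_forecast(h_t=0.002, horizon=250)
print(path[0], path[-1])   # starts near today's variance, ends near sigma2
```

The flat line and the geometrically decaying path are exactly the behaviours shown in Exhibits 3 and 4.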
Conclusions

The rolling window and the RiskMetrics approach share similar features and the same drawback: they do not account for the fact that volatility is a stationary process. Hence, the forecasted volatility is constant and depends too heavily on today's volatility. GARCH(1,1) is a better method, since it takes today's variance as a starting point but converges to the unconditional variance in the long run. Hence, after having selected the best portfolio strategy or combination of strategies, think about your risk management approach, and if you use volatility as a measure of risk, remember that, among the three models examined here, GARCH(1,1) is the best at forecasting volatility. If you would like to read more about GARCH, I suggest reading Bollerslev (1986).
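For readers who want to put this into practice, a GARCH(1,1) can be estimated with the third-party Python package arch (my suggestion, not something referenced in the article); the simulated return series below is a placeholder for real percentage returns:

```python
import numpy as np
from arch import arch_model  # third-party package: pip install arch

# Placeholder data: in practice, use a series of (percentage) asset returns
rng = np.random.default_rng(0)
returns = rng.normal(0, 1, 2000)

# Constant-mean GARCH(1,1) with normally distributed innovations
model = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1)
result = model.fit(disp='off')
print(result.params)                  # estimated omega, alpha[1], beta[1]

# Multi-step variance forecasts revert towards the long-run level
forecast = result.forecast(horizon=20)
print(forecast.variance.iloc[-1])
```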

Portfolio Construction Techniques: A Brief Review

Summary

• The mean-variance optimization suggested by Harry Markowitz represents a path-breaking work, the beginning of the so-called Modern Portfolio Theory.
• This theory has been criticized by some researchers for issues linked to parameter uncertainty. Two main approaches to the problem may be identified: a non-Bayesian and a Bayesian approach.
• Smart Beta strategies sit between pure alpha strategies and beta strategies and emphasize capturing investment factors in a transparent way.
• The article does not determine which strategy is the best, since I believe that the success of an investment technique cannot be determined a priori.

Introduction

How to allocate capital across different asset classes is a key decision that all investors are required to make. It is widely accepted that holding one or a few assets is not advisable, as the proverb "Don't put all your eggs in one basket" suggests. Hence, practitioners recommend that their clients build portfolios of assets in order to benefit from the effects of diversification. An investor's portfolio is defined as his/her collection of investment assets.

Generally, investors make two types of decisions in constructing portfolios. The first is called asset allocation, namely the choice among different asset classes. The second is security selection, namely the choice of which particular securities to hold within each asset class. Moreover, portfolio construction can follow two kinds of approaches, namely a top-down or a bottom-up approach. The former consists of addressing the asset allocation and security selection choices exactly in this order; the latter inverts the flow of actions, starting from security selection.

Whatever the approach, investors do need a precise rule to follow when building a portfolio. In fact, the choice of asset classes and/or of securities has to be made rationally. The range of existing strategies is considerably wide. Indeed, one may allocate his/her own capital by splitting it equally among assets, optimizing several objective functions and/or applying some constraints. Every day, the asset management industry proposes plenty of strategies to investors all over the world. The aim of this article is to provide the reader with a comprehensive summary of them.

Static and Dynamic Optimization Techniques

To begin with, it is worth distinguishing the existing portfolio optimization techniques by the nature of their optimization process. In particular, static and dynamic processes are considered. In the former case, the structure of a portfolio is chosen once and for all at the beginning of the period. In the latter case, the structure of the portfolio is continuously adjusted (for a detailed survey on this literature, see Mossin (1968), Samuelson (1969), Merton (1969, 1971), Campbell et al. (2003) and Campbell & Viceira (2002)). Maillard (2011) reports that for highly risk-averse investors the difference between the two is moderate, whereas it is larger for investors who are less risk averse.

Markowitz Mean-Variance Optimization

Within the static models, it is common knowledge that the mean-variance optimization suggested by Harry Markowitz represents a path-breaking work, the beginning of the so-called Modern Portfolio Theory (MPT). In fact, Markowitz (1952, 1959) presents a revolutionary framework based on the mean and variance of a portfolio of N assets.
In particular, he claims that if investors care only about mean and variance, they would all hold the same portfolio of risky assets, combined with cash holdings whose proportion depends on their risk aversion. Despite its wide success, this theory has been criticized by some researchers for issues linked to parameter uncertainty. In fact, the true model parameters are unknown and have to be estimated from the data, resulting in several estimation error problems. The subsequent literature has focused on improving the mean-variance framework in several ways. However, two main approaches to the problem may be identified, namely a non-Bayesian and a Bayesian approach.

Two Approaches

As far as the former is concerned, it is worth reporting several studies. For instance, Goldfarb & Iyengar (2003) and Garlappi et al. (2007) provide robust formulations to contrast the sensitivity of the optimal portfolio to statistical and modelling errors in the estimates of the relevant parameters. In addition, Lee (1977) and Kraus & Litzenberger (1976) present alternative portfolio theories that include higher moments such as skewness; Fama (1965) and Elton & Gruber (1974) are more accurate in describing the distribution of returns, while Best & Grauer (1992), Chan et al. (1999) and Ledoit & Wolf (2004a, 2004b) focus on methods that aim to reduce the estimation error of the covariance matrix. Other approaches involve the application of constraints: MacKinlay & Pastor (2000) impose constraints on moments of asset returns, Jagannathan & Ma (2003) adopt short-sale constraints, Chekhlov et al. (2000) drawdown constraints, Jorion (2002) tracking-error constraints, while Chopra (1993) and Frost & Savarino (1988) propose constrained portfolio weights.

On the other hand, the Bayesian approach plays a prominent role in the literature. It is based on Stein (1955), who proved the inadmissibility of the sample mean as an estimator for multivariate portfolio problems. In fact, he advises applying the Bayesian shrinkage estimator that minimizes the errors in the return expectations jointly, rather than trying to minimize the errors in each asset class return expectation separately. In subsequent studies, this approach has been implemented in multiple ways. Barry (1974) and Bawa et al. (1979) use either a non-informative diffuse prior or a predictive distribution obtained by integrating over the unknown parameter. Then, Jobson & Korkie (1980), Jorion (1985, 1986) and Frost & Savarino (1986) use empirical Bayes estimators, which shrink estimated returns closer to a common value and move the portfolio weights closer to the global minimum-variance portfolio. Finally, Pastor (2000) and Pastor & Stambaugh (2000) use the equilibrium implications of an asset-pricing model to establish a prior.
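To make the estimation-error problem tangible, here is a minimal, hedged sketch of an unconstrained mean-variance allocation in Python. The Ledoit-Wolf shrinkage estimator from scikit-learn is used as one convenient implementation of the covariance-shrinkage idea cited above; the simulated returns and the risk-aversion coefficient are placeholder assumptions:

```python
import numpy as np
from sklearn.covariance import LedoitWolf  # shrinkage estimator in the spirit of Ledoit & Wolf (2004)

def mean_variance_weights(returns: np.ndarray, risk_aversion: float = 5.0) -> np.ndarray:
    """Unconstrained mean-variance weights w = (1/gamma) * Sigma^{-1} * mu.

    Expected returns are estimated by sample means; the covariance matrix is
    shrunk towards a structured target to reduce estimation error.
    """
    mu = returns.mean(axis=0)                      # sample mean returns
    sigma = LedoitWolf().fit(returns).covariance_  # shrunk covariance matrix
    return np.linalg.solve(sigma, mu) / risk_aversion

# Hypothetical usage with simulated monthly returns for 5 assets
rng = np.random.default_rng(1)
r = rng.normal(0.005, 0.04, size=(120, 5))
print(mean_variance_weights(r).round(3))
```

The weights do not sum to one; the remainder is implicitly held in cash. Even with a shrunk covariance matrix, small changes in the estimated means can move these weights noticeably, which is precisely the parameter-uncertainty problem the robust and Bayesian approaches try to mitigate.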
Simpler Models

To attempt portfolio construction through optimization is not the only alternative, though. In fact, alongside the wide range of portfolio optimization techniques, it is also worth considering other rules that require no estimation of parameters and no optimization at all. DeMiguel et al. (2005) define them as "simple asset-allocation rules". For instance, one could just allocate all the wealth to a single asset, i.e., the market portfolio. Alternatively, investors may adopt the 1/N rule, dividing their wealth according to an equal-weighting scheme. At this point, the reader may wonder why one should consider this kind of rule at all. After all, techniques that require no optimization should not be optimal according to any measure.

However, as far as the naïve 1/N is concerned, some researchers have reported interesting results. For instance, Benartzi & Thaler (2001) and Liang & Weisbenner (2002) show that more than a third of defined contribution plan participants allocate their assets equally among investment options, obtaining good returns, and Huberman & Jiang (2006) find similar results. Similarly, DeMiguel et al. (2009) evaluate 14 models across seven empirical datasets, finding that none is consistently better than the 1/N rule in terms of Sharpe ratio, certainty-equivalent return or turnover. However, Tu & Zhou (2011) challenge DeMiguel et al. (2009) by combining sophisticated optimization approaches with the naïve 1/N technique. Their findings show that the combined rules significantly improve the sophisticated strategies and outperform the simple 1/N rule. Moreover, other naïve rules are reported by Chow et al. (2013), such as the 1/σ and the 1/β, which belong to the so-called low-volatility investing methods. In particular, they report that low-volatility investing provides higher returns at lower risk than traditional cap-weighted indexing, at the cost of underperformance in upward-trending environments.

Smart Beta Strategies

Finally, it is worth mentioning a special group of strategies that are extremely popular among asset management firms, known as Smart Beta strategies. Smart Beta strategies sit between pure alpha strategies and beta strategies, and emphasize capturing investment factors, such as value, size, quality and momentum, in a transparent way. Examples of these strategies are risk parity, minimum volatility, maximum diversification and many others. Beyond the wide range of these techniques, it is worth highlighting why they are so widespread among practitioners. Their enormous success is due to several interesting advantages, including the flexibility to access tailored market exposures, improved control of portfolio exposures and the potential to achieve improved return/risk trade-offs.

Final Remarks

This article aims to be a summary of the best-known techniques considered in the existing literature, but the list is far from complete. Moreover, the article does not analyze which strategy is the best, since I believe that the success of an investment technique depends on several factors, including the time frame considered, the kind of assets, the geography of the examined portfolio and the client's preferences, and it surely must be assessed through a quantitative application using real or simulated data.
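For concreteness, the naive rules mentioned above – 1/N, 1/σ and 1/β – can each be written in a few lines of Python. This is a sketch, not part of the article; the simulated asset and market returns are placeholders:

```python
import numpy as np

def one_over_n(n_assets: int) -> np.ndarray:
    """1/N rule: equal weight in every asset."""
    return np.full(n_assets, 1.0 / n_assets)

def inverse_volatility(returns: np.ndarray) -> np.ndarray:
    """1/sigma rule: weights proportional to the inverse of each asset's volatility."""
    inv_vol = 1.0 / returns.std(axis=0)
    return inv_vol / inv_vol.sum()

def inverse_beta(returns: np.ndarray, market: np.ndarray) -> np.ndarray:
    """1/beta rule: weights proportional to the inverse of each asset's market beta."""
    betas = np.array([np.cov(returns[:, i], market)[0, 1] / market.var(ddof=1)
                      for i in range(returns.shape[1])])
    inv_beta = 1.0 / betas
    return inv_beta / inv_beta.sum()

# Hypothetical usage with simulated data (placeholders for real returns)
rng = np.random.default_rng(2)
market = rng.normal(0.004, 0.04, 120)
assets = market[:, None] * rng.uniform(0.5, 1.5, 5) + rng.normal(0, 0.02, (120, 5))
print(one_over_n(5), inverse_volatility(assets), inverse_beta(assets, market), sep="\n")
```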

The Idiot’s Guide To Asset Allocation

The finance industry constantly strives to confuse investors with new, more sophisticated and increasingly complex ways to manage risk and generate returns. But these new products and strategies generate their own risks – for example, falling prey to data mining or extrapolation. There are, however, simple ways to invest that can produce superior investment outcomes with a fraction of the time and effort. This article focuses on investment techniques that are so simple, it is surprising how well they work – a phenomenon that Brett Arends of MarketWatch has called "dumb alpha."

A Simpler Way to Think about the Future

Let's assume you are in your thirties or forties. You need to finance your retirement with your savings. Creating a portfolio to build retirement wealth is no easy feat, given that retirement may be 30-40 years in the future. A lot can happen in that time. Who can say what the next 30 years will look like?

Since it is impossible to predict which investments will do well during the next three decades, there are only two logical ways to invest. One is to keep all your savings in cash or the safest short-term bills and bonds. The problem with this approach is that you will find it impossible to keep pace with inflation once taxes and other expenses are taken into account. The alternative is to invest an equal amount of your money in every asset class that is available in the marketplace. This makes sense, because you don't know how stocks will do compared with bonds or real estate investments, or how Apple (NASDAQ: AAPL) stock will do compared with Amazon (NASDAQ: AMZN). The simplest example of this naive equal-weighted approach would be a portfolio split 50/50 between stocks and bonds. Another would be to invest one-quarter of your assets in cash, one-quarter in bonds, one-quarter in equities and one-quarter in precious metals. Similarly, instead of investing in a common stock index, such as the cap-weighted S&P 500 Index, you could spread your funds evenly across all 500 stocks of the index.

The Advantages of a Naive Asset Allocation

As it turns out, this way of investing tends to work extremely well in practice. In their 2009 article "Optimal versus Naive Diversification: How Inefficient Is the 1/N Portfolio Strategy?", Victor DeMiguel, Lorenzo Garlappi and Raman Uppal tested this naive asset allocation technique against 14 optimization-based models across seven empirical datasets and found that it consistently outperformed the traditional mean-variance optimization technique. None of the more sophisticated asset allocation techniques they used, including minimum-variance portfolios and Bayesian estimators, could systematically outperform naive diversification in terms of returns, risk-adjusted returns or drawdown risks.

Unfortunately, naive asset allocation does not work all the time. Over the last several years, only one asset class has generated high returns: stocks. So a naive asset allocation will not keep up with more equity-concentrated portfolios during such periods. But it is interesting to note how well a naive approach works over an entire business cycle. Practitioners should compare their portfolios with a naive asset allocation to check whether they really have a portfolio that delivers more than an equal-weighted one. You can create a better ("more sophisticated") portfolio than the equal-weighted ("dumb") one, but it is surprisingly hard to do.
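For concreteness, here is a minimal sketch of the naive approach described above: an equal-weighted portfolio across a handful of asset classes, periodically reset to 1/N. The asset-class names, simulated returns and roughly monthly rebalancing interval are illustrative assumptions only:

```python
import numpy as np
import pandas as pd

def equal_weight_backtest(returns: pd.DataFrame, rebalance_every: int = 21) -> pd.Series:
    """Backtest a naive 1/N portfolio that is reset every `rebalance_every` periods.

    Between rebalancing dates the weights drift with relative performance.
    """
    n = returns.shape[1]
    weights = np.full(n, 1.0 / n)
    portfolio_returns = []
    for i, (_, r) in enumerate(returns.iterrows()):
        portfolio_returns.append(float(weights @ r.values))
        # let weights drift with realized returns ...
        weights = weights * (1.0 + r.values)
        weights = weights / weights.sum()
        # ... and reset them to 1/N on rebalancing dates
        if (i + 1) % rebalance_every == 0:
            weights = np.full(n, 1.0 / n)
    return pd.Series(portfolio_returns, index=returns.index)

# Hypothetical daily returns for four asset classes (placeholders, not real data)
rng = np.random.default_rng(3)
idx = pd.bdate_range("2015-01-01", periods=1000)
rets = pd.DataFrame(rng.normal(0.0003, 0.008, (1000, 4)), index=idx,
                    columns=["cash", "bonds", "equities", "precious_metals"])
ew = equal_weight_backtest(rets)
print((1.0 + ew).prod() - 1.0)   # cumulative return of the naive portfolio
```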
As a check, you can create an equal-weighted portfolio from the assets or asset classes used in your current portfolio. Then test whether the current portfolio is superior to this equal-weighted benchmark over time in terms of returns, risks and risk-adjusted returns. If that is the case, congratulations: you have a good portfolio. If not, you should think of ways to improve the performance of your existing portfolio.

It is also pretty clear why this dumb alpha works. Within stock markets, putting the same amount of money in every stock systematically favors value and small-cap stocks over growth and large-cap stocks. These two tilts combine to create outperformance.

There is a second effect at play, however. After all, the value and small-cap effects cannot explain why a naive asset allocation also works in a multi-asset-class portfolio. The key reason for its strong showing is its robustness to forecasting errors. Most asset allocation models, like mean-variance optimization, are very sensitive to prediction errors. Unfortunately, even financial experts are terrible at forecasting, and one follows forecasts at one's peril. By explicitly assuming that you cannot predict future returns at all, an equal-weighted asset allocation is well suited for unexpected surprises in asset class returns – both positive and negative. Since unexpected events happen time and again in financial markets, in the long run an equal-weighted asset allocation tends to catch up with more "sophisticated" asset allocation models whenever an event happens that the latter are unable to reflect.

In other words, if a naive asset allocation outperforms a more sophisticated portfolio, that outperformance hints at what went wrong. Are there too many risky assets in the sophisticated portfolio that directly or indirectly create increased stock market exposure? What are the implicit or explicit assumptions behind the more sophisticated portfolio that have not materialized and have led to underperformance relative to the less sophisticated naive asset allocation?

In this sense, the naive asset allocation can act as a more practical alternative to a sophisticated portfolio, and as an easier-to-manage risk management tool.
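The check described above can be scripted in a few lines. The following is a hedged sketch, with made-up asset names, weights and simulated returns standing in for your actual holdings, and a zero risk-free rate assumed for the Sharpe ratio:

```python
import numpy as np
import pandas as pd

def performance_summary(portfolio_returns: pd.Series, periods_per_year: int = 252) -> dict:
    """Annualized return, volatility and Sharpe ratio (risk-free rate assumed zero)."""
    mean = portfolio_returns.mean() * periods_per_year
    vol = portfolio_returns.std() * np.sqrt(periods_per_year)
    return {"return": mean, "volatility": vol, "sharpe": mean / vol}

# Hypothetical inputs: per-asset returns and the weights of your current portfolio
rng = np.random.default_rng(4)
asset_returns = pd.DataFrame(rng.normal(0.0003, 0.01, (1000, 4)),
                             columns=["stocks", "bonds", "real_estate", "gold"])
current_weights = np.array([0.60, 0.25, 0.10, 0.05])   # placeholder allocation

current = asset_returns @ current_weights               # your portfolio
benchmark = asset_returns.mean(axis=1)                  # equal-weighted 1/N benchmark

print("current  :", performance_summary(current))
print("benchmark:", performance_summary(benchmark))
```

If the "current" line does not beat the "benchmark" line on risk-adjusted terms over a full cycle, the naive allocation is doing its job as a sanity check.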