
Time Varying Volatility And Risk

Summary

· The definition of risk can take various forms. One of the most widely used is the standard deviation, or portfolio volatility.
· The evolution of the conditional variance may be parameterized by many different specifications. Here, I consider three models: the rolling window approach, JPMorgan's RiskMetrics, and the GARCH(1,1).
· The rolling window and the RiskMetrics approach share similar features and the same drawback: they don't account for the fact that volatility is a stationary process.
· GARCH(1,1) is a better method, since it takes today's variance as a starting point but converges to the unconditional variance in the long run.

In my previous research, I claimed that choosing the optimal portfolio strategy is critical in order to achieve extra return, and I provided the reader with a review of existing strategies, from the most naïve, such as 1/N, to the most sophisticated, such as the Bayesian strategies. In addition to portfolio construction, risk management is another essential topic that should be discussed. Indeed, risk is ubiquitous, and the intelligent investor has to be able to manage it.

The definition of risk can take various forms. One of the most widely used is the standard deviation, or portfolio volatility, which measures the spread of the distribution of returns around its mean. Volatility has several characteristics: it is not directly observable, it evolves over time in a continuous manner, it reacts differently to positive and negative price changes, and, last but not least, volatility is a stationary process. Bear in mind this last feature, as it will be critical in the analysis below.

Conditional and unconditional volatility

A key distinction is between conditional and unconditional volatility. The unconditional volatility (σ) is just the standard measure of volatility, whereas the conditional volatility (h_t^{1/2}) is the measure of uncertainty about a variable given a model and an information set. Consider the return r_t at time t decomposed into its location-and-scale representation:

r_t = \mu_t + \varepsilon_t

where \mu_t is the conditional mean of r_t, which may be parameterized by a time series model like an ARMA(p,q), while \varepsilon_t may be defined as:

\varepsilon_t = h_t^{1/2} z_t, \qquad z_t \sim iid(0, 1)

Here,

h_t = \mathrm{Var}(r_t \mid F_{t-1})

is the conditional variance (volatility²) of r_t, which depends on the information set F available at time t-1, and

\sigma^2 = \mathrm{Var}(r_t)

is the unconditional variance (volatility²) of r_t, which does not depend on previous information.

The focus of this research is on time-varying volatility and risk, and therefore on the conditional variance. The evolution of the conditional variance may be parameterized by many different specifications. Note that by "evolution" I mean how the conditional volatility evolves over time, as new information becomes available. Here, I consider three models: the rolling window approach, JPMorgan's RiskMetrics, and the GARCH(1,1).

The rolling window approach

The rolling window approach relies on a particular stylized fact: the best guess of future volatility is an equally weighted average of the volatility of the past m periods. To capture this feature, let tomorrow's variance be equal to the sample variance computed over the last m observations:

h_{t+1} = \frac{1}{m} \sum_{\tau=1}^{m} \varepsilon_{t+1-\tau}^2

This specification implies that if volatility is high today, it is also likely to be high tomorrow.
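As a concrete illustration, here is a minimal Python sketch of the rolling window estimator; the simulated residuals and the choice m = 120 (as in exhibit 1) are my assumptions, not data from the article:

```python
import numpy as np

def rolling_window_variance(residuals, m):
    """Forecast tomorrow's variance as the equally weighted average
    of the last m squared residuals."""
    residuals = np.asarray(residuals, dtype=float)
    if residuals.size < m:
        raise ValueError("need at least m observations")
    return np.mean(residuals[-m:] ** 2)

# Toy usage: demeaned daily returns play the role of the residuals.
rng = np.random.default_rng(0)
eps = rng.normal(0.0, 0.01, size=500)          # roughly 1% daily volatility
h_next = rolling_window_variance(eps, m=120)   # m = 120, as in exhibit 1
print(f"one-day-ahead volatility forecast: {np.sqrt(h_next):.4%}")
```

Every observation inside the window gets weight 1/m and everything older gets weight zero, which is exactly the drawback discussed next.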
Naturally, the choice of m is critical:

• If it is too high, h_{t+1} turns out excessively smooth and slow-evolving (exhibit 1).
• If it is too low, h_{t+1} presents excessively jagged patterns over time (exhibit 2).

Exhibit 1 – Rolling window approach (m=120)

Exhibit 2 – Rolling window approach (m=20)

Note that in both cases the forecasted volatility is constant over time and depends on today's volatility: if volatility is high today, it is also likely to be high tomorrow, although, as we will see, this is not always the case. In addition, the most distant of the past m periods receives the same weight as the most recent one.

JPMorgan's RiskMetrics

The RiskMetrics approach can be seen as a generalization of the rolling window. All we have to do is:

• Replace the equal weights 1/m with exponentially decaying weights (1 − λ)λ^{τ−1}
• Replace the averaging over the past m periods with an infinite summation

The result is as follows:

h_{t+1} = (1 - \lambda) \sum_{\tau=1}^{\infty} \lambda^{\tau-1} \varepsilon_{t+1-\tau}^2

Or, equivalently:

h_{t+1} = \lambda h_t + (1 - \lambda) \varepsilon_t^2

According to RiskMetrics, the forecast of tomorrow's variance is a weighted average of today's variance h_t and today's squared residual ε_t². This method is slightly better than the rolling window, since it gives more importance to recent observations than to older ones. In other words, it does not use an equally weighted average of the past m observations, but exponentially decaying weights. However, it shares the same drawback: the forecasted volatility is constant over time, and the unconditional volatility is completely ignored, as the graph below shows:

Exhibit 3 – RiskMetrics

If today is a low (high) variance day, RiskMetrics predicts low (high) variance for all future days. This gives a false sense of calmness (activity) of the market in the future.

GARCH(1,1)

Compared to the previous two methods, the GARCH model represents the best way to estimate future conditional volatility. In particular:

h_{t+1} = \omega + \alpha \varepsilon_t^2 + \beta h_t

where ω > 0, α ≥ 0, β ≥ 0 (and α + β < 1, so that the unconditional variance exists). Given the unconditional variance

\sigma^2 = \frac{\omega}{1 - \alpha - \beta}

solving for ω and substituting into the GARCH equation, we obtain:

h_{t+1} = (1 - \alpha - \beta)\sigma^2 + \alpha \varepsilon_t^2 + \beta h_t

meaning that the future variance (volatility²) is a weighted average of:

• The long-run (unconditional) variance
• Today's squared innovation
• Today's variance

The further ahead you forecast volatility, the more the forecast depends on the long-run variance rather than on today's variance, while the latter matters when you forecast volatility in the near future. In other words, if today is a low (high) variance day, the GARCH(1,1) predicts low (high) variance in the near future, and the long-run variance far in the future. To grasp the meaning of these words, exhibit 4 shows the results of GARCH.

Exhibit 4 – GARCH(1,1)

As the reader may see, GARCH accounts for the fact that volatility is a stationary process, whereas the previous two methods treat it as if it were non-stationary. Thus, it is reasonable that tomorrow's variance is similar to today's variance, but volatility far in the future cannot be constant (as the rolling window and RiskMetrics predict): it will revert to its mean, the unconditional (long-run) variance. In the end, volatility remains a stationary process.

Conclusions

The rolling window and the RiskMetrics approach are methods that share similar features and the same drawback: they don't account for the fact that volatility is a stationary process. Hence, the forecasted volatility is constant and depends too heavily on today's volatility.
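To make the contrast between the models concrete, here is a minimal Python sketch of the RiskMetrics and GARCH(1,1) recursions and their multi-step forecasts. The parameter values are illustrative assumptions (λ = 0.94 is the conventional RiskMetrics choice for daily data; ω, α, β are not estimated from anything):

```python
import numpy as np

def riskmetrics_forecast(h_t, eps_t, horizon, lam=0.94):
    """h_{t+1} = lam * h_t + (1 - lam) * eps_t**2; the forecast then
    stays flat at h_{t+1} for every future horizon."""
    h_next = lam * h_t + (1.0 - lam) * eps_t ** 2
    return np.full(horizon, h_next)

def garch_forecast(h_t, eps_t, horizon, omega, alpha, beta):
    """h_{t+1} = omega + alpha * eps_t**2 + beta * h_t; multi-step
    forecasts mean-revert to sigma2 = omega / (1 - alpha - beta)."""
    sigma2 = omega / (1.0 - alpha - beta)        # unconditional variance
    h_next = omega + alpha * eps_t ** 2 + beta * h_t
    k = np.arange(horizon)                       # 0, 1, ..., horizon-1
    return sigma2 + (alpha + beta) ** k * (h_next - sigma2)

# Toy usage: today is a high-variance day (2% vol) with a 3% shock.
h_t, eps_t = 4e-4, 0.03
rm = riskmetrics_forecast(h_t, eps_t, horizon=250)
g = garch_forecast(h_t, eps_t, horizon=250, omega=2e-6, alpha=0.08, beta=0.90)
print(np.sqrt(rm[[0, 249]]))   # flat: the high volatility is extrapolated forever
print(np.sqrt(g[[0, 249]]))    # decays toward sqrt(sigma2) = 1% far in the future
```

Under these assumed parameters, both models start near a 2% daily volatility, but one year ahead RiskMetrics still predicts about 2% while GARCH(1,1) has reverted to its 1% long-run level.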
GARCH(1,1) is a better method, since it takes today's variance as a starting point but then converges to the unconditional variance in the long run. Hence, after having selected the best portfolio strategy or combination of strategies, think about your risk management approach; and if you use volatility as a measure of risk, remember that, among the three models examined here, GARCH(1,1) is the best at forecasting volatility. If you would like to read more about GARCH, I suggest reading Bollerslev (1986).
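In practice, you rarely code the estimation yourself. As a minimal sketch, the third-party Python package arch (assuming it is installed; the simulated data are mine, not the article's) fits a GARCH(1,1) in a few lines:

```python
import numpy as np
from arch import arch_model  # third-party package: pip install arch

rng = np.random.default_rng(0)
returns = 100 * rng.normal(0.0, 0.01, size=1000)   # toy returns, in percent

model = arch_model(returns, vol="GARCH", p=1, q=1)  # constant mean, GARCH(1,1)
result = model.fit(disp="off")
print(result.params)                    # mu, omega, alpha[1], beta[1]
forecast = result.forecast(horizon=10)
print(forecast.variance.iloc[-1])       # 1- to 10-day-ahead variance path
```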

Why US Investing Differs A Lot From Europe Investing…

Summary

· The US is definitely not a market for traditional stock-pickers, as it is a flow-driven market.
· In Europe, the economic knowledge of the population is very low.
· Stock-pickers should focus on Europe, and systematic or factor-based investors on the US.
· Smart risk management is as important as finding equity ideas to generate alpha.

The whole study, with all the statistics and charts, may be found on SSRN at http://ssrn.com/abstract=2701901, or just ask the author.

We compare European indices (DJ Stoxx 600, Eurostoxx 50, FTSE 100) to US indices (Russell 2000, S&P 500, Nasdaq Composite, Nasdaq 100) and Japanese indices (Topix, Nikkei 225). First, from December 31st, 2014 to November 11th, 2015. Using a longer period could lead to wrong conclusions, given the important turnover of the components within each index (roughly 5% per year) and survivorship bias. Therefore, in a second step, we compare the behavior of the large indices, such as the Topix, Nasdaq Composite and Russell 2000, year after year, from 1999 to 2015. We do the same analysis for the DJ Stoxx 600, even if the sample seems tight. Why year after year, and not the 16 years in a row? Because turnover is huge on US indices, and the Russell 2000 or Nasdaq Composite composition as of 2015 is very different from the one as of 1999.

RUSSELL 2000

Beta per couple (capitalization; volatility)

· First of all, turnover is huge. Therefore, it is important to stress again that a study of this index versus its components over a long period is not relevant.

· Second, looking at the performance versus (capitalization; volatility), we can notice that although the performance of the index over the period is largely positive (+249% total return between December 31st, 1998 and November 11th, 2015, meaning a bull market averaging 7.7% per year), the red cells are much more represented in the right column of the table. This happens when the index performance is negative, of course (2002, 2008), but it happens as well when the index performance is flat or mildly positive (2000, 2001, 2004, 2011, 2012, 2014, 2015). On the other hand, these high-volatility stocks strongly outperform the universe in only two periods out of seventeen, 1999 and 2003, with respective Russell 2000 total returns of +21% and +47%. This means that the outperformance of volatile small caps is very hard to capture, because over the long run it is easy to experience huge drawdowns that are difficult to recover from. Keep in mind that when a stock drops by 50%, it needs to gain 100% just to come back to its initial level (in general, a drop of d requires a gain of d/(1-d) to recover). Regarding the capitalization effect, things seem to be more difficult to explain. As a summary for this part: should you want a smooth pattern, focusing on the stocks with low volatility in year N-1 is worthwhile, whereas dealing with historically high-volatility stocks may mean suffering huge drawdowns (2002, 2008) and only rare outstanding performances, which may fail to erase the previous underperformance. The issue is always the same: what is your investment timeframe? And this interacts with the way performance fees are calculated and rewarded. If the latter depend on a high-water mark (HWM), then low volatility should be chosen. If they do not, then the performance fees may be perceived as a yearly call on performance. And when you are long a call, its value depends positively on volatility, and you do not suffer if the market is negative at year-end, as the call simply expires worthless (see the sketch below).
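A minimal Monte Carlo sketch of that point, with an assumed 20% fee rate and normal yearly returns (none of these numbers come from the study): without a HWM, the expected yearly fee, fee_rate × max(0, R), behaves like a call option and rises with volatility, even when the expected return does not.

```python
import numpy as np

def expected_yearly_fee(mu, sigma, fee_rate=0.20, n_sims=1_000_000, seed=0):
    """E[fee] under a no-HWM scheme: fee = fee_rate * max(0, yearly return)."""
    rng = np.random.default_rng(seed)
    returns = rng.normal(mu, sigma, size=n_sims)
    return fee_rate * np.maximum(returns, 0.0).mean()

# Same expected return, increasing volatility: the fee is long volatility.
for sigma in (0.10, 0.30, 0.60):
    print(f"sigma={sigma:.0%}: expected fee = {expected_yearly_fee(0.05, sigma):.2%}")
```

Under these assumptions, the expected fee roughly quadruples as volatility goes from 10% to 60%, which is exactly the incentive a high-water mark is meant to neutralize.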
Therefore, the asset manager is likely to choose the riskier stocks, as he may, even if it happens only 2 years out of 17, sharply outperform the index occasionally and underperform most of the time. A HWM is strongly needed in order to protect investors from this type of greedy and unconscious asset manager. This phenomenon is likely to persist and be amplified by the emergence of smart beta and risk premia through the ETF market, which is huge in the US and tends to crowd out traditional mutual and hedge funds: flows focus on ETFs, and the latter focus on low-volatility stocks, creating and feeding the famous "low-volatility puzzle" that challenges the well-known Markowitz theory. In this puzzle, the lower the volatility, the higher the expected return, whereas Markowitz used to state the opposite.

· Regarding the persistence of the winners and losers, this relationship is quite volatile. According to the numerous papers by Bouchaud ("Two Centuries of Trend-Following"), most of the time the market is trend-following, but when the regime changes, it hurts a lot (examining the performance of CTAs, which are by construction trend-followers, may help to understand this). 2009 is a very good example (with the red circle): the losers of 2008 were the winners of 2009, within a strong rebound of the market. It looks as if, after a huge drop, the rule is to buy the worst performers.

· Looking at the beta per volatility quartile, the higher the historical volatility, the higher the beta, whereas there is no clear pattern with respect to capitalization. This can be explained by the fact that small capitalizations are perceived to be more volatile than large ones, but in practice this is not the case. Do not forget that beta is the ratio of the covariance with the benchmark to the variance of the benchmark (equivalently, correlation times the ratio of volatilities); therefore the surprising "in-range" beta is much more explained by the low numerator (covariance): small caps are volatile but weakly correlated with the benchmark, whereas large caps are less volatile but much more correlated with the benchmark.

· Regarding stock-picking, stock-pickers are likely to pick their stocks in the upper right-hand side of the table: low capitalization, high volatility. Low capitalization, because they aim at being anti-benchmark, and high volatility because their way of choosing relies on fundamental analysis and upsides; the higher the volatility, the higher the upside.

· The Russell 2000 is definitely not a territory for stock-pickers, with 2% of the stocks exhibiting more than 100% YtD performance in 2015, and more than 55% doing worse than the index.

· Should you want to post performance by picking small caps and high-volatility stocks within the Russell 2000 universe, then you have to be very sharp in terms of choosing the right ones and avoiding all the underperformers (which are numerous; "many are called, but few are chosen"), and very sharp in terms of market timing, given the number of years small caps largely underperform.

NASDAQ COMPOSITE

Beta per couple (capitalization; volatility)

· Turnover is huge, with less than 5% of the components remaining after 16 years.

· The "capitalization effect" is more important on the Nasdaq Composite than it is on the Russell 2000. The Russell 2000 only refers to small capitalizations (less than 10BlnUSD), whereas the Nasdaq Composite gathers stocks whose capitalization lies between 2MlnUSD and 700BlnUSD in 2015.
The beta decreases with respect to capitalization, and increases with respect to historical volatility, with a beta close to 2 for the couple (1st capitalization; 4th historical volatility).

· As for the Russell 2000, the red part of the table is concentrated on the right-hand side, with scarce, very high outperformances. The same explanation holds about the smoothness profile required and the performance fees policy needed.

· Regarding the persistence of the winners and losers, this relationship is quite volatile, as for the Russell 2000. Most of the time (and easy to see in 2002 and 2015), the winners of year N-1 remain the winners of year N (momentum effect), whereas in a year such as 2009, the breach is very sudden and the relationship no longer holds.

· Look again at the couple (1st capitalization; 4th historical volatility), which we use here as a proxy for stock-picking, and at the ranking of this couple among the other couples per year. The ranking goes from 1 to 16. We could say that the higher the index performance, the higher the ranking of this "stock-picking couple proxy" ("SP"). Before 2012, it works. But since 2012, we can notice that in spite of the huge performance of the index (+17.8% and +40.2% in 2012 and 2013, respectively), this stock-picking proxy lags a lot. We compare the stock-picking proxy to its opposite, the "benchmark proxy" ("B"), which is the couple (4th capitalization; 1st historical volatility). In 2012 and 2013, the respective median performances (in absolute value) of "SP" and "B" are shown in the table in the full study. The impact of ETFs and "low-volatility" smart beta ("Minimum Variance" products, "Equal Risk Contribution" products) dramatically changed the market, developing thanks to the high risk aversion of customers (still traumatized by the 2008 drop in equities). The flows are huge and have totally offset any fundamental reasoning since 2010. At that date, two years after the big crash, investors were eager to take some equity risk again, but with strong risk management. This is the promise of these ETFs. On the other hand, one can notice the difference of magnitude between the performance boundaries over the years. It is interesting to look at this table in terms of logarithmic returns, as this type of return preserves symmetry (for instance, +100% and -50% map to log returns of +0.69 and -0.69). We can then notice that "B" suffers less from asymmetry than "SP". The same reasoning we already made on the Russell 2000 holds here again: huge drawdowns for "SP", and a smooth pattern for "B", with less difficulty to recover. Once again, the performance fees policy is the key to secure the shareholder and protect him from any rogue asset manager.

· As for the Russell 2000, the Nasdaq Composite is definitely not a territory for stock-pickers, with 2.5% of the stocks exhibiting more than 100% YtD performance in 2015, almost 2/3 doing worse than the index, and a random stock-picking underperforming the index by almost 10%.

· The market evolution and the emergence of ETFs do not allow any stock-picker to outperform the index.

DJ STOXX 600

Beta per couple (capitalization; volatility)

· Turnover is pretty low compared to US indices.

· Beta depends on both capitalization (negative relationship) and historical volatility (positive relationship). The difference between the stock-picker couple ("SP"), as explained for the Nasdaq Composite, and the benchmark-investor couple ("B") is pretty clear on the table, with a beta of 0.66 for "B" in the lower left, and a beta of 1.57 for "SP" in the upper right.

· Red and green colors seem a lot more balanced than in the US, whether among columns or among rows.
No pattern with respect to capitalization or historical volatility may be exhibited. ETFs have not (yet?) significantly modified the European equity market.

· We can notice that during years with very positive returns (2005, 2006, 2009, 2013), high historical-volatility stocks tend to outperform significantly, and so do small caps. But the difference between "SP" and "B" performances remains very low compared to the US extremes.

· Regarding the "momentum effect" and the persistence of winners and losers, we find the same pattern as in the US, meaning a quite strong trend-following process, except during big breaches such as what happened in 2008-2009. Therefore, we suggest separating the ETF impact, and the "low-volatility" puzzle their flows create in the US, from the trend-following process of the market. The latter does not rely on ETF flows, but on the behavioral and cognitive biases of investors.

· The European equity market definitely remains a territory for stock-pickers. The ETF impact remains very contained. The only major pattern that can be exhibited is a trend-following aspect of the returns over the years, but nothing relative to capitalization or historical volatility.

TOPIX

· First of all, looking at the beta per couple, we can notice that the higher the capitalization, the higher the beta. This means that lower capitalizations post very dispersed, weakly correlated returns within a given class, whereas the big caps exhibit very similar behaviors among themselves.

· Performances are well balanced between columns (volatilities) and rows (capitalizations). Using our former notations, "SP" and "B", let's have a look at the rankings over the years.

· On the table, we can notice a change of pattern since 2014 (included), with a more European-looking pattern before and a more US-looking pattern since then.

· If we add the latter characteristic to the fact that beta depends positively on capitalization, the Topix seems to be in the middle of the road between the US and Europe in terms of investment philosophy, the US being the "new way" of investing, flow-driven, and Europe being the "old way" of investing, fundamental-driven.

· Momentum-wise, except in 2009, when the worst performers of 2008 posted the best performances of the year, it is difficult to sort the Japanese market onto either the "trend-following" side or the "mean-reverting" side.

· The Topix remains quite difficult to understand, as it is a mix between European patterns and US ones. We can notice that there is no clear "trend-following" or "mean-reverting" process. Large capitalizations seem to be riskier, due to their highly intra-correlated pattern, posting a higher beta than small caps, which suffer from highly dispersed returns.

GLOBAL CONCLUSION

First of all, we noticed over the past 15 years that US stock returns are much more dispersed than European or Japanese ones. We have many more positive and negative extreme outliers in the US. The US is definitely not a market for traditional stock-pickers, as it is a flow-driven market. This relies on a structural fact: Americans are all interested in stock exchange performance, as their retirement relies on it. Therefore, the level of knowledge in the US is by far higher than in Europe, meaning that Americans at large are stock-exchange investors, providing huge flows, and expecting the same commitment from their financial advisors in terms of risk exposure.
People are still scared by the 2008 crisis, and their comeback into the equity markets relies on strict risk-management rules. Today, smart-beta ETFs provide a solution, mainly known as "Minimum Variance" or "Equal Risk Contribution". This is the reason why the last years' rally in US equities is often described as a "defensive" rally. Therefore, flows concentrate on these products, encouraging the pattern to persist.

In Europe, the economic knowledge of the population is very low. In addition, financial practitioners and finance-related topics are hated. There are no pension funds in Continental Europe. Therefore, the equity market does not depend on huge flows as in the US, and remains the stronghold of a "happy few" whose way of thinking relies on fundamentals. Thus, the European equity market still reacts to fundamental data and news, as flows are almost insignificant.

The question is: how long can these patterns last, and why might they be threatened? In the US, we have been waiting six years for an "aggressive" rally. It will happen when the couple ("small caps"; "high vol") dramatically outperforms the couple ("large caps"; "low vol"). It happened in 2009, after the 2008 crash, but that can be analyzed as a kind of "mean-reverting" process from very low levels of valuation. But today in the US, valuation standards do not exist anymore. An investor just has to think as follows: where do the flows go? What are the main drivers of the market, with metrics such as capitalization and historical volatility? We could challenge this vision: how can a low-volatility stock outperform a high-volatility stock? Because low-volatility stocks exhibit upside volatility (volatility on upward moves) and a smooth pattern, whereas high-volatility stocks exhibit downside volatility (volatility on downward moves) and jumpy charts. Thus the question is: given this state of affairs, is the stock exchange the best place for a start-up to raise money? Isn't private equity a better shelter, where one just waits to reach a decent size or a decent brand fame (like Alibaba or Uber) before going listed?

In Europe, while the money is still in the hands of the 50+ generations, we will keep this fundamental-driven market. Recently, we noticed the emergence of fintech actors in Europe, with under-40 founders. This under-40 generation is interested in the stock exchange and portfolio management. When these people take over the money of their elders, and given the difficulties of the savings systems in Europe, pension funds are likely to develop dramatically. Therefore, we can assume that today's US pattern will cross the Atlantic. When this happens, it will be time to focus on large-cap, low-volatility names such as the Swiss ones.

Japan is very difficult to understand. It seems to be a merge of Europe and the US, but the trend tends towards a more US-looking market, with stock-picking that is likely to become more and more difficult.

In addition to these regional, investor-type-related patterns, there is a "momentum effect" that tends to be persistent. "Winners remain winners, losers remain losers", the same as for good and bad pupils. This stresses the "trend-following" pattern of the equity market, whether US, European or Japanese, with a kind of performance clustering over the years, as we can notice about volatility: periods of good performance tend to be followed by good performance again. Stock-pickers should focus on Europe, and systematic or factor-based investors on the US.
Should you want to pick stocks in the US, first select a universe quantitatively, using capitalization and historical-volatility factors; a sketch of such a screen follows below. This is likely to significantly enhance the performance of this "conditional" stock-picking and to avoid large losses. Moreover, keep in mind that today fundholders have access to financial information instantaneously, as do asset managers. There is no more information asymmetry. Information is now the same for everybody, professional and non-professional. This means that finance has changed a lot: 30 years ago, the fundholder used to receive information about his funds twice per year. Now it happens every day. Therefore, his psychological risk budget, which has not increased, is used up far more quickly. The consequence? Implicitly, unconsciously, this phenomenon has dramatically reduced the fundholder's holding period. Therefore, risk management has, now more than ever, to be taken into account ex ante in the asset management process, and not ex post, as is seen too often in the French AM industry. Smart risk management is as important as finding equity ideas to generate alpha. It is a way to avoid negative alpha and thus create added value for the fundholder. The other requirement is to know and understand the market you invest in. This is the aim of this article: it is not the same thing to know the companies you invest in (the analyst's job) and to know the market you invest in (the asset manager's job).
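As a minimal illustration of that quantitative pre-selection, here is a hypothetical Python screen. The data frame, column names, and the "top-quartile capitalization, bottom-quartile prior-year volatility" rule are all my assumptions, one possible reading of the advice above rather than the author's actual methodology:

```python
import pandas as pd

def conditional_universe(df: pd.DataFrame) -> pd.DataFrame:
    """Keep stocks whose prior-year volatility is in the lowest quartile
    and whose capitalization is in the highest quartile (hypothetical rule)."""
    vol_cut = df["hist_vol_prev_year"].quantile(0.25)
    cap_cut = df["market_cap_bln"].quantile(0.75)
    return df[(df["hist_vol_prev_year"] <= vol_cut) & (df["market_cap_bln"] >= cap_cut)]

# Toy usage with made-up data: tickers, caps in USD bn, last year's volatility.
universe = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD"],
    "market_cap_bln": [120.0, 2.5, 85.0, 0.3],
    "hist_vol_prev_year": [0.15, 0.55, 0.18, 0.80],
})
print(conditional_universe(universe))   # only large, low-vol names pass the screen
```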

The Limits Of Risky Asset Diversification

Summary

· Do you want to reduce the volatility of your asset portfolio? I have the solution for you: buy bonds and hold some cash.
· Once upon a time, prior to the Nixon era, no one hedged, and no one looked for alternative investments. Those buying stocks stuck to well-financed "blue chip" companies.
· The diversification from investor behavior is largely gone (the liability side of correlation).
· Spread your exposures, and do it intelligently, such that the eggs are in baskets as different as they can be, without neglecting the effort to buy attractive assets. But beyond that, hold dry powder: cash, which doesn't earn much or lose much, and some longer high-quality bonds that do well when things are bad, like long Treasuries.

When things are crowded, how much freedom to move do you have?

Stock diversification is overrated. Alternatives are more overrated. High-quality bonds are underrated.

This post was triggered by a guy from the UK who sent me an infographic on reducing risk that I thought was mediocre at best. First, I don't like infographics or video. I want to learn things quickly. Give me well-written text to read. A picture is worth maybe fifty words, not a thousand, when it comes to business writing, perhaps excluding some well-designed graphs.

Here's the problem. Do you want to reduce the volatility of your asset portfolio? I have the solution for you: buy bonds and hold some cash. And some say to me, "Wait, I want my money to work hard. Can't you find investments that offer a higher return and diversify my portfolio of stocks and other risky assets?" In a word, the answer is "no," though some will tell you otherwise.

Now once upon a time, in ancient times, prior to the Nixon era, no one hedged, and no one looked for alternative investments. Those buying stocks stuck to well-financed "blue chip" companies. Some clever people realized that they could take risk in other areas, and so they broadened their stock exposure to include:

· Growth stocks
· Midcap stocks (value & growth)
· Small cap stocks (value & growth)
· REITs and other income passthrough vehicles (BDCs, Royalty Trusts, MLPs, etc.)
· Developed international stocks (of all kinds)
· Emerging market stocks
· Frontier market stocks
· And more…

And initially, it worked. There was significant diversification until the new asset subclasses were crowded with institutional money seeking the same things as the original diversifiers. Now, was there no diversification left? Not much. The diversification from investor behavior is largely gone (the liability side of correlation). Different sectors of the global economy don't move in perfect lockstep, so natively the return drivers of the assets are 60-90% correlated (the asset side of correlation; think of how the cost of capital moves in a correlated way across companies). Yes, there are a few nooks and crannies that are neglected, like Russia and Brazil, and industries that are deeply out of favor like gold, oil E&P, coal, mining, etc., but you have to hold your nose and take reputational risk to buy them. How many institutional investors want to take a 25% chance of losing a lot of clients by failing unconventionally? Why do I hear crickets?
Hmm… Well, the game wasn't up yet, and those that pursued diversification pursued alternatives, and they bought:

· Timberland
· Real estate
· Private equity
· Collateralized debt obligations of many flavors
· Junk bonds
· Distressed debt
· Merger arbitrage
· Convertible arbitrage
· Other types of arbitrage
· Commodities
· Off-the-beaten-track bonds and derivatives, both long and short
· And more… one that stunned me during the last bubble was leveraged nonprime commercial paper.

Well, guess what? Much the same thing happened here as happened with non-"blue chip" stocks. Initially, it worked. There was significant diversification until the new asset subclasses were crowded with institutional money seeking the same things as the original diversifiers. Now, was there no diversification left? Some, but less. Not everyone was willing to do all of these. The diversification from investor behavior was reduced (the liability side of correlation). These don't move in perfect lockstep, so natively the return drivers of the risky components of the assets are 60-90% correlated over the long run (the asset side of correlation; think of how the cost of capital moves in a correlated way across companies). Yes, there are some that are neglected, but you have to hold your nose and take reputational risk to buy them, or sell them short. Many of those blew up last time. How many institutional investors want to take a 25% chance of losing a lot of clients by failing unconventionally? Why do I hear crickets again?

Hmm… That's why I don't think there is a lot to do anymore in diversifying risky assets beyond a certain point (the sketch at the end of this piece puts rough numbers on this). Spread your exposures, and do it intelligently, such that the eggs are in baskets as different as they can be, without neglecting the effort to buy attractive assets. But beyond that, hold dry powder. Think of cash, which doesn't earn much or lose much. Think of some longer high-quality bonds that do well when things are bad, like long Treasuries.

Remember, the reward for taking business risk in general varies over time. Rewards are relatively thin now; valuations are somewhere in the 9th decile (80-90%). This isn't a call to go nuts and sell all of your risky asset positions. That requires more knowledge than I will ever have. But it does mean having some dry powder. The amount is up to you as you evaluate your time horizon and your opportunities. Choose wisely. As for me, about 20-30% of my total assets are safe, but I have been a risk-taker most of my life. Again, choose wisely.

PS – If the low-volatility anomaly weren't overfished, along with other aspects of factor investing (smart beta!), those might also offer some diversification. You will have to wait for those ideas to be forgotten. Wait to see a few fund closures, and a severe reduction in AUM for the leaders…
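To put rough numbers on the "beyond a certain point" claim, here is a minimal sketch assuming equal-weighted assets with identical volatility and a uniform pairwise correlation in the 60-90% range cited above (the 20% volatility and 70% correlation are illustrative assumptions): portfolio volatility floors out at σ√ρ no matter how many assets you add.

```python
import numpy as np

def portfolio_vol(n_assets, asset_vol, avg_corr):
    """Volatility of an equal-weight portfolio of identical assets with a
    uniform pairwise correlation: sigma * sqrt(1/n + (1 - 1/n) * rho)."""
    return asset_vol * np.sqrt(1.0 / n_assets + (1.0 - 1.0 / n_assets) * avg_corr)

asset_vol, avg_corr = 0.20, 0.70
for n in (1, 5, 20, 100, 10_000):
    print(f"{n:>6} assets: portfolio vol = {portfolio_vol(n, asset_vol, avg_corr):.1%}")
# The floor is asset_vol * sqrt(avg_corr), about 16.7% here: most of the
# risk of a basket of risky assets simply never diversifies away.
```

Going from 1 to 20 risky assets only cuts volatility from 20% to about 17% under these assumptions; cash and high-quality bonds, with their low or negative correlation in bad times, do the heavy lifting instead.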