
The Future Of Beta – Slip Sliding Away…

Value, momentum, size, quality, volatility, etc., as factors in investing are quite popular. They’ve produced significant outsized returns relative to benchmarks. Now we even have smart beta funds and ETFs popping up all over to make taking advantage of factors super easy. That brings up the critical question every investor interested in using factors in their portfolio should ask: will the outperformance of factor investing continue in the future? Here I’ll take a look at a recent post from Alpha Architect that addresses this question. In short, investors should expect past outperformance to decrease in the future.

Basically, there are two reasons why outperformance could go away: data mining (the factor is not real, just an artifact of the data) and arbitrage (investors become aware of the anomaly, invest in it in a big way, and it disappears). The Alpha Architect post references a study that looked at out-of-sample performance of factors. Out-of-sample returns came in lower than the historical results had shown – about 40-70% of what they were in the past. Sobering.

But as I’ve discussed on the blog in the past, some factors are better than others. In another post, a bunch of factors are analyzed, and the only two sustainable ones are value and momentum. This is the reason all the strategies I use are primarily focused on these two factors. But one of the reasons value and momentum work is that they come with periods of awful performance, absolute and relative, and drawdowns. All of which make them very difficult to stick with over the long term. And if history is a guide, investors should expect their relative outperformance to decrease going forward as more investors become aware of them.

In a way, these factor strategies are even harder to stick with than simply buying and holding traditional index products. When you’re indexing, at least you’re doing no worse than the index! There is no FOMO (fear of missing out). If you’re not willing or able to tolerate underperformance, potentially for long periods of time, then you won’t be successful with factors. But I think there are several things investors can do to increase their chances of success going forward:

- Reduce expectations: I always reduce potential outperformance by at least half when I look at implementing a strategy (the short sketch at the end of this post makes the arithmetic concrete).
- Diversify: Use multiple strategies – buy-and-hold indexing, TAA, smart beta, individual stocks. There’s a very strong chance at least one of the strategies will be outperforming, increasing your chances of sticking with your program. It doesn’t have to be, and shouldn’t be, all or nothing.
- Stick with what works: Value and momentum strategies have stood the test of time… at least so far.
- Dampen portfolio volatility with bonds.
- Reduce noise: There is a lot of noise in markets today, and investors need to work hard to tune it out. Try to go one month without looking at the market. Most investors I know can’t go a week.
- Have an investing process: Investing your money shouldn’t be haphazard and random. As with many things in life, having a system and process will help you achieve success. What are your goals? How does your portfolio match those goals? When do you rebalance? What strategies are you implementing and why? Do the same things at the same times on a regular schedule.

In summary, factor outperformance could very possibly decrease in the future. But these are still likely to be very powerful wealth-building strategies if investors can stick with them and not expect the future to be exactly like the past.
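As promised above, here is a minimal back-of-envelope sketch in R of the “reduce expectations” rule. The 3% and 4% premium figures are hypothetical placeholders, not numbers from the posts; the 40-70% retention range comes from the out-of-sample study cited above, and the 50% haircut is simply the rule of thumb described in the list.

```r
# Haircut historical factor premia to form forward-looking expectations.
# Premium figures below are hypothetical; retention range is from the
# out-of-sample study discussed above.
historical_premium <- c(value = 0.03, momentum = 0.04)  # hypothetical annual outperformance

retention_low  <- 0.40  # pessimistic end of the out-of-sample range
retention_high <- 0.70  # optimistic end of the range
rule_of_thumb  <- 0.50  # "reduce potential outperformance by at least half"

round(data.frame(
  historical = historical_premium,
  low        = historical_premium * retention_low,
  high       = historical_premium * retention_high,
  halved     = historical_premium * rule_of_thumb
), 4)
```

A 3% historical value premium, for example, pencils out to roughly 1.2-2.1% going forward, which is why halving expectations is a sensible default.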

A Better Way To Run Bootstrap Return Tests: Block Resampling

Developing confidence about a portfolio strategy’s track record (or throwing it onto the garbage heap), whether it’s your own design or a third party’s model, is a tricky but essential chore. There’s no single solution, but a critical piece of the analysis for estimating return and risk, including the potential for drawdowns and fat tails, is generating synthetic performance histories with a process called bootstrapping. The idea is based on simulating returns by drawing on actual results to create thousands of alternative histories to consider how the future may unfold.

The dirty little secret in this corner of Monte Carlo analysis is that there’s more than one way to execute bootstrapping tests. To cut to the chase, block bootstrapping is a superior methodology for asset pricing because it factors in the reality that market returns exhibit autocorrelation. The bias for momentum – positive and negative – in the short run, in other words, can’t be ignored, as it is in standard bootstrapping. There’s a tendency for gains and losses to persist – bear and bull markets are the obvious examples, although shorter, less extreme runs of persistence also mark the historical record. Conventional bootstrapping ignores this fact by effectively assuming that returns are independently distributed. They’re not, which is old news: the empirical literature demonstrates rather convincingly a strong bias for autocorrelation in asset returns.

Designing a robust bootstrapping test on historical performance demands that we integrate autocorrelation into the number crunching to minimize the potential for generating misleading results. The key point is recognizing that sampling historical returns for analysis should focus on multiple periods. Let’s assume we’re looking at monthly performance data. A standard bootstrap would reshuffle the sequence of actual results and generate alternative return histories – randomly, based on monthly returns in isolation from one another. That would be fine if asset returns weren’t highly correlated in the short run. But as we know, positive and negative returns tend to persist for a stretch, sometimes in the extreme. The solution is sampling actual histories in blocks of time (in this case several months) to preserve the autocorrelation bias.

The question is how to choose the length of the blocks, along with some other parameters. Much depends on the historical record, the frequency of the data, and the mandate for the analysis. There’s a fair amount of nuance here. Fortunately, R offers several practical solutions, including the meboot package (“Maximum Entropy Bootstrap for Time Series”).

As an illustration, let’s use a couple of graphics to compare a standard bootstrap to a block bootstrap, based on monthly returns for the US stock market (S&P 500). To make this illustration clear in the charts, we’ll ignore the basic rules of bootstrapping and focus on a ridiculously short period: the 12 months through March 2016. If this were an actual test, I’d crunch the numbers as far back as history allows, which runs across decades. I’m also generating only ten synthetic return histories; in practice, it’s prudent to create thousands of data sets. But let’s dispense with common sense in exchange for an illustrative example.

The first graph below reflects a standard bootstrap – resampling the historical record with replacement. The actual monthly returns for the S&P (red line) are shown in context with the resampled returns (light blue lines).
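The chart itself isn’t reproduced here, but something along the lines of the following R sketch generates the standard-bootstrap picture. The twelve monthly returns are made-up placeholders standing in for the actual S&P 500 figures, which aren’t included in this post.

```r
set.seed(42)
# Hypothetical stand-in for the 12 monthly S&P 500 returns through
# March 2016 (decimal form); in a real test, use the full history.
actual <- c(-0.031, 0.057, -0.015, 0.008, 0.021, -0.064,
            -0.026, 0.083, 0.002, 0.018, -0.001, 0.066)

# Standard (IID) bootstrap: resample monthly returns with replacement,
# which destroys any autocorrelation in the original sequence.
n_reps <- 10
synthetic <- replicate(n_reps, sample(actual, length(actual), replace = TRUE))

# Plot the ten synthetic histories (light blue) against the actual path (red).
matplot(synthetic, type = "l", lty = 1, col = "lightblue",
        xlab = "Month", ylab = "Monthly return",
        main = "Standard bootstrap: sequence fully reshuffled")
lines(actual, col = "red", lwd = 2)
```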
As you can see, the resampled performances represent a random mix of results via reshuffling the sequence of actual monthly returns. The problem is that the tendency for autocorrelation is severed in this methodology. In other words, the bootstrap sample is too random – the returns are independent of one another. In reality, that’s not an accurate description of market behavior. The bottom line: modeling history through this lens could, and probably will, lead us astray as to what could happen in the future.

Let’s now turn to block bootstrapping for a more realistic profile of market history. Note that the meboot package does most of the hard work here in choosing the length of the blocks; the details of the algorithm are outlined in the package vignette, and a short code sketch follows at the end of this post. For now, let’s just look at the results. As you can see in the second chart below, the resampled returns resemble the actual performance history. It’s obvious that the synthetic performances aren’t perfectly random. Depending on the market under scrutiny and the goal of the analytics, we can adjust the degree of randomness. The key point is that we have a set of synthetic returns that are similar to, but don’t quite match, the actual data set.

Note that no amount of financial engineering can completely wipe away uncertainty. The future can and probably will deliver surprises, for good and ill, no matter how clever our analytics. Nonetheless, bootstrapping historical data (or in-sample returns via backtests) can help separate the wheat from the chaff when looking into the rearview mirror as a preview of what lies ahead. But the details of how you run a bootstrap test are critical for developing comparatively high-confidence test results. In short, we can’t ignore a simple fact: market returns have an annoying habit of exhibiting non-random behavior.
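For reference, here is a minimal sketch of the second chart using meboot, again with the hypothetical returns standing in for the actual S&P 500 data. Strictly speaking, meboot uses a maximum-entropy algorithm rather than fixed-length blocks, but it serves the same purpose here: the replicates preserve the dependence structure of the original series, which is why they hug the actual path rather than scattering randomly.

```r
# install.packages("meboot")  # if not already installed
library(meboot)

set.seed(42)
# Same hypothetical 12-month return series as in the first sketch.
actual <- ts(c(-0.031, 0.057, -0.015, 0.008, 0.021, -0.064,
               -0.026, 0.083, 0.002, 0.018, -0.001, 0.066))

# Generate 10 maximum-entropy replicates that retain the autocorrelation
# of the original series (vs. the IID reshuffle of a standard bootstrap).
boot <- meboot(actual, reps = 10)

# boot$ensemble is a matrix with one column per synthetic history.
matplot(boot$ensemble, type = "l", lty = 1, col = "lightblue",
        xlab = "Month", ylab = "Monthly return",
        main = "Dependence-preserving bootstrap: replicates track the actual path")
lines(as.numeric(actual), col = "red", lwd = 2)
```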

The ETF Monkey Vanguard Core Portfolio: April 13, 2016 Rebalance

Back on February 11, 2016, I executed a series of transactions to rebalance The ETF Monkey Vanguard Core Portfolio. As explained in that article, the severe decline in both domestic and foreign stocks left these two asset classes significantly underweight, with bonds being overweight. Here, for convenience, is a “before and after” snapshot of that transaction.

As it turns out, the timing of that rebalancing could not have been better. In hindsight, it can be seen that February 11 represented, at least to this point, the low point for 2016. I don’t take particular credit for this. My efforts were simply an application of the principles found in this article. As I also noted in my previous article, I executed a fairly aggressive set of transactions. Mindful of the fact that I am deliberately incurring trading commissions on all transactions in this particular portfolio, to make the exercise as “real world” as possible, I commented that I need to make each transaction count. This being the case, I temporarily underweighted bonds in favor of adding to the other two severely depressed asset classes.

Here is the equivalent Excel spreadsheet for today’s transaction. Have a look, and then I will offer some comments.

Likely, the first thing that jumped out at you is that both domestic and foreign stocks have staged fairly stunning comebacks since February 11. The Vanguard Total Stock Market ETF (NYSEARCA: VTI) registered a gain of 14.69% during this period, while the Vanguard FTSE All-World ex-US ETF (NYSEARCA: VEU) did even better, at 15.55%! On the flip side, this incredible performance, combined with my aggressive rebalancing transaction, left bonds substantially underweight: their 13.37% weighting is a full 4.13 percentage points below my target weight of 17.50%, or 23.6% in relative terms (4.13 / 17.50). A short sketch at the end of this post reproduces the math.

Given these developments, it appeared to me that this was a fitting point to take some of those profits, so to speak, and get the portfolio more closely aligned with my target weights. While it is not yet May, I will admit that the old adage “Sell in May and go away” contributed in some small way to the timing of this decision. I didn’t want to take a chance on being overweight domestic equities, only to have them experience a summer swoon!

You may also notice that foreign stocks were about even with my target allocation as I reviewed the portfolio today. This is because I did not add as heavily to this asset class in the prior rebalance. Therefore, I decided to adjust only the domestic stock and bond asset classes with this transaction, saving one trading commission. Take one last peek at the “after” section of the spreadsheet, and you will notice that all three asset classes are now fairly closely aligned with their targets. I hope this sets the portfolio up nicely for the summer.

Disclosure: I am not a registered investment advisor or broker/dealer. Readers are cautioned that the material contained herein should be used solely for informational purposes, and are encouraged to consult with their financial and/or tax advisor respecting the applicability of this information to their personal circumstances. Investing involves risk, including the loss of principal. Readers are solely responsible for their own investment decisions.
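For readers who want to replicate the weight math above, here is a minimal R sketch. The dollar values and the two stock targets are hypothetical stand-ins chosen to match the narrative; only the 17.50% bond target and the 13.37% bond weighting come from the post.

```r
# Rebalancing math for a three-fund portfolio. Dollar values and stock
# targets are hypothetical; the 17.50% bond target is from the post.
value  <- c(us_stocks = 47230, foreign_stocks = 39400, bonds = 13370)
target <- c(us_stocks = 0.431, foreign_stocks = 0.394, bonds = 0.175)

total      <- sum(value)   # 100,000 in this made-up example
current_wt <- value / total

# Deviation from target: bonds at 13.37% vs. 17.50% is -4.13 points,
# or about -23.6% in relative terms, matching the figures in the post.
abs_dev <- current_wt - target
rel_dev <- abs_dev / target

# Dollar trades that restore the targets (positive = buy, negative = sell).
# In this example only domestic stocks and bonds need a trade; foreign
# stocks are already about even with target, as in the post.
trade <- target * total - value

round(data.frame(current_wt, target, abs_dev, rel_dev, trade), 4)
```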