Rethinking Performance Evaluation
Duke University – Fuqua School of Business; National Bureau of Economic Research (NBER); Duke Innovation & Entrepreneurship Initiative
Texas A&M University, Department of Finance
November 26, 2015
We show that the standard equation-by-equation OLS used in performance evaluation ignores information in the alpha population and leads to severely biased estimates of that population. We propose a new framework that treats fund alphas as random effects. Our framework allows us to make inference on the alpha population while controlling for various sources of estimation risk. At the individual fund level, our method pools information from the entire alpha distribution to make a density forecast for the fund's alpha, offering a new way to think about performance evaluation. In simulations, we show that our method generates parameter estimates that universally dominate the OLS estimates, both at the population level and at the individual fund level. While it is generally accepted that few, if any, mutual funds outperform, applying our method leads to sharply different inference: we find that more than a quarter of funds significantly outperform.
Introduction
In a method reaching back to Jensen (1969), most studies of performance evaluation run separate regressions to obtain estimates of alphas and their standard errors. Under this approach, each fund is treated as a distinct entity with a fund-specific alpha. This is analogous to the fixed effects model in panel regressions, where a nonrandom intercept is assumed for each subject. We depart from the extant literature by proposing a "random effects" counterpart of the performance evaluation model (referred to as the random alpha model). In particular, we assume that fund i's alpha, α_i, is drawn independently from a common distribution. There are many reasons to consider the random alpha model. First, the fund data that researchers use (particularly hedge fund data) are likely to cover only a fraction of the entire population of funds. Therefore, with the usual caveats about sample selection in mind, it makes sense to make inference on this underlying population rather than focusing only on the available fund data. This is also one of the situations in which a random effects setup is preferred over a fixed effects setup in panel regression models.
Second, our random alpha model provides a structural approach to study the distribution of fund alphas. It not only provides estimates for the quantities that are economically important (e.g., the 5th percentile of alphas, the fraction of positive alphas), but also provides standard errors for these estimates by taking into account various sources of parameter uncertainty, in particular the uncertainty in the estimation of alphas.
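The random alpha setup described above can be sketched in a short simulation. All parameter values below (number of funds, alpha mean and dispersion, factor loading, volatilities) are hypothetical and chosen purely for illustration; they are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration values, not the paper's estimates.
n_funds, n_months = 1000, 120
mu_alpha, sigma_alpha = 0.0, 0.2   # monthly alpha population: mean and std (%)
beta, sigma_eps = 1.0, 4.0         # factor loading and idiosyncratic vol (%)

# Random alpha model: each fund's alpha is an independent draw from a
# common distribution, here alpha_i ~ N(mu_alpha, sigma_alpha^2).
alphas = rng.normal(mu_alpha, sigma_alpha, n_funds)
factor = rng.normal(0.5, 4.0, n_months)                   # factor returns f_t
eps = rng.normal(0.0, sigma_eps, (n_funds, n_months))     # idiosyncratic noise
returns = alphas[:, None] + beta * factor[None, :] + eps  # r_it = alpha_i + beta*f_t + e_it

# Population quantities of interest are well defined under this model
# (and known by construction in the simulation):
frac_positive = (alphas > 0).mean()
```

Because the alphas come from a common distribution, population quantities such as the fraction of positive alphas or the 5th percentile of alphas are meaningful parameters to estimate, rather than properties of the particular sample of funds at hand.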
Currently, there are three main approaches to performance evaluation, each with its own shortcomings. In the first, fund-level OLS regressions are run in the first stage and hypothesis tests are performed in the second stage. A regression t-statistic is obtained for each fund and used to test the statistical significance of its alpha. Adjustments are sometimes made for test multiplicity. Recent papers that follow this approach include Barras et al. (2010), Fama and French (2010), Ferson and Chen (2015), and Harvey and Liu (2015a).
There are several problems with this approach when it comes to making inference on the cross-sectional distribution of fund alphas. First, it does not allow us to extrapolate beyond the range of the t-statistics in the available data. For instance, while the observed best performer might have a t-statistic of 3.0, we do not know the fraction of funds in the population with a t-statistic exceeding 3.0. Second, neither single tests nor multiple tests are useful when we try to make statements about the properties of the population of alphas. For instance, one economically important question is: what fraction of investment funds generate a positive alpha? Under the hypothesis testing framework, one candidate answer is the fraction of funds that are tested to generate a significant and positive alpha. However, this answer is likely to be severely biased given the existence of many funds that generate a positive yet insignificant alpha. Indeed, these funds are likely to be classified as zero-alpha funds, that is, funds whose alpha strictly equals zero under hypothesis testing. In essence, equation-by-equation hypothesis testing treats fund alphas as dichotomous variables and thus does not allow us to make inference on the cross-sectional distribution of fund alphas.
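The downward bias described above is easy to see in a simple simulation. The alpha distribution and the common standard error below are hypothetical, chosen only to make the point concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical alpha population: many funds with small positive alphas.
n_funds = 2000
true_alphas = rng.normal(0.05, 0.10, n_funds)   # monthly alphas (%)
se_alpha = 0.08                                 # assumed std error of each OLS alpha
alpha_hat = true_alphas + rng.normal(0.0, se_alpha, n_funds)
t_stats = alpha_hat / se_alpha

true_positive_frac = (true_alphas > 0).mean()
# Hypothesis-testing answer: only funds with t > 1.96 count as positive.
tested_positive_frac = (t_stats > 1.96).mean()

# The tested fraction falls far short of the true fraction because
# positive-but-insignificant funds are lumped in with zero-alpha funds.
print(true_positive_frac, tested_positive_frac)
```

In this setup most funds have a genuinely positive alpha, yet only a small minority clear the significance hurdle, so counting significant funds badly understates the true fraction of positive-alpha funds.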
Our method allows us to estimate the underlying alpha distribution and make inference on quantities that depend on the alpha population. Meanwhile, it provides a density estimate for each fund's alpha, allowing inference on individual funds and an answer to the question: did the fund outperform?
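One simple way to see how pooling information from the alpha population yields a fund-level density is a normal-normal calculation in the spirit of empirical Bayes. This is only a sketch of the pooling idea under an assumed normal alpha population, not the paper's actual estimator, and all numbers are hypothetical:

```python
import numpy as np

# Assume the alpha population is N(mu, sigma^2) and the fund's OLS estimate
# satisfies alpha_hat_i ~ N(alpha_i, se_i^2). Then the density of alpha_i
# given alpha_hat_i is normal with a precision-weighted mean.
def alpha_density_params(alpha_hat_i, se_i, mu, sigma):
    w = sigma**2 / (sigma**2 + se_i**2)          # weight on the fund's own estimate
    post_mean = w * alpha_hat_i + (1 - w) * mu   # shrunk toward the population mean
    post_sd = np.sqrt(w) * se_i                  # tighter than the raw standard error
    return post_mean, post_sd

# A noisy individual estimate is shrunk heavily toward the population mean:
m, s = alpha_density_params(alpha_hat_i=0.5, se_i=0.3, mu=0.0, sigma=0.1)
```

The shrinkage weight reflects how informative the fund's own data are relative to the population: a fund with a large standard error borrows heavily from the cross-section, while a precisely estimated fund keeps most of its own alpha estimate.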
The second approach involves first running fund-level OLS and then trying to estimate the distribution of the fitted alphas. By doing this, it is possible to make inference on the alpha population. However, this approach fails to take into account the various sources of estimation uncertainty, rendering the inference problematic. For instance, Chen et al. (2015) try to model the cross-section of fund alphas. Since the alphas are obtained from the first stage OLS, their model cannot take into account the uncertainty in the estimation of the model parameters, in particular, the uncertainty in the estimation of alphas. Such uncertainty is important given the time-varying nature of fund returns and the fact that for some investment styles standard factor models are only able to explain a small fraction of fund return variance.
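The problem with modeling fitted alphas directly can also be seen in a small simulation: estimation noise inflates the cross-sectional spread of the fitted alphas relative to the true alphas. The dispersion values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Fitted alphas = true alphas + OLS estimation error.
n_funds = 5000
true_alphas = rng.normal(0.0, 0.10, n_funds)  # true cross-sectional std: 0.10
est_error = rng.normal(0.0, 0.15, n_funds)    # per-fund estimation noise
fitted_alphas = true_alphas + est_error

# The fitted alphas are noticeably more dispersed than the true alphas
# (std roughly sqrt(0.10^2 + 0.15^2) ~ 0.18), so fitting a distribution to
# them directly overstates the dispersion of the alpha population.
print(true_alphas.std(), fitted_alphas.std())
```

A second-stage model that ignores this estimation error attributes the extra spread to the alpha population itself, which is exactly the inference problem the random alpha framework is designed to address.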