The Sharpe ratio (also known as the Sharpe index, Sharpe measure, or reward-to-variability ratio) is a measure of the mean excess return per unit of risk in an investment asset or a trading strategy. Since the original author's 1994 revision, it is defined as:
S = E[R − Rf] / σ = E[R − Rf] / √var[R − Rf],
where R is the asset return, Rf is the return on a benchmark asset, such as the risk free rate of return, E[R − Rf] is the expected value of the excess of the asset return over the benchmark return, and σ is the standard deviation of the excess return (Sharpe 1994).
Note that if Rf is a constant risk free return throughout the period, then E[R − Rf] = E[R] − Rf. Sharpe's 1994 revision acknowledged that the risk free rate changes with time. Prior to this revision, the definition was

S = (E[R] − Rf) / σ,

assuming a constant Rf.
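The definition above can be sketched as a short computation over per-period returns. The return series below are hypothetical, invented for illustration; they do not come from the original text.

```python
# Minimal sketch of the ex-post Sharpe ratio (Sharpe 1994),
# using hypothetical per-period return series.
import statistics

def sharpe_ratio(asset_returns, benchmark_returns):
    # Per-period excess returns of the asset over the benchmark.
    excess = [r - rf for r, rf in zip(asset_returns, benchmark_returns)]
    # Mean excess return divided by the standard deviation of the excess return.
    return statistics.mean(excess) / statistics.stdev(excess)

# A constant risk free rate, under which the ratio reduces to (E[R] - Rf) / sigma.
riskfree = [0.003] * 6

# Two hypothetical assets: asset_b has the higher mean return,
# but asset_a earns more excess return per unit of volatility.
asset_a = [0.04, 0.01, -0.02, 0.05, 0.03, 0.00]
asset_b = [0.08, -0.05, 0.09, -0.04, 0.07, -0.03]

print(sharpe_ratio(asset_a, riskfree))
print(sharpe_ratio(asset_b, riskfree))
```

Against the same benchmark, the asset with the higher ratio (here asset_a) delivered more return for the same risk, even though its mean return is lower.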
The Sharpe ratio is used to characterize how well the return of an asset compensates the investor for the risk taken. When comparing two assets against the same benchmark with return Rf, the asset with the higher Sharpe ratio gives more return for the same risk. Investors are often advised to pick investments with high Sharpe ratios.
Sharpe ratios, along with Treynor ratios and Jensen's alphas, are often used to rank the performance of portfolio or mutual fund managers.
This ratio was developed by William Forsyth Sharpe in 1966. Sharpe originally called it the "reward-to-variability" ratio before it came to be called the Sharpe ratio by later academics and financial professionals. More recently, the (original) Sharpe ratio has often been challenged with regard to its appropriateness as a fund performance measure during evaluation periods of declining markets (Scholz 2007).