Ranking rankings: an empirical comparison of the predictive power of sports ranking methods
In this paper, we empirically evaluate the predictive power of eight sports ranking methods. For each ranking method, we implement two versions: one using only win-loss data and one utilizing score-differential data. The methods are compared on four datasets: 32 National Basketball Association (NBA) seasons, 112 Major League Baseball (MLB) seasons, 22 NCAA Division 1-A Basketball (NCAAB) seasons, and 56 NCAA Division 1-A Football (NCAAF) seasons. For each season of each dataset, we apply 20-fold cross-validation to measure the predictive accuracy of the ranking methods. The non-parametric Friedman hypothesis test is used to assess whether the predictive errors of the considered rankings over the seasons are statistically distinguishable. The post-hoc Nemenyi test is then employed to determine which ranking methods differ significantly in predictive power. For all datasets, the null hypothesis that all ranking methods are equivalent is rejected at the 99% confidence level. For the NCAAF and NCAAB datasets, the Nemenyi test concludes that the implementations utilizing score-differential data are usually more predictive than those using only win-loss data. For the NCAAF dataset, the least squares and random walker methods have significantly better predictive accuracy, at the 95% confidence level, than the other methods considered.
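To make the comparison pipeline concrete, the following is a minimal sketch (Python with NumPy/SciPy, not the authors' code) of how per-season cross-validation errors for a few ranking methods could be compared with a Friedman test followed by a Nemenyi critical-difference check. The method labels, the random error matrix, and the q_alpha value (the standard critical value for k = 4 methods at alpha = 0.05) are illustrative placeholders only.

import numpy as np
from scipy.stats import friedmanchisquare, rankdata

methods = ["least_squares", "random_walker", "win_loss_A", "win_loss_B"]  # placeholder labels
rng = np.random.default_rng(0)
n_seasons = 32                                                  # e.g. one row per NBA season
errors = rng.uniform(0.25, 0.40, size=(n_seasons, len(methods)))  # placeholder CV error rates

# Friedman test: are the methods' per-season errors statistically distinguishable?
stat, p = friedmanchisquare(*(errors[:, j] for j in range(len(methods))))
print(f"Friedman chi-square = {stat:.3f}, p-value = {p:.4f}")

# Post-hoc Nemenyi test: two methods differ significantly when their mean ranks
# differ by more than the critical difference CD = q_alpha * sqrt(k(k+1)/(6N)).
k, N = len(methods), n_seasons
mean_ranks = np.apply_along_axis(rankdata, 1, errors).mean(axis=0)  # rank 1 = lowest error
q_alpha = 2.569        # studentized-range-based critical value for k = 4, alpha = 0.05
cd = q_alpha * np.sqrt(k * (k + 1) / (6.0 * N))
print(dict(zip(methods, mean_ranks.round(2))), "critical difference =", round(cd, 3))

In the paper, the analogous procedure is applied to the full set of win-loss and score-differential implementations across every season of each dataset.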
Year of publication: 2013
Authors: Daniel Barrow; Ian Drayer; Peter Elliott; Garren Gaut; Braxton Osting
Published in: Journal of Quantitative Analysis in Sports. De Gruyter, ISSN 1559-0410. Vol. 9 (2013), No. 2, pp. 187-202
Publisher: De Gruyter