In this post, we continue our analysis, and achieve better success after some adjustments to the methodology.
Our best indicator was tuition (in-state), at 66% correct, which beat the worst-performing analyst by 3% (one pick) and tied the analyst who ranked second from last.
Our indicators didn’t stack up well against the “experts” or the national average at all. However, we noticed that our indicators were picking the wrong schools in completely obvious scenarios, such as 1 vs. 16 matchups and other very high vs. very low seeded games. These are generally the easy picks, not the ones we need help with. To counteract this, we picked the top-seeded team to automatically win the following games: 1 vs. 16, 2 vs. 15, 3 vs. 14, 4 vs. 13, and 5 vs. 12. The top-seeded team was picked even when we knew the lower-seeded team had actually won the game. Of these 20 games, this rule produced four wrong picks.
That left 12 games to be decided by our indicators: the tougher matchups from 8 vs. 9 through 6 vs. 11.
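The modified strategy can be sketched as a simple rule: take the top seed automatically in the 1 vs. 16 through 5 vs. 12 games, and fall back to an indicator for the closer matchups. The team names, indicator values, and the assumption that a higher indicator value predicts a win are all hypothetical here, used only to illustrate the decision rule.

```python
def pick_winner(team_a, team_b, indicator):
    """Pick a first-round winner under the modified strategy.

    Each team is a (name, seed) tuple; `indicator` maps a team name to
    the statistic used for close games (e.g. in-state tuition).
    """
    # "Easy" matchups (1v16 through 5v12): always take the top seed,
    # even when the underdog actually won.
    if min(team_a[1], team_b[1]) <= 5:
        return team_a if team_a[1] < team_b[1] else team_b
    # Close matchups (6v11, 7v10, 8v9): defer to the indicator,
    # assuming here that a higher value predicts a win.
    return team_a if indicator[team_a[0]] >= indicator[team_b[0]] else team_b

# Hypothetical 8 vs. 9 game decided by the indicator:
indicator = {"State U": 41000, "Tech": 38500}
print(pick_winner(("State U", 8), ("Tech", 9), indicator)[0])  # State U
```

A 1 vs. 16 game never consults the indicator at all, which is exactly how the four easy-game misses described above slip through.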
Figure 1 compares these modified scenarios against the same unmodified scenarios.
As the graph shows, the modified strategy beat the original strategy for every indicator. Every indicator reached 69% or greater, and three strategies (tuition (in-state), admission rate, and 2012 endowment) either tied or beat the national average picks. Admission rate gave the best results at 78%.
Table 2 shows how the modified strategies performed against the comparators.
The three top analysts and President Obama remained at the top, all picking over 80% correctly. However, admission rate and 2012 endowment joined the upper ranks, predicting better than the average pundit, the national average bracket, and the “chalk” bracket.
By selecting the top-seeded teams in the easy games and using one of these indicators to predict the closer games, our bracket beat the national average and the average analyst prediction.