From time immemorial (2012, as far as we know; no one can really remember) the PPP Top 25 Under 25 has used one tabulation method to figure out the final rankings. That method was to weight all the unranked votes at 100 and then just average the votes.
This method hasn’t been too popular lately, and the difficulty here is simple: either we force every voter to rank the entire list, or we weight unranked votes somehow; the only question is how.
The very good reason not to do the full ranking is that I don’t wanna. No one else does either. This is fun, not work, and in years when the eligibility list has 15 or more players everyone would say an absolute no to, voters would just fudge those rankings anyway. So given that we’ve established we’re weighting the unranked votes, all we have left to do is agree on a weighting.
Using 100 seemed way too big. It really punished players whom half the voters ranked in the lower half of the list while the other half hadn’t even heard of them. This year, we decided to take the weighting to the opposite extreme, which recognizes that we don’t know why each voter left someone unranked. Did the player just barely miss the cut, or were they considered bottom of the pile? It also recognizes that this is only an issue for the bottom portion of the list and the few honourable mentions each year who almost make it.
For the weighting to have much effect on the final ranking, there has to be roughly a 50-50 split between ranked and unranked votes for a player across the voting pool. If one voter fails to vote for someone high up the ranking, that’s not going to drop them down much in either system.
The other consideration is that we, no matter who we’ve been or how many of us there are over the years, nearly always fill in the top half of the list unanimously. Not in the same order, of course, but every year there is a point in the list above which everyone has ranked every player. The unranked are the group at the bottom who have wider swings in votes, less known about them, and are more subject to subjectivity.
The method used, as first suggested and tested by Hardev, was the following:
Every first place vote was changed to 25, every second to 24, every third to 23, and so on down to every 25th place vote changed to 1. That left all the unranked votes as zeros. The points were summed up for each player, and that gave us the ranking: the highest total (out of a possible 300) is first, and so on down to the lowest total. The completely unranked all ended up with a sum of zero, and the 25th ranked “player”, the Vegas 4th-round pick, had a sum of 20.
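For anyone who wants the mechanics spelled out, here’s a rough sketch of that tabulation in Python. The voters, players, and ballots in it are invented for illustration; only the scoring rule (25 points for a first place vote down to 1 for a 25th, zero for unranked) and the alphabetical tie break come from the method described here.

```python
# A rough sketch of the tabulation. The ballots below are made up; real ones
# are ordered lists of up to 25 names.
ballots = {
    "Voter A": ["Player One", "Player Two", "Player Three"],
    "Voter B": ["Player Two", "Player One", "Player Four"],
    "Voter C": ["Player One", "Player Three", "Player Two"],
}

totals = {}
for ranking in ballots.values():
    for position, player in enumerate(ranking, start=1):
        # A 1st place vote is worth 25 points, 2nd is 24, ... 25th is 1.
        totals[player] = totals.get(player, 0) + (26 - position)
# Any player a voter leaves off their ballot gets nothing from that voter,
# which is the "unranked counts as zero" part of the method.

# Highest point total ranks first; ties are broken alphabetically.
final_order = sorted(totals, key=lambda player: (-totals[player], player))
print(final_order)
```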
The weighted average ranking you see on each article is calculated from that sum and then inverted back into the order that makes sense to us all, with one at the top and 25 at the bottom.
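Put another way, and assuming the inversion is the straightforward one, that average is just 26 minus a player’s point total divided by the twelve voters, which works out to the same number as averaging all twelve ballots with any unranked vote counted as a 26. A quick sketch:

```python
NUM_VOTERS = 12  # twelve ballots this year

def weighted_average_rank(total_points: float) -> float:
    # 26 minus the average points per voter. Algebraically this is identical
    # to averaging everyone's rank with each unranked ballot counted as a 26.
    return 26 - total_points / NUM_VOTERS

print(weighted_average_rank(300))  # unanimous first place votes -> 1.0
print(weighted_average_rank(20))   # the Vegas pick's 20 points -> roughly 24.3
print(weighted_average_rank(0))    # no votes at all -> 26.0
```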
This year, the top 15 ranked players all had 12 votes, and two more ranked lower down did as well. For those players, both the 100 weighting and the reverse zero weighting give the same average ranking. The reverse zero ranking is essentially the same as ranking all unranked votes as a vote of 26.
This is the order from Denis Malgin, the last player to have his ranking changed by the new method, on down through all the players who received at least one vote. The grey bars are where they’d rank under the old system.
Joseph Woll is the big loser here. He would have been ranked 23rd under the old method, but only because some of those ranked above him in the new system had more unranked votes than he did. The big 100 weighting moved them down a bit and he floated up. He’s actually tied with Ian Scott for 26th and got slotted in at 27 by the cruelty of the alphabet as a tie breaker, so three places is the farthest anyone moved by changing the weighting.
Denis Malgin and Egor Korshkov swapped places, and I don’t think anyone would complain too hard about that. Pontus Holmberg, Joseph Duszak and Filip Kral got rearranged as a group. Bracco and Hollowell were switched, and the two 4th-round picks, Jesper Lindgren and Woll, were rearranged. Scott ends up out of the picture in 26th place by either method.
The results of this exercise this year, where the overall agreement on the list is fairly high due more to the makeup of the eligible players than anything else, show an effect that is very, very small. The number of tied point totals (Scott and Woll, Rubins and Koster, and two others you don’t know about yet) shows that there are definitely no clear delineations between players this year, no clean step down from each placing to the next.
It’s entirely possible the 100 weighting never made much of a difference, and that the worst it ever did was float one indistinguishable player up over the top-25 line and push another down in ways that don’t detract from our conversation about the Leafs prospect pool. Which is the point of this exercise, after all. That and ruining careers.