Comparing the Two Leading Pro Disc Golf Ranking Systems

When discussing professional disc golf, or any individual sport, a piece of information that is likely to come up is a player’s ranking. Who is number one in the world? Who is in the top ten? Whose ranking is rising, and whose is falling? In disc golf, there are two prominent ranking systems: the United States Tour Ranking, managed by the PDGA, and the UDisc World Rankings. The mere existence of two separate ranking systems throws the validity of each into question. Gannon Buhr is number one? Well, UDisc says that Eagle McMahon is number one, so which is it? Let’s take a look at both ranking systems, how they work, and the value of each, and see whether we can decide which one is superior, if either is.

The idea for this article came to me when I got into a brief argument with a stranger on Facebook. The argument remained civil and lasted only two or three exchanges, but it was enough to get me thinking. The thread originated on a post by Disc Golf Fanatic or some page like it; I don’t remember exactly. The post was simply a screenshot of one of the hot takes that Brodie Smith has dropped in the last few weeks. Honestly, I don’t remember exactly what the take was, and as you will come to see, it’s not really important.

I want to make a quick aside and say that I have no beef with Brodie Smith. I am pretty ambivalent about him in general. I have seen clips and takes from him in the past that I was impressed with; he seems like an overall nice and thoughtful person. But I also think that he can be a little hot-headed and too quick to criticize conventions in a sport that he has only played for a couple of years.

Anyway, on the Facebook post, I made a comment criticizing Smith for exactly that, saying that he was awfully critical of a sport he’s only been playing for two or three seasons. A stranger responded to my comment with something to the effect of, “Well he’s a top 30 player! He has a right to criticize!” Upon reading this, I thought that this stranger absolutely had to be off his rocker, because I knew there was no possibility that Brodie Smith was anywhere close to being a top 30 player. I immediately pulled up the UDisc World Rankings (which I assumed everyone uses) and found Smith ranked number 69 in the world, right about where I would have guessed. I pointed out to the social media stranger that Brodie Smith is not in fact a top 30 player, but is ranked a much lower 69th. The stranger came back at me with a screenshot from the PDGA website showing Smith ranked in the high twenties; 28th or 29th, I don’t remember exactly.

The discussion ended at that point, but I was shocked at how huge a discrepancy there was between these two rankings. Thirty spots is massive! I have of course spent lots of time browsing both rankings while doing research for this article, and unsurprisingly, Brodie Smith is not the only player who is ranked very differently on each list. How can players be ranked so differently on two lists that claim to rank the same thing? That is the question I am here to investigate.

In the following paragraphs I am going to give a brief overview of how rankings are calculated in each system. Please note that my overview will not be in-depth. There is really not much point in me copying and pasting, or worse, paraphrasing, the official explanations of these systems here when you can read them yourself at these links: UDisc explanation, PDGA explanation. My descriptions will be distillations of the proprietors’ explanations and will focus on each system’s strengths, weaknesses, and biases.

First let’s look at the PDGA US Tour Ranking. The most outstanding difference between the two ranking systems is definitely complexity: the PDGA’s system is dramatically simpler than UDisc’s. The PDGA system takes eight categories, assigns each category a number based on the player’s performance in it, and then averages those numbers. The player with the lowest average is ranked number one, and so on. That is the system in the simplest possible terms. Now let’s elaborate on it a bit.

Players’ ten best Elite Series event finishes and three best major finishes are used to assess performance in the following eight categories: place at the World Championships, place at the Champions Cup, place at the USDGC, average place across the player’s ten counted Elite Series events, average round rating at counted ES events, number of wins, number of podiums at counted ES events, and finally, number of top 10 finishes at counted ES events. For every category other than the three major finishes, it is the player’s rank in that category, not the raw value, that goes into the average. For example, Gannon Buhr has the most ES wins out of his counted ten for the year, with 4 wins, so the number counted for his “wins” category is 1, not 4, since he is 1st in wins.
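To make that averaging concrete, here is a minimal Python sketch. The three players, every category value, and the tie handling are all invented for illustration; this is only my reading of the averaging logic described above, not the PDGA’s actual code or data.

```python
# Hypothetical sketch of the PDGA-style category averaging described above.
# The three "major" categories use the raw finishing place; every other
# category is converted to a rank among the players (1 = best) first.
# All numbers below are made up for illustration.

players = {
    "Player A": {"worlds": 2, "champions_cup": 5, "usdgc": 1,
                 "avg_place": 6.3, "avg_rating": 1042, "wins": 4,
                 "podiums": 7, "top_10s": 9},
    "Player B": {"worlds": 1, "champions_cup": 3, "usdgc": 8,
                 "avg_place": 5.1, "avg_rating": 1039, "wins": 2,
                 "podiums": 6, "top_10s": 10},
    "Player C": {"worlds": 10, "champions_cup": 2, "usdgc": 4,
                 "avg_place": 8.7, "avg_rating": 1035, "wins": 1,
                 "podiums": 4, "top_10s": 7},
}

# Major finishes count as-is (lower place = better).
major_categories = ["worlds", "champions_cup", "usdgc"]
# Other categories are ranked among players; True means a higher value is better.
ranked_categories = {"avg_place": False, "avg_rating": True, "wins": True,
                     "podiums": True, "top_10s": True}

def rank_within_category(category, higher_is_better):
    """Return {player: rank} for one category, where 1 is the best."""
    ordered = sorted(players, key=lambda p: players[p][category],
                     reverse=higher_is_better)
    return {p: i + 1 for i, p in enumerate(ordered)}

scores = {}
for name in players:
    values = [players[name][c] for c in major_categories]
    for category, higher_is_better in ranked_categories.items():
        values.append(rank_within_category(category, higher_is_better)[name])
    scores[name] = sum(values) / len(values)  # lowest average ranks first

for place, (name, avg) in enumerate(sorted(scores.items(), key=lambda kv: kv[1]), 1):
    print(f"{place}. {name} (average: {avg:.2f})")
```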

My above explanation of the PDGA’s ranking system is fairly cursory, so if you are still not clear on how it works, please visit one of the links above and check it out. When you’re looking at the rankings and their explanation directly, it’s really not too hard to understand. I’m going to go ahead and move on to the strengths and weaknesses of this ranking system. I will say right up front that I think this ranking system is pretty bad. It has very few strengths, if any, and a plethora of weaknesses, which I will get into shortly. If this were the only ranking system that existed, I might not be as critical; it sort of gets the job done. But when held up next to UDisc’s system, it absolutely falls apart.

Weakness 1: Not enough events are counted. I can maybe kinda see the logic in counting only players’ ten best ES events and three best majors, in that it is a distillation of what players are capable of at the top of their game, but here’s the problem: it does not reward consistency, or at least not as much as it could and should. In my opinion, consistency is the essence of disc golf (and ball golf too, for that matter). The pursuit of consistency is what gets people addicted to golf. When you hit or throw a perfect shot, it scratches an itch in your brain and makes you say, “I want to do that every single time.” Consistency is also the most difficult part of disc golf. It’s not the backhand, the forehand, or the putting; it’s doing any of those things consistently. I can execute a lot of the shots that the pros do, as I’m sure you can too, but while they would execute those shots 90% of the time, I could only do it maybe 10% of the time. All this to say, I think consistency is one of the most valuable metrics, if not the most valuable, for assessing the greatness of a player. By taking only players’ ten best ES events and three best majors, players are essentially let off the hook for underperforming in the uncounted events, and I don’t think that’s right. Players’ poor performances should count against them just as their good performances count for them. Also, just from a statistical standpoint, why use less data? I just can’t understand omitting a large number of Elite Series events and ALL Silver Series events if accurate statistics are the end goal. More data is always better.

Weakness 2: Rankings are not updated often enough. This is fairly self-explanatory. The PDGA rankings update only after majors and at the end of the season. That is just not often enough. There are bound to be long stretches in which events that would significantly change the rankings have taken place, yet the rankings just sit there, giving viewers a false impression of where players actually deserve to be ranked.

Weakness 3: European events are omitted. This may be the biggest issue for me. The European Open and the PCS Open are not counted in this ranking system. That’s only two events, but they’re big ones: a major and an Elite Series event! These events are the exact same level as the other majors and ES events, and they are arbitrarily not counted simply because they happened in Europe. This is just insane to me. I guess it’s right there in the name, “United States Tour Rankings,” but here’s the thing: who cares where the events happened? The geography is irrelevant! Those events are played by mostly the same players and, more importantly, are on the same tour as all the others. And one of them is a MAJOR! I just cannot understand this decision. Corey Ellis arbitrarily does not get credit for winning a major, and Paul McBeth arbitrarily does not get credit for his only ES win of the year, the PCS Open, to say nothing of McBeth’s other Euro Tour wins. Another negative result of this omission is that it makes the rankings of European players less meaningful. If a mid-level European player played only three or four events in the US but excelled on the Euro Tour, they are going to be ranked significantly lower than a potentially worse American player.

Now let’s look at the UDisc World Rankings. The UDisc system is much more dynamic and encompassing than the PDGA system, and the math and statistical analysis behind it are much more advanced. As I mentioned before, I am not going to get into the nitty-gritty of the math because the reader would be better off getting it from the horse’s mouth at the link above. What I have been able to glean from UDisc’s explanation article is that, rather than averaging several separate category values to arrive at a ranking, a formula is run for each player after each event, with the player’s finishing place as the input. Each player maintains a rating (not to be confused with a PDGA rating, but rather a rating within the ranking system itself), and that rating changes after every event once the formula is applied.

UDisc’s ranking system is superior to the PDGA’s for the following reasons, which are largely the converses of my above critiques:

Reason 1: The system is dynamic. What I mean by dynamic is that a player’s rating within the ranking system changes more or less depending on the ratings of the players they beat or lost to. The explanation article linked above walks through a hypothetical four-player tournament and calculates each player’s rating change, which helps illustrate this; I sketch the same idea in the code below. I really like this feature because I think a lower-level player should be rewarded if they pop off in an event and defeat much higher-level players, even if they don’t win. And vice versa: high-level players should take a big rating hit if they lose to significantly worse players. So the results of each event, and their effect on player rankings, are much more nuanced and detailed than in the PDGA system and will affect each player differently.
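UDisc does not publish a simple drop-in formula, so here is a generic Elo-style sketch in Python that captures the dynamic behavior described above. The K constant, the starting ratings, the pairwise structure, and the player names are all assumptions for illustration; this is not UDisc’s actual math, just the general principle that beating a higher-rated opponent moves your rating more than beating a lower-rated one.

```python
import itertools

# Generic Elo-style illustration of a "dynamic" rating update, NOT UDisc's
# actual formula. After an event, every pair of players is compared, and the
# player who finished ahead gains rating at the other's expense, with the
# size of the swing depending on the rating gap.

K = 16  # how much a single pairwise result can move a rating (assumed value)

def expected_score(rating_a, rating_b):
    """Probability-like expectation that A finishes ahead of B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update_ratings(ratings, finishing_order):
    """Apply pairwise Elo-style updates for one event.

    ratings: {player: rating}; finishing_order: list of players, winner first.
    """
    new = dict(ratings)
    for ahead, behind in itertools.combinations(finishing_order, 2):
        exp_ahead = expected_score(ratings[ahead], ratings[behind])
        swing = K * (1 - exp_ahead)  # small if the result was expected, big if not
        new[ahead] += swing
        new[behind] -= swing  # zero-sum within each pairing
    return new

ratings = {"Favorite": 1600, "Contender": 1500, "Underdog": 1350}
# Hypothetical event in which the underdog beats both higher-rated players.
after = update_ratings(ratings, ["Underdog", "Contender", "Favorite"])
for name in ratings:
    print(f"{name}: {ratings[name]} -> {after[name]:.1f}")
```

Running this, the underdog gains far more for beating the two higher-rated players than they would have for beating players rated below them, and the favorite takes the biggest hit, which is exactly the behavior I am praising here.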

Reason 2: More events are counted. UDisc’s system counts all pro majors, Elite and Silver Series, Euro Tour, Prodigy Disc Pro Tour, and European Disc Golf Championship events. This lineup is going to give a much more complete picture of a player’s season than the PDGA’s system. As I griped about earlier, I think it’s important for all events on the same tour to be counted. While Silver Series events are definitely all-around “smaller” (smaller purse, fewer players, lower average caliber of player, etc.) they are still very high-level events and are still part of the Disc Golf Pro Tour. The more events counted, the better indicator of consistency a ranking will be. In my mind, a player who happened to play well at majors and a couple ES events but missed cash at 5 Silvers should be ranked lower than a player who never finished top 10, but also never missed cash.

Reason 3: It is updated immediately after every event. Rankings are updated almost the minute an event concludes, which leaves very little time in which players can sit at an unrealistic ranking.

There will probably always be people who say, “Oh, PDGA ratings/rankings/tour points/whatever aggregating metric doesn’t matter.” And in a sense, I get this school of thought. Why try to label and rank every little thing? But I think one could also argue that the perpetual question of all professional sports is: who is the best? This is a hard thing to nail down for a number of reasons: not every player plays every event, every event is different, and different people define excellence differently. Ranking systems are an effort to overcome these issues and aggregate a player’s performance into a value that is directly comparable across all players. It’s an effort to establish a reliable and accurate way to compare apples to apples, and I think the UDisc World Rankings do this very effectively. At the end of the day, the best player is the one who beats the most players the most often, and that is really the driving force behind UDisc’s ranking system; I hope I have shown that adequately.

I know that I really ripped into the PDGA’s system, and I stand by what I said. But I also feel a little bad, because somebody presumably worked hard to develop that system, and I hate to bash those people and their work. My intention is not to pick on the developers of the PDGA system or the PDGA itself, but merely to expose the flaws in the system and to encourage readers not to put too much stock in its rankings, and instead to treat UDisc’s rankings as the most official and accurate tool for comparing players.

Disagree with anything in this article? Drop a comment and give us your thoughts. Let’s discuss.
