The Ever-Popular Recruiting/Development Rankings

Some of you more loyal readers may recall the recruiting/development rankings I created around this time last year. At the time, the development rankings were basically a thought project to quantify some of the claims that get thrown around on this blog. On the blog and just around DIII tennis, you’ll often hear people say things like, “Cruz is so good at developing their players,” and “Chicago should be way better with how much talent they have.” I wanted to put those assumptions to the test.

The scoring system I used last year depended solely on the TRN “star” system, and a number of readers pointed out some of its more glaring flaws. This year, I’ve tried a new system with the same basic idea to address those concerns. Here are the features of the new development/recruiting ranking system:

-Instead of using TRN stars, I’ve used each player’s highest ranking from their senior year (as reported by TRN). I figured the highest ranking was more indicative of how good the player is than their final ranking, because players tend to stop playing as many tournaments once they get into college. Picking each player’s high point makes more sense than using whatever arbitrary time TRN stops ranking the player. This is obviously still flawed, but probably less so: some players play fewer tournaments, so their rankings aren’t indicative of their ability; other players get injured; etc. In cases where a player doesn’t have a ranking from their senior year, I used their highest ranking from their junior year.

-The previous system sort of penalized teams for having highly rated freshmen low in their lineup. To lessen this effect, I’ve used a “weighted average”: each player’s individual ranking is weighted by how many years they’ve spent at a program (transfers are credited only for years within the program). Basically, the rankings are multiplied by the number of years, those products are summed, and the total is divided by the total number of years of experience in the starting lineup.

-The previous system also did not consider the difference in worth between a 5-star at #1 singles and a 1-star playing only #3 doubles (or something like that). This system weights a player’s development based on how high they play in the lineup.

-The previous system considered both singles and doubles starters. Several readers pointed out that some players may have lower rankings because they are doubles specialists. That’s totally valid, so I just scrapped doubles starters entirely. Consider this list a “singles development/recruiting ranking.” Obviously, this system is still flawed because I use a team’s national ranking, which is largely determined by their doubles prowess, but very few teams have massive discrepancies between their singles and doubles play. The few that seemed to have poor doubles play last year (Hop, Wash U) didn’t appear to have their national ranking lowered by it, while “doubles” teams (GAC, Whitewater) didn’t appear to have theirs significantly raised by that strength.

-The “weighted average player ranking” serves as a recruiting ranking. That way, a team’s recruiting ranking isn’t affected by players who just end up on the roster without ever being recruited, and teams don’t get credit for recruiting 4-stars that never play (I’m looking at you, Emory. Let’s rephrase: I’m looking at you, National Champion Emory Eagles).

-Take the recruiting ranking (how successful a team should be based solely on how good the players are when they show up on campus), subtract the actual ranking (how successful the team actually is after the players show up on campus and start practicing), and you get a number. The higher the number, the better the development (theoretically). Teams are ranked by that number, and ties are broken by national ranking (the thinking being that the higher-ranked team probably deserves some credit for being higher ranked). A rough code sketch of the whole computation follows this list.
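For anyone who wants to see the mechanics spelled out, here’s a minimal Python sketch of that computation. The teams, rankings, years, and lineups below are made-up placeholders, and the lineup-position weights (6 for #1 singles down to 1 for #6) follow the formula I spell out in the comments below.

def weighted_average_ranking(lineup):
    # lineup: list of (TRN ranking, years at the program), ordered #1 to #6 singles.
    position_weights = range(len(lineup), 0, -1)  # 6, 5, 4, 3, 2, 1 for a six-man lineup
    numerator = sum(rank * years * w
                    for (rank, years), w in zip(lineup, position_weights))
    denominator = sum(years * w
                      for (_, years), w in zip(lineup, position_weights))
    return numerator / denominator

# Made-up example data: {team: (actual national rank, singles lineup)}.
teams = {
    "Team A": (1, [(80, 2), (120, 1), (150, 3), (200, 4), (250, 2), (400, 1)]),
    "Team B": (2, [(300, 4), (350, 3), (450, 4), (500, 2), (600, 1), (700, 2)]),
    "Team C": (3, [(150, 1), (180, 2), (220, 1), (260, 3), (320, 2), (500, 4)]),
}

# Recruiting ranking: sort by weighted average (lower average = better recruits).
recruiting_order = sorted(teams, key=lambda t: weighted_average_ranking(teams[t][1]))
recruiting_rank = {t: i + 1 for i, t in enumerate(recruiting_order)}

# Development score: recruiting rank minus actual rank.
# Positive means the team finished higher than its recruiting alone would suggest.
development = {t: recruiting_rank[t] - teams[t][0] for t in teams}

# Development ranking: higher score first, ties broken by national rank.
for team in sorted(teams, key=lambda t: (-development[t], teams[t][0])):
    print(f"{team}: recruiting #{recruiting_rank[team]}, "
          f"actual #{teams[team][0]}, development {development[team]:+d}")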

Some of the problems with the old rankings are still there: there is a ceiling and a floor. If you have really good recruiting, you basically can’t have good development in this system. For example, if you’re Amherst and you have the best recruiting ranking, the absolute best you can do is a 0, which is just OK. They could have great development and not get credit for it. Conversely, the system only takes into account nationally ranked teams, so a team like Chicago can only get ranked as low as 30. By no means does that mean they’re the 30th-best developing program in the country; it just means they’re the 30th-best developing team in the top 30. Yikes. As with last year, NCW was omitted entirely because they’re just different.

Oh, and if a team only had 4 or 5 players I could find on TRN, the weights were adjusted accordingly. Whitewater ended up getting screwed by this, so sorry, Warhawks! I still think your development is fantastic!

Also, the rankings from the end of last season were used, as were the lineups from each team’s last important match that was on the ITA tennis website, so this is really a measure of how well a team developed in 2012.

Without further ado:

Recruiting Rankings (Weighted Average Ranking in Parens)

1. Emory (85.46)

2. Amherst (88.64)

3. CMS (102.48)

4. Hopkins (113.46)

5. Middlebury (118.91)

6. Williams (120.67)

7. Wash U (145.88)

8. Carnegie Mellon (182.03)

9. Trinity CT (187.27)

10. Kenyon (198.53)

11. Skidmore (201.02)

12. Bowdoin (201.31)

13. Chicago (202.96)

14. Trinity TX (221.60)

15. UC Santa Cruz (232.12)

16. Washington & Lee (264.63)

17. Bates (281.98)

18. Pomona-Pitzer (284.43)

19. Mary Washington (302.98)

20. Redlands (309.39)

21. Wisconsin-Whitewater (310.97)

22. Gustavus Adolphus (326.64)

23. Depauw (338.90)

24. Swarthmore (359.71)

25. Cal Lutheran (370.64)

26. Denison (394.95)

27. Whittier (397.70)

28. Kalamazoo (400.03)

29. Whitman (411.09)

30. Case Western (452.06)

Development Rankings (Difference Between Recruiting and Actual Ranking in Parens)

1. Whitman (15)

2. Cal Lu (13)

3. Case Western (12)

4. Kenyon (8)

5. Pomona-Pitzer (7)

6. UC Santa Cruz (6)

7. Bowdoin (5)

8. Trinity TX (4)

9. Williams (3)

10. Wash U (3)

11. Redlands (3)

12. Bates (1)

13. Swarthmore (1)

14. Whittier (1)

15. Emory (0)

16. Denison (-1)

17. Mary Washington (-2)

18. Kalamazoo (-2)

19. Amherst (-3)

20. CMS (-3)

21. Gustavus Adolphus (-3)

22. Hopkins (-4)

23. Washington and Lee (-4)

24. Depauw (-5)

25. Wisconsin-Whitewater (-8)

26. Middlebury (-10)

27. Carnegie Mellon (-11)

28. Skidmore (-11)

29. Trinity CT (-15)

30. Chicago (-17)

Well, there it is. Whitman is in the top spot again, Chicago is in the bottom spot again, and there’s a lot of variation in between. This is far from a perfect ranking system, so please post suggestions in the comments. Please don’t be too combative. I still believe this system generally gets to the point and can tell us some meaningful things. For instance, almost all of the top 15 teams are either warm-climate schools or have their own indoor facility. Coincidence? You also see three of the big research universities renowned for brutal workloads near the bottom of the list (CMU, Hop, and Chicago).

Again, I would like to refine this thing, so please leave suggestions. I think we need a more extensive actual ranking system to make these rankings better. It’s about time the top 40 teams in DIII got ranked, though I wouldn’t ask the ranking committee to commit more time to something they don’t get paid for and get criticized for every time they do it.

As far as the recruiting rankings go, I’m continually astounded by what the likes of Amherst, Emory, CMS, and Williams can do on the recruiting trail. That they are able to consistently attract such a high calibre of recruit is incredible. Each of these teams has 3-stars that enter the program and ride the pine for four years without sniffing the starting lineup. That would have been absolutely unheard of four years ago, and it truly speaks to the strength of the sport and division we all love.

As far as the development rankings go, I’m continually impressed by what the West region stalwarts Whitman, Pomona-Pitzer, and Cal Lu have done with the recruits they get. Each team has stayed in the national rankings despite being out-recruited by dozens of other programs. I often wonder what they would do with a host of 4-stars. More recently, Case has been absolutely amazing on the development trail and should top this list next year if things keep going the way they have been.

Thanks for reading!

Editor’s Note: a previous version of this article was posted with completely different rankings using an equation that was incorrect. A helpful reader pointed out the error (which was actually quite egregious), and the corrections have been made. Thank you to that reader! I’m dumb.

11 thoughts on “The Ever-Popular Recruiting/Development Rankings”

  1. Anonymous

    You should also consider a player’s Universal Tennis Ranking. I think you’d find some of these guys are actually stronger than TR says they are. UT doesn’t care how old someone is for their head-to-heads, and once a player is in college, age doesn’t matter. TR does some weird stuff factoring results between high school classes, which can make their rankings off, especially the farther you get from the top 100. Of course, UT doesn’t provide a history, so if you don’t capture the data when they are HS seniors, you won’t be able to gauge improvement.

    1. d3tennisguy

      I agree that UTR will be better when they start time-stamping. Until then, TRN is what we got.

  2. Anonymous

    I’m not sure there is a correlation in your findings among the big research schools. All of the UAA Conference schools are members of the AAU, the association of leading research universities in the U.S. and Canada. Case Western, Wash U, and Emory are in the top half of that development list, and they also have brutal workloads. Hopkins, CalTech, and MIT are the only other D3 schools among the AAU institutions that are not in the UAA Conference (although Hopkins was a member for a while).

  3. Anonn

    I would say the biggest advantage the west has for developing the lower star recruits is the vast amount of private coaches available.

  4. BMAC

    There’s one team that is in a cold climate, no indoor facility of its own, has a brutal academic workload and is STILL near the top of the development rankings.

    CASE
    #HAAAAAANH

  5. anonymous

    Pls publish the formulas you are using. It is hard to understand them with just the descriptions you list above.

    1. d3tennisguy

      New formula: [(Player #1 Ranking * Player #1 Years Experience * 6 + Player #2 Ranking * Player #2 Years Experience * 5 + ….)/(Player #1 Years Experience * 6 + Player #2 Years Experience * 5 + …)]
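
      If it helps to see it as code, here’s the same thing as a small Python function (just a direct transliteration of the formula above, assuming six singles starters with position weights running 6 down to 1):

      def weighted_average_ranking(rankings, years):
          # rankings[i] and years[i] belong to the player at singles position i + 1;
          # the position weights run from 6 for #1 singles down to 1 for #6 singles.
          weights = [6, 5, 4, 3, 2, 1]
          numerator = sum(r * y * w for r, y, w in zip(rankings, years, weights))
          denominator = sum(y * w for y, w in zip(years, weights))
          return numerator / denominator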

      1. Anonymous

        Hi, I was the one who responded before with the denominator correction.

        I have another issue with your formula being used to measure development. I will show you that I think it measures something else.

        Let’s say you have a lineup of all 4-year starters, and their rankings were 100, 200, 300, 400, 500, 600

        The denominator for the formula for this team would be 4*(6+5+4+3+2+1)=84.

        The same program should in theory develop all players the same.

        So let’s say the team’s singles lineup was in the order of how they were ranked on TRN:

        1. #100
        2. #200
        3. #300
        4. #400
        5. #500
        6. #600

        Doing all of the math, you get an average score of 266.67, which places them #17 on the list.

        Let’s say same scenario of all four year starters, same team, but instead of the above order, it was reversed.

        #1. 600
        #2. 500
        #3. 400
        #4. 300
        #5. 200
        #6. 100

        Total average score = 433.33, which places them #29 on the list.

        The players in both scenarios were all developed by the same program, yet they generated two wildly different scores with your formula. Therefore, I think it is a major error to call your calculations a development ranking.

        What your rankings more accurately measure is the identification of potential talent among lower-ranked players. As shown above, when development was held constant, it was identifying the lower-ranked players with potential that provided the difference.

        Identifying potential talent is, in and of itself, an extremely important part of recruiting. However, to characterize this as development is, I believe, a mistake.

        1. d3tennisguy

          Can you ^ e-mail me with this, and we can talk about it? My e-mail is d3tennisguy@gmail.com

          1. Anonymous

            I can try to email you – I am assuming you are maintaining anonymity so I guess I will also create an email address and contact you. It may not be until later this evening or even tomorrow.

            I appreciate what you are trying to do; I was just trying to point out some severe holes in it. And when some schools are being praised and others knocked in this age of recruiting, I think it is a little reckless to put this type of thing out in public without a more foolproof system.

            I think saying certain teams have done a good job at identifying talent is a much better and fairer way to put it. Even then, I think you have to take into account roster size and the people NOT playing. A program looks like an amazing developer because it has a 500 playing a good #2, but why didn’t it similarly develop an older 250 who isn’t starting? Again, it is identifying talent, and the larger the pool you have to choose from, the better a “developer” you appear.

          2. d3tennisguy

            To respond to your complaints: the formula is written as it is purposely to reward a team for turning a 500-600 kid into a DIII star. Cal Lu is high on this list almost entirely because Justin Wilson (ranked 737) played #2 singles. I saw Justin as a freshman, and he was not good. I saw him as a senior, and he was incredible. Now, I’m sure he was better than the average 737 player, but if Wilson goes to Amherst, Emory, wherever else, he never sniffs the starting lineup. Cal Lu gets credit for developing him here, because there are very few schools where he ever would have had a chance.

            As far as your 600, 500, 400 scenario goes, it’s an interesting thought, but such a team does not exist (Case is probably the closest, but they have older, lower-ranked players near the top of the lineup because those are the players that have been developing in a good program for longer).

            It seems like your biggest beef is that a team doesn’t really get punished for failing to develop a better player. I agree that there should be a way to take roster size into account, and I invite suggestions. As far as these rankings go, however, none of the teams in the top 8 have any talent sitting on the bench that they’ve failed to develop. It’s not like Whitman is turning down 4-stars so they can take a chance on a 1-star with a ton of upside. These teams are succeeding because they’ve made the best of what they have. In that sense, I really do believe these rankings have identified the best developers.

            The only team that is rewarded in these rankings for the situation you’re describing is Kenyon (and that’s a stretch), but nobody in the top 8 had an “older 250 that’s not starting.” The teams that have players like that are low in the rankings anyway, so I guess that just doesn’t seem like a valid complaint to me. Regardless, I welcome ideas for how to take roster size into account. It’s tough, though, because if you try to punish a CMS for having someone in the top 200 on the bench, who’s to say he wouldn’t be playing #4 singles for a team like Cal Lu?
