Recruiting Rankings/Player Development Rankings

With the recent successes of Whitman, Wisconsin-Whitewater, and Kalamazoo (three teams with unimpressive tennisrecruiting.net profiles) and the new rankings out, I thought it would be interesting to try to compile a ranking system for the teams with the best player development. I wanted something that was relatively easy to calculate but still mostly objective. Here’s what I came up with:

First, compile a “recruiting ranking.” This gives teams credit for their recruiting.

-I made the recruiting rankings by calculating the average stars/starter for each of the 33 teams in the top 30 (ties included). I only included starters because some teams, like Emory and Amherst, have 25 players on their roster, and it doesn’t make sense for a school’s recruiting ranking to go down because the coach keeps players on the roster that he or she didn’t recruit.

-To determine who the “starters” were for each of the teams, I simply went to the last match each of the teams played in which they would have used their top roster. For NESCAC schools, that meant using their last match of last season.

-NCW was excluded from the rankings entirely because it’s impossible to judge a roster full of foreign players who don’t have tennisrecruiting.net rankings.

-Players without tennisrecruiting.net rankings were omitted entirely.

-Obviously, this system is seriously flawed. It doesn’t account for the fact that some 2-stars actually have higher national rankings than some 3-stars; calculating national ranking/starter would be a logistical nightmare. It also inflates a team like Bates’ recruiting ranking, because they have several players without tennisrecruiting.net profiles, and deflates W-W’s, because their #1 singles player is international (and therefore not included in the stars/starter system). It doesn’t account for injuries or fluctuating lineups. It only looks at one match, even though a team’s stars/starter number actually changes from match to match. And it doesn’t weight the scores by how long players have had to develop within the system (which theoretically punishes freshman-heavy teams like Hopkins and Wash U). It’s just easy to calculate and gives a relatively good idea of how good a team’s recruiting is.

-The higher the stars/starter, the higher the recruiting ranking. If two teams have the same stars/starter, the tiebreaker is the higher national ranking. Here are the recruiting rankings:

1. Amherst (3.857 stars/starter)

2. Middlebury (3.833 stars/starter)

3. Williams (3.75 stars/starter)

4. CMS (3.714 stars/starter)

5. Hopkins (3.571 stars/starter)

6. Chicago (3.500 stars/starter)

7. Emory (3.375 stars/starter)

8. Wash U (3.375 stars/starter)

9. Bowdoin (3.167 stars/starter)

10. Trinity TX (3.000 stars/starter)

11. Carnegie Mellon (3.000 stars/starter)

12. Bates (3.000 stars/starter)

13. Kenyon (2.875 stars/starter)

14. Redlands (2.857 stars/starter)

15. Skidmore (2.857 stars/starter)

16. Trinity CT (2.750 stars/starter)

17. Mary Washington (2.750 stars/starter)

18. UC Santa Cruz (2.667 stars/starter)

19. Pomona-Pitzer (2.667 stars/starter)

20. MIT (2.667 stars/starter)

21. Cal Lutheran (2.25 stars/starter)

22. Kalamazoo (2.167 stars/starter)

23. Depauw (2.143 stars/starter)

24. Gustavus Adolphus (2.000 stars/starter)

25. Tufts (2.000 stars/starter)

26. Case Western (1.875 stars/starter)

27. Whitman (1.714 stars/starter)

28. Denison (1.714 stars/starter)

29. Rhodes (1.667 stars/starter)

30. Texas-Tyler (1.500 stars/starter)

31. Wisconsin-Whitewater (1.250 stars/starter)

32. Occidental (1.143 stars/starter)
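
The stars/starter average is simple enough to sketch in code. Below is a minimal Python version; the lineup shown is a hypothetical example (six rated starters and one unrated one), not an actual roster:

```python
# Average tennisrecruiting.net stars across a team's starters.
# None marks a starter with no tennisrecruiting.net ranking,
# who is omitted from the average entirely (per the method above).
def stars_per_starter(starter_stars):
    rated = [s for s in starter_stars if s is not None]
    return round(sum(rated) / len(rated), 3)

# Hypothetical lineup: six rated starters plus one unrated starter.
lineup = [4, 4, 4, 4, 4, 3, None]
print(stars_per_starter(lineup))  # 3.833
```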

Second, the recruiting rankings are used to calculate the player development rankings.

-A team’s actual national ranking is subtracted from its recruiting ranking to give its “Development Number.” Theoretically, a higher development number is better, because the team is performing better than one would expect given its players. If numbers never lied, Wisconsin-Whitewater would never beat Chicago, but that happened, so there’s obviously some good player development going on in Whitewater. This is just a way to quantify it and test common notions like “Cruz has great player development.”

-Obviously, this system is also flawed. For instance, the #30-ranked team couldn’t possibly have a high development number, because they are subtracting 30 from their recruiting ranking. (The max recruiting ranking is 32, so the best possible development number for a #30-ranked team is 2.) It would likewise be impossible for the team with the best recruiting to have a good development number. Again, I admit the system is flawed; it’s just a very basic way to see where teams stand in relation to each other.

-Also, if a team’s player development ranking is 25, that does not mean they have good player development; it means their player development is 25th out of the 32 ranked teams. 25 seems like a good number, but it’s not.

-This ranking system is also vulnerable to wild fluctuations in rankings. For instance, Whitman has an inflated ranking right now, so their player development ranking is also inflated. Depauw’s ranking is probably lower than it should be, and so is their player development ranking. If I had done this for the previous rankings, Depauw would have been in the top 8.

-Here are the teams listed based on development number (tiebreaker is national ranking)*:

1. Whitman (19)

2. Santa Cruz (9)

3. Case Western (9)

4. Kenyon (7)

5. Pomona-Pitzer (7)

6. Wisconsin-Whitewater (7)

7. Emory (5)

8. Gustavus Adolphus (3)

9. Occidental (2)

10. Cal Lutheran (1)

11. Amherst (0)

12. Williams (0)

13. CMS (0)

14. Rhodes (0)

15. Texas-Tyler (0)

16. Trinity TX (-1)

17. Redlands (-1)

18. Kalamazoo (-1)

19. Wash U (-2)

20. Trinity CT (-2)

21. Mary Washington (-2)

22. Tufts (-2)

23. Denison (-3)

24. Middlebury (-5)

25. Bowdoin (-5)

26. Carnegie Mellon (-5)

27. MIT (-5)

28. Depauw (-5)

29. Johns Hopkins (-8)

30. Bates (-14)

31. Skidmore (-15)

32. Chicago (-16)
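
The development number itself is a single subtraction; here is a minimal sketch, with ranks pulled from the two lists above:

```python
# Development number = recruiting ranking minus actual national
# ranking; higher means the team outperforms its recruiting.
def development_number(recruiting_rank, national_rank):
    return recruiting_rank - national_rank

# Whitman: 27th in recruiting, 8th nationally (per the lists above).
print(development_number(27, 8))   # 19
# Chicago: 6th in recruiting, 22nd nationally.
print(development_number(6, 22))   # -16
```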

Do I completely buy into the accuracy of these rankings? Not at all. They do, however, offer a fairly accurate qualitative picture of which teams are better at developing players than others. Chicago, Hopkins, and MIT are probably the worst teams in the country at developing their players, due in part to a heavier academic workload than the other schools (almost all the schools in the top 30 are academically strong, but these three have a reputation for overwhelming their students with work), and all of the teams at the top of the list have a reputation for good player development. I mainly did this as a conversation starter. It’s an interesting concept, and I hope nobody hesitates to chime in with their thoughts.

Depending on the response, I will do this at the end of the year when the rankings will better represent how good each team actually is…

*The development numbers are well approximated by a normal distribution with a mean a little less than 0 (because NCW wasn’t included, the sum of the actual rankings slightly exceeds the sum of the recruiting rankings, which pulls the mean below zero).

 

29 thoughts on “Recruiting Rankings/Player Development Rankings”

  1. Anonymous

    One thing that interested me about the average star ratings in the first list is how much difference being close to 4.0 makes. Amherst, CMS, Williams, and Middlebury are the closest to four stars, and certainly the first two have separated themselves from everyone else in D-III. Williams is certainly elite based on last year, Middlebury has been in the recent past, and we will see this year. The other elite team, Emory, surprised me by not being closer to 4.0, but they may be the exception that proves the rule, with their top two players being so dominant whatever their star ratings, plus some 4-star freshmen. Once you get to around 3.5 and below, the teams are much more vulnerable to upsets from “lesser” teams.

  2. anonymous

    Can we get rankings for how much programs fail to utilize their talent? Comparing a team like Cruz to CMS is a joke in this ranking. I don’t know how CMS ends up as high as they do on the development scale when they have arguably the most talented team in the country yet can never win when it counts the most, as opposed to Cruz, who comes in every year with a seemingly more impossible task of staying in the national title conversation.

    1. d3tennisguy

      This is a ranking for utilization of talent, and Cruz is way ahead of CMS. To be clear, 13 isn’t a great ranking on this scale because it’s only out of 32 teams, but they are being rewarded for having a high national ranking. CMS has some of the best recruiting in the nation, but they back it up by being one of the best teams in the nation. They don’t necessarily need to be punished for the fact that they have several 3-stars on the bench.

  3. Anonymous

    What I meant by elite academic institutions is that admissions standards are so high at some of these schools that you can’t hand-pick recruits, and have to look at other things. In the same vein, some kids are so focused on academics that they don’t play that many tournaments and as a result are ripe to improve in college. Same goes for kids coming from boarding schools. They may drop off the map until college. One way to gauge might be to look at their results once in college (i.e., how a 2-star kid does against 4-stars, etc.). You might see a pattern develop.

  4. ross greenstein

    How about you ask the kids on each team how much they have improved? How much knowledge they think their coaches have? Etc.

  5. Anonymous

    One thing that is not taken into consideration, is starters who don’t have stars. They could very well be recruited. Especially when you factor in the fact that many of these schools are elite academic institutions, it is more than likely that some players were recruited even though they didn’t have stars. Part of recruiting is finding diamonds in the rough and by simply not counting those players, it does a disservice to the program. There is more to recruiting than bringing in 3, 4, and 5 star recruits. Finding players that are under the radar and will improve is a big part of development.

    1. d3tennisguy

      I don’t know how the elite academic thing factors in to finding diamonds in the rough, but you’re completely right, nonetheless. I think Bates has two or three players in their lineup without rankings. I would really like to factor in a coach’s ability to find good players who don’t play USTA tournaments. Once again, I couldn’t think of how. I thought of arbitrarily estimating their ranking to be the average of the players above and below them, but that doesn’t make any sense and yields the same results as not using them.

      Keep the suggestions coming, though. It seems like there is a fair bit of interest in this idea, and I want to compile a less half-assed ranking over the summer when I have a little more time. So far, my plan is to use highest ranking instead of stars. I will also use a team’s top 8 if I can find out who those players are. If anyone can think of a way to factor in players without USTA rankings, or doubles skills, I’m all ears.

      1. Anonymous

        Surely someone without a ranking is a “0” star??

        1. d3tennisguy

          I don’t think so. Players without rankings didn’t play USTA tournaments, so their lack of ranking doesn’t really have any bearing on their tennis playing abilities.

  6. Anonymous

    Though this analysis is very crude, I find it fascinating and quite eye-opening. What would be even more interesting: rather than averaging the number of stars, average the highest ranking from tennisrecruiting.net and see what kind of results you get. Thanks for a fun view into player development!

  7. Anonymous

    Chicago, MIT and Hopkins have the toughest workload? Really? Are you serious? What about Amherst, Williams, Tufts, and Middlebury? You should know that it depends on what they are studying.

    1. Mike

      Beg to differ…NESCACs are not large research universities. They are extremely difficult to gain acceptance to, but once accepted, you don’t have the same workload as schools like Chicago, Hopkins, and MIT.

      1. Anonymous

        The fact that CMU is not mentioned on the hardest workloads is a travesty.

        1. Annonymous

          Let’s please shift this discussion away from incredibly subjective comparisons of workloads between schools. As anybody who is familiar with these schools should be fully aware, most of these schools are excellent academic institutions that expect significant academic commitments from their students. However, to say that one team has more or less time based on the school that they attend is an incredibly shallow assessment. Academic workloads vary significantly between academic majors and on the grades a student is hoping to achieve. If you really wanted to make the argument about academic workload being a major factor between schools, you would have to control for major difficulty/time commitment (varies between majors at a given school and from school to school), grades achieved by players, credits taken during season, etc, etc.

          This argument also fails because it is based on the premise that academics and tennis are the only two major commitments; clearly the culture of the team and school with regard to the party scene, non-tennis extracurriculars, and, say, dating are also major factors.

  8. Anonymous

    How can you not factor total roster size into this discussion? If a team has multiple (and some of these teams have 5-10+) two/three/four star kids on the “bench,” I think that has to be weighed in. A lot easier to “develop” when you have 15-20 of these players to choose from…

    1. d3tennisguy

      I talked about it a little in the post, but my main reason for not including bench players is that teams like Amherst and CMS have 24-person rosters that include players the coaches didn’t recruit at all. Including them in the rankings would not be a fair representation of a school’s recruiting. There are definitely “bench” players that should be considered because they are valuable to their team, and will most likely start in the future. Overall, I would say the “cons” of including bench players outweigh the “pros.” (not sure why I put quotes around pros and cons).

      For example, the Amherst roster includes 1-stars Will Rives and Brenton Arnaboldi. I can almost guarantee that Coach Garner hardly exchanged an e-mail with either, that neither will ever start, and that the team probably practices in flights so these guys never even share the court with Rattenhuber. To include them would unjustly damage their recruiting ranking, and give them credit for developing players that never get developed.

      1. Anonymous

        Instead of using the average of the starters, use an average of a team’s top 8-10 players on the roster. This would give a more accurate gauge of what teams had available in terms of “stars” and the talent pool from which they are “developing.” Your system rewards a team for having two-star players starting despite the fact that they may have multiple three/four-star players on the bench. Are the coaches/programs being rewarded for doing a good job “developing” these 2-stars while doing a poor job with their higher-ranked players on the bench?

        1. d3tennisguy

          That would definitely be best, and I hadn’t thought of that. How would I figure out who the top 8-10 players are, though?

          1. Anonymous

            Look at a team’s roster and take the 8-10 highest ranked players on the roster. More work for sure, but if you are anonymously tossing out information that attempts to objectively rank programs in this area (which I think is impossible) you want to be as careful and accurate as possible. Especially in this day and age of recruiting.

          2. d3tennisguy

            What about a team like Kenyon? Austin Griffin is starting, but he’s not in their 10 highest ranked players. Should I go starters + next highest ranked players? Also, I’m going to weigh it based on years in the program, which is pretty easy to do. I agree that I need to be more careful about stuff like this in the information age, but this was like a rough draft.

            Finally, you’re absolutely right that no system is ever going to be perfect, but it has been helpful in the past to use imperfect measures of performance. Just look at the quarterback rating in the NFL; it rewarded Tebow for not throwing interceptions when everyone can see he’s a terrible passer. Nonetheless, it is accurate more often than not, and it has been used to make decisions that affect entire cities.

          3. Anonymous

            How do you know how players are ranked internally on a team?

          4. d3tennisguy

            I don’t. That’s why I just used starters in the first place

  9. Anonymous

    Here is what I see as the major flaws in this system:

    1. Tennisrecruiting rankings give zero weight to doubles, yet 33% of points in D3 team matches are the result of doubles.

    2. Tournament success in singles is not the same as success in singles in a team atmosphere. One may say singles is singles, but those who are involved know they are two totally different animals. The results of doubles definitely influence the singles matches. Thus, one could very logically argue that a statistic tennisrecruiting does not measure accounts for significantly more than the 33% mentioned in point #1.

  10. Anonymous

    What you are not looking at is probably the most important thing once any kid gets to college: what does the kid do to improve or not improve? Does he or she find another passion while in school, whether that be studies, relationships, or parties? Or did he already peak as a junior before college? Tennisrecruiting.net is a guide to help with recruits, no more, no less. We have all seen the 5-star who does nothing and the 1-star who lights it up in college. Sometimes a coach can develop these players, and sometimes a kid can just lose the fire.
    I think your exercise was just that. Even as you mentioned, seriously flawed. There is no merit at all.

    1. d3tennisguy

      I think this is an excellent point, especially since a 4-star who actually elects to play D3 tennis probably isn’t as excited about improving as many lower-ranked players. At the same time, the culture of a team can affect who loses the fire and who gains it, and team culture is another thing that contributes to a program’s ability to develop its players. I still think the rankings are qualitatively correct.

      1. chairace

        I don’t agree with your comment that 4-star players are not so excited about improving. They just know that they wouldn’t start on a big D1 team, and they want to play as well as get a good education.

  11. Anonymous

    This is pretty clever and fun to take a look at. Thanks for putting this together.

  12. Anonymous

    eesh. there has to be a much better way to do this.

    1. d3tennisguy

      agreed. I thought of several better ways to do this, but they were all incredibly time consuming. We need a D3tennis statistician. I still think this works alright, qualitatively.
