Pre-Season Rankings

Preseason rankings are out.

They almost always look exactly like the end-of-season rankings from the previous year, but look for Amherst. No, scroll down. Keep scrolling. Keep scrolling. There they are, right where they belong at #19. That’s what you get when you don’t roll out a full lineup for matches against two really good teams in the fall. The ranking committee basically decided to punish Amherst rather than reward CMU, which was the safe and lazy choice. I think it’s safe to say they’ll be moving up.

8 thoughts on “Pre-Season Rankings”

  1. Anonymous

    There is a difference between polls and rankings. Within rankings, there are more than a few methods, purposes, and structures they can serve, so to say that rankings are in fact speculation is too broad a statement and in this case false. There is speculation involved in the translation and interpretation of results, but there are guidelines as well (the rankings criteria). The purpose of the ITA Division III rankings is to reflect, as accurately as possible, the results up to the rankings date, based upon the rankings criteria. The rankings are not meant to say, for example, that player no. 17 is better than player no. 16. Instead, they say that, based upon available results, player no. 16 merits a position one spot higher than player no. 17. There are of course subtleties and differences in any rankings, but to say that they are all speculative is to disregard their purpose and intent. These are not intended as, say, a ranking of the top 10 hip-hop songs of the 2000s.

  2. Love D3 Tennis

    Anonymous,

    Rankings, whether for a sport or for anything outside of sports, are always done in cases where it is not completely certain whether one person, thing, etc. is better (or worse) in ability, value, importance, etc. than another. Therefore, rankings in sports are always done for the purpose of speculating, and showing the conclusion of that speculation, about which teams or competitors are better (or worse) than others, since no proven answer is available.

  3. Anonymous

    Rankings are not meant to speculate. They are not a poll. They hopefully reflect results, and while the interpretations of those results can be debated, there is no point to rankings based on what we “feel” they should be. The point that should not be lost here is that these are actual people who, during this portion of the season, have excelled. While they may or may not remain ranked as the season progresses, there is no reason at all why their great results cannot be recognized and rewarded. It does not minimize the rankings, or those not included. It is merely a snapshot of the fall season.

  4. D3 Coach

    I don’t see any issue with how the fall rankings are done. It rewards players who play well, and those who don’t still have the entire spring to make it up. Plus, if you want to base rankings off of last year, how do you rank players who spent their Fall abroad? Should Brantner Jones or Joey Fritz be ranked high in the region because they were at the end of last year? Also, I don’t see how you can justify ranking someone low in the fall because they played #5/6 the year before. If they play well in the ITA, they deserve a high ranking. If that same player is at #5/6 when the season starts, they will gradually be moved down or out of the rankings. The fall rankings determine very little, and the system should stay as it is.

  5. Love D3 Tennis

    Obviously my reference above to #1 and #2 ranked singles and doubles “teams” did not refer to school teams, but only to individual singles and doubles competitors.

  6. Love D3 Tennis

    Computing team rankings based upon pre-season matches, months before the regular season begins, is bad enough, and it unreasonably throws team rankings out of whack. But it is much worse that this is done for individual and doubles rankings purposes, because, unlike team rankings, the individual and doubles rankings do not seem to take prior year performance into account at all and probably rely only on the results from the ITA regional and national fall tournaments (though it is possible that pre-season team match performance is also taken into account).

    The main reason the latter practice is so ludicrous is that, in many, many cases, the individuals and doubles teams now ranked are not likely to play #1 or #2 for their teams. I think if someone looked at the final individual and doubles rankings from last season, they would be totally dominated by #1 ranked singles players and doubles teams, with a scattering of #2s and no #3 or lower included.

    So I think any singles or doubles player who is now ranked has very little chance, if any, of being ranked at the end of the year if he does not consistently play #1 or #2 singles or doubles for his team. While it is difficult to determine, in a few cases, who is likely to play #1 in singles or doubles for a school, I think in most cases this should be easy to determine, and it may not correlate with performance in the fall tournament.

    While I understand it is easiest for the committee not to have to blend a full season of prior year performance during team matches with the results from a single pre-season tournament, I think with a little effort that could fairly easily be done, and it would lead to results that more closely reflect what the rankings truly should be. One method would be to perform some sort of average between last season’s final year-end rankings in singles and doubles for returning players, reflecting a full season’s worth of matches, and the results of the fall tournament; a rough sketch of that kind of blend is below.
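
    Just to illustrate, here is a minimal sketch of that kind of blend in Python. The player names, the 50/50 weights, and the fallback position for unranked or absent results are all made-up assumptions for the example; the actual ITA criteria may weigh things very differently.

    ```python
    # Hypothetical blend of prior-year final ranking and fall ITA finish.
    # Lower numbers are better; the weights and the "unranked" fallback are
    # illustrative assumptions, not the committee's actual method.
    def blended_score(prior_rank, fall_rank, w_prior=0.5, w_fall=0.5, unranked=60):
        prior = prior_rank if prior_rank is not None else unranked
        fall = fall_rank if fall_rank is not None else unranked
        return w_prior * prior + w_fall * fall

    # (prior-year final rank, fall tournament finish); None = unranked / did not play
    players = {
        "Returning No. 3": (3, 12),    # strong spring, middling fall
        "Fall champion": (None, 1),    # newcomer who won the fall event
        "Abroad in fall": (20, None),  # ranked last year, sat out the fall
    }

    ranked = sorted(players, key=lambda name: blended_score(*players[name]))
    for position, name in enumerate(ranked, start=1):
        print(position, name, round(blended_score(*players[name]), 1))
    ```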

  7. Sam

    Little bit confused how Amherst is #19 nationally but #1 in the Northeast, ahead of teams like Williams, who are #3 nationally.

    1. d3tennisguy

      Here’s how I understand it: the regional ranking committees are given different ranking criteria, which are basically only supposed to take into account matches against in-region teams. Despite finishing behind Williams in the national rankings last year, Amherst was still ahead of Williams in the regional rankings because of their two head-to-head victories.
      As far as the national rankings go right now, they are obviously completely out of whack. The national ranking committee, unlike the regional committee, had to take into account Amherst’s two losses this Fall, which meant they would either reward Hopkins and CMU (which would mean moving Carnegie into the top 10 for a win over a short-handed Amherst team) or punish Amherst. Punishing Amherst was easier, and made more sense, so they did that. It’s akin to how CMS spent a couple weeks at #18 in the country last year after losing to Swat and Cruz. Everyone knew they were better than that, but their results hadn’t shown it yet. Being at #19 is a little annoying for Amherst, especially if their recruits (who, let’s remember, are sometimes lazy high school seniors who won’t take the time to look at the story behind the ranking) get a little disenchanted, but you won’t be seeing me crying over Amherst’s recruiting plight. They’ll get back into the top 10 as soon as they start playing matches in March.
