Frequently Asked Questions

General Questions

What is the purpose of this site?

We wanted to provide an advanced, objective ranking system for the entire Minnesota high school tennis community.  We rate every team and every player using match data from the current tennis season.

We are not trying to recreate the UTR or USTA rankings for the top players; those are separate ratings based on tournament match play, with a significantly greater number of matches that are closer to each player's ability level.  Our rankings are based only on match play that occurs in conjunction with the MSHSL tennis season.  However, our team and player rankings are fairly accurate for the matches that are played during the high school tennis season.  Whether our player rankings agree with UTR or USTA rankings largely depends on the quality of match play that occurs during the high school season (see "How accurate are the rankings?") and on how many matches are played (some teams play 3x as many matches as others).

This site is not intended to be used by college coaches for evaluating the top players.  The level of competition varies significantly among the top players, mainly due to suboptimal match scheduling practices.  However, the TrueSkill algorithm does a pretty good job of ranking the top players appropriately during competition at the state tournament.

We want to stress that we are doing this for fun and as a service to the MN high school tennis community.  We are tennis parents ourselves and love a good "Who's number 1?" argument as well.

Feel free to contact us at mnhstennis@icloud.com!

Who sponsors this site?

This website is privately maintained.  As such, we have no affiliation with the Minnesota State High School League (MSHSL), the Minnesota Tennis Coaches Association, or any other professional tennis body or commercial business.  All rankings on the site are created by us.

As you can see, there are no advertisements or user fees on the site, and we intend to keep it that way.  We generate no revenue at all from these rankings and fund all server, data, and programming expenses ourselves.

How accurate are the rankings?

Over the past 3 years, we have evaluated many different algorithms to determine which would be the best for ranking tennis matches.  Some methods use only wins and losses (RPI, Colley, Elo, TrueSkill, dominance matrices) while others use scoring information (Massey, Bayesian inference).  Generally, using scoring data in addition to wins and losses results in greater accuracy.

For our site, accuracy is defined as the percentage of team or individual matches where the predicted outcome based on our rankings matches what actually happened during the season.  Our team and individual ranking accuracies are consistently between 90% and 95%.  The remaining matches, where the predictions and actual match play disagree (violations), are a combination of true upsets and situations where the two rankings are so close to one another that the outcome is essentially a toss-up.  Half of the time, these toss-up matches will result in violations.
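
For the curious, the accuracy check itself is simple.  Here is a minimal sketch in R (the column names are illustrative, not our actual schema):

  # Fraction of matches where the higher-rated side actually won
  accuracy <- function(matches) {
    correct <- matches$winner_rating > matches$loser_rating
    mean(correct) * 100   # expressed as a percentage
  }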

We are happy to discuss our methods further with anyone interested.  We are also interested in adding content if there is a desire for it.  We have a whole lot more statistics behind the scenes, but we are trying not to clutter up the website interface with trivial information.  There is also nothing we do that we consider to be proprietary, although it would take an extraordinary effort to duplicate it.

How can I increase my ranking?

The Team Rankings use scoring information to determine dominance (Massey, Stan).  Every single game won or lost in each match counts.  So win as many games as possible and lose as few games as possible.  And win as many matches as possible.  And strength of schedule counts as well.

For Player Rankings, it basically comes down to how well you play similarly ranked players.  Beating better players will increase your skill level Mu.  Playing often and playing similarly ranked players will cause your Sigma to decrease.  Remember that the Leaderboard ranking is calculated as:

LB (Skill Rating) = Mu - 2*Sigma

So beating good players (increasing your Mu) and playing similarly ranked players often (decreasing your Sigma) will help you achieve your highest ranking.
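
As a quick illustration in R (names are illustrative), the Leaderboard calculation is just:

  # Rank players by Mu - 2*Sigma, highest skill rating first
  leaderboard <- function(players) {
    players$skill <- players$mu - 2 * players$sigma
    players[order(-players$skill), ]
  }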

The biggest problem with ranking the Minnesota high school tennis season is the large number of instances where highly ranked teams play significantly weaker teams, resulting in blowouts.  These matches are no fun for either team, and there is little to no information contained in a match where one team blows out another.  That is why blowouts between mismatched teams are weighted significantly lower in the Massey method.

What is the solution?  Smart scheduling.  Teams should be looking to schedule matches with other teams of similar ability.  This will result in more competitive matches and better overall rankings.  We are currently working on a software implementation of this.

Are the rankings bias-free?

In a word, yes.

All of the rankings are computed using advanced mathematical algorithms that are freely available on the internet.  The methods themselves are unbiased and treat all teams equally with respect to their season play.  If there is any "bias" present, it would affect all teams equally.

What are the computer requirements for calculating these rankings?

The hardware and software used are as follows:

All calculations are performed on an Apple iMac Pro (2017) using 8 cores.

The base software we are using is R (The R Project for Statistical Computing), an open source functional programming language that is well suited for statistical analysis, and RStudio, an integrated development environment (IDE) specifically designed for R.  Most of our code is custom written, although some of it has been written elsewhere in the R community.

The TrueSkill rankings are computed using Julia, a flexible dynamic language appropriate for scientific and numerical computing, with performance comparable to traditional statically typed languages.  Using R, the TrueSkill rankings typically take about 3 hours to process over 13,000 matches for approximately 2,400 players (a typical girls tennis season).  Using Julia, this task is reduced to approximately 30 seconds.

Data

How do you know which teams are playing each season?

The Minnesota State High School League publishes the Competitive Sections list on their website prior to each season.  We use that list to download match data from TennisReporting.com.

Note that some teams on the list may choose not to field a team in any particular season, and hence they will not show up in the rankings because they have no match data on TennisReporting.com.  A few other teams may not show up in the rankings because they have played too few matches or have not won a single match to date.

Where do you get the match data?

All match data is downloaded from TennisReporting.com daily during each tennis season.  After completion of the state tournament, we will wait until all of the results have been entered and will then perform a final download and calculate final rankings for the entire season.

Occasionally, as we improve our algorithms, we may recalculate prior season rankings so that all season rankings are based on the same algorithms.

What scores are you using to create rankings?

The fundamental unit of play in our rankings is the singles or doubles match.  The total games won for each winner and each loser of any particular match are summed to create the overall score.  For a typical match score of 6-4, 6-1, our rankings calculate a combined score of 12-5.  The score differential, defined as the winning player's score minus the losing player's score, is then calculated and used, although somewhat differently, for both the Massey and Stan rankings.  In the Massey method, the score differential itself is used.  For the Stan ratings, the square root of the score differential is used.

Note that it is entirely possible for the losing player to win more games than the winning player, such as with a match score of 7-5, 1-6, 6-4, for an overall score of 14-15.  This results in a negative score differential, which slightly favors the losing player.  However, these cases are not common and usually arise only from a 3-set match, which the computer effectively treats as a tie.
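
Here is a minimal sketch of the score calculation in R, using the 6-4, 6-1 example above (variable names are illustrative, and the sign() guard for the rare negative differential is our own illustration):

  winner_games <- c(6, 6)                        # winner's games per set
  loser_games  <- c(4, 1)                        # loser's games per set
  diff <- sum(winner_games) - sum(loser_games)   # 12 - 5 = 7, used directly by Massey
  stan_diff <- sign(diff) * sqrt(abs(diff))      # square-rooted differential for Stan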

What do you do with scores that have been entered incorrectly, like reversed scores?

We have a separate program that is devoted entirely to score correction.  Here's what it can do:

  • Find and fix reversed scores.
  • Analyze and fix set scores to make sure all game totals result in a valid set.
  • Analyze and fix combinations of sets to make sure all sets result in a valid match.
  • Summarize the types of winning matches (2 set wins, 3 set wins, pro-sets, etc.) to make sure they are within normal parameters.

In some cases, the computer cannot correct an invalid score, and those are set aside for manual correction.  Many times we can look at all of the set scores in a match and quickly determine what was meant to be entered (e.g. a 61-0 score is most likely a 6-1 set).  For those score errors where we cannot determine the original intent, we convert the score to the closest valid score or simply leave it as is if the error is deemed insignificant.
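
As a small illustration of the kind of checks involved (this is not our full correction program, and it ignores pro-sets and other special formats):

  # A standard set is valid if won 6-0 through 6-4, or 7-5 / 7-6
  valid_set <- function(w, l) {
    (w == 6 && l >= 0 && l <= 4) || (w == 7 && l %in% c(5, 6))
  }
  valid_set(6, 1)    # TRUE
  valid_set(61, 0)   # FALSE -> set aside; most likely a mistyped 6-1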

Do you include retirements and defaults in the rankings?

Retirements ARE included in the rankings because they correspond to a match that was actually played, but not finished, and contain valid performance information.

Defaults ARE NOT included in the rankings since they correspond to matches that were never played and contain no information about the strength of either player.

Are exhibition matches included in rankings?

Our software looks for exhibition matches and deletes them.  Exhibition matches are usually designated as flights 5 and greater for singles and flights 4 and greater for doubles.  The software also checks to make sure that these matches are not part of a singles or doubles tournament, which frequently results in higher flight numbers.  All tournament matches are included in the rankings.
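
A sketch of this filter in R (column names are illustrative):

  # Drop exhibition flights unless the match was part of a tournament
  drop_exhibitions <- function(m) {
    exhibition <- (m$type == "singles" & m$flight >= 5) |
                  (m$type == "doubles" & m$flight >= 4)
    m[!(exhibition & !m$tournament), ]
  }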

How do you handle out-of-state matches?

Some schools along the MN border play matches with Wisconsin, Iowa, and South Dakota high schools.  These matches are deleted and not used in the rankings.  This is because the computer algorithms need to calculate rankings for every school present in the overall match data.  If we were to include out-of-state schools, we would have to calculate their rankings as well, which would require us to know their entire out-of-state schedules and rank all of their out-of-state opponents, and so forth.  This is a common situation with most algorithms, and the workaround is to simply delete those matches.  This can result in border schools having fewer matches contributing to their rankings.

Are "unapproved" matches included included in the rankings?

TennisReporting.com has a system in place whereby all matches need to be approved by both coaches before being finalized.  This does create some delay in finalizing matches, although the thought is that this will reduce errors in the match data.  At any one time, usually between 7% and 9% of all matches are "unapproved".

We still use these unapproved matches in the rankings, but isolate them first and then check them to make sure they include the minimum information necessary to use for the team rankings.  However, there is still some manual effort required to add missing team names to a small fraction of these "unapproved" matches.

Do you delete any other matches before creating the rankings?

There are additional matches from TennisReporting.com that need to be deleted for the following reasons (a filtering sketch for the last two rules follows the list):

  • Matches that feature two players from the same school are deleted prior to team rankings.  This occurs rarely in tournaments.   However, we may include these matches for the player rankings at a later date.
  • Teams with fewer than 7 matches (less than one full dual meet) are deleted since they don't have enough matches to calculate a valid ranking.
  • Teams that are winless are also deleted from the season data prior to ranking, since their ranking is pretty much already known and their opponents' rankings may be adversely affected by playing a team at the bottom of the rankings, even if the match was a blow-out.
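
A sketch of the last two filters in R (column names are illustrative):

  # Keep only matches where both teams have 7+ matches and at least one win
  filter_teams <- function(matches, team_stats) {
    keep <- team_stats$matches >= 7 & team_stats$wins > 0
    good <- team_stats$team[keep]
    matches[matches$winner_team %in% good & matches$loser_team %in% good, ]
  }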

Team Rankings

Why are there so many team rankings? Which one is most accurate?

For our team rankings, we combine the results of 2 different ranking methods to help increase our overall accuracy.  For our purposes, accuracy is defined as the percentage of season matches correctly predicted (by win-loss only) by the ranking in question.  Both the Massey and Stan ratings yield comparable accuracy, although you can still see differences in their respective rankings.

The fun part of using 2 different ranking methods is that it allows for controversy between them, such as having 2 number 1 teams.  Everyone loves a good "who's number 1" argument!

The Power 10 score is a separate metric that is based on the sum of the TrueSkill ratings for the top 4 singles players and the top 6 doubles players, similar to the Power 6 that is calculated for the college teams on the UTR website.  This is described further in a separate question.

What is the Massey method?

The Massey Rating model was published in 1997 by Dr. Kenneth Massey, in his honors thesis at Bluefield College.  His technique was subsequently used as one of six computer polls in the BCS (Bowl Championship Series) selection system from 2004 through 2013.

The principle of the Massey Rating model is fairly simple.  It is based on the assumption that the difference in ratings between 2 teams should be proportional to the difference in their scores if a game were played between the two teams.  The derivation of the model is fairly straightforward using linear algebra and comes down to solving an (n x n) system of linear equations (n = number of teams) for n unknowns (the ratings vector).  The method uses score differentials between the winners and the losers.
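
For those curious, here is a minimal textbook version of the basic method in R (illustrative only; our production code is more involved):

  # games: data frame with columns home, away (team indices 1..n) and
  # diff (home score minus away score); returns the ratings vector
  massey <- function(games, n) {
    M <- matrix(0, n, n)
    p <- numeric(n)
    for (k in seq_len(nrow(games))) {
      i <- games$home[k]; j <- games$away[k]; d <- games$diff[k]
      M[i, i] <- M[i, i] + 1; M[j, j] <- M[j, j] + 1
      M[i, j] <- M[i, j] - 1; M[j, i] <- M[j, i] - 1
      p[i] <- p[i] + d;       p[j] <- p[j] - d
    }
    M[n, ] <- 1; p[n] <- 0   # replace one equation so the system has a unique solution
    solve(M, p)
  }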

For those interested, here are a couple of links to descriptions of his method.  Many more descriptions are available on the internet, including many variations that have been developed over the past 2 decades.

What is the "Weighted" Massey Method that you use?

We first implement the basic, well-published Massey method, using score differentials to describe the difference in ability between the two teams.

Using these base ratings, we then assign a weighting to each match for the entire season to date.  The weighting varies from 1.0 (for matches between opponents ranked adjacent to one another - for example, Teams #10 and #11) to almost 0.0 (for a match between the #1 team and the last-place team), varying linearly between these two extremes.  The result is that the larger the difference in ranks between 2 teams, the less the match will count toward each team's rating/ranking.  Conversely, the closer 2 teams are in rank, the more the match will count toward their final rating/ranking.

This makes intuitive sense, but there is also another reason to weight tennis matches: blow-outs contain little information about the difference in true strength between the 2 teams.  In tennis, the score of a blow-out is capped at 6-0, 6-0.  The combined score of 12-0 does not represent the true difference in ability between the two players.  If these 2 players were allowed to play longer, the number of games won by the winner of the match would be significantly greater, whereas the loser would still be expected to win 0 or perhaps a few games at most.  So these matches are downplayed in the ratings/rankings by applying much lower weightings based on rank.  However, if a lower-ranked team beats a higher-ranked team, the result is counted at full weighting (1.0).
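
One plausible implementation of the weighting just described (the exact formula we use may differ):

  # Linear weight in rank gap; upsets always count at full weight
  match_weight <- function(winner_rank, loser_rank, n) {
    if (winner_rank > loser_rank) return(1.0)   # lower-ranked team won: full weight
    gap <- abs(winner_rank - loser_rank)
    max(1 - (gap - 1) / (n - 2), 0)             # 1.0 for adjacent ranks, ~0.0 for #1 vs last
  }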

What is a Probability Distribution and why do I need to know this?

Everyone knows that athletes have good days as well as bad days.  So do tennis players: sometimes you are in the zone, sometimes you are not.  One way to model this varying level of play is with a probability distribution.  The best-known probability distribution is the "normal" distribution.

This distribution describes how a variable is distributed over its range.  The center of the graph corresponds to Mu (μ), the average value or mean of the distribution.  The standard deviation is referred to as Sigma (σ) and describes how far away from the mean the variable is distributed: 68.2% of values fall within 1 standard deviation of the mean (+/-), 95.4% of values fall within 2 standard deviations, and 99.7% of values fall within 3 standard deviations.
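
You can verify these percentages yourself in R:

  pnorm(1) - pnorm(-1)   # 0.683, within 1 standard deviation
  pnorm(2) - pnorm(-2)   # 0.954, within 2 standard deviations
  pnorm(3) - pnorm(-3)   # 0.997, within 3 standard deviations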

Why is this important?  The Massey method, and all other "non-probabilistic" methods, compute a single value for each team's rating that is assumed to have been constant throughout the entire season to date based on their previous play.  This doesn't really model how sports teams perform.

A better way to model a player's or team's performance would be to assume that it is variable, but with a certain average level of play (mu), and a certain level of uncertainty (sigma), kind of like a normal distribution.  The majority of your play is closer to the center of the distribution (your average).  But on some days, your performance is above average, corresponding to the right side of the curve, and sometimes it is below average, corresponding to the left side of the curve.  Better players and teams have higher μ (average strength).  Players who have matches against other players of similar ability have a lower σ since there is a balance of wins and losses to determine a more accurate μ.  Players who have all wins against inferior opponents are difficult to accurately rate since there are no losses to determine the upper limits of their rating.  As a result, even though their rating μ may continue to increase, their σ will continue to remain high as well.

Knowing this will help you understand the TrueSkill algorithm (used for the player rankings).  Probability distributions are also used for Bayesian Inference (the Stan rankings), although this algorithm is more difficult to understand.

What is Bayesian Inference?

Simply stated, Bayesian inference allows us to take our prior beliefs about some process and modify them based on new evidence that becomes available.  The evidence may support our prior beliefs, or it may force us to formulate a new belief based on the new data.

Historical Note:  Bayes' Theorem was actually used by English mathematician Alan Turing to break the Enigma code in WWII, which allowed the Allies to decipher the German U-Boat messages and win the Battle of the Atlantic.

Any detailed description of Bayesian inference (basically "statistical inference" using Bayes' Theorem) on this website would be grossly incomplete and likely confusing.  But we thought we would at least show you Bayes' Theorem and try to make sense of it in relation to our tennis ratings application.  Note that all terms are probability distributions (see previous question on Probability Distributions).
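
P(A|B) = P(B|A) * P(A) / P(B)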

Bayes' Theorem allows us to compute a "conditional probability" P for the hypothesis A, given that the data B is true.  This is written as P(A|B), where the vertical line is read as "given".  For our tennis rankings, our hypotheses are the team ratings (A) given the season match results (B).  That's the left-hand side of the equation.

The next term, P(A), is the probability distribution of our team/player rankings prior to the current season.  In other words, it is last year's end-of-season rankings.  These may be a good representation of the current ratings, or there may have been significant changes in team composition (graduations) since last year such that these prior ratings are no longer valid.  Either way, this establishes a baseline to compare the current season results to.  This term is commonly referred to as the "prior" and represents our best knowledge of the team strengths prior to the new tennis season.  Note that no prior information is used in the Massey method, which assumes that all team strengths are equal at the beginning of the season, which we know is not true.

The next term is a little more complicated.  Let's take the bottom first.  P(B) represents the probability of the current season results.  This sounds a bit confusing, since you would think the probability should be 100% because they just happened, right?  It turns out we don't even need to go that far, because this term is what is referred to as a normalizing constant.  It's there just to make sure that all the probabilities add up to 1, like they should.  Since we are only interested in the rankings of one team relative to another, we can simply ignore this term (set it to 1) and rescale the results later.  So much for the "marginal likelihood".

P(B|A) is referred to as the "likelihood" and is read as "the conditional probability of the match results given the current ratings".  If you think about it, it makes sense: in order for the winners to beat the losers of each match, the ratings of the players need to be such that the winners, on average, have higher ratings than the losers.  This term is computationally intensive and is derived from the current season data.

Understanding Bayesian theory is difficult, but implementing the computer code to perform Bayesian inference is even harder.  For this, we have Stan to automate the process (see next question).

The following links give a good overview of the Bayesian method for beginners...

Who is Stan?

Stan is a software platform for performing Bayesian inference.  It automates the process by taking care of the advanced statistical computations in the background, while letting users focus their attention on building and testing models for various processes using Bayesian inference.

Stan was named for Stanisław Marcin Ulam, a Polish scientist in the fields of mathematics and nuclear physics, who was a pioneer of the Monte Carlo method of computation (used in Stan) and was involved in the Manhattan Project, among other things.

Stan was developed in the early 2010s and currently has thousands of users.  There is a fairly steep learning curve, but the site provides ample documentation and case studies.  The website for Stan is located here.

For those interested, there is a video of Dr. Andrew Gelman, professor of statistics and political science at Columbia University, discussing modeling the English Premier League (EPL) soccer season with Stan.  The techniques and code he presents are very similar to the analysis we are using for our tennis ratings.  The video can be found here and the EPL ratings are discussed in the first 27 minutes.

A recent article online also discusses using Stan to calculate ratings for professional tennis players on the ATP based on match information from the 2019 tennis season.  That article can be found here.

How are the Average Opponent Ratings calculated?

This is a rather easy calculation to perform since the ratings for all players/teams are known.  The AOR is simply the arithmetic average of the ratings for all match opponents that were played in the season to date.
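
For example, in R, for a team whose opponents to date are rated 61.2, 74.5, and 58.3:

  mean(c(61.2, 74.5, 58.3))   # AOR = 64.7 (rounded)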

This is an important number to consider in the rankings, since a win-loss record means nothing unless you know the strengths of the opponents played.  For our team rankings, the two most important numbers listed are the Total Games Win/Loss records and the Average Opponent Ratings.

For Player Rankings, the AOR is the average of all of the player's opponents' Skill Ratings.

What is the Power 10?

Player skill metrics (Mu, Sigma) are calculated using Microsoft's TrueSkill algorithm and are displayed under "Player Rankings".  These ratings are then used to create a Leaderboard rating (Mu - 2*Sigma), which is then mapped onto a scale ranging from 0 to 100 (the "Skill Rating").  The top 4 singles players' ratings and top 6 doubles players' ratings for each team are added together to form the Power 10.  Theoretically, the Power 10 could approach 1000 (10 players, each with a rating of 100), but in practice, the best teams are in the 800s.
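
A sketch of the calculation for one team in R (it assumes each player has already been classified as singles or doubles, which is illustrative):

  power10 <- function(players) {
    s <- sort(players$skill[players$role == "singles"], decreasing = TRUE)
    d <- sort(players$skill[players$role == "doubles"], decreasing = TRUE)
    sum(head(s, 4)) + sum(head(d, 6))   # top 4 singles + top 6 doubles ratings
  }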

This is similar to the Power 6 for college teams that is calculated on the UTR website.  The Power 6 represents the sum of the UTRs for the top 6 players on each college team, since colleges typically play 6 singles matches and 3 doubles matches with the same 6 players.

Why aren't we ranked higher than a team we've already beaten?

The season rankings are based on all matches played throughout the season to date.  While upsets do occur in the rankings, it is the entire season of play that determines the rankings rather than individual matches.

Player Rankings

What algorithm is used for player ratings?

We use a variation of TrueSkill for player rankings.  TrueSkill was developed by Microsoft Research in 2005 for matchmaking on its Xbox Live platform.  The theory behind TrueSkill has been published and is available here.

One shortcoming of the original TrueSkill algorithm was that the ratings were sensitive to the order in which the matches were played.  This was solved with enhancements to the original algorithm, published in 2008 as "TrueSkill Through Time".  This is the implementation we are using on our website.  The published paper is available here.

Several additional enhancements were added in 2018, although many of the finer details are unknown and there are no coded versions available in the R community.  The published paper can be seen here.

How are TrueSkill ratings calculated?

TrueSkill assigns a normal probability distribution to each player, described by its mean (mu - μ) and standard deviation (sigma - σ).  The μ corresponds to the skill level of the player, while the σ corresponds to the uncertainty in the strength of the player.  Both the μ and σ are updated for each player after each match.  Stronger players have higher μ and weaker players have lower μ.  The algorithm to update μ and σ for each player is fairly complex, but results in very reasonable skill levels for the players.

In the chart below, you can see that Natalia has a higher skill level μ, but there is a fairly large uncertainty in this number because of her large σ.  On the other hand, there is a much higher confidence in Eric's skill level μ  since his σ is much smaller.

The current values of  μ and σ for each player are provided in the player rankings table.

For leaderboards (such as our Player Skill Ratings), Microsoft recommends using the following formula to rank players' skill levels:

Player Rating = μ - 3*σ

This corresponds to a value that is 3 standard deviations below the player's mean rating μ.  This results in a very conservative player rating.  However, we have found that this tends to penalize those players who have not played as many matches as other teams/players (because their sigmas are higher), and so we modify the formula as follows:

MNHSTennis.org Player Rating = μ - 2*σ

This is applied to all players equally.  This ensures that there is a greater than 97% probability of the player beating opponents with a lower rating.  The player ratings are then rescaled from their original values to a range of 0-100, similar to the team ratings, and average opponent skill levels are also calculated.
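
As a small sketch in R (the 0-100 rescaling shown here is a simple min-max mapping for illustration; our actual rescaling may differ):

  lb <- players$mu - 2 * players$sigma                                # leaderboard rating
  players$skill_rating <- 100 * (lb - min(lb)) / (max(lb) - min(lb))  # map to 0-100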

Note that the leaderboard ranking may differ from the rankings based on μ alone, but remember that players with high σ have greater uncertainty in their μ values compared with players with lower σ.  Hence, there is less certainty regarding a player's average skill level μ if they have a high corresponding σ.  With continued match play, all sigmas should decrease over time, unless there are inconsistencies in play due to injury, etc.  Sigma may also take a while to decrease if a player continues to play far inferior opponents (for example, a top player with continuing blow-out matches against inferior opponents).

In the example above, Eric's leaderboard rating is 26.07, compared to Natalia's rating of 15.09, despite Natalia having a higher mean skill level μ of 33.00 compared with Eric's mean skill level of 29.82.

An excellent description of the algorithm is found here.

The original publication of the method by Microsoft can be found here.

How do TrueSkill ratings compare to UTR?

UTR describes itself as a modified Elo system, and provides details here.  However, the finer details of the system are proprietary.

Several highlights from the UTR system include:

  • Up to 30 of the most recent matches are included in the rating.
  • Only matches within the last 12 months count.
  • The rating is in part based on the percentage of games won by each player.
  • More weight is given to longer matches.
  • More weight is given to matches between players with smaller differences in UTR.
  • More weight is given to matches where the opponent has a more reliable UTR.
  • Less weight is given to older matches.

There are some similarities between UTR and TrueSkill, although it is generally unknown how close they really are.  For example, reference is made to the reliability of the opponent's UTR, which might be analogous to the sigma parameter of the TrueSkill ratings.  But there are differences as well, including the use of scores by UTR, but not by TrueSkill.

All in all, it's an apples-to-oranges comparison.  Each system works well for the environment it was designed for (UTR for tennis, TrueSkill for Xbox Live).  But it's not surprising that TrueSkill seems to work well for tennis matches as well.

How does TrueSkill compare to Elo?

Elo is well published, and a description can be found here.  Elo ratings are characterized by a single number representing a player's rating.  There is also a K-factor, used for system calibration, that needs to be set prior to ranking.

TrueSkill ratings are based on individual probability distribution curves and are characterized by 2 parameters, μ and σ, corresponding to the player's average skill level and the degree of uncertainty in this skill level, respectively.

Both algorithms use wins and losses to update ratings.  No actual scores are used.

Because of the differences in how the ratings are calculated, the two ratings are not comparable.

One of the benefits of the TrueSkill algorithm is its ability to converge on an accurate rating for a player using far fewer matches compared with Elo.

Why is there only one player rating? Where are the ratings for singles and doubles?

Tennis players generally fall into one of three categories based on their preference to play singles and/or doubles:

  • Some players play mostly or all singles.
  • Some players play mostly or all doubles.
  • Many players play a combination of both singles and doubles matches.

In many cases, there may not be enough singles or doubles matches to accurately compute separate ratings.  Conversely, calculating only a singles rating ignores the performances of the player in their doubles matches and vice versa.

TrueSkill was specifically designed to calculate ratings for multiplayer games.  These games may be organized in teams, where teams compete against other teams, or as free-for-alls, whereby everyone competes against everyone.  Now think about a singles match.  It is actually a 1 versus 1 free-for-all match.  Similarly, a doubles match is a team of 2 players matched against another team of 2 players.  All of these matchups are used in computing a single rating for each player.  There is no need to separate out singles and doubles matches to compute separate ratings.  Consequently, there is only one player rating necessary.  TrueSkill was designed for this very purpose.
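
To make the structure concrete, here is how the two match types might be encoded (this representation is illustrative, not a specific package's API):

  # A singles match: two "teams" of one player each; team 1 won
  singles_match <- list(teams = list("PlayerA", "PlayerB"), winner = 1)
  # A doubles match: two teams of two players each; team 2 won
  doubles_match <- list(teams = list(c("PlayerA", "PlayerC"),
                                     c("PlayerB", "PlayerD")), winner = 2)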

The additional benefit is that all matches are used for the player ratings, which allows for improved convergence to the true skill level of each player.

How many matches does it take for TrueSkill ratings to converge?

Microsoft provides a table that shows the minimum number of matches per player needed to identify the player's skill level:

  • For a 2 player free-for-all, corresponding to a singles match, the minimum necessary matches is 12.
  • While Microsoft does not provide information regarding a 2 Team/2 Players per Team matchup, corresponding to a typical doubles match, the following game modes do provide some guidance:
    • For a 4 player free-for-all, the minimum necessary matches is 5.
    • For a 4 Team/2 Players per Team scenario, the minimum necessary matches is 10.

Microsoft also states that the actual number of games needed may be up to 3 times higher depending on multiple factors, including availability of well-matched opponents and variations in performance per game.

When evaluating the player rankings early on in the season, it is important to keep these facts in mind:

  • Players who play a lot of matches earlier in the season will have more accurate ratings compared with those players who have fewer matches.
  • Players who have faced more poorly matched opponents will require more matches to converge to an accurate skill level.
  • Players who have significant variations in their performance, such as injuries, or playing with multiple doubles partners with varying skill levels, will also require more matches to converge to an accurate skill level.

Why do the player rankings for 2020 look a little off?

The 2020 Girls tennis season was interesting because of the restrictions in place due to Covid-19.

  • Teams were limited to playing in their conference or section, or against local teams.
  • Total season match play was limited to fewer numbers of dual meets and tournaments.
  • The state tournament was cancelled, which would normally result in additional matches between the top teams and players in the state.

All of these restrictions resulted in far fewer matches throughout the season and less competition among the very best teams and players.

The team and player ratings are sensitive to the quality of the matches.  In order to achieve a high rating, a team or player has to play higher-rated teams/players and win.  There is simply no way to significantly increase your rating by playing weaker teams.  That's why teams who play and beat a lot of weaker teams have a difficult time climbing the rankings.

In other words, the ratings/rankings are only as good as the data (match play), and 2020 was a rough year for quality match play.