College gymnastics is one of the few sports where rankings are determined solely using math rather than polls or a win-loss record. The National Qualifying Score (NQS) is ultimately used to determine which 36 teams qualify to the NCAA postseason as well as the top 16 seeds within that tournament. For a refresher on how NQS is calculated, let’s turn to our Resource Hub:
NQS is determined by taking a team's top six scores, at least three of which must come from away meets, dropping the highest of those six and averaging the remaining five. This method lets teams shed bad early-season meets and also accounts for "home scoring" discrepancies.
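For readers who want to see the arithmetic spelled out, here is a minimal Python sketch of that formula. The data layout, the function name and the assumption that the away requirement is satisfied by counting the top three away scores first and then the next three best scores overall are illustrative choices, not an official implementation.

```python
def current_nqs(meets: list[tuple[float, bool]]) -> float | None:
    """meets: list of (team_score, is_away_meet) pairs for the regular season."""
    away = sorted((s for s, is_away in meets if is_away), reverse=True)
    if len(away) < 3 or len(meets) < 6:
        return None  # not enough meets (or away meets) to earn an NQS

    top_away = away[:3]                                  # three best away scores
    pool = sorted((s for s, _ in meets), reverse=True)   # all scores, best first
    for s in top_away:
        pool.remove(s)                                   # avoid double counting
    top_six = sorted(top_away + pool[:3], reverse=True)

    return round(sum(top_six[1:]) / 5, 3)                # drop the highest, average five
```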
This method does not cause much controversy since it completely removes the human element from the ranking system. However, that doesn’t mean it can’t be improved. For this exercise, we asked each of our data editors to come up with an alternative method of calculating NQS and apply it to the last five years of regular season scores to see how it would affect qualification.
What is your proposed calculation and what was your motivation in creating it?
Mariah: I don't think enough scores are factored into NQS. It basically allows for the entire first half of the season to be dropped. I also think only dropping one high score is not enough. A good alternative that's still simple enough for the average fan to understand would be taking the top eight scores (three of which must be away), dropping the top two and averaging the remaining six. I think this method rewards consistency over the whole season more than the current method does.
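Under the same assumptions and data layout as the sketch above, Mariah's proposal could look something like this; the eight-meet minimum check is an inference from her description rather than a stated rule.

```python
def mariah_nqs(meets: list[tuple[float, bool]]) -> float | None:
    """Top eight scores (at least three away), drop the top two, average the remaining six."""
    away = sorted((s for s, is_away in meets if is_away), reverse=True)
    if len(away) < 3 or len(meets) < 8:
        return None  # team hasn't competed enough to earn an NQS under this proposal

    top_away = away[:3]
    pool = sorted((s for s, _ in meets), reverse=True)
    for s in top_away:
        pool.remove(s)
    top_eight = sorted(top_away + pool[:5], reverse=True)

    return round(sum(top_eight[2:]) / 6, 3)  # drop the two highest, average six
```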
Emma: I believe that NQS should factor in the sixth score of a rotation, especially when it is so valuable in regional qualification. If NQS is used to gauge the strength of a team, it should include every single routine competed. A good point of reference is the 2022 NCAA championships selection. When you get down to the very last teams in range for qualification to the postseason, there is barely any variation in NQS. The gap between the 28th-ranked team (the top-ranked play-in team) and the 42nd-ranked team (missing regionals by six placements) is only half a point! There is very little differentiation between teams at that point, and it puts excess pressure on teams to perform well at conference meets to clinch a strong road score rather than focusing on winning the conference title. That's an unnecessary distraction when there is an easier way to differentiate between teams that late in the season. Counting the sixth score of a rotation rewards more consistent teams in the regionals qualification process, and a team's ability to put up a completely consistent lineup would finally carry weight in the rankings.
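In code terms, Emma's change is to how each meet total is built before it ever reaches the NQS formula. This is a rough sketch of the idea, not the editors' actual analysis code; the event labels and nested layout are illustrative assumptions.

```python
# Six-up-six-count: every routine counts toward the event total.
def six_count_meet_total(routine_scores: dict[str, list[float]]) -> float:
    """routine_scores maps each event (e.g. 'VT', 'UB', 'BB', 'FX') to the
    individual routine scores competed on that event at one meet."""
    return round(sum(sum(scores) for scores in routine_scores.values()), 3)

# The current convention, for comparison: the lowest routine on each event is dropped.
def five_count_meet_total(routine_scores: dict[str, list[float]]) -> float:
    return round(sum(sum(sorted(scores, reverse=True)[:5])
                     for scores in routine_scores.values()), 3)
```

Those six-count totals would then feed into the same NQS formula used today.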
Dara: Since NQS is intended to determine which teams qualify to the postseason, it should attempt to offer insight into a team's chances of success in the postseason. For each gymnast on each event, I would calculate the gymnast's NQS on that event as the weighted mean of all scores achieved on that event across the regular season. The choice of weights would certainly be a subjective part of this methodology, but the idea would be to weight later weeks more heavily (i.e., week 11 most heavily and week 1 most lightly) to capture the idea of peaking for the postseason. For each team on each event, I would then calculate the team's NQS on that event from the five highest gymnasts' NQSs on that event. Theoretically, teams put their best foot forward in the postseason, so since we're trying to gauge their chances of success, we look at their top gymnasts.
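A minimal sketch of that idea, assuming linearly increasing week weights (one of many possible schemes, since Dara notes the weighting is subjective) and assuming the five highest individual NQSs are summed to mirror how five scores count toward a team event total:

```python
def gymnast_event_nqs(weekly_scores: list[tuple[int, float]]) -> float:
    """weekly_scores: (week_number, score) for every routine a gymnast
    competed on one event across the regular season."""
    # Weight each score by its week number, so week 11 counts most and week 1 least.
    weighted_sum = sum(week * score for week, score in weekly_scores)
    total_weight = sum(week for week, _ in weekly_scores)
    return weighted_sum / total_weight

def team_event_nqs(gymnast_nqs_values: list[float]) -> float:
    """Combine the five highest individual NQSs on the event (summing is an assumption)."""
    return round(sum(sorted(gymnast_nqs_values, reverse=True)[:5]), 3)
```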
What do your results show?
Mariah: Oklahoma is often considered one of the most consistent teams, and it topped the leaderboard using this method in each of the last five years. The top five teams saw no change in ranking under this system, with the exception of 2021, when many teams experienced a shortened season. Looking at results from the 2022 season, Michigan and BYU had the smallest change in NQS under this new system after fairly consistent seasons, whereas less consistent teams like Illinois, UCLA and LSU saw a greater decrease under this formula, and in some cases a slightly lower overall ranking, too.
In 2018, the biggest winners under this formula were Oregon State and Central Michigan, who rose five and four spots in the rankings, respectively, as well as Arizona and Utah State, who both moved into postseason qualification position over Pittsburgh and Kent State. In 2019, the biggest winners were New Hampshire, who rose three spots, and UIC, who rose one spot to qualify to regionals over Lindenwood. 2020 saw significant movement from New Hampshire (six spots), Arizona (five spots), Penn State (five spots) and Central Michigan (four spots), which would have placed both New Hampshire and Central Michigan into regionals qualifying position, at least as of the season's abrupt end. Because only 50 teams competed at least eight times in 2021, rankings varied widely for most teams, with only two teams seeing no change in ranking. Most significantly, North Carolina and West Virginia would've qualified to regionals after moving up eight and seven spots, respectively. In 2022, the biggest winner was Southern Utah, who rose five spots in the rankings, followed by Rutgers, who rose four. However, Rutgers would still have missed out on regionals by six spots.
Emma: After running the data to include a sixth score in the totals used for NQS calculation, I found that Oklahoma is overwhelmingly the winner of this format. In the five seasons analyzed with this method (2018-22), it topped the rankings in four of them, only missing out to Florida in the 2020 season by one spot. LSU and Florida also excel in this format, staying within the top five in every season analyzed. Overall, the very top teams stayed in place. However, teams outside the top five were more prone to rankings shake-ups, and the format effectively rewarded teams who were able to put up entirely consistent lineups week after week.
Some of the losers of this format were Georgia and Denver, who put up solid rotation scores in their meets but more often than not had a fall in the lineup that the current format would drop. Counting those falls added up and put them behind in this version of the NQS calculation. In 2020, Georgia would have fallen an enormous 46 spots in the rankings under a six-up-six-count NQS, and Denver 49 spots (with little improvement in the following seasons), although that can partly be accounted for by the fact that Denver sometimes only competes five gymnasts on an event.
Dara: Looking at scores from the past five years, one of the teams that would have benefited the most from this system is North Carolina: It would have placed 10 spots higher in 2018 and eight spots higher in 2019. In contrast, Michigan State and Georgia would have suffered under this system: Michigan State would have placed six spots lower in both 2020 and 2022, while Georgia would have placed five spots lower in 2019 and eight spots lower in 2022.
Differences in ranking under this system, as compared to the actual NQS method, that would have mattered for the postseason include the following: In 2019, UC Davis would have ranked 34th instead of 39th, qualifying it to regionals; in 2020, Central Michigan would have ranked 32nd instead of 38th, leaving it in a better position heading toward the canceled regionals; and in 2022, Michigan State would have ranked 18th instead of 12th, causing it to fall out of the seeded spots.
What are your thoughts on your methodology after seeing the results?
Mariah: Like the current method, my calculation's biggest downfall is that it benefits teams that compete more throughout the season because they are able to drop more scores. I would like to see even more road meets, or just meets in general, factored in, but some teams don't compete enough or wouldn't have enough road meets until the conference championship, which could make it harder to predict postseason standings. This method caused tremendous change when calculating NQS for the 2021 season due to the shortened seasons a lot of teams experienced; only 50 teams competed enough to earn an NQS with this method, compared to 65 teams that managed to earn an NQS under the modified system used that season. Michigan only competed eight times and couldn't drop any low scores with this format, which would have dropped it from second to eighth place going into the postseason, where it ultimately went on to win the national championship. In 2022, four teams didn't compete enough to earn an NQS under this system. I think this would be a great calculation method under normal circumstances, but it only works well when teams are able to compete a full season. In addition to pandemic-related cancelations, it would also unfairly affect teams with more limited budgets or those located in places that experience more weather-related travel interruptions throughout the season.
Emma: This format is punishing (to say the least), and there were teams that dropped below the top 36 that I believe should've earned a regionals berth based on their performances that season. Georgia may not always put up six hit routines, but that isn't significant enough to warrant completely knocking it out of the postseason, especially when the hit routines are relatively strong. After analysis, I don't think this would necessarily be the best way to reward consistency in NQS, and any format attempting to do so would need to be less severe. It's also important to note that if an all-routines-count format were in play, teams would change their lineup and routine composition strategy accordingly, so it's likely we wouldn't see such dramatic results in a real-world situation.
Dara: Overall I like the results, as I feel like they reward teams who are peaking at the right time going into the postseason. However, shifting the NQS calculation from using a team’s total meet scores to instead using individual gymnasts’ scores would be a huge change, and it would lead to much more confusion among fans (and likely teams as well). While I would still like to see scores from later in the season weighted more heavily, it may be best to stick to using meet scores when calculating NQS.
READ THIS NEXT: Data Deep Dive: Dropping Scores in the Postseason
Article by Mariah Dawson, Emma Hammerstrom, Dara Tan and Jenna King
Like what you see? Consider donating to support our efforts throughout the year!