#113 Claremont (10-2)

avg: 1312.42  •  sd: 77.88  •  top 16/20: 0%

| #   | Opponent                  | Result    | Game Rating | Status  | Date    | Event              |
|-----|---------------------------|-----------|-------------|---------|---------|--------------------|
| 291 | California-Santa Cruz-B** | Win 13-1  | 1123.06     | Ignored | Feb 4th | Stanford Open      |
| 78  | Santa Clara               | Loss 8-11 | 1109.47     |         | Feb 4th | Stanford Open      |
| 320 | Stanford-B**              | Win 12-3  | 913.66      | Ignored | Feb 4th | Stanford Open      |
| 287 | Portland**                | Win 10-2  | 1143.82     | Ignored | Feb 4th | Stanford Open      |
| 149 | Cal Poly-SLO-B            | Loss 5-8  | 701.49      |         | Feb 5th | Stanford Open      |
| 159 | Puget Sound               | Win 8-7   | 1239.99     |         | Feb 5th | Stanford Open      |
| 241 | Humboldt State            | Win 11-5  | 1357.23     |         | Feb 5th | Stanford Open      |
| 230 | Cal State-Long Beach      | Win 15-6  | 1391.9      |         | Apr 1st | Southwest Showdown |
| 221 | California-B              | Win 10-8  | 1117.28     |         | Apr 1st | Southwest Showdown |
| 149 | Cal Poly-SLO-B            | Win 11-9  | 1404.3     |         | Apr 1st | Southwest Showdown |
| 152 | Northern Arizona          | Win 14-7  | 1726.24     |         | Apr 2nd | Southwest Showdown |
| 179 | Loyola Marymount          | Win 12-9  | 1378.09     |         | Apr 2nd | Southwest Showdown |
**Blowout Eligible


The uncertainty of the mean is the standard deviation of the set of game ratings divided by the square root of the number of games. We treated each team's ranking as a normally distributed random variable, with the USAU ranking as the mean and the uncertainty of the ranking as the standard deviation. The full procedure:
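As a concrete illustration, the header values above (sd of 77.88 over 12 games) give an uncertainty of roughly 22.5. This sketch assumes all 12 listed games count toward the divisor; whether "Ignored" games are excluded from that count is an assumption, and the actual USAU average is not a simple unweighted mean, so derived numbers here are illustrative only.

```python
import math

def uncertainty_of_mean(sd: float, n_games: int) -> float:
    """Uncertainty of the mean: sd of the game ratings divided by
    the square root of the number of games."""
    return sd / math.sqrt(n_games)

# Claremont's header values from the table above: sd = 77.88, 12 games
# (assuming ignored games still count toward the game total)
print(round(uncertainty_of_mean(77.88, 12), 2))  # ≈ 22.48
```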
  1. Calculate the uncertainty for each team's USAU rating average
  2. Model each team's ranking as a normal distribution centered on its USAU average, with standard deviation equal to that uncertainty
  3. Simulate seasons by drawing a rating for each team from its distribution, noting which teams land in the top 16 (club) or top 20 (college)
  4. For each region, sum the fractions of simulations in which each of its teams appeared in the top 16 (club) or top 20 (college)
  5. Subtract one from each region's total for its "autobid"
  6. Award the remaining bids to the regions with the highest remaining totals, subtracting one from a region's total each time it is awarded a bid
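The steps above can be sketched as a small Monte Carlo simulation. The team pool, regions, ratings, bid counts, and top-N cutoff below are all toy values invented for illustration (the real pool contains every ranked team, and the cutoff is 16 for club or 20 for college); only the procedure follows the description.

```python
import random
from collections import defaultdict

# Hypothetical pool: (team, region, USAU rating average, uncertainty of mean)
teams = [
    ("Claremont",        "Southwest", 1312.42, 22.48),
    ("Santa Clara",      "Southwest", 1500.0,  30.0),
    ("Puget Sound",      "Northwest", 1450.0,  25.0),
    ("Portland",         "Northwest", 1400.0,  28.0),
    ("Northern Arizona", "Desert",    1300.0,  35.0),
]

TOP_N = 3        # stand-in for top 16 (club) / top 20 (college)
TOTAL_BIDS = 4   # toy total; real totals come from the nationals format
N_SIMS = 10_000

def simulate_bids(teams, top_n, total_bids, n_sims, seed=0):
    rng = random.Random(seed)
    top_counts = defaultdict(int)
    for _ in range(n_sims):
        # Step 3: draw each team's rating from Normal(average, uncertainty)
        drawn = sorted(teams, key=lambda t: rng.gauss(t[2], t[3]), reverse=True)
        for name, _, _, _ in drawn[:top_n]:
            top_counts[name] += 1
    # Step 4: sum each region's fractions of top-N appearances
    region_total = defaultdict(float)
    for name, region, _, _ in teams:
        region_total[region] += top_counts[name] / n_sims
    # Step 5: every region gets one autobid; subtract it from the total
    bids = {region: 1 for region in region_total}
    for region in region_total:
        region_total[region] -= 1.0
    # Step 6: remaining bids go to the highest remaining total, one at a time
    for _ in range(total_bids - len(bids)):
        best = max(region_total, key=region_total.get)
        bids[best] += 1
        region_total[best] -= 1.0
    return bids

print(simulate_bids(teams, TOP_N, TOTAL_BIDS, N_SIMS))
```

With this toy pool, each region keeps its autobid and the single leftover bid goes to whichever region's teams most often filled the remaining top-N slots.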
There is an article on Ultiworld written by Scott Dunham and me that gives a little more context (though it was probably the thing that linked you here).