#161 Claremont (2-8)

avg: 589.87  •  sd: 76.68  •  top 16/20: 0%

| #   | Opponent                      | Result    | Game Rating | Status  | Date     | Event                                |
|-----|-------------------------------|-----------|-------------|---------|----------|--------------------------------------|
| 187 | Cal Poly SLO-B                | Win 8-7   | 546.63      |         | Feb 8th  | Stanford Open 2020                   |
| 27  | California-Davis**            | Loss 1-13 | 1145.47     | Ignored | Feb 8th  | Stanford Open 2020                   |
| 49  | Puget Sound**                 | Loss 2-13 | 847.51      | Ignored | Feb 8th  | Stanford Open 2020                   |
| 163 | Sonoma State                  | Loss 5-6  | 437.53      |         | Feb 9th  | Stanford Open 2020                   |
| 152 | Stanford-B                    | Loss 5-6  | 575.01      |         | Feb 9th  | Stanford Open 2020                   |
| 114 | Pacific Lutheran              | Loss 0-10 | 400.51      |         | Feb 9th  | Stanford Open 2020                   |
| 145 | UCLA-B                        | Win 7-6   | 891.28      |         | Feb 29th | 2nd Annual Claremont Ultimate Classic |
| -   | San Diego State University-B  | Loss 4-7  | 440.65      |         | Feb 29th | 2nd Annual Claremont Ultimate Classic |
| 124 | California-San Diego-B        | Loss 7-8  | 783.98      |         | Feb 29th | 2nd Annual Claremont Ultimate Classic |
| 96  | Occidental                    | Loss 5-9  | 559.44      |         | Feb 29th | 2nd Annual Claremont Ultimate Classic |

** Blowout eligible

FAQ

The uncertainty of the mean is the standard deviation of the set of game ratings divided by the square root of the number of games. We treated each team's ranking as a normally distributed random variable, with the USAU ranking as the mean and the uncertainty of the ranking as the standard deviation.
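As a concrete sketch of that calculation, the snippet below computes the uncertainty of the mean for a set of game ratings. The ratings are made-up numbers, and whether USAU uses the sample (n−1) or population (n) standard deviation is an assumption here; the sample form is shown.

```python
import math
import statistics

# Hypothetical game ratings for one team (made-up values, not the table above).
ratings = [550.0, 440.0, 575.0, 400.0, 890.0, 440.0, 785.0, 560.0]

mean = statistics.mean(ratings)
sd = statistics.stdev(ratings)               # sample standard deviation of the game ratings
uncertainty = sd / math.sqrt(len(ratings))   # uncertainty of the mean: sd / sqrt(n)

print(f"mean={mean:.2f} sd={sd:.2f} uncertainty={uncertainty:.2f}")
```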
  1. Calculate the uncertainty of each team's USAU rating average
  2. Model each team's ranking as a normal distribution centered on its USAU average, with standard deviation equal to that uncertainty
  3. Simulate seasons by drawing a rank for each team from its distribution, noting the teams in the top 16 (club) or top 20 (college)
  4. For each region, sum the fractions of simulations in which each of its teams appeared in the top 16 (club) or top 20 (college)
  5. Subtract one from each region's total for its automatic bid ("autobid")
  6. Award the remaining bids to the regions with the highest remaining fractions, subtracting one from a region's fraction each time it is awarded a bid
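The steps above can be sketched as a small Monte Carlo simulation. Everything in this example is hypothetical: the teams, ratings, uncertainties, regions, top-N cutoff, and bid count are invented to keep the toy data small, and `random.gauss` stands in for whatever sampler the real simulation uses.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Hypothetical teams: name -> (mean rating, uncertainty, region).
teams = {
    "A1": (1900, 50, "North"), "A2": (1700, 60, "North"),
    "B1": (1850, 40, "South"), "B2": (1500, 80, "South"),
    "C1": (1600, 70, "East"),  "C2": (1550, 90, "East"),
}
TOP_N = 3        # stands in for top 16 (club) / top 20 (college)
TOTAL_BIDS = 4   # hypothetical total number of bids to allocate
SIMS = 10_000

# Steps 2-3: simulate seasons, counting how often each team lands in the top N.
appearances = Counter()
for _ in range(SIMS):
    drawn = {t: random.gauss(mu, sigma) for t, (mu, sigma, _) in teams.items()}
    for t in sorted(drawn, key=drawn.get, reverse=True)[:TOP_N]:
        appearances[t] += 1

# Step 4: sum each region's top-N fractions over its teams.
region_frac = defaultdict(float)
for t, n in appearances.items():
    region_frac[teams[t][2]] += n / SIMS

# Step 5: every region gets one autobid; subtract one from its fraction.
bids = {r: 1 for r in region_frac}
for r in bids:
    region_frac[r] -= 1.0

# Step 6: award each remaining bid to the region with the highest leftover
# fraction, subtracting one each time a bid is handed out.
for _ in range(TOTAL_BIDS - len(bids)):
    r = max(region_frac, key=region_frac.get)
    bids[r] += 1
    region_frac[r] -= 1.0

print(bids)
```

Subtracting one per awarded bid (rather than re-sorting raw fractions) is what keeps a single strong region from absorbing every at-large bid.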
There is an article on Ultiworld written by Scott Dunham and me that gives a little more context (though it was probably what linked you here).