**avg:** 621.92 •
**sd:** 103.83 •
**top 16/20:** 0%

| # | Opponent | Result | Game Rating | Status | Date | Event |
|---|---|---|---|---|---|---|
| 83 | Cal Poly SLO-B | Win 8-7 | 823.97 | | Feb 8th | Stanford Open 2020 |
| 21 | California-Davis** | Loss 1-13 | 1139.16 | Ignored | Feb 8th | Stanford Open 2020 |
| 45 | Puget Sound** | Loss 2-13 | 711.08 | | Feb 8th | Stanford Open 2020 |
| 63 | Pacific Lutheran | Loss 0-10 | 453.66 | | Feb 9th | Stanford Open 2020 |
| 101 | Sonoma State University | Loss 5-6 | 418.15 | | Feb 9th | Stanford Open 2020 |
| 80 | Stanford-B | Loss 5-6 | 665.76 | | Feb 9th | Stanford Open 2020 |

The uncertainty of the mean equals the standard deviation of the set of game ratings divided by the square root of the number of games. We treated a team's ranking as a normally distributed random variable, with the USAU ranking as the mean and the uncertainty of the ranking as the standard deviation.
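As a quick sketch, the uncertainty calculation is just the standard error of the mean over a team's game ratings (the helper name here is illustrative, not from the actual implementation):

```python
import math
import statistics

def rating_uncertainty(game_ratings):
    """Standard error of the mean: sample standard deviation of the
    game ratings divided by the square root of the number of games."""
    return statistics.stdev(game_ratings) / math.sqrt(len(game_ratings))
```

This value is then used as the standard deviation of the normal distribution placed around each team's USAU ranking.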

- Calculate the uncertainty for the USAU ranking average
- Model each team's ranking as a normal distribution centered on the USAU average, with standard deviation equal to the uncertainty
- Simulate seasons by drawing a rating for each team from its distribution, noting the teams in the top 16 (club) or top 20 (college)
- For each region, sum the fractions of how often each of its teams appeared in the top 16 (club) or top 20 (college)
- Subtract one from each region's total for "autobids"
- Award the remaining bids to the regions with the highest remaining fractions, subtracting one from a region's fraction each time it is awarded a bid
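The steps above can be sketched as a short Monte Carlo routine. This is a minimal illustration, not the actual implementation: the team tuples `(name, region, usau_rating, uncertainty)` and the function name are assumptions made for the example.

```python
import random
from collections import defaultdict

def simulate_bids(teams, total_bids, cutoff, n_sims=10000, seed=0):
    """teams: list of (name, region, usau_rating, uncertainty).
    cutoff: 16 for club, 20 for college.
    Returns bids per region: one autobid each, plus wildcard bids
    awarded by each region's summed top-`cutoff` appearance fraction."""
    rng = random.Random(seed)
    counts = defaultdict(int)  # times each team lands in the top `cutoff`
    for _ in range(n_sims):
        # Draw a rating for each team from N(usau_rating, uncertainty)
        drawn = [(rng.gauss(r, u), name) for name, _, r, u in teams]
        drawn.sort(reverse=True)
        for _, name in drawn[:cutoff]:
            counts[name] += 1
    # For each region, sum its teams' top-`cutoff` fractions
    fractions = defaultdict(float)
    for name, region, _, _ in teams:
        fractions[region] += counts[name] / n_sims
    # One autobid per region, subtracted from its fraction
    bids = {region: 1 for region in fractions}
    for region in fractions:
        fractions[region] -= 1
    # Award remaining wildcard bids to the highest remaining fraction
    for _ in range(total_bids - len(bids)):
        best = max(fractions, key=fractions.get)
        bids[best] += 1
        fractions[best] -= 1
    return bids
```

For example, with two regions where one holds the top teams by a wide margin, that region collects the wildcard bids while the other keeps only its autobid.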

There is an article on Ultiworld, written by Scott Dunham and me, that gives a little more context (though it was probably what linked you here).