**avg:** 679.71 •
**sd:** 80.23 •
**top 16/20:** 0%

# | Opponent | Result | Game Rating | Status | Date | Event |
---|---|---|---|---|---|---|
95 | Southern California | Loss 0-10 | 329.15 | | Feb 4th | Stanford Open |
47 | Santa Clara | Loss 1-11 | 768.16 | Ignored | Feb 4th | Stanford Open |
155 | Stanford-B | Win 8-2 | 957.72 | | Feb 4th | Stanford Open |
34 | Portland | Loss 3-5 | 1044.5 | | Feb 4th | Stanford Open |
102 | Claremont | Loss 5-8 | 437.56 | | Feb 5th | Stanford Open |
59 | California-Santa Cruz | Loss 1-12 | 673.6 | | Feb 5th | Stanford Open |
172 | Chico State | Win 4-1 | 790.14 | | Feb 5th | Stanford Open |

The uncertainty of the mean is the standard deviation of the set of game ratings divided by the square root of the number of games. We treated a team's ranking as a normally distributed random variable, with the USAU ranking as the mean and the uncertainty of the ranking as the standard deviation.
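The standard-error calculation described above can be sketched in Python (the function name and example ratings are mine, not from the actual ranking code):

```python
import math

def ranking_uncertainty(game_ratings):
    """Uncertainty of the mean: sample standard deviation of the
    game ratings divided by the square root of the number of games."""
    n = len(game_ratings)
    mean = sum(game_ratings) / n
    # sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((r - mean) ** 2 for r in game_ratings) / (n - 1))
    return sd / math.sqrt(n)
```

With seven games, for example, the uncertainty is the standard deviation shrunk by a factor of √7 ≈ 2.65.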

- Calculate the uncertainty for each team's USAU ranking average
- Model each ranking as a normal distribution centered on the USAU average, with standard deviation equal to the uncertainty
- Simulate seasons by drawing a rank for each team from its distribution, noting the teams in the top 16 (club) or top 20 (college)
- Sum, for each region, the fractions of simulations in which each of its teams appeared in the top 16 (club) or top 20 (college)
- Subtract one from each region's fraction for its autobid
- Award remaining bids to the regions with the highest remaining fraction, subtracting one from the fraction each time a bid is awarded
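The steps above can be sketched as a small Monte Carlo simulation. This is a hedged illustration, not the authors' actual code: the function name, the `teams` input format (name, region, rating, uncertainty), and the simulation count are all my assumptions.

```python
import random
from collections import defaultdict

def simulate_bids(teams, total_bids=20, n_sims=10000, seed=0):
    """teams: list of (name, region, usau_rating, uncertainty) tuples.
    Returns {region: bids}: one autobid per region, plus strength bids
    awarded from summed top-N appearance fractions."""
    rng = random.Random(seed)
    counts = defaultdict(int)  # top-N appearances per team
    for _ in range(n_sims):
        # draw a rating for each team from its normal distribution
        drawn = [(rng.gauss(rating, unc), name)
                 for name, _, rating, unc in teams]
        drawn.sort(reverse=True)
        for _, name in drawn[:total_bids]:
            counts[name] += 1
    # sum appearance fractions by region
    region_of = {name: region for name, region, _, _ in teams}
    fractions = defaultdict(float)
    for name, c in counts.items():
        fractions[region_of[name]] += c / n_sims
    # every region gets an autobid; subtract one from its fraction
    bids = {region: 1 for region in set(region_of.values())}
    for region in bids:
        fractions[region] -= 1
    # award the remaining bids to the highest remaining fraction,
    # decrementing the fraction each time a bid is given out
    for _ in range(total_bids - len(bids)):
        best = max(fractions, key=fractions.get)
        bids[best] += 1
        fractions[best] -= 1
    return bids
```

Set `total_bids=16` for club or `total_bids=20` for college, per the rules above.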

There is an article on Ultiworld, written by Scott Dunham and me, that gives a little more context (though it was probably the thing that linked you here).