It’s a little hard to define how accurate the models are in this discussion, because the models being looked at here are projecting a team’s percent chance to win.
For example, if a model says that Notre Dame has a 70% chance to win and Notre Dame then loses the game… was the model wrong or right?
There isn’t necessarily an answer to that question, unless the model actually predicted a 100% chance of victory for a specific team (which can never occur, mathematically).
For this type of analysis, the accuracy of a model typically has to be assessed by evaluating its methodology and data sets for any kind of logical error, bias, or oversight. That kind of evaluation is inherently subjective, and therefore, for a decent model, reasonable minds may differ as to its accuracy.
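(Side note, not part of the point above: over a large sample of games there are also quantitative ways to grade win-probability models, such as the Brier score, which rewards a 70% call more when the favorite wins than when it loses but still never declares a single prediction "right" or "wrong." A minimal sketch in Python, using made-up probabilities and outcomes purely for illustration, not actual model output:)

```python
# Brier score sketch: mean squared error between the predicted win
# probability and the actual outcome (1 = win, 0 = loss).
# All numbers below are hypothetical, just to show the mechanics.

predictions = [
    (0.70, 1),  # model said 70%, team won
    (0.70, 0),  # model said 70%, team lost
    (0.55, 1),
    (0.80, 1),
    (0.30, 0),
]

brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")  # 0.0 is perfect; 0.25 is what constant 50/50 guessing scores
```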
Overall, I have found ESPN's FPI model to be surprisingly effective, both in its methodology and in the results that it outputs.
All that being said, there is no guarantee that a betting spread will actually converge to the statistical models. However, when it does, the most reasonable inference from that occurrence is that bettors are acting rationally, driven either by the model itself or by the same underlying factors that cause the model to predict that outcome.
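(If anyone wants to eyeball that convergence themselves, here's a rough sketch of one way to do it; this is my own illustration, not anything from the models discussed above. It converts a moneyline, which maps to a win probability more directly than a point spread does, into a no-vig implied probability and compares it to a model's number. The odds and the model probability are hypothetical.)

```python
def implied_probability(moneyline: int) -> float:
    """Convert American moneyline odds to a raw implied win probability."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

# Hypothetical market: favorite at -210, underdog at +175 (not real odds).
favorite = implied_probability(-210)   # ~0.677
underdog = implied_probability(+175)   # ~0.364

# Raw implied probabilities sum to more than 1.0 because of the book's vig;
# normalizing gives a cleaner estimate of the market's actual opinion.
no_vig_favorite = favorite / (favorite + underdog)

model_probability = 0.70  # hypothetical model number, e.g. a 70% call
print(f"Market (no-vig): {no_vig_favorite:.3f}  vs  model: {model_probability:.3f}")
```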
This is the simple reality that @hayaka and @ivan brunetti either do not understand or are intentionally ignoring.