Which is nothing but absolute BS.
So these rankings are based on ...
wait for it...
assumption!!!!
So if team A which is "perceived" to be VERY good beats team B who is "perceived" to be really good, by 4 points....
and then....
wait for it...
Team C who is perceived to be just "good" obliterates team A...
the rankings will have
#1 Team A
#2 Team B
#3 Team C
Why?
Because of...hypothetical BS!
Team A and Team B play a schedule full of ASSUMED really good teams. So all those stupid teams in that conference, up and down the line, get padded because of hypothetical assumptions.
I take your point, but if the “perception” of an opponent’s quality relies on more than a mere qualitative “assumption,” then in fact there is a way of at least comparatively measuring the strength of various teams.
On the one hand, you have the various output/containment metrics; on the other, a metrics-based formula that measures a team's strength of schedule.
Where the assumptions come in with a metrics-based analysis of opponent quality, in particular, is in the choice of exactly which metrics to use and how ACCURATELY INDICATIVE they prove to be of a team's relative strengths.
In other words, the assumptions lie in what exactly goes into a given STRENGTH OF SCHEDULE qualifier. Without getting into specifics, I'd argue that that is something that can be REASONABLY CALCULATED.
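To show what "reasonably calculated" can look like, here's a minimal sketch of an iterative, margin-based power rating, the kind of building block a strength-of-schedule qualifier rests on. Every team name, score, and the formula itself are hypothetical illustrations, not any real ranking system's method; real systems fold in far more inputs.

```python
# A toy margin-based power rating: each team's rating is its average
# scoring margin plus the average rating of its opponents, iterated.
# Beating a strong schedule is therefore worth more than beating a
# weak one. All games below are made up for illustration.

# (winner, loser, margin) for a handful of hypothetical games
games = [
    ("A", "B", 4),   # A edges B by 4
    ("C", "A", 21),  # C obliterates A
    ("B", "C", 3),
    ("A", "D", 10),
    ("D", "C", 7),
]

def rate(games, iterations=100):
    teams = {t for w, l, _ in games for t in (w, l)}
    # Each team's list of (opponent, signed margin)
    schedule = {t: [] for t in teams}
    for winner, loser, margin in games:
        schedule[winner].append((loser, margin))
        schedule[loser].append((winner, -margin))
    ratings = dict.fromkeys(teams, 0.0)
    for _ in range(iterations):
        # rating = avg margin + avg opponent rating
        ratings = {
            t: sum(m + ratings[opp] for opp, m in schedule[t]) / len(schedule[t])
            for t in teams
        }
        # re-center around zero so the iteration stays stable
        mean = sum(ratings.values()) / len(ratings)
        ratings = {t: r - mean for t, r in ratings.items()}
    return ratings

for team, r in sorted(rate(games).items(), key=lambda kv: -kv[1]):
    print(f"Team {team}: {r:+.2f}")
```

Note that in this toy data, Team C's blowout of Team A drags A's rating down regardless of how "perceived quality" would have ranked them, which is exactly the point of measuring rather than assuming.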
As in any “quantitative” analysis of this type, there will be better and worse performance predictors. Accordingly, cleverer people will select the better ones and vice-versa. But then, this is exactly what this kind of data analysis is all about:
EXTRAPOLATING PATTERNS.
To dismiss it all as merely TOTALLY SUBJECTIVE, I'd argue, is to overstate the case.
I’ve experimented with several of these systems myself and have noted that most of them line up with one another while also CLOSELY TRACKING both the quantitatively based power rankings published in the media and the old EYE-TEST method.
At the same time, these systems have proven equally useful in pointing out which teams are being both underrated and overrated in the polls. For instance, my own analysis pointed out how relatively “weak” teams like UM, Illinois and Navy were when facing better competition. And yet, these teams were being ranked – right up until they FELL ON THEIR FACES – when the NUMBERS indicated that there was no real basis for it other than their WON/LOSS records.
I would also point out what I consider to be the INDICATIVE value of team power rankings vs. the PREDICTIVE value. The former is based on how a team has done to date, while the latter assumes outcomes based on past performance. While I would say that there CAN be correlation, I would never argue that it’s in any way ironclad, given that probability is still a function of RANDOMNESS.
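The INDICATIVE vs. PREDICTIVE distinction can be made concrete with a toy simulation. Everything here is invented for illustration (the team names, the ratings, and the 30% upset rate): even if the ratings were a perfect indicative summary of play to date, randomness caps how often the "better" team actually wins.

```python
import random

random.seed(1)  # deterministic for the example

# Hypothetical INDICATIVE ratings fitted on results to date
ratings = {"A": -1.75, "B": 0.0, "C": 2.75, "D": -1.0}

def favorite(t1, t2):
    # The higher-rated team is the predicted winner
    return t1 if ratings[t1] >= ratings[t2] else t2

def simulate_game(t1, t2, upset_rate=0.3):
    # Randomness caps predictive power: in this toy model the
    # favorite wins only ~70% of the time
    fav = favorite(t1, t2)
    dog = t2 if fav == t1 else t1
    return dog if random.random() < upset_rate else fav

matchups = [("A", "B"), ("C", "D"), ("A", "C"), ("B", "D")] * 100
correct = sum(simulate_game(t1, t2) == favorite(t1, t2) for t1, t2 in matchups)
print(f"favorite won {correct}/{len(matchups)} simulated games")
```

The favorite comes out well ahead of a coin flip but nowhere near perfect, which is the gap between an indicative measure and an ironclad prediction.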
But as a means of measuring relative strength based on performance to date, which in fact could – and in my view should – be used as a basis in selecting playoff teams, I view METRICS-BASED POWER RANKINGS as useful an INDICATIVE tool as we have.
Besides, in which area of society are we not MODELING things in a similar way?
None I can think of.