BCS debate rages throughout college football
Oct. 24, 2001 9:00 p.m.
By Joshua Mason
Daily Bruin Staff
In the realm of BCS politics, the participants have simply agreed
to disagree.
The debate has been brewing since the inception of the Bowl
Championship Series standings in 1998, when they were seen as a
solution to the inconsistencies of the traditional Associated Press
and Coaches’ polls.
Rather than rely entirely on the biases of writers and coaches,
the BCS was designed to factor in the average of those polls,
strength of schedule, losses and a computer component. The idea was
a rational scheme for choosing two teams to play in an
unprecedented national championship game.
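In rough terms, those components combine into a single number for each team, with lower totals being better. The sketch below is illustrative only: the equal weighting of polls and computers and the scaling of the schedule rank follow published descriptions of the era's formula, but the exact constants and the function name are assumptions, not the official BCS arithmetic.

```python
# A rough, illustrative sketch of how a BCS-style composite score might
# combine its components. Lower totals are better. The divisor of 25 on
# the schedule rank and the simple averaging are assumptions based on
# published descriptions of the formula, not the official calculation.

def bcs_style_score(ap_rank, coaches_rank, computer_ranks, schedule_rank, losses):
    poll_component = (ap_rank + coaches_rank) / 2                     # average of the two polls
    computer_component = sum(computer_ranks) / len(computer_ranks)    # average of the computer indexes
    schedule_component = schedule_rank / 25                           # strength-of-schedule rank, scaled down
    return poll_component + computer_component + schedule_component + losses

# Example: a one-loss team ranked 3rd and 4th in the polls, averaging 5th
# in the computer indexes, with the 10th-toughest schedule.
print(bcs_style_score(3, 4, [4, 5, 6], 10, 1))   # 3.5 + 5.0 + 0.4 + 1 = 9.9
```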
What was meant to be a less controversial method for choosing
the NCAA champion has recently put the football world in an uproar.
The eight BCS computer operators, who collectively devise the
computer component standings, have taken the bulk of the criticism.
The complaints range from formula inconsistencies to home-team
favoritism to charges that the operators are number crunchers out
of touch with football altogether.
“The beauty of our standings is we don’t have to
watch the games,” said David Rothman, a retired statistician
who has been producing standings since 1963, and is now an operator
living in Hawthorne, Calif. “My formula is a general one
which can be used for other sports as well. They produce generic
results that don’t go into the details of
football.”
While Rothman wears his lack of football knowledge as a badge of
honor, others, like panel newcomer Peter Wolfe, an associate
clinical professor at UCLA, consider themselves football fans.
“I love the game and I watch it all the time,” said
Wolfe, an avid Bruin fan who attends a UCLA game each year.
“But the computers don’t have to watch games to make
ratings. There’s no secret penalty for teams whose mascots
are Trojans.”
Of course, that’s where the debate begins. Or does it?
Opponents of the BCS have wavered over whether the real case
against the statisticians lies in the operators' geographical
biases or whether computerized rankings are simply an ineffective
means of measuring teams' success.
Earlier this season, Jeff Anderson and Chris Hester, who devise
the Seattle Times standings, received flak from the public for
ranking the hometown Washington Huskies No. 1 in their indexes. It
didn’t help that both were Washington alumni. Washington went
on to lose 35-13 to UCLA soon after their No. 1 ranking was
unveiled.
At the same time, operator Wes Colley had South Carolina ranked
No. 1 and Richard Billingsley had Oklahoma ranked No. 1 in their
respective indexes, an awkward look given that Colley's poll is
partnered with the Atlanta Journal-Constitution (deep in SEC
territory) and Billingsley currently lives in Oklahoma.
“The way to get around all that is to publish your
methods,” said Rothman, referring exclusively to himself,
Wolfe and Colley. “You can prove you have no bias if
everything is out in the open.”
Some, like Jeff Sagarin, one of the original three BCS
operators, refuse to divulge their methods in order to protect
ownership of rankings they have produced for years.
“What it comes down to is the integrity of the people
doing the ratings,” Wolfe said. “It’s
inconceivable to me that any funny business is going on. A lot of
the talk is just the result of selective media reporting. If he
ranks his team high, that’s news, but if he doesn’t, it
isn’t.”
Geographic disparities or not, the controversy over last
season's national championship, in which Florida State was
selected No. 2 ahead of a Miami team that had beaten it, convinced
BCS officials that the formula needed immediate change.
As a result, margin of victory was scaled back as a factor in the
standings, and a quality-win component for victories over opponents
ranked in the top 15 was added. Concurrently, the New York Times and
Dunkel systems used in the old BCS scheme were dropped in favor of
the indexes calculated by Wolfe and Colley.
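The quality-win change works as a deduction: beating a highly ranked opponent shaves points off a team's total, and since lower totals are better, the deduction helps. The sketch below assumes the commonly reported 0.1-point-per-slot scale, topping out at 1.5 points for a win over the No. 1 team; treat those numbers as illustrative rather than the official values.

```python
# Illustrative sketch of a quality-win deduction like the one added in 2001.
# Wins over teams ranked in the top 15 subtract points from a team's total.
# The 0.1-per-slot scale (1.5 for beating No. 1, 0.1 for beating No. 15) is
# an assumption based on published accounts, not the official table.

def quality_win_deduction(beaten_opponent_ranks):
    deduction = 0.0
    for rank in beaten_opponent_ranks:
        if rank <= 15:
            deduction += (16 - rank) * 0.1   # No. 1 worth 1.5, No. 15 worth 0.1
    return deduction

# A team that has beaten the No. 2 and No. 14 teams (plus an unranked No. 40)
# sheds 1.4 + 0.2 points from its total.
print(quality_win_deduction([2, 14, 40]))   # approximately 1.6
```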
“We had always noticed that Dunkel’s result was
different than the others,” Rothman said. “I
recommended that the board should look into it, and eventually they
caught onto the discrepancies that were going on.”
While the new formula was meant to address the problem, a new
controversy involving Miami has surfaced in light of the first
official BCS standings of the season, which were posted on
Monday.
While Miami is ranked No. 1 in both the AP and Coaches’
polls, the Hurricanes’ fourth-place BCS ranking leaves open the
possibility that they could go undefeated and still be denied a
national championship invitation for the second consecutive year.
That could very well happen if UCLA and either Nebraska or Oklahoma
also go undefeated this season.
UCLA head coach Bob Toledo experienced firsthand the
frustrations of the formula when Florida State leapfrogged his
team that year.
“I believe in a playoff system just like they have in
every other NCAA sport,” he said.
The operators caution against jumping to conclusions just
yet.
“As far as the what-ifs of the season, there are just too
many games left to play to really make an informed
prediction,” Wolfe said in defense of the current BCS
standings. “It’s premature to judge the standings now,
because the only ones that count are those released in the
end.”
Until those final standings are released on Dec. 9, one thing
seems to be certain: the coaches, writers and BCS brass will
continue to debate over whose system is more credible and whose
authority should have more of a voice in determining
football’s best teams.
