Friday, January 18 | Updated: February 1, 11:54 AM ET

College football has always had controversy

By Brad Edwards, Special to ESPN.com
Naming a college football champion has never been logical, and that is far from a recent phenomenon. Army cruised to undefeated seasons and national titles in 1944 and '45 before No. 2 Notre Dame finally played the top-ranked Cadets to a scoreless tie late in the '46 season. Both teams closed out the year without another blemish, yet they flip-flopped in the final AP poll, giving the championship to the Irish. Somehow, a team with a 28-game unbeaten streak was suddenly no longer the best in the land. The two-time defending champions were not beaten on the field, but that didn't seem to matter to some poll voters. A boxer could never lose his title after fighting to a draw. What made the rules different in this case? College football's methods of determining a champion have always been as subjective as those used in boxing circles, and lately, those methods have been just as controversial.
The confusion resurfaced in 1997, when coaches' poll voters vaulted Nebraska past top-ranked Michigan after the bowls, splitting the national title between two unbeaten teams. Although it was difficult to differentiate between that year's Michigan and Nebraska teams, it certainly seemed inconsistent of voters to abandon a No. 1 team that had won its bowl game. After all, they had not done the same thing in 1994, when it would have given a deserving Penn State team a share of the title. It was all too confusing, and something clearly needed to change. Either the pollsters needed to publish a list of coaches and helmets that would always receive preferential treatment, or somebody had to find a way to ensure that the two best teams would play each other in a bowl game.

Before the next season, the Bowl Championship Series was created to bring the Big Ten and Pac-10 conferences, as well as the Rose Bowl, into the national championship mix with the rest of Division I-A. And with so many teams to be evaluated and so little common ground on which to judge them, the task of determining the real top two was much too daunting for wishy-washy pollsters. It was finally time for technology to make its grand entrance into college football. A formula devised by the BCS group introduced us to computer ratings and quantified schedule strength. Objective computer programs would be used as a check and balance on subjective voters. It all seemed to make sense. The intention of this formula was to distinguish two teams from a closely matched group -- presumably, when several had the same overall record.

It was put to the test immediately. In 1998, the inaugural season of the BCS, six teams from major conferences finished with one loss behind No. 1 Tennessee. The formula agreed with the polls in determining that Florida State was second best, and there was very little outcry as a result. The next year was easy, with exactly two major undefeated teams, but things started to get a little shaky in 2000. The formula determined that Florida State was the best of a group of once-beaten teams, ignoring the fact that one of that group (Miami) had beaten FSU in the regular season. It also ranked Miami ahead of the team that had dealt the 'Canes their only loss -- the Washington Huskies. Adjustments needed to be made -- and they were -- but none of it could stop the tsunami that was brewing in the 2001 season. An unprecedented series of late-season upsets turned the rankings upside down and produced a national title contender, Nebraska, that had not even won its own division, much less its conference. Clearly, this was the wrong choice, and the result of the bowl games supported that argument.
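To see how those pieces fit together, here is a minimal Python sketch of the formula's 2001-era structure: a poll average, a computer average, the schedule-strength rank divided by 25, a point per loss, and a quality-win deduction (the later addition the column mentions), scaled from 1.5 points for beating the BCS No. 1 down to 0.1 for beating No. 15. The structure follows the published system, but the function and every input value below are illustrative assumptions, not actual BCS data, and the real system layered on extra rules (such as discarding an outlier computer ranking).

```python
# A minimal sketch of the 2001-era BCS formula (lower score = better).
# The component structure follows the published system; the function and
# all input values below are illustrative, not actual BCS data.

def bcs_score(ap_rank, coaches_rank, computer_ranks, sos_rank, losses,
              beaten_top15_ranks):
    poll_avg = (ap_rank + coaches_rank) / 2                    # subjective polls
    computer_avg = sum(computer_ranks) / len(computer_ranks)   # real system also dropped an outlier
    schedule = sos_rank / 25                                   # schedule-strength rank, 1 = toughest
    quality = sum((16 - r) * 0.1 for r in beaten_top15_ranks)  # beating No. 1 deducts 1.5
    return poll_avg + computer_avg + schedule + losses - quality

# Two hypothetical one-loss contenders: Team B trails in the polls but has
# a better computer average, a tougher schedule and stronger quality wins.
team_a = bcs_score(2, 2, [3, 2, 4], 10, 1, beaten_top15_ranks=[8])
team_b = bcs_score(3, 3, [1, 2, 2], 2, 1, beaten_top15_ranks=[4, 12])
print(team_a, team_b)  # 5.6 vs. ~4.15 -- the lower total plays for the title
```

This is exactly the scenario behind the Texas Tech hypothetical later in this column: the objective components can lift a less-hyped team past a more popular one.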
So, where do we go from here?

**Eliminate the computers:** Most people believe the nation's second-best team has been watching the national championship game from home in each of the last two seasons, and both times, the team that got there instead did so on the strength of its average ranking in the BCS computers. Because of this, some thought will be given to reducing the impact of the computers or eliminating them altogether. The best argument against computer involvement is that it causes schedule strength and quality wins to be overemphasized -- some would say counted twice. Each computer, in its own way, already gives consideration to quality of opposition and quality of victories -- much more than the polls do. This kind of duplication is a fair reason to adjust or remove part of the formula. The question is which part. Eliminating the computers would be a popular move with the public, since they are the least understood and most criticized element of the equation, but computers did not keep Oregon out of the Rose Bowl this past season; they only placed Nebraska ahead of Oregon. With no computer input and all other categories of the formula remaining the same, Colorado would have been second and Oregon third in the final standings (a rough sketch of that recalculation follows these proposals).

**Bonus points to conference champs:** Another suggestion that has come up in light of the Nebraska incident is to give teams a bonus for winning their conference title. But this still would not alter the issue of twice-beaten Colorado being ranked ahead of one-loss Oregon. Both teams were conference champions, so the Ducks still would have been left out if such a measure had been in place last year. The biggest issue with this idea is how to distribute the bonus. That is easy for conferences with a title game, but what happens when there is a tie for first place? In 2000, the Pac-10 had a three-way tie between Washington, Oregon and Oregon State -- each of whom had a 1-1 record against the other two. Would those teams split the bonus points, or would they all get full credit? Either way, the rule would be unfair.

**Go back to the polls:** Some people would like to see college football return to the poll system and use the BCS formula only as a "tiebreaker" when the polls disagree on the top two teams. This might be well received from a PR standpoint, but it would be a total departure from the fundamental idea behind the BCS formula. The whole point of making computers, schedule strength, number of losses and (later) quality-win points part of the equation was the premise that voters may not always objectively pick the two best teams -- they might subconsciously choose the two most popular teams instead. For example, if Florida, Texas Tech and Ohio State all finished a season undefeated, the polls would have Florida and Ohio State ranked 1 and 2 (and don't even think about disputing this). The intention was to create a system that would give Texas Tech a fair opportunity to play for the national title if it had actually played a tougher schedule and won more impressively than either of those other teams. Where's the fault in that approach? Some change is definitely required, but reverting to the polls would be nothing more than a knee-jerk reaction to the result of one game. Would this be done if Colorado had won this year's Fiesta Bowl? Of course not. The polls sure seem to have been right the last two years, but that doesn't mean they always are.
Plus, is it right to give the coaches poll so much power after that group just committed what was arguably the most flagrant display of agenda voting in the history of college football polling (jumping Colorado over Nebraska at the end of the 2001 regular season)? You be the judge.

And believe it or not, there are plenty of poll voters who don't want that much responsibility. The truth is that most coaches rarely see a game other than their own each Saturday. They have no basis for even attempting to compare teams. The majority of AP voters are in a similar situation. Because they are usually covering a specific game each weekend, they don't have time to watch many teams on a regular basis. Although they do research the scores and stats of other games before casting their ballots, their exposure to the teams they are evaluating is relatively limited. On top of that, some members of the media are uncomfortable with influencing the story they are covering. Many of these people would embrace a system in which their contribution is on a much smaller scale.

**Form a selection committee:** One final recommendation has been to form a selection committee -- similar to the one for the NCAA basketball tournament -- and allow those people to review the BCS numbers and decide which two teams are the best. But there is a big difference between putting 64 teams into a bracket and choosing two teams to play for all the marbles. Nobody wants or needs that type of pressure, so don't expect it to happen.
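As a concrete illustration of the first proposal above, here is a hedged sketch of the recalculation the column describes: drop the computer average and re-rank the same teams on the remaining components. Every number in it is invented for demonstration -- these are not the actual final 2001 BCS standings -- and the values were chosen only so the re-ranking reproduces the outcome stated above (Nebraska second with the computers; Colorado second and Oregon third without them).

```python
# Illustrative only: re-rank teams after dropping the computer average.
# Component values are invented to reproduce the outcome described in the
# text above; they are NOT the actual 2001 BCS numbers.

TEAMS = {
    # name: (poll_avg, computer_avg, schedule_pts, losses, quality_win_pts)
    "Miami":    (1.0, 1.0, 0.5, 0, 1.0),
    "Nebraska": (4.0, 2.2, 0.2, 1, 0.6),
    "Colorado": (3.0, 3.5, 0.1, 2, 1.5),
    "Oregon":   (2.0, 5.2, 1.2, 1, 0.3),
}

def total(components, use_computers=True):
    poll, comp, sched, losses, quality = components
    score = poll + sched + losses - quality
    if use_computers:
        score += comp
    return score  # lower is better

for label, flag in (("with computers:   ", True), ("without computers:", False)):
    order = sorted(TEAMS, key=lambda name: total(TEAMS[name], flag))
    print(label, order)
# with computers:    ['Miami', 'Nebraska', 'Colorado', 'Oregon']
# without computers: ['Miami', 'Colorado', 'Oregon', 'Nebraska']
```

The point of the exercise is the column's own: removing the computers reshuffles the order, but it still leaves a two-loss Colorado ahead of a one-loss Oregon.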
**What's next for the BCS?** The same challenge awaits the group again this year. The people are calling for change, but unless adjustments can be made to account for last season's series of events, it is pointless to do anything drastic. Change for the sake of change will only cause more problems in the long run.

The biggest oversight last year was not Nebraska's lack of a conference title or Colorado's two losses. The issue was that the Big 12 was apparently overrated. Wins over fellow conference powers probably carried more weight than they should have in the grand scheme of things. Unfortunately, this was not revealed until after the bowls, when it was too late for Oregon.

Now we have a widely criticized system and very few good ideas for changing it. Hardly anyone sees all of the teams play, and there are never enough common opponents to provide a fair comparison. But the reality is that the best team is not always identified by scoring margins, schedule strength and quality wins. The best team is the one that beats all the others between the sidelines.

Don't blame the BCS for controversy. Blame college football. As long as the championship is not decided on the field, there will be injustices and inaccuracies in whatever system is in place. It's been happening for as long as the sport has been around. At some point, people should stop being so surprised.

Brad Edwards is a college football researcher for ESPN. His weekly Inside the BCS column appears during the college football season.