On Inconsistent Bulls and Power Rankings

Kevin Pelton, SUPERSONICS.COM | February 1, 2007

When the Chicago Bulls went to Miami and crushed the defending champs by 42 points in the first game of the 2006-07 season, it was a preview of what was to come for the Bulls. A night later, they would lose in Orlando, marking the beginning of a season of inconsistency for Chicago.

The Bulls have added to that 108-66 win over Miami a 111-66 win against Memphis and a 115-76 drubbing of the Charlotte Bobcats - three of the four most lopsided wins in the NBA this season. Blowouts like that would seem to mark the Bulls as the strong favorite in a weak Eastern Conference, but Chicago also has 11 losses by double-figures - one more than the lowly Boston Celtics. As a result, the Bulls stand third in their own division, a game behind the Detroit Pistons and a half-game back of the Cleveland Cavaliers.

[Photo: Scott Skiles' Bulls have been inconsistent since the start of the season. Credit: Terrence Vaccaro/NBAE/Getty]
All the same, Chicago has one big fan - ESPN Insider's John Hollinger. Hollinger's power rankings, which debuted last month, have the Bulls not only atop the Eastern Conference as of Feb. 1 but ahead of two West teams, the L.A. Lakers and Utah, with superior records.

Hollinger's rankings have quickly become a lightning rod for criticism, with the ranking of the Bulls a favorite target. Criticism has come even from within the APBRmetrics community.

"Hollinger's new power rankings will be great for Chicago," commented Dan Rosenbaum, a consultant for the Cleveland Cavaliers, on the APBRmetrics message board. "In a week when they lose three games in the fourth quarter because they can't score, but win the fourth game by 40 over some hapless opponent, they will be able to feel good because they are moving up in the Hollinger power rankings."

Rosenbaum was being partially facetious, but his contention - that the college hoops-style blowouts registered by the Bulls are given too much credit by Hollinger's method - is a common and perfectly reasonable one. While the power rankings account for strength of schedule and recent performance, they are at heart based on team point differential, with win-loss record - and thus which games a team actually won and lost - completely ignored.

"This is much less a factor than you might think," Hollinger answered his critics in a FAQ on his rankings. "NBA coaches tend to play their best players most of the fourth quarter as long as the margin is under 20, and as a result, even for the best teams only a small portion of their games are so one-sided that the starters can spend the second half yukking it up on the bench.

"Phoenix is a good example -- even with all the one-sided wins, Steve Nash is playing a career-high 35.7 minutes per game."

(Somewhere in Colorado, George Karl nodded his head in agreement.)

Hollinger's appeal to rhetoric, however, seems misplaced. Setting strength of schedule aside for the moment: if point differential is the ultimate indicator of team ability, that should be demonstrable. If it's true, differential should be the best predictor of how a team will do the following season - when, the theory goes, the randomness that plays a key role in determining the outcome of close games should even out - or even in the postseason.

So I crunched the numbers.

Specifically, I looked at the correlation[1] between various rating methods and a team's performance the following season, using the years 2000-01 through 2004-05. (I did not use 2005-06 because complete data for the season that follows it - this one - does not yet exist.)

The correlation of various measures of team performance with record the following season:

[Table: correlation by rating method - Differential, Bell Curve, Capped Differential]

In addition to pure winning percentage and point differential, I also used a team's point differential per 100 possessions, which adjusts for pace; the Bell Curve method, which accounts for consistency of play; capped differential, which applies Hollinger's Expected Wins formula to individual games, capping wins and losses at approximately 15 points; and the Pythagorean method[2], which uses points scored and allowed instead of differential. The table at right reports the results.
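Of these methods, capped differential is probably the least familiar. Here is a minimal sketch, under the assumption that the cap simply clips each game's margin at plus-or-minus 15 points before averaging - the exact form of Hollinger's Expected Wins formula is not public, so this is an approximation of the idea, not his actual math:

```python
def capped_differential(margins, cap=15):
    """Average per-game margin, with each game's margin clipped to [-cap, +cap]."""
    clipped = [max(-cap, min(cap, m)) for m in margins]
    return sum(clipped) / len(clipped)

# Hypothetical game margins: one 42-point blowout counts the same as a 15-point win
margins = [42, -3, -5, 8, -2]
print(sum(margins) / len(margins))   # raw differential: 8.0
print(capped_differential(margins))  # capped differential: 2.6
```

Note how much of the raw differential in this toy example comes from a single blowout; capping is a direct answer to the Rosenbaum-style criticism.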

Lo and behold, the strongest predictor of performance the following season, in terms of correlation, is point differential.[3] What does this tell us? It appears that points scored, even in blowouts, do convey meaningful information about a team's ability.[4] This is not a comprehensive study, so it is possible that some other threshold, like 20 or 25 points, might improve slightly on point differential. However, given the added complexity of such a method compared to point differential - which is readily available in places like ESPN.com's NBA standings - Hollinger seems justified in using point differential.
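The machinery behind a study like this is simple: compute each rating for every team-season, then correlate it with the following season's record. A sketch of the correlation step, with invented numbers standing in for real team-seasons:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: one season's per-100-possession differential for six teams,
# paired with each team's winning percentage the following season
diff_year1 = [7.5, 4.0, 1.2, -0.5, -3.8, -8.0]
winpct_year2 = [0.70, 0.62, 0.55, 0.49, 0.40, 0.30]
print(round(pearson(diff_year1, winpct_year2), 3))
```

Run that for each rating method over the same set of team-seasons and the method with the highest correlation is the best predictor by this measure.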

But that doesn't mean we should ignore situations where point differential might not be enough to tell the entire story. The Bell Curve method, pioneered by former Sonics consultant Dean Oliver, tends to be more effective than point differential at the extremes because it takes into account consistency. The chances of a team winning a given game depend both on its overall ability and its consistency.

Inconsistent teams tend to be drawn toward .500, while consistent teams are more likely to be extremely good or bad. Most of the time, the Bell Curve method makes predictions that are similar to those provided by point differential, but this is not true for extremely consistent or inconsistent teams - like those Chicago Bulls.

The standard deviation[5] of Chicago's point differential this season, 15.1, is the largest in the NBA. In fact, it's one of the largest in recent NBA memory. To complete the study of the different methods, I needed to compute these deviations for each team since 2000-01. The most inconsistent teams[6] of this period are listed at right.

When you add it all up, the Bell Curve method predicts a .618 winning percentage for the Bulls, as compared to .650 from their point differential. (Chicago is actually winning at a .565 clip.) That's still very good - still fifth in the league, in fact - but not quite as good as implied by their point differential.
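The Bell Curve calculation itself is nearly a one-liner if game margins are treated as roughly normal: a team's chance of winning any given game is the probability that its margin lands above zero. A sketch using the Bulls' 15.1 standard deviation from above - the +4.5 mean margin below is a hypothetical value for illustration, not a figure from the article:

```python
import math

def bell_curve_winpct(mean_margin, sd_margin):
    """P(margin > 0), assuming single-game margins ~ Normal(mean, sd)."""
    z = mean_margin / sd_margin
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# sd of 15.1 is the Bulls' figure from the article; the +4.5 mean is hypothetical
print(round(bell_curve_winpct(4.5, 15.1), 3))  # ~0.617: inconsistency pulls toward .500
# The same mean margin with more typical consistency projects better:
print(round(bell_curve_winpct(4.5, 11.0), 3))
```

The second line makes the article's point concrete: two teams with identical point differentials can project to different records purely because one is more consistent than the other.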

Chicago's inconsistency might really be a conference issue. The Bulls are 20-8 against the Eastern Conference, including a dominant 15-1 record against East foes at the United Center. They are just 6-12 against the more difficult West, which may spell trouble on a seven-game West Coast swing that started with a 110-98 loss Monday to the Clippers.

The good news? If the Bulls end up facing a team from the West in the postseason, they'll have already gotten as far as Hollinger's ranking predicts.

Footnote 1: Correlation measures the strength of a relationship between two variables. A perfect one-to-one relationship is a correlation of 1; a perfect inverse relationship is -1. A correlation of 0 means no relationship whatsoever. In this case, the higher the correlation, the better.

Footnote 2: Pythagorean ratings take the general form PF^x / (PF^x + PA^x). I used 16.5 as the exponent in my study, as this is the preferred exponent for Hollinger and Dean Oliver. Reducing it to 13 or 14, other common exponents, did not improve the correlation.
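The footnote's formula, sketched with invented scoring figures (the formula is scale-invariant, so per-game averages and season totals give the same answer):

```python
def pythagorean_winpct(points_for, points_against, exponent=16.5):
    """Expected winning percentage: PF^x / (PF^x + PA^x)."""
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# Invented per-game figures: a team scoring 100 while allowing 97
print(round(pythagorean_winpct(100, 97), 3))
```

With an exponent this large, even a three-point average scoring edge translates into a projection well above .500.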

Footnote 3: Another way to measure the strength of a relationship is root mean squared error (RMSE), which is more conservative and puts a stronger penalty on predictions that are way off. The lowest RMSE is not differential (.124), but capped differential (.115). The reason for this is that the capped differential predicts records that are closer to .500, and teams tend to regress to .500 the following season. The Bell Curve method (.122) comes out slightly better than point differential in terms of RMSE.
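RMSE itself is straightforward to compute. A sketch with invented predicted and actual winning percentages, showing how the squaring punishes one big miss more than several small ones:

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predictions and outcomes."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

# Invented predicted vs. actual winning percentages; the one big miss (.085)
# contributes more to the total squared error than the three small misses combined
predicted = [0.650, 0.550, 0.480, 0.400]
actual    = [0.565, 0.590, 0.450, 0.420]
print(round(rmse(predicted, actual), 3))
```

This is why a method that hedges its predictions toward .500, like capped differential, can win on RMSE even while losing on correlation.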

Footnote 4: Aaron Schatz of FootballOutsiders.com has found similar results in football. Despite the shorter season, accounting for lopsided wins or losses does not improve the predictive power of FO's DVOA ratings.

Footnote 5: Standard deviation measures the spread of a set of numbers. In this case, the higher the standard deviation, the more a team's game-to-game results vary.

Footnote 6: It's worth noting that this is only one way to define inconsistency. Another way would take into consideration the quality of the opposition. In terms of standard deviation, the Minnesota Timberwolves come out as the league's third most consistent team this season. However, when Kevin McHale cited the team's inconsistency in announcing the decision to replace Dwane Casey as head coach, he was referring to who the team beat and lost to, not the margins of victory or defeat.