Friday, December 6, 2013

Ad Rates: How Accurate are Ad Rates Surveys?

As I say every year, the article with 30-second spot prices is one of the most important things written about the TV industry each year. Thousands and thousands of articles are written about Nielsen ratings, but ratings are several steps removed from what really makes and breaks the industry: $$$$$. The ad rate article takes us another step closer to the profitability equation, and doing that can teach us things about how to value shows beyond the adults 18-49 rating starting point.

However, as I also say every year, these articles are estimates, based not on every single advertisement sold but on the particular buyers surveyed. Some crazy things can happen if you treat them as exact gospel on a case-by-case basis. I say this mostly because the articles themselves say it, but I've never really known just how much error to expect.

This year, something new and helpful happened in the world of ad rates media: competition. I actually found three different articles with complete ad rates tables. (Maybe this always happened, but I sure wasn't aware of it.) I don't know the details behind the process that goes into compiling these tables, but it's pretty clear from the tables that it's not a situation where each publication is fed the exact same numbers from the exact same source. There are disagreements on every show. Unlike a Nate Silver, who can test the polls he uses against actual election results, there are no actual results here. So we won't be able to truly answer the question in the headline. But it will still give us a little better idea of what's going on if we compare these estimates with each other - and with A18-49 ratings.

Here are the articles being used. The first one released, and the one dissected by most independent media, came from Adweek. Then came articles from usual source AdAge, and another from Variety. (Variety's full table was in the October 23 print edition, so there's no link for that, but you can find it in a magazine archive.)

How Much Disagreement?

Rather than do anything fancy statistically, I'll start by simply looking at the range of the ad estimates among the three articles - in other words, the percent difference between the highest price and the lowest price for each show.
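As a concrete sketch of that range calculation, here is a minimal Python illustration (the sample prices are the three articles' Brooklyn Nine-Nine figures quoted later in the post):

```python
# Percent range between the highest and lowest estimate for one show,
# computed as (high - low) / high, which matches the ranges quoted below.
def pct_range(prices):
    hi, lo = max(prices), min(prices)
    return (hi - lo) / hi

# Brooklyn Nine-Nine estimates: Variety, AdAge, Adweek
b99 = [147320, 146697, 96225]
print(f"{pct_range(b99):.0%}")  # roughly 35%
```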

The average price per 18-49 point across all 51 shows returning to "approximately the same timeslot" varied by just 2% among the three articles; it was $56,417 in AdAge, $55,776 in Variety and $55,304 in Adweek.

Among the 51 returnees, there was an average 10% difference between the highest price and the lowest price.

Among the 24 new shows listed in all three articles, there was an average 13% difference between the highest price and the lowest price.

So generally speaking, there was a noticeable (though not huge) difference in the size of the disagreement among new shows compared to returnees. This seems reasonable enough, as those shows had no previous ratings data and were basically speculations.

I'll quickly run through the five biggest disagreements for which all three numbers were available:

1. Brooklyn Nine-Nine (35% range: $147,320 Variety, $146,697 AdAge, $96,225 Adweek): One thing that seemed odd in the initial Adweek reports was how the much-reviled Dads could have a noticeable ad rates edge on much-beloved lead-out Brooklyn Nine-Nine. Well, in the other two articles, that isn't the case. AdAge and Variety each have the B99 price more than $50,000 per spot higher than Adweek's, which is a really big difference as these things go. In AdAge, B99 has almost exactly the same rate as Dads, while in Variety it's significantly higher.

2. The X Factor (Thursday) (33% range: $169,255 AdAge, $161,429 Variety, $112,675 Adweek): Among returning shows, The X Factor had the biggest disagreement by a very long shot; again, Adweek went about $50K behind the other two. This was so stark a difference for a returning property that I actually thought it might have been a typo; however, Adweek is also very down on the pricing for the Wednesday edition; its prices were nearly 20% below the other two there.

3. The Millers (31% range: $176,777 Variety, $174,442 AdAge, $122,390 Adweek): Another edition of "Adweek posts something strange that is more reasonable in the other two." Based on applying the average price per 18-49 point from the returnees, I guessed that Adweek's figure implied a measly 2.0 demo average for The Millers, which would've been practically inconceivable for something leading out of The Big Bang Theory.

4. Back in the Game (27% range: $97,000 Variety, $94,213 AdAge, $70,734 Adweek): Same story as the other newbies. The speculated 18-49 rating derived from the Adweek number was 1.2, which seemed pretty unrealistic for something airing after The Middle. (Worse than Family Tools, even!)

5. The Mindy Project (22% range: $150,950 Adweek, $136,026 Variety, $117,987 AdAge): Lest you think Adweek is biased against Fox, note here that it is higher on Mindy than the other two. (Though this is the only case out of these five where there's a significant gap between Variety and AdAge.) This perhaps somewhat tempers the "Mindy is massively overvalued" takeaway from the last ad rates post, but it certainly doesn't kill it entirely. While Adweek's numbers had Mindy as the "pound-for-pound" third most valuable returnee (in other words, the third biggest price per demo point), it's more like sixth or so on the other lists. AdAge and Variety suggest it's not quite as "overvalued" as a New Girl or a The Simpsons, but still pretty overvalued.
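The implied-rating arithmetic used in items 3 and 4 above is just a spot price divided by a survey-wide average price per 18-49 point. A minimal sketch; the exact divisor the post used isn't stated, so Adweek's own returnee average of $55,304 per point is assumed here, which is why the result lands a bit above the 2.0 quoted for The Millers:

```python
# Rough implied A18-49 rating: spot price divided by the average price
# per demo point. The $55,304 divisor is Adweek's returnee average; the
# post may have used a slightly different baseline.
def implied_rating(spot_price, avg_price_per_point):
    return spot_price / avg_price_per_point

millers_adweek = 122390  # Adweek's price for The Millers
print(round(implied_rating(millers_adweek, 55304), 1))  # ~2.2
```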

Though the agreements are not quite as interesting, I will note for the sake of fairness that the three articles were all within 5% on 17 of the 51 returning shows and on 5 of the 24 newbies. One thing that sticks out is that there was a high amount of pre-season agreement on CBS' embattled Monday lineup. Hostages actually had the smallest percent disagreement of any new show, then went on to be one of the biggest misses in terms of what actually happened!

Who Did the "Best" and "Worst"?

As noted above, we can't test these things against real numbers. We can only test them against 1) each other; and 2) adults 18-49 ratings. It's not ideal, but it's still something.

For the "each other" test, two extremely simplistic ideas. First, I compared each article's price to the average of the other two articles' prices for the same show; the table below shows the average of the absolute value of that difference. Then I looked at how often each article's price came in second place. Tracking how often an article lands on the high end or low end is another way of tracking its relationship to the "consensus."

For adults 18-49 ratings, I used the same linear correlation as in years past, across the 51 "approximately same timeslot" shows.
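Both the "vs. the other two" measure and the ratings test can be sketched in a few lines of Python (plain Pearson correlation is assumed for "linear correlation"; the demo row uses The Millers prices from above):

```python
import statistics

def vs_avg_other_two(prices):
    """For each article's price, the percent deviation from the
    average of the other two articles' prices for the same show."""
    out = []
    for i, p in enumerate(prices):
        others = [q for j, q in enumerate(prices) if j != i]
        avg = sum(others) / len(others)
        out.append(abs(p - avg) / avg)
    return out

def pearson(xs, ys):
    """Plain linear (Pearson) correlation, as used for the A18-49 test."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The Millers: AdAge, Adweek, Variety
devs = vs_avg_other_two([174442, 122390, 176777])
print(f"{devs[1]:.0%}")  # Adweek sits ~30% below the other two's average
```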

                                AdAge   Adweek   Variety
Vs. Avg. of Other Two
  Returnees                        5%       8%        6%
  Newbies                          7%      10%        8%
Second Places
  Returnees                        22       15        14
  Newbies                          13        3         8
Linear Correlation w/ A18-49    0.938    0.913     0.942

So while it's not nearly as dramatic as you might expect from some of the extreme outliers above, Adweek has both 1) the most variance from the other two; and 2) the worst linear correlation with adults 18-49 ratings. Perhaps that makes it kind of unfortunate that it was the first one to hit the media, and thus the one most commonly dissected. Variety (just barely) has the best correlation with ratings, while AdAge has the most prices nestled in between the other two.

Now, I want to make this pretty clear: this doesn't definitively mean anything. There are nuances in the differences between ratings/ad rates that aren't fully understood, so a stronger correlation with A18-49 ratings does not necessarily make one article "better." Adweek might have the best numbers (though, with nothing else to go on, I'd say it's less likely). Another thing to note is that there could be some overlap in the Variety/AdAge processes; Brian Steinberg has historically written the AdAge posts, and he moved to Variety in the last year. It's possible they agree more often because they're more accurate, but it's also possible they agree because some of his AdAge-era sources carried over to the Variety survey.

Combining Prices?

Does a combination of these estimates create something that's closer to the real averages? To reiterate once again: we have no way of knowing for sure. All we can see is whether it connects to Live+SD adults 18-49 ratings better.

So I tried a few different approaches here: I averaged all three articles' prices, all three combinations of two articles, and then I threw out each max and min price and took the middle (second place) price.
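Those blending approaches can be sketched as follows (the article names are real, but the single-show demo prices are illustrative; note that with three estimates, the median is exactly the "second place" price):

```python
import statistics

def combos(prices_by_article):
    """prices_by_article: {article: [price per show, in the same show
    order]}. Returns a combined price list for each blending strategy:
    all-three average, each pairwise average, and the middle price."""
    names = list(prices_by_article)
    n_shows = len(next(iter(prices_by_article.values())))
    rows = [[prices_by_article[a][i] for a in names] for i in range(n_shows)]
    blends = {
        "all three": [statistics.mean(r) for r in rows],
        "second places only": [statistics.median(r) for r in rows],
    }
    for a, b in [(0, 1), (0, 2), (1, 2)]:
        blends[f"{names[a]} & {names[b]}"] = [(r[a] + r[b]) / 2 for r in rows]
    return blends

# One hypothetical show priced by all three articles:
print(combos({"AdAge": [100000], "Adweek": [200000], "Variety": [160000]}))
```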

AdAge & Adweek & Variety    0.935
AdAge & Adweek              0.929
AdAge & Variety             0.941
Adweek & Variety            0.931
Second Places Only          0.936

Ultimately, none of these really correlate any better with A18-49 than the two best articles individually. (In fact, Variety by itself still technically has the best correlation, by the narrowest of margins.) There doesn't seem to be a systematic way of adding much value, at least this year. But it's worth continuing to monitor. Hey, maybe we'll get five of these articles next year!


Spot said...

What always fascinates me about these surveys is how people believe the listed prices are set in stone. They're not aware that networks give "make-goods" to advertisers if a show fails to meet expectations (the price it went for at the upfront).

For example, I'm sure many believe a 30-second ad in Hostages still costs $133,000. No, it goes for $50,000-ish if an ad agency buys the ad space on the scatter market now. For those that bought it at $133K during the upfront, CBS gives back $83K, usually (but not necessarily) in the form of additional commercials in the network's other shows.

Spot said...

All overlooking the fact that very rarely does an advertiser buy any single show at these rates.

Spot said...

I find this very interesting indeed. I don't think the deviations are all that significant as you've said, but it's still interesting that there are deviations. However, we compare this in a different manner. I calculate the CPR based on the ratings as of the end of last year (and I ignore new shows and moved shows). This means that shows that have approximately the same subdemo breakdown should have the same CPR and that wasn't always the case with the initial values we got (the Criminal Minds-CSI case was one of the best examples). I don't have the time at the moment, but I would like to investigate if this discrepancy is eliminated using the other numbers.


© 2009-2022. All Rights Reserved.