Tuesday, July 12, 2011

True Strength: Early Fall Hype & The Methodology Adjustment


Last time, we tried to correlate viewing levels across the season with the adults 18-49 ratings of some "stable" shows across the season. We found that for much of the season, they linearly correlate pretty well, but there are two things keeping it from being a really good system:

1) Ratings are higher in the fall than the viewing levels dictate they should be (because of something I referred to as "Early Fall Hype").

2) Viewing levels are higher in the late spring than they "should" be because Nielsen changed its definition of "viewing level" during the middle of the season.

Now, here's how I'm gonna try to adjust for each of these!

Early Fall Hype

Last time I proved (if you have a really loose definition of "proved") that, despite viewing/ratings correlating fairly well throughout most of the rest of the season, there is something causing broadcast TV ratings to be higher early in the fall than their viewing levels suggest they should be. I'm not here to explain what exactly this is or why it happens, because I don't really have any way of doing that. Maybe it's the heavy promotion, maybe it's some sort of collective hunger for new programming, I dunno. I just want to try to figure out how big it is.

Let's bring back in the viewing vs. ratings table from last time (leaving out spring, which we'll get to next time):


              Fall    F/W     Winter  W/S
18-49 PUT     34.27   34.97   35.56   33.48
18-49 Rating  2.67    2.57    2.65    2.46
Ratio         12.83   13.62   13.42   13.59

As I said last time, there is a pretty strong linear correlation in the last three of these. The ratio between PUT and that particular selection of shows is pretty close. But in the other case, the ratio is significantly lower; in other words, the shows get higher ratings in the first six weeks of the season than it seems like they should based on overall viewing levels.

What "should" these shows be rating? To get that, we'll get the average ratio for the other three sections (13.55) and divide the fall PUT by that. That gives us an expected demo rating of 2.53. With an actual average of 2.67 and an expected average of 2.53, that means the fall ratings "should be" about 5.2% lower than they actually are.

So, I don't really have a formula yet, but the first aspect will be that everything that airs in the first six weeks of the season will get a -5.2% multiplier to its "true strength." Factoring in this Early Fall Hype Factor, I should be able to eliminate the inflations of the early fall.
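The Early Fall Hype math above can be sketched in a few lines of Python. This is just my illustration of the post's arithmetic; the variable and function names are mine, not part of the actual True Strength formula, and the inputs are the rounded figures from the table (so results can differ from the post's unrounded ones by a hair).

```python
# Early Fall Hype adjustment, sketched from the post's numbers.
FALL_PUT = 34.27      # 18-49 PUT for the first six weeks of the season
FALL_RATING = 2.67    # actual average 18-49 rating of the "stable" shows

# Average PUT/rating ratio across the other three periods (F/W, Winter, W/S)
AVG_RATIO = 13.55

expected_fall_rating = FALL_PUT / AVG_RATIO            # about 2.53
hype_factor = expected_fall_rating / FALL_RATING - 1   # about -5.2%

def adjust_for_fall_hype(true_strength, airs_in_first_six_weeks):
    """Apply the Early Fall Hype multiplier where it applies."""
    if airs_in_first_six_weeks:
        return true_strength * (1 + hype_factor)
    return true_strength
```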

More on Those Damn Methodologies

This *might* be the last time I ever have to talk about Nielsen's two different ways of calculating viewing levels. (But probably not.) It even bores me, and that is really saying something because I have found most of this stuff pretty interesting.

So I started off basically approaching this one the same way I approached the last one, if a bit in reverse: I took the ratios and applied them to the late spring ratings to get an expected PUT for the last couple months of the season.


              F/W     Winter  W/S     Spring
18-49 PUT     34.97   35.56   33.48   35.07
18-49 Rating  2.57    2.65    2.46    2.33
Ratio         13.62   13.42   13.59   15.03

So using that average ratio from last time (13.55) and multiplying it by the 2.33 average of the spring ratings, we get an expected PUT of 31.61. That's 9.8% lower than the "actual" spring PUT. So that's it, right? Just take off 9.8% of all the PUT levels in the spring and we're set, right?
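That flat, across-the-board version of the adjustment could be sketched like this (a naive sketch only; the names are mine, and since it uses the rounded table figures the numbers land slightly off from the post's unrounded 31.61 and 9.8%):

```python
# Naive spring adjustment: one flat percentage off every spring PUT.
AVG_RATIO = 13.55      # average PUT/rating ratio from the earlier periods
SPRING_RATING = 2.33   # average spring 18-49 rating
SPRING_PUT = 35.07     # "actual" spring PUT under the New Methodology

# Expected spring PUT if the old ratio still held
expected_spring_put = AVG_RATIO * SPRING_RATING   # roughly 31.6

# Flat deflation to apply to every spring PUT level (roughly -10%)
flat_adjustment = expected_spring_put / SPRING_PUT - 1
```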

Nope, not that easy.

I'm trying to simplify this as much as possible most of the time, so I could just stick with that. But as we looked at when I was first describing the methodology change, one of the most important things about it is that it treats timeslots very differently. Viewing is much more "juiced" at the end of the evening than at the beginning, because DVR viewing is now attributed to the actual time of evening it happens, not based on the timeslot of the show being DVRed.

So to get a really good breakdown of the evening, I want to apply that 9.8% decline to the evening, but do so as if the evening still broke down the same way it did in the old methodology. To do that, I found the overall difference between the "expected PUT" of the spring and the average PUT of the other three periods (-9.0%) and then applied that to the other three periods' average PUT hour-by-hour. Here's what we got:


          8:00    9:00    10:00   AVG
F/W/S     34.02   35.75   34.32   34.70
          -9.0%   -9.0%   -9.0%   -9.0%
Exp PUT   30.97   32.55   31.24   31.61
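Rebuilding that table is just one uniform deflation applied to each hour's F/W/S average, which could look like this in Python (my own illustration of the post's step; the rounded inputs reproduce the table to within a couple hundredths):

```python
# Spread the overall spring decline across the evening, preserving the
# Old Methodology's hour-by-hour shape.
fws_avg_put = {"8:00": 34.02, "9:00": 35.75, "10:00": 34.32}  # F/W/S averages

OVERALL_DECLINE = -0.090   # expected spring PUT vs. the F/W/S average

expected_put = {hour: put * (1 + OVERALL_DECLINE)
                for hour, put in fws_avg_put.items()}
# roughly {"8:00": 30.97, "9:00": 32.53, "10:00": 31.23}
```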

So based on how the old methodology breaks down across the day, that's how 31.61 PUT should look. How's it compare to the "real" PUT in the post-methodology period?


           8:00    9:00    10:00   AVG
Post-Meth  32.66   36.05   36.51   35.07
Exp PUT    30.97   32.55   31.24   31.61
Diff       -5.2%   -9.7%   -14.4%  -9.8%

As appeared the case when we first started looking at the differences between methodologies, the 10:00 hour PUT is much more "juiced" under the new system than the 9:00 hour, which is much more "juiced" than the 8:00 hour.

So, just to smooth it out, we'll say that to convert from "New Methodology" to "Old Methodology,"* we'll give all "New Methodology" PUTs a -5.2% multiplier in the 8:00 hour, a -9.8% multiplier in the 9:00 hour and a -14.4% multiplier in the 10:00 hour. In other words, another 4.6 percentage points of decline in each hour.
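The conversion rule above amounts to a simple per-hour lookup. A minimal sketch, using the post's smoothed multipliers (function and variable names are mine, not from the actual formula):

```python
# New Methodology -> Old Methodology PUT conversion, smoothed so each hour
# steps down another 4.6 percentage points.
NEW_TO_OLD = {"8:00": -0.052, "9:00": -0.098, "10:00": -0.144}

def old_methodology_put(new_put, hour):
    """Deflate a New Methodology PUT back toward the Old Methodology scale."""
    return new_put * (1 + NEW_TO_OLD[hour])
```

For example, the spring's 10:00 PUT of 36.51 would deflate to about 31.25, close to the 31.24 "expected PUT" in the table above.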

Now, this conversion may not be correct. For all I know, there may be sort of a reverse of the "Early Fall Hype" phenomenon that also affects the ratings, and the viewing doesn't really drop as much as the ratings suggest. But we don't really have a better way of figuring it, so I'll go with this for now. As the regular season develops, we'll have a lot more viewing info using the new methodology and we can get a better sense of how much the viewing is juiced year-to-year using practice, not just the theory above. So this is one aspect I'm open to tweaking.

Update (8/5/11): In doing some early testing of the "final" True Strength formula, it's becoming clear that a lot of the True Strengths in the last few weeks of the season (the "New Methodology" time) are extremely inflated. I think the biggest reason is that those averages are being brought way up by the big drops from the fairly abnormal Saturday Fox shows. I could just take those select shows out, but it felt kind of slimy to just take stuff out till I get what I want, so I decided to approach it from a different "theoretical" way. I decided to just compare the half-hour PUT levels from the last two weeks of Old Methodology (the two that take place after Daylight Saving Time) with everything under the New Methodology. Here's what I came up with by hour.


           8:00    9:00    10:00   AVG
Post-Meth  31.85   36.64   36.70   35.06
Pre-Meth   30.85   34.20   33.66   32.91
Diff       -3.1%   -6.6%   -8.3%   -6.1%

Update (9/13/11): I've refined this from an hour-by-hour breakdown to a half-hour-by-half-hour one. Here's the new table:


           8:00    8:30    9:00    9:30    10:00   10:30   AVG
Post-Meth  30.14   33.55   35.86   37.42   37.31   36.10   35.06
Pre-Meth   29.76   31.95   33.72   34.69   34.46   32.86   32.91
Diff       -1.3%   -4.8%   -6.0%   -7.3%   -7.7%   -9.0%   -6.1%
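Each row of that Diff line is just pre-Meth divided by post-Meth, minus one. A quick sketch that reproduces the table (slot labels and variable names are mine):

```python
# Half-hour New->Old adjustment multipliers, computed from the two PUT rows.
slots = ["8:00", "8:30", "9:00", "9:30", "10:00", "10:30"]
post_meth = [30.14, 33.55, 35.86, 37.42, 37.31, 36.10]  # New Methodology
pre_meth = [29.76, 31.95, 33.72, 34.69, 34.46, 32.86]   # Old Methodology

diffs = {slot: pre / post - 1
         for slot, pre, post in zip(slots, pre_meth, post_meth)}
# e.g. diffs["8:00"] is about -1.3%, diffs["10:30"] about -9.0%
```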

As expected, this "theoretical" calculation indicates I should be adjusting for the New Methodology much less than I was previously. The downside is that it's kind of a small sample size for the Old Methodology - just two weeks. The upside is that it's all after Daylight Saving Time, so it seems like it's closer to something that I could probably use across the whole year (since the post-Meth and pre-Meth numbers are relatively apples-to-apples).

Maybe the actual adjustment should be a bit larger; this assumes that everything post-Methodology has the same "true viewing" as those first two weeks after DST, but in reality it probably keeps declining a bit. So that may be something I take a look at as more New Methodology viewing levels come in early next season. However, most True Strengths don't seem to make drastic moves after the methodology change now, which is about all I can ask for.

This change doesn't require me to go back and redo very much, since there are only minor adjustments at 8:00, so most of the Competition stuff isn't greatly changed. I have gone back and changed the "constant PUT" from 33.75 to 34.12 in the formula due to the viewing levels getting raised a bit after 3/27/11, plus I've changed "normal competition" from 0.24*PUT to 0.23*PUT on weekends and from 0.31*PUT to 0.30*PUT on weekdays (though I think that adjustment mainly came out of my decision to count sports less, driving broadcast PUT levels way down). If you don't remember exactly what I'm talking about, just look for those numbers in the next edition of the "Formula So Far."

The only part of the old Methodology diatribe I'm saving is the below, which explains why I'll make Old Methodology conversions even next season when everything's New:

*- As I said in a previous post, the only real reason to convert New Methodology PUT to Old Methodology is so we can use the ratings in the spring 2011 on a level playing field with the rest of the 2010-11 season. It shouldn't really be an issue in 2011-12, when everything should be New Methodology. But I think I'm going to keep this conversion in the final True Strength formula and try to convert all PUT calculations in 2011-12 to "Old Methodology." This is because I think the old definition of "viewing levels" was something that would correlate closer with Live + SD ratings, since it's about the tendency of a show in a given timeslot to get viewed.

Though most of this stuff seems to work relatively cleanly, next time we'll take a look at another way of tracking viewing vs. ratings: the big "events." We've looked at how those events affect viewing already, but does that match up with ratings?


© SpottedRatings.com 2009-2018. All Rights Reserved.