This was something I explained in more depth in the intro that I haven't yet published, but when looking at ratings, I'll mostly try to look at shows that seem like they should have a fairly consistent "true strength" throughout the season. In other words, shows that aren't heavily influenced by people changing their minds about them. This mostly means veteran standalone programs: procedurals, comedies, and standalone reality shows like Extreme Makeover: Home Edition. And in this case, looking at ratings across the season, we want to 1) look at shows that actually air across the whole season and 2) try whenever possible to look at shows that didn't have other massive external factors in play, like timeslot changes or huge lead-in and/or competition changes.
I boiled it down to the shows in the table below. To track the whole season and give at least a reasonable sample size in each case, I divided the 2010-11 regular season into five parts: September 20 to October 31 (Fall); November 1 to December 12 (Fall/Winter); December 27 to February 13 (Winter); February 14 to March 27 (Winter/Spring); and March 28 to May 22 (Spring).* Then I'm gonna track the Adults 18-49 Persons Using TV along with the average original-episode ratings of all the shows in each of those five parts.
*- How'd I come up with these? The beginning of November is about when the Persons Using TV (PUT) seems to start swinging up. The next six weeks have higher PUT and take us up to mid-December, when the PUT declines for the holidays. December 27 to February 13 is the rest of the "peak PUT" part of the season. February 14 to March 27 sees the PUT decline until the methodology change. Everything from March 28 on is under the new methodology. If not for the annoying methodology change, I'd probably have split the season up a little differently, namely paying more attention to Daylight Saving Time both in November and March. PUT does decline noticeably in the two weeks between the Daylight Saving shift and the methodology change, but two weeks is not enough of a sample size to really be able to track this stuff. Perhaps next year, with a (hopefully!) consistent methodology, I can divide the season up a little "better."
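For anyone who wants to replicate the bucketing, here's a minimal sketch of the five date ranges as a lookup. Note that dates in the gaps (like the holiday stretch of December 13-26) fall outside every bucket:

```python
from datetime import date

# The five 2010-11 season buckets defined above (all dates inclusive).
PERIODS = [
    (date(2010, 9, 20), date(2010, 10, 31), "Fall"),
    (date(2010, 11, 1), date(2010, 12, 12), "Fall/Winter"),
    (date(2010, 12, 27), date(2011, 2, 13), "Winter"),
    (date(2011, 2, 14), date(2011, 3, 27), "Winter/Spring"),
    (date(2011, 3, 28), date(2011, 5, 22), "Spring"),
]

def season_period(airdate):
    """Return the bucket label for an airdate, or None if it falls in a gap."""
    for start, end, label in PERIODS:
        if start <= airdate <= end:
            return label
    return None

print(season_period(date(2010, 12, 20)))  # holiday gap, so no bucket
print(season_period(date(2011, 4, 15)))   # a Spring airdate
```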
| Show | Fall | Fall/Winter | Winter | Winter/Spring | Spring |
|---|---|---|---|---|---|
| 48 Hours Mystery | 1.30 | 1.33 | 1.11 | 1.18 | 1.13 |
| America's Most Wanted | 1.58 | 1.58 | 1.75 | 1.71 | 1.40 |
| Extreme Makeover: Home Edition | 2.20 | 2.33 | 2.33 | 2.40 | 2.16 |
| How I Met Your Mother | 3.50 | 3.58 | 3.87 | 3.40 | 2.74 |
| Law & Order: SVU | 2.80 | 2.20 | 2.68 | 2.43 | 2.56 |
| NCIS: Los Angeles | 3.47 | 3.27 | 3.58 | 3.40 | 3.16 |
| The Good Wife | 2.50 | 2.17 | 2.20 | 2.08 | 2.12 |
Now, let's match those rating averages up with the PUT averages in the same periods.
So here's how it matches up: the three middle periods (F/W, Winter, W/S) actually have a very good linear correlation between viewing and ratings. Ratings and viewing are a little higher in the winter than in the F/W, then both ratings and viewing start coming down in the W/S.
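To make "very good linear correlation" concrete, here's a quick sketch of the check using Pearson correlation. The ratings are How I Met Your Mother's middle-three averages from the table above, but the PUT values are placeholders I made up purely for illustration (the real averages come from the Nielsen data):

```python
# Pearson correlation between per-period ratings and per-period PUT.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Middle three periods only (F/W, Winter, W/S):
himym_ratings = [3.58, 3.87, 3.40]   # from the table above
put_levels = [30.5, 32.0, 29.9]      # PLACEHOLDER 18-49 PUT values

print(pearson(himym_ratings, put_levels))
```

With only three points per show this is obviously directional rather than rigorous, which is part of why the sample-size caveats above matter.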
But we've got two problems/tasks ahead:
1) As I observed in the last post looking at overall viewing trends across the season, that trajectory doesn't really match up with ratings in the Fall. The Fall is a relatively low-viewed part of the season, but ratings are actually at their highest then. After seeing how well the three middle periods match up, I'm even more convinced there's something else going on in the early fall that's separate from "viewing levels." I call it "Early Fall Hype." So how big is the Early Fall Hype factor?
2) As for using this system next year, I wouldn't particularly need to worry about the methodology change*, because there'll just be one kind of viewing level calculation next season. But it would be nice not to have to throw out every single rating after March 27 in devising this formula. And there's clearly a problem as it stands: the methodology change caused viewing levels to suddenly increase in the late spring, when (if the ratings drop-off is any indication) they should not be doing so. So I'll try to come up with a way to extrapolate an "old methodology HUT" for the last couple months. I certainly won't feel as confident about that as I would with the actual numbers, but it should hopefully at least be something to use.
*- As I mentioned when I was explaining the methodology change, I think that for purposes of this exercise, the "old methodology" is actually the better one. The old definition of viewing levels was basically "how much stuff that starts at this time gets viewed Live + SD," which is really what you want when you're trying to figure out how ratings correlate with overall viewing. So I may throw my conversion to "old methodology" into the final formula. I can see why they changed it (the new calculation is a much more elegant definition of "viewing level") but I like the old one better for this nerdy stuff I'm doing.
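For task 2, one way the extrapolation could work is to fit a straight line through the weekly old-methodology PUT during the Winter/Spring decline and extend it past March 27. All the numbers here are placeholders, just to show the mechanics:

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Weeks since Feb 14 -> HYPOTHETICAL weekly 18-49 PUT (old methodology).
weeks = [0, 1, 2, 3, 4, 5]
put = [32.0, 31.7, 31.3, 31.0, 30.6, 30.2]

slope, intercept = fit_line(weeks, put)

# Estimated "old methodology" PUT for a post-change week (week 8, mid-April):
print(slope * 8 + intercept)
```

Whether a straight-line trend is the right shape for the spring decline is exactly the kind of thing the real weekly numbers would have to settle.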
We'll tackle those two things in the next post!