Friday, September 6, 2013

Intro to Nielsen Ratings: History, Death of TV, Timeslots, Networks, Averages


This week, I'm unveiling a project I've been picking away at for months: an expanded version of my old Intro to Nielsen Ratings post. The three-part series began with the most basic factual info on how Nielsen ratings work and definitions of common terms. Last time, we took a deep look into the rationale behind the use of Live+Same Day and adults 18-49 ratings. Today, we wrap it up with a grab bag of explanations behind some of this site's other tendencies.

As with the old post, please let me know if there's anything I should add, clarify or correct. I just want to get everything right.



Overly Reliant on One Paradigm?

One other note on the Live+Same Day and adults 18-49 stuff discussed last time. Sure, you've convinced me they're good measurements. But why stick almost exclusively to Live+SD and adults 18-49 when there are so many numbers out there? Surely you're missing some things!

I do want to be transparent here; not all the reasons for the numbers used here are "pure." Strict adherence to L+SD A18-49 serves the analysis well, but it also has the advantages of easy access and convenience. I couldn't do a complex, multi-demo analysis even if I wanted to, because I don't get those demos every night the way the networks and many journalists do. And even if I had them, DVR-inclusive numbers like Live+7 and C3 aren't available to anyone until three weeks after the airdate. Since L+SD is publicly available the next day and is a better representation of C3 numbers than Live+7 is, the deck is somewhat stacked in that metric's favor.

Still, I do wonder how different this site would really be if everything were out there. Philosophically, I think there's a better chance of finding genuine insight by developing a rich contextual understanding of one good paradigm. Sure, you can float effortlessly among total viewers and teens 12-17 and Live+3 and Live+7 and whatever other metric a network might put out in a press release. But do you truly understand the meaning of all these different numbers you're vomiting out? What does it really mean that one show beat another in some obscure demo that the other show probably doesn't even care much about? It's often said that statistics can be made to fit any narrative you want, and there are definitely places in the TV ratings media where you can see that happening. I think sticking to one paradigm helps mitigate some of that; there's less room to hide. While this approach might miss some things, those misses are pretty rare, at least within the broadcast realm (as TVByTheNumbers' Renew/Cancel results can attest), and the clarity provided in most other cases more than makes up for it.

Additionally, this approach makes for a much cleaner experience. Other sites do the Live+SD daily grind, but then their "bigger picture" pieces tend to use Nielsen's official Live+7 averages, which are almost like a different language. The raw numbers and the year-to-year changes might paint an entirely different picture from one post to the next. On this site, everything's fully integrated. The numbers I discuss at noon the day after the airing are the numbers I discuss when evaluating the season as a whole during the summer. You don't have to worry about the goalposts being shifted.

Historical Ratings

Analyzing a TV rating is all about comparing it to other ratings, past and present. Without comparisons, there's no way to put a rating in context. It's just a number, flapping in the wind.

But digging into TV's past inevitably runs into problems because ratings were much higher back in the day. Really, you don't even have to go that far back; collective Live+Same Day ratings for original series have been declining at nearly 10% per year recently. This collective decline is definitely worth chronicling; it's quite possible that ratings will eventually get so low that primetime TV must change in truly fundamental ways.

However, the collective decline is so powerful that it tends to overwhelm more specific comparisons. If you say a 2.0 show in 2013 is 33% weaker than a 3.0 show in 2003, you're not properly accounting for the massive collective decline across that decade; at anything close to 10% a year, the typical rating fell far more than 33% over those ten years, which means the 2.0 show is actually the stronger performer relative to its era. You can lament that "there are no hits anymore," or you can stop punishing shows for the historical era in which they air. Shows should be compared with their contemporaries.

Enter the statistic I introduced in April 2012 as "A18-49+," now nicknamed "Plus." (But you can call it whatever you want.) The name borrows from the world of advanced sports statistics, where numbers like OPS+ and ERA+ use the same general method: measuring traditional stats against the environment in which they take place.

Plus compares each 18-49 rating against the "league average": the average 18-49 rating of all original non-sports series airings on the big four in primetime during the "traditional" 35- or 36-week TV season. By combining everything, we get the best possible representation of the overall decline in entertainment programming. While raw numbers may suggest that "everything's down," Plus sets the "everything's down"-ness off to the side and shows whether a program would be growing in a steadier environment. For much, much more on the many uses of this number, see the A18-49+ Index!
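The exact formula isn't spelled out above, but given the OPS+/ERA+ analogy, a minimal sketch in Python might look like this. Assume Plus is simply the rating divided by the league average, scaled so that 100 means exactly average; the league averages below are invented for illustration, not real numbers.

```python
def plus(rating, league_average):
    """Index a rating against its era's league average, with 100 meaning
    exactly average (the same convention OPS+ and ERA+ use in baseball)."""
    return 100 * rating / league_average

# Hypothetical league averages: the mean A18-49 rating of all original
# non-sports big-four series airings in each season. Invented numbers.
league_avg_2003 = 4.5
league_avg_2013 = 1.6

print(round(plus(3.0, league_avg_2003)))  # 67: below its 2003 contemporaries
print(round(plus(2.0, league_avg_2013)))  # 125: well above its 2013 peers
```

Under these assumptions, the "weaker" 2.0 show from the earlier example actually towers over its contemporaries in a way the 3.0 show never did.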

The Collective Decline and the Death of TV

As I've said, the relative ratings and the collective decline should be evaluated separately. That doesn't mean the collective decline should be ignored. I regularly post updates on the "Climate" of broadcast primetime TV, tracking yearly declines in overall viewing, broadcast viewing and the "league average."

But the problem with studying the collective decline is that we can't really get at the big questions - the money questions - through ratings alone. That doesn't stop people from trying, though; when some beloved show drops a big chunk in the Nielsen ratings, social media cries out anew about the broken broadcast model.

Thus far, much of the "death of TV" talk seems exaggerated; reports on advertising rates per viewer, or CPM (cost per thousand impressions), generally suggest that broadcast's ratings declines are almost entirely offset by CPM increases. And there are frequent reports of increased value in ancillary avenues like off-network syndication and online streaming, which could motivate studios to make their shows more affordable for the networks. Still, there should be a point at which the reach of broadcast is so limited that advertisers stop funneling the same amount of money into broadcast TV. We can't find that point just by looking at ratings, so all we can really do is keep comparing CPM growth against ratings declines. CPM increases usually run in the high single digits, percentage-wise. So the real red flags will be: 1) ratings declines steep enough to more than cancel that out; and/or 2) a marked slowdown in the CPM increases.
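To make that offset concrete, here's a back-of-the-envelope sketch; every number in it is hypothetical.

```python
def ad_revenue(impressions, cpm):
    """Revenue for one ad spot: CPM is the price per thousand impressions."""
    return impressions / 1000 * cpm

# Hypothetical year-over-year scenario: audience down 8%, CPM up 8%.
this_year = ad_revenue(5_000_000, 30.00)                # $150,000
next_year = ad_revenue(5_000_000 * 0.92, 30.00 * 1.08)  # ~$149,040
print(f"{next_year / this_year - 1:+.1%}")              # -0.6%: nearly a wash
```

A ratings decline much steeper than the CPM bump flips that figure sharply negative, which is exactly the first red flag described above.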

Importance of Timeslots

Not all ratings are created equal. A show airing on Friday has a much rougher go of it, as does a show competing against several broadcast hits, as does a show with little support from the preceding (lead-in) program. These circumstances are a frequent subject of debate, but I'm making an ongoing effort to quantify them for every single rating and put everything on a relatively level playing field. That number is called True. It's not as simple as the historical adjustment (Plus), but it does enough to noticeably reduce the ratings standard deviation of a show whose circumstances change. True numbers are posted for every original broadcast series, and you can read much more about it at the True Index!
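The real True model is more involved than anything that fits here, but as a purely illustrative sketch of the general idea, imagine dividing out a multiplier for each circumstance. Every factor value below is invented for demonstration; none come from the actual formula.

```python
# Invented circumstance multipliers, purely for demonstration.
TIMESLOT_FACTORS = {
    "friday": 0.75,             # Friday airings tend to rate lower
    "strong_lead_in": 1.20,     # a big lead-in inflates a rating
    "heavy_competition": 0.85,  # several broadcast hits in the slot
}

def true_rating(raw_rating, circumstances):
    """Divide out circumstance factors to level the playing field."""
    adjustment = 1.0
    for c in circumstances:
        adjustment *= TIMESLOT_FACTORS[c]
    return raw_rating / adjustment

# A 1.2 on Friday against heavy competition adjusts up...
print(round(true_rating(1.2, ["friday", "heavy_competition"]), 2))  # 1.88
# ...while a 2.4 propped up by a strong lead-in adjusts down.
print(round(true_rating(2.4, ["strong_lead_in"]), 2))               # 2.0
```

The point of any such adjustment is the one named above: after it, a show's numbers should bounce around less when its timeslot situation changes.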

Grading Networks on Non-sports Series

As mentioned in the Plus section, that number compares against an average of original series programming, not an average of all programming. Why not just use everything? Because different kinds of programming behave differently in a changing TV world. A network's overall average combines sports (the most "DVR-proof" programming, thanks to the compelling cultural desire to watch it live), series repeats (hurt the most by new technology, given all the more convenient ways to catch up) and series originals, which fall somewhere in the middle. Since most of what we talk about here relates to series originals, the Plus number zeroes in on the standard that best fits them, removing the noise that other programming would add to a wider-ranging number.

While I only occasionally do "state of the network" stuff around here, I also believe these original series-only averages are the best way to evaluate a network's strength, because the network races in overall averages are increasingly dominated by differences in sports ratings. "Yeah, but sports count too!" you might say. The problem with putting sports ratings and entertainment ratings on a level playing field is that networks pay enormous rights fees for their high-rated sports programming. So while these programs create massive ad revenue, they also carry massive costs. There are articles almost every Olympics about how NBC is losing money or, in 2012's case, was very happy to merely break even. The top-rated National Football League is the same way. Networks are willing to take these kinds of hits for the good PR that comes with "winning" the ratings, because it eases the burden of filling a schedule, and because they hope those huge audiences will funnel into their entertainment programs. But ultimately, an entertainment program pulling a huge demo number is a much bigger profit center for its network than a sports program at the same rating. While there's certainly variance in cost within the entertainment realm too, excluding sports is at least a step in the right direction when estimating a network's "successfulness."

Definition of "Season Average"

One thing that's always pissed me off about Nielsen is its definition of a show's "season average," which includes every single airing (original and repeat) in its main timeslot.

What's so bad about that? It's simple: it punishes shows that repeat well. If a show gets good repeat ratings, the network will usually just air encores of it during its "off weeks." Even for good repeaters, these airings usually get half or less of the typical original rating, and they go into the show's season average and deflate it. But if a show is a terrible repeater, the network will often put it on an extended break and air some sort of "filler" programming in the timeslot, and that programming doesn't count toward the show's season average.

Repeating well is an asset, not something that should count against a show. So I've eschewed that stupid, stupid rule, and all averages you see on this site count original airings only. (They also count only the Live+SD ratings used all over the site, so even the averages for shows without repeats will vary wildly from Nielsen's official versions, which use Live+7 DVR numbers when available.)
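A quick worked example of how much the originals-only rule can matter, with hypothetical ratings:

```python
def average(ratings):
    return sum(ratings) / len(ratings)

# Hypothetical season: 20 originals at a 2.0, plus five in-timeslot
# repeats at a 0.9 because this show happens to repeat well.
originals = [2.0] * 20
repeats = [0.9] * 5

print(average(originals))            # 2.0:  the average used on this site
print(average(originals + repeats))  # 1.78: the Nielsen-style average
```

An identical show whose off weeks were filled with other programming would keep its full 2.0 under Nielsen's rule, so the good repeater is punished for exactly the trait the network values.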
