Tuesday, September 3, 2013

Intro to Nielsen Ratings: Basics and Definitions

This week, I'm unveiling a project I've been picking away at for months, an expanded version of my old Intro to Nielsen Ratings post. The three-part series begins today with the most basic factual info on how Nielsen ratings work and definitions of common terms. The next two parts, coming later this week, are more philosophical/opinionated, detailing the reasons why this particular site uses the numbers it uses.

As with the old post, please let me know if there's anything I should add, clarify or correct in these posts. I just want to get everything right.

What are Nielsen ratings? They are measurements of how many people watch TV. "Nielsen" is the name of the company that measures audiences for TV and other forms of media. Nielsen ratings exist to determine the dollar value of advertising in TV programs. Shows are kept around not (directly) because of popularity but because of their ability to generate profits. Since traditional TV advertising is still the most important source of revenue, TV ratings can give us outsiders quite a bit of info about the inner workings of the TV industry.

How does Nielsen come up with this data? Nielsen ratings come from tracking the viewing habits of a sample of about 20,000 households, calibrated to properly represent the demographic makeup of the TV-owning population. This is done primarily with set meters that track viewing minute-by-minute. Some of these meters require each member of the household to "check in" individually when viewing TV, and some (those used in "metered market" ratings) simply measure whether or not anyone in the household is viewing. There's also a limited amount of data collection done via the old "diary" method in which viewers record their viewing history, but these numbers are mostly used on a local basis during sweeps months.

The TV-owning United States population consists of 115.8 million households for the 2013-14 season. That means the Nielsen sample is about 0.02% of the population. That's not very much, but the sample size has drastically increased over the years. It was only about a tenth that large (as a percentage of the population) in 1977, though the granularity (or the importance of small fluctuations) is also much greater now than in 1977.

Why sampling, rather than a count of the complete population? Logistics. So far, the industry hasn't been able to come up with anything better. Counting everyone would be an incredibly expensive pursuit, it would require a ton of cooperation and integration among a wide variety of entities, there would be major privacy issues, and...

Is it reliable? ...all that work probably wouldn't change the ratings picture that much. The Nielsen ratings don't do as much as many people would like them to do, but all indications are they do a good job at what they're supposed to do, which is measure TV advertising audiences. I'll refer again to the 1977 pamphlet posted by USA's Ted Linhart, which (though the specific numbers are outdated) has a lot of general info that still applies today on the accuracy of using small samples. My best answer to the accuracy question is that the various entities in the industry all agree on Nielsen ratings as a standard, and there are very few real accuracy questions that perk up from within the industry. From Craig Engler of Syfy: "All of the other data we look at ... shows people watch on demand, DVD sales (we often get this data even though we don't usually share the profits), digital downloads from iTunes and Amazon, streams on the Internet, visits to show Web sites, even piracy ... give us different metrics to look at alongside TV ratings to make sure nothing really weird is going on."

Still, the use of sampling does mean that individual data points are best read as broad indicators rather than exact figures. An ABC exec said they calculate Nielsen's margin of error to be plus or minus 0.2 points in adults 18-49 (the currency on this site). Generally speaking, this means there's little genuine insight in strongly reacting to (or basing Web headlines on) small week-to-week fluctuations. A 0.1 drop is not a referendum on the creative state of the show. Not that this stops anyone! Small fluctuations are only something I tend to focus on when: 1) large fluctuations are to be expected, like early in the run of a new show; or 2) there are several consecutive small fluctuations in the same direction.

And now we'll run through a glossary of important terms for TV ratings followers.

The Basics
We often see TV ratings as a combination of two numbers; for example, a "3.4/8." The first number is the rating and the second is the share.

Rating. Nielsen ratings are percentages of the United States' TV-owning population. If a show has a 3.4 adults 18-49 rating, that means 3.4% of the adults 18-49 who own a television watched the program.

Calculation: Rating = 100% * number of people/households watching ÷ number of people/households who own TVs

The above calculation is particularly useful in practice because many press releases include their demographic figures in terms of the total number of viewers rather than a rating. It's easy to mistakenly think that "3.0 million adults 18-49" is the same thing as a "3.0 adults 18-49 rating." But that 3.0 million number must be divided by the size of the adults 18-49 population to make it into one of the ratings usually used here. 3.0 million ÷ 127.0 million (the 2013-14 adults 18-49 TV-owning population) = 0.024, or 2.4%, or a 2.4 A18-49 rating.

TVByTheNumbers puts out an annual list of some of these population sizes.
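The viewer-count-to-rating conversion can be sketched in a few lines of Python. The function name here is just illustrative; the 127.0 million population figure is the 2013-14 A18-49 number cited above.

```python
def rating(audience_millions, population_millions):
    """Convert a raw audience count into a Nielsen-style rating:
    the audience as a percentage of the TV-owning population."""
    return 100.0 * audience_millions / population_millions

# 3.0 million adults 18-49 against the 127.0 million A18-49
# TV-owning population for 2013-14:
print(round(rating(3.0, 127.0), 1))  # 2.4
```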

Share. The share is also a percentage. But rather than a percentage of the whole TV-owning population, share only counts people who are watching TV at the time of a show's original airing. Share is basically a crude way of accounting for people's tendency to watch TV in a given timeslot. A 2.0 rating is a very different thing in primetime than it is in the middle of the day when viewing levels are much lower, and share helps to account for that somewhat.

Calculation: Share = 100% * number of people/households watching a program ÷ number of people/households watching any TV in the show's timeslot
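As a quick sketch of the share calculation (hypothetical function name; the 42.5% viewing level is an assumed figure chosen to reproduce the 3.4/8 example from above):

```python
def share(program_rating, viewing_level):
    """Share: the program's audience as a percentage of everyone
    actually watching TV in the timeslot, rather than of the whole
    TV-owning population."""
    return 100.0 * program_rating / viewing_level

# A 3.4 rating in a slot where 42.5% of the population has the TV on
# works out to the "8" in a "3.4/8":
print(round(share(3.4, 42.5)))  # 8
```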

Viewing Levels. The denominator of the share calculation is how many people are watching TV in a given timeslot. This number is a very important piece of context when judging a show's rating; the acceptable ratings level is smaller in a slot when there's less tendency to view TV.

The term "viewing level" takes many forms; it might be called "HUT" (Households Using TV), "PUT" (Persons Using TV) or "PVT" (Persons Viewing TV).

Viewing level data is typically not released to the public, so I estimate it by adding up all the ratings in a timeslot and dividing that sum by the sum of all the shares. There's some error in that, since the ratings and shares are all rounded numbers, but it seems to be fairly close to the mark.
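That estimation method can be sketched as follows; the per-network ratings and shares here are made-up numbers for illustration.

```python
def estimate_viewing_level(ratings, shares):
    """Estimate the timeslot's viewing level (HUT/PUT) by dividing
    the sum of the published ratings by the sum of the published
    shares. Both inputs are rounded figures, so the estimate is
    only approximate."""
    return 100.0 * sum(ratings) / sum(shares)

# Three hypothetical programs airing in the same slot:
print(round(estimate_viewing_level([3.4, 2.0, 1.2], [8, 5, 3]), 1))
# roughly 41% of the population watching TV in the slot
```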

Nielsen has tinkered with its definition of "viewing level" over the years. Immediately after the introduction of the DVR, it was based on how much a timeslot's programming got watched within the Live+Same Day window. But in 2011, they returned to the more intuitive definition: how many people have their TVs on at this time on the clock?

Preliminary & Final Ratings
Nielsen produces several different streams of Live + Same Day data describing the previous night, each stream more accurate than the last.

Metered Market Ratings/Local People Meters. Very early in the morning, Nielsen puts out some preliminary data based on a compilation of Nielsen's local samples. The website TV Media Insights regularly publishes household data collected from 56 so-called "metered markets" in its daily newsletter. There's also some demographic info from so-called "Local People Meters" in the top 25 markets available in the early morning; the Sitcoms Online Twitter feed often posts early adults 18-49 ratings from the top 25 markets. Because these numbers exclude large portions of the nation, they're typically several tenths of a point off of where the national numbers end up, but they can be useful as very broad looks at what's to come.

This blog pretty much never refers to these numbers, though I might retweet the TVMI/Sitcoms Online numbers on my Twitter feed early in the morning if they're particularly interesting.

Fast Nationals (and Time Zone-Adjusted Fast Nationals). The first set of data using Nielsen's national sample is released around 11am ET each morning, and it's the set on which most TV ratings ink is spilled across the media.

These numbers are usually within a tenth or two of the final national numbers, with a few notable exceptions. These exceptions arise because, for example, the 8:00 fast nats are strictly a combination of the rating in each market at 8:00. So when a market airs something other than the designated program at 8:00, that other something is counted in the fast national rating as part of the designated program.

Why would they air something else? It might be because the designated program was aired live in all time zones (meaning it aired before primetime on the West Coast, and the fast nats count alternate programming aired out west). It might be because they pre-empted the national broadcast for some piece of local programming. Or it might be because a program didn't start right on the half hour. That might be due to scheduling (some programs are explicitly scheduled to start at 8:31, or 10:02, etc.) or due to an overrun from an afternoon sports event, which pushes the start times for all primetime programs on that network forward.

When a major event like the Super Bowl scrambles the usual schedule, the networks will occasionally put in a special order for time zone-adjusted fast nationals, which account for all these differences in time zone viewing. This allows them to put out fairly accurate numbers more in line with the typical TV ratings news cycle.

During the regular season, this blog puts out some quick reactions to the fast numbers shortly after they come out, but the ratings tables are all based on finals.

Final Nationals. The final nationals come out about five hours after the fast nationals and weed out all of the start time, pre-emption and time zone differences to produce the most accurate look at a program's rating.

DVR Streams
Live, Live + SD, Live +3, Live +7. Since the advent of the DVR in the mid-2000s, Nielsen has put out several different streams based on when people viewed a piece of programming.

The Live-only ratings, rarely seen in public anymore, measure how many people watched a program as it happened.

The Live+SD ratings, this blog's currency (more on why that is next time), measure live viewing plus a program's DVR viewing until 3:00am local time that night.

The Live+3 ratings measure live viewing plus DVR viewing up to three days later. The networks often put these numbers out in press releases within a week of the airdate.

The Live+7 ratings measure live viewing plus DVR viewing up to seven days later. They're not available until three weeks later. They're the closest thing to a "true popularity" measure out of what we see from Nielsen.

Commercial Ratings. All of the above numbers measure an average across the full duration of a program. But the reality is that the advertisers who are keeping the TV industry afloat don't really give a damn who's watching the content portions of the program. They just want to know who's watching the ads. So Nielsen also produces commercial ratings, which just measure that. Like with the program ratings, there are live, same-day, three-day and seven-day commercial ratings. The agreed-upon industry standard for setting ad rates is the three-day commercial ratings (also known as C3). But while three-day commercial ratings are the real numbers of importance in terms of determining ad dollars, they're almost never seen in public. More about that problem in the next post!

Coverage Ratings. Just throwing this in because it occasionally comes up in cable press releases. Sometimes their ratings are based not on the whole TV-owning population but on the segment of the population that receives that particular channel. These are called "coverage ratings." While they could be construed as a fairer representation of that channel's performance, they are not exactly apples-to-apples with the usual national ratings.


Spot said...

Why are some shows (like Survivor) usually adjusted up despite meeting none of the criteria for likely adjustments?

Spot said...

And why was The Big Bang Theory adjusted up 3-4 tenths week after week at the end of last season?

Spot said...

I don't know, and maybe someone will come through someday and clarify. My best guess is that there's some segment/market(s) that isn't ready for the initial processing and gets added in after finals, which is why those adjustments are typically upward.


© SpottedRatings.com 2009-2022. All Rights Reserved.