Here's a little example section:
|Show|A18-49|Share|Last|Lead|LeLa|Comp|CoLa|bcShr|Avg|Rank|
|---|---|---|---|---|---|---|---|---|---|---|
|The Middle (R)|1.4|4|-22%|n/a|n/a|13.4|n/a|9|-19%|8/9|
|The Middle (R)|1.4|4|0%|0%|0%|13.4|0%|9|-18%|8/10|
|Modern Family (R)|2.3|6|-34%|64%|n/a|11.2|n/a|17|1%|3/10|
|Off the Map|1.4|4|-18%|-22%|-25%|3.6|-14%|28|-25%|9/9|
A18-49 - From Nielsen. Adults 18-49 rating. Percentage of US TV-owning adults 18-49 watching the program. The most common currency of TV ratings. If you need more help getting this, I recommend my Intro to Nielsen Ratings from a few months back. I use it and not total viewers because of A18-49's much closer correlation with ad rates.
Share - From Nielsen. Adults 18-49 share. Percentage of US TV-watching adults 18-49 watching the program. Share is basically what I would call a "HUT-adjusted rating," a rating that adjusts for overall viewing levels. In other words, share should theoretically put shows on a "level playing field" when compared across different days of the week or times of day. Should it be used more? In theory, I tend to think so, but it doesn't end up helping much. See "bcShr" below.
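To make the rating/share relationship concrete, here's a minimal sketch (the 3.4 rating and 37% viewing level are made-up numbers for illustration, not from the table above):

```python
def share(rating: float, hut_pct: float) -> float:
    """Share: the rating expressed as a percentage of people actually watching TV."""
    return rating / hut_pct * 100

# A hypothetical 3.4 rating when 37% of adults 18-49 are watching TV
print(round(share(3.4, 37.0), 1))  # → 9.2, i.e. roughly a 9 share
```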
Last - Percent difference from previous episode. This has become sort of the standard-issue number in the great race to make daily ratings interesting. I'm not really sure who popularized it, though I remember it first at TVByTheNumbers. If you've ever read "About Spotted," you know I'm not quite as entranced with it as some, because I think most individual fluctuations are fairly meaningless and boring, and quite a few are misleading, but it's still a pretty reliable way of explaining what happened last night.
Lead - Calculation. Percent difference from lead-in program. I've thought about adding 100% to each of these so that it's the more traditionally discussed term "retention" instead. Either way, lead-in retention is a statistic that I've looked at a lot in an attempt to quantify its meaning. At this point I'm pretty sure the answer is definitely, positively, for sure, "More important than some people think and less important than some other people think." So take that.
LeLa - Calculation. Percent difference between the show's lead-in and its lead-in for the previous episode. When I first started doing this (toward the beginning of February sweeps) it wasn't very informative, since the listings were usually about the same week-to-week so the "LeLa" would just match the "Last" of the show above it. It's a little more useful now as shows frequently alternate between originals and repeats. It's meant to be used in tandem with "Last" as a possible explanation for week-to-week fluctuations. For example, you can see with Mr. Sunshine above that it dropped 22% from its previous original, but a big part of the reason for that may have been the 44% drop in lead-in (with Modern Family in repeats).
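"Last," "Lead," and "LeLa" all boil down to the same percent-difference calculation. A quick sketch with hypothetical ratings (a show dropping from a 1.8 to a 1.4):

```python
def pct_diff(current: float, previous: float) -> float:
    """The percent-difference calculation behind 'Last', 'Lead', and 'LeLa'."""
    return (current - previous) / previous * 100

print(round(pct_diff(1.4, 1.8)))        # → -22, i.e. "down 22%"
# Adding 100 gives the "retention" framing mentioned under "Lead"
print(round(100 + pct_diff(1.4, 1.8)))  # → 78, i.e. "retained 78%"
```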
Comp - Calculation. Competition. Total rating of all broadcast programs airing against it. I haven't really looked at this number enough to be able to put it in perspective very well. I do know that the shows with the greatest competition are usually the weakest shows, simply because they don't have to face themselves. What I want is a number that takes the rating a show gets and adjusts it for the heaviness of the competition it has to face. Guess that's kinda what "bcShr" below is.
10:00 fix: 10:00 shows currently face two networks where the other shows face 3.5. So I multiply two-network competition by a 3.5 / 2 constant to make it the equivalent of facing 3.5 networks. This constant is applied to all programming airing at 10:00.
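As a sketch of the Comp calculation with the 10:00 adjustment (the competitor ratings here are hypothetical):

```python
COMP_10PM_FIX = 3.5 / 2  # 10:00 shows face 2 networks instead of ~3.5

def comp(competitor_ratings, at_10pm=False):
    """'Comp': total rating of all broadcast programs airing against a show."""
    total = sum(competitor_ratings)
    return total * COMP_10PM_FIX if at_10pm else total

# A hypothetical 10:00 drama facing competitors rated 2.0 and 1.2
print(round(comp([2.0, 1.2], at_10pm=True), 1))  # → 5.6
```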
CoLa - Calculation. Percent difference between the show's competition and its competition for the previous episode. As with "LeLa," this is another way of providing some more insight into big week-to-week fluctuations. Again, you can see that Mr. Sunshine had the misfortune of facing 44% more competition this week than last week, and that's another part of that show's huge 22% drop week-to-week. I should note that some of these and some of the "LeLa" will read "n/a" where they shouldn't necessarily (like the Modern Family repeat above) because my spreadsheet only has full listings dating back to one week before I started posting them. (Starting Monday, February 7.) So while I did enter complete season individual episode results, I don't have full listings so I don't have lead-in/competition info for shows that last aired before February 7, 2011.
bcShr - Calculation. Broadcast share. Percentage of US national big-5 broadcast TV-watching adults 18-49 watching the program. This is basically my take on "Share." Here's the thing with share. It should be good, in theory, for comparing a Friday success against a midweek success. Friday has fewer people watching, so let's just compare the percentages within the people who are watching. But that gap doesn't cover it. Usually there are about 20% fewer A18-49 watching on Friday than during the midweek. (Or about 5-10 percentage points; it's typically around 30% on Friday and 35-40% on "regular" days, at least at the time of year I'm writing this.) But the gap between "midweek success" and "Friday success" is usually more than 20%, and most shows that move to Friday drop more than 20%. So it's not just a straight-up "overall viewing levels" thing. I think the reason for this is that there are a lot of people watching both during the week and on Friday who aren't really "up for grabs" by the broadcasters. So the gap's really bigger than 20%; it's just made to look smaller by all these people locked into other channels. So the best way I can think of to try to put Friday/Saturday on a level playing field is to compare broadcast viewing rather than all viewing. Hence, bcShr. How does the show do compared to how all the other broadcast shows are doing? That seems to be the closest thing we can get to an apples-to-apples midweek-to-Friday level playing field.
10:00 fix: 10:00 shows are currently one of three networks (1 / 3) where others are one of 4.5. So I multiply by a 3.0 / 4.5 constant to make it as if the show were one of 4.5 networks (1 / 4.5). This constant is applied to all programming airing at 10:00.
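Here's a sketch of bcShr with the 10:00 adjustment (slot ratings are hypothetical; note the show's own rating is included in the slot total):

```python
BCSHR_10PM_FIX = 3.0 / 4.5  # one of 3 networks at 10:00 vs. one of ~4.5 otherwise

def bc_share(rating, slot_broadcast_ratings, at_10pm=False):
    """'bcShr': the show's slice of all broadcast viewing in its timeslot.

    slot_broadcast_ratings should include the show's own rating.
    """
    shr = rating / sum(slot_broadcast_ratings) * 100
    return shr * BCSHR_10PM_FIX if at_10pm else shr

# A hypothetical midweek slot: our 1.4 against a 2.0, a 1.2, and a 3.0
print(round(bc_share(1.4, [1.4, 2.0, 1.2, 3.0])))  # → 18, i.e. an 18 bcShr
```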
Avg - Calculation. Percent difference from the show's average demo for previous original episodes this season. A nice thing to have because, as noted earlier, the "Last" stat which has so permeated the daily ratings narrative can often be misleading. ("Steady" at a series low is not really an accomplishment, while calling something "down" when it loses one tenth from a big spike to a series high also kinda misses the point.) I understand that the "Last" thing is probably the best way to make this stuff interesting week-to-week, but this is sometimes the more correct story within the context of the show's previous ratings. (But "Avg" can be unhelpful in its own right at times; for example, shows typically decline in the spring meaning lots of well-below average episodes. That's not "bad" for the shows per se, just natural.)
Rank - Calculation. The rating's rank among the show's original episodes that have aired so far this season. Basically the same idea as Avg: a way of comparing the episode against the rest of the season. Since it doesn't denote ties, a "15/16" may not make clear whether the show tied a season low or is actually the second-lowest number. Not sure how to elegantly fix that and still make it fit in the space. Sorry. But hey, you can rest assured that "16/16" is definitely an outright season low.
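A sketch of how "Avg" and "Rank" could be computed for the newest episode from a season's worth of original-episode ratings (the list here is hypothetical):

```python
def avg_and_rank(season_ratings):
    """'Avg' and 'Rank' for the newest episode, given all original ratings so far.

    'Avg' compares the latest rating to the average of the *previous* originals;
    'Rank' places it among all originals including itself (ties not denoted).
    """
    *previous, latest = season_ratings
    prev_avg = sum(previous) / len(previous)
    avg_pct = (latest - prev_avg) / prev_avg * 100
    rank = sorted(season_ratings, reverse=True).index(latest) + 1
    return round(avg_pct), f"{rank}/{len(season_ratings)}"

print(avg_and_rank([2.0, 1.8, 1.6, 1.4]))  # → (-22, '4/4'): an outright season low
```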
Now let's take a look at the daily Viewing Levels table:
|Slot|Min HUT|Calc HUT|Max HUT|
|---|---|---|---|
This estimate of viewing levels (the percentage of all TV-owning adults 18-49 who are watching any TV) has really interested me since I started doing this, and in fact it's the number I most anticipate when I start plugging stuff into the spreadsheet each day. The problem is its inaccuracy. It's mostly about rounding. For example, if a show gets a 3.4/9 rating/share, that means the 18-49 viewing level could hypothetically be anywhere from 3.35/9.5 (35.3%) to 3.45/8.5 (40.6%). With lower-rated programs, that margin of error gets bigger. By combining all results in a timeslot (all ratings plus all shares), I get closer, but it's still usually in the vicinity of ± two percentage points (or higher on a repeat-filled night like the one above). Maybe some day I'll find a way to get more precise viewing level information, but today is not that day!
Min/Max HUT. These numbers are derived by reverse-rounding to the minimum and maximum HUT possibilities for every single rating/share calculation (like the 3.4/9 -> 35.3 and 40.6 described above). Then I combine all those for a given timeslot and take the highest of the "Min" and the lowest of the "Max" to create the smallest possible definite boundaries for the viewing levels. Unlike with the actual HUT estimate, I don't throw in programs that overlap with the hour (see "the BLUE problem" below), just to keep this as accurate as possible, though I do a separate 8-10pm calculation that lets me get the two-hour shows into the mix. You can be assured the viewing level will never fall outside the Min/Max range.
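The reverse-rounding can be sketched like this (assuming ratings round to the nearest tenth and shares to the nearest whole number, as in the 3.4/9 example):

```python
def hut_bounds(rating, share):
    """Min/max HUT consistent with a rounded rating/share pair.

    A reported 3.4 rating could really be 3.35-3.45; a reported 9 share
    could really be 8.5-9.5.
    """
    lo = (rating - 0.05) / (share + 0.5) * 100
    hi = (rating + 0.05) / (share - 0.5) * 100
    return lo, hi

def slot_bounds(rating_share_pairs):
    """Tightest definite HUT range for a slot: highest of the mins, lowest of the maxes."""
    bounds = [hut_bounds(r, s) for r, s in rating_share_pairs]
    return max(lo for lo, _ in bounds), min(hi for _, hi in bounds)

lo, hi = hut_bounds(3.4, 9)
print(round(lo, 1), round(hi, 1))  # → 35.3 40.6, the range from the example above
```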
Finally... the BLUE problem. The problem with the stats in blue comes from the fact that I don't usually have half-hour breakdowns within a program, so for now I'm forced to treat each show as if it gets the same rating throughout. Especially with two-hour reality shows that grow a lot during the broadcast, that means that program is "undercounted" in the second half and "overcounted" in the first half, so measurements that are reliant on other shows in the timeslot (Comp, bcShr, Calc HUT) can be thrown off. A good rule of thumb for those numbers is that they are most accurate for programs that don't go up against anything longer than they are. Most two-hour programs will usually have reasonably accurate Comp/bcShr/Calc HUT numbers because they almost never face anything that overlaps with them, so the lack of breakdowns doesn't matter. Again, maybe some day I'll get half-hour breakdowns for everything every day, but we're not there yet.