Tuesday, May 26, 2020

How Much Did COVID-19 Affect Spring TV Ratings?

This has been a spring like no other, both in television ratings and in things that are much more important than television ratings. But since ratings are what we do here, in this post I'm gonna tackle a couple questions: 1) how much did the spring viewing inflation affect TV ratings and 2) why should we care? Or, to try to combine them into one question: was the inflation so profound that we should think of the 2019-20 TV season almost like two separate seasons?

Quantifying the COVID Effect

I'm gonna go about trying to find a number for the spring environment in a couple ways. The first one is simple, and is built on the league average numbers we already have. Through 24 weeks (the last week with no discernible effect on viewing levels), the league average was projected at 0.77. These projections are far from perfect, but this number had gone up a bit at the very start of the winter, when Jeopardy! The Greatest of All Time was on and The Bachelor returned, and then had basically not budged for two months. A 0.77 would be -19% year-to-year, and there was little in the early-2020 data to suggest it was going to be better than that. In fact, most 2020 weeks had been down 20% or more, though it was reasonable to expect that to ease up a touch with a spring season of The Masked Singer on tap.

Instead of the 0.77, we came out of the spring with a final league average of 0.82, down by just 14% from last year. From a raw numbers standpoint, what this basically meant was that the spring drop we're used to seeing every single year didn't happen. The season-to-date league average through week 24 was 0.817, and the league average the rest of the way was 0.823 (leading to a final league average of 0.819).

How bizarre is this? Over the course of the entire A18-49+ era, it's actually not that bizarre, because the first half of the era was defined by Fox being essentially a test pattern in the fall and then rising like a phoenix from the ashes at midseason with the return of American Idol. But once we get past Idol's prime years, this season sticks out like a sore thumb.


If this season had followed the trend of other recent seasons, we would've expected about a 15% drop-off in this period and a post-DST league average of around 0.69 or 0.70. Instead, we got 0.82, which is 19% higher.

A slightly more rigorous way of looking at this, using the actual volume of originals aired: if we assume the first 24 weeks (1,194 total hours) were in an environment with an "effective league average" of 0.77, what would the "effective league average" have to be in the last 11 weeks (527 hours) to get to the 0.82 we ended up with? Without going through the entire weighted mean equation, I will just tell you that the answer is 0.93.
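For anyone who wants to check the arithmetic, here's a quick sketch of that weighted mean solve, using the hour counts and averages quoted above:

```python
# Solve the weighted mean for the spring "effective league average" x:
# (pre_hours * pre_LA + spring_hours * x) / (pre_hours + spring_hours) = final_LA
pre_hours, spring_hours = 1194, 527   # original hours through week 24 / after
pre_la, final_la = 0.77, 0.82         # projected pre-spring LA / final season LA

total_hours = pre_hours + spring_hours
spring_la = (final_la * total_hours - pre_la * pre_hours) / spring_hours
print(round(spring_la, 2))  # → 0.93
```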

Now, there are some dangers in doing this in such broad terms. As I said earlier, we basically have an entire half of the A18-49+ era where the intra-season league averages were incredibly warped by Fox and American Idol. I think you could make the case that maybe the LA was going to tick up a teeny bit from this projection in the spring, since Fox was going to have a season of The Masked Singer airing almost three months longer than it did last year. Also, it's a bit unknowable how much things were affected by networks running short on original content this year. The last seven or so weeks of the season consistently had a lot less original volume than a year ago.

So I also wanted to look at this on a show-by-show basis, and see if that adds up to a similar picture to the one painted by the larger metrics. For this table, I'm comparing year-to-year trends using pre-DST averages against year-to-year trends using post-DST averages (I'm using year-to-year rather than raw numbers just because that is something that should be relatively flat throughout the season). Just to make sure we have a decent sample size, these are the 34 big four scripted shows that aired at least four episodes in each of the four periods in question (pre-DST last year and this year, post-DST last year and this year).

Show | pre-DST y2y | post-DST y2y | improvement
Brooklyn Nine-Nine | -34% | +10% | +44
SEAL Team | -24% | +9% | +33
Chicago Med | -9% | +17% | +26
Chicago Fire | -8% | +17% | +25
The Rookie | -18% | +7% | +25
The Blacklist | -15% | +8% | +24
Blue Bloods | -23% | +0% | +24
Chicago PD | -6% | +17% | +23
Grey's Anatomy | -24% | -2% | +22
American Housewife | -43% | -21% | +22
Will and Grace | -38% | -17% | +21
The Neighborhood | -24% | -5% | +19
Law and Order: SVU | -26% | -8% | +18
Station 19 | +1% | +19% | +18
NCIS: Los Angeles | -23% | -6% | +17
Last Man Standing | -34% | -18% | +15
The Resident | -26% | -11% | +15
NCIS: New Orleans | -22% | -8% | +14
Young Sheldon | -41% | -29% | +13
The Goldbergs | -29% | -19% | +10
Family Guy | -30% | -29% | +2
Single Parents | -29% | -32% | -3
The Simpsons | -20% | -26% | -5
Bob's Burgers | -26% | -35% | -10
God Friended Me | -26% | -36% | -10

30 of 34 shows had improved year-to-year trends in the spring. 25 of those 30 had double-digit improvement. The median pre-DST trend was -24%; the median post-DST trend was -7%. And the median show in this chart had about a 17.5-point improvement in year-to-year trend in its post-DST averages, compared with its pre-DST averages.
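Those medians can be sanity-checked against the rows of the table above (the full table had 34 rows, and the 25 that survive here land very close to the same medians):

```python
from statistics import median

# Pre-DST and post-DST year-to-year trends for the 25 shows listed above,
# in table order (Brooklyn Nine-Nine first, God Friended Me last).
pre_dst  = [-34, -24, -9, -8, -18, -15, -23, -6, -24, -43, -38, -24, -26,
            1, -23, -34, -26, -22, -41, -29, -30, -29, -20, -26, -26]
post_dst = [10, 9, 17, 17, 7, 8, 0, 17, -2, -21, -17, -5, -8,
            19, -6, -18, -11, -8, -29, -19, -29, -32, -26, -35, -36]

improvement = [b - a for a, b in zip(pre_dst, post_dst)]
print(median(pre_dst), median(post_dst), median(improvement))  # → -24 -8 18
```

The small gap from the quoted -7% and ~17.5 figures comes from the nine rows not shown here.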

So let's ask the question we asked about the league average in total: is this kind of 17-point improvement typical? Unlike last time, this is something that should work across the whole era, since we're talking about individual shows. So I used the same parameters for every other season in the era. And here are the yearly trends:


Once again: this is not normal. There is no real sign we should expect year-to-year trends to change in the spring, in either direction. In the previous 17 seasons, this change is about -0.4 on average, and it had never been more than five points away from zero in either direction.

How does that translate to a league average? A show suddenly improving from -24% to -7% year-to-year is doing about 23% better than it's "supposed to." If we assume that the "effective league average" in the spring was supposed to be a 0.77, and it instead did 23% better than it was supposed to, that would take us to a number that is similar to our first calculation but actually slightly higher: 0.95.
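The back-of-the-envelope version of that step, for anyone following along (the multiplier comes out a touch differently depending on whether you round the medians first):

```python
# Going from -24% to -7% year-to-year means the spring number is
# (1 - 0.07) / (1 - 0.24) times what the fall trend implied.
ratio = (1 - 0.07) / (1 - 0.24)  # ≈ 1.22, i.e. about 22-23% better
print(round(0.77 * ratio, 2))    # ≈ 0.94 using the rounded medians
print(round(0.77 * 1.23, 2))     # → 0.95 using the post's "about 23% better"
```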

Now, this approach has its own share of problems. Part of the reason it might spit out a bigger number is that a lot of these shows had earlier finales than usual due to production stoppages, and it seems clear from looking at viewing levels that the COVID-related inflation was more pronounced in the early part of the spring. I would still argue this effect was very much in play even in May; after all, the year-to-year trend was -20% through 24 weeks, and no week after the lockdowns even got back to -10%. But this sample may be slanted a bit toward the heavier part of the inflation.

So if I had to go with one number, I think I would opt for the slightly more conservative 0.93. Your mileage may vary, but it seems abundantly clear however I look at it that the spring was simply a different world from the fall.

A18-49+ in the COVID-19 Era

I promised at the top that I would tell you why I'm writing all this. Who really cares if the "effective league average" was 0.77 for 24 weeks and 0.93 for the last 11?

I'm writing it as a lead-in to our annual journey called The Breakdown, which pulls apart each season by network and night, as well as into three sections of the season: fall, winter and spring. The convenient thing from a timing standpoint is that people started staying home almost exactly at the start of what we call the "spring." That week 24 that I've cited so often in this post was the last week of winter and also the last unambiguously "normal" week from a viewing-levels standpoint. So we can rather cleanly separate the season the way we've always separated it: fall/winter are "normal times" and spring is "COVID times."

My big question is: with a change this drastic, can it possibly be fair to judge all three of these sections with the same league average? Should we just chronicle this huge ratings upswing in the spring, and try to account for that in our heads at all times, or should we try to create a Breakdown that blends two different "effective league averages" and at least takes a stab at doing something a little closer to apples-to-apples? Whereas the True formula is an attempt to "flatten" ratings across the season, I'm talking about a metric here that would attempt to "un-flatten" the season, simulating the kind of typical spring drop we're used to exploring every year in The Breakdown.

It might be easier to just do an example. Here's a look at CBS's Monday lineup. The first "Breakdown" will use A18-49+ based on one full league average (0.82) for the 2019-20 season.

fall (A18-49+)

2018-19: The Neighborhood | Happy Together | Magnum P.I. | Bull — night average: 93
2019-20: The Neighborhood 107 (-8%) | Bob Hearts Abishola 90 (+2%) | All Rise 76 (-16%) | Bull 77 (-8%) — night average: 84 (-9%)
8:00 hour: 98 (-4%)

winter (A18-49+)

2018-19: The Neighborhood | Man with a Plan | Magnum P.I. | Bull — night average: 92
2019-20: The Neighborhood 106 (-19%) | Bob Hearts Abishola 88 (-12%) | All Rise 72 (-11%) | Bull 74 (-8%) — night average: 81 (-12%)
8:00 hour: 97 (-16%)

spring (A18-49+)

2018-19: The Neighborhood | Man with a Plan | The Code | Bull — night average: 74
2019-20: The Neighborhood 117 (+10%) | Bob Hearts Abishola 100 (+18%) | All Rise 77 (+34%) | Bull 82 (+21%) — night average: 89 (+21%)
8:00 hour: 109 (+14%)

In this one, CBS appears to pull off an absolute miracle in the late season. A lineup that was down roughly 10% in the first two segments of the season is suddenly up more than 20%. Yes, some of that is because last year Magnum P.I. gave way to the major dud The Code, but we're talking about a roughly 30-point improvement in year-to-year trends at 8:00 and 10:00 as well!

Now, here's the same Breakdown using what I am preliminarily calling CVPlus for 2019-20. The fall and winter numbers are higher, because they are compared against a league average of 0.77, while the spring ones are compared against a league average of 0.93.

fall (CVPlus)

2018-19: The Neighborhood | Happy Together | Magnum P.I. | Bull — night average: 93
2019-20: The Neighborhood 113 (-3%) | Bob Hearts Abishola 96 (+8%) | All Rise 81 (-11%) | Bull 82 (-2%) — night average: 89 (-4%)
8:00 hour: 105 (+2%)

winter (CVPlus)

2018-19: The Neighborhood | Man with a Plan | Magnum P.I. | Bull — night average: 92
2019-20: The Neighborhood 113 (-14%) | Bob Hearts Abishola 94 (-6%) | All Rise 76 (-6%) | Bull 78 (-2%) — night average: 86 (-7%)
8:00 hour: 103 (-10%)

spring (CVPlus)

2018-19: The Neighborhood | Man with a Plan | The Code | Bull — night average: 74
2019-20: The Neighborhood 103 (-3%) | Bob Hearts Abishola 88 (+4%) | All Rise 68 (+18%) | Bull 72 (+7%) — night average: 79 (+7%)
8:00 hour: 96 (+0%)

Using this metric, we can still see that CBS did better in the spring because All Rise compared much more favorably vs. The Code, but the difference is much more reasonable. The 8/7c hour takes a modest dip in the spring rather than going up, and reverts to about the same y2y trends it had in the fall. Bull still had its best segment of the season, in part because its lead-in had a better year-to-year trend, but it's now about a ten-point difference rather than 30. And we should expect it to overachieve somewhat, given that Bull had the fourth-best improvement out of the 34 shows in that big table above.
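Mechanically, the conversion is just a re-indexing: take the A18-49+ number (which is indexed against the full-season 0.82) and re-index it against the period's effective league average. A quick sketch using this post's 0.77/0.93 figures (the function name is mine, and tiny differences vs. the table come from rounding in the underlying averages):

```python
SEASON_LA = 0.82                                  # full-season league average
PERIOD_LA = {"fall": 0.77, "winter": 0.77, "spring": 0.93}

def cvplus(plus_index, period):
    """Re-index a season-indexed A18-49+ number against a period's effective LA."""
    return round(plus_index * SEASON_LA / PERIOD_LA[period])

# The Neighborhood's spring 117 in plain A18-49+ becomes the 103 in the CVPlus table:
print(cvplus(117, "spring"))  # → 103
```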

So, having seen an example of how this would work, it's time to explore the logistical questions in terms of approaching The Breakdown this year. How far should we go in terms of implementing a metric like CVPlus into the analysis of the 2019-20 season?

Should we use "normal" A18-49+ or CVPlus in The Breakdown? The answer to this is almost certainly going to be "both"; I plan to have a toggle that you can use to go between the two. Perhaps the more appropriate question is: which of the two should be the default view? I am leaning toward CVPlus, because I think it is much more suitable for posts like The Breakdown, where a big part of the analysis is in sub-sections of the year.

But this is a metric, like any other, that will create winners and losers, and I'm always worried that it might do so in a way that is systematically unfair. When the stay-at-home period first began, I was more concerned about the viewing inflation being unequal across different timeslots. Viewing levels suggested a sort of flattening out of overall viewing across the week; tons more people were at home on the weekends, but there was not that much of a spike for the already-high Sunday and Monday viewing levels. Though we don't get to see half-hourly viewing levels anymore, I was also concerned that it was affecting the early hours more than the late ones, since the early ones are usually depressed by DST at this time of year.

Looking at the list of 34 shows above, I am not too concerned about those timeslot things anymore. Maybe there's some effect, but there are still some early-week shows among the highest achievers (The Neighborhood, NCIS) and some 10/9c shows as well (Blue Bloods, Chicago PD). The Rookie is a shining example of both, though of course it had timeslot help from American Idol in spring 2020 as well.

But there is another paradigm that makes me a little more concerned: network-specific effects. Have certain networks gotten a bigger primetime surge during the pandemic, perhaps thanks to a major upswing in their local news departments? On the list of 34 shows above, you can kind of see this with Fox. Not a single Fox show is above the median. The Resident and Last Man Standing are close, Empire and 9-1-1 a little less close, and the Fox cartoons were generally awful. If we include the CW, not yet mentioned in this post, that network had even less of a spring surge; in fact, the CW spring seems better explained with raw A18-49+ than with CVPlus. Comparing the pre-spring and spring year-to-year trends as we did above, they had two qualifying shows that did way better in spring (Charmed and Riverdale), two that did noticeably worse (Supergirl and Legends of Tomorrow) and two about even (The Flash and Dynasty). So they look closer to a wash than to the 17-point change we saw on the big four. So maybe CVPlus should be the default for the big four but normal Plus for the CW? That is a bit too convoluted for my tastes, but we'll see.

Here's another question: if CVPlus is good enough to be the default view on The Breakdown, is CVPlus good enough to take over for A18-49+ completely, on things like The War of 18-49 and Schedules Plus? I don't have an answer for this one because I haven't looked at a whole lot of CVPlus full-season averages yet. My guess is that it will not have nearly as warping an effect on full-season averages, because a lot of shows will have their originals distributed about the way the league average was. The original weighted mean equation had 1,194 pre-spring hours and 527 spring hours, which means that 31% of the originals took place in the spring. For individual shows that break down similarly, the CVPlus for the full season should be about the same as the Plus. We'd of course see a big change with shows that have separate fall/spring seasons, like The Masked Singer and Survivor and The Voice, but it may not be a game-changer with scripted shows. So my instinct here is to stick with regular Plus and make CVPlus just a one-year Breakdown feature rather than something that has to be explained over and over again forever. But I'll dive into some more full-season numbers and see if a pressing need arises.
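To illustrate why full-season averages shouldn't move much: for a show whose originals split roughly like the league's (about 69%/31%) at a flat rating, an hours-weighted blend of the two period indices (my assumption for how a full-season CVPlus would be computed) lands within about a point of the plain Plus:

```python
rating = 0.82                               # a hypothetical flat rating all season
pre_w, spring_w = 1194 / 1721, 527 / 1721   # league's pre-spring/spring hour split

plus = rating / 0.819 * 100                 # plain full-season Plus
cvplus_season = (pre_w * rating / 0.77 +
                 spring_w * rating / 0.93) * 100   # hours-weighted CVPlus blend
print(round(plus), round(cvplus_season))    # → 100 101
```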

Anyway, that's how I'm looking at all of this, and I welcome any of your thoughts. I will be making these decisions pretty soon either way, and the goal is to get The Breakdown rolling again later this week or next week.


© SpottedRatings.com 2009-2018. All Rights Reserved.