The Next Frontier: Content Analytics

Audience Measurement 2015

Bill Harvey, Chairman, RMT, ScreenSavants
Bryan Mu, Vice President, Research & Analytics, NBCU Cable Entertainment, UCP


High Program Failure Rates.

  • Over $230 billion annually is spent in the creation of screen content. However, on television:
  • Only ~20% of concepts become pilots
  • Only ~20% of pilots make it to a first season
  • Only ~33% of first-season TV shows make it to a second season

This high failure rate occurs in large part because decisions about what to create, air, and maintain are made without definitive research into viewers' emotional responses.

Research has delivered pass-or-fail measures. It needs to deliver digested insights that help creatives bring out the best in what they have begun.

The Quest for Understanding People

And What Drives What They Watch

First we boiled the Oxford Dictionary ocean
Dr. Timothy Joyce, 1934-1997

Successive phases of national sample self-scaling and factor analysis produced 20 psychographic questions now used by MRI; Harvey extracted and further developed 1562 words or phrases in subsequent work launching cable networks.


Validated Causal Drivers of TV Program Selection

With various cable operators in the 1990s, Harvey and Next Century Media (NCM) tested a Recommender based on set-top box (STB) data and the 1562 metatags; 265 of these, the DriverTags (DTags), accounted for >95% of the correlation with actual viewing.

For example, when one or more of these 265 DriverTags were used in making a recommendation, the odds of that home becoming a loyal viewer of the recommended show were significantly higher.

Viewer satisfaction scores with the recommendations averaged >90%.
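The mechanics of a tag-based recommender like the one described above can be sketched in a few lines. This is a hypothetical illustration, not NCM's actual system: the scoring rule (fraction of a show's DriverTags already present in the home's viewing profile), the show names, and the tag sets are all invented for demonstration.

```python
# Hypothetical sketch of a DriverTag-overlap recommender.
# Assumption: each show is described by a set of tags, and a home's
# profile is the union of tags from shows it has already watched.

def tag_overlap(home_tags, show_tags):
    """Score a show by the fraction of its tags found in the home's profile."""
    if not show_tags:
        return 0.0
    return len(home_tags & show_tags) / len(show_tags)

def recommend(home_history, catalog, top_n=3):
    """Rank unwatched shows by tag overlap with the home's viewing history."""
    home_tags = set().union(*(catalog[s] for s in home_history))
    candidates = [s for s in catalog if s not in home_history]
    return sorted(candidates,
                  key=lambda s: tag_overlap(home_tags, catalog[s]),
                  reverse=True)[:top_n]

# Invented example catalog (tags stand in for DriverTags):
catalog = {
    "Show A": {"heroism", "wit", "loyalty"},
    "Show B": {"wit", "irony"},
    "Show C": {"suspense", "loyalty"},
    "Show D": {"romance"},
}
print(recommend(["Show A"], catalog))
```

A real system would weight tags by their validated correlation with viewing rather than treating them equally, but the overlap-and-rank structure is the core idea.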

DriverTags Associated with Success or Failure

Chart: DriverTag profiles of canceled vs. higher-rated sitcoms

Initial Discoveries 1

Sample Size: 22 new series

Using all traditional metrics plus the two new ScreenSavants metrics (DTags and the ScreenSavants Proprietary Historic Factor) yields a substantial increase in predictive power.

Initial Discoveries 2

Sample Size: 152 new series

The Adjusted R Square (fraction of variance in ratings accounted for, where 1=perfect) was highest for all DTags used together with the ScreenSavants Proprietary Historic Factor.
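The Adjusted R Square cited above penalizes a model for the number of predictors it uses, so adding the DTags and the Historic Factor must earn its keep. A minimal sketch of the standard formula, with an invented R², predictor count, and the 152-series sample size used only to demonstrate the arithmetic:

```python
# Standard Adjusted R Square formula; the input values below are
# illustrative assumptions, not results from the study.

def adjusted_r_squared(r_squared, n, p):
    """Adjust R^2 for p predictors fit on n observations."""
    return 1 - (1 - r_squared) * (n - 1) / (n - p - 1)

# e.g., a hypothetical R^2 of 0.60 from 3 predictors on 152 series:
print(round(adjusted_r_squared(0.60, 152, 3), 3))  # → 0.592
```

Because the adjustment shrinks R² more as p grows, a higher Adjusted R Square for the combined DTags-plus-Historic-Factor model means the extra variables add genuine explanatory power, not just fitted noise.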

Initial Discoveries 3

The R Square (fraction of variance in ratings accounted for, where 1=perfect) was highest for Values DTags in concert with average network ratings; in practice all DTags are used together, yielding an even higher R Squared.



Aha! Moments – Three Studies for NBCU

Property → Core Values → Key Leverage Idea

Schematic of Mature DTag System


NBCU SVP Bryan Mu Talks About His RMT Experiences

Hollywood creatives are used to researchers giving them report cards, not advice on how to get higher ratings, so Bryan Mu was surprised to see how much the creative people liked the DriverTag analyses and the ideas and possibilities they stimulated. Here’s a minute or so of Bryan talking about it.