In his post “How to Turn Tweets Into Ratings Points”, Zach Rosenberg outlines a formula to convert tweets into some form of T.R.P. equivalent. It goes something like this:

Number of Tweets x Average # of Twitter Followers x % seeing the tweet x Frequency / Twitter Universe x Premium (due to self-selection) = T.R.P.

Zach’s “real-world example” uses the following: 200,000 Tweets about a TV show result in a T.R.P. of 65 points. There’s a catch, though – most of Zach’s “formula” consists of assumptions:

- Average # of Twitter Followers – Zach assumes this is 136.
- % seeing the tweet – Zach assumes this is 5%.
- Frequency – Zach assumes this is 1.
- Premium (due to self-selection) – Zach assumes this is 10.
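Plugged into code, the formula and its assumed constants look like the sketch below. Note two things that are my own placeholders, not Zach’s: the Twitter Universe size (his post doesn’t pin one down in the excerpt above, so 20 million is an invented value) and the ×100 scaling to express the share of the universe as “points”.

```python
def tweet_trp(tweets, avg_followers, pct_seeing, frequency, universe, premium):
    """Zach's formula: tweet impressions as a share of the Twitter universe,
    scaled by a self-selection premium, expressed in rating points."""
    impressions = tweets * avg_followers * pct_seeing * frequency
    return impressions / universe * premium * 100  # x100 turns the share into "points"

# Zach's assumed constants; the 20M universe is a hypothetical input, not his
trp = tweet_trp(tweets=200_000, avg_followers=136, pct_seeing=0.05,
                frequency=1, universe=20_000_000, premium=10)
# → 68.0 points with these placeholder inputs
```

With those placeholders the result lands near Zach’s 65-point example, but that only underlines the problem: the output moves wherever the assumed constants move it.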

Normally I’d be excited about every effort to develop a cross-media metric – digital marketers need cross-media metrics to compare, plan, and allocate budget across media. This isn’t a cross-media metric though – it’s a formula that Zach created. Formulas are great – when they give us metrics that have value. Calculated metrics like Zach’s are valuable when they serve one of two purposes:

- **Reflect the real world**. If a calculated metric does a better job of reflecting the real world than the standard metrics you have available to you, then it is useful.
- **Predict the future**. If a calculated metric is a leading indicator of your business or project, it can predict the future, and it’s worth calculating. For instance, Twitter “moods” can predict the movement of the Dow Jones Industrial Average, up to six days in advance.

You need evidence to prove that a calculated metric (especially this one, where every input is constant) actually reflects the real world or predicts the future. There is a simple (not easy, just simple) way to do so:

Study what would have happened in the past with your metric, and compare it against the actual results.

Here’s how you would do that:

- Calculate the metric for several historical events, in this case TV shows. The more outside factors that could be “correlation not causation” for the outcome you’re interested in, the more calculations you’ll need to do. A good rule of thumb is to use 15 events for every outside factor, plus 15 for your metric.
- Collect the data on the outside factors that relate to the events you’re measuring.
- Compare the results of your calculated metric to the outcomes you’re trying to predict, controlling for the outside factors. You can do this using a formula, but you’re best off with a linear (or non-linear) regression.
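The steps above can be sketched with NumPy’s least-squares solver. Everything here is invented for illustration – the synthetic show data, the two outside factors, and the coefficients – the point is only the shape of the backtest: regress actual outcomes on the calculated metric while controlling for outside factors, then check how much variance the fit explains.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shows = 45  # rule of thumb: 15 per outside factor (2 here) + 15 for the metric

# Hypothetical historical data: the calculated metric plus two outside factors
calc_metric = rng.uniform(10, 80, n_shows)   # Zach-style tweet T.R.P. per show
time_slot   = rng.integers(0, 2, n_shows)    # outside factor: prime time or not
ad_spend    = rng.uniform(0, 5, n_shows)     # outside factor: promo budget ($M)

# Synthetic "actual" outcome (e.g. measured ratings) with noise, for the demo
actual_trp = 0.6 * calc_metric + 8 * time_slot + 2 * ad_spend \
             + rng.normal(0, 5, n_shows)

# Linear regression: actual_trp ~ calc_metric + time_slot + ad_spend + intercept
X = np.column_stack([calc_metric, time_slot, ad_spend, np.ones(n_shows)])
coefs, *_ = np.linalg.lstsq(X, actual_trp, rcond=None)

# R^2: how much of the real outcome the metric (plus controls) explains
residuals = actual_trp - X @ coefs
r_squared = 1 - residuals.var() / actual_trp.var()
```

If the coefficient on the calculated metric is near zero, or R² barely improves when the metric is added to the controls, the formula isn’t telling you anything the outside factors don’t already.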

So here’s what gets me about Zach’s formula – there’s no evidence that it connects to any outcomes that we’re interested in. The work involved in developing metrics is not in the formula creation, it’s in proving whether or not the formula is useful. If the formula’s not useful, why use it?

All of this points to the differences between *deductive* and *inductive* reasoning:

- Deductive Reasoning: The conclusion follows directly from the premises.
- Inductive Reasoning: The conclusion is supported by the premises, but doesn’t necessarily follow from them.

Zach’s formula is inductive, not deductive. Someone who is interested in the formula should do the work to assess if that number of tweets is a reasonable equivalent to the TV T.R.P.s. Awareness and Recall surveys are good tools to start with, as most traditional media metrics have been heavily influenced by survey usage.