The problem with publishers (and brands) making the wrong assumptions about their metrics is well known, and it often gets in the way of meaningful change. Because how do you convince someone to change their path forward if they are convinced everything is already going well?

There are many examples of this, but there are three specific metrics/assumptions that I come across all the time. So, in this article, I will illustrate why these three specific metrics are misleading. And with each one of them, you will probably realize that you have been using them as well, because we all have.

Before I start though, I just want to point out that I am not going to talk about the obvious problem with Facebook views vs other types of views. We all know that the way different social channels measure views online is completely flawed and does not in any way measure what we think it does.

I have written about this many times before. In 2015, I wrote: "You Can't Compare Facebook and YouTube Views". And, in 2016, I wrote: "A Hard Look at YouTube Views vs TV Ratings".

If you are still comparing social media views, I suggest you read those ... because you really shouldn't be doing that.

But, let's now look at the three problematic metrics:

Social engagement cannot be used for sentiment analysis

The first problem is with how many publishers (and brands) are turning to social engagement in order to understand whether people like a post or not.

Let me give a real example that happened just this week. A Danish footwear brand published a campaign that basically made a mockery of equality. They argued that women needed more than equality because they have to buy more expensive underwear.

Yeah... that has to be the worst argument, ever.

Predictably, many people got very upset about this, and this company suddenly faced a social media backlash. But instead of trying to mitigate the damage being done, they made all the mistakes they could make, and all because their marketing people thought they were doing alright.

First they argued that the campaign was working because:

If the campaign creates dialogue and encourages debate - as it does now - the campaign has already done something for the better.

Yeah... no, that's not how this works. And people obviously didn't buy that argument. Then they did the worst thing any brand can do. They said things like this:

We don't state that this is pro feminism. This is a shoe ad with a very ironic and humorous twist :)

This ad is a commercial for our new spring collection (shoes) - it's very heavy on the irony and the stereotypes to make it obvious that this is in fact, nothing more but a fashion ad.

In other words, they were just using this very important debate about equal pay as a way to get some cheap exposure so that they can sell more shoes.

Here is some simple advice for you. Do not ever do this. As Scott Stratten puts it: "When something bad is happening in the world, you either help or you shut up." You don't try to newsjack important events for the sake of selling more products.

And people's reaction to this was obvious. As one person put it:

Oh please. you tried to jump on a social issue to strengthen your brand (like Always did with fight like a girl) and it backfired massively and now you try to hide behind irony which it clearly isn't. It really is time for you to take responsibility for using a serious topic such as equal pay in this advert which dumb down women and demeans the important message equal pay really is. Time to apologise.

So, why didn't they back down? Why didn't they realize that they were harming their own market? For this particular brand that is critical information, because they have been in deep financial trouble for years.

The reason was that they were looking at their Facebook engagement. As they commented to one person:

We can see that there are around 3200 likes + 344 who laughs + 287 who loves it - so it is clear that there are both some who think it is funny and some that are offended.

And indeed, if I look at their Facebook post (as I was writing this), we see this:

As you can see, 12,000 people have liked the post. 1,700 laughed at it, 1,000 loved it, 614 were angry, 51 were surprised, and only 47 people were sad. In other words, there are about 20 times as many positive engagements as there are negative ones.

As they said (translated):

We created a campaign that could boost awareness for the core audience and which could lead to follow-up tactics for boosting sales. A campaign with a tone that would make it go viral. And we succeeded doing just that.

Sounds good, right?

But this is the mistake that I see so many brands and so many publishers make all the time. There are actually three problems here:

First, you cannot use Facebook engagement for sentiment analysis because most of the people who don't like you won't engage with you. So for every person who expressed a negative sentiment, there may be 100 or 1,000 more that you don't see.

They think most people like it, but they actually have no idea. And from the way people responded to it in the comments and the social backlash it created, I find it far more likely that this campaign was detrimental to them.

The point, however, is that we don't know, because this data is useless for real sentiment analysis.

If you want to do real sentiment analysis, you have to do a very different type of study, one that involves more traditional survey methods so that you don't end up with a skewed audience base. You can also do sentiment analysis online, but only if you expand the time frame and instead look at the overall trend lines. But looking at the specific engagement number for a specific post tells you absolutely nothing.
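To make the trend-line approach concrete, here is a minimal sketch of what it means in practice. The daily mention counts below are entirely invented for illustration (no real campaign data), and the rolling-window approach is just one simple way to look at an overall trend rather than a single post's reactions:

```python
# Hypothetical daily social-listening counts of (positive, negative)
# mentions over two weeks -- invented numbers, for illustration only.
daily_mentions = [
    (120, 15), (110, 18), (130, 22), (125, 30), (140, 45), (150, 80),
    (160, 150), (155, 210), (150, 260), (145, 300), (140, 310), (138, 305),
    (135, 290), (130, 280),
]

def net_sentiment_trend(days, window=7):
    """Rolling net-sentiment share: (pos - neg) / (pos + neg) per window."""
    trend = []
    for i in range(window - 1, len(days)):
        chunk = days[i - window + 1 : i + 1]
        pos = sum(p for p, n in chunk)
        neg = sum(n for p, n in chunk)
        trend.append(round((pos - neg) / (pos + neg), 2))
    return trend

trend = net_sentiment_trend(daily_mentions)
# The trend line turns negative over time, even though every single day
# still produces a healthy number of positive engagements -- exactly the
# signal a per-post 'likes vs angry' snapshot would hide.
print(trend)
```

Note how a snapshot of any single day here still shows plenty of positive reactions; only the trend over the wider time frame reveals the direction things are moving.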

Secondly, the goal of the campaign was to 'boost sales', but I have seen nothing here that would indicate that this is the case. They have no idea if this campaign worked or not, because to measure that you have to look at the actual sales over the period of time following the campaign (in comparison to similar periods/launches in the past).

So for the marketing person to go out and call this a success at this point is bullshit. This person has no idea if this worked or not.

Thirdly, the idea that just because something goes viral also means that it is a success is equally flawed. It's true that in the old world of print, back when we were living in a world of scarcity, the saying "any awareness is good awareness" was kind of true.

But this isn't how the digital world works. In today's world, bad awareness is ... bad!

Let me give you an example. Back in 2014, people were using the hashtag #WhyIStayed to raise awareness about women who stay in abusive relationships. So DiGiorno Pizza tweeted this:

Yeah... that was monumentally stupid. And it became even more idiotic when their social media person admitted that he hadn't checked what the hashtag stood for before using it. He had just noticed it was trending, and then thought he would get in on the action with an ad for Pizza.

As you can probably imagine, this tweet did get a ton of attention and it also went viral. But it was not because people liked it. It was because people couldn't believe how a company could be this stupid.

I see both brands and publishers make this mistake all the time. They look at their social engagement and think something is working when it probably isn't. Worse, they convince themselves that they are on the right track, even though every other metric tells them that they are not.

For publishers, specifically, I see this when we compare how they are doing financially with what their social media focuses on. It's easy to create awareness, to get views, and to get people to like a post. But unless that actually changes your reputation in a way that improves your bottom line, or expands the ways you can be monetized, all that social traffic is just crap.

Don't make this mistake. Don't fool yourself into thinking that you are a success because of vanity metrics that don't mean anything.

Be smarter!

What articles are the most popular?

The second metric we need to talk about is how most publishers fool themselves when they create lists of 'most popular articles'.

The problem is that when publishers create a list of 'most viewed articles', they use flawed ways of measuring views. One being that they neglect to account for 'time', and the other being that not all articles are equal.

Let me explain.

What most publishers do is that once per day, week or month (depending on the publisher), they create a list of most popular articles. For instance, you might look at the most popular articles for the month of January 2017, and you would see something like this:

As you can see, 'Article 2' was the most viewed article of all, while 'Article 10' was in 8th place. But look at what happens if we add when each article was published:

Now you see that "Article 2" was posted 30 days ago, while "Article 10" has only been online for 3 days.

You see the problem here?

This list is crap. You can't compare views of articles without also taking into account how long they have been online. "Article 10" might end up being the most popular article of all if you give it the same time as "Article 2" has had.

Lists like this one are completely misleading, because as publishers we don't publish online once per month. We publish a continuous flow of content, which means that we also have to measure it that way. You are using ecommerce metrics to measure something that isn't ecommerce.
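A simple way to correct for this is to normalize by time online before ranking. The sketch below uses invented view counts (the article names and the 30-day vs 3-day gap come from the example above, but the numbers themselves are hypothetical):

```python
# Hypothetical view counts -- invented numbers mirroring the example of
# 'Article 2' (online 30 days) vs 'Article 10' (online 3 days).
articles = {
    "Article 2":  {"views": 90_000, "days_online": 30},
    "Article 10": {"views": 45_000, "days_online": 3},
}

def rank_by_views_per_day(articles):
    """Re-rank articles by views per day online instead of raw totals."""
    rates = {name: a["views"] / a["days_online"] for name, a in articles.items()}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

print(rank_by_views_per_day(articles))
# → [('Article 10', 15000.0), ('Article 2', 3000.0)]
```

On raw totals, "Article 2" wins by a factor of two. Per day online, "Article 10" is outperforming it five to one, which is the opposite conclusion from the one the monthly list suggests.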

Don't do this!

But this is only half the problem. The other half is that different articles perform in different ways.

One way to see this is to go over to Chartbeat and look at their list of most popular stories in 2016. Notice the black line underneath each one illustrating how each performed.

You see how different these are? Some articles grow very slowly, others start out with a massive peak and then die out. Other articles just continue along with a steady stream of views. Some have brief spikes interrupted by long periods of nothingness, and some have just a single spike of traffic and then disappear completely.

Each one of these signifies different types of behavior, different moments, and different needs. The last article, about Usain Bolt, shows the usual performance for a news article that people don't really care about. It's the type of article people see and then forget about.

The article about baby names is the usual 'search' article, used by people looking for baby names and coming across this one. As such, it continually performs, but only for a very specific moment.

The election article is very different as well because it ends so suddenly, and the article about Pokémon Go is a classic article where you tap into a momentary trend, and then it stays useful for a while until the trend dies out.

Imagine measuring this per month. What you would get is a divided view like this:

You see the problem here?

What we need to do instead is to first understand that different types of content perform differently because they match different types of moments and different types of audiences. So categorize your content based on what behavior you see.

Then, instead of measuring it per month, measure it in terms of the lifetime performance of each article.
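As a minimal sketch of what "lifetime performance" could look like, the code below compares two invented per-day view series: a one-off news spike and a steady evergreen article. The peak-day share is just one crude, hypothetical signal for telling a spike apart from a steady performer:

```python
# Hypothetical per-day view series for two articles -- invented numbers
# illustrating a news spike vs an evergreen performer.
daily_views = {
    "news spike": [50_000, 8_000, 1_000, 500, 200, 100, 50],
    "evergreen":  [2_000, 2_100, 1_900, 2_000, 2_200, 2_100, 2_000],
}

def lifetime_profile(series):
    """Lifetime total plus the share of views landing on the peak day --
    a crude signal for 'spike' vs 'steady' behavior."""
    total = sum(series)
    peak_share = max(series) / total
    return {"lifetime_views": total, "peak_day_share": round(peak_share, 2)}

for name, series in daily_views.items():
    # The spike concentrates most of its lifetime views in a single day,
    # while the evergreen spreads them evenly across its whole life.
    print(name, lifetime_profile(series))
```

In a monthly list, the spike would dominate the month it landed in and the evergreen would look mediocre every month, even though over its lifetime the evergreen may keep delivering long after the spike is forgotten.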

Almost every publisher is fooling themselves because they are looking at lists that don't actually tell them what they think they do.

Please stop using monthly article lists.

The 'stupid but fun' trap

The last flawed metric I want to highlight isn't really a metric as such, but more of a problem that is exacerbated by other metrics. I call it the 'stupid but fun' trap.

We have all seen how publishers have been able to boost short term traffic by focusing on shallow social tactics, often in the form of completely stupid but fun posts (like memes, funny pictures, or people acting like idiots). This is a phenomenon that seems to work, but really doesn't.

Think about it like this: Imagine that you have two YouTubers who have both created channels about food.

One channel is by a person who is coming up with new and exciting ways to do food. For instance, you might see a bread recipe that uses ingredients or methods that you haven't considered before, or maybe you will learn how to bake a cake using Earl Grey tea as one of the ingredients.

In other words, it's new, exciting, creative, inspiring and just awesome!

Think about what you could do with such a channel, and with a reputation of being this awesome. You could extend it into selling cookbooks. You could do very interesting brand partnerships that are about more than just random exposure. You could do so many things.

Right?

The other channel is also about food, but here the people creating it don't really know what they are doing. Instead, each video is about how stupid they can be. How they are seemingly able to mess up even the simplest recipes, but at the same time do it in a fun way. It's the 'stupid but fun' focus.

This channel isn't inspiring in any way. But it is fun to watch in the same way as millions of people think it's fun to watch reality TV stars.

So we have these two choices:

Now think about what we can do with this. First, let's talk about traffic. Which one of these two is more likely to generate the most views, the most Facebook likes, and the most shares?

Well, the answer to this is almost always the 'stupid but fun' focus, because this is what people spend their time on when they are having a low-intent micro-moment. So, 'stupid but fun' is better at driving traffic.

The next question is which channel is best at driving low-intent advertising exposure at scale. Again, the answer is often the 'stupid but fun' channel, because low-intent advertising depends on low-intent snacking. So those two are linked together.

So, a lot of publishers start to do the 'stupid but fun' editorial focus because it gives that scalable exposure.

But now let me ask you about actual monetization potential. If you are a brand looking to generate ROI from your ad campaigns, which one is better? Massive exposure at scale, but from a channel where people don't really care about the content and are mostly just watching because they are bored ...or... influencer based advertising on high-intent channels that focus on inspiring people?

The answer, obviously, is that real ROI is more likely to come from the inspiring channels that people watch because they want to rather than the 'stupid but fun' channels that people merely watch because they are bored.

What about other products? Imagine that you wanted to publish a cookbook. In this case, the awesome and inspiring channel will have a fairly easy time selling their book, because they have built up an audience that cares and would love to see more great ideas.

For the 'stupid but fun' channel, however, you can't really monetize it. Because why would people buy a cookbook from someone who clearly doesn't know how to cook?

BTW: The exception here is if someone reaches the super-celebrity status. But 99.9% of the publishers in the world will never reach that level.

It's the same with subscriptions, donations, and memberships. People are much more willing to pay those who are awesome and where the interaction is based on a high-intent macro-moment. At the same time, people are not very likely to subscribe to someone just doing stupid things for fun, because that's not really something people care about in that way.

This is why I call it the 'stupid but fun' trap. It's a trap because while it often leads to more traffic, it also completely decimates your monetization options. 'Stupid but fun' content can only be monetized by low-end advertising exposure at scale, while inspiring and amazing content opens up a wide range of possible options, albeit at lower traffic reach.

And, if we look at the trends, having multiple income streams is absolutely critical for future publishers. With the continual decline in ad revenue per view, having more options is the only way forward.

So, don't fall into the 'stupid but fun' trap. It's often a dead end. Even BuzzFeed has discovered that doing inspiring and amazing social content works better than shallow listicles.

Three flawed metrics

These are the three flawed metrics that I come across all the time:

The problem of looking at social engagement and thinking it can be used for sentiment analysis.

The problem with measuring the most popular content and not realizing that defining it per month makes no sense.

The problem of optimizing for shallow content, where the increased traffic blinds you to the monetization options you are losing.

The problem with all of these is that once a publisher (or a brand) convinces itself that this is working, it's very hard to get them to change. Because why change something that looks like it works?

So, look at what you measure and ask yourself if it really means what you think it means. Try looking for the things that you can't see. Ask yourself, what is it that I don't know here? And take a step back and see things for what they are.