April 4, 2019

Jess (not her real name) is a U.K.-based fashion influencer with 230,000 Instagram followers. She worked with 22 different brands in 2018 and charged $1,000 per post.

Those brands didn’t realize that 96 percent of Jess’s engagement is fake, the result of a bot farm she allegedly paid at a rate of $2 for every 1,000 likes, comments or shares. That means each company Jess worked with likely wasted up to $960 of every $1,000 post.

It’s called “influencer fraud,” a practice spawned after the digitalization of influencer marketing -- the latter of which has, in one iteration or another, been around for hundreds of years. Prominent online personalities purchase fake engagement via bots -- pieces of software designed to automatically like, comment and share social media posts. The other route is to join a community of real users that allows people to “trade” engagement back and forth (e.g., commenting on or liking 250 posts from others in the community to receive 250 comments or likes on your own posts).

Fake followers are another enduring issue on social media platforms, but they’re easier to track and identify -- simply compare an account’s number of followers to its average engagement rate (number of likes, comments and/or shares per post). Fraudulent engagement, on the other hand, can cost companies millions -- especially in a market projected to be worth $5 to $10 billion by 2020. Until now, there was no real way to track it.
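The follower-count check described above can be sketched in a few lines. The thresholds and function names below are illustrative assumptions, not the methodology of any real tool; the point is that an engagement *rate* can look plausible even when the engagement itself is purchased, which is why fraudulent engagement is harder to catch than fake followers.

```python
def engagement_rate(followers, interactions_per_post):
    """Average likes + comments + shares per post, as a share of followers."""
    return interactions_per_post / followers

def looks_suspicious(followers, interactions_per_post, low=0.005, high=0.20):
    """Flag accounts whose rate falls far outside a typical organic band.

    `low` and `high` are hypothetical bounds: a very low rate suggests
    purchased followers; an implausibly high rate suggests purchased
    engagement.
    """
    rate = engagement_rate(followers, interactions_per_post)
    return rate < low or rate > high

# 10,000 followers with 500 interactions per post is a 5 percent rate --
# well inside a plausible band, so this check alone would not flag it.
print(looks_suspicious(10_000, 500))   # False
# 230,000 followers averaging only 120 interactions per post is a red flag.
print(looks_suspicious(230_000, 120))  # True
```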

Enter Like-Wise. Launched in the U.S. in January 2019, it’s billed as the first tool using AI to comb through influencers’ profiles to detect discrepancies in engagement. It’s a useful way for brands to verify they’re paying for real eyes on their products when they enter into an ad partnership with an influencer, and Like-Wise now counts Amazon, FIFA, TikTok, Disney, Nokia, DreamWorks, NBCUniversal, Superdry, Häagen-Dazs and more among its users.

Like-Wise collected data from bot farms to build a database of tens of millions of profiles that generate fake engagement on influencers’ pages. The tool uses machine learning to cross-reference those profiles with hundreds of thousands of influencers' accounts, flag suspicious activity and generate an “engagement graph” that compares the influencer’s engagement over time with an organic engagement curve.
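At its simplest, the cross-referencing step described above is a lookup: compare the accounts engaging with an influencer’s posts against a database of known fake-engagement profiles. The data, names and 25 percent flagging threshold below are invented for illustration; the real tool layers machine learning on top of this kind of matching.

```python
# Hypothetical database of profiles known to generate fake engagement.
known_bot_profiles = {"bot_001", "bot_002", "farm_account_9"}

# Hypothetical accounts that engaged with two of an influencer's posts.
post_engagers = {
    "post_a": {"alice", "bot_001", "bot_002", "carol"},
    "post_b": {"alice", "dave", "erin"},
}

def fake_share(engagers, bot_db):
    """Fraction of a post's engagers that match known bot profiles."""
    if not engagers:
        return 0.0
    return len(engagers & bot_db) / len(engagers)

for post, engagers in post_engagers.items():
    share = fake_share(engagers, known_bot_profiles)
    flag = "SUSPICIOUS" if share > 0.25 else "ok"
    print(f"{post}: {share:.0%} matched -> {flag}")
```

Plotting this share post-by-post over time is, in effect, the “engagement graph” the article mentions: organic accounts should hover near zero, while manipulated ones spike.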

“Upon launching the tool, we had over a thousand inquiries in the space of three days,” said Oliver Yonchev, managing director of the U.S. arm of Social Chain, the social media marketing agency that developed Like-Wise. “It was clear to us that it was a systemic issue and people really cared about it.”

Social Chain acts as a middleman in booking about a thousand influencers per month to work with its clients and originally launched Like-Wise for in-house use. Now, they’ll use the tool as an “insurance policy” of sorts for their clients, usually charging the influencer 5 percent of his or her own fee to run the audit (akin to paying for your own background check). The company will also conduct an influencer audit for anyone who asks, client or not -- costs range from $7,250 for up to 50 influencers to $35,000 for up to 400. Social Chain said that following the launch of Like-Wise, they expect influencer marketing-related activity to account for 40 percent of their business by year-end, but the company declined to share specifics on revenue, citing an audit valuation ahead of a public offering in 2020.

This interview has been edited for length and clarity.

Your technology is among the first of its kind. What made you aware of this problem, and how did you decide to develop your own tool to assess it?

We've seen the influencer marketing industry develop in shifts over the past couple of years. We often refer to it as the Wild Wild West because when I first started working in social media, it was unregulated -- people were able to exploit it, and brands that were brave and quite aggressive achieved tremendous success. Last year was the first time we heard the term “influencer fraud,” and as people that live and breathe social media in every facet 24 hours a day, it was quite a shocking term. As the industry grew, we started to notice disparities in engagement numbers, and we knew there was no tool on the market that allowed us to understand whether engagement was being manipulated. We’re not a tech business, but we worked with developers to build a piece of software to bring this tool to life -- coming up with an index of what real engagement looks like, surveying a decent sample size and looking at about eight posts from each account to get an indicative benchmark of their engagement curve and velocity. Then, we used data from all the influencers in our system to calculate the variance. Our tool is the first of its kind, and I think that’s because most other software suites are built by tech companies without this industry insight.

What surprised you most about your results?

We were shocked to find that a lot of influencers our brands were working with at the time appeared to have very abnormal growth patterns. We also found that almost a quarter of the influencers we’d looked at -- about 24 percent -- had manipulated their engagement at some point. There was a real variety: Some manipulated just slightly on their sponsored posts, while in extreme cases, engagement was manipulated up to about 95 percent. That meant that 95 percent of a company’s marketing spend in working with those influencers was wasted. For example, let’s say that in an extreme case, a talent had 10,000 followers and she generated 500 likes, comments and shares on a post -- we could determine, based on growth patterns, that 95 percent of that was manipulated and only 5 percent was real engagement. That’s where this starts to become a real problem, and we have an ethical responsibility to help brands navigate that.
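The arithmetic behind the examples above is simple: whatever share of an influencer’s engagement is manipulated, the same share of the sponsor’s fee is effectively wasted. A minimal worked version, using the figures from the article:

```python
def wasted_spend(fee, fake_share):
    """Portion of a sponsorship fee spent on manipulated engagement."""
    return fee * fake_share

# Jess's case from the opening: $1,000 per post, 96 percent fake.
print(wasted_spend(1_000, 0.96))  # 960.0

# The extreme case above: 95 percent of spend wasted per $1,000 post.
print(wasted_spend(1_000, 0.95))  # 950.0

# Of the 500 interactions in that example, only 5 percent were real.
print(500 * 0.05)  # 25.0
```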

You mentioned the industries you looked at may have skewed the data. Can you elaborate?

We work with a lot of retailers in the fashion industry, so our sample size is skewed toward that sector -- especially since we noted that this is a systemic problem in fashion and beauty. The fitness industry also indexed quite high. As you look at other industries -- for example, gaming and such -- you see the manipulation numbers drop. So I would say although our results suggest we can estimate about 25 percent of influencers have manipulated their audience numbers at some point, I think that a fairer reflection across all sectors is closer to 10 percent.

How will you treat influencers who likely used to buy followers or engagement but have since stopped that practice?

A lot of influencers are young, impressionable people, and in many cases, people that may have manipulated their numbers at some point are no longer doing it. When we started this, we took a stance that we wouldn’t make this about individuals but rather make it a collective industry issue to spotlight that we need to try to work toward a better process. When we look at an influencer’s public profile, we can only see real-time data because of the API lockdown and to be compliant with data law. We look at a certain volume of posts, and then -- this is where the manual work comes in -- we reference them against historic data. If there’s a strong indication that they’ve manipulated their engagement in the past but are no longer doing that, we’ll note in our report that we firmly believe that manipulation is no longer an issue.

How will this technology affect the industry as a whole?

One important question right now in the influencer marketing industry is: How do we protect ourselves? Besides influencer marketing, there’s been manipulation in most emerging digital industries, such as paid backlinks for SEO [when a marketing company pays a writer to use stealth marketing by adding specific links in a story to improve its clients’ search engine rankings]. What we’ve seen in the past is that when the platforms themselves take a stance, the providers do, too -- just like when Google took a stand on paid backlinks for SEO and, as a result, the advertising industry did as well. I think we’re in the infancy of the advertising and influencer marketing industries taking a stand on this issue, because it’s for the greater good. I also think third-party software providers will create similar tools and layers of protection, and over the next 12 months, agencies will become more vigilant, especially when it comes to the question: How do we measure this? As the media world shifts, it’s important to be able to substantiate effectiveness beyond likes and engagement -- that’s just one metric.