
- Gorgon the Wonder Cow

A team that controls the perception of hero value has the upper hand from before the game starts until after it ends.

Just like the value of a new stereo system or a burger and fries, the value of a hero is relative to what teams will pay for it.

We want to measure the perceived value of a hero rather than that hero's impact.

This gives us an easy scale of 0 to 1 which tells us how highly a specific team evaluates a specific hero.

A 1 means a hero is banned in the first ban phase of every game, and a 0 means the hero was never drafted or banned.

A hero with a five here has a draft priority about as far above average as Albert Einstein's IQ was above the average IQ.

Value Across the Top Teams

Next, let’s average these scores together.

This will give us an overview of how the uppermost echelon of Dota teams value heroes in this patch.

In other words, a 6.83 coefficient of .5 means a hero is 50% as valued as the most valued hero, and so on.

Hero Performance and Hero Value in 6.83

A hero is rated based on the type of victories he achieves, not just the number.

(thanks again, Nox!)

It removes the bias of winrates, which are distorted by unfair matchups where one team would likely beat another regardless of which heroes were drafted (within reason).

There is a loose trend (a statistically significant 14.4% of the variance in one of these values is explained by the other).



Insider Drafting: Controlling Supply & Demand

There is a relationship between hero performance and top-tier teams' draft preferences, although it isn't clear whether this is because popular heroes are practiced more or because better-performing heroes are more highly prioritized (it's probably a combination of the two).



Pro drafters are not basing decisions on individual hero performance, but rather on much grander concepts, including team synergy, personal preference, and (most notably) popularity.



We know this because our correlation between prioritization and hero performance isn't stronger, meaning that 85% of pro draft variability cannot be explained by performance trends. In fact, there was no significant correlation between a team's draft priority and their own winrates or the general patch winrates.

In the TI4 finals, Newbee drafted Alchemist against Vici Gaming in each of the first three games. VG felt forced to spend a ban on Alchemist in game four (reducing their say in the draft by 20% and allowing Newbee to draft the powerhouse Doom). Alchemist was drafted only 24 times by other teams in the entire tournament, and to mixed results. Yet Zhang 'xiao8' Ning was able to inflate this hero's value to the point that his opponents willingly traded away advantage to remove him.

One drafter's trash is another's $1 million...sometimes literally

Prevalence is not a direct measurement of strength or performance. Value is in group perception.

Top teams tend to be trend setters, and just like a poker player can influence the table by investing chips in a less optimal hand, a Dota drafter can influence the metagame by investing higher draft priority in an average hero.

Once again, I have to give an enormous thanks to datdota.com for making this series possible.

Tweet me any responses or suggestions you have @TheWonderCow.

Postscript

This week we're getting into the nitty-gritty of how heroes are truly evaluated as they are perceived in pro Dota, along with the degree to which those perceptions are grounded in actual hero performance (hint: they are, but not as much as you might think). Explore the meta mind games that make draft trends so interesting... and so hard to predict.

We're going to talk about draft priorities. What we look at in this series is how players think, through the lens of how they act, and we focus on the perception of strength in terms of picks and win rates, even though these numbers are very skewed indicators of performance. What I mean is that heroes are not chosen because they are good; they are chosen because they are seen as good choices for a myriad of reasons that are, for the most part, not based on whether a hero is truly better than other heroes.

The best drafters can not only account for the value that a hero will give their team, but also influence the value other teams give to a hero, indirectly altering drafting patterns and creating advantage. Interestingly, you get a much stronger sense of what a team is likely to draft by looking at what is popular among other teams than by looking at which heroes win the most. The perception of value is a much stronger signal than actual hero performance, and it is not necessarily based on how good the product really is.

So let's really dig into the perception of value. How can we better measure what teams think about heroes? When I was writing about the DAC, I created a metric intended to give an easy estimation of hero impact on the tournament, factoring in bans, picks, wins, and losses. That calculation is amazingly useful for a general projection of hero value, since it gives us both a sense of how much the hero is being seen AND how well the hero is being utilized.
It also uses a 100-point scale which translates into understandable baselines, making it easy to compare and comprehend. Getting a ballpark estimate of how visible a hero's wins, losses, and bans are is often much more useful for metagame analysis than more rigorous measurements of hero performance, because this is the information players are actually experiencing. Here, though, we want to measure perceived value rather than impact, so we can't use that model. Instead, we'll evaluate heroes based on a much more specific measurement of drafting patterns and leave winrate out entirely.

In that spirit, I've designed another simple number: the Hero Draft Index. Any time a hero is drafted or banned by team X, that hero is given points which scale down based on when in the draft he was taken. Points are awarded when opponents ban a hero as well, but no points are awarded if the hero is picked by the other team, because the hero is considered to have been undervalued. Those points are then divided by the total number of games played.

I'll give the exact formula for this coefficient in the postscript for those of you who want to get highly technical. For now, just know that I performed this calculation for patch 6.83 for the currently active top-ranked teams in the world (including Cloud 9* and Team Empire) and found some interesting results. Three points here:

- This measurement identifies team draft preferences much better than a simple pick rate, because signature heroes are often banned (such as Chen for Team Secret) and we are actively accounting for this fact.
- There are definite trends across all of these teams, including five of my seven predicted heroes from my first two Metagame Fortnights (AA, Troll, Io, Batrider, and Axe have all grown to be highly prioritized this patch).
- Note the standard deviation in the third column: it lists how many standard deviations away from average a hero is.
A score of 0 is average, while a score of 5 is five times the typical distance from the average (only factoring in heroes which have been drafted or banned).

To get a more accurate picture of how tier-1 teams evaluate hero worth overall, we give each team an equal weight regardless of how many games they played and average their scores. There are limitations to this measurement, though: because bans are factored in for both teams, picks are somewhat disproportionately undervalued when considering multiple teams at a time. This is fine for our purposes, since bans are more indicative of which heroes are considered overpowered anyway, but it does skew our interpretation of the data a little bit. To adjust for this I created a composite score: the 6.83 coefficient is simply the score achieved by a hero divided by the best score achieved by ANY hero.

Why is this important? Even though most viewers would call this the patch of Vengeful Spirit, Axe, and Juggernaut, Io was the most highly valued hero among top-tier teams. The problem? He's banned so often that teams don't play him, so fans don't get a sense of his prevalence. Shadow Fiend (labelled by his Dota 1 name of Nevermore) has also made a startling impact on the metagame, considering that he was a no-draft hero for the bulk of this patch. These figures were illustrated in the banner for this segment, which shows all the above-average heroes scaled in proportion to their worth.

Remember what I was saying about using this information to find correlations between performance and value? Well, let's do that: my friend Noxville has recently used team Elo rankings to create a modified hero performance index. Basically, when a hero is on a team that wins Elo, the hero wins the same amount of Elo.
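The drafting-pattern machinery in this section — a per-team Hero Draft Index, equal-weight averaging across teams, the 6.83 coefficient, and the standard-deviation column — can be sketched in a few stdlib-only functions. The exact slot weights belong to the formula given in the postscript, so the `SLOT_POINTS` table and the event shapes below are illustrative assumptions, not the article's real numbers:

```python
# Hypothetical sketch of the draft-priority pipeline. SLOT_POINTS is an
# ASSUMED weighting (earlier picks/bans worth more); the article's real
# formula is given separately in its postscript.
from collections import defaultdict
from statistics import mean, pstdev

SLOT_POINTS = {s: 11 - s for s in range(1, 11)}  # assumed: slot 1 -> 10 pts ... slot 10 -> 1 pt

def hero_draft_index(events, team):
    """events: dicts like {"game_id": 1, "actor": "TeamA",
    "action": "pick" | "ban", "hero": "Io", "slot": 1}.
    A hero scores when `team` picks or bans him, or when the opponent
    bans him; opponent *picks* score nothing (the hero was undervalued)."""
    points, games = defaultdict(float), set()
    for ev in events:
        games.add(ev["game_id"])
        if ev["actor"] == team or ev["action"] == "ban":
            points[ev["hero"]] += SLOT_POINTS[ev["slot"]]
    return {h: p / len(games) for h, p in points.items()}

def average_indices(per_team):
    """Equal weight per team, regardless of games played (missing heroes count as 0)."""
    avg = defaultdict(float)
    for idx in per_team:
        for h, v in idx.items():
            avg[h] += v / len(per_team)
    return dict(avg)

def coefficients(avg):
    """6.83 coefficient: a hero's averaged index divided by the best hero's index."""
    best = max(avg.values())
    return {h: v / best for h, v in avg.items()}

def z_scores(avg):
    """Standard deviations from the mean, over drafted/banned heroes only."""
    mu, sd = mean(avg.values()), pstdev(avg.values())
    return {h: (v - mu) / sd for h, v in avg.items()}
```

By construction, the most-valued hero gets a coefficient of exactly 1, and a hero with no recorded picks or bans simply never enters the table, matching the 0-to-1 scale described above.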
Repeat this over hundreds of games and you can see which heroes are impacting Elo changes (instead of just which ones win more games). This helps us differentiate between the heroes who are winning a lot simply because they're being played by good teams and the heroes who are winning a lot because they perform well. What is wonderful about this measurement is that it converts directly to a percentage representing how much better teams have performed with a hero than without him. The example he uses is that when a team uses Clockwerk, they win 1.1% more often than expected. Like any measurement, there are weaknesses and assumptions built in, but it's the strongest indicator of hero-only performance we have readily available. Why use this? Because it awards points in proportion to the caliber of the victory, telling us which heroes are truly performing rather than which are merely picked by stronger teams.

And thus was born the most accurate representation of hero strength vs. hero priority ever measured (I suspect). You'll notice that many heroes below .18 priority sit directly on average win expectancy; that's because heroes prioritized this low typically don't have enough games for a statistically significant measurement of win-expectancy impact.
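The Elo-attribution idea described above reduces to a very small accumulator. This is a minimal sketch under assumed data shapes (the tuple format and function name are mine, not Noxville's actual implementation):

```python
# Minimal sketch of Elo attribution: whatever Elo a team gains or loses
# in a game is credited to every hero it fielded, so heroes are graded
# by the caliber of their victories, not just their count.
from collections import defaultdict

def hero_elo_impact(games):
    """games: (winning_heroes, losing_heroes, elo_delta) tuples, where
    elo_delta is the rating the winner took from the loser."""
    impact = defaultdict(float)
    for winners, losers, delta in games:
        for h in winners:
            impact[h] += delta   # hero shares the team's Elo gain
        for h in losers:
            impact[h] -= delta   # hero shares the team's Elo loss
    return dict(impact)
```

Because an upset moves more Elo than a stomp by the favorite, a hero who keeps appearing in upsets accumulates impact even at a modest game count — exactly the "type of victories, not just the number" point.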
These are largely heroes used as tier-3 picks by top-tier teams but mostly not played by lower-tier and mid-tier teams (or used so sporadically across all brackets that they don't total at least a 5% pick rate).

Here's the biggest takeaway: there obviously needs to be some sort of performance benefit to picking a hero, but far more heroes are viable than are popular; more importantly, many heroes are popular who underperform compared to their peers. The ability to control the metagame perception of value is possibly the most beneficial skill a Dota team can have. This value inflation can create an enormous advantage for teams with specialized players (such as Artour 'Arteezy' Babaev, or Sumail 'SumaiL' Syed Hassan with his Shadow Fiend) who not only manage to succeed with a hero but also convince opponents to spend resources banning or picking that hero as well... even though he leads to a reduction in winrate for most of them.

When people ask me why I think Peter 'ppd' Dager or Clement 'Puppey' Ivanov are such good drafters, my answer has very little to do with their technical knowledge or understanding of hero interactions. All pro drafters have an understanding of these concepts, and most know the strengths and limitations of their own players. What exceptional drafters do is inflate and deflate hero value, not just in one game but across a series of games, to their advantage.

I've been getting fewer suggestions, which hopefully means I'm doing a better job. Or maybe people have stopped caring enough to send me feedback. Either way, if you have an idea for a topic, change, stat, or any general inquiry, send it my way.

This article was written by Gorgon the Wonder Cow, joinDOTA's Elder writer. Gorgon is an analyst and freelance caster for joinDOTA, CEVO, and anywhere needing a fast tongue with top insight.
He is jD's resident "new patch" guy, and has a weekly segment on the Defense of the Patience podcast. Location: Ann Arbor, MI. Follow him @TheWonderCow.

The formula for the Hero Draft Index: (image)

When I say that the correlation between performance and preference is 14.4%, I mean that the R Square score a regression analysis finds is .1442. The Adjusted R Square is .1197 and the P value is .0204 (well within the standard .05 significance level).
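For the statistically inclined, the postscript's arithmetic can be reproduced with two small stdlib-only helpers. The sample data in the usage is made up; only the formulas mirror the article's R Square and Adjusted R Square:

```python
# R^2 for a one-predictor regression is the squared Pearson correlation,
# and Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1) with k = 1.
# Note 1 - R^2 is the unexplained share: 1 - .1442 ~ 85%, as quoted above.
from statistics import mean

def r_squared(x, y):
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def adjusted_r_squared(r2, n, predictors=1):
    return 1 - (1 - r2) * (n - 1) / (n - predictors - 1)
```

With `scipy.stats.linregress` you would get the same `rvalue` (square it for R²) plus the slope's two-sided p-value, which is how a significance level like the article's .0204 is obtained.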