Determining the best deck to play in a tournament takes testing, research, and preparation. All of this is done to gather data to help you make the right deck choice for the event. The amount of data you gather, the manner in which you gather it, the types of data you gather, and how you interpret the data all play a crucial but often overlooked role in whether you make the right choice and are prepared to play that deck optimally. The key to dissecting your data to your best advantage is to understand the things that bias your data and to take those biases into account.

Sample Size

The larger your sample size, the more reliable your data. With big enough sample sizes, other types of data biases almost wouldn't matter. Unfortunately, few people can afford to play forty or more hours a week to generate the kind of sample sizes needed for overwhelming evidence in every matchup. There are two critical things to remember about sample size. The first is that the more data you can gather, the better. This leads to tough choices: Do you do a small amount of testing of a large number of matchups, or a large amount of testing of a small number of matchups?

Your knowledge of the metagame is the key to helping you find the right balance here. The second important thing to remember about sample size is not to overreact to data gathered from a small sample size, which will usually be most of your data. As you become a more experienced Magic player and tester, you will start to get a feel for how to find the most accurate information from the sample size that you have. For example, you'll become able to lose a game and yet realize that the information you gathered from that game actually indicates that the matchup is probably favorable.
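To get a feel for just how noisy a small sample is, here is a minimal sketch in Python. The records are invented and the `wilson_interval` helper is mine, not something from the article; it simply puts a 95% confidence interval around an observed win rate.

```python
# Illustrative sketch: uncertainty on a matchup win rate at small sample
# sizes, using a 95% Wilson score interval. All records are made up.
import math

def wilson_interval(wins, games, z=1.96):
    """Return a (low, high) 95% confidence interval for wins/games."""
    if games == 0:
        return (0.0, 1.0)
    p = wins / games
    denom = 1 + z**2 / games
    center = (p + z**2 / (2 * games)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / games + z**2 / (4 * games**2))
    return (center - margin, center + margin)

# Winning 6 of 10 games looks favorable, but the interval is enormous:
low, high = wilson_interval(6, 10)
print(f"6-4 record: true win rate plausibly {low:.0%} to {high:.0%}")

# The same 60% win rate over 100 games is far more trustworthy:
low, high = wilson_interval(60, 100)
print(f"60-40 record: true win rate plausibly {low:.0%} to {high:.0%}")
```

A 6-4 record is consistent with anything from a clearly unfavorable matchup to a clearly favorable one, which is exactly why overreacting to a handful of games is dangerous.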

Piloting Bias

When I was working with Team Your Move Games to prepare for Pro Tours, we called this "Humping" the testing. If you gave Dave Humpherys a good deck, he could beat most players with it even if the matchup should have been a bad one. Dave was such a skilled player that he created false data by finding ways to win games that his deck shouldn't have. This was one of the reasons it was so great to have Hall of Famers like Dave and Rob Dougherty, and we-hope-to-be-soon Hall of Famer Justin Gary, to test with. When I relied on the testing results of other players to tell me how various matchups fared, I was often misled. My best bet was always testing a deck for myself and against elite competition. The important thing to realize about piloting bias is that who's playing what deck will often influence testing data. Often, your best defense is to have players frequently exchange who's piloting each deck.

Metagame Bias

Many people fail to keep up with the speed at which the metagame changes, especially the Standard metagame. The decks you test with and against during one week will often be different than the decks that are ruling the metagame the next week. Thanks to Magic Online, the world shapes and responds to the metagame faster than ever. Sure, maybe you've tweaked your deck to beat Delver with Swords and Invisible Stalkers, but what if now everyone's playing Angels and Pikes? Or worse yet, what if people are playing a different deck entirely, like Zombies or Naya Pod? Your testing loses much of its validity if you're testing against the wrong metagame.

Matchup Bias

Itâs important to test against a lot of matchups. Many people think they only need to test against what they perceive as their bad matchups. Worse yet are the people who stop testing without getting to their bad matchups. You need to determine what your good and bad matchups are. Learn how to play and sideboard the good matchups so that you can count on winning them. Learn the tricks to improving the bad matchups so that you always have a realistic shot of winning them. Recognize that if you have too many bad matchups, youâre playing the wrong deck.

False Result Bias

Many testers are pure stat junkies: How many wins and how many losses do you have in each matchup? Unless you have a prodigious sample size, this method will be greatly hurt by virtual false results. It's easy to complete a game without its result being a meaningful piece of data. What if you were mana screwed or mana flooded? What if you had to mulligan to four? What if your opponent top-decks a one-outer the turn before you were going to win? What if you have a bad draw and the opponent draws the one card in his deck that gives yours trouble? Do these wins and losses deserve to be given equal weight to more typical games? This data needs to either be accounted for or, more likely, disregarded completely. It's important to take note of mana problems and mulligans to see if you need to fix your deck, but it's not data that will help you understand the dynamics of a particular matchup.
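One way to picture this is to flag the obviously non-representative games and read the record both ways. This is only an illustrative sketch; the game records and flag names here are invented, not a real tracking format:

```python
# Illustrative sketch: discarding "false result" games before reading a
# matchup record. The records and flags below are made-up examples.
games = [
    {"won": True,  "mana_issue": False, "mull_to_four": False},
    {"won": False, "mana_issue": True,  "mull_to_four": False},  # mana screw
    {"won": True,  "mana_issue": False, "mull_to_four": False},
    {"won": False, "mana_issue": False, "mull_to_four": True},   # mull to 4
    {"won": False, "mana_issue": False, "mull_to_four": False},
]

# The raw record counts everything and looks unfavorable: 2-3.
raw_wins = sum(g["won"] for g in games)
print(f"raw record: {raw_wins}-{len(games) - raw_wins}")

# Discarding games decided by mana problems or extreme mulligans leaves
# a record that says more about the matchup's actual dynamics: 2-1.
clean = [g for g in games if not (g["mana_issue"] or g["mull_to_four"])]
clean_wins = sum(g["won"] for g in clean)
print(f"filtered record: {clean_wins}-{len(clean) - clean_wins}")
```

The mana-problem games still get noted, just separately, as input for fixing the deck's mana base rather than for judging the matchup.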

Theory Bias

Theory is a useful starting point, but it's no substitute for actual testing. This is especially true given how often the difficulty of a matchup will change radically depending on the approach you take when playing it. For example, perhaps Delver will seem like a bad matchup for your deck in theory because you just can't keep up with Delver's high-tempo aggression, but after some testing, you find yourself able to beat Delver by taking a more defensive, long-game approach to the matchup.

Version Bias

Sometimes, testers will play a matchup a few times and determine that the matchup is so bad and so important that the decks they're testing need to be abandoned. This will often have merit; it makes no sense to keep testing a deck that's just bad in your metagame. However, sometimes the matchup can be easily fixed by changing the version of the deck you're testing. Choosing the right version of a deck to play is almost as important as choosing the right archetype in the first place.

Emotional Bias

Letâs face it: Itâs really easy to become emotionally attached to a deck. I donât just mean your Angels and Unicorns deck either. There was a fellow who calls himself Benzo who used to hang out at Your Move Games. He only ever played black or red. His motto was, âMore Swamps means more death, and more Mountains means more violence!â He even liked to play in local competitive events such as Pro Tour Qualifiers and States. The problem was there wasnât always a good red or black deck for a given metagame or environment. Of course, when Buried Alive was the best deck in Standard, he went on to be Massachusetts state champion that year. Many other events saw him riding much lower in the standings, of course, while preaching to his opponents about the merits of death and violence. Itâs okay to have a pet deck as long as you recognize thatâs what it is and that always playing it may be limiting your ability to win events.

Peer Pressure

It can be tempting to let others decide for you what to think about things, and Magic is no exception. Perhaps you're part of a big testing group whose members all like to play the same deck. Perhaps you're worried about playing an unpopular deck and feeling embarrassed if you do poorly with it. After all, if you do poorly with a deck that's popular with your testing group or on the Internet, you'll at least avoid criticism of your deck choice. Just remember that peer pressure isn't data. It might make your life easier to submit to it, but it isn't a good way of determining which deck gives you the best chance of winning.

Sideboard Testing

Itâs seems only appropriate to leave sideboard testing for last, since thatâs what most people do in real lifeâ.â.â.âif they test with sideboards at all. Skipping sideboarded testing is a classic time saver that Iâm frequently guilty of, but itâs also an important type of data to gather for the deck-selection process. More than half the games you play will be with sideboards, so how realistic is it to only test how your deck does before sideboarding? Iâve played plenty of decks that were bad in a particular matchup before sideboarding but that became a great matchup after changing seven or eight cards. If youâre serious about gathering the best data for your deck selection, be sure to play a lot of sideboarded games.

Wrapping Up

In many cases, it's difficult to completely avoid the influence of these biases on your testing. That's okay; that's part of the game. What you need to do is be aware of the biases and take them into account when you're parsing your testing data. Given the unlikelihood of having a statistically compelling sample size in your testing, the thing you need to concentrate on is getting a feel for the various matchups. It's okay to base some of your decisions on theory as long as you have some testing and data to back that theory up. Finally, remember that there is a direct correlation between the amount of time you put into preparing for an event and your ability to maximize your results in that event.