Sealed Standard is the flavor of the game where you open six booster packs of random cards and build a 40-card deck from whatever you get (plus basic lands). Then you play other people who did the same thing. It’s a great way to play without all of the commitment, and a great way to get to know the newest iteration of the game.

After a two-decade hiatus, I started playing again, online and at my local game store. To no one’s surprise, the game has changed a bit over two decades. While I enjoy playing the game, I had no interest in studying up and no interest in filling in two decades of a missing collection. I was just trying to have fun. So I played the flavor of the game that only uses a limited set of the most recent cards: Sealed Standard.

Two years ago I was in the midst of starting a gut rehab on my home, starting a new job, and ramping up PERRO’s lead-in-the-water campaign. Someone suggested that it might be good to relax by doing something that wasn’t productive. In the midst of this presidency, a lot of activists are getting burnt out, and the reaction to that has been to remind people that self-care is an act of political warfare. I had just heard a Planet Money podcast about Magic, and I was blown away that it was still huge!

What follows is an algorithm for building an optimal Magic: The Gathering Standard sealed deck using linear programming. I also wrote up a quadratic programming version of it, but getting it to work required proprietary libraries… I’m sure my employer will look forward to that iteration. If you want to see the code, add to it, or modify it, it’s on my GitHub.

Seriously though, in my last five years in industry, these algorithms have blown people away. They’ve generated so much revenue that it’s clear to me that these algorithms, and this field, need more attention! And the release of the Guilds of Ravnica set provided a great opportunity.

Womp, womp. No beer money for me! Regardless, it’s interesting enough that I figured if I can’t make money off of it, I should at least open-source it for a teachable moment. And after all, in these times of Nazi resurgences, we need to teach people how to beat Nazis again!

So I wrote the algorithm up, played around with it, made it into an app, and posted it on the Magic subreddit in hopes that others might find it useful and I could make a little cash on the side… that didn’t happen. It was pointed out that using this algorithm in a competitive setting is against the rules; specifically, tournament rules sections 2.11 and 2.12.

After some time I realized that building a reasonable deck from a random assortment of cards is an optimization problem of the variety I studied while getting my Master’s degree in game theory. It’s also very, very similar to a previous blog post of mine on selecting a daily fantasy football team.

The Algorithm

If you’re really interested but don’t know about linear programming, there are plenty of resources online. Here’s Wikipedia to get you started.

This algorithm isn’t going to make you a pro, but it will help you be competitive. This is also a very basic implementation of the algorithm. It doesn’t account for interactions between cards; e.g., lifelink cards won’t be valued more if you include Dawn of Hope in your deck.

Last time I had a linear implementation of this, but this is more naturally solved using quadratic programming.
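To make the linear version concrete, here’s a minimal sketch of the deck-building LP using the `lpSolve` package (assumed installed). The card scores and colors are made up for illustration, and the real model needs many more constraints (deck size of 23 spells plus lands, mana curve, creature counts); this toy picks five spells from a fixed two-color pair.

```r
# Minimal deck-building LP sketch. Scores and colors below are invented;
# lpSolve is assumed to be installed.
library(lpSolve)

score <- c(4.5, 3.0, 2.5, 4.0, 1.5, 3.5, 2.0, 4.2, 1.0, 3.8)
color <- c("W", "W", "W", "U", "U", "U", "B", "B", "R", "G")
n <- length(score)

# Decision variable x_i = 1 if card i makes the deck, 0 otherwise.
# Constraints: exactly 5 spells, and zero cards from the off colors
# (here we commit to playing white/blue, so B, R, and G are zeroed out).
const.mat <- rbind(rep(1, n),
                   as.numeric(color == "B"),
                   as.numeric(color == "R"),
                   as.numeric(color == "G"))
const.dir <- c("==", "<=", "<=", "<=")
const.rhs <- c(5, 0, 0, 0)

# Maximize total score subject to the constraints, binary variables.
sol <- lp("max", score, const.mat, const.dir, const.rhs, all.bin = TRUE)
sol$objval                 # 17.5: the five best W/U scores
which(sol$solution > 0.5)  # indices of the chosen cards
```

In the full problem you would add rows for the total spell count, per-color caps, and creature minimums, but the shape of the program is the same: a score vector to maximize and a constraint matrix over binary include/exclude variables.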

Data

“Data! Data! Data! I can’t make bricks without clay.” – Sherlock Holmes

I’m happy to report that mtgjson.com has most of the data required. It took a little bit of effort to parse the structure of the JSON (as a data scientist, I dislike JSON), but eventually I got what I needed. You can download the zip from here. As always, you could download the file with code, but I got excited and started clicking away 🙂

I used the jsonlite package to parse the data and then flattened it out. We want one row per card.

```r
install.packages("jsonlite")
library(jsonlite)

mtg <- read_json(path = "AllSets.json")

# Doing some checks on the structure
length(mtg[[which(names(mtg) == "GRN")]])
names(mtg[[which(names(mtg) == "GRN")]])

GRN <- mtg$GRN$cards

# Collect every field name that appears on any card
col.names.GRN <- unique(unlist(lapply(GRN, names)))

# One row per card, one column per field
GRN.flat <- data.frame(matrix("", ncol = length(col.names.GRN), nrow = length(GRN)),
                       stringsAsFactors = FALSE)
colnames(GRN.flat) <- col.names.GRN

for (i in 1:length(GRN)) {
  for (j in 1:ncol(GRN.flat)) {
    if (col.names.GRN[j] %in% names(GRN[[i]])) {
      GRN.flat[i, j] <- GRN[[i]][[col.names.GRN[j]]]
    }
  }
}

GRN.clean <- GRN.flat[, c("name", "rarity", "colorIdentity", "types", "cmc")]
```

Now we want to clean things up for our linear program.

```r
GRN.clean <- cbind(GRN.clean,
                   "score" = 0, "B" = 0, "G" = 0, "R" = 0,
                   "U" = 0, "W" = 0, "quantity" = 0)

# Some of the color identities got dropped in the flattening, so rebuild them:
for (i in 1:length(GRN)) {
  card <- GRN[[i]]
  for (j in 1:length(card$colorIdentity)) {
    GRN.clean[i, card$colorIdentity[[j]]] <- 1
  }
  if (length(card$colorIdentity) > 1) {
    GRN.clean[i, "colorIdentity"] <- "M"  # multicolor
  }
}

# Fix "colorIdentity" to handle lands and artifacts how we want
GRN.clean[which(GRN.clean$types == "Land"), "colorIdentity"] <- "Land"
GRN.clean[which(GRN.clean$types == "Artifact"), "colorIdentity"] <- "Artifact"

# We want artifact creatures treated as creatures for the optimization
GRN.clean[which(sapply(GRN, function(x) length(x[["types"]])) > 1), "types"] <- "Creature"

# We don't want basic lands to be optimized; you can figure those out yourself
basic.land <- which(GRN.clean$rarity == "Basic Land")
GRN.clean <- GRN.clean[-basic.land, ]

# Next time I should remember to delete Planeswalker deck cards before reordering.
# Finally, reorder so the scores can be copied down by hand more easily.
GRN.clean <- GRN.clean[order(GRN.clean$colorIdentity, GRN.clean$name), ]
write.csv(GRN.clean, "GRN.csv", row.names = FALSE)
```

The above data is enough to define the problem’s constraint matrix, but I needed some card rankings to define the objective function. While trying to get a friend unacquainted with Magic to play, I stumbled upon the Magic Community Set Review. Of course, that site is built using JavaScript, so scraping the data was going to be a headache. Instead of doing that, I spent about 20 minutes just copying the scores manually. No programming wizardry there, just good ol’ manual labor. You could also use LSV’s reviews, or your own; just change the entries in the spreadsheet and it’ll still work!

There was also some cleaning up to do with the rows of the CSV. The planeswalker cards had to be removed, and split cards like Assure/Assemble had two rows that had to be combined into one.

```r
# Read back in and sort by color and converted mana cost
GRN.clean <- read.csv("GRN.csv")
GRN.clean <- GRN.clean[order(GRN.clean$colorIdentity, GRN.clean$cmc), ]
write.csv(GRN.clean, "GRN.csv", row.names = FALSE)
```
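To show how a table in that shape eventually feeds the optimizer, here’s a base-R sketch that turns a toy three-card table (same columns as the cleaned GRN table above; the card names, scores, and colors are invented) into an objective vector and a constraint matrix:

```r
# Toy stand-in for the cleaned GRN table; only the columns the LP needs.
cards <- data.frame(
  name  = c("Toy Knight", "Toy Drake", "Toy Shock"),
  score = c(3.5, 2.5, 4.0),
  B = c(0, 0, 0), G = c(0, 0, 0), R = c(0, 0, 1),
  U = c(0, 1, 0), W = c(1, 0, 0),
  types = c("Creature", "Creature", "Instant"),
  stringsAsFactors = FALSE
)

# Objective function: one score per card.
objective <- cards$score

# Constraint rows: total deck size, cards per color, and creature count.
color.mat    <- t(as.matrix(cards[, c("B", "G", "R", "U", "W")]))
creature.row <- as.numeric(cards$types == "Creature")
const.mat    <- rbind(deck.size = rep(1, nrow(cards)),
                      color.mat,
                      creatures = creature.row)
const.mat
```

From there, a solver such as `lpSolve::lp` takes `const.mat` together with constraint directions and right-hand sides (e.g., exactly 23 spells, a creature minimum, zero cards in off colors) and picks the deck that maximizes the total score.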