The practice of incorporating microtransactions and loot boxes into video games has grown from sporadic to omnipresent in recent years. 2017 saw the loot box trend explode and even bleed over from a "cosmetic" model to one that affects gameplay. But in-game items like loot boxes—which commonly appear in multiplayer games—are worthless to publishers if players don't engage with them.

Game publisher Activision has already patented a way to drive in-game purchases by manipulating "matchmaking," or how players are paired up with strangers in online multiplayer games. This week, eagle-eyed YouTuber YongYea discovered a similar, though not identical, matchmaking-manipulation scheme being researched and promoted at game publisher EA.

The discovered papers emphasize ways to keep players "engaged" with different types of games, as opposed to quitting them early, by manipulating their difficulty without necessarily telling players. These papers were published as part of a conference in April 2017, and they indicate that EA’s difficulty- and matchmaking-manipulation efforts may have already been tested in live games, may be tested in future games, and are officially described as a means to fulfill the "objective function" of, among other things, getting players to "spend" money in games.

Fair’s fair? Not to EA

While other EA documents or research may exist, YongYea focused his attention on two of EA's published papers in a video he uploaded to YouTube on Sunday: "Dynamic Difficulty Adjustment [DDA] for Maximized Engagement in Digital Games," and "EOMM: An Engagement Optimized Matchmaking Framework."

The EOMM paper, which is co-authored by researchers from EA and UCLA and was funded in part by an NSF grant, applies more directly to EA's latest online-gaming controversies. This paper outlines a way to adjust games whose difficulty comes not from computer-controlled factors (enemy strength, puzzle design, etc.) but from real-life opponents.

"Current matchmaking systems... pair similarly skilled players on the assumption that a fair game is best player experience [sic]," the paper begins. "We will demonstrate, however, that this intuitive assumption sometimes fails and that matchmaking based on fairness is not optimal for engagement."

Elsewhere in the paper, the EA researchers point out that other researchers seem to assume that "a fun match should have players act in roles with perceivably joyful role distribution. However, it is still a conceptual, heuristic-based method without experiment showing that such matchmaking system indeed improves concrete engagement metrics [sic]."

In other words, the researchers are operating in a data-driven manner, clarifying that they don't necessarily see concepts like "fun" or "fairness" driving the engagement at the heart of their thesis. And, as the paper notes, it's engagement, not fairness or fun, that's linked directly to a player's willingness to continue spending money in the game.


To test this thesis, in early 2016 EA ran a test on 1.68 million unique players engaged in 36.9 million matches of an unnamed 1v1 game whose matches can end in wins, losses, or draws. Though the paper doesn't offer further specifics, EA Sports series like FIFA and NHL would fit the description given.

During the testing period, players were analyzed based on their skill level (itself based on wins, losses, and draws) and also their likelihood of "churning" away for at least eight hours after the match. The players were then assigned to one of four pools of different matchmaking techniques: skill-based; EOMM-sorted (the new matching algorithm intended to reduce churn); "WorstMM" (the complete opposite of the EOMM algorithm); and completely random matching.
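The core idea behind the EOMM pool can be sketched in miniature: estimate each candidate matchup's churn risk, then choose the set of pairs that minimizes the total. Everything below is illustrative, not EA's code; the function names, the toy churn model, and the brute-force search over a tiny player pool are our own assumptions (at scale, this becomes a weighted graph-matching problem rather than an exhaustive search).

```python
# Toy stand-in for the paper's learned churn model: the probability that a
# player quits for 8+ hours after a match against a given opponent.
# (Hypothetical; the real model draws on skill, play history, and style.)
def churn_prob(player, opponent):
    skill_gap = abs(player["skill"] - opponent["skill"])
    return min(0.9, player["base_churn"] + 0.05 * skill_gap)

def pairing_cost(pairs):
    # Expected number of players who churn if these pairs play each other.
    return sum(churn_prob(a, b) + churn_prob(b, a) for a, b in pairs)

def all_pairings(players):
    # Enumerate every way to split an even-sized pool into 1v1 matches.
    if not players:
        yield []
        return
    first, rest = players[0], players[1:]
    for i, partner in enumerate(rest):
        for tail in all_pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def eomm_match(players):
    # Pick the pairing with the lowest total predicted churn: the
    # "engagement optimized" objective, brute-forced for a tiny pool.
    return min(all_pairings(players), key=pairing_cost)
```

Note that nothing in this objective rewards a fair fight: if a struggling player is predicted to stick around longer after an easy win, the optimizer will happily hand them a weaker opponent.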

The paper describes "existing matchmaking methods that heuristically pair similarly skilled co-players," suggesting that live players were unwittingly dropped into EA's experimental matchmaking pools for this engagement research. But thanks to vague methodology descriptions and repeated discussion of "simulations" on existing player and match data, the paper makes it hard to determine if actual, live matchmaking was affected. (EA has yet to respond to Ars Technica's request for comment.)

This EOMM paper also isn't entirely clear about how a player's perceived attributes—including "skill, play history, and style"—correlate with the same player's churn likelihood. This means the paper's thesis can't be written out as simply as something like "bad players will play more often if they're paired with even worse players."

Ultimately, the paper concludes that this EOMM method of matchmaking reduced churn compared to the existing, skill-based matchmaking standard. In four of its five player-count studies, EOMM bested skill-based matchmaking by up to 0.9 percent; the exception was a smaller pool of players, in which skill-based matchmaking reduced churn more than EOMM by 1.2 percent. In all cases, EOMM bested both the random and "WorstMM" results.

The authors concede that this matchmaking system must evolve to account for factors such as team-battle video games, larger multiplayer scenarios, network connectivity issues, friends lists, and more. They say that "we will explore" all of those scenarios in future tests. The authors also make clear where this modeling could eventually lead: "we can even change the objective function to other core game metrics of interest, such as play time, retention, or spending. EOMM allows one to easily plug in different types of predictive models to achieve the optimization."

If our guess about EA Sports 1v1 games is correct, then that division's "Ultimate Team" products, driven by loot boxes and microtransactions, are already ripe for the picking.

Missing whale metrics

The Dynamic Difficulty Adjustment [DDA] paper had previously been found and circulated by fans and critics in late 2017, though perhaps it didn't receive much widespread attention because it didn't reveal much that was new to the games industry. This research paper is a higher-level version of automatic difficulty adjustment features that have appeared in single-player games for decades. Simpler versions of this mechanic have appeared in the likes of Crash Bandicoot and newer Super Mario games.

According to the paper, EA's research-driven take worked by analyzing and auto-adjusting rounds of a mobile, EA-published match-three puzzle game. The researchers wanted to see whether automatic adjustments would keep players engaged instead of churning away out of frustration or dissatisfaction. (The unnamed game in question could be a version of Bejeweled, the biggest match-three series made by EA-owned studio PopCap.)
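At its simplest, a DDA loop of this kind needs only a difficulty knob and a churn-risk estimate per player. The sketch below is entirely hypothetical: the function name, thresholds, and step size are our inventions, standing in for whatever trained model and tuning the paper's authors actually used.

```python
# Hypothetical dynamic difficulty adjustment (DDA) step, driven by a
# churn-risk signal in [0, 1]. Ease off when a player looks likely to
# quit; ramp up when they appear comfortably engaged. The thresholds
# and step size here are illustrative only, not values from the paper.
def adjust_difficulty(current_difficulty, churn_risk,
                      high_risk=0.6, low_risk=0.2, step=0.1):
    if churn_risk > high_risk:
        # Player seems frustrated: make the next round easier.
        return max(0.0, current_difficulty - step)
    if churn_risk < low_risk:
        # Player seems secure: make the next round harder.
        return min(1.0, current_difficulty + step)
    return current_difficulty
```

A loop like this never needs to tell the player anything: the game simply deals friendlier or harsher boards depending on how likely the model thinks they are to walk away.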

The paper's opening abstract could have settled on simply saying that its preliminary DDA system netted a nine-percent "improvement in player engagement," but the researchers chose to attach an economic model to the findings: that the DDA system had a "neutral impact on monetization." (Certain free-to-play versions of Bejeweled allow players to spend real money to earn a performance-boosting "coins" currency faster.) The researchers go on to speculate that this was because the system's algorithms retained players who have a high risk of churn but who are also "less likely to spend [money]."

Coincidentally, the paper's conclusion mentions a desire to expand DDA testing to "more complicated games with non-linear or multiple progressions, such as role-playing games (RPGs)." We'd also like to see further research to show whether games with more robust online communities or social features, such as online score comparisons, might influence higher-spending "whale" players to spend more, or at least attract more likely whales.

Coming soon? Already here?

Separately, the papers analyze retention methods that, as described, have not been disclosed to players—unlike the clearly marked boosts and aids in newer Super Mario games and the "safe mode" added to horror game Soma. It's unclear whether EA would actively inform players of these kinds of systems, should they be employed in either single-player or multiplayer games, or whether they've already arrived unannounced in EA-published games that launched after these early 2016 tests.

Meanwhile, EA has two big games on the horizon that may marry the single-player challenge tweaks of the DDA study and the matchmaking-driven augmentations of the EOMM one. In addition to BioWare's upcoming Anthem, an apparent space-combat co-op RPG that looks similar to Destiny, EA recently announced sweeping changes to an unnamed Star Wars game. Those changes should add "a broader experience that allows for more variety and player agency," which suggests a switch from its original single-player-only vision to a shared-multiplayer one. This 2017 research strongly suggests that EA has a keen interest in applying these methodologies to its future games, but how these single-player and multiplayer systems might combine to quietly and simultaneously manipulate a game's playerbase is not yet clear.

EA did not immediately respond to Ars' questions about the studies.