Technical director Magnus Nordin discusses how the Search for Extraordinary Experiences Division (SEED) — a team at EA that explores the future of interactive entertainment — built a self-learning AI agent that taught itself how to play Battlefield 1 multiplayer from scratch.

First, tell us about yourself. What’s your background, what do you do and what exactly is SEED?

I joined EA six years ago, after two decades working as a computer scientist in various capacities. My first job at EA was with DICE, and I later moved to SEED when it was founded two years ago.

At SEED, we explore what interactive entertainment will look like in the longer term. While we do some academic research, we’re not a pure research unit. Trying to guess what the distant future holds has a tendency to become abstract, so we try to be as practical as possible and keep our horizon to technology that we think will impact interactive entertainment three to five years from now.

Our approach is to build functioning prototypes and set up real creative experiences with emerging technologies, such as artificial intelligence, machine learning, virtual and augmented reality, and large-scale dynamic virtual worlds.

One of your latest projects has been to train a self-learning agent to play Battlefield 1 multiplayer. How did that project come about?

Upon learning how an AI created by DeepMind had taught itself how to play old Atari games, I was blown away. This was back in 2015, and it got me thinking about how much effort it would take to have a self-learning agent learn to play a modern, more complex first-person AAA game like Battlefield. So when I joined SEED, I set up our own deep learning team and started recruiting people with this in mind.

First we figured out the basics and built a bare-bones three-dimensional FPS to test our algorithms and train the network. After seeing some good results in our own basic game, we worked with the team at DICE to integrate the agent into a Battlefield environment.
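The interview doesn’t detail SEED’s training algorithm, but the trial-and-error principle behind such self-learning agents can be illustrated with a minimal tabular Q-learning loop — a much simpler relative of the deep reinforcement learning used for an FPS. The gridworld, rewards, and hyperparameters below are purely illustrative assumptions, not SEED’s actual setup:

```python
import random

# Hypothetical toy environment: a 5x5 grid where the agent starts at (0, 0)
# and must reach the goal at (4, 4). This stands in for a bare-bones test
# environment, not for Battlefield itself.
SIZE = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    nxt = (r, c)
    if nxt == (SIZE - 1, SIZE - 1):
        return nxt, 1.0, True    # reward for reaching the goal
    return nxt, -0.01, False     # small step cost encourages short paths

def train(episodes=2000, max_steps=200, alpha=0.5, gamma=0.95,
          epsilon=0.1, seed=0):
    """Tabular Q-learning: improve action-value estimates by trial and error."""
    rng = random.Random(seed)
    q = {}  # (state, action_index) -> estimated return
    n_actions = len(ACTIONS)
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(max_steps):
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda i: q.get((state, i), 0.0))
            nxt, reward, done = step(state, ACTIONS[a])
            # Update the estimate toward the bootstrapped one-step target.
            best_next = max(q.get((nxt, i), 0.0) for i in range(n_actions))
            target = reward + (0.0 if done else gamma * best_next)
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (target - old)
            state = nxt
            if done:
                break
    return q

def greedy_rollout(q, max_steps=30):
    """Follow the learned policy greedily from the start state."""
    state, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        state, _, done = step(state, ACTIONS[a])
        path.append(state)
        if done:
            break
    return path
```

A deep-learning agent replaces the `q` lookup table with a neural network so it can generalize across the far larger state space of a real game, but the loop — act, observe reward, update estimates — is the same.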

How do you think your self-learning agent performs versus a human Battlefield player?

We have conducted playtests, pitting AI agents against human players in a simplified game mode, restricted to handguns. While the human players outperformed the agents, it wasn’t a complete blowout by any stretch.

The agent is pretty proficient at the basic Battlefield gameplay, and has taught itself to alter its behavior depending on certain triggers, like being low on ammo or health. But Battlefield is about so much more than defeating your opponents. There’s a lot of strategy involved: teamwork, knowing the map and being familiar with individual classes and equipment. We will have to extend the capabilities of the agents further for the AI to be able to crack these nuts.
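The trigger-driven behavior described above is something the agent learns from experience, but its observable effect can be sketched with a hypothetical hand-written analogue: a priority function that switches high-level behaviors on cues like low ammo or low health. All names and thresholds here are illustrative assumptions, not SEED’s implementation:

```python
from dataclasses import dataclass

@dataclass
class AgentState:
    health: float       # 0.0 .. 1.0
    ammo: int
    enemy_visible: bool

# Illustrative thresholds; the real agent learns its switch points
# from experience rather than having them hard-coded.
LOW_HEALTH = 0.3
LOW_AMMO = 5

def choose_behavior(s: AgentState) -> str:
    """Pick a high-level behavior from simple triggers, highest priority first."""
    if s.health < LOW_HEALTH:
        return "seek_health"
    if s.ammo < LOW_AMMO:
        return "seek_ammo"
    if s.enemy_visible:
        return "engage"
    return "explore"
```

A learned policy differs in that the switch points emerge from reward rather than being fixed constants, which is why the agent’s behavior can look more fluid than a scripted bot’s.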

Still, after the playtests, a few participants asked us to clearly mark the agents so that they could be properly distinguished, which to me is a good testament to how well the agents perform and how lifelike they are.