I've always dismissed any advancements in AI vs Humans by saying "Yeah, but they haven't beaten anybody in Starcraft yet". Snobbishly, I might add. Having created a basic Starcraft AI for my thesis and having followed the yearly Student Starcraft AI Tournament ever since, I am somewhat familiar with the challenges: incomplete information, long-term planning with the ability to replan when things are not going as expected, micro-managing units, resource optimization, map analysis, build placement; the list goes on. Sure, some of them can be found in Chess and Go, but it becomes a lot harder when you consider that all of this needs to happen in real-time.

As amazing as the AIs developed by universities around the world are, with many utilizing machine learning and planning, convolutional networks, agent-oriented architectures, and most techniques under the sun, when it comes time to face a human player, as is tradition, year after year, the AI loses. Of note: those human players are nowhere near professional level, but high-level ladder players.

So color me surprised when last week AlphaStar, Google's latest video game overlor... bot, beat human players with a staggering 10-0 score. The two human players who each lost 5-0 to AlphaStar? Two professional Starcraft 2 players from Team Liquid, TLO (TheLittleOne) and MaNa.

Even considering some of its limitations, namely that it can only play Protoss vs Protoss and only on a specific map (Catalyst LE), the results are still amazing. Given more time, the AI can train against all races and maps. And sure, Blizzard provided Google an enormous amount of match replays the AI could train on, but there are many replay sites out there, so this wasn't an advantage AlphaStar had over other bots. And before you ask: the AI was artificially held to the same limitations as human players. It could not issue commands faster than a human can; in fact, AlphaStar's average Actions Per Minute (APM) was almost half that of the professionals. It could not cheat by having vision of the entire map, unlike many in-game AIs. And finally, the only match the AI lost was the one where it was limited to moving the camera like a human player, a mode that was recently added and hadn't been fully tested.

I highly recommend watching Winter's video on the matter, where he explains what happened and casts most of the matches with interesting non-technical commentary. Of course, there's also a post on DeepMind's blog about it which goes into more depth.

For me, we have finally entered the post-"Starcraft AI beats human player" era. Even if you can take issue with some of the details, the fact is that these conditions and challenges were always there, for more than a decade, and yet no AI was able to beat humans, never mind professionals. I personally thought we were at least 5 years away from seeing something like this happen, and yet, here we are, not many years after AlphaZero and AlphaGo conquered Chess and Go respectively.

DeepMind, you have my attention.

-pek

Articles

(Jan 16) #react

State management is an important concern in software applications, and there are many dedicated state management libraries out there for developers to use. In some situations, however, developers can manage their app's state in much simpler ways. In this article, Swizec Teller demonstrates simple state management in React with hooks, using a login form as an example.
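To give a flavor of the idea (a framework-free sketch, not the article's actual code), here's a TypeScript stand-in for hook-style state: a value paired with its updater, mirroring the `[value, setValue]` pair that React's `useState` returns. The login-form field names are made up for illustration.

```typescript
// Minimal hook-style state holder: returns a getter and a setter,
// mirroring the [value, setValue] pair of React's useState.
type Setter<T> = (next: T) => void;

function createState<T>(initial: T): [() => T, Setter<T>] {
  let value = initial;
  const get = () => value;
  const set: Setter<T> = (next) => { value = next; };
  return [get, set];
}

// A login form's state, roughly as in the article's example
// (field names are assumptions, not taken from the article):
const [email, setEmail] = createState("");
const [password, setPassword] = createState("");

setEmail("ada@example.com");
setPassword("hunter2");

console.log(email(), password()); // → ada@example.com hunter2
```

The point of the pattern is that each piece of state is local and self-contained, with no store, reducers, or actions to wire up; React's real `useState` adds re-rendering on top of the same pair.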



(Jan 15) #data-structures

A Radix Tree is an ordered data structure that speeds up data searches. It has several advantages over standard binary trees and hash tables, making it a valuable asset in certain performance-dependent applications. In this article, Emile Hugo demonstrates the utility of Radix Trees in a case study involving the blocking of thousands of IP addresses in an application under cyber attack.
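To give a taste of the technique (a simplified sketch, not the article's implementation), here's a character-level prefix tree in TypeScript that blocks IP addresses by string prefix. A true radix tree additionally compresses single-child chains into one labeled edge; this sketch keeps one node per character for brevity.

```typescript
// Simplified prefix tree for IP blocking: insert blocked prefixes,
// then test whether an address falls under any of them.
class PrefixTree {
  private children = new Map<string, PrefixTree>();
  private terminal = false; // marks the end of a blocked prefix

  insert(prefix: string): void {
    let node: PrefixTree = this;
    for (const ch of prefix) {
      if (!node.children.has(ch)) node.children.set(ch, new PrefixTree());
      node = node.children.get(ch)!;
    }
    node.terminal = true;
  }

  // True if any inserted prefix is a prefix of `address`.
  matches(address: string): boolean {
    let node: PrefixTree = this;
    for (const ch of address) {
      if (node.terminal) return true;
      const next = node.children.get(ch);
      if (!next) return false;
      node = next;
    }
    return node.terminal;
  }
}

const blocked = new PrefixTree();
blocked.insert("203.0.113.");   // block a whole range as a string prefix
blocked.insert("198.51.100.7"); // block a single address

console.log(blocked.matches("203.0.113.42")); // → true
console.log(blocked.matches("198.51.100.8")); // → false
```

Each lookup costs time proportional to the address length, independent of how many thousands of prefixes are stored, which is what makes the structure attractive under the article's attack scenario.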



(Jan 14) #deep-learning

Generative Adversarial Networks (GANs) are a class of artificial intelligence (AI) algorithms that facilitate unsupervised learning by pitting two neural networks against each other. There are many GAN algorithms out there and the list is growing, making it important for AI developers to be acquainted with the potential benefits of each. In this first article in a series on GAN algorithms, the author introduces a type of GAN known as an Adversarial Autoencoder, describing how it works and covering its strengths and limitations.



Programming language of the day: Kelvin. "Based on Java Algebra System's powerful engine, Kelvin is a powerful programming language built with Swift for algebraic computation. Find more about JAS here."



And that's it for today! Discuss this issue at our subreddit r/morningcupofcoding.

Did you like what you read? Let us know by clicking one of the links below.

Liked - Disliked

I hope you enjoyed reading the latest issue of Morning Cup of Coding. If you did, consider supporting it by becoming a patron (Patreon), buying me a coffee (PayPal), donating anonymously (coinbase), or purchasing an MCC mug (RedBubble); it helps me keep this going.

Cheers,

Pek