For a fleeting moment, the humans thought they had a chance.

Four professional poker players were convinced they had found a flaw in the sophisticated artificial intelligence software that was beating them in a tournament of no-limit Texas Hold 'em. If they bet in odd sizes, it seemed to trip up the computer. Within a day or two, though, that weakness vanished – and with it went any hope of beating the machine.

"It became very demoralizing showing up every day and losing this hard," said Jason Les, who has played professional poker for a decade.

And a hard loss it was. When the 20-day tournament was done, the artificial intelligence, called Libratus, won a princely $1,766,250. All four professional players – Dong Kim, Daniel McAulay, Jimmy Chou and Les – finished in the negative.

The win demonstrates the increasing sophistication of artificial intelligence software as computer scientists work to digitally replicate, and ultimately improve upon, the human thought process. In this case, scientists demonstrated that AI can outwit the human brain in situations where at least some of the information needed to make smart decisions is unknown.

Artificial intelligence systems have mastered and beaten humans at other strategy games, such as Go or chess, where both players have a full view of the game board. But poker is trickier: a player can't see the hand an opponent has been dealt, or know what decisions he or she will likely make as a result.

In this way, poker is more reflective of the decisions people face every day – making choices with only the information and experience available to them, while trying to account for the decisions that others will make in response. In economics and social science, this field of study is called game theory.

Games with imperfect information have made it hard for artificial intelligence to outwit the human brain. AI has beaten players at limit Texas Hold 'em, which has more rules and structure, but not the more freewheeling no-limit Texas Hold 'em poker. That changed Monday night with Libratus's win at the Rivers Casino in Pittsburgh.

Fortunately for the professional players, no money will change hands. The tournament was conducted for research purposes by the computer science department at Carnegie Mellon University. Professor Tuomas Sandholm and doctoral student Noam Brown hope Libratus can ultimately be used in a number of game theory scenarios, such as business negotiations, cybersecurity attacks or military operations. Basically, any situation in which two parties with different objectives and incomplete information need to find a resolution.

"There is usually hidden information that one side has or the other side has and they want to keep hidden," Brown said. "We see this as research that is fundamental to dealing with uncertainty in the real world."

"This is not necessarily replacing humans, but it's taking their negotiation and strategic reasoning ability to another level as a support tool," Sandholm said.

In particular, the artificial intelligence developed at Carnegie Mellon was able to assess and revise its strategy each time an opponent made a move so that its decisions were optimized based on the most recent information. Past AI software did not adapt as quickly or as often while a hand was being played.

Even so, real-world decision-making is more complex than a game of heads-up Texas Hold 'em poker.

The tournament proved artificial intelligence can be successful in one-on-one competitions – each of the professionals played solo against the machine – but it becomes less effective when there are multiple actors to take into consideration. What's more, poker is a zero-sum game with a defined hierarchy of outcomes – a royal flush always beats a full house. Real-world decisions, on the other hand, often require compromise, and whether one outcome is more or less desirable than another may depend entirely on the player's perspective, said Vincent Conitzer, a computer science professor at Duke University.

"Applying it to the real world does require a few more steps typically," Conitzer said.

Sandholm and Brown say the tournament's outcome will help determine those next steps for their research.

Participants were dealt 120,000 hands over the course of 20 days. Each played poker for 10 to 11 hours per day, then the four human competitors convened in the evenings to trade strategy tips and study the computer's performance.

But while the humans plotted, Libratus was also preparing. Each night, a supercomputer would analyze all of the day's plays, compare them to past data, and learn lessons to make the algorithm smarter the next day. As an opponent, Libratus got stronger, not weaker, with each hit the professionals managed to land.