Standings | Report | Crosstables | Ratings | Links

last updated: March 20, 2001.

Andrzej Nagorko and Doug Meserve both wrote excellent Rock-Paper-Scissors players that dominated their respective categories.

The field was very strong, but Greenberg clearly demonstrated that it was a cut above the rest. MemBot finished a remarkable fourth overall, behind Doug's main entry, MEMinator, and in a virtual tie with last year's champion, Iocaine Powder by Dan Egnor. Doug also wrote LittleMemBot, which is only nine C statements long, but still would have won the mini-bot competition!

In the match-play component of the competition the top ten finishers were all competitive, with Greenberg and MEMinator tying for highest honours. That result was overturned in the "Best of the Best" series, as TinyMetaDave by David Ozenne and Land War in Asia by Jeff Wilkinson eventually rose to the top, against progressively stronger opposition. TinyMetaDave was another entry in the surprisingly tough mini-bot category (40 statements or less).

The round-robin event wasn't as close, with Greenberg winning by an impressive margin, ahead of General Predictor by Don Beal. Greenberg's decisive victories against certain opponents, and overall aggregate score, make it clear that it is the best RoShamBo program to date.

Several thousand Swiss-system tournaments were also conducted to compute numerical ratings, like those in chess, using a program provided by Michael Gherrity for this purpose. Although this system emphasizes match-play strength rather than margin of victory, Greenberg again emerged as the highest-rated player. Eight programs have now achieved master status.

On average, the Second International RoShamBo Programming Competition produced much stronger programs than the first. For instance, improved versions of MegaHAL and Russrocker4, which finished 3rd and 4th in the first contest, were relegated to 19th and 22nd in this year's event; and Roshambot_Off dropped from 9th place into the bottom half of the crosstable. This is a natural progression, of course, since this year's contestants had the advantage of a strong test suite, composed of last year's best programs.

Statistics vs Direct History

To date, two major approaches have been used to produce strong RoShamBo players: purely statistical techniques, and the "direct history" method.

The statistical approach, which can be thought of as generalized pattern matching, is intuitive and popular. Programs based on this method include MegaHAL and Simple Predictor. In contrast, "history bots" like Iocaine Powder and Phasenbott search for specific patterns, matching the current context with the best fit occurring previously in the match. They also incorporate the notion of "Sicilian reasoning" (i.e. the six levels of meta-reasoning for any particular prediction method). This approach was very successful, and the history bots easily outdistanced the other competitors in the first contest.
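In outline, "Sicilian reasoning" expands a single predictor into six candidate strategies: the three rotations of a guess at the opponent's next move, plus the three rotations of a guess at one's own next move (what the opponent may expect us to play). The sketch below is a minimal illustration of that expansion; the function name and move encoding are my own, not taken from any competition entry:

```c
/* A minimal sketch of the six-way "Sicilian reasoning" expansion.
 * p_opp  = a predictor's guess at the opponent's next move;
 * p_self = a guess at our own next move (as the opponent might predict it).
 * Moves are encoded 0 = rock, 1 = paper, 2 = scissors, so (m + 1) % 3
 * beats m. The six candidates are the three rotations of each guess. */
static void sicilian_candidates(int p_opp, int p_self, int out[6])
{
    for (int r = 0; r < 3; r++) {
        out[r]     = (p_opp  + 1 + r) % 3;  /* beat, match, lose to p_opp  */
        out[3 + r] = (p_self + 1 + r) % 3;  /* beat, match, lose to p_self */
    }
}
```

A full player would then track how each of the six candidates has been scoring recently, and play the one that is currently performing best.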

Naturally, many of the entries in the second contest were modeled on Iocaine Powder. Somewhat surprisingly, Dan Egnor's original program still finished third overall, despite the fact that everyone had access to documented source code, plus a nice description of the underlying ideas.

Trying to beat Iocaine Powder is a daunting task, and many people did not enter because of the difficulty of writing a competitive bot. In fact, there was only a modest increase in the total number of entries for this year's contest, despite a much broader exposure (there were over 20000 visitors to the website after an announcement on SlashDot). The presence of a strong champion, and the realization that the problem is not trivial, probably discouraged dozens of would-be participants.

Of course, it was not necessary to defeat Iocaine Powder directly (it turned out to be possible to win by a small margin, but that alone is not sufficient to ensure a high finish). It is much more important for a program to be more successful against the rest of the field. This wasn't a minor challenge either, and the authors who succeeded can be proud of their accomplishment.

Several participants read Dan Egnor's description of "Sicilian reasoning", and incorporated that into their own programs. But ironically, this may not be the main strength of Iocaine Powder. Less appreciated, but at least as important, is the direct history algorithm which provides a powerful predictor as a foundation. To fully understand this aspect of the program, one must dissect and analyze the actual code.
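In outline, the direct history method predicts by finding the longest suffix of the move sequence that also occurred earlier in the match, and assuming the move that followed the earlier occurrence will be repeated. The sketch below is a minimal illustration of that idea, not Iocaine Powder's actual code; it assumes the competition's history convention (element 0 of the array holds the number of turns played, as visible in Henny's source below), and the function name is my own:

```c
/* Hedged sketch of direct history prediction. history[0] = n, the number
 * of turns played so far; history[1..n] hold the opponent's past moves
 * (0, 1, or 2). Find the longest match between the suffix of the
 * sequence and an earlier context ending at position i, and predict the
 * move that followed that context. Returns -1 if there is no match. */
static int direct_history_predict(const int *history)
{
    int n = history[0];
    int best_len = 0, prediction = -1;

    for (int i = 1; i < n; i++) {
        /* Length of the match between the suffix ending at n and the
         * context ending at i. */
        int len = 0;
        while (len < i && history[i - len] == history[n - len])
            len++;
        if (len > best_len) {
            best_len = len;
            prediction = history[i + 1];  /* move that followed the match */
        }
    }
    return prediction;
}
```

A real history bot would feed this raw prediction through the meta-reasoning layer rather than playing its counter directly.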

The most obvious "direct descendant" of Iocaine Powder was Greenberg. Andrzej Nagorko clearly understood the algorithm at its lowest level, and constructed an improved version using a number of new ideas. This is not easy, as the embellishments must be fully integrated, rather than just added piece by piece. (This point is clearly demonstrated by Michael Callahan's meta-program Concat, discussed below).

Doug Meserve developed a new history matching method that was the basis of MEMinator, which finished second overall, and MemBot, which won the mini-bot competition. The technique, which he calls "associative memory", is a faster and more compact method for specific pattern matching. In fact, a toy version called LittleMemBot, which does not use "Sicilian reasoning" and is only nine C statements long, still would have won the mini-bot competition in MemBot's place! This very elegant procedure will be discussed in more detail later.

Before the competition, one might have speculated that the known methods of direct history and meta-reasoning would be difficult to improve, and that the most significant gains would therefore come from better statistical analysis (which can always be refined further). Don Beal's General Predictor was a healthy step in that direction. It was written immediately after the first competition, and played on par with Iocaine Powder despite being purely statistical in its approach. This is an intriguing result, and suggests that future RoShamBo players should not neglect this aspect of good strategy.
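For contrast with the direct history method, a bare-bones statistical predictor in this spirit might count how often each opponent move follows each previous move (an order-1 Markov model) and counter the most frequent successor. The sketch below is an illustration only, using the same history convention as the competition framework; General Predictor's actual analysis is far more refined, and the function name is my own:

```c
/* Hedged sketch of an order-1 Markov (frequency) predictor.
 * history[0] = n, the number of turns played; history[1..n] hold the
 * opponent's past moves (0 = rock, 1 = paper, 2 = scissors).
 * Counts successor frequencies for each previous move, predicts the
 * most frequent successor of the last move, and returns its counter. */
static int markov_counter_move(const int *history)
{
    int n = history[0];
    if (n < 2)
        return 0;                       /* no context yet; rock, arbitrarily */

    int count[3][3] = {{0}};            /* count[prev][next] */
    for (int i = 1; i < n; i++)
        count[history[i]][history[i + 1]]++;

    int last = history[n], best = 0;
    for (int m = 1; m < 3; m++)
        if (count[last][m] > count[last][best])
            best = m;

    return (best + 1) % 3;              /* play the move that beats the prediction */
}
```

Refinements in this family include longer contexts, decaying counts so that old evidence fades, and mixing several context lengths, all of which can be tuned essentially without limit.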

Nevertheless, the primary reason for the success of Greenberg and MemBot was improvements to the direct history approach. It seems that there is room for future improvement in this form of analysis as well. It should be feasible to overtake the new champions by synthesizing the best of both worlds, but that will likely be an even tougher chore than it was to outperform Iocaine Powder. The magnitude of improvement may also decrease, so we may be approaching a point of diminishing returns in the strength of computer RoShamBo players.

Bigger is not better!

The RoShamBo programming competition is a contest of algorithms. The best ideas have been expressed with short, elegant implementations. While a sophisticated algorithm will generally outperform a simplistic one, the best players have all been written with striking economy and efficiency.

In this year's contest, several authors wrote a smaller version of their submission, or a second entry, to qualify for the mini-bot side competition. In a number of cases the smaller program performed comparably to, or even better than, the full-sized version. Specific examples include Mini LZ Bot vs LZ Bot (26th, 25th), El vs Newton (49th, 59th), and Henny vs Littlestone (35th, 48th). MEMinator was the fourth largest program in the tournament, but it outperformed its predecessor, MemBot, by only a small margin (albeit an important difference in the final result).

Another good RoShamBo player, finishing tenth in two exhibition events, was NeuroHog, by game designer Frank Nestel. This program is a complex composition of ideas, starting as a neural net and later evolving to incorporate pattern recognition, online learning, and adaptive parameter optimization. It is over 700 lines long, and pushes the limits of time and memory utilization. Unfortunately, added complexity often carries hidden costs, and a sporadic memory-overwrite error caused occasional segmentation faults. We failed to track down the bug despite numerous attempts, and the program had to be relegated to the unofficial events.

The exhibition program Concat weighs in at over 3000 lines, including copies of selected bots from the first test suite. It did well, but not nearly as well as one might have expected, given the nature of this meta-program (more on this later).

However, the grand champion of oversized programs was unquestionably Bitlbot, produced by a trio of computing science graduate students in Bristol, England: Nigel Jewell, Chris Setchell, and Jackson Pope. They built a large neural network, and packaged it up in a submission that was more than 22000 lines in length, with over 88000 commas and 1600 C statements, and requiring more than 1.2 megabytes of disk storage. While predominantly filled with large tables of numbers, the sheer mass of this program exceeded all other submissions from the first and second competitions combined.

Unfortunately (or fortunately, depending on how you look at it :), Bitlbot did not qualify, because it required twice the allotted time on average. In an exhibition event, it was an above-average player, but did not rank among the top contenders, finishing 29th in points and 24th in match-play. I expect these three authors are quite knowledgeable about the construction of neural nets, but it may not be a particularly viable technique for computer RoShamBo players.

At the opposite end of the scale were the top mini-bots: MemBot, LittleMemBot, TinyMetaDave, Black Box, SonOfPsychoRock, and AbRisiF, all under 40 C statements in length. Add to that list Doug Meserve's latest creation, TinyMemBot, which is only five C statements long and may be stronger than LittleMemBot, at least in match-play.

But the winner in the "most for the least" category is the very cute one-liner called, appropriately, Henny, by Johannes Fuernkranz:

int henny() { return((*opp_history?opp_history[random()%*opp_history+1]+1:random())%3); }

This is a solid positive-expectation RoShamBo player, in both point collection and match-play. It narrowly missed the cut for the top 16 in the "Best of the Best" series, and might have gone much further with a bit of luck. What it does is left as an exercise for the reader.

Programs and Authors