iment became almost irresistible, at least for the fortunate few who enjoyed the luxury of a hands-on policy. When shared-time operations became realistic, experimental mathematics came of age. At long last, mathematics achieved a certain parity (the twofold aspect of experiment and theory) that all other sciences enjoy.

It is, in fact, the coupling of the subtleties of the human brain with rapid and reliable calculations, both arithmetical and logical, by the modern computer that has stimulated the development of experimental mathematics. This development will enable us to achieve Olympian heights.

The Future

So far I have summarized the rebirth of statistical sampling under the rubric of Monte Carlo. What of the future, perhaps even a not too distant future?

The miracle of the chip, like most miracles, is almost unbelievable. Yet the fantastic performances achieved to date have not quieted all users. At the same time we are reaching upper limits on the computing power of a single processor.

One bright facet of the miracle is the lack of macroscopic moving parts, which makes the chip a very reliable bit of hardware. Such reliability suggests parallel processing.

The thought here is not a simple extension to two, or even four or eight, processing systems. Such extensions are adiabatic transitions that, to be sure, should be part of the immediate, short-term game plan. Rather, the thought is massively parallel operations with thousands of interacting processors, even millions!

Already commercially available is one computer, the Connection Machine, with 65,536 simple processors working in parallel. The processors are linked in such a way that no processor in the array is more than twelve wires away from another, and the processors are pairwise connected by a number of equally efficient routes, making communication both flexible and efficient. The computer has been used on such problems as turbulent fluid flow, image processing (with features analogous to the human visual system), document retrieval, and “common-sense” reasoning in artificial intelligence.
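
Those two properties follow from the wiring itself. As a minimal sketch, assuming the twelve-dimensional hypercube layout that Hillis describes (see Further Reading), a router node can be taken as a 12-bit address, with two nodes directly wired when their addresses differ in one bit; the functions below are illustrative only, not code from the machine:

```python
import math

def hops(a: int, b: int) -> int:
    """Wires on a shortest route: the number of differing address bits."""
    return bin(a ^ b).count("1")

def shortest_routes(a: int, b: int) -> int:
    """Distinct minimal routes: the differing bits can be corrected
    in any order, giving hops! equally short paths."""
    return math.factorial(hops(a, b))

# No pair of nodes is more than twelve wires apart ...
assert hops(0b000000000000, 0b111111111111) == 12
# ... and most pairs are joined by several equally efficient routes.
assert shortest_routes(0b000000000011, 0b000000000000) == 2
```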

One natural application of massive parallelism would be to the more ambitious Monte Carlo problems already upon us. To achieve good statistics in Monte Carlo calculations, a large number of “histories” need to be followed. Although each history has its own unique path, the underlying calculations for all paths are highly parallel in nature.
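
As a minimal sketch of that parallelism (the toy estimator and the names here are illustrative, not any particular code), each worker process can follow its own batch of histories with an independent random stream, and the tallies can be combined at the end, here to estimate pi:

```python
import random
from multiprocessing import Pool

def run_batch(args):
    """Follow one batch of histories: count random points
    that land inside the unit quarter circle."""
    seed, n_histories = args
    rng = random.Random(seed)   # independent stream per worker
    hits = 0
    for _ in range(n_histories):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers, per_batch = 8, 250_000
    batches = [(seed, per_batch) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        total_hits = sum(pool.map(run_batch, batches))
    # Statistics improve with the total number of histories,
    # no matter which processor followed which path.
    print(4.0 * total_hits / (n_workers * per_batch))
```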

Still, the magnitude of the endeavor to compute on massively parallel devices must not be underestimated. Some of the tools and techniques needed are:

● A high-level language and new architecture able to deal with the demands of such a sophisticated language (to the relief of the user);
● Highly efficient operating systems and compilers;
● Use of modern combinatorial theory, perhaps even new principles of logic, in the development of elegant, comprehensive architectures;
● A fresh look at numerical analysis and the preparation of new algorithms (we have been mesmerized by serial computation and purblind to the sophistication and artistry of parallelism).

Where will all this lead? If one were to wax enthusiastic, perhaps, just perhaps, a simplified model of the brain might be studied. These studies, in turn, might provide feedback to computer architects designing the new parallel structures.

Such matters fascinated Stan Ulam. He often mused about the nature of memory and how it was implemented in the brain. Most important, though, his own brain possessed the fertile imagination needed to make substantive contributions to the very important pursuit of understanding intelligence.

■

Further Reading

S. Ulam, R. D. Richtmyer, and J. von Neumann. 1947. Statistical methods in neutron diffusion. Los Alamos Scientific Laboratory report LAMS-551. This reference contains the von Neumann letter discussed in the present article.

N. Metropolis and S. Ulam. 1949. The Monte Carlo method. Journal of the American Statistical Association 44:335-341.

S. Ulam. 1950. Random processes and transformations. Proceedings of the International Congress of Mathematicians 2:264-275.

Los Alamos Scientific Laboratory. 1966. Fermi invention rediscovered at LASL. The Atom, October, pp. 7-11.

C. C. Hurd. 1985. A note on early Monte Carlo computations and scientific meetings. Annals of the History of Computing 7:141-155.

W. D. Hillis. 1987. The connection machine. Scientific American, June, pp. 108-115.

N. Metropolis received his B.S. (1937) and his Ph.D. (1941) in physics at the University of Chicago. He arrived in Los Alamos in April 1943 as a member of the original staff of fifty scientists. After the war he returned to the faculty of the University of Chicago as Assistant Professor. He came back to Los Alamos in 1948 to form the group that designed and built MANIAC I and II. (He chose the name MANIAC in the hope of stopping the rash of such acronyms for machine names, but may have, instead, only further stimulated such use.) From 1957 to 1965 he was Professor of Physics at the University of Chicago and was the founding Director of its Institute for Computer Research. In 1965 he returned to Los Alamos, where he was made a Laboratory Senior Fellow in 1980. Although he retired recently, he remains active as a Laboratory Senior Fellow Emeritus.