In 1980, computer engineering meant starting with clearly defined things (primitives or small programs) and using them to build larger things that were themselves clearly defined. Composition of these fragments was the name of the game.
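As a minimal sketch of that 1980-style approach (in Python rather than 6.001's Scheme, with function names invented for illustration): each piece has a precise contract, and the larger behavior is obtained purely by composing the pieces, so the whole is as well-specified as its parts.

```python
def compose(f, g):
    """Return the function x -> f(g(x)) -- composition as the basic building move."""
    return lambda x: f(g(x))

def double(x):      # clearly defined primitive: x -> 2x
    return 2 * x

def increment(x):   # clearly defined primitive: x -> x + 1
    return x + 1

# The composite is just as clearly defined as its parts: x -> 2(x + 1)
double_after_increment = compose(double, increment)
print(double_after_increment(5))  # → 12
```

Nothing here requires experimenting to discover behavior; the meaning of the composite follows mechanically from the meanings of the parts.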

However, nowadays a real engineer is given a big software library with a 300-page manual that's full of errors. He's also given a robot whose exact behavior is extremely hard to characterize (what happens when a wheel slips?). The engineer must learn to perform scientific experiments to find out how the software and hardware actually work, at least well enough to accomplish the job at hand. Gerry pointed out that we may not like it this way ("because we're old fogies"), but that's the way it is, and MIT has to take that into account.

Gerry Sussman has taught the last-ever class of 6.001, Structure and Interpretation of Computer Programs, the introductory computing course at MIT. The summary of the reasoning above is taken from Dan Weinreb.

My own take on this is that you need both. Yes, one of the great advances of modern times is that we have large systems with large libraries, and students must learn to work with that. But no, that doesn't make the fundamental skills taught in SICP any less relevant. As Joel Spolsky points out in The Perils of JavaSchools, it is vital that schools of computing continue to explode the minds of incoming undergraduates by exposing them to Scheme (or Haskell, Erlang, Prolog, or anything else far from the mainstream).