Steven Strogatz on math, from basic to baffling.

Long before I knew what calculus was, I sensed there was something special about it. My dad had spoken about it in reverential tones. He hadn’t been able to go to college, being a child of the Depression, but somewhere along the line, maybe during his time in the South Pacific repairing B-24 bomber engines, he’d gotten a feel for what calculus could do. Imagine a mechanically controlled bank of anti-aircraft guns automatically firing at an incoming fighter plane. Calculus, he supposed, could be used to tell the guns where to aim.

Every year about a million American students take calculus. But far fewer really understand what the subject is about or could tell you why they're learning it. It's not their fault. There are so many techniques to master and so many new ideas to absorb that the overall framework is easy to miss.

Calculus is the mathematics of change. It describes everything from the spread of epidemics to the zigs and zags of a well-thrown curveball. The subject is gargantuan — and so are its textbooks. Many exceed 1,000 pages and work nicely as doorstops.

But within that bulk you’ll find two ideas shining through. All the rest, as Rabbi Hillel said of the Golden Rule, is just commentary. Those two ideas are the “derivative” and the “integral.” Each dominates its own half of the subject, named in their honor as differential and integral calculus.

Roughly speaking, the derivative tells you how fast something is changing; the integral tells you how much it’s accumulating. They were born in separate times and places: integrals, in Greece around 250 B.C.; derivatives, in England and Germany in the mid-1600s. Yet in a twist straight out of a Dickens novel, they’ve turned out to be blood relatives — though it took almost two millennia to see the family resemblance.

Next week’s column will explore that astonishing connection, as well as the meaning of integrals. But first, to lay the groundwork, let’s look at derivatives.

Derivatives are all around us, even if we don’t recognize them as such. For example, the slope of a ramp is a derivative. Like all derivatives, it measures a rate of change — in this case, how far you’re going up or down for every step you take. A steep ramp has a large derivative. A wheelchair-accessible ramp, with its gentle gradient, has a small derivative.

Every field has its own version of a derivative. Whether it goes by “marginal return” or “growth rate” or “velocity” or “slope,” a derivative by any other name still smells as sweet. Unfortunately, many students seem to come away from calculus with a much narrower interpretation, regarding the derivative as synonymous with the slope of a curve.

Their confusion is understandable. It’s caused by our reliance on graphs to express quantitative relationships. By plotting y versus x to visualize how one variable affects another, all scientists translate their problems into the common language of mathematics. The rate of change that really concerns them — a viral growth rate, a jet’s velocity, or whatever — then gets converted into something much more abstract but easier to picture: a slope on a graph.

Like slopes, derivatives can be positive, negative or zero, indicating whether something is rising, falling or leveling off. Watch Michael Jordan in action making his top-10 dunks.

Just after lift-off, his vertical velocity (the rate at which his elevation changes in time, and thus, another derivative) is positive, because he’s going up. His elevation is increasing. On the way down, this derivative is negative. And at the highest point of his jump, where he seems to hang in the air, his elevation is momentarily unchanging and his derivative is zero. In that sense he truly is hanging.

There’s a more general principle at work here — things always change slowest at the top or the bottom. It’s especially noticeable here in Ithaca. During the darkest depths of winter, the days are not just unmercifully short; they barely improve from one to the next. Whereas now that spring is popping, the days are lengthening rapidly. All of this makes sense. Change is most sluggish at the extremes precisely because the derivative is zero there. Things stand still, momentarily.

This zero-derivative property of peaks and troughs underlies some of the most practical applications of calculus. It allows us to use derivatives to figure out where a function reaches its maximum or minimum, an issue that arises whenever we’re looking for the best or cheapest or fastest way to do something.
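The hang-time observation and the zero-derivative principle can be checked in a few lines of Python. This is a minimal sketch under assumed numbers: the takeoff speed below is invented for illustration, and the jumper's elevation is modeled as the standard projectile formula h(t) = v0·t − g·t²/2, none of which appears in the column itself.

```python
# Vertical leap modeled as h(t) = v0*t - g*t^2/2 (hypothetical takeoff speed).
# The derivative h'(t) is the vertical velocity: positive going up,
# negative coming down, and zero exactly at the peak, t = v0/g.

g = 9.8    # gravitational acceleration, m/s^2
v0 = 4.2   # takeoff speed, m/s (made-up value for illustration)

def h(t):
    # elevation at time t
    return v0 * t - 0.5 * g * t * t

def dh(t, eps=1e-6):
    # centered finite-difference estimate of the derivative h'(t)
    return (h(t + eps) - h(t - eps)) / (2 * eps)

t_peak = v0 / g                 # the instant where the derivative is zero
print(dh(t_peak - 0.1) > 0)     # rising before the peak
print(abs(dh(t_peak)) < 1e-6)   # momentarily "hanging" at the top
print(dh(t_peak + 0.1) < 0)     # falling after the peak
```

Because the derivative passes through zero at the top, the elevation barely changes near t_peak, which is exactly the "change is most sluggish at the extremes" effect described above.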

My high school calculus teacher, Mr. Joffray, had a knack for making such “max-min” questions come alive. One day he came bounding into class and began telling us about his hike through a snow-covered field. The wind had apparently blown a lot of snow across part of the field, blanketing it heavily and forcing him to walk much more slowly there, while the rest of the field was clear, allowing him to stride through it easily. In a situation like that, he wondered what path a hiker should take to get from point A to point B as quickly as possible.

One thought would be to trudge straight across the deep snow, to cut down on the slowest part of the hike. The downside, though, is that the rest of the trip will take longer than it would otherwise.

Another strategy is to head straight from A to B. That’s certainly the shortest distance, but it does cost extra time in the most arduous part of the trip.

With differential calculus you can find the best path: a specific compromise between the two paths considered above.

The analysis involves four main steps. (For those who’d like to see the details, references are given in the notes.)

First, notice that the total time of travel — which is what we’re trying to minimize — depends on just one number, the distance x where the hiker emerges from the snow.

Second, given a choice of x and the known locations of the starting point A and the destination B, we can calculate how much time the hiker spends walking through the fast and slow parts of the field. For each leg of the trip, this calculation requires the Pythagorean theorem and the old algebra mantra, “distance equals rate times time.” Adding the times for both legs together then yields a formula for the total travel time, T, as a function of x. (See the Notes for details.)

Third, we graph T versus x. The bottom of the curve is the point we’re seeking — it corresponds to the least time of travel and hence the fastest trip.

Fourth, to find this lowest point, we invoke the zero-derivative principle mentioned above. We calculate the derivative of T, set it equal to zero, and solve for x.
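The four steps above can be sketched in Python on a hypothetical layout: the distances a, c, b and the two walking speeds below are invented for illustration, not taken from the column. Step two gives the travel time T(x) = √(a² + x²)/v_slow + √(c² + (b − x)²)/v_fast; since T is convex, a simple ternary search stands in for the set-the-derivative-to-zero step, and the crossing point it finds can be checked against Snell's law.

```python
import math

# Hypothetical layout: A sits a = 3 units deep in the snow, B sits
# c = 4 units into the clear field, and the boundary between the two
# regions is b = 10 units long. The hiker crosses the boundary at x.
a, c, b = 3.0, 4.0, 10.0
v_slow, v_fast = 1.0, 2.0   # made-up speeds: snow vs. clear field

def T(x):
    # step 2: Pythagoras for each leg, then time = distance / rate
    return math.hypot(a, x) / v_slow + math.hypot(c, b - x) / v_fast

# steps 3-4: T is convex in x, so a ternary search locates its minimum
# (a numerical stand-in for setting the derivative of T to zero)
lo, hi = 0.0, b
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if T(m1) < T(m2):
        hi = m2
    else:
        lo = m1
x_best = (lo + hi) / 2

# Snell's law check: sin(theta1)/v_slow should equal sin(theta2)/v_fast
sin1 = x_best / math.hypot(a, x_best)
sin2 = (b - x_best) / math.hypot(c, b - x_best)
print(abs(sin1 / v_slow - sin2 / v_fast) < 1e-6)
```

At the fastest crossing point the two ratios agree, which is precisely the Snell's law relationship described below: the path bends at the boundary, just as light does.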

These four steps require a command of geometry, algebra and various derivative formulas from calculus — skills equivalent to fluency in a foreign language and, therefore, stumbling blocks for many students.

But the final answer is worth the struggle. It reveals that the fastest path obeys a relationship known as Snell’s law. What’s spooky is that nature obeys it, too.

Snell’s law describes how light rays bend when they pass from air into water, as they do when shining into a swimming pool. Light moves more slowly in water, much like the hiker in the snow, and it bends accordingly to minimize its travel time. Similarly, light also bends when it travels from air into glass or plastic as it refracts through your eyeglass lenses.

The eerie point is that light behaves as if it were considering all possible paths and automatically taking the best one. Nature — cue the theme from “The Twilight Zone” — somehow knows calculus.

NOTES

In an online article for the Mathematical Association of America, David Bressoud presents data on the number of American students taking calculus each year.

For a collection of Mr. Joffray’s calculus problems, both classic and original, see: S. Strogatz, “The Calculus of Friendship: What a Teacher and a Student Learned about Life While Corresponding About Math” (Princeton University Press, 2009).

Several videos and websites present the details of Snell’s law and its derivation from Fermat’s principle (which states that light takes the path of least time). Others provide historical accounts.

Fermat’s principle was an early forerunner to the more general principle of least action. For an entertaining and deeply enlightening discussion of this principle, including its basis in quantum mechanics, see: R. P. Feynman, R. B. Leighton and M. Sands, “The principle of least action,” The Feynman Lectures on Physics, Volume 2, Chapter 19 (Addison-Wesley, 1964); and R. Feynman, “QED: The Strange Theory of Light and Matter” (Princeton University Press, 1988).

In a nutshell, Feynman’s astonishing proposition is that nature actually does try all paths. But nearly all of them cancel out with their neighboring paths, through a quantum analog of destructive interference — except for those very close to the classical path where the action is minimized (or more precisely, made stationary). There the quantum interference becomes constructive, rendering those paths exceedingly more likely to be observed. This, in Feynman’s account, is why nature obeys minimum principles. The key is that we live in the macroscopic world of everyday experience, where the actions are enormous compared to Planck’s constant. In that classical limit, quantum destructive interference becomes extremely strong and obliterates nearly everything that could otherwise happen.

Thanks to Paul Ginsparg and Carole Schiffman for their comments and suggestions, and Margaret Nelson for preparing the illustrations.
