The following example made an impression on me when I first saw it years ago. I still think it’s an important example, though I’d draw a different conclusion from it today.

Problem: Let y(t) be the solution to the differential equation y′ = t² + y² with y(0) = 1. Calculate y(1).

If we use Euler’s numerical method with a step size h = 0.1, we get y(1) = 7.19. If we reduce the step size to 0.05 we get y(1) = 12.32. If we reduce the step size further to 0.01, we get y(1) = 90.69. That’s strange. Let’s switch over to a more accurate method, 4th order Runge-Kutta. With a step size of 0.1 the Runge-Kutta method gives 735.00, and if we use a step size of 0.01 we get a result larger than 10¹⁵. What’s going on?
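The blow-up is easy to reproduce. Here is a minimal sketch (my own code, not from the original problem) of fixed-step Euler and classical 4th order Runge-Kutta applied to this equation:

```python
# Reproduce the diverging estimates of y(1) for y' = t^2 + y^2, y(0) = 1,
# using fixed-step Euler and classical 4th order Runge-Kutta.

def f(t, y):
    return t * t + y * y

def euler(h, t_end=1.0):
    n = round(t_end / h)
    y = 1.0
    for i in range(n):
        y += h * f(i * h, y)
    return y

def rk4(h, t_end=1.0):
    n = round(t_end / h)
    y = 1.0
    for i in range(n):
        t = i * h
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

print(euler(0.1))   # ≈ 7.19
print(euler(0.05))  # ≈ 12.32
print(euler(0.01))  # ≈ 90.69
print(rk4(0.1))     # ≈ 735
print(rk4(0.01))    # astronomically large (overflows in floating point)
```

Refining the step size, which normally improves accuracy, only makes the answers more absurd: the first hint that something is wrong with the problem itself.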

The problem presupposes that a solution exists at t = 1 when in fact no solution exists. General theory (Picard’s theorem) tells us that a unique solution exists on some interval containing 0, but it does not tell us how far that interval extends. With a little work we can show that a solution exists for t at least as large as π/4. However, the solution becomes unbounded somewhere between π/4 and 1.
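The bounds come from a standard comparison argument; here is a sketch (the intermediate steps are my reconstruction, not spelled out in the original problem):

```latex
% Upper bound: on 0 <= t <= 1 we have t^2 <= 1. Comparing with
% u' = 1 + u^2, u(0) = 1, whose solution is u(t) = tan(t + pi/4):
\begin{align*}
  y' = t^2 + y^2 &\le 1 + y^2
  &&\Longrightarrow& y(t) &\le \tan\!\left(t + \tfrac{\pi}{4}\right), \\
% Lower bound: comparing with v' = v^2, v(0) = 1, whose solution
% is v(t) = 1/(1 - t):
  y' = t^2 + y^2 &\ge y^2
  &&\Longrightarrow& y(t) &\ge \frac{1}{1 - t}.
\end{align*}
% The upper bound stays finite until t = pi/4, so the solution
% survives at least that long; the lower bound becomes infinite
% at t = 1, so blow-up happens somewhere in between.
```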

When I first saw this example, my conclusion was that it showed how important theory is. If you just go about numerically computing solutions without knowing that a solution exists, you can think you have succeeded when you’re actually computing something that doesn’t exist. Prove existence and uniqueness before computing. Theory comes first.

Now I think the example shows the importance of the interplay between theory and numerical computation. It would be nice to know how big the solution interval is before computing anything, but that’s not always possible. Also, it’s not obvious from looking at the equation that there should be a problem at t = 1. The difficulties we had with numerical computation suggested there might be a theoretical problem.

I first saw this problem in an earlier edition of Boyce and DiPrima. The book goes on to approximate the interval over which the solution does exist using a combination of analytical and numerical methods. It looks like the solution becomes unbounded somewhere near t = 0.97.
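One crude way to locate the blow-up numerically (a sketch of my own, not the book’s method): integrate with small RK4 steps and record the time at which the solution exceeds a large threshold. Since the solution grows roughly like 1/(t* − t) near the singularity, the crossing time is a reasonable estimate of t* itself.

```python
# Rough estimate of where y' = t^2 + y^2, y(0) = 1 blows up:
# take small fixed RK4 steps and stop once y exceeds a large
# threshold. Assumption: because y grows roughly like 1/(t* - t),
# the threshold-crossing time approximates the blow-up time t*.

def f(t, y):
    return t * t + y * y

def blow_up_time(h=1e-5, threshold=1e8):
    t, y = 0.0, 1.0
    while y < threshold and t < 1.0:
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return t

print(blow_up_time())  # roughly 0.97
```

This agrees with the theory above: the estimate falls between π/4 and 1, and near the value the book reports.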

I wouldn’t say that theory or computation necessarily comes first. I’d say you iterate between them, starting with whichever approach is more tractable. Theoretical results are more satisfying when they’re available, but theory often doesn’t tell us as much as we’d like to know. Also, people make mistakes in theoretical computation just as they do in numerical computation. It’s best when theory and numerical work validate each other.

The problem does show the importance of being concerned with existence and uniqueness, but theoretical methods are not the only methods for exploring existence. Good numerical practice, such as trying more than one step size or more than one numerical method, is also valuable. In any case, the problem shows that without some diligence, either theoretical or numerical, you could naively compute an approximate “solution” where no solution exists.

Related: Consulting in differential equations