I find that many people are still wrapping their heads around what exactly dynamic programming is. Some think it's just memoization plus recursion -- but really, it's more than that. It's an approach that exploits properties of the problem to give an elegant method for finding the optimal solution.

The approach is a bit like induction: first we solve a base case, then we assume the solution method gives an optimal solution for a problem of size $n$, and finally we show that one more step of the solution method extends it to an optimal solution for a problem of size $n+1$.

What differentiates dynamic programming from other recursion-based search methods is that we cache the results of the solutions to subproblems, so each computation need only occur once instead of $O(2^m)$ times, where $m$ is the depth of the subproblem in the recursion tree. We also exploit a recursive definition of the problem, so that we end up with an elegant and compact solution method. Dynamic programming is thus the happiest marriage of induction, recursion, and greedy optimization.
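The caching idea is easiest to see on the classic Fibonacci example (my illustration, not one from the text above): the naive recursion revisits the same subproblems exponentially many times, while a cached version solves each one exactly once.

```python
from functools import lru_cache

def fib_naive(n):
    # Naive recursion: the same subproblem is recomputed
    # exponentially many times.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Identical recursive definition, but each subproblem's result
    # is cached, so it is computed only once.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # 102334155, computed in ~40 calls rather than ~2^40
```

Both functions share the same recursive definition; the cache alone turns exponential time into linear time.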

The "dynamic" part of this approach is that we only have to apply one function repeatedly to the problem, and this function returns optimal values not just for the full problem but for any sub- or superproblem as well.
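As a sketch of this "one function, all subproblems" property, here is a bottom-up coin-change example (my own illustration, with made-up coin denominations): a single recurrence is applied repeatedly, and the finished table holds the optimal value for the full problem and for every smaller subproblem along the way.

```python
def min_coins(coins, target):
    # best[a] = fewest coins needed to sum to amount a.
    # One recurrence, applied repeatedly:
    #   best[a] = min over coins c of best[a - c] + 1
    INF = float("inf")
    best = [0] + [INF] * target
    for amount in range(1, target + 1):
        for c in coins:
            if c <= amount and best[amount - c] + 1 < best[amount]:
                best[amount] = best[amount - c] + 1
    return best

table = min_coins([1, 3, 4], 6)
print(table[6])  # 2 coins (3 + 3) -- the full problem
print(table[5])  # 2 coins (1 + 4) -- a subproblem, solved for free
```

Asking for the answer at the target amount necessarily computes the optimal answer for every intermediate amount too, which is exactly the property described above.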