In harmonic analysis and PDE, one often wants to place a function $f$ on some domain (let’s take a Euclidean space $\mathbf{R}^d$ for simplicity) in one or more function spaces in order to quantify its “size” in some sense. Examples include the Lebesgue spaces $L^p$, the Lorentz spaces $L^{p,q}$, the inhomogeneous and homogeneous Sobolev spaces $H^s$, $W^{s,p}$, $\dot H^s$, the Hölder spaces $C^{0,\alpha}$, the Besov spaces $B^s_{p,q}$ and Triebel-Lizorkin spaces $F^s_{p,q}$, the Orlicz spaces $L \log L$ and $e^L$, and the Schwartz space $\mathcal{S}$.

As the above partial list indicates, there is an entire zoo of function spaces one could consider, and it can be difficult at first to see how they are organised with respect to each other. However, one can get some clarity in this regard by drawing a type diagram for the function spaces one is trying to study. A type diagram assigns a tuple (usually a pair) of relevant exponents to each function space. For function spaces $X$ on Euclidean space $\mathbf{R}^d$, two such exponents are the regularity $s$ of the space, and the integrability $p$ of the space. These two quantities are somewhat fuzzy in nature (and are not easily defined for all possible function spaces), but can basically be described as follows. We test the function space norm of a modulated rescaled bump function

$\displaystyle f(x) = A e^{i \xi \cdot x} \phi\left( \frac{x - x_0}{R} \right), \ \ \ \ \ (1)$

where $A > 0$ is an amplitude, $R > 0$ is a radius, $\phi \in C^\infty_c(\mathbf{R}^d)$ is a test function, $x_0 \in \mathbf{R}^d$ is a position, and $\xi \in \mathbf{R}^d$ is a frequency of some magnitude $|\xi| \sim N$. One then studies how the norm $\| f \|_X$ depends on the parameters $A$, $R$, $N$. Typically, one has a relationship of the form

$\displaystyle \| f \|_X \sim A N^s R^{d/p} \ \ \ \ \ (2)$

for some exponents $s$ and $p$, at least in the high-frequency case when $N$ is large (in particular, from the uncertainty principle it is natural to require $N \gtrsim 1/R$, and when dealing with inhomogeneous norms it is also natural to require $N \gtrsim 1$). The exponent $s$ measures how sensitive the norm $\| \cdot \|_X$ is to oscillation, and thus controls regularity; if $s$ is large, then oscillating functions will have large norm, and thus functions in $X$ will tend not to oscillate too much and thus be smooth. Similarly, the exponent $1/p$ measures how sensitive the norm is to the function $f$ spreading out to large scales; if $p$ is small, then slowly decaying functions will have large norm, so that functions in $X$ tend to decay quickly; conversely, if $p$ is large, then singular functions will tend to have large norm, so that functions in $X$ will tend to not have high peaks.
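As a quick numerical sanity check of this heuristic on the Lebesgue scale: in $d = 1$ the $L^p$ norm of a bump of the form (1) is exactly $A R^{1/p} \| \phi \|_{L^p}$ (the modulation does not affect $|f|$, reflecting the regularity $s = 0$ of $L^p$). A rough sketch in Python (the Gaussian choice of $\phi$ and all parameter values below are arbitrary):

```python
import numpy as np

def lp_norm(f, dx, p):
    """Riemann-sum approximation to the L^p norm of samples f with grid spacing dx."""
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

phi = lambda x: np.exp(-x ** 2)        # an arbitrary bump profile
x = np.linspace(-200.0, 200.0, 400001)
dx = x[1] - x[0]
p, d = 3.0, 1                          # dimension d = 1 in this sketch

# ||A e^{iNx} phi((x - x0)/R)||_{L^p} should scale like A * R^{d/p} (s = 0),
# so the ratio below should be independent of A, R, N (it equals ||phi||_{L^p}).
for A, R, N in [(1.0, 1.0, 5.0), (2.0, 4.0, 40.0), (0.5, 16.0, 300.0)]:
    f = A * phi((x - 1.0) / R) * np.exp(1j * N * x)
    print(f"A={A}, R={R}, N={N}: ratio = {lp_norm(f, dx, p) / (A * R ** (d / p)):.4f}")
```

All three ratios agree up to discretisation error, matching (2) with $s = 0$ and the exponent $d/p$ in the radius.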

Note that the exponent $s$ in (2) could be positive, zero, or negative; however, the exponent $d/p$ should be non-negative, since intuitively enlarging $R$ should always lead to a larger (or at least comparable) norm. Finally, the exponent in the amplitude parameter $A$ should always be $1$, since norms are by definition homogeneous: $\| \lambda f \|_X = |\lambda| \| f \|_X$. Note also that the position $x_0$ plays no role in (2); this reflects the fact that most of the popular function spaces in analysis are translation-invariant.

The type diagram below plots the indices of various spaces. The black dots indicate those spaces for which the indices are fixed; the blue dots are those spaces for which at least one of the indices is variable (and so, depending on the value chosen for these parameters, these spaces may end up in a different location on the type diagram than the typical location indicated here).

(There are some minor cheats in this diagram; for instance, for the Orlicz spaces $L \log L$ and $e^L$ one has to adjust (2) by a logarithmic factor. Also, the norms for the Schwartz space $\mathcal{S}$ are not translation-invariant and thus not perfectly describable by this formalism. This picture should be viewed as a visual aid only, and not as a genuinely rigorous mathematical statement.)

The type diagram can be used to clarify some of the relationships between function spaces, such as Sobolev embedding. For instance, when working with inhomogeneous spaces (which basically identify low frequencies $N \ll 1$ with medium frequencies $N \sim 1$, so that one is effectively always in the regime $N \gtrsim 1$), decreasing the regularity parameter $s$ results in decreasing the right-hand side of (2). Thus, one expects the function space norms to get smaller (and the function spaces to get larger) if one decreases $s$ while keeping $p$ fixed. Thus, for instance, $H^2(\mathbf{R}^d)$ should be contained in $H^1(\mathbf{R}^d)$, which in turn should be contained in $L^2(\mathbf{R}^d)$, and so forth. Note however that this inclusion is not available for homogeneous function spaces such as $\dot H^1(\mathbf{R}^d)$, in which the frequency parameter $N$ can be either much larger than $1$ or much smaller than $1$.
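This monotonicity in $s$ is easy to see numerically for the inhomogeneous Sobolev spaces $H^s$, whose norm is given by the Fourier multiplier $(1 + |\xi|^2)^{s/2}$; since $1 + |\xi|^2 \geq 1$, the multiplier, and hence the norm, can only grow with $s$. A sketch on a periodic grid (the test function and grid sizes are arbitrary choices):

```python
import numpy as np

def sobolev_norm(f, L, s):
    """Inhomogeneous H^s norm of samples f on a periodic grid of length L,
    computed via the Fourier multiplier (1 + |xi|^2)^(s/2)."""
    n = len(f)
    fhat = np.fft.fft(f) / n                     # Fourier coefficients
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # the frequencies on this grid
    return np.sqrt(np.sum((1 + xi ** 2) ** s * np.abs(fhat) ** 2))

L = 2 * np.pi
x = np.linspace(0.0, L, 1024, endpoint=False)
f = np.exp(np.sin(3 * x))  # an arbitrary smooth periodic test function

# Decreasing s decreases the norm: ||f||_{L^2} <= ||f||_{H^1} <= ||f||_{H^2},
# consistent with the inclusions H^2 ⊂ H^1 ⊂ L^2 for inhomogeneous spaces.
for s in [0, 1, 2]:
    print(f"H^{s} norm: {sobolev_norm(f, L, s):.4f}")
```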

Similarly, if one is working in a compact domain rather than in $\mathbf{R}^d$, then one has effectively capped the radius parameter to be bounded, $R = O(1)$, and so we expect the function space norms to get smaller (and the function spaces to get larger) as one increases $1/p$; thus for instance $L^2$ will be contained in $L^1$. Conversely, if one is working in a discrete domain such as $\mathbf{Z}^d$, then the radius parameter has now effectively been bounded from below, $R \gtrsim 1$, and the reverse should occur: the function spaces should get larger as one decreases $1/p$, so that for instance $\ell^1(\mathbf{Z}^d)$ is contained in $\ell^2(\mathbf{Z}^d)$. (If the domain is both compact and discrete, then it is finite, and on a finite-dimensional space all norms are equivalent.)
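Both regimes can be checked directly on model domains (a sketch; the random samples merely stand in for arbitrary functions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Compact domain, modeled as [0,1] with a uniform grid: L^2 controls L^1
# (this is Cauchy-Schwarz on a finite-measure space), so L^2([0,1]) ⊂ L^1([0,1]).
n = 10_000
f = rng.standard_normal(n)
l1 = np.mean(np.abs(f))        # ≈ ∫_0^1 |f|
l2 = np.sqrt(np.mean(f ** 2))  # ≈ (∫_0^1 |f|^2)^(1/2)
print(l1 <= l2)                # True

# Discrete domain Z, modeled by a finitely supported sequence: ℓ^1 controls ℓ^2,
# so ℓ^1(Z) ⊂ ℓ^2(Z) -- the reverse of the compact case.
a = rng.standard_normal(50)
print(np.sqrt(np.sum(a ** 2)) <= np.sum(np.abs(a)))  # True
```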

As mentioned earlier, the uncertainty principle suggests that one has the restriction $R \gtrsim 1/N$. From this and (2), we expect to be able to enlarge the function space by trading in the regularity parameter $s$ for the integrability parameter $1/p$, keeping the dimensional quantity $s - \frac{d}{p}$ fixed. This is indeed how Sobolev embedding works. Note that in some cases one runs out of regularity before $p$ goes all the way to infinity (thus ending up at an $L^p$ space), while in other cases $p$ hits infinity first. In the latter case, one can embed the Sobolev space into a Hölder space such as $C^{0,\alpha}$.
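As a heuristic aside (using only the notation already introduced), one can see the numerology by specialising (2) to the uncertainty-principle limit $R \sim 1/N$:

$\displaystyle \| f \|_X \sim A N^s R^{d/p} \sim A N^{s - \frac{d}{p}} \quad \hbox{when } R \sim 1/N,$

so two spaces sharing the same value of $s - \frac{d}{p}$ assign comparable norms to such critical bumps. For non-critical bumps with $R \gg 1/N$, the ratio of the right-hand sides of (2) between spaces with exponents $(s, p)$ and $(s', p')$ obeying $s' < s$ and $s - \frac{d}{p} = s' - \frac{d}{p'}$ is $N^{s - s'} R^{\frac{d}{p} - \frac{d}{p'}} = (NR)^{s - s'} \gtrsim 1$, so the lower-regularity space indeed assigns the smaller norm; this is the heuristic content of Sobolev embedding.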

On continuous domains, one can send the frequency $N$ off to infinity, keeping the amplitude $A$ and radius $R$ fixed. From this and (2) we see that norms with a lower regularity $s$ can never hope to control norms with a higher regularity $s'$, no matter what one does with the integrability parameter. Note however that in discrete settings this obstruction disappears; when working on, say, $\mathbf{Z}^d$, one can in fact gain as much regularity as one wishes for free, and there is no distinction between a Lebesgue space $\ell^p(\mathbf{Z}^d)$ and its Sobolev counterparts in such a setting.
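The “free regularity” on $\mathbf{Z}^d$ reflects the fact that the discrete derivative is a bounded operator on every $\ell^p$: $\| Df \|_{\ell^p} \leq 2 \| f \|_{\ell^p}$ by the triangle inequality, so discrete Sobolev norms are comparable to the underlying $\ell^p$ norm. A quick check (sketch; the random sequence is an arbitrary stand-in):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(1000)  # a finitely supported sequence on Z

# The discrete derivative (Da)(n) = a(n+1) - a(n) satisfies ||Da||_p <= 2 ||a||_p
# for every p >= 1, by the triangle inequality; hence the discrete Sobolev norm
# ||a||_p + ||Da||_p is comparable to ||a||_p alone.
Da = np.diff(a, append=0.0)
for p in [1.0, 2.0, 4.0]:
    lp = lambda v: np.sum(np.abs(v) ** p) ** (1 / p)
    print(lp(Da) <= 2 * lp(a))  # True for every p
```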

When interpolating between two spaces (using either the real or complex interpolation method), the interpolated space usually has regularity and integrability exponents on the line segment between the corresponding exponents of the endpoint spaces. (This can be heuristically justified from the formula (2) by thinking about how the real or complex interpolation methods actually work.) Typically, one can control the norm of the interpolated space by the geometric mean of the endpoint norms that is indicated by this line segment; again, this is plausible from looking at (2).
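For the Lebesgue scale, this geometric-mean control is just the log-convexity of the $L^p$ norms, a consequence of Hölder’s inequality, and is easy to check numerically (sketch; the random samples are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.abs(rng.standard_normal(10_000))  # samples of |f| on a unit-measure grid

def lp(f, p):
    """L^p norm on a probability space, approximated by an average."""
    return np.mean(f ** p) ** (1 / p)

# Log-convexity of L^p norms (a special case of interpolation): if
# 1/p = (1 - theta)/p0 + theta/p1, then ||f||_p <= ||f||_{p0}^(1-theta) * ||f||_{p1}^theta,
# i.e. the intermediate norm is controlled by the geometric mean of the endpoints.
p0, p1, theta = 1.0, 4.0, 0.5
p = 1 / ((1 - theta) / p0 + theta / p1)
print(lp(f, p) <= lp(f, p0) ** (1 - theta) * lp(f, p1) ** theta)  # True
```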

The space $L^2$ is self-dual. More generally, the dual of a function space will generally have type exponents that are the reflection of the original exponents around those of $L^2$. Consider for instance the dual pairs $L^1$, $L^\infty$ or $H^1$, $H^{-1}$ in the above diagram.
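In terms of the exponent pair $(1/p, s)$, this reflection can be made explicit: for $1 < p < \infty$ the dual of $W^{s,p}$ is $W^{-s,p'}$ with $\frac{1}{p} + \frac{1}{p'} = 1$, so duality acts on type exponents by

$\displaystyle \left( \frac{1}{p}, s \right) \mapsto \left( 1 - \frac{1}{p}, -s \right),$

a point reflection through $(\frac{1}{2}, 0)$, the exponents of $L^2$; consistently with this, $L^2$ is the fixed point of the reflection.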

Spaces whose integrability exponent $p$ is larger than $1$ (i.e. which lie to the left of the dotted line) tend to be Banach spaces, while spaces whose integrability exponent is less than $1$ are almost never Banach spaces. (This can be justified by covering a large ball by small balls and considering how (1) would interact with the triangle inequality in this case.) The case $p = 1$ is borderline; some spaces at this level of integrability, such as $L^1$, are Banach spaces, while other spaces, such as weak $L^1 = L^{1,\infty}$, are not.
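One can already see the failure of the triangle inequality below $p = 1$ with two disjointly supported indicator functions, since then $\| f + g \|_{L^p}^p = \| f \|_{L^p}^p + \| g \|_{L^p}^p$, which for $p < 1$ forces $\| f + g \|_{L^p} \geq \| f \|_{L^p} + \| g \|_{L^p}$. A sketch with $p = 1/2$:

```python
import numpy as np

def lp(f, dx, p):
    """L^p 'norm' on a grid of spacing dx (only a quasi-norm when p < 1)."""
    return (np.sum(np.abs(f) ** p) * dx) ** (1 / p)

dx = 0.01
f = np.zeros(200); f[:100] = 1.0   # indicator of [0, 1]
g = np.zeros(200); g[100:] = 1.0   # indicator of [1, 2] (disjoint support)

p = 0.5
# For disjoint supports, ||f+g||_p^p = ||f||_p^p + ||g||_p^p, and for p < 1
# this makes ||f+g||_p strictly LARGER than ||f||_p + ||g||_p: the triangle
# inequality fails, so L^{1/2} is not a normed (hence not a Banach) space.
print(lp(f + g, dx, p) > lp(f, dx, p) + lp(g, dx, p))  # True
```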

While the regularity $s$ and integrability $p$ are usually the most important exponents in a function space (because amplitude, width, and frequency are usually the most important features of a function in analysis), they do not tell the entire story. One major reason for this is that the modulated bump functions (1), while an important class of test examples of functions, are by no means the only functions that one would wish to study. For instance, one could also consider sums of bump functions (1) at different scales. The behaviour of the function space norms on such sums is often controlled by secondary exponents, such as the second exponent $q$ that arises in Lorentz spaces $L^{p,q}$, Besov spaces $B^s_{p,q}$, or Triebel-Lizorkin spaces $F^s_{p,q}$. For instance, consider the function

$\displaystyle f := \sum_{j=1}^{k} 2^{jd/p} \phi(2^j x), \ \ \ \ \ (3)$

where $k$ is a large integer, representing the number of distinct scales present in $f$. Any function space with regularity $0$ and integrability $p$ should assign each summand in (3) a norm of $O(1)$, so the norm of $f$ could be as large as $O(k)$ if one assumes the triangle inequality. This is indeed the case for the $L^{p,1}$ norm, but the weak $L^p$ norm, i.e. the $L^{p,\infty}$ norm, of $f$ only has size $O(1)$. More generally, for the Lorentz spaces $L^{p,q}$, $f$ will have a norm of about $O(k^{1/q})$ (so in particular the $L^p = L^{p,p}$ norm has size about $k^{1/p}$). Thus we see that such secondary exponents can influence the norm of a function by an amount which is polynomial in the number of scales $k$. In many applications, though, the number of scales is a “logarithmic” quantity and thus of lower order interest when compared against the “polynomial” exponents such as $s$ and $1/p$. So the fine distinctions between, say, strong $L^p$ and weak $L^p$, are only of interest in “critical” situations in which one cannot afford to lose any logarithmic factors (this is for instance the case in much of Calderón-Zygmund theory).
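The $k^{1/q}$ behaviour can be checked numerically on a model of (3) in which the $j$-th scale contributes height $2^{j/p}$ on a set of measure $2^{-j}$ (so each summand has $L^p$ norm exactly one), with the Lorentz norms computed from the decreasing rearrangement. A sketch with $d = 1$, $p = 2$ (all names and normalisations here are for this toy computation only):

```python
import numpy as np

# Model of (3): k bumps at distinct scales with disjoint supports, the j-th
# of height 2^{j/p} on a set of measure 2^{-j}, so each summand has unit L^p norm.
p, k = 2.0, 40
j = np.arange(1, k + 1)
heights = 2.0 ** (j / p)
measures = 2.0 ** (-j)

def lorentz_norm(heights, measures, p, q):
    """L^{p,q} norm of a step function given by its (height, measure) level sets,
    computed exactly from the decreasing rearrangement f*."""
    order = np.argsort(-heights)               # sort level sets by decreasing height
    h, m = heights[order], measures[order]
    T = np.concatenate([[0.0], np.cumsum(m)])  # cumulative measure
    if np.isinf(q):                            # weak L^p: sup_t t^{1/p} f*(t)
        return np.max(h * T[1:] ** (1 / p))
    # ||f||_{p,q}^q = ∫_0^∞ (t^{1/p} f*(t))^q dt/t, integrated exactly per block
    blocks = h ** q * (p / q) * (T[1:] ** (q / p) - T[:-1] ** (q / p))
    return np.sum(blocks) ** (1 / q)

# L^{p,1} grows like k, L^{p,p} = L^p like k^{1/p}, weak L^p stays O(1).
for q in [1.0, p, np.inf]:
    print(f"L^({p},{q}) norm for k={k}: {lorentz_norm(heights, measures, p, q):.3f}")
```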

We have cheated somewhat by only working in the high-frequency regime $N \gtrsim 1$. When dealing with inhomogeneous spaces, one often has a different set of exponents for (2) in the low-frequency regime $N \lesssim 1$ than in the high-frequency regime. In such cases, one sometimes has to use a more complicated type diagram to genuinely model the situation, e.g. by assigning to each space a convex set of type exponents rather than a single exponent pair, or perhaps having two separate type diagrams, one for the high-frequency regime and one for the low-frequency regime. Such diagrams can get quite complicated, and will probably not be of much use to a beginner in the subject, though in the hands of an expert who knows what he or she is doing, they can still be an effective visual aid.