We all know (or should know) that Haskell is lazy by default. Nothing is evaluated until it must be evaluated. So when must something be evaluated? There are points where Haskell must be strict. I call these "strictness points", although that particular term isn't as widespread as I had thought. My working definition:

Reduction (or evaluation) in Haskell only occurs at strictness points.

So the question is: what, precisely, are Haskell's strictness points? My intuition says that main, seq/bang patterns, pattern matching, and any IO action performed via main are the primary strictness points, but I don't really know why I know that.
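To make that intuition concrete, here is a small sketch (the function names `forceFirst` and `sumStrict` are mine, for illustration) of the mechanisms I have in mind: seq, bang patterns, and pattern matching each force an argument as far as weak head normal form.

```haskell
{-# LANGUAGE BangPatterns #-}

-- seq forces its first argument to WHNF before returning the second.
forceFirst :: Int -> Int -> Int
forceFirst x y = x `seq` y

-- A bang pattern forces the accumulator to WHNF at each call, and the
-- pattern match on the list forces the list's outermost constructor.
sumStrict :: Int -> [Int] -> Int
sumStrict !acc []     = acc
sumStrict !acc (x:xs) = sumStrict (acc + x) xs

main :: IO ()
main = print (sumStrict 0 [1..100])  -- prints 5050
```

Without the bang pattern, `acc` would build up a chain of unevaluated `(+)` thunks that only collapses when the final result is demanded.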

(Also, if they're not called "strictness points", what are they called?)

I imagine a good answer will include some discussion about WHNF and so on. I also imagine it might touch on lambda calculus.
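On the WHNF point, here is a sketch of what I understand "weak head normal form" to mean in practice (the name `pairExample` is mine): evaluation stops at the outermost constructor, so components underneath it stay untouched.

```haskell
-- A pair whose fields are both bottom. The pair itself is already in
-- WHNF, because the (,) constructor is the outermost thing.
pairExample :: (Int, Int)
pairExample = (undefined, undefined)

main :: IO ()
main = do
  -- seq evaluates only to WHNF, so forcing the pair reveals the (,)
  -- constructor but never touches the undefined fields: no crash.
  pairExample `seq` putStrLn "forced to WHNF, still alive"
  -- Pattern matching on the constructor is likewise only WHNF-deep:
  case pairExample of
    (_, _) -> putStrLn "matched without evaluating the fields"
```

Demanding either field (say, `print (fst pairExample)`) would be a deeper strictness point and would hit the `undefined`.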

Edit: additional thoughts about this question.

As I've reflected on this question, I think it would be clearer to add something to the definition of a strictness point: strictness points can have varying contexts and varying depths (or strictness). Going back to my definition that "reduction in Haskell only occurs at strictness points", let us add this clause: "a strictness point is only triggered when its surrounding context is evaluated or reduced."
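An example of the clause I have in mind (the name `delayedSeq` is mine): a seq buried inside a thunk is itself a strictness point, but it fires only when the thunk's own context demands it.

```haskell
-- The seq here would force `error "boom"` -- but only once something
-- demands delayedSeq itself. Until then, nothing is reduced.
delayedSeq :: Int
delayedSeq = error "boom" `seq` 42

main :: IO ()
main = do
  let _unused = delayedSeq  -- a let binding demands nothing
  putStrLn "inner seq not yet triggered"
  -- print delayedSeq  -- uncommenting this would crash with "boom",
  --                      because printing evaluates the context.
```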

So, let me try to get you started on the kind of answer I want. main is a strictness point. It is specially designated as the primary strictness point of its context: the program. When the program (main's context) is evaluated, the strictness point of main is activated. main's depth is maximal: it must be fully evaluated. main is usually composed of IO actions, which are also strictness points, whose context is main.
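To illustrate what I mean about IO actions being the strictness points inside main (the names `fine` and `bomb` are mine): pure bindings in main demand nothing on their own; only the IO actions that actually execute force values, and only as deeply as they inspect them.

```haskell
fine :: Int
fine = 2 + 2

bomb :: Int
bomb = error "never demanded"

main :: IO ()
main = do
  let _ = bomb  -- binding bomb is not a strictness point; no reduction
  print fine    -- executing this IO action demands fine; prints 4
```

The program runs without touching `bomb`, which to me suggests the strictness comes from running the IO actions, not from main's text as a whole.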