Not the kind of reflection (I think!) one finds in Java, or even the kind used for meta-programming in Coq or Idris or such. I am speaking of the principle from the algebra of programming that is called reflection (I am not quite sure why). It is a property relating eliminations and constructions of datatypes: it says that folding with the constructors is the identity. For example, take lists in Haskell, with foldr (I have specialized the type from Foldable t to just lists)

foldr :: (a -> b -> b) -> b -> [a] -> b

and constructors

(:) :: a -> [a] -> [a]

[] :: [a]

Instantiating (implicitly) the type variable b with [a], we can write

foldr (:) []

and this is equivalent to the identity function. All it does is go through the list and interpret cons as cons, nil as nil. So we have the valid equation

foldr (:) [] = id
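We can check this equation on a couple of sample lists (a quick sanity check, not a proof, of course):

```haskell
-- Reflection, concretely: folding a list with its own constructors
-- rebuilds that list unchanged.
main :: IO ()
main = do
  print (foldr (:) [] [1, 2, 3] == [1, 2, 3 :: Int])  -- True
  print (foldr (:) [] "hello" == "hello")             -- True
```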

It is known (although I am not sure where you can find this in a paper) that if you have parametricity and reflection, you can derive induction. Unary parametricity for Church-encoded lists, say, is essentially the predicate

\ l : [a] .
forall x : * . forall c : a -> x -> x . forall n : x .
forall R : x -> * .
(forall h : a . forall t : x . R t -> R (c h t)) ->
R n ->
R (l c n)

In words, it says that if you are doing a foldr with c and n, and n satisfies R and c preserves R, then R holds of the foldr of l with c and n (i.e., the elimination l c n).
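To make the Church encoding concrete, here is a small Haskell sketch (the names Church, toChurch, and fromChurch are mine, just for illustration). A Church-encoded list is its own fold, and reflection for the encoding says that eliminating with the real constructors (:) and [] recovers the ordinary list:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church-encoded list takes a "cons" and a "nil" and eliminates
-- itself with them.
newtype Church a = Church { elim :: forall x. (a -> x -> x) -> x -> x }

toChurch :: [a] -> Church a
toChurch l = Church (\c n -> foldr c n l)

-- Reflection for the encoding: eliminating with (:) and []
-- rebuilds the list we started from.
fromChurch :: Church a -> [a]
fromChurch l = elim l (:) []

main :: IO ()
main = print (fromChurch (toChurch [1, 2, 3]) == [1, 2, 3 :: Int])
```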

Now, if we just instantiate this with (:) for c and [] for n, we almost have a statement of list induction. We get to conclude the predicate R for (l (:) []), not for l itself. But hey, if we have reflection, we know that doing a fold right with (:) and [] is the identity, so we can get R l from R (l (:) []).
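Spelled out in the same notation as the parametricity predicate above, the derivation goes like this (a sketch, treating l as its own fold, as in the Church encoding):

```
param : (forall h : a . forall t : [a] . R t -> R (h : t)) ->
        R [] ->
        R (l (:) [])       -- instantiate x := [a], c := (:), n := []

refl  : l (:) [] = l       -- reflection

-- rewriting the conclusion of param with refl:
ind   : (forall h : a . forall t : [a] . R t -> R (h : t)) ->
        R [] ->
        R l                -- ordinary list induction
```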

That is great as a technical trick — and we have used this in Cedille — but it seems to come out of nowhere. Actually, reflection is a consequence of initiality for the initial algebra of the datatype, so it is definitely not out of nowhere. But still it is not clear why parametricity + reflection should yield induction. What is the intuition for that?

Actually, I have a little intuition to offer. With induction, we are trying to summarize an infinite number of cases in some finite way. This finite way is with the constructors for the datatype. There are infinitely many natural numbers, say, but they are all constructed by finite application of successor to zero. So the constructors summarize our infinite set of values of the datatype.

If we are in the mindset, as we are here in Cedille land, of not taking constructors and datatypes for granted, we might ask: how do you know you have correctly summarized that infinite set of values using the functions you call constructors? (With a lambda encoding, the constructors are just more lambda terms, not different in kind from all the other terms you are using.) My intuition about reflection is that it contributes an essential property towards knowing you have the correct summary of your infinite set of data. You offer some set of functions and claim that all the data can be built using those. For this to be correct, we would reasonably expect that doing an elimination with a piece of data and those functions should rebuild that piece of data. Otherwise, what kind of representation scheme does one have? Perhaps your functions define some permutation of the data, so that eliminating l with your functions f and x would result in a different list l'? At least this seems exotic. And maybe one could argue that if reflection does not hold, then parametricity will not give you induction, except for predicates that respect your representation scheme (i.e., for predicates where the elimination does act like the identity).

So for purposes of deriving induction from parametricity, I guess we can say that reflection is telling us that our representation scheme is universal: any predicate we can prove from parametricity will also hold inductively.

Writing this out suggests the intriguing possibility that using different representation schemes, one could derive induction for predicates respecting the scheme, without needing reflection. Or similarly, one could demand, in the statement of induction, that the predicate one is proving respects reflection, and then reflection would not need to be proved once and for all.

Hope you are in good health, and thanks for reading.