Is it true that adding axioms to CIC can have a negative influence on the computational content of definitions and theorems? I understand that, in the theory's normal behavior, any closed term reduces to its canonical normal form; e.g. any closed term $n : \mathbb{N}$ reduces to a numeral of the form $succ\,(succ\,(\dots(succ\,0)))$. But when we postulate an axiom - say the function extensionality axiom funext - we just add a new constant to the system

$$ funext : \Big(\Pi_{x : A}\, f(x) = g(x)\Big) \to f = g$$

that just "magically" produces a proof of $f = g$ from any proof of $\Pi_{x : A}\, f(x) = g(x)$, without any computational meaning at all (in the sense that we cannot extract any code from it?)
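For concreteness, here is a minimal Coq sketch of what I mean by "just adding a constant" (I am using a simplified, non-dependent statement of the axiom, and the name is my own):

```coq
(* Postulating function extensionality: the system accepts the new
   constant, but no reduction rule comes with it. *)
Axiom funext : forall (A B : Type) (f g : A -> B),
  (forall x, f x = g x) -> f = g.
```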

But why is this "bad"?

For funext, I read in this Coq entry and this MathOverflow question that it causes the system to lose either canonicity or decidable type checking. The Coq entry seems to present a good example, but I would still like some more references on that - and somehow I can't find any.
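If I understand the canonicity issue correctly, the problem is that one can now write closed terms of type $\mathbb{N}$ that are blocked by the postulated constant. A rough sketch of the kind of thing I have in mind (the definitions here are my own, not the example from the entry):

```coq
Axiom funext : forall (A B : Type) (f g : A -> B),
  (forall x, f x = g x) -> f = g.

(* Two functions that agree pointwise but are not definitionally equal. *)
Definition f (n : nat) : nat := n + 0.
Definition g (n : nat) : nat := n.

(* They are pointwise equal; plus_n_O n : n = n + 0. *)
Definition fg_ext (n : nat) : f n = g n := eq_sym (plus_n_O n).

(* A closed proof of f = g obtained from the axiom. *)
Definition p : f = g := funext nat nat f g fg_ext.

(* A closed term of type nat that is stuck: the match only reduces when
   its scrutinee is eq_refl, but p never reduces past funext, so this
   term has no numeral normal form. *)
Definition stuck : nat := match p with eq_refl => 0 end.
```

Is this the right way to think about why canonicity fails, and how does decidability of type checking enter the picture?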

How is it that adding extra axioms can cause CIC to behave worse? Practical examples would be very welcome (for example, the Univalence Axiom?). I am afraid this question is too soft, but I would appreciate it if anyone could shed some light on these issues or give me some references!

PS: The Coq entry mentions that "Thierry Coquand already observed that pattern matching over intensional families is inconsistent with extensionality in the mid 90ies." Does anyone know which paper (or other source) this observation appears in?