
So, I'm familiar with two main strategies for supporting higher-ranked polymorphism in a language:

System F-style polymorphism, where functions are explicitly typed and instantiation happens explicitly through type application. These systems can be impredicative.

Subtyping-based polymorphism, where a polymorphic type is a subtype of all of its instantiations. To have decidable subtyping, polymorphism must be predicative. This paper provides an example of such a system.

However, some languages, like Haskell, have impredicative higher-ranked polymorphism without explicit type applications.
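For a concrete instance of what I mean, GHC (9.2 and later, via the Quick Look algorithm enabled by `ImpredicativeTypes`) accepts the classic `ids` example with no type application anywhere:

```haskell
{-# LANGUAGE ImpredicativeTypes #-}

-- The element type of this list is itself a polytype: GHC's Quick Look
-- instantiates the list constructor's type variable at (forall a. a -> a)
-- without any explicit type application.
ids :: [forall a. a -> a]
ids = [id, id]
```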

How is this possible? How can type-checking "know" when to instantiate a type without an explicit instantiation or cast, and without a notion of subtyping?

Or, is typechecking even decidable in such a system? Is this a case where languages like Haskell implement something undecidable that happens to work for most people's use cases?

EDIT:

To be clear, I'm interested in the uses, not definitions, of polymorphically-typed values.

For example, suppose we have:

```
f : forall a . a -> a
g : (forall a . a -> a) -> Int
val = (g f, f True, f 'a')
```

How can we know that we need to instantiate f when it's applied, but not when it's given as an argument?
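For concreteness, here is a version of this example that GHC accepts with `RankNTypes` (the bodies of `f` and `g` are placeholder definitions I've supplied; only the types matter):

```haskell
{-# LANGUAGE RankNTypes #-}

f :: forall a. a -> a
f x = x

g :: (forall a. a -> a) -> Int
g h = h 0   -- placeholder body: uses its polymorphic argument at Int

-- f is instantiated at Bool and Char where it is applied,
-- but passed to g uninstantiated.
val :: (Int, Bool, Char)
val = (g f, f True, f 'a')
```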

Or, to separate ourselves from function types:

```
f : forall a . a
g : (forall a . a) -> Int
val = (g f, f && True, f + 0)
```

Here, we can't even distinguish the uses of f by whether it's applied or merely passed along: it's instantiated when passed as an argument to && and +, but not when passed to g.
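Again for concreteness, a GHC-checked rendering (with a placeholder body for `f`, since `forall a. a` has no total inhabitant):

```haskell
{-# LANGUAGE RankNTypes #-}

f :: forall a. a
f = error "forall a. a has no total inhabitant"

g :: (forall a. a) -> Int
g x = x   -- placeholder body: instantiates its argument at Int

-- f is instantiated at Bool for (&&) and at Integer for (+),
-- but passed to g uninstantiated.
val :: (Int, Bool, Integer)
val = (g f, f && True, f + 0)
```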

How can a theoretical system distinguish these two cases without a magical "you can convert any polymorphic type to its instance" rule? Or, with such a rule, how can we know when to apply it while keeping typechecking decidable?