Type Inference
Creating unknowns
Consider inferring the type of the expression \x -> x. This is a lambda abstraction, so its type must be of the form α -> α. The type variable α stands, for now, for an unknown type; it is drawn from the Greek alphabet, which serves as a source of fresh names.
If this is a standalone expression, it would have to be a top-level declaration bound to a name, like f, and then we'd have:
The signature shows that the type of f is α -> α, where α is still some unknown type. It also shows that the type of the formal parameter x must be α; thus, any future argument must also have the same type α. Since the lambda's body consists of just x, the return type must also be α, which is already stated in the signature.
The final step, when this is a standalone (top-level) expression or the RHS of a let-expression, is generalization. In the case of a let-expression it is called let-generalization, a concept originating from ML, where it is only possible to generalize an expression in a let construct, but not a standalone lambda abstraction (even though the two can be expressed in terms of each other).
Type generalization means assigning the most general type to the free type variables, which in these two cases (top-level and let-bound expressions) means universally quantifying the type variable α:
The body of the let-bound expression (3) is represented with (…) since, at the moment, we don't know what it contains; in fact, the overall type depends on it, but for now we leave the type as forall α. (α -> α).
If this is not the entire term, that is, if we then reveal an argument in the larger expression, then this is in fact a redex. The type of the lambda abstraction may retain its polymorphic type, but the type of the entire expression (applying the lambda to the argument) depends on the argument's type.
Note: In Haskell, type annotation has higher precedence than lambda abstraction (1), but application has higher precedence than type annotation (2):
So, if the larger term was in fact (\x -> x) False, then we discover the solution u ~ Bool, and conclude that the overall type is Bool:

(\x -> x) False :: Bool
In particular, we don't generalize; instead, u stays as an unknown, and we hope that the context tells us what to do with it. This doesn't look like an important point, but it is in the big picture. By default, unless there is a reason to do otherwise, we treat u as an unknown that stands for only one type (monomorphic), even if we don't know what that type is at the moment.
We'll use names prefixed with u for unknowns (u1, u2, …). When it is correct to generalize to a polymorphic type, we'll rename the type variables to a, b, etc.
Solving for unknowns
Assuming 5 :: Int, infer the type of
We can also rely on language constructs to deduce type information: an if-then-else expression requires both branches to have the same type, so we can conclude (u1, Int) ~ (Bool, u2). Solving this equation gives u1 ~ Bool and u2 ~ Int. So we conclude that the expression e must have the type Bool -> Int -> (Bool, Int).
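As a sketch of how such equations get solved mechanically, here is a small Python illustration (the function name unify and the type encoding are our own assumptions, not from the text): unknowns are strings starting with u, and pair types are Python tuples.

```python
# A toy solver for equations between pair types, as in (u1, Int) ~ (Bool, u2).
# Unknowns are strings starting with "u"; concrete types are other strings;
# pair types are Python 2-tuples.

def unify(t1, t2, subst):
    """Record solutions for unknowns so that t1 and t2 become equal."""
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        for a, b in zip(t1, t2):
            unify(a, b, subst)
    elif isinstance(t1, str) and t1.startswith("u"):
        subst[t1] = t2              # solve the unknown on the left
    elif isinstance(t2, str) and t2.startswith("u"):
        subst[t2] = t1              # solve the unknown on the right
    elif t1 != t2:
        raise TypeError(f"type mismatch: {t1} vs {t2}")

subst = {}
unify(("u1", "Int"), ("Bool", "u2"), subst)
print(subst)   # {'u1': 'Bool', 'u2': 'Int'}
```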
Type environment
This is essentially the same as the previous expression
First, we focus on this inner expression, knowing that b must be a Bool in order to type-check.
This indicates that the algorithm keeps track of a type environment consisting of pairs of in-scope variables and their types. We should always use a type environment to keep track of all assignments of types (known or unknown) to expressions. When we unify two types, we also update the type environment to reflect it.
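A minimal Python sketch of this idea (the dictionary-based encoding is our own): the environment maps each in-scope variable to its type, known or unknown, and is extended when we enter a binder's scope.

```python
# A type environment: in-scope variables mapped to their types.
# Entering a lambda's scope extends the environment (non-destructively).

env = {"not": ("->", "Bool", "Bool")}   # e.g. a predefined library function

# While inferring \b -> not b, the parameter b is bound to a fresh unknown:
inner_env = dict(env, b="u1")

print(inner_env["b"])   # u1
print("b" in env)       # False: the outer scope is unaffected
```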
Type mismatch error
If the previous situation was instead
it'd amount to the equation (u1, Int) = (Bool, u1), which implies Int ~ Bool: a type mismatch error.
Infinite type error
Another kind of type error goes like this:
so the equation becomes u1 = [u1], which denotes an infinite type. (Theoretically, there's nothing wrong with infinite types; some languages support them, but not Haskell. It turns out they are rarely useful and extremely hard to reason about, so almost all languages ban them.)
In general, if an equation to solve is of the form

u1 = ... something that mentions u1 ...

(but not simply u1 = u1), we say that it is an infinite type error.
This error is also known by the name occurs check, after the procedure the type checker performs: it checks whether the unknown being defined on the LHS (here u1) also occurs on the RHS (in its own definition).
In the rare case that you find an infinite type useful, the recommendation is to define an ordinary recursive wrapper type:
You now have to keep wrapping and unwrapping, but the logic is clearer because the code is explicit about when it refers to a T vs a [T].
Algorithm requirements
The inference algorithm requires:
source of fresh names
unification procedure
substitution table
type environment
We have equations like (u1, Int) = (Bool, u2) that need to be solved to find out that u1 = Bool, u2 = Int. This solving is called unification; we unify the type (u1, Int) with (Bool, u2).
Because of that, we need a mutable table of solved unknowns that we update every time we solve an equation. Since each entry tells us which unknown to replace by which solution (type), this table also goes by the name substitutions.
We also need the type inference algorithm to take one more parameter: the type environment, which tells it which variables are in the current scope and what their types are. It is also a good place to hold the types of library functions (predefined, built-in functions).
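The first ingredient, a source of fresh names, can be as simple as a counter; a Python sketch (the naming is ours):

```python
import itertools

# A source of fresh names: every call returns a brand-new unknown u1, u2, ...
_counter = itertools.count(1)

def fresh():
    return f"u{next(_counter)}"

print(fresh())   # u1
print(fresh())   # u2
```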
Unification
Unification solves two problems at once: it checks whether two types match; and if they contain unsolved unknowns, it discovers whether, and with what, those unknowns can be substituted to make the two types match.
Unification doesn't return any meaningful value; it gives its answer in one of two ways: if the two types match (perhaps after substitution), it returns normally and updates the substitution table; otherwise, it aborts with a type error message.
Substitutions
A mutable table of substitutions is initially empty. Each call to the unification function may update the table. The substitutions are expressed as u := type.
For clarity, we shouldn't allow solved unknowns to appear on the RHS; that is, this situation should be avoided:
(Note: proper implementations allow this form and do clever tricks elsewhere, where it's more efficient.)
The notation for applying all substitutions in the table to a type, applySubst ty, returns the rewritten type (implementation omitted). For example:
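Since applySubst's implementation is omitted in the text, here is one plausible Python sketch of its behavior (the name apply_subst and the type encoding are our own):

```python
# apply_subst rewrites a type, replacing each solved unknown by its solution;
# it recurses into compound types. Types: strings ("Int", "u1") or tuples
# such as ("->", arg, result).

def apply_subst(subst, ty):
    if isinstance(ty, tuple):
        return tuple(apply_subst(subst, part) for part in ty)
    while ty in subst:              # follow chains like u1 := u2, u2 := Int
        ty = subst[ty]
    return ty

subst = {"u1": "u2", "u2": "Int"}
print(apply_subst(subst, ("->", "u1", "Bool")))   # ('->', 'Int', 'Bool')
```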
Unification algorithm
The type inference algorithm is not always aware that an unknown is already solved, i.e., that an entry for it already exists in the substitution table, so it may call the unification function with such unknowns in its parameters. For clarity, we first use applySubst on the parameters, which leaves several simpler cases. (Note: real implementations don't need this; they play clever and efficient tricks elsewhere to compensate.)
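Under those simplifying assumptions, the unification function might look like this in Python (all names and the type encoding are ours; the occurs check from the previous section is included):

```python
# A sketch of the unification function: apply the substitutions first, then
# only a few simple cases remain. Types: strings for unknowns ("u...") and
# base types; tuples for compound types such as ("->", a, b).

def apply_subst(subst, ty):
    if isinstance(ty, tuple):
        return tuple(apply_subst(subst, p) for p in ty)
    while ty in subst:
        ty = subst[ty]
    return ty

def occurs(u, ty):
    return ty == u or (isinstance(ty, tuple) and any(occurs(u, p) for p in ty))

def unify(subst, t1, t2):
    t1, t2 = apply_subst(subst, t1), apply_subst(subst, t2)
    if t1 == t2:
        return                                   # already equal: nothing to do
    if isinstance(t1, str) and t1.startswith("u"):
        if occurs(t1, t2):
            raise TypeError(f"infinite type: {t1} = {t2}")
        subst[t1] = t2                           # solve the unknown
    elif isinstance(t2, str) and t2.startswith("u"):
        unify(subst, t2, t1)                     # mirror case
    elif isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            unify(subst, a, b)                   # unify component-wise
    else:
        raise TypeError(f"type mismatch: {t1} vs {t2}")

subst = {}
unify(subst, ("->", "u1", "u1"), ("->", "Bool", "u2"))
print(apply_subst(subst, "u2"))   # Bool
```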
Another example
Type inference algorithm
Notation:
infer env term
Procedure call to infer the type of term; variables and their types are looked up in env, the type environment. The answer is meant to be monomorphic, i.e., if there are unsolved unknowns, let them be; don't generalize to a polymorphic type.
inferPoly env term
Call infer, then generalize to a polymorphic type.
Example use case:
to infer let f = \x -> x in ..., we will call inferPoly on \x -> x
On success, both return the inferred type; on failure, both throw an error.
Again, to keep this implementation simple, we also apply applySubst to all answers before returning them.
Case: literal
Similarly for other literals...
Case: variable
Example steps of needing instantiation:
Case: lambda
Remark: Although u starts as a new unknown, after the recursive call for expr, u may have been solved, so it may need a rewrite, even though T doesn't.
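The lambda case, including the remark about rewriting u after the recursive call, can be sketched in Python (infer, fresh, and the term encoding are our own stand-ins for the text's procedures):

```python
import itertools

# A toy infer covering just variables and lambdas.
# Terms: ("var", x) and ("lam", x, body). Types: strings and ("->", a, b).

_counter = itertools.count(1)

def fresh():
    return f"u{next(_counter)}"

def apply_subst(subst, ty):
    if isinstance(ty, tuple):
        return tuple(apply_subst(subst, p) for p in ty)
    while ty in subst:
        ty = subst[ty]
    return ty

def infer(subst, env, term):
    if term[0] == "var":
        return apply_subst(subst, env[term[1]])
    if term[0] == "lam":
        u = fresh()                     # new unknown for the parameter
        t = infer(subst, dict(env, **{term[1]: u}), term[2])
        # u may have been solved during the recursive call, so rewrite it
        return ("->", apply_subst(subst, u), t)
    raise ValueError(f"unknown term: {term!r}")

print(infer({}, {}, ("lam", "x", ("var", "x"))))   # ('->', 'u1', 'u1')
```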
Example presentation of steps:
For lambdas of two or more parameters, you could treat them as nested lambdas, \x -> \y -> expr, but that's tedious. Let's shortcut it:
Example:
Case: Function application
For multiple parameters, e.g., f e1 e2, you can again either treat it as (f e1) e2 or do the obvious shortcut.
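A hedged Python sketch of the application case, reusing toy versions of unify and infer (all names and encodings are ours; the occurs check is omitted for brevity): infer the function's type, infer the argument's type, invent a fresh unknown for the result, and unify.

```python
import itertools

# Toy inference for the application case: for f e, infer both types, make a
# fresh unknown u for the result, and unify typeof(f) with typeof(e) -> u.
# Terms: ("lit", T), ("var", x), ("lam", x, body), ("app", f, e).

_counter = itertools.count(1)

def fresh():
    return f"u{next(_counter)}"

def apply_subst(s, ty):
    if isinstance(ty, tuple):
        return tuple(apply_subst(s, p) for p in ty)
    while ty in s:
        ty = s[ty]
    return ty

def unify(s, t1, t2):
    # Occurs check omitted for brevity.
    t1, t2 = apply_subst(s, t1), apply_subst(s, t2)
    if t1 == t2:
        return
    if isinstance(t1, str) and t1.startswith("u"):
        s[t1] = t2
    elif isinstance(t2, str) and t2.startswith("u"):
        s[t2] = t1
    elif isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            unify(s, a, b)
    else:
        raise TypeError(f"type mismatch: {t1} vs {t2}")

def infer(s, env, term):
    if term[0] == "lit":
        return term[1]
    if term[0] == "var":
        return apply_subst(s, env[term[1]])
    if term[0] == "lam":
        u = fresh()
        t = infer(s, dict(env, **{term[1]: u}), term[2])
        return ("->", apply_subst(s, u), t)
    if term[0] == "app":
        tf = infer(s, env, term[1])
        te = infer(s, env, term[2])
        u = fresh()
        unify(s, tf, ("->", te, u))   # typeof f ~ typeof arg -> result
        return apply_subst(s, u)

s = {}
# (\x -> x) False, with False encoded as a Bool literal
result = infer(s, {}, ("app", ("lam", "x", ("var", "x")), ("lit", "Bool")))
print(result)   # Bool
```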
Example steps:
Case: let (single definition, non-recursive)
People expect these to be legal:
This means that when "let" is used for local definitions (and similarly for "where"), we need to generalize an inferred type to a polymorphic type; e.g., \x -> x is normally just u -> u, where u is an unknown, but here it is generalized to ∀a. a -> a, so f is polymorphic.
Example:
The following example explains why we don't generalize an unknown that appears in the type environment. We expect these:
Explanation: a lambda gives the parameter x one single type, so if y = x, then y also takes that single type.
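Generalization with this caveat can be sketched in Python (generalize, unknowns, and the encoding are our own): quantify exactly the unknowns of the inferred type that do not also occur in the type environment.

```python
# Generalization: universally quantify the unknowns of an inferred type,
# EXCEPT those that also appear in the type environment (they are still
# constrained from outside). Types: strings and tuples; unknowns start "u".

def unknowns(ty):
    if isinstance(ty, tuple):
        return set().union(*(unknowns(p) for p in ty))
    return {ty} if ty.startswith("u") else set()

def generalize(env, ty):
    env_unknowns = set().union(set(), *(unknowns(t) for t in env.values()))
    return ("forall", sorted(unknowns(ty) - env_unknowns), ty)

# let-bound \x -> x, inferred as u1 -> u1 in an empty environment:
print(generalize({}, ("->", "u1", "u1")))
# ('forall', ['u1'], ('->', 'u1', 'u1'))

# But if u1 also occurs in the environment, it must NOT be quantified:
print(generalize({"x": "u1"}, ("->", "u1", "u1")))
# ('forall', [], ('->', 'u1', 'u1'))
```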
Case: if-then-else
Algebraic data types (unparameterized)
An algebraic data type definition gives you data constructors so you can make terms of that type, and pattern matching so you can use terms of that type. Accordingly, it adds a group of type inference rules for making terms, and a type inference rule for pattern matching.
I will illustrate by an example type, then you extrapolate to other types.
Type construction: add these rules:
Do you have to add those rules? Not really, but if you are hand-executing type inference on examples, it is faster to add and use them.
If you are coding up a type inference algorithm, then it is less coding to allow type environments to also contain data constructors by including
and to recall that Has expr can be treated as function application. This is easier to code up, but more tedious and less insightful to execute by hand.
Using pattern matching:
For simplicity I just cover case-expressions with simple patterns:
We definitely need to add a custom rule for pattern matching against the data constructors.
Algebraic data types (parameterized)
Parameterized algebraic data types are similar, but there are additional unknowns to create for instantiating the polymorphic types.
Example
The data constructors are polymorphic:
Type construction:
Two ways to explain why Just expr is that simple:

If expr :: T, then Just expr :: Maybe T, "obviously".

If you do it the careful way:

That's always Maybe T.
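The careful way can be sketched in Python (instantiate and the type encoding are our own): instantiate Just's polymorphic type at a fresh unknown; the result position is then Maybe of that unknown, so after unifying the argument position with expr's type T, the application has type Maybe T.

```python
# Instantiating Just :: forall a. a -> Maybe a at a fresh unknown u1.
# Simplified: the scheme has a single quantified variable.

def instantiate(scheme, fresh_unknown):
    """Replace the quantified variable in the scheme with the fresh unknown."""
    qvars, ty = scheme
    def go(t):
        if isinstance(t, tuple):
            return tuple(go(p) for p in t)
        return fresh_unknown if t in qvars else t
    return go(ty)

just_scheme = (["a"], ("->", "a", ("Maybe", "a")))
just_ty = instantiate(just_scheme, "u1")
print(just_ty)   # ('->', 'u1', ('Maybe', 'u1'))
# Unifying the argument position u1 with expr's type T then gives
# Just expr :: ('Maybe', T) -- always Maybe of something.
```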
Using pattern matching
Extrapolate this to standard parameterized types such as lists and tuples.
The 2-tuple type can be understood as
except for the syntactic change from MkTuple e1 e2 :: Tuple2 T1 T2 to (e1, e2) :: (T1, T2).
The list type can be understood as
except for the syntactic change from Nil to [] and from Cons e1 e2 to e1 : e2.
Not covered this time
Multiple definitions
Recursive definitions
Programmer-provided type signatures
Type classes