So far, we’ve learned how to model complex data structures with algebraic and recursive data types. However, a major remaining issue is duplication: functions that could theoretically apply to many different types have to be rewritten for each new type. In the simplest case, consider the identity function. Each type requires its own identity:
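For instance, in OCaml, a sketch of what this duplication looks like (the `id_*` names are illustrative):

```ocaml
(* Without polymorphism, each type needs its own identity function. *)
let id_int (x : int) : int = x
let id_string (x : string) : string = x
let id_bool (x : bool) : bool = x

let () =
  assert (id_int 3 = 3);
  assert (id_string "hi" = "hi");
  assert (id_bool true = true)
```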

This same issue holds for our data structures. Recall our list type from last time could only hold ints:
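Reconstructed from last lecture (the constructor names are assumed to match the polymorphic version below):

```ocaml
(* A list that can only hold ints: the element type is hard-coded. *)
type int_list = Nil | Cons of int * int_list

let example = Cons (1, Cons (2, Nil))
```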

In this lecture, we’re going to explore different ways to write generic, or polymorphic, programs that apply to many different types.

## Polymorphism by example

The essence of polymorphism is a program that works on multiple types. The most basic kind of polymorphism is parametric polymorphism (also called “generics”): a program that works on all possible types, like an identity function for every type or a list for every type. For example, in OCaml, a polymorphic list:

type 'a list = Nil | Cons of 'a * 'a list

let list1 : string list = Nil in
let list1 = Cons("Hello world", list1) in

let list2 : int list = Nil in
let list2 = Cons(0, list2) in
...


Note the difference in type syntax: while most languages use List&lt;T&gt; or List[T], i.e. “postfix” polymorphic types, OCaml uses t list, i.e. “prefix” polymorphic types.

For container types like lists, it’s critically important to define generic functions that work for lists of any type. For example, to create a function that computes a list’s length:

let rec list_len l =
  match l with
  | Cons (_, l') -> 1 + (list_len l')
  | Nil -> 0
in
Printf.printf "%d %d" (list_len list1) (list_len list2)


The type of l is 'a list, read as “alpha list”. Here, 'a is a type variable (just like we saw last lecture). This means “this function is defined over lists that contain any type 'a”. We can use list_len with our string list and int list above. Here are a few more examples of polymorphic functions:

(* Pairs are polymorphic. Note that 'a * 'b means the two types can be different.
* If we wrote 'a * 'a, this would mean a pair where both types are the same.
* reverse_pair : 'a * 'b -> 'b * 'a *)
let reverse_pair pair =
  let (x, y) = pair in
  (y, x)

(* Option types are polymorphic. We can define generic functions over options,
* like this one that applies a function to the result if it exists.
* option_map : ('a -> 'b) -> 'a option -> 'b option *)
let option_map f opt =
  match opt with
  | Some(x) -> Some(f x)
  | None -> None
in
option_map (fun n -> Printf.sprintf "%d" n) (Some 3) = (Some "3")


Aside: if you’ve seen templates in C++, those are not an example of typed polymorphism. C++ templates are instead untyped polymorphism: they allow you to define functions that operate over many types, but the function is type-checked when used, not when defined. For example, this program compiles:

#include <iostream>

template<typename T>
void print(T t) {
  t.print();
}

class Foo {
public:
  void print() { std::cout << "Foo"; }
};

int main() {
  print(Foo());
}


But if I attempt to use print over a type that doesn’t have the print function:

class Bar {};

int main() {
  print(Bar());
}


Then this raises an error:

test.cpp:5:5: error: no member named 'print' in 'Bar'
t.print();
~ ^
test.cpp:12:3: note: in instantiation of function template specialization
'print<Bar>' requested here
print(Bar());
^
1 error generated.


This happens because C++ templates are actually a metaprogramming mechanism: they are only typechecked when instantiated at a specific type, rather than being typechecked once, when the generic function is defined.

## Polymorphism theory

Recall that lambdas represent the essence of abstraction—they say, “this function works over every possible input expression [of the given type].” The encoding of polymorphism in the lambda calculus is this same idea, except lifted into types: “this function works over every possible input type.” For example, here’s the polymorphic identity function:
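As a sketch, in the System F notation these notes use:

$$\msf{id} = \Lambda \, \alpha . \ \lambda \, (x : \alpha) . \ x$$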

Here, $\Lambda$ (capital $\lambda$) represents a type function that parameterizes its function body with a type variable $\alpha$. To apply a type function to a type argument, we use bracket syntax, distinguishing type function application from expression function application. The type of $\msf{id}$ is written as:
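Using the “for all” constructor introduced next:

$$\msf{id} : \tpoly{\alpha}{\alpha \to \alpha}$$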

Just like we introduced $\mu$ last time for recursive types, now we introduce the “for all” type constructor that specifies a type valid over all input types $\alpha$. Formally, our syntax is now:
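A sketch of the additions (the grammar also keeps every form from previous lectures):

$$\begin{aligned}
\tau &::= \ldots \mid \alpha \mid \tpoly{\alpha}{\tau} \\
e &::= \ldots \mid \Lambda \, \alpha . \ e \mid e \, [\tau]
\end{aligned}$$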

As with recursive types, all the interesting semantics for polymorphic types lie in the statics, not the dynamics.
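The static rules can be sketched as follows (substitution of $\tau'$ for $\alpha$ written $[\alpha \mapsto \tau'] \, \tau$):

$$\frac{\Gamma \vdash e : \tau}{\Gamma \vdash \Lambda \, \alpha . \ e : \tpoly{\alpha}{\tau}} \ \text{(T-Poly)}
\qquad
\frac{\Gamma \vdash e : \tpoly{\alpha}{\tau}}{\Gamma \vdash e \, [\tau'] : [\alpha \mapsto \tau'] \, \tau} \ \text{(T-Poly-App)}$$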

The T-Poly rule says that when the body of a type function has some type $\tau$ (that likely includes type variables), then the type function over that body has type $\tpoly{\alpha}{\tau}$. To apply a type function to a type in T-Poly-App, we check that the left expression actually has a type function type, and then do a substitution, this time in the type instead of the expression. For example, here’s a type derivation for using the identity function:
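A sketch of the derivation, applying $\msf{id}$ first to the type $\msf{int}$ and then to a number:

$$\frac{
  \dfrac{\vdash \msf{id} : \tpoly{\alpha}{\alpha \to \alpha}}
        {\vdash \msf{id} \, [\msf{int}] : \msf{int} \to \msf{int}} \ \text{T-Poly-App}
  \qquad \vdash 5 : \msf{int}
}{\vdash \msf{id} \, [\msf{int}] \ 5 : \msf{int}} \ \text{T-App}$$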

Now we can combine our polymorphic functions with recursive types to finally build our polymorphic list type:
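One standard encoding, combining $\mu$ from last lecture with a type variable $\alpha$ (a sketch):

$$\msf{list} \ \alpha \ \triangleq \ \mu \, \beta . \ \msf{unit} + (\alpha \times \beta)$$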

For example, a polymorphic empty list function:
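A sketch, assuming the $\msf{fold}$ and left-injection ($\msf{L}$) notation from last lecture:

$$\msf{empty} = \Lambda \, \alpha . \ \msf{fold} \, (\msf{L} \, (\,(\,)\,)) \ : \ \tpoly{\alpha}{\msf{list} \ \alpha}$$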

At this point, it’s worth looking back at OCaml to see how far the syntax has diverged. Here’s our custom OCaml polymorphic list:

type 'a list = Nil | Cons of 'a * 'a list

let empty () = Nil
let append l x = Cons(x, l)


There are a few big differences:

1. Labeled variants. Having “Nil” and “Cons” instead of “L” and “R” makes the OCaml code more readable.
2. No explicit recursive types or folds/unfolds. This is because OCaml implicitly makes every variant a recursive type, and then automatically adds folds/unfolds where necessary behind the scenes.
3. No explicit polymorphism. In our typed lambda calculus, we need to add type functions and type function application, while these also happen behind the scenes in OCaml.

Even cooler, OCaml can infer literally every type, zero annotations required (this uses polymorphic variants):

let empty () = Nil
let append l x = Cons (x, l)
let rec len l = match l with
  | Cons (_, l') -> 1 + (len l')
  | Nil -> 0
(* val len : ([< Cons of 'b * 'a | Nil ] as 'a) -> int = <fun> *)


## Functional programming

At this point in the course, you’ve now encountered many of the big ideas in functional programming. Since this is our last lecture on type theory and OCaml, I want to take a step back and reflect on the programming patterns we’ve seen so far.

1. Functions first. Features of powerful functions include:
• Define them anywhere. Languages like C, C++, and older versions of Java cripple functions, since they can only be defined at the top level. Functional languages allow functions to be defined basically anywhere, including inside other functions that “close” over their environment (sometimes referred to as “closures” in other languages).
• Higher-order functions. Functions can take other functions as inputs, and return functions as outputs. You’ll see this in greater detail on the coming assignment, but higher-order functions allow expressive interfaces that enable us to interact with data structures using better tools than just for loops.
• Partial application. When functions are written in “curried” form, i.e. multiple arguments are defined not through tuples but through multiple functions, we can pass them less than all of their arguments and get back a function that remembers the ones passed in.
2. Expressions over statements. While most languages have statements that implicitly modify some global or local store (e.g. heap, stack, registers, …), everything is an expression in functional programs. Variable bindings (let expressions) are expressions, functions are expressions, if clauses are expressions, match clauses are expressions. This means our syntax is more consistent, and means we can compose the language in new ways. (Try putting a printf inside a ternary operator in C.)

3. Reified state. Perhaps the biggest departure from imperative programming is that state (and subsequently mutability) must be managed explicitly. For loops don’t exist; instead, they must be written as recursive functions, where the state of the loop is captured in the parameters of the function. Mutable data structures don’t exist; you get a new data structure every time you change an old one. The benefits of this are hard to capture in bite-sized homeworks, but in the real world, minimizing state is one of the most critical vectors for eliminating bugs in production code.

4. Expressive types. Functional languages come with wonderfully useful tools for modeling data types like the ones we’ve seen the last three lectures: sum types, recursive types, and polymorphic types. These languages also usually come equipped with powerful type inference that can reduce the verbosity in dealing with these types while preserving maximum safety. Pattern matching in both match and let expressions reduces boilerplate when dealing with complex data structures.
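To make higher-order functions and partial application (points from item 1 above) concrete, here’s a small sketch; the function names are illustrative:

```ocaml
(* A curried function: arguments are taken one at a time. *)
let add x y = x + y

(* Partial application: supplying only the first argument returns a
   function that remembers it. *)
let add3 = add 3

(* A higher-order function: map takes another function as input. *)
let rec map f l =
  match l with
  | [] -> []
  | x :: l' -> f x :: map f l'

let () =
  assert (add3 4 = 7);
  assert (map add3 [1; 2; 3] = [4; 5; 6])
```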