In the past few lectures, we looked at how to apply our PL metalanguage to formally specify a low-level language, WebAssembly. There, the goal was to see an imperative analog to all the work you did with formalizing the lambda calculus. Most of the type theory we explored in OCaml and WebAssembly was simply a formalization of ideas you’ve probably already seen in one language or another.

For the remainder of the course, we’re shifting away from formalisms of existing type theories to exploration of exciting new type theories. We’re going to look at different kinds of invariants we can encode in our type systems, like memory safety, data race avoidance, provably correct state machines, and more. At each step, we’re going to ask: what’s a source of bugs in systems programming? And how can we use a type system to statically identify those bugs?

To aid in this exploration, we are going to shift to our final programming language: Rust. Rust is perfect for this course because it sits squarely at the intersection of programming language theory and systems programming, using functional programming ideas to improve the correctness and speed of systems software. In learning Rust, I really want you to identify connections between Rust and its precursors, C (which you know from before) and OCaml (which you know now from this course). Don’t learn Rust from scratch—map it onto the set of concepts you have, and realize you already know most of Rust!

Beyond the surface differences that you’ll see in lab and on the assignment, Rust has two main features that distinguish it from OCaml and C: borrow checking and traits. Over the next three weeks, we will explore the many ways in which these features can be applied to improve systems programming. Today, we’re going to start with borrow checking and its most essential application: ensuring memory safety.

Memory safety

Part of the reason WebAssembly is amenable to formalization is because its type system is inexpressive. Like most assembly languages, it only has numeric types, lacking any concept of composite data structures (e.g. algebraic data types, recursive types, polymorphic types). That means when using memory in WebAssembly, there’s no meaningful notion of an “uninitialized” value, since any (in-bounds) memory address is a valid number.

By contrast, in the lambda calculus, we also had no concerns about uninitialized values because there was only ever one way of constructing or accessing values. But in languages like C, memory represents a “back door” for accessing values created by the program. Memory is interpretable both as a sequence of numbers and, if you squint hard enough, as a container for more complex data types. If we aren’t careful, we can easily convert between these two representations in unsafe ways. For example, this is a valid C program:

#include <stdio.h>

struct point_t { int x; int y; };

int main() {
  struct point_t p;
  p.x = 5;
  p.y = 65;

  struct point_t p2;

  int* ptr = (int*) &p;
  printf("%d\n", *ptr);                // 5
  printf("%d\n", *(ptr + 1));          // 65
  printf("%c\n", *((char*)(ptr + 1))); // A
  printf("%d\n", *((int*)(&p2 + 1)));  // 5

  return 0;
}
It’s worth appreciating how horrendous this is. I can take a valid pointer to a real value (a point), cast it to a different type of a different size and perform arbitrary arithmetic on the pointer. I simultaneously rely on multiple assumptions about the size of types, the layout of structs, and even the relative layout of two variables on the stack!

Ideally, when programming with memory, we want two program properties:

  1. Memory safety is the property of a program where every pointer used always points to valid memory1, i.e. allocated and of the correct type/size. Memory safety is a correctness issue—a memory unsafe program may crash or produce nondeterministic output depending on the bug.
  2. Memory containment (a term of my own invention2) is the property of a program where memory does not leak, i.e. if a piece of memory is allocated, either it is reachable from the root set of the program, or it will be deallocated eventually. Memory containment is a performance issue—a leaky program may eventually run out of memory3.

In garbage-collected (GC) languages (e.g. Python and Java), memory safety is guaranteed for all data allocated within the language runtime, assuming a correct implementation of the garbage collector. These languages abstract away memory as an implementation detail, e.g. there are no raw pointers in Python. Memory containment is guaranteed for tracing garbage collectors (as in Java), but not necessarily for reference-counting garbage collectors (as in Python) in the presence of cycles.
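Rust’s own reference-counted pointer, Rc, makes the cycle problem concrete (we’ll meet Rc properly later). A minimal sketch, with a hypothetical Node type, of how a cycle defeats reference counting:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node in a potentially-cyclic linked structure.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Builds a two-node cycle a -> b -> a and reports a's reference count.
fn make_cycle() -> usize {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b);
    Rc::strong_count(&a) // 2: one from the local, one from b's pointer
}

fn main() {
    println!("{}", make_cycle());
    // When the locals die, each node's count drops to 1, never 0, so
    // neither node is freed: a containment failure, though still safe.
}
```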

In non-GC languages, i.e. low-level languages like C, C++ and Rust, these memory properties must either be guaranteed by the compiler via static analysis (C++ RAII, Rust’s borrow checker), or they must be carefully managed by the programmer at runtime (malloc/free, new/delete).

Difficulty of memory safety

It is a worthwhile exercise to work through an example of moderate complexity to understand the depth of problems that can occur when dealing with memory in C, and to appreciate how modern static analysis tools can prevent such bugs. Below, I have provided an implementation of a vector library (or resizable array) specialized for integers written in C. It contains at least 7 bugs relating to the properties of memory safety and containment. Take a few minutes to find them, and then we will compare it with an equivalent Rust implementation4.

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

// There are at least 7 bugs relating to memory on this snippet.
// Find them all!

// Vec is short for "vector", a common term for a resizable array.
// For simplicity, our vector type can only hold ints.
typedef struct {
  int* data;     // Pointer to our array on the heap
  int  length;   // How many elements are in our array
  int  capacity; // How many elements our array can hold
} Vec;

Vec* vec_new() {
  Vec vec;
  vec.data = NULL;
  vec.length = 0;
  vec.capacity = 0;
  return &vec;
}

void vec_push(Vec* vec, int n) {
  if (vec->length == vec->capacity) {
    int new_capacity = vec->capacity * 2;
    int* new_data = (int*) malloc(new_capacity);
    assert(new_data != NULL);

    for (int i = 0; i < vec->length; ++i) {
      new_data[i] = vec->data[i];
    }

    vec->data = new_data;
    vec->capacity = new_capacity;
  }

  vec->data[vec->length] = n;
  vec->length += 1;
}

void vec_free(Vec* vec) {
  free(vec);
  free(vec->data);
}

void main() {
  Vec* vec = vec_new();
  vec_push(vec, 107);

  int* n = &vec->data[0];
  vec_push(vec, 110);
  printf("%d\n", *n);

  free(vec->data);
  vec_free(vec);
}


Don’t look past here until you’re ready to see the answers.

Let’s review. Here are the bugs:

  1. vec_new: vec is stack-allocated. This is an example of a dangling pointer. The line Vec vec; allocates the struct on the current stack frame and returns a pointer to that struct; however, the stack frame is deallocated when the function returns, so any subsequent use of the pointer is invalid. A proper fix is to either heap allocate (malloc(sizeof(Vec))) or change the type signature to return the struct itself, not a pointer.

  2. vec_new: initial capacity is 0. When vec_push is called, the capacity will double, but 2 * 0 = 0, resulting in no additional memory being allocated, so space for at least 1 element needs to be allocated up front.

  3. vec_push: incorrect call to malloc. The argument to malloc is the size of memory in bytes to allocate; however, new_capacity is simply the number of integers. We need to malloc(sizeof(int) * new_capacity).

  4. vec_push: missing free on resize. When the resize occurs, we reassign vec->data without freeing the old data pointer, resulting in a memory leak.

  5. vec_free: incorrect ordering on the frees. After freeing the vector container, the vec->data pointer is no longer valid. We should free the data pointer and then the container.

  6. main: double free of vec->data. We should not be freeing the vector’s data twice, instead only letting vec_free do the freeing.

  7. main: iterator invalidation of n. This is the most subtle bug of the lot. We start by taking a pointer to the first element in the vector. However, the subsequent call to vec_push causes a resize to occur, freeing the old data and allocating a new array. Hence, our old n is now a dangling pointer, and dereferencing it in the printf is memory unsafe. This is a special case of a general problem called iterator invalidation, where a pointer into a container is invalidated when the container is modified.

Wow! We managed to pack a lot of bugs into a single program. Still, this program is valid C code; it will successfully compile with no extra flags. All of this is to show that memory safety is a hard problem. Manual memory management, as in C/C++, is easy to get wrong, and the causes can be subtle. The null pointer is, after all, the billion dollar mistake.
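For contrast, here is a sketch of the same usage pattern with Rust’s built-in Vec (the helper function name is mine, for illustration). Allocation, resizing, and freeing are handled by the library and compiler, and the one remaining bug—iterator invalidation—is rejected at compile time if you try to write it:

```rust
// The C program's logic, using Rust's built-in Vec. Bugs 1-6 cannot
// occur because the library computes sizes and inserts frees for us.
fn first_after_push() -> i32 {
    let mut vec: Vec<i32> = Vec::new();
    vec.push(107);
    let n = &vec[0]; // shared borrow of vec's contents
    // vec.push(110); // uncommenting this is a compile error:
    //                // cannot borrow `vec` as mutable while `n` is live
    *n
}

fn main() {
    println!("{}", first_after_push()); // 107
}
```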

Automatic memory management, then, has clear upsides. That’s why every major programming language today outside C, C++, and Rust uses a garbage collector: it makes programmers significantly more productive and less bug-prone while imposing an acceptable runtime overhead. However, a garbage collector is not a panacea for memory management. Garbage collectors are an issue if:

  • You have significant memory constraints. Programmers for embedded devices prefer to carefully craft programs for minimal memory footprint when their device has only 64KB of RAM.
  • You have real-time performance constraints. Most tracing garbage collectors involve some kind of unpredictable stop-the-world overhead, which can be killer for apps that have functionality that needs to be always running. I know there are people in high-frequency trading paid absurd sums of money solely to optimize the Java GC.
  • You want high performance computation. Careful control over memory layout can avoid cache thrashing by improving spatial locality, or permit vectorized operations over adjacent memory regions. Low-level memory control can often reduce accidental/unnecessary memory allocations.
  • You have more than one garbage collector. Garbage collectors rarely interoperate with one another, and they usually assume complete control of a system.

What if we could have the best of both worlds: automated, yet static garbage collection—a tool that automatically determines allocation and deallocation points at compile time and will never cause a segfault? In Rust, this tool is called the borrow checker, and today we’ll look at how it works.

Rust vs. OCaml

Before we dive into the borrow checker, let’s briefly get a feel for Rust’s syntax and semantics. We cover this in the Rust lab, but here’s a brief refresher. On its surface, Rust is a reskinned OCaml with a swirl of C. For example, this OCaml program:

let f (m : int) (n : int) : int =
  let mm = 2 * m in
  mm + n

let main () =
  Printf.printf "%d" (f 3 5)

Is the same as this Rust program:

fn f(m: i32, n: i32) -> i32 {
  let mm = 2 * m;
  mm + n
}

fn main() {
  println!("{}", f(3, 5));
}
Note that Rust does not have currying, but it is expression-oriented. You should treat the semicolon in let x = y; z like the in keyword in OCaml—it defines an expression, not a statement as in C. Rust also has strings, algebraic data types, recursive types, and polymorphism just like OCaml.
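To make the expression-orientation concrete, here is a small illustrative example (the function name is mine): a block evaluates to its final, semicolon-free expression, exactly like let ... in ... in OCaml.

```rust
// A block `{ let x = e1; e2 }` is an expression whose value is e2,
// just like `let x = e1 in e2` in OCaml.
fn double_plus(m: i32, n: i32) -> i32 {
    let mm = {
        let two = 2;
        two * m // no trailing semicolon: this is the block's value
    };
    mm + n
}

fn main() {
    println!("{}", double_plus(3, 5)); // 11
}
```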

// Polymorphic record for points. Equivalent to type 'a point = {x : 'a; y : 'a}.
struct Point<T> {
  x: T, y: T
}

// Polymorphic function. T: ToString means "all types T where T has a to_string
// method." ToString is a trait, which we will discuss in depth next class.
fn point_print<T: ToString>(p: Point<T>) {
  println!("({}, {})", p.x.to_string(), p.y.to_string());
}

// Polymorphic recursive sum type for lists. Note that we have to wrap the
// recursive reference in a Box<...>. This ensures the type has a known size in
// memory (more on this later).
enum List<T> {
  Cons(T, Box<List<T>>),
  Nil
}

fn main() {
  let p1: Point<i32> = Point { x: 1, y: 2 };
  let p2: Point<f32> = Point { x: 3.1, y: 4.2 };
  point_print(p1); // (1, 2)
  point_print(p2); // (3.1, 4.2)

  // Rust has type inference, so it can infer the type of `l` as List<i32>.
  let l = List::Cons(5, Box::new(List::Nil));
  // Rust has match statements just like OCaml.
  let n = match l {
    List::Cons(n, _) => n,
    List::Nil => -1
  };
  println!("{}", n); // 5
}

However, our OCaml-driven expectations start to diverge once we attempt a particular kind of programming pattern:

fn main() {
  let s: String = "Hello world".to_string();
  let s2: String = s;
  println!("{} {}", s, s2);
}

If we run this by the compiler, it says:

error[E0382]: use of moved value: `s`
3 |   let s2: String = s;
  |       -- value moved here
4 |   println!("{} {}", s, s2);
  |                     ^ value used here after move
  = note: move occurs because `s` has type `std::string::String`, which does not implement the `Copy` trait

Here, we’ve run into our first conflict with the borrow checker.
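For reference, one way to make this program compile—assuming we genuinely want two independent strings—is to clone rather than move. The duplicate helper here is hypothetical, purely for illustration:

```rust
// Clone makes an independent deep copy, so the original is not moved.
fn duplicate(s: String) -> (String, String) {
    let s2 = s.clone(); // `s` remains valid after this line
    (s, s2)
}

fn main() {
    let (s, s2) = duplicate("Hello world".to_string());
    println!("{} {}", s, s2);
}
```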

Borrow checking

The Rust Book already has some excellent sections explaining the basics of the borrow checker. Read through all of What is Ownership? and References and Borrowing before proceeding.
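As a quick self-check after the reading, here is a sketch (the function name is my own) that exercises both rules: any number of shared borrows may coexist, and a mutable borrow becomes legal once the shared borrows are no longer used.

```rust
// Shared borrows can coexist; a mutable borrow must be unique.
fn exercise_borrows() -> String {
    let mut s = String::from("hello");
    let r1 = &s;
    let r2 = &s; // multiple shared borrows: fine
    assert_eq!(r1.len() + r2.len(), 10);
    // r1 and r2 are no longer used past this point, so a mutable
    // borrow is now permitted (non-lexical lifetimes).
    let r3 = &mut s;
    r3.push_str(" world");
    s
}

fn main() {
    println!("{}", exercise_borrows()); // hello world
}
```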

Guaranteed memory safety

Tying this back to the motivation: how do these features ensure memory safety? Let’s take a look at a few examples of programs that should be memory-unsafe and see how Rust identifies their errors.

Dangling pointers

fn foo() -> &i32 {
  let n = 0;
  &n
}
error[E0106]: missing lifetime specifier
1 | fn foo() -> &i32 {
  |             ^ expected lifetime parameter
  = help: this function's return type contains a borrowed value, but there is no value for it to be borrowed from
  = help: consider giving it a 'static lifetime

Here, we attempt to return a pointer to a value owned by n, where the pointer would nominally live longer than the owner. Rust identifies that it’s impossible to return a pointer to something without a “value for it to be borrowed from.”
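The simplest fix is to hand the caller ownership of the value, rather than a reference into the soon-to-be-dead stack frame. A sketch:

```rust
// Return by value: ownership of `n` moves out to the caller, so no
// reference to foo's stack frame ever escapes.
fn foo() -> i32 {
    let n = 0;
    n
}

fn main() {
    println!("{}", foo()); // 0
}
```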


Use after free

use std::mem::drop; // equivalent to free()

fn main() {
  let x = "Hello".to_string();
  drop(x);
  println!("{}", x);
}
error[E0382]: use of moved value: `x`
5 |   drop(x);
  |        - value moved here
6 |   println!("{}", x);
  |                  ^ value used here after move
  = note: move occurs because `x` has type `std::string::String`, which does not implement the `Copy` trait

Due to Rust’s ownership semantics, when we free a value, we relinquish ownership of it, which means subsequent attempts to use the value are no longer valid. This also protects against double frees, since two calls to drop would encounter a similar ownership type error.
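To see the same move in function-call form, here is a sketch with a hypothetical free_string helper that takes ownership of its argument, so the caller provably cannot use (or re-free) the string afterwards:

```rust
use std::mem::drop;

// Takes ownership of its argument and frees it. The caller's binding
// is moved-from, so any later use is a compile-time error.
fn free_string(s: String) -> usize {
    let len = s.len();
    drop(s); // explicit; `s` would also be freed at end of scope
    len
}

fn main() {
    let x = "Hello".to_string();
    let len = free_string(x);
    // println!("{}", x); // error[E0382]: `x` was moved into free_string
    println!("{}", len); // 5
}
```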

Iterator invalidation

fn push_all(from: &Vec<i32>, to: &mut Vec<i32>) {
  for elem in from.iter() {
    to.push(*elem);
  }
}

fn main() {
  let mut vec = Vec::new();
  push_all(&vec, &mut vec);
}
error[E0502]: cannot borrow `vec` as mutable because it is also borrowed as immutable
9 |   push_all(&vec, &mut vec);
  |             ---       ^^^- immutable borrow ends here
  |             |         |
  |             |         mutable borrow occurs here
  |             immutable borrow occurs here

error: aborting due to previous error

Because Rust enforces uniqueness of mutable borrows, it’s impossible to accidentally trigger the iterator invalidation above, where from.iter() would become invalid after pushing to to when from == to.
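When the two vectors are actually distinct, the same function compiles and runs fine, since the shared and mutable borrows never alias:

```rust
// Same function as above; safe when `from` and `to` are different vectors.
fn push_all(from: &Vec<i32>, to: &mut Vec<i32>) {
    for elem in from.iter() {
        to.push(*elem);
    }
}

fn main() {
    let src = vec![1, 2, 3];
    let mut dst = Vec::new();
    // The borrows refer to different vectors, so they don't conflict.
    push_all(&src, &mut dst);
    println!("{:?}", dst); // [1, 2, 3]
}
```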

Malloc bugs

It’s worth noting that because Rust automatically inserts allocs/frees, the compiler will always get the size of the allocation correct, and always free the memory at some point.
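For example, with a heap allocation via Box, the allocation size comes from the type, and the matching free is inserted automatically at end of scope (a sketch, with a hypothetical boxed_sum helper):

```rust
// The compiler computes the allocation size from the type of the value
// passed to Box::new; the free is inserted when the Box goes out of scope.
fn boxed_sum() -> i32 {
    let b: Box<[i32; 4]> = Box::new([1, 2, 3, 4]);
    b.iter().sum::<i32>()
} // `b` is freed here, exactly once

fn main() {
    println!("{}", boxed_sum()); // 10
}
```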

  1. I’ve seen “memory safety” used to refer to any kind of memory-related bug (e.g. so says Wikipedia), but I think it’s more useful to distinguish between issues of correctness and performance rather than lumping them under the same term. 

  2. Also called “safe-for-space” in the PL community. 

  3. Assuming a program properly checks for failures during memory allocation, I don’t consider a memory leak a correctness issue since it doesn’t necessarily induce a crash. 

  4. Although Rust is the language of choice, C++ also contains many constructs to help ameliorate the issues contained in the C implementation; however, they are usually less strictly enforced by the compiler.