Vaporization and Modern Memory Management

Programming languages have always fascinated me. When I first started programming as a kid, I was oblivious to the fact that the very language I was writing code in was built using code itself. While quite obvious now, this fact blew my mind when I finally found out.

An interesting problem in the field of Programming Language Theory (PLT) is memory management. In short, programs produce data while running. This in itself isn't much of a problem; alas, computers have only a finite amount of memory, so the more difficult task is figuring out when to get rid of data that is no longer needed. Originally, this work was done by the programmer; older languages (like C) require memory to be explicitly malloc'd and freed.

While manual management gives more low-level control to the programmer, the overhead of keeping track of the lifetime of every piece of data currently in scope is hard. Some have even half-jokingly theorized that memory management is Turing-complete.

Manual Memory Management

If you're already familiar with how memory management works, i.e. you've written some c or know what a pointer is, this next section will probably be very boring; feel free to skip it.

To better understand memory management, we first must better understand memory.

All computers ship with some amount of RAM — my laptop, for instance, has a measly 4gb of RAM, while some higher-end desktops can ship with as much as 128gb (which I find ridiculous). RAM can be thought of as a very long list of addressed slots of the same size; on modern 64-bit computers, for instance, each slot stores 8 bytes, or 64 bits (hence the name). The number of bits a computer stores per slot is known as the word size of the machine.

These 8 bytes can represent whatever you like: an integer, a floating-point number, or even another set of 8 bytes. You can do a lot with 8 bytes, but you can't do everything.


To build data structures larger than 8 bytes, a series of slots can be reserved. In compiled programming languages, these product-typed regions of memory are known as structs. Here's a struct representing an (x, y) coordinate pair in c:

struct coordinate {
    double x;
    double y;
};

A double is an IEEE double-precision floating-point number — it requires 64 bits of memory. Because this struct contains two doubles, 128 bits of memory, or 16 bytes, are required to store it.

It's important to note that in theory, memory can be used however you'd like — there is no higher-level representation enforced by the operating system — a struct is just a convenient and reliable abstraction that we can use to talk about memory.

Memory slots are required to be used in full, so structs are also padded to the nearest slot. For example, a c char is usually 1 byte (8 bits) of memory. Consider these two structs:

struct char_pair {
    char first;
    char second;
};

struct char_number {
    char letter;
    double number;
};

char_pair would only take one memory slot to store on a 64-bit system: the struct is just 2 bytes long, comfortably within a single 8-byte slot. char_number, on the other hand, would take 2 slots to store. Although it only holds 9 bytes of actual data, the struct is padded out to 16 bytes.

Optimization: Memory Alignment

Instead of storing values back-to-back, individual values are aligned to the nearest slot. Instead of laying out char_number like so:

[ letter | number (bytes 0-6) ][ number (byte 7) | unused ...... ]
  slot 0                         slot 1

The compiler tries to align number to the nearest slot:

[ letter | 7 bytes of padding ][ number (all 8 bytes)            ]
  slot 0                         slot 1

This makes it faster to access individual values, as less bit-twiddling has to be done. Data can also be aligned along cache lines for even faster access, but that's a bit out-of-scope at the moment.


The primary issue with structs is that they're static; once one is constructed, it can't grow in size. This isn't much of an issue for values that take a constant number of bits, but it is a large issue when a variable-sized collection needs to be kept.

Arrays are inherently statically sized — an array can only hold the number of items it is allocated with — to store a larger array, the previous array needs to be reallocated.

A dynamically sized vector, like in languages such as Rust, is simply a wrapper around an array that automatically reallocates it to the needed size as values are added or removed. To keep reallocations rare, the backing array usually grows by a constant factor (commonly doubling) each time it fills up.


The main issue with arrays is that they can take up a variable amount of space. If you tried to fit an array into a struct, for instance, the entire struct would have to be reallocated along with the array. In this case, it'd be much easier to describe where the array is than what the array contains.

To reference another location in memory, one must use a pointer. As memory is simply a long series of slots, any slot can be referenced by its index. This index, or pointer, is usually equal to or less than the word size of the machine.

So, for example, the structure definition of a dynamically sized vector of doubles might look like so in c:

struct vector {
    int capacity;
    int count;
    double* doubles;
};

The * operator in c denotes a pointer to another location in memory. Because a pointer is the size of a machine word, the whole struct fits in just 2 or 3 slots (depending on how large int is on the platform), even if the array it points to contains millions of values.

Note that this struct definition alone is not enough to define a dynamically resizing vector; functions to allocate, resize, access, and modify it must also be written. When the underlying array is reallocated, the vector struct itself stays put: only the pointer to that array (and the capacity) must be updated.

Aside: Linked Lists

Pointers can reference any location in memory, including other structures. Another way to implement a growable collection of elements is to use a linked list. A linked list is a chain of structures, each containing a pointer to the next node in memory:

struct link {
    struct link *next;
    double item;
};

The above is a singly-linked list of doubles. To add an item, construct a new link and update next to be a pointer to that link.

Program Memory

After that grueling section, we've finally finished the appetizer and started the first course. From here on out, I assume you're comfortable talking about memory.

NOTE: this post is a work-in-progress


The Heap

Garbage Collection

Mark and sweep, etc.

Automatic Reference Counting

Escape analysis

Lifetimes and Borrow-checking


Compile-time Garbage-Collection