Having fun with queues

Recently I’ve been reading Okasaki’s Purely Functional Data Structures. The first few chapters discuss several really interesting ideas that I will attempt to summarize and introduce below. TLDR: persistence; lazy evaluation; amortization; scheduling. Before getting into the algorithms, there are some prerequisites for understanding the context of functional programming.

Immutable objects, persistence

Normally in imperative programming, we have variables, instances of classes, and pointers that we can read or write. For example, we can define an integer and change its value.

int x = 4;
x = x + 1;
cout << x << endl;

However, in the functional version of the same code, even though it looks as if it does the same thing, no variable is ever modified.

let x = 4 in
let x = x + 1 in
Printf.printf "%d\n" x

What the above OCaml code does is first bind the name x to the number 4, then bind the name x to the number 5. The old value is not overwritten but shadowed, which means the old value is theoretically still there; we just cannot access it anymore, since we have lost the name that referred to it. In purely functional code, everything is immutable. But if we use a new name for the new value, we can of course keep the old value:

let x = 4 in
let y = x + 1 in
Printf.printf "%d %d\n" x y

Now let’s look at a nontrivial example: modifying a list.

(* print_list: a small helper that prints the elements of an int list *)
let print_list l = List.iter (Printf.printf "%d ") l; print_newline () in
let x = [ 1; 2; 3 ] in
let y = 0 :: x in
print_list x;
print_list y
(* 1 2 3
   0 1 2 3 *)

Note that the double colon is the cons operator, which adds a new value at the head of a singly linked list. This means that after adding a new element to the list, not only do we have a linked list [ 0; 1; 2; 3 ] called y, but we also still have the old list [ 1; 2; 3 ] called x. We did not modify the original linked list in any way; all of its values and pointers are unchanged.

Now this, in itself, is already interesting: first, you can have y = 0 :: x and z = -1 :: x, creating three linked lists in total. But since they all share the same tail of length 3, only 5 memory locations (-1, 0, 1, 2, 3) are allocated, instead of 3 + 4 + 4 = 11. It is also worth the time to go through the common list operations (reverse, map, reduce, fold, iter, take, drop, head, tail…) and verify that you can implement them with the same time complexity without modifying the original list. The same techniques do not apply to random-access arrays, and that is why lists are the basic building blocks of functional programs the same way arrays are the basic building blocks of imperative programs.
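To see the sharing concretely, here is a small sketch reusing the print_list helper from above:

let x = [ 1; 2; 3 ] in
let y = 0 :: x in
let z = -1 :: x in
(* y = [0; 1; 2; 3] and z = [-1; 1; 2; 3] both share x's three cells,
   so only five cells exist in memory in total *)
print_list y;
print_list z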

Lazy evaluation, memoization

One of the advantages of having immutable objects is this: for a function f that takes only a list as input, the output will always be the same as long as the input is the same list. So we do not have to repeat the same work, thanks to the technique of memoization: after executing a pure function on an immutable input, we can record the input-output pair in a hash table and simply look it up the next time. Even better, this can be done at the language/compiler level, so memoization is hidden from the programmer. A nice consequence is that we can simply write a recursion to solve problems, and it automatically turns into a DP solution.
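OCaml does not actually memoize function calls for you, but the idea is easy to hand-roll. Here is a minimal sketch (memo_rec and fib are my names, not the book's): a memoizing fixed point that turns the naive exponential Fibonacci recursion into a linear-time one.

(* memo_rec ties the recursive knot through a hash table,
   so each distinct argument is computed at most once *)
let memo_rec f_open =
  let table = Hashtbl.create 16 in
  let rec f x =
    match Hashtbl.find_opt table x with
    | Some y -> y
    | None ->
      let y = f_open f x in
      Hashtbl.add table x y;
      y
  in
  f

(* the naive recursion, written against an abstract "self" *)
let fib = memo_rec (fun self n ->
  if n < 2 then n else self (n - 1) + self (n - 2))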

Lazy evaluation is also built into many functional languages. The point of lazy evaluation is to delay a computation until its value is actually needed. With this technique, you can define a sequence of infinite length, such as the sequence of all natural numbers. There are two basic operations: creating a lazy computation (a suspension), and forcing it to get the output. This will be important later in our design of queues.
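In OCaml these two operations are spelled lazy and Lazy.force. Here is a minimal sketch of the infinite sequence of natural numbers (the stream type and helper names are my own):

type 'a stream = Cons of 'a * 'a stream Lazy.t

(* nats n represents n, n+1, n+2, ...; nothing beyond the first
   cell is computed until someone forces the tail *)
let rec nats n = Cons (n, lazy (nats (n + 1)))

let head (Cons (x, _)) = x
let tail (Cons (_, t)) = Lazy.force t

let () = Printf.printf "%d\n" (head (tail (tail (nats 0)))) (* prints 2 *)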

Queue: 1st gen

Now let’s get to the point: implementing purely functional queues. The first challenge: actually have a queue at all, regardless of time complexity. Given lists as building blocks, how do we implement a queue? Note that building on arrays is undesirable: they are intrinsically mutable, so using them will not help create an immutable data structure. Fortunately we have a classic algorithm for this: implement the queue with two lists, F and R. We pop from the front of F and push onto the front of R, and whenever F becomes empty, we reverse R and move it to the front: (F, R) = (List.rev R, []). Here’s the obligatory leetcode link.
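Here is a minimal sketch of this two-list queue (the record field names are mine):

type 'a queue = { front : 'a list; rear : 'a list }

let empty = { front = []; rear = [] }

(* push is always O(1): cons onto the rear *)
let push x q = { q with rear = x :: q.rear }

(* pop is O(1) unless the front is empty, in which case we pay
   O(n) once to reverse the rear into a new front *)
let rec pop q =
  match q.front, q.rear with
  | x :: f, _ -> Some (x, { q with front = f })
  | [], [] -> None
  | [], r -> pop { front = List.rev r; rear = [] }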

This implementation claims amortized costs of O(1) for the push, pop, and top operations. This is obvious for push and top; pop takes a little more analysis. Occasionally a pop will reverse a linked list of n elements, which takes O(n) time. However, to trigger this O(n) operation, we must already have done n O(1) operations. That means over any stretch of roughly n operations, we do at most n · O(1) + O(n) work, which is still O(n), so on average each operation costs O(1). This is a very brief sketch, assuming prior exposure to amortized analysis.

Amortization breaking down

However, this amortized bound does not hold for our functional version of the queue. This is perhaps easy to see: say we have one element in F and n elements in R. These two lists together form a functional queue instance called x. Calling y = Queue.pop x costs O(n); then calling z = Queue.pop (Queue.push 1 x) costs another O(n). In fact, we can create as many such O(n) operations as we like. The cost of Queue.pop can therefore be as bad as O(n), even in the amortized sense. This is because we now have access to the entire history of the data structure, so we can force it to repeat its worst operation over and over, breaking the time complexity bound. Because expensive operations can always be replayed this way, it seems unlikely that persistent data structures in general can use amortization in running-time analysis.
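Concretely, with the sketch above, every pop of the same persistent value re-runs the reversal; nothing is shared between the calls:

let q = { front = []; rear = List.init 1000 (fun i -> i) }
(* each line below reverses the 1000-element rear again:
   k pops of q cost O(k * n), not O(k + n) *)
let a = pop q
let b = pop q
let c = pop q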

Queue: 2nd gen

The key idea for fixing the amortized cost is, spoiler alert, lazy evaluation. This is also supposed to be the aha moment of this post. Instead of reversing R when F becomes empty, we “reverse R and append it to F” when R’s length becomes greater than F’s length. The operation F = F @ (List.rev R), where @ means concatenation, is written in quotes because it is computed lazily. The intuition: say F and R both have length n; pushing one element onto R (or popping one element from F) creates a lazy computation that will reverse R in O(n) time. However, this reversal is not actually carried out until O(n) more pops have executed, such that the first element of (List.rev R) becomes exposed. The O(n) cost of the reversal can thus be amortized over the O(n) pops that force it, so all operations become O(1).
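Here is a sketch of this design (the book's banker's queue), with my own names throughout. The key is that append (++) is incremental, doing O(1) work per force, while rev is monolithic; the reversal suspension therefore does not run until the pops reach the appended part of the front:

type 'a stream = 'a cell Lazy.t
and 'a cell = Nil | Cons of 'a * 'a stream

(* incremental append: each force does O(1) work *)
let rec (++) s t = lazy (
  match Lazy.force s with
  | Nil -> Lazy.force t
  | Cons (x, s') -> Cons (x, s' ++ t))

(* monolithic reverse: does nothing until forced, then O(n) at once *)
let rev s = lazy (
  let rec go acc s =
    match Lazy.force s with
    | Nil -> acc
    | Cons (x, s') -> go (Cons (x, lazy acc)) s'
  in
  go Nil s)

type 'a queue = { lenf : int; f : 'a stream; lenr : int; r : 'a stream }

let empty = { lenf = 0; f = lazy Nil; lenr = 0; r = lazy Nil }

(* restore the invariant |R| <= |F|, suspending F @ (List.rev R) *)
let check q =
  if q.lenr <= q.lenf then q
  else { lenf = q.lenf + q.lenr; f = q.f ++ rev q.r; lenr = 0; r = lazy Nil }

let push x q = check { q with lenr = q.lenr + 1; r = lazy (Cons (x, q.r)) }

let pop q =
  match Lazy.force q.f with
  | Nil -> None
  | Cons (x, f') -> Some (x, check { q with lenf = q.lenf - 1; f = f' })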

Let’s compare this amortization with the 1st gen amortization. Why does this work when the 1st gen didn’t? The key distinction is that we now amortize expensive operations over future operations, not over those in the history. The old trick of replaying an expensive operation to break the amortization no longer works: we must call pop O(n) times before we force a lazy reversal costing O(n), and once it is forced, the suspension memoizes its result, so there is no way to make that O(n) happen twice. Meanwhile, memoization of list reversal doesn’t help the 1st gen design, because we can push any element onto R to make it a different list, which constitutes a different input to the List.rev function.

Sketch of proof

Having a working data structure is great, but what’s better is a framework for rigorously proving the amortized cost, not only for queues but for other persistent data structures as well. Traditionally we have two methods of proving amortized cost: the banker’s method of gaining and spending credits, and the physicist’s method of assigning a potential to the data structure. Both are similar in that they calculate how much credit we have accumulated in the history up to a point, which we may then spend on the expensive operations that follow.

To prove amortization into the future for persistent data structures, we instead accumulate debits and try to pay them off in the future, one step at a time. Here, a debit is the suspended cost of a lazy computation. Adding n debits essentially says, “hey, I need to do something O(n) here; please execute O(n) other operations before asking for my result.” Hence you cannot ask for a particular result (like the reversed list R) while its debit account is positive; you must pay off all the debt first.

Then, proving these amortized bounds is simply a matter of choosing the right debits and payments for each operation in the banker’s method, or the right debit potential function for each instance of the data structure in the physicist’s method. There are rigorous results given in the book, which will not be reproduced here. The curious reader can… just look them up in the book.

Scheduling

Lastly, another important point: we don’t really need amortization at all; instead we can make everything O(1) worst case. The idea is that instead of lazily delaying the work until it is actually needed, we complete it eagerly, one step at a time. If reversing the list takes n steps, and we are amortizing that cost over n operations, why not actually do one step per operation? Since those operations come after the lazy computation is created, we can certainly do that; it was not possible before, when the operations being amortized over came before the expensive computation. Of course, the code gets a little trickier and carries more performance overhead, but theoretically the queue with scheduling achieves the same worst-case time complexity as its ephemeral (non-persistent) counterpart: everything is O(1). This is not always possible; for some other data structures it is not easy to find the pieces of expensive computation to pay off per operation. Fortunately, for queues it is not hard to imagine an implementation.
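For queues this is the book's real-time queue: keep a third stream s as a schedule of not-yet-evaluated suspensions, and force one cell of it per operation. A sketch using the same stream type as above (the rotation details here are my own reconstruction):

type 'a stream = 'a cell Lazy.t
and 'a cell = Nil | Cons of 'a * 'a stream

(* rotate f r acc = f ++ rev r ++ acc, computed one cell per force;
   it is always called with |r| = |f| + 1 *)
let rec rotate f r acc = lazy (
  match Lazy.force f, r with
  | Nil, [y] -> Cons (y, acc)
  | Cons (x, f'), y :: r' -> Cons (x, rotate f' r' (lazy (Cons (y, acc))))
  | _ -> assert false (* unreachable given the invariant *))

(* s is the schedule: the suffix of f not yet evaluated, |s| = |f| - |r| *)
type 'a queue = { f : 'a stream; r : 'a list; s : 'a stream }

(* pay off one step: force one cell of the schedule; if the schedule
   is exhausted then |r| = |f| + 1, so start a new rotation *)
let exec q =
  match Lazy.force q.s with
  | Cons (_, s') -> { q with s = s' }
  | Nil ->
    let f' = rotate q.f q.r (lazy Nil) in
    { f = f'; r = []; s = f' }

let empty = { f = lazy Nil; r = []; s = lazy Nil }

let push x q = exec { q with r = x :: q.r }

let pop q =
  match Lazy.force q.f with
  | Nil -> None
  | Cons (x, f') -> Some (x, exec { q with f = f' })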

What I left out

Of course, this is still a TLDR version of all the technical details of the implementation, which are explained at length in the book. The proofs are nontrivial to complete even given the ideas, and it takes some effort to make sure the scheduling is efficient. I also left out the book’s actual ML implementations of the data structure, since explaining the syntax would be heavy and would not contribute to understanding the key ideas. Anyway, this is about as much as I want to discuss in this post; it’s also interesting to think about implementing maps and sets (like the C++ STL ones) as functional data structures with exactly the same asymptotic time complexity on the common operations: add, remove, and find for both, plus get and set for maps. Happy functional programming :)
