Another concept which would be very hard to implement in Java is corecursion. It can be hard to describe in plain English (as evidenced by the somewhat hilarious Wikipedia article), so let’s have a look at an example.

### Head-first example: Fibonacci sequence

Everyone knows the Fibonacci sequence and its linear time implementations. To keep the story short, here’s how you could do it in Clojure:

```clojure
(def fib (lazy-cat [0 1] (map + fib (rest fib))))
```

This snippet defines a linear-time, memoized, lazy infinite sequence of Fibonacci numbers. Although it’s very short, it can take quite a while to understand if you’re unprepared.
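Before dissecting it, here’s a quick sample of what it produces (assuming the definition above is loaded):

```clojure
(take 10 fib) ; => (0 1 1 2 3 5 8 13 21 34)
(nth fib 30)  ; => 832040, still instantaneous thanks to linear time
```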

It generates a sequence which starts with the numbers 0 and 1 and continues with the result of adding the sequence to itself, shifted by one:

| index | 0 | 1 | 2 |
|---|---|---|---|
| fib | 0 | 1 | f |
| (rest fib) | 1 | f | |
| (+ fib (rest fib)) | f | | |

One way to **understand how it works** is this iterative sequence:

| index | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| fib | 0 | 1 | [fib(2) ...] | |
| (rest fib) | 1 | [fib(2) ...] | | |
| (+ fib (rest fib)) | [fib(2) ...] | | | |

The bottom row is the sum of the two rows above it. If you were doing it on a sheet of paper, you would add two elements in the bottom row (fib(2) = 0 + 1) and update the two rows above it. Then you would use the computed value for the next element in the bottom row, and so on. The ellipsis symbolizes the “next element”. Each time the top row is updated, all elements from the bottom row are copied – that is, the newly computed value and another ellipsis. So the next step would be:

| index | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| fib | 0 | 1 | [1 fib(3) ...] | |
| (rest fib) | 1 | [1 fib(3) ...] | | |
| (+ fib (rest fib)) | [1 fib(3) ...] | | | |

That’s a very **naive, but beginner-friendly** explanation. Of course at all times there is only one sequence (pictured in square brackets). What’s not obvious is that, thanks to smart caching of such lazy sequences, the addition of each pair only occurs once. So even though fib(2) is computed from (+ [0 1] [1]) and fib(3) from (+ [0 1 1] [1 1]), 0 is only added to 1 one time.
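A quick way to convince yourself of this is to count the additions with a side effect. The names `fib2` and `adds` below are hypothetical, introduced just for this instrumented sketch:

```clojure
;; instrumented variant: count how many additions actually run
(def adds (atom 0))
(def fib2
  (lazy-cat [0 1]
            (map (fn [a b] (swap! adds inc) (+ a b))
                 fib2 (rest fib2))))

(nth fib2 10) ; => 55
@adds         ; => 9, one addition per new element (indices 2 through 10)
```

Asking for `(nth fib2 10)` a second time performs no additions at all – every cell of the lazy sequence is cached after its first realization.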

### Explanation

The core idea is: given a data structure, define a computation on that structure. Execute it, then repeat on the result of that computation, and so on. It is similar to recursion, but it **produces a sequence of data, with calculations taking place on demand** when this data is dereferenced. It is oriented around data, not operations.
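Clojure’s built-in `iterate` captures this pattern directly: it takes a starting value and a computation, and lazily produces the value, the computation applied to it, the computation applied to that result, and so on. A minimal sketch:

```clojure
;; each element is computed from the previous one, only on demand
(def powers-of-two (iterate #(* 2 %) 1))

(take 8 powers-of-two) ; => (1 2 4 8 16 32 64 128)
```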

### Breadth-first graph traversal

The Fibonacci sequence example is very popular, but corecursion is applicable in many more areas. Inspired by the aforementioned article, I decided to implement a breadth-first tree traversal (BFS).

Let’s define a tree as:

```clojure
(defstruct tree :val :left :right)

(def my-tree
  (struct tree 1
          (struct tree 2)
          (struct tree 3
                  (struct tree 4
                          (struct tree 66))
                  (struct tree 5))))
;     1
;    / \
;   2   3
;      / \
;     4   5
;    /
;   66
```

Not yet being a master, I took quite a while to get this right. Whatever I tried was nowhere near the elegant solution I got from ataggart at stackoverflow:

```clojure
(defn bftrav [& trees]
  (when trees
    (lazy-cat trees
              (->> trees
                   (mapcat #(vector (:left %) (:right %)))
                   (filter identity)
                   (apply bftrav)))))

(bftrav my-tree)
```

It returns a lazy sequence of all subtrees in breadth-first order. It starts with the tree itself, then (lazily) appends its left and right subtrees (unless they’re empty), then the subtrees of the trees from the previous step…
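For instance, extracting the values from the traversal of the my-tree defined above makes the level order visible:

```clojure
(map :val (bftrav my-tree)) ; => (1 2 3 4 5 66)
```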

### Why do corecursion?

Once you understand corecursion, it’s no harder than recursion. It has several advantages which make it a better approach in some situations. Sometimes it’s shorter and faster (thanks to heavy use of memoization). The biggest advantage is that it produces a stream of data which you can easily iterate over and do whatever you need without a callback. You define the algorithm once and then reuse it: whether you want to search the graph for an element or serialize a graph of objects, you use the exact same function. And since it’s callback-free, the traversal algorithm is clearer and more cohesive.
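To illustrate the reuse point, both tasks below consume the same bftrav from the previous section; only the consuming code differs (a sketch, not from the original article):

```clojure
;; search: find the node with value 66
(some #(when (= 66 (:val %)) %) (bftrav my-tree))

;; a different task, same function: count all subtrees
(count (bftrav my-tree)) ; => 6
```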