
Corecursion in Clojure

Another concept which would be very hard to implement in Java is corecursion. It can be hard to describe in plain English (as evidenced by the somewhat hilarious Wikipedia article), so let’s have a look at an example.

Head-first example: Fibonacci sequence

Everyone knows the Fibonacci sequence and its linear time implementations. To keep the story short, here’s how you could do it in Clojure:

(def fib (lazy-cat [0 1] (map + fib (rest fib))))

This snippet defines a linear-time, memoized, lazy infinite sequence of Fibonacci numbers. Although it’s very short, it can take quite a while to understand if you’re unprepared.
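At the REPL it behaves like any other sequence:

(take 10 fib)
; => (0 1 1 2 3 5 8 13 21 34)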

It generates a sequence which starts with the numbers 0 and 1, followed by the element-wise sum of the sequence and itself shifted by one:

index              0        1      2
fib                0        1      f
(rest fib)         1        f
(+ fib (rest fib)) f

One way to understand how it works is this iterative sequence:

index               0        1        2       3
fib                 0        1       [fib(2)  ...]
(rest fib)          1       [fib(2)   ...]
(+ fib (rest fib)) [fib(2)   ...]

The bottom row is the element-wise sum of the two rows above it. If you were doing it on a sheet of paper, you would add the two elements above it to fill in the bottom row (fib(2) = 0 + 1) and update the two rows above it. Then you would use the computed value for the next element in the bottom row, and so on. The ellipsis symbolizes the “next element”. Each time the top row is updated, all elements from the bottom row are copied, that is the newly completed value and another ellipsis. So the next step would be:

index               0        1        2          3
fib                 0        1       [1          fib(3)  ...]
(rest fib)          1       [1        fib(3)     ...]
(+ fib (rest fib)) [1        fib(3)   ...]

That’s a very naive but beginner-friendly explanation. Of course, at all times there is only one sequence (pictured in square brackets). What’s not obvious is that, thanks to smart caching of such lazy sequences, the addition for each pair only occurs once. So even though fib(2) is computed from (+ [0 1] [1]) and fib(3) from (+ [0 1 1] [1 1]), 0 is only added to 1 one time.
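You can observe this caching with a println probe (fib2 and the message are mine, purely for illustration):

(def fib2
  (lazy-cat [0 1]
    (map (fn [a b] (println "adding" a b) (+ a b))
         fib2 (rest fib2))))

(take 6 fib2)
; adding 0 1
; adding 1 1
; adding 1 2
; adding 2 3
; => (0 1 1 2 3 5)

Running (take 6 fib2) a second time prints nothing: the sums are already cached.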

Explanation

The core idea is: having a data structure, define a computation on that structure. Execute it, then repeat on the result of that computation, and so on. It is similar to recursion, but it produces a sequence of data, with calculations taking place on demand as the data is consumed. It is oriented around data, not operations.
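A minimal sketch of this idea, using the standard iterate function (my example, separate from the Fibonacci snippet), where each element is computed from the previous one, on demand:

(def powers-of-two (iterate #(* 2 %) 1))

(take 8 powers-of-two)
; => (1 2 4 8 16 32 64 128)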

Breadth-first graph traversal

The Fibonacci sequence example is very popular, but corecursion is applicable in many more areas. Inspired by the aforementioned Wikipedia article, I decided to implement breadth-first tree traversal (BFS).

Let’s define a tree as:

(defstruct tree :val :left :right)

(def my-tree 
  (struct tree 1 
    (struct tree 2)
    (struct tree 3
      (struct tree 4 
        (struct tree 66)) 
      (struct tree 5))))

;     1
;    / \
;   2   3
;      / \
;     4   5
;    /
;   66

Not yet being a master, I took quite a while to get this right. Whatever I tried was nowhere near the elegant solution I got from ataggart at Stack Overflow:

(defn bftrav [& trees]
  (when trees
    (lazy-cat trees
      (->> trees
        (mapcat #(vector (:left %) (:right %)))  ; children of the current level
        (filter identity)                        ; drop nil (empty) subtrees
        (apply bftrav)))))

(bftrav my-tree)

It returns a lazy sequence of all subtrees in breadth-first order. It starts with the tree, then (lazily) appends its left and right subtrees (unless they’re empty), then the subtrees of the trees from the previous step…
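Mapping :val over the result shows the breadth-first order on my-tree:

(map :val (bftrav my-tree))
; => (1 2 3 4 5 66)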

Why do corecursion?

Once you understand corecursion, it’s not harder than recursion. It has several advantages which make it a better approach in some situations. Sometimes it’s shorter and faster (thanks to heavy use of memoization). The biggest advantage is that it produces a stream of data which you can easily iterate over and process however you need, without a callback. You define the algorithm once and then reuse it. Whether you want to search the graph for an element or serialize a graph of objects, you use the exact same function. Moreover, since it’s callback-free, the traversal algorithm is clearer and more cohesive.
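For instance, the same bftrav can drive a search without touching the traversal itself:

; find the first subtree whose value is 66
(some #(when (= 66 (:val %)) %) (bftrav my-tree))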

Software Transactional Memory in Clojure (alter vs. commute)

One of the selling points of Clojure is its excellent support for multithreading. Among other features, it has a very intuitive yet powerful implementation of Software Transactional Memory. There’s a nice introduction to it in R. Mark Volkmann’s famous article.

One notable feature of STM is the ref: a shared binding which can only be modified inside transactions. Transactions are wrapped in (dosync) and behave similarly to database transactions. The value of a ref can be set in three ways: ref-set (the obvious setter), commute and alter.
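A minimal sketch of all three (balance is my example name):

(def balance (ref 100))

(dosync (ref-set balance 0))     ; plain assignment
(dosync (alter balance + 50))    ; set to (+ old-value 50); retried on conflict
(dosync (commute balance + 50))  ; like alter, but tolerant of concurrent updates (more below)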

One thing that wasn’t obvious to me after reading the article was the difference between alter and commute. They both take a ref and a function, set the ref to the result of running the function on the previous value of the ref, and return the new value.

I decided to test their behavior with this simple experiment: define an integer counter, start 50 threads that increase the counter with alter and print the new value, and after all threads finish, print the counter. Then repeat with commute instead of alter.

Here’s the sample with alter:

(def my_num (ref 0))

(defn inc_alter []
  (dosync
    (try
      (println "Alter: " (alter my_num inc))
      (catch Throwable t
        (println "Caught " (.getClass t))
        (throw t)))))

(defn test_alter []
  (let [threads (for [x (range 0 50)] (Thread. #(inc_alter)))]
    (println "---- Alter ----")
    (doall (map #(.start %) threads))
    (doall (map #(.join %) threads))
    (println "---- After loop: ----")
    (inc_alter)))

(test_alter)

The output can look like:

---- Alter ----
Alter:  1
Alter:  2
Alter:  3
...
Alter:  47
Caught  clojure.lang.LockingTransaction$RetryEx
Alter:  48
Caught  clojure.lang.LockingTransaction$RetryEx
Alter:  49
Caught  clojure.lang.LockingTransaction$RetryEx
Caught  clojure.lang.LockingTransaction$RetryEx
Alter:  50
---- After loop: ----
Alter:  51

Explanation: when alter detects that the ref has been changed by another concurrent transaction, the whole transaction is rolled back and automatically retried. That’s fairly intuitive and straightforward.
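Here’s a sketch of the commute variant (a direct adaptation of the code above; inc_commute and test_commute are my names, and the counter is reset first):

(dosync (ref-set my_num 0))

(defn inc_commute []
  (dosync
    (println "Commute: " (commute my_num inc))))

(defn test_commute []
  (let [threads (for [x (range 0 50)] (Thread. #(inc_commute)))]
    (println "---- Commute ----")
    (doall (map #(.start %) threads))
    (doall (map #(.join %) threads))
    (println "---- After loop: ----")
    (inc_commute)))

(test_commute)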

When we replace alter with commute, it’s quite a different story. The output can look like:

---- Commute ----
Commute:  1
Commute:  2
Commute:  3
...
Commute:  43
Commute:  45
Commute:  45
Commute:  45
Commute:  45
---- After loop: ----
Commute:  51

The modification is commutative, so no transaction is ever rolled back. But it also means that a stale value can be used inside the transaction, which is why “45” is printed several times.

It doesn’t work exactly like I described earlier (get value, apply function, set value, return). In fact, when commute is called it instantly returns the result of running the function on the ref’s current in-transaction value. At the very end of the transaction it performs the calculation again, this time atomically (like alter), and updates the ref. That’s why the counter eventually reaches 51 even though the last thread printed 45.

By the way, have a look at the sample code and think about how much it would take to implement it in your Java or C#. Not only is it multithreaded, but also synchronized, transactional (with ACID, or at least ACI), serializable with auto-retry, and so on. So much going on in just 20 lines.

The Heart of Software

The heart of software is solving business problems. Have you heard it? Yes, so have I. The fact that it’s so rarely recognized or practiced is just inexplicable.

It’s not about programming. Nor is it about technology. Nor is it about languages, testing, architectures, networking, whatever. Sure, these are nice tools. But they are only tools. It’s amazing how much attention is paid to tools and how little to the problem at hand. What would you say to a carpenter who delivered wobbly (but still expensive) furniture for your dream kitchen only because his tools were bought at an “as much as you can carry for $10” bargain? You don’t care about his tools. You want a functional, robust product.

The purpose of frameworks, libraries and architecture is to serve, not command. When a framework imposes uncomfortable restrictions or distorts the way your business logic and model take shape, in the best case it’s unsuitable for the problem. Don’t use it. Choosing an unsuitable or unknown tool only because it looks nice on a resume is a cardinal sin deserving capital professional punishment.

Whatever you do, focus on the big picture. In the case of software, that means solving your customer’s business problem.

By no means is this a novel idea, but it is still extremely important and too rarely understood. The reason it’s here is that it serves as an introduction to a series of articles on Domain-Driven Design. DDD is a great approach to software development as a whole (in the spirit of the rant above), but also to object-oriented programming and design. It’s what makes all the trendy OO* techniques shine. It’s a shame how often it is overlooked by teachers. If there’s one book every OO/business developer has to read, it’s “Domain-Driven Design” by Eric Evans.

OO* without DDD (or at least its core concepts) is like giving a diploma in architecture to everyone who can hold a pencil and draw on a sheet of paper, regardless of whether what they produce is a usable, solid construction.