Tag Archives: Clojure

Configuration Files in Clojure

I recently made a contribution to ghijira, a small tool written in Clojure for exporting issues from GitHub in JIRA-compatible format. One of the problems to solve there was loading configuration from file.

Originally, it used to have a separate config.clj file that looked like this:

(def auth "EMAIL:PASS")
(def ghuser "user-name")
(def ghproject "project-name")
(def user-map
  { "GithubUser1" "JIRAUser1"
    "GithubUser2" "JIRAUser2"})

Then it was imported in place with:

(load-file "config.clj")

I did not like it, because it did not feel very functional and the configuration file had too much noise (isn’t def form manipulation too much for a configuration file?).

For a moment I thought about using standard Java .properties files. They get the job done, but they’re also somewhat rigid and unwieldy.

It occurred to me I really could use something similar to Leiningen and its project.clj files: Make the config a plain Clojure map. It’s very flexible with minimal syntax, and it’s pure data just like it should be.

From a quick Google search I found the answer at StackOverflow.

It turns out I can rewrite my configuration file to:

{:auth      "EMAIL:PASS"
 :ghuser    "user-name"
 :ghproject "project-name"
 :user-map
   { "GithubUser1" "JIRAUser1"
     "GithubUser2" "JIRAUser2" }
}

And then load it in Clojure with read:

(defn load-config [filename]
  ; needs [clojure.java.io :as io] in the ns :require
  (with-open [r (io/reader filename)]
    (read (java.io.PushbackReader. r))))

That’s it, just call read.
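
For completeness, a hypothetical usage sketch (assuming the map above is saved as config.clj and load-config is defined as shown):

(def config (load-config "config.clj"))

(:ghuser config)                        ; => "user-name"
(get (:user-map config) "GithubUser1")  ; => "JIRAUser1"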

I find this solution a lot more elegant and “pure”.

By the way, this data format is getting a life of its own, called Extensible Data Notation (EDN). The Relevance team has even published a library to use EDN in Ruby.

Careful with def in Clojure

Let's start with a puzzle. Create a little Leiningen project called careful, set :main careful.core in project.clj, and put this in careful/core.clj:

(ns careful.core
  (:gen-class))

(defn get-my-value []
  (println "Sleeping...")
  (Thread/sleep 5000)
  (println "Woke up")
  "Done")

(def my-def (get-my-value))

(defn -main [& args]
  (println "Hello, World!"))

Here’s the question: What happens when you compile this project?

And the answer is…

$ time lein2 compile
Compiling careful.core
Sleeping...
Woke up
Compilation succeeded.

real	0m7.098s
user	0m5.276s
sys	0m0.212s

Hey, my program actually printed something during compilation! And it took way too much time. I didn’t expect that.

Such an apparently innocuous def can get you in a lot of trouble. Sure, no-one puts a sleep like that, but what about:

  • Code that computes something, perhaps something taking time or space?
  • Code that loads something from the network?

I don’t yet understand why it is resolved at compile time.

But I can understand why using def in that way is not a good idea. It’s an old imperative habit. This may be a perfectly valid imperative program:

public static void main(String[] args) {
	Data data = loadDataFromInternet();
	ProcessedData proc = process(data);
	generateReport(proc);
}

You may be tempted to do it this way in Clojure:

(def data (load-data-from-internet))
(def proc (process data))
(defn -main [& args]
  (generate-report proc))

… but that’s still an imperative style and it feels wrong.

It is also a real pain to test.

How about one of these equivalents?

(defn -main [& args]
  (let [data (load-data-from-internet)]
    (generate-report (process data))))

(defn -main [& args]
  (generate-report (process (load-data-from-internet))))

In the end, I arrived at the following conclusion. You should only use def for constants, some global parameters, dynamic variables, definitions of higher order functions – that kind of static stuff. All logic and behavior belongs in functions.
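
If a value really has to be global but is expensive to compute, one workaround (my addition, not from the original post) is to wrap it in a delay, so nothing heavy runs at compile time:

(def data (delay (load-data-from-internet)))

(defn -main [& args]
  ; the expensive call happens here, on first deref, not during compilation
  (generate-report (process @data)))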

Update

As djork pointed out on Reddit, it's because def creates a var in the current namespace with a specific value.

It makes some sense when you think of what defn looks like – it's really a macro wrapping def (also pointed out by djork). And we do expect functions introduced by defn to be compiled, right? Even the docs clearly state that defn is the "same as (def name (fn [params*] exprs*))".
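
A quick macroexpand in the REPL makes the relationship visible (output shortened; the exact form varies between Clojure versions):

(macroexpand '(defn greet [name] (str "Hello, " name)))
; => (def greet (clojure.core/fn ([name] (str "Hello, " name))))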

I still find it very confusing, though. I wonder if I’m just abusing the language.

Second Update

I came back to it later, and I may have finally understood.

This is a perfectly valid statement:

(def my-def (get-my-value))

But what about these?

; Arity exception (unexpected argument):
(def my-def (get-my-value "unexpected argument"))

; Class cast exception (vector is not a number):
(def my-def-2 (+ 2 []))

; Whatever other invalid expression:
(def my-def-3 (+ 2 +))

Should they throw a compile-time error? It makes sense, right?

Now, when you type this:

(def my-def (get-current-date))

At runtime, do you expect my-def to hold the value from compile time, or from run time? In other words, should it be the date of compilation, or "now" at the time of execution? The latter, right?

I can see why both evaluations (at compile and run time) are needed. Depending on point of view, it’s either some sort of language fragility or developer abusing the language. Either way, the conclusion stays the same: Careful with that def, Eugene.

Discussion

Aside from this blog, there is an interesting discussion with more detail at Reddit. Thanks guys!

Confitura 2012 & “Boring” Application of Clojure

This year I attended the Confitura conference for the first time. Many posts have been written on it, so I’ll focus on one (but not the only) presentation that I found particularly worthwhile.

Clojure…

I started my day with a talk on Clojure as HTML Templating Language by Łukasz Baran. Despite all the advocacy for Clojure that I do, I was surprised to find this presentation in the agenda – a specialized, concrete talk instead of a generic "Clojure is awesome! Go use it!". Another surprise came from the fact that someone in Poland was serious about using it commercially. I suspect I wasn't the only surprised person there, as there were a few dozen people in the room and only a few of them had used Clojure or even functional programming.

Łukasz showed how you can use Clojure for HTML templating in Java projects. He described a team which moved from Velocity to Clojure around 2009. They created a DSL for HTML similar to Hiccup (before Hiccup became popular), and went a step further, implementing a component library and other automation. From the very beginning they assumed it would be called from Java, so they wrote a loosely coupled component that you can plug into Spring. Finally, they've been using it in production for years and are very happy with it.
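
I don't know the details of their in-house DSL, but Hiccup-style syntax gives the general flavor of HTML-as-data in Clojure (my own illustration, not an example from the talk):

(require '[hiccup.core :refer [html]])

(html
  [:div {:class "report"}
   [:h1 "Reports"]
   [:ul (for [r ["daily" "weekly" "monthly"]]
          [:li r])]])
; => "<div class=\"report\"><h1>Reports</h1><ul><li>daily</li><li>weekly</li><li>monthly</li></ul></div>"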

Why do this, and why this way? Compared to Velocity, Clojure is very fast, concise, powerful and productive. It has a gentle learning curve (when narrowed down to the DSL). It was also much easier to introduce Clojure in a big enterprise Java shop this way than to write purely Clojure projects from scratch (which may be a good approach in general, by the way).

Personally, I really liked that this presentation was limited to such a boring but concrete area. Everyone knows something about writing view layers in Java webapps. Everyone knows the pains of JSP (or Velocity and other such frameworks). This presentation showed that it can be done differently and the pain points can be mitigated. It was also great proof that Clojure has its place and is no academic black magic. In other words, advocacy done right: instead of showing off the new tool from an ivory tower, demonstrate how it can solve a concrete everyday problem.

… and the Rest

That was not the only good thing at this Confitura. In fact, I really enjoyed most of the talks I saw there.

Paweł Wrzeszcz gave a very good presentation on How to work remotely and don’t go crazy. He showed many good habits for teams and individuals that let you live a healthy life in a healthy project. Though my personal conclusion is that even if you do everything right as an individual, team culture can kill and sometimes the only way out is… out.

I saw two talks on testing, by Jacek Kiljański and Tomek Kaczanowski. Jacek seemed to be a young enthusiast who believes in everything he says, but he also had a well-prepared presentation on Clean Tests with a clear message and good examples. The following talk by Tomek was quite different – it felt more like a rational, sometimes skeptical veteran sharing war stories. It may be due to my tiredness or the combination of high temperature and low oxygen, but I did not find this presentation as sharp and clear as the previous one. Part of the story might be that Jacek stole Tomek's thunder, and there was much overlap.

After a break I went to Maciek Próchniak's talk on Scala, CQRS and Event Sourcing, but I was rather disappointed. It was pretty chaotic and shallow. I suspect that if you had even a slight idea of CQRS and ES, you could not learn much new – even though the problem at hand had some depth that could have been discussed in more detail. On the other hand, it assumed too much of the listener to be suitable for a beginner.

Then I saw Sławek Sobótka's presentation about Soft aspects for IT experts. It was centered around the Dreyfus model of skill acquisition (again – even Sławek admitted it's something that appears a few times at each conference). Still, it managed to offer something new by treating the model only as a framework and an excuse to dive into many interesting aspects of psychology. Very professional, enjoyable and worthwhile.

Our day ended with Wojciech Seliga’s keynote titled How to be awesome at a Java developer job interview. Less of a talk, more of an emotional rant, but most of the time I really agreed with the presenter. I know way too many careless, ignorant people who consider themselves experts and neglect common tools and practices, stopped learning years ago or simply don’t know what they’re doing.

All in all, it was a very good conference. More technical and low-level than 33rd Degree. By no means "worse" or "better", just different. It felt like a family get-together rather than a big conference with big names talking about big stuff – the kind that puts things in perspective or shows some trends, but is somewhat detached from our daily work.

“Programming Concurrency on the JVM”

A few years ago, when I took concurrency classes, pretty much everything I was told was that in Java synchronized is key. That's the way to go: whenever you have multithreading, you have to do it this way, period. I also spent quite a while solving many classic and less classic concurrency problems with only this construct, reimplementing fancier locks on top of it, preventing deadlocks, starvation and all the rest.

Later in my career I learned that it is not the only way to go, or at least that there are those fancy java.util.concurrent classes that take care of some of the stuff for you. That was nice, but apparently I never took enough time to actually stop and think about how those things work, what they solve and why.

The light dawned when I started reading Programming Concurrency on the JVM: Mastering Synchronization, STM, and Actors by Venkat Subramaniam.

The book starts with a brief introduction on why concurrency is important today, with its powers and perils. It quickly moves on to a few examples of different problems: an IO-intensive task like calculating the size of a large directory, and a computationally intensive task of calculating prime numbers. Once the ground is set, it introduces three approaches to concurrent programming.

The first way to do it is what I summed up in the first paragraph, and what Venkat calls the “synchronize and suffer” model. Been there, done that, we know how bad it can get. This approach is called shared mutability, where different threads mutate shared state concurrently. It may be tamed (and a few ways to do it are shown in the book), but is a lot harder than it seems.

Another approach is isolated mutability, where each mutable part of state is only accessed by one thread. Usually this is the actor based concurrency model. The third way is pure immutability where there simply is no mutable state. That is typical for functional programming.

In the following chapters the book explores each of those areas in depth. It briefly explains the Java memory model and shows what options for dealing with shared mutability and coordinating threads exist in core Java. It clearly states why the features from Java 5 are superior to the classic "synchronize and suffer" and describes locks, concurrent collections, executors, atomic references etc. in more detail. This is what most of us typically deal with in our daily Java programming, and the book is a great introduction to those modern (if old, in a way) APIs.

That’s about one third of the book. The rest is devoted to much more interesting, intriguing and powerful tools: software transactional memory and actors.

Sometimes we have to deal with shared mutability, and very often we need to coordinate many threads accessing many variables. The classic synchronization tools don't have proper support for it: rolling back changes and preventing one thread from seeing uncommitted changes of another is difficult, and the most likely outcome is coarse-grained locks which basically lock everything while a thread is mutating something.

We know how relational databases deal with it with their ACID transactional model. Software transactional memory is just that, applied to memory, with proper atomicity, consistency and isolation of transactions. If one thread mutates a transactional reference in a transaction, another will not see it until that transaction is committed. There is no need for explicit locks, as the libraries (like Akka or Clojure) monitor which variables you access and mutate in a transaction and apply locking automatically. They can even roll back and retry the transaction for you.
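
Clojure's refs give a taste of what the book describes. A minimal sketch of my own: neither alteration is visible to other threads until the dosync block commits, and conflicting transactions are retried automatically.

(def account-a (ref 100))
(def account-b (ref 0))

(defn transfer! [amount]
  (dosync
    (alter account-a - amount)
    (alter account-b + amount)))

(transfer! 25)
; @account-a => 75, @account-b => 25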

Another approach is isolated mutability, a.k.a. actors, best demonstrated with Akka. Each actor runs in a single thread and all it can do is receive or pass messages. This is probably closest to the original concept of object-oriented programming (recommended reading by Michael Feathers). You have isolated cells that pass messages to each other, and that's it. When you have a task to execute, you spawn actors and dispatch it to them as immutable messages. When they're done, they can call you back by passing another message (if the coordinator is also an actor), or if you're not that pure you can wait for the result. Either way, everything is neatly isolated in the scope of a single thread.
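
Clojure does not bundle an actor library like Akka, but its agents illustrate the same isolated-mutability idea in a few lines (again a sketch of mine, not an example from the book):

(def counter (agent 0))

; send dispatches inc to the agent; only the agent's own thread
; ever touches its state, so no locks are needed
(dotimes [_ 1000]
  (send counter inc))

(await counter)
@counter ; => 1000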

Lengthy as this summary/review is, it really does not do justice to the book. The book itself is dense with valuable information and practical examples, which are as close to perfection as possible: There are a few recurring problems which are fairly simple and easy to grasp, solved over and over again with different techniques and different languages. There are many examples in Java, Scala, Groovy, Clojure and JRuby, dealing with libraries such as the core Java API, Clojure, Akka, GPars… In a few words, a ton of useful stuff.

Last but not least, it's excellently written. If you have ever seen Venkat live, this book is just like him – entertaining, but also thought-provoking, challenging and inspiring. It reads like a novel (if not better than some of them) and is very hard to put down until you're done.

Highly recommended.

Ring Handlers – Functional Decorator Pattern

During our last pairing session with Jacek Laskowski on Librarian there was a brief moment of confusion over Ring handlers. We struggled for a short while trying to figure out what order to put them in and what it really means to have code like:

(def app
  (-> routes
      ; sandbar
      (auth/with-security security-policy log-in)
      ; compojure helper that includes a few Ring handlers
      site
      ; sandbar again
      session/wrap-stateful-session))

It didn’t take us long to figure it out and the solution turns out to be a very elegant functional flavor of decorator.

It’s easy to dive too deep without proper understanding (and that’s what I admittedly did). Let’s start from the very beginning and see what these bits really mean. For starters, here’s a very basic app in plain Ring that simply returns the entire request:

(defn my-handler [request]
    {:body (str request)})

(def app my-handler)

When I hit http://localhost:3000/?my_param=54 in my browser, in return I get:

{:remote-addr "0:0:0:0:0:0:0:1", 
 :scheme :http, 
 :request-method :get, 
 :query-string "my_param=54", 
 :content-type nil, 
 :uri "/", 
 :server-name "localhost", 
 :headers {"cookie" "__utma=111872281.60059650.1328613066.1328726201.1328785442.5; __utmz=111872281.1328613066.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none)", "connection" "keep-alive", "accept-encoding" "gzip, deflate", "accept-language" "pl,en-us;q=0.7,en;q=0.3", "accept" "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "user-agent" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0", "host" "localhost:3000"}, 
 :content-length nil, 
 :server-port 3000, 
 :character-encoding nil, 
 :body #<Input org.mortbay.jetty.HttpParser$Input@7c04703c>}

Note that my_param made it to :query-string, but obviously it’s quite inconvenient at this point and that’s not what we really want to deal with.

What is app at this point? No magic here, it’s just a very simple and boring function.

Let’s move on and add one of the seemingly magic Ring handlers – ring.middleware.params/wrap-params:

(def app
  (wrap-params my-handler))

This time for the same URL I get the same map, with a few new entries:

{:remote-addr "0:0:0:0:0:0:0:1", 
 ; Trimmed for brevity
 :query-params {"my_param" "54"}, 
 :form-params {}, 
 :query-string "my_param=54", 
 :params {"my_param" "54"}}

I can see that the wrapper added a few new entries: :query-params, :form-params and :params. Great, just like it was supposed to.

Now, what is app at this point? Just like before, it’s a regular function of request. So what does wrap-params really do? Let’s take a peek at (parts of) its source:

(defn wrap-params
  [handler & [opts]]
  (fn [request]
    (let [request  (if (:query-params request)
                     request
                     (assoc-query-params request))]
      (handler request))))

assoc-query-params is no magic either, it simply parses the query params and merges them into the request map.

Now let's take a close look at the last line and back at the wrap-params signature. Here's what's really going on:

  1. wrap-params takes a handler (which is a function of request) as an argument. In our case, it's the trivial function that returns the request in the body.
  2. It then performs some work, in this case rebinding request to a map with a few more entries.
  3. Eventually it calls the handler that it got as parameter with the augmented request map.

In other words, wrap-params takes a handler function, and returns a function that performs some extra work and invokes the original handler.

Does it look familiar? Yup, it's the good old decorator pattern. Do some work, then pass control on to the next handler (which can also be a decorator and delegate further). In this case, though, it's astonishingly simple (compared to what it takes in Java).
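
Writing your own middleware follows exactly the same shape. A hypothetical wrap-logging (the name and behavior are mine, not part of Ring):

(defn wrap-logging [handler]
  (fn [request]
    ; do some work first...
    (println "Handling" (:request-method request) (:uri request))
    ; ...then pass control on to the wrapped handler
    (handler request)))

; composed exactly like the built-in wrappers:
; (def app (wrap-logging (wrap-params my-handler)))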

Now let’s say I want to chain one more handler that relies on the previous one. Let’s say I dislike strings and want to map params by Clojure keywords. There’s a handler for it: ring.middleware.keyword-params/wrap-keyword-params.

No need to think too long, let’s jump in and use it:

(def app 
  (wrap-keyword-params (wrap-params my-handler)))

… and I get:

{; Trimmed for brevity
 :params {"my_param" "54"}}

Whoops, that’s not what I expected. wrap-keyword-params was supposed to create a map with keys as keywords, not strings. Why didn’t it work?

Naive intuition tells me to treat wrappers as function calls. I wrap my-handler in wrap-params and pass the result of this invocation to wrap-keyword-params, right? Wrong!

Take a look at the sample wrapper above (wrap-params) and think about what we were trying to do. What I really created here is a reversed chain:

  1. Given a request, map its :params into keywords (wrap-keyword-params).
  2. Then pass control to the next function in chain, wrap-params. It parses query string and adds :params map to request.
  3. Then pass control to my-handler, which prints the whole thing to the browser.

Nothing happens in the first step, because :params does not exist yet at that point – it's only created by wrap-params in the second step.

If we reverse the order, it works as expected:

(def app
  (wrap-params (wrap-keyword-params my-handler)))

… and this time I get:

{; Trimmed for brevity
 :params {:my_param "54"}}

To recap, a few things to remember from this lesson:

  • In functional programming, the decorator pattern is elegantly represented as a higher order function. I find it much easier to grasp than the OO flavor – in Java I would need an interface and 3 implementing classes for the job, greatly limiting (re)usability and readability.
  • In the case of Ring wrappers, we typically have "before" decorators that perform some preparations before calling the "real" business function. Since they are higher-order functions and not direct function calls, they are applied in reverse order. If one depends on the other, the dependent one needs to be on the "inside" (see the sketch below).
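
That is also why the app definition at the top of this post uses the -> threading macro: the handler listed first ends up innermost, closest to the business function, and the outermost wrapper runs first on each request. The keyword-params example above could be written as:

(def app
  (-> my-handler
      wrap-keyword-params
      wrap-params))
; expands to (wrap-params (wrap-keyword-params my-handler)),
; so wrap-params does its work before wrap-keyword-params on every request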

Sequential ID Generation in Congomongo

By default MongoDB uses a 12-byte BSON ObjectId for objects. For some reason I wanted to use an increasing sequence of integers instead.

The samples in the documentation are written for the JavaScript shell and I was not sure how they translate to the Java driver. The shell sample looks like:

function counter(name) {
    var ret = db.counters.findAndModify({query:{_id:name}, update:{$inc : {next:1}}, "new":true, upsert:true});
    // ret == { "_id" : "users", "next" : 1 }
    return ret.next;
}

db.users.insert({_id:counter("users"), name:"Sarah C."}) // _id : 1
db.users.insert({_id:counter("users"), name:"Bob D."}) // _id : 2

After some googling I found an implementation in Java. Just as I expected, it’s much longer and completely different.

public static String getNextId(DB db, String seq_name) {
    String sequence_collection = "seq"; // the name of the sequence collection
    String sequence_field = "seq"; // the name of the field which holds the sequence
 
    DBCollection seq = db.getCollection(sequence_collection); // get the collection (this will create it if needed)
 
    // this object represents your "query", its analogous to a WHERE clause in SQL
    DBObject query = new BasicDBObject();
    query.put("_id", seq_name); // where _id = the input sequence name
 
    // this object represents the "update" or the SET blah=blah in SQL
    DBObject change = new BasicDBObject(sequence_field, 1);
    DBObject update = new BasicDBObject("$inc", change); // the $inc here is a mongodb command for increment
 
    // Atomically updates the sequence field and returns the value for you
    DBObject res = seq.findAndModify(query, new BasicDBObject(), new BasicDBObject(), false, update, true, true);
    return res.get(sequence_field).toString();
}

Not much later I checked docs and source code for Congomongo and discovered fetch-and-modify. I rewrote the Java sample above to Clojure and later polished it using code from this commit by Krzysztof Magiera. In the end my sequence generator looks like this:

(defn next-seq [coll]
  (with-mongo db
    (:seq
      (fetch-and-modify :sequences {:_id coll} {:$inc {:seq 1}}
                        :return-new? true :upsert? true))))

(with-mongo db
  (insert! :books {:author "Adam Mickiewicz" :title "Dziady" :_id (next-seq :books)}))

The raw call to insert! could be wrapped in a function or macro to save some boilerplate if there are more collections. For instance:

(defn insert-with-id [coll el]
  (insert! coll (assoc el :_id (next-seq coll))))
  
(with-mongo db
  (insert-with-id :books {:author "Adam Mickiewicz" :title "Dziady"}))

In some circles this probably is common knowledge, but it took me a while to figure it all out.

Connection Management in MongoDB and CongoMongo

I decided to take the opportunity offered by Jacek Laskowski (in Polish) and take a closer look at interaction with MongoDB in Clojure. It has a nice, challenging learning curve as I haven’t done much practical work in Clojure and I’ve never actually dealt with Mongo before. Double win – learning two interesting things at a time.

The obvious choice for the integration is CongoMongo. It’s really easy to get it all set up and working. The official docs encourage you to simply do this:

(def conn (make-connection "mydb"))

(set-connection! conn)

(insert! :robots {:name "robby"})

(fetch-one :robots)

; ... and so on

Easy. Too easy and comfortable. Coming from the good old, heavyweight world of JDBC/SQL, I felt uneasy about the connection management. How does it work? Does it just open a connection and leave it dangling in the air the whole time? That might be good for a quick spike in the REPL, but not for a real application which needs concurrency and is supposed to run for days and weeks. How do you maintain it properly?

clojure.contrib.sql has with-connection. That opens the connection, runs something with it and then eventually closes it. CongoMongo has with-mongo, but all it does is bind the argument to *mongo-config* and execute body. Nothing is ever opened or closed.

That seemed insane and broken, until I took a step back and compared the source of CongoMongo to the documentation of the underlying Java driver for MongoDB. The light dawned.

What make-connection really does is create an instance of Mongo and DB (if a database name was provided). The result of this function is a plain map: {:mongo #<Mongo>, :db #<DB>}.

The Javadoc for Mongo says it's a database connection with internal pooling, and that for most applications you should have one Mongo instance for the entire JVM. A page dedicated to Java Driver Concurrency explains it in more detail: the Mongo object maintains an internal pool of connections to the database. For every request to the DB (find, insert, etc.) the Java thread will obtain a connection from the pool, execute the operation, and release the connection.

At first I thought CongoMongo docs were misleading. The truth is, it’s just a wrapper for the Java driver. It’s fair for it to assume you know the basic principles of the underlying driver.

So what is called a “connection” here (the Mongo class) is in fact a much more sophisticated object. It maintains a connection pool and creates nice little DB objects for all the data handling, which in turn are smart enough to maintain the actual low-level connections for you. No ceremony, just gets out of the way as quickly as possible and lets you get the job done.
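
In practice this means creating the single Mongo "connection" once at application startup and reusing it everywhere; a minimal sketch (the :host and :port shown are just the defaults):

; created once, at startup; the driver pools real connections internally
(def conn (make-connection "mydb" :host "127.0.0.1" :port 27017))

(defn find-robot [name]
  (with-mongo conn
    (fetch-one :robots :where {:name name})))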

This is amazingly simple and elegant compared to JDBC / JEE / SQL. I guess I soon will be scratching my head over ACID, but at the moment I’m pleasantly surprised with the look of things.

Fill and Print an Array in Clojure

In a post in the "Extreme OO in Practice" series (in Polish), Koziołek used the following example: fill a 3x3x3 matrix with consecutive natural numbers and print it to stdout.

The example in Java is 27 lines of code (if you remove comments). It is later refactored into 55 lines that are supposed to be more readable, but personally I can’t help but gnash my teeth over it.

Anyway, shortly after reading that example, I thought of what it would look like in Clojure. Here it is:

(def array (take 3 (partition 3 (partition 3 (iterate inc 1)))))
(doall (map #(doall (map println %)) array))

That’s it. I know it’s unfair to compare this to Java and say that it’s shorter than your class and main() declaration. But it is, and it makes the full-time Java programmer in me weep. Anyway, I have two observations.

As I like to point out, Java is awfully verbose and unexpressive. Part of the story is strong typing and the nature of imperative programming. But then again, sometimes imperative programming is very clumsy.

Secondly, this solution in Clojure presents a completely different approach to solving the problem. In Java you would declare an array and then iterate over it in nested loops, incrementing the “current value” in every innermost iteration. In Clojure, you start with an infinite sequence of lazy numbers, partition (group) it in blocks (that’s one row), then group those blocks into a 2D array, and take 3 of those 2D blocks as the final 3D matrix. Same thing with printing the array. You don’t iterate over individual cells, but naturally invoke a function on a sequence.
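
Peeling the first line apart in the REPL shows how the matrix builds up:

(take 4 (iterate inc 1))                ; => (1 2 3 4)
(take 2 (partition 3 (iterate inc 1)))  ; => ((1 2 3) (4 5 6))
(first array)                           ; => ((1 2 3) (4 5 6) (7 8 9))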

The difference is subtle, but very important. Note how the Java code needed 4 variables for the filling (“current value” and 3 indexes) and 3 indexes for printing. There was a very clear distinction between data and what was happening to it. In Clojure you don’t need to bother with such low-level details. Code is data. Higher-order functions naturally “serve as” data in the first line, and are used to iterate over this data in the second.

Java Does Not Need Closures. Not at All.

I hope that most Java developers are aware of the Swing threading policy. It makes sense, except that it's so insanely hard to follow. And if you do follow it, I dare say there is no place with more ridiculous boilerplate.

Here are two examples that recently got me scratching my head, gnashing my teeth, and eventually weeping in disdain.

Case One: Display Download Progress

class Downloader {
    download() {
        progress.startDownload();
        while(hasMoreChunks()) {
            downloadChunk();
            progress.downloadedBytes(n);
        }
        progress.finishDownload();
    }
}

class ProgressDisplay extends JPanel {
    JLabel label;
    startDownload() {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                label.setText("Download started");
            }
        });
    }
    downloadedBytes(final int n) { // final so the anonymous Runnable can use n
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                label.setText("Downloaded bytes: " + n);
            }
        });
    }
    finishDownload() {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                label.setText("Download finished");
            }
        });
    }
}

Solution? Easy! Just write your own JLabel wrapper to hide this mess. Then a lengthy JTable utility. Then a JProgressBar wrapper. Then…

Case Two: Persist Component State

Task: Save JTable state to disk. You've got to read the table state in the EDT, but writing to disk from the EDT is not the best idea.

Solution:

ExecutorService executor = Executors.newSingleThreadExecutor();

void saveState(final Table table) {
    SwingUtilities.invokeLater(new Runnable() {
        public void run() {
            final TableColumns state = dumpState(table); // final: captured by the inner Runnable
            executor.execute(new Runnable() {
                public void run() {
                    saveToDisk(table.getTableKey(), state);
                }
            });
        }
    });
}

Two inner classes, inheritance, all those publics and voids and parentheses. And there is no way to avoid that!

In a sane language, it could look like:

(defn save-state [table]
  (do-swing
    (let [state (dump-state table)]
      (future (save-to-disk table state)))))

All semantics, little ceremony, almost no syntax. If you counted characters in both solutions, I guess the entire second snippet is about as long as the first one's code up to the invocation of invokeLater – before you even get to the actual logic. No strings attached: do-swing and future are provided by the language or "official" utils.

Bottom Line

I recently saw two presentations by Venkat Subramaniam on polyglot programming where he mercilessly made fun of Java. In his (vaguely remembered) words: when you write such code for the whole day, you come back home and weep. You can't sleep because you're tormented by the thought of the abomination you created, and the fact that you will have to deal with it tomorrow. If your children asked you what you do for a living, would you like them to see this?

Yaclot 0.1.2: Extended Date Conversions

I released a new version of my Clojure conversion and map transformation library to Clojars. It includes two new features:

1. Added natural conversion between Long and Date.

2. You can pass a collection of formats for converting a String to a Date. It attempts parsing with each of the formats and returns the first result that did not throw a ParseException.

(convert
  "2/12/11"
  (using-format ["yyyy-MM-dd" "M/dd/yy"]
    (to-type java.util.Date)))
; => #<Date Sat Feb 12 00:00:00 CET 2011>

Enjoy!