Category Archives: Programming

Hello, Backbone and ClojureScript

A few days ago I started learning ClojureScript. I wrote a trivial “hello world” application just to get ClojureScript to compile and execute, and later added some basic jQuery support with jayq.

The time has come to make things a little more interesting and add Backbone.js to the mix. I’ve never used ClojureScript or Backbone before, so I’m learning both at once, which makes for an interesting learning curve.

Anyway, I managed to rewrite the first two examples from the Backbone docs in pure CLJS. I made some minor modifications, like triggering events on button click and changing the main background instead of a sidebar.

Here’s my page source (with Hiccup):

(hp/html5 
  [:head]
  [:body
   [:button#clickable-event "Click to trigger an alert from basic Backbone event"]
   [:button#clickable-color "Click to change background color"]
   (hp/include-js 
     "http://code.jquery.com/jquery-1.8.2.min.js"
     "http://underscorejs.org/underscore.js"
     "http://backbonejs.org/backbone.js"
     "js/cljs.js")
   ])
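
For reference, hp is an alias for hiccup.page. A minimal sketch of the namespace declaration this page assumes (the namespace name here is just a placeholder of mine):

(ns hello-backbone.views
  (:require [hiccup.page :as hp]))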

As you can see, it renders a very basic page with two buttons and includes a few JS libraries.

And here’s the CLJS file mixing jQuery and Backbone:

(ns hello-clojurescript
  (:use [jayq.core :only [$]])
  (:require [jayq.core :as jq]))

; ALERT ON CLICK
; Rewrite of http://backbonejs.org/#Events
(def o {})

(.extend js/_ o Backbone.Events)

(.on o "alert" 
  (fn [msg] (js/alert msg)))

(jq/bind ($ "#clickable-event") :click 
      (fn [e] (.trigger o "alert" "Hello Backbone!")))

; MODEL WITH COLOR CHOOSER
; Inspired by http://backbonejs.org/#Model but without sidebar

(def MyModel 
  (.extend Backbone.Model
    (js-obj 
      "promptColor"
      (fn [] 
        (let [ css-color (js/prompt "Please enter a CSS color:")]
          (this-as this
                   (.set this (js-obj "color" css-color))))))))

(def my-model (MyModel.))
 
(.on my-model "change:color"
  (fn [model color]
    (jq/css ($ "body") {:background color})))

(jq/bind ($ "#clickable-color") :click 
         (fn [e] (.promptColor my-model)))

There are a number of things that were new to me, and some non-obvious pitfalls. Viewed side by side with the Backbone demos, note:

  • To invoke _.extend(o, Backbone.Events), write (.extend js/_ o Backbone.Events). ClojureScript correctly transforms (.extend js/_ ...) to _.extend(...), and it copies Backbone.Events as is (no quoting necessary).
  • To distinguish objects and functions defined elsewhere from those defined in CLJS, always prefix the former with js/name. This works for alert, underscore etc.
  • I had an issue with passing objects (as maps) directly to calls like Backbone.Model.extend(). I tried things like {:promptColor fn} and {"promptColor" fn} to no avail. I finally discovered (js-obj) and it did the trick, but it’s pretty cumbersome. I wonder if there’s a better way (see the sketch after this list).
  • You need some extra work to use this: it has to be bound to a Clojure symbol with the this-as macro.
  • On a slightly related note, I’m really beginning to love jayq. In this example I use bare Backbone directly and struggle, and I really appreciate jayq bridging the gap to jQuery. I wonder if there is a CLJS wrapper for Backbone.
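
As for the js-obj point above: depending on the ClojureScript version, clj->js (available in newer cljs.core, and easy to define by hand otherwise) can turn a Clojure map into a JavaScript object. A sketch of what the model definition might look like with it – an assumption on my part, not code from this demo:

; sketch only – keyword keys become string properties, functions pass through unchanged
(def MyModel
  (.extend Backbone.Model
    (clj->js {:promptColor
              (fn []
                (this-as this
                  (.set this (clj->js {:color (js/prompt "Please enter a CSS color:")}))))})))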

All in all, it’s an interesting exercise. Just the right learning curve – stimulating, but not discouraging, regularly providing visible feedback.

As usual, the complete source is on GitHub. I created a new repository for it to keep “hello ClojureScript” as small as possible. This new demo will probably grow as I learn more Backbone.

Hello, ClojureScript! (with jQuery)

I decided to give ClojureScript a try. It did not come easy, because I found the official documentation somewhat complicated. I know there is ClojureScript One, but that project is also not as simple as it could be. I don’t want fancy functionality, noir/compojure, enlive/hiccup, and tons of other semi-relevant tools. Bare, simplistic HTML and a starting hook for ClojureScript is pretty much all I need for a head start; I can add the rest later.

I was looking for something really minimal, and the first simple example my Google search turned up was Daniel Harper’s article. I got rid of noir, used up-to-date versions of the libraries, and voila – it’s working!

When I had my first “hello world” alert showing on page load, I decided to make things a little more interesting and introduce jQuery. I found jayq from Chris Granger and decided to give it a shot. There’s also a sample app on Chris’ blog that helped me with some issues, namely figuring out how to bind events. It references a few more interesting libs (fetch & crate), but I’ve had enough for now. I guess I could spend the whole night chasing such references.

In the end, the interesting pieces of code are below:

project.clj (configured to compile CLJS from src-cljs to resources/public/js/cljs.js):

(defproject hello-clojurescript "0.1.0-SNAPSHOT"
  :description "FIXME: write description"
  :url "http://example.com/FIXME"
  :license {:name "Eclipse Public License"
            :url "http://www.eclipse.org/legal/epl-v10.html"}
  :dependencies [[org.clojure/clojure "1.4.0"]
                 [ring "1.1.6"]
                 [jayq "0.1.0-alpha3"]]
  :plugins [[lein-cljsbuild "0.2.8"]]
  :cljsbuild
  {
   :source-path "src-cljs"
   :compiler
   {
    :output-to "resources/public/js/cljs.js"
    :optimizations :simple
    :pretty-print true
    }
   }
  :main hello-clojurescript.core
  )

core.clj (trivial app, with Ring wrapper configured to serve JS resources):

(ns hello-clojurescript.core
  (:require [ring.adapter.jetty :as jetty]
            [ring.middleware.resource :as resources]))


(defn handler [request]
  {:status 200
   :headers {"Content-Type" "text/html"}
   :body 
   (str "<!DOCTYPE html>"
        "<html>"
        "<head>"
        "</head>"
        "<body>"
        "<p id=\"clickable\">Click me!</p>"
        "<p id=\"toggle\">Toggle Visible</p>"
        "<script src=\"http://code.jquery.com/jquery-1.8.2.min.js\"></script>"
        "<script src=\"js/cljs.js\"></script>"
        "</body>"
        "</html>")})

(def app 
  (-> handler
    (resources/wrap-resource "public")))

(defn -main [& args]
  (jetty/run-jetty app {:port 3000}))

hello-clojurescript.cljs (this one gets compiled to JavaScript):

(ns hello-clojurescript
  (:use [jayq.core :only [$ delegate toggle]]))

(def $body ($ :body))

(delegate $body :#clickable :click
          (fn [e]
            (toggle ($ :#toggle))))

Complete source code with instructions can be found at my GitHub repository.

At the moment I see the following issues:

  • I’m really green at ClojureScript. Tons to learn here!
  • The JavaScript file compiled from this trivial example is 13k lines long and weighs about 500 kb. Doh! Fine for local development on desktop, not so good for targeting mobile (see the sketch after this list).
  • The official docs for ClojureScript are really… discouraging. Just like core Clojure documentation, they are pretty academic and obscure.
  • Docs for jayq are… Wait a minute, nonexistent? At least it’s a fairly thin adapter with small, comprehensible codebase.
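
Regarding the output size: one likely mitigation (an assumption on my part, not something I have tried in this project) is switching cljsbuild to advanced optimizations, which lets the Google Closure compiler strip unused code – at the cost of having to provide externs for external libraries like jQuery:

; hypothetical project.clj fragment – advanced compilation shrinks the output dramatically
:cljsbuild
{
 :source-path "src-cljs"
 :compiler
 {
  :output-to "resources/public/js/cljs.js"
  :optimizations :advanced
  :pretty-print false
  }
 }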

DevDay 2012

On October 5, 2012 I attended the second edition of DevDay, a one-day conference sponsored and organized by ABB in Krakow.

The Awesome

Scott Hanselman was the first speaker, and also one of the main reasons why I went to the conference, even though I only knew him from This Developer’s Life.

His “Scaling Yourself” talk was probably the best productivity talk I’ve seen so far. Part of it covered tricks and techniques I already knew, such as GTD and pomodoro. Apart from that, my notes include:

  • The fewer things you do, the more of each you can do.
  • By doing something you are likely to get more of it. If you’re available on weekends and after hours, you will be expected to keep being available. If you’re available for calls about work on vacation, you will be called. Even if you do work at those times, set the email to go out at 9 AM on the next business day.
  • Avoid “guilt systems” such as a long collection of recorded TV shows – or books, or unread articles. Or whatever it is that you collect and want to get to, but that eventually forms a big pile you only feel guilty about.
  • Sort your information streams by importance and limit usage. Some obvious ones: less Twitter and Facebook. Some less obvious ones: basic email filters. Combine that with techniques that help you root out distractions, such as pomodoro or RescueTime.
  • Use information aggregators: Blogs or sites that repost articles, mashups etc. instead of subscribing to 100 different blogs and news sites.
  • Do not multitask, except for things that go well together. For example, exercise while watching TV or listening to podcasts. What’s more, use activities you want to do to motivate the things you should do. For instance, watch TV only as long as you keep moving on the treadmill.
  • Plan your work: Find three things you want to do today, this week and this year, and do them. It helps you focus on goals. Hint: email & Twitter probably won’t be on the list.
  • Plan your work: Plan your own work sprints, execute, and finally perform retrospectives. Can be applied to work, but also all kinds of personal activities.
  • Synchronize to paper. Don’t limit your space to one screen when you can print or write/draw on as many sheets of paper as you want. Also, paper notebooks can be used in all kinds of conditions and never run out of battery.

Finally, Scott is a great speaker. Lots and lots of content served in a perfect way, sauced with some pretty good jokes. Delicious.

That’s just a bunch of quick notes. Even if you think you’ve heard enough on the topic of productivity, go watch the presentation on Scott’s site. Now.

The Good

I enjoyed the “Why You Should Talk to Strangers” talk from Martin Mazur. Half of it was about social interactions: “us versus them” divisions, difficulties that one can face when approaching complete strangers in public, and the amount of new things you can learn from them if you break the ice. The rest was really about polyglot programming: learning Eiffel, Haskell or Ruby and applying some concepts and ideas back to C#. Largely an unknown territory, but still the talk resonated really well with me.

Antek Piechnik gave a good talk on continuous delivery at an extreme scale: where each new team member pushes code to production during their first week, and each commit to master can trigger a deployment. I found the ideas on project organization pretty controversial, though: a totally flat team, a large pool of features, and everyone working on what they want and how they want. Sounds interesting, but it’s only possible if the team consists solely of experienced A++ programmers who also share the same vision, agree on tools and techniques, and so on.

Finally, I liked Greg Young’s talk on getting productive in a project in 24 hours. The talk was nothing like the subject, though. It was basically an introduction to code analysis: afferent/efferent coupling, test coverage and cyclomatic complexity, as well as data mining the VCS. Very clear and down-to-earth, discussing tools and practical examples for each concept.

I particularly liked the points on code coverage. I knew that a method with 20 possible paths can have 100% line coverage from 2 tests and still be poorly tested. Greg made a good point explaining that coverage gets weaker as the number of methods/collaborators between the test and the method grows, or when the methods in between have high cyclomatic complexity. In such cases it’s really accidental coverage that in no way guarantees anything.

Greg repeatedly stressed how all these concepts are only tools. They indicate interesting areas that may be trouble spots and may need attention. But one can never say that cyclomatic complexity of X is good code and Y is bad code (and the same is true for all other metrics).

The Rest

Rob Ashton’s talk on JavaScript was alright, but not spectacular. I did not learn much new. I know the language is here to stay, for better or worse. You can patch some gaps with jslint/jshint and others with CoffeeScript, but for how widespread the language is, the tooling is really patchy and barely existent.

I skipped Mark Rendle’s talk on Simple.Data/Simple.Web. Not my area.

I really did not like Sebastien Lambla’s talk on HTTP caching. Noise-to-signal ratio approaching infinity, filled with poor jokes and irrelevant comments. Little substance, discussed chaotically and patchily.

Wrapping Up

All in all, DevDay was a good conference. A really good selection of speakers at a free conference, with free lunch, coffee and snacks. No recruiters, no stands, just the participants and speakers. I wish it had at least two tracks and more focus. I’m not sure if it’s a generic developer conference or a .NET event (I went for the former, felt the atmosphere of the latter).

Configuration Files in Clojure

I recently made a contribution to ghijira, a small tool written in Clojure for exporting issues from GitHub in JIRA-compatible format. One of the problems to solve there was loading configuration from file.

Originally, it used a separate config.clj file that looked like this:

(def auth "EMAIL:PASS")
(def ghuser "user-name")
(def ghproject "project-name")
(def user-map
  { "GithubUser1" "JIRAUser1"
    "GithubUser2" "JIRAUser2"})

Then it was imported in place with:

(load-file "config.clj")

I did not like it, because it did not feel very functional and the configuration file had too much noise (isn’t manipulating def forms too much for a configuration file?).

For a moment I thought about using standard Java .properties files. They get the job done, but they’re also somewhat rigid and unwieldy.

It occurred to me I really could use something similar to Leiningen and its project.clj files: Make the config a plain Clojure map. It’s very flexible with minimal syntax, and it’s pure data just like it should be.

From a quick Google search I found the answer at StackOverflow.

It turns out I can rewrite my configuration file to:

{:auth      "EMAIL:PASS"
 :ghuser    "user-name"
 :ghproject "project-name"
 :user-map
   { "GithubUser1" "JIRAUser1"
     "GithubUser2" "JIRAUser2" }
}

And then load it in Clojure with read:

(defn load-config [filename]
  (with-open [r (io/reader filename)]
    (read (java.io.PushbackReader. r))))

That’s it, just call read.
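
A minimal usage sketch (the keys come from the example config above; it assumes clojure.java.io is required as io in the namespace defining load-config):

; hypothetical usage – the configuration is now just a map
(def config (load-config "config.clj"))

(:auth config)                          ; => "EMAIL:PASS"
(:ghuser config)                        ; => "user-name"
(get (:user-map config) "GithubUser1")  ; => "JIRAUser1"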

I find this solution a lot more elegant and “pure”.

By the way, this data format is taking on a life of its own under the name Extensible Data Notation (EDN). The Relevance team has even published a library for using EDN in Ruby.

Careful with def in Clojure

Let’s start with a puzzle. Let’s create a little Leiningen project called careful. Let’s set :main careful.core in project.clj and put this in careful/core.clj:

(ns careful.core
  (:gen-class))

(defn get-my-value []
  (println "Sleeping...")
  (Thread/sleep 5000)
  (println "Woke up")
  "Done")

(def my-def (get-my-value))

(defn -main [& args]
  (println "Hello, World!"))

Here’s the question: What happens when you compile this project?

And the answer is…

$ time lein2 compile
Compiling careful.core
Sleeping...
Woke up
Compilation succeeded.

real	0m7.098s
user	0m5.276s
sys	0m0.212s

Hey, my program actually printed something during compilation! And it took way too much time. I didn’t expect that.

Such an apparently innocuous def can get you in a lot of trouble. Sure, no one puts a sleep like that in real code, but what about:

  • Code that computes something, perhaps something taking time or space?
  • Code that loads something from the network?

I don’t yet understand why it is resolved at compile time.

But I can understand why using def in that way is not a good idea. It’s an old imperative habit. This may be a perfectly valid imperative program:

public static void main(String[] args) {
	Data data = loadDataFromInternet();
	ProcessedData proc = process(data);
	generateReport(proc);
}

You may be tempted to do it this way in Clojure:

(def data (load-data-from-internet))
(def proc (process data))
(defn -main [& args]
  (generate-report proc))

… but that’s still an imperative style and it feels wrong.

It also is a real pain to test.

How about one of these equivalents?

(defn -main [& args]
  (let [data (load-data-from-internet)]
    (generate-report (process data))))

; ... or simply:

(defn -main [& args]
  (generate-report (process (load-data-from-internet))))

In the end, I arrived at the following conclusion: use def only for constants, some global parameters, dynamic variables, definitions of higher-order functions – that kind of static stuff. All logic and behavior belongs in functions.
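
If a top-level value really is expensive to obtain, one option (a sketch of mine, not code from this project) is to wrap it in delay, so the work happens on the first dereference at run time instead of at compile time:

; sketch: load-data-from-internet runs only when the delay is first deref'd
(def data (delay (load-data-from-internet)))

(defn -main [& args]
  (generate-report (process @data)))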

Update

As djork pointed out on Reddit, it’s because def creates a var in the current namespace with a specific value.

It makes some sense when you think of what defn looks like – it’s really a macro wrapping def (also pointed out by djork). And we do expect functions introduced by defn to be compiled, right? Even the docs clearly state that defn is the “same as (def name (fn [params* ] exprs*))“.
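
A quick REPL check makes the expansion visible (output trimmed slightly):

user=> (macroexpand '(defn my-fn [x] (inc x)))
(def my-fn (clojure.core/fn ([x] (inc x))))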

I still find it very confusing, though. I wonder if I’m just abusing the language.

Second Update

I came back to it later, and I may have finally understood.

This is a perfectly valid statement:

(def my-def (get-my-value))

But what about these?

; Unexpected argument to a zero-argument function:
(def my-def (get-my-value "unexpected argument"))

; Class cast exception (vector is not a number):
(def my-def-2 (+ 2 []))

; Whatever other invalid call:
(def my-def-3 (+ 2 +))

Should they throw a compile-time error? It makes sense, right?

Now, when you type this:

(def my-def (get-current-date))

At run time, do you expect it to hold the value from compile time, or from run time? In other words, should it be the date of compilation, or “now” at the time of execution? The latter, right?

I can see why both evaluations (at compile time and at run time) are needed. Depending on your point of view, it’s either some sort of language fragility or a developer abusing the language. Either way, the conclusion stays the same: careful with that def, Eugene.

Discussion

Aside from this blog, there is an interesting discussion with more detail on Reddit. Thanks guys!

Domain Modeling: Naive OO Hurts

I recently read a post on two ways to model the data of a business domain. My memory tells me it was by Ayende Rahien, but I can’t find it on his blog.

One way is full-blown object-relational mapping. Entities reference each other directly, and the O/R mapper automatically loads data for you as you traverse the object graph. To obtain Product for an OrderLine, you just call line.getProduct() and are good to go. Convenient and deceptively transparent, but can easily hurt performance if you aren’t careful enough.

The other way is what that post may have called a document-oriented mapping. Each entity has its ID and its own data. It may have some nested entities if it’s an aggregate root (in domain-driven design terminology). In this case, OrderLine only has productId, and if you want to get the product you have to call ProductRepository.getProduct(line.getProductId()). It’s a bit less convenient and requires more ceremony, but thanks to its explicitness it also is much easier to optimize or avoid performance pitfalls.

So much for the aforementioned post. I recently had an opportunity to reflect more on this matter with a real-world example.

The Case

The light dawned when I set out to create a side project for a fairly large system that has some 200+ Hibernate mappings and about 300 tables. I knew I only needed some 5 core tables, but for the sake of consistency and avoiding duplication I wanted to reuse mappings from the big system.

I knew there could be more dependencies on things I don’t need, and I did not have a tool to generate a dependency graph. I just included the first mapping, watched Hibernate errors for unmapped entities, added mappings, checked error log again… And so on, until Hibernate was happy to know all the referenced classes.

When I finished, the absolutely minimal and necessary “core” in my side project had 110 mappings.

As I was adding them, I saw that most of them were pretty far from the core and from my needs. They corresponded to little subsystems somewhere on the rim.

It felt like running a strong magnet over a messy workplace full of all kinds of metal things when all I needed was two nails.

Pain Points

It turns out that such object orientation is more pain than good. Having unnecessary dependencies in a spin-off reusing the core is just one pain point, but there are more.

It also makes my side project slower and more resource-hungry: I have to map 100+ entities and support them in my second-level cache. When I load some of the core entities, I also pull in many things I don’t need: numerous fields used only in narrow contexts, even entire eagerly-loaded entities. At all times I have too much data floating around.

Such a model also makes development much slower. Build and tests take longer, because there are many more tables to generate, mappings to scan, and so on.

It’s also slower for another reason: if a domain class references 20 other classes, how does a developer know which are important and which are not? In any case it leads to very long and somewhat unpleasant classes. What should be the core becomes a gigantic black hole sucking in the entire universe. When an unaware newbie goes near, most of the time he will either sink trying to understand everything, or simply break something – unaware of all the links in his context and unable to understand all the relationships present in the class. Actually, even seniors can be deceived into such mistakes.

The list is probably much longer.

Solution?

There are two issues here.

How did that happen?

I’m writing a piece of code that’s pretty distant from the core, but could really use those two new attributes on this core entity. What is the fastest way? Obvious: Add two new fields to the entity. Done.

I need to add a bunch of new entities for a new use case, strongly related to a core entity. The shortest path? Easy, just reference a few entities from the core. When I need those new objects and I already have the old core entity, Hibernate will do the job of loading the new entities for me as I call the getters. Done.

Sounds natural, and I can see how I could have made such mistakes a few years ago, but the trend could have been stopped or even reversed. With proper code reviews and retrospectives, the team might have found a better way earlier. With some slack and good will, it might even have refactored the existing code.

Is there a better way to do it?

Let’s go back to the opening section on two ways to map domain classes: “Full-blown ORM” vs. document/aggregate style.

Today I believe full-blown ORM may be a good thing for a fairly small project with a few closely related use cases. As soon as we branch out bigger new chunks of functionality and introduce more objects, they should become their own aggregates. They should never be referenced from the core, even though they themselves may orbit around and have a direct link to the core. The same is true for the attributes of core entities: if something is needed only in a faraway use case, don’t spoil the core mapping with a new field. Even introduce a new entity if necessary.

In other words, learn from domain-driven design. If you haven’t read the book by Eric Evans yet, go do it now. It’s likely the most worthwhile and influential software book I’ve read to date.

“Release It!”

A while ago I wrote a post on Learning to Fail inspired largely by Michael T. Nygard’s book titled “Release It”. Now it’s time to review the book itself.

As the sub-title says, the book is all about designing and deploying production-ready software. It opens with a great introduction on why it really matters: Because software often is critical to business. Because its reliability and performance is really our job and a matter of professionalism. Finally (if that’s not enough), because its behavior in production will have a huge impact on our quality of life as well – it’s the choice between panic attacks and the phone ringing at 4 AM, or software Just Working by itself, letting you enjoy a healthy life and do more fun stuff at work. That’s the center of mass here, by the way: more on development and operations, less on management and business.

The book is divided into four main areas. Each starts with a bit of theoretical introduction and/or an anecdote, followed by discussion of concrete phenomena, problems and solutions. Even though it might appear as a collection of patterns and antipatterns, it’s much more than that. Patterns and antipatterns are just a form, but it’s really about setting the focus for a few pages and naming the problem. Anyway, the “pattern” and “antipattern” concept is gone by the middle of the book.

The first part talks about stability, and how it’s impacted by error propagation, lack of timeouts, all kinds of poor error handling, weaker links etc. Then it shows solutions: How to stop errors from propagating. How to be paranoid, expect failure in each integration point (third-party or not), and deal with it. How to fail fast. And so on.

The second part talks about capacity: Dealing with load, understanding constraints and making predictions. Impact from seasonal phenomena or ad campaigns. Strange and non-obvious usage patterns – hitting the “refresh” button, web scrapers etc. Finally, dealing with those issues with proper use of caching, pooling, precomputing content and tuning garbage collection.

The third part is a bag with all kinds of design issues: networking, security, availability (understanding and defining requirements, followed by load balancing and clustering), testing and administration.

The last part is all about operations: logging, monitoring, transparency, releasing, that kind of stuff. How to organize things so that routine maintenance is less painful, monitoring lets us detect issues early, and finally, after or during an incident, we have enough information to diagnose it.

Some problems are discussed from a bird’s-eye view. Most are more down-to-earth, with a detailed discussion of an issue and a sketch of a solution with its weak and strong points. Finally, when applicable, the author rolls up his sleeves and talks about concrete code, SQL, heap dumps, scripting etc.

The book is actually full of real war stories, anecdotes, code samples, tool descriptions, case studies, and all kinds of concrete content. There are a few larger stories that go like this: On this project the team did this, this and that in order to mitigate such and such risks. When marketing sent out an advert, or when the system was launched, or during routine maintenance, this and this broke and started causing problems. We took heap dumps, monitored traffic and contents, read or decompiled the code etc., and discovered problems here and there. Finally, we solved them with this and that. And then comes the detailed list of trouble spots and ways to mitigate them. It’s really a complete view – from business perspective and needs, down to nitpicking about a particular piece of code or discussing popular tools.

Apart from being a great collection of real problems and tricks, there is one longer-lasting, recurring aspect that may be the most valuable lesson here. Michael T. Nygard regularly shows (and makes you feel it deep in your guts, especially if you have done some maintenance in production) that you really should expect failure everywhere, all the time. You should try to predict and mitigate as many issues as possible, as early as possible. You should be paranoid. More than that, embrace the fact that you will fail to predict everything, and design so that even random, unpredictable failures won’t take you down and will be easier to solve.

All the time it’s very concrete and complete. It also feels very professional, genuine and even inspiring.

Highly recommended.

Spring: @EnableWebMvc and JSR-303

I’ve been happily using XML-free Spring with Web MVC, right until the moment when I wanted to plug in JSR-303 validation.

Failure

I imported validation-api and hibernate-validator to my project. I annotated code for my command:

public class SpendingCommand {
	@Size(min=3)
	private String category;
	// ...
}

… and controller:

@Controller
public class SpendingEditionController {

	@RequestMapping(value = "/spending_insert", method = RequestMethod.POST)
	public String addSpending(@Valid SpendingCommand spending,
			BindingResult result, ModelMap model) {
		return "my_view";
	}

	// ...
}

I plugged it in to form:

#springBind("command.$field")
<label for="$field" class="control-label">${label}:</label>
<div class="controls">
	<input type="text" name="${status.expression}" value="$!{status.value}" />
	$!{status.errorMessage}
</div>

… and nothing happened.

I looked for errors in BindingResult in my controller, and nothing was there. Clearly validation was not working at all.

Almost There: @Valid Working

I read a ton of tutorials, and they did not mention any specific black magic. After a long while of doc reading, random trying and debugging, I found this StackOverflow answer. skaffman said that <mvc:annotation-driven /> was “rather pointless”, so “don’t bother”. Luckily I read the comments to that answer as well and discovered that it is actually crucial for all the new goodies in Spring Web MVC, including conversion and validation.

I added annotation equivalent of mvc:annotation-driven to my view configuration:

@Configuration
@ComponentScan(basePackages = "pl.squirrel.money.web")
@EnableWebMvc
public class ViewConfig {
	// ...
}

When I tested my code again, I did see errors in BindingResult in my controller, so finally validation was working. Unfortunately, the web page still did not show the message. Do you know why?

Bindings and Naming Conventions

It took me even longer to figure this one out. I even began to suspect my custom view for Velocity Tools & Tiles.

Finally, in debug, I noticed I had my command bound twice in the page context: as command and as spendingCommand. I had two bindings for BindingResult as well, but with two different instances! One was org.springframework.validation.BindingResult.command, with zero errors, and the other was org.springframework.validation.BindingResult.spendingCommand, containing all the errors as expected.

In a word, mess. To clean this up, I had to explicitly name my command like this:

@RequestMapping(value = "/spending_insert", method = RequestMethod.POST)
public String addSpending(@ModelAttribute("command") @Valid SpendingCommand spending,
		BindingResult result, ModelMap model) {
	return "my_view";
}

Now I only have one instance of everything, and everything is working as expected. And they lived happily ever after.

Quirks

In the end, I find it interesting (in a bad sense) that it works like this. I think it’s a bug that the same command is bound under two different names, but it’s quite the opposite for BindingResult.

To test it, I attempted to edit this SpendingCommand in the controller by overwriting the value of a field. At this point I knew what would happen: my web page showed the overwritten value in the form (because Spring was still able to match the command under a different name), but no validation errors (because there are two different instances of BindingResult).

Spring & Velocity Tools (No XML)

A few months ago I wrote about integrating Spring, Velocity and Tiles. I discovered that one bit was missing from there: Velocity Tools. Two hours of yak shaving, frantic googling and source reading later, I figured out how to add support for Velocity Tools to such a project with no XML configuration. Here’s how.

For starters, let’s say I want to use some tools in my Velocity and Tiles pages. Let’s add the LinkTool.

template.vm:

<html>
	<head><title>#tiles_insertAttribute({"name":"title"})#end</title></head>
	<body>
		#tiles_insertAttribute({"name":"body"})#end
		<p>Spring macros work in tiles template, too: #springUrl("/myUrl")</p>
		<p>Do Velocity tools work in template? $link.contextPath</p>
	</body>
</html>

body.vm:

<p>Here's a demonstration that Spring macros work with Tiles: #springUrl("/myUrl")</p>
<p>Do Velocity tools work in Tile? $link.contextPath</p>

When I render the code from the previous post, I get this:

Here's a demonstration that Spring macros work with Tiles: /SpringVelocityTiles/myUrl

Do Velocity tools work in Tile? $link.contextPath

Spring macros work in tiles template, too: /SpringVelocityTiles/myUrl

Do Velocity tools work in template? $link.contextPath

Not good.

After some googling, I found a similar question on StackOverflow. It had two helpful answers – one from serg, delegating to this blog post, and another from Scott.

None of them worked out of the box, though. I’m tired of XML configs, and apparently it’s too easy to get weird exceptions related to some Struts tools. No wonder I got them – I don’t use Struts and don’t want any of its tools!

Apparently the issue is that Spring support for Velocity Tools is rubbish. One way out is to write your own ViewResolver or View, and that’s what I did in the end.

For starters, I’ll configure my ViewResolver to use a new view class:

@Bean
public ViewResolver viewResolver() {
	VelocityViewResolver resolver = new VelocityViewResolver();
	resolver.setViewClass(MyVelocityToolboxView.class);
	resolver.setSuffix(".vm");
	return resolver;
}

MyVelocityToolboxView is below. This time I’m pasting it with imports to avoid ambiguity on names like Context or VelocityView.

package pl.squirrel.svt;

import java.util.Map;
import java.util.Set;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.velocity.context.Context;
import org.apache.velocity.tools.Scope;
import org.apache.velocity.tools.ToolboxFactory;
import org.apache.velocity.tools.config.ConfigurationUtils;
import org.apache.velocity.tools.view.ViewToolContext;
import org.springframework.web.servlet.view.velocity.VelocityView;

public class MyVelocityToolboxView extends VelocityView {
	@Override
	protected Context createVelocityContext(Map<String, Object> model,
			HttpServletRequest request, HttpServletResponse response) {
		ViewToolContext context = new ViewToolContext(getVelocityEngine(),
				request, response, getServletContext());
		
		ToolboxFactory factory = new ToolboxFactory();
		factory.configure(ConfigurationUtils.getVelocityView());
		
		for (String scope : Scope.values()) {
			context.addToolbox(factory.createToolbox(scope));
		}

		if (model != null) {
			for (Map.Entry<String, Object> entry : model.entrySet()) {
				context.put(entry.getKey(), entry.getValue());
			}
		}
		return context;
	}
}

It’s important that we only use ConfigurationUtils.getVelocityView() – it includes generic tools and view tools, but not Struts tools.

That’s it, now we have a project which uses Tiles for high-level templating, Velocity for individual pages and details, with (hopefully) full support for Spring macros and Velocity tools in all areas. Even if you don’t like Tiles, it may still serve as a good example of how to integrate Spring and Velocity Tools.

I pushed updated code for the demo application to my GitHub repository.

In my irregular habit of posting a sermon on Friday: less than a week ago I saw an excellent presentation on using Clojure for Spring views. Compared to all this mess and yak shaving here, that Clojure solution is infinitely simpler, more elegant and more powerful at the same time. Too bad it does not have the market share of Spring & Velocity yet.

Confitura 2012 & “Boring” Application of Clojure

This year I attended the Confitura conference for the first time. Many posts have been written on it, so I’ll focus on one (but not the only) presentation that I found particularly worthwhile.

Clojure…

I started my day with the talk on Clojure as an HTML Templating Language by Łukasz Baran. Despite all the Clojure advocacy I do, I was surprised to find this presentation in the agenda – a specialized, concrete talk instead of a generic “Clojure is awesome! Go use it!”. Another surprise came from the fact that someone in Poland was serious about using it commercially. I suspect I wasn’t the only surprised person there, as there were a few dozen people in the room and only a few of them had used Clojure or even functional programming.

Łukasz showed how you can use Clojure for HTML templating in Java projects. He described a team which moved from Velocity to Clojure around 2009. They created a DSL for HTML similar to Hiccup (before Hiccup became popular), and went a step further, implementing a component library and other automations. From the very beginning they assumed it would be called from Java, so they wrote a loosely coupled component that you can plug into Spring. They’ve been using it in production for years and are very happy with it.

Why do this, and why this way? Compared to Velocity, Clojure is very fast, concise, powerful and productive. It has a gentle learning curve (when narrowed down to the DSL). It was much easier to introduce Clojure into a big enterprise Java shop this way than to write purely Clojure projects from scratch (this may be a good approach in general, by the way).

Personally, I really liked that this presentation was limited to such a boring but concrete area. Everyone knows something about writing view layers in Java webapps. Everyone knows the pains of JSP (with Velocity and other frameworks or without). This presentation showed that it can be done differently and that the pain points can be mitigated. It also was a great proof that Clojure has its place and is no academic black magic. In other words, advocacy done right: instead of showing off the new tool from an ivory tower, demonstrate how it can solve a concrete everyday problem.

… and the Rest

That was not the only good thing at this Confitura. In fact, I really enjoyed most of the talks I saw there.

Paweł Wrzeszcz gave a very good presentation on how to work remotely and not go crazy. He showed many good habits for teams and individuals that let you live a healthy life in a healthy project. Though my personal conclusion is that even if you do everything right as an individual, team culture can kill, and sometimes the only way out is… out.

I saw two talks on testing, by Jacek Kiljański and Tomek Kaczanowski. Jacek seemed to be a young enthusiast who believes in everything he says, and he also had a well-prepared presentation on clean tests with a clear message and good examples. The following talk by Tomek was quite different – it felt more like a rational, sometimes skeptical veteran sharing war stories. It may be due to my tiredness, or the combination of high temperature and low oxygen, but I did not find this presentation as sharp and clear as the previous one. Part of the story might be that Jacek stole Tomek’s thunder, as there was a lot of overlap.

After a break I went to Maciek Próchniak’s talk on Scala, CQRS and Event Sourcing, but I was rather disappointed. It was pretty chaotic and shallow. I suspect that if you had even a slight idea of CQRS and ES, you could not learn much new – even though the problem at hand had some depth that could have been discussed in more detail. On the other hand, it assumed too much of the listener to be suitable for a beginner.

Then I saw Sławek Sobótka’s presentation about soft aspects for IT experts. It was centered around the Dreyfus model of skill acquisition (again – even Sławek admitted it’s something that appears a few times at each conference). Still, it managed to offer something new by treating the model only as a framework and an excuse to dive into many interesting aspects of psychology. Very professional, enjoyable and worthwhile.

Our day ended with Wojciech Seliga’s keynote titled How to be awesome at a Java developer job interview. Less of a talk, more of an emotional rant, but most of the time I really agreed with the presenter. I know way too many careless, ignorant people who consider themselves experts yet neglect common tools and practices, stopped learning years ago or simply don’t know what they’re doing.

All in all, it was a very good conference. More technical and low-level than 33rd Degree. By no means “worse” or “better”, just different. It felt like a family get-together rather than a big conference with big names talking about big stuff that puts things in perspective or shows some trends, but is somewhat detached from our daily work.