Careful With Native SQL in Hibernate

I really like Hibernate, but I don’t know another tool that is nearly as powerful and as deceptive at the same time. I could write a book on production surprises and cargo cult programming related to Hibernate alone. It’s more of an issue with the users than with the tool, but let’s not get too ranty.

So, here’s a recent example.

Problem

We need a background job that lists all files in a directory and inserts an entry for each of them into a table.

Naive Solution

The job used to be written in Bash, and there is already some direct SQL reading from the table. So, blinders on and let’s write some direct SQL!

for (String fileName : folder.list()) {
    SQLQuery sql = session.getDelegate().createSQLQuery(
        "insert into dir_contents values (?)");
    sql.setString(0, fileName);
    sql.executeUpdate();
}

Does it work? Sure it does.

Now, what happens if there are 10,000 files in the folder? What if you also have a not so elegant domain model, with way too many entity classes, thousands of instances and two levels of cache all in one context?

All of a sudden this trivial job takes 10 minutes to execute, all that time keeping 2 or 3 CPUs busy at 100%.

What, for just a bunch of inserts?

Easy Fix

The problem is that it’s Hibernate. It’s not just a dumb JDBC wrapper – it has a lot more going on, trying to keep its caches and session state up to date. If you run a bare SQL update, it has no idea which table(s) you are updating, what that depends on and how it affects everything else, so just in case it pretty much flushes everything.

If you do this 10,000 times in such a crowded environment, it adds up.

Here’s one way to fix it – rather than running 10,000 updates with flushes, execute everything in one block and flush once.

session.doWork(new Work() {
    public void execute(Connection connection) throws SQLException {
        PreparedStatement ps = connection
                .prepareStatement("insert into dir_contents values (?)");
        for (String fileName : folder.list()) {
            ps.setString(1, fileName);
            ps.executeUpdate();
        }
    }
});

Other Solutions

Surprise, surprise:

  • Do use Hibernate. Create a real entity to represent DirContents and just use it like everything else. Then Hibernate knows which caches to flush and when, how to batch updates and so on (see the sketch right after this list).
  • Don’t use Hibernate. Use plain old JDBC, MyBatis, or whatever else suits your stack or is there already.

Takeaway

Native SQL has its place, even if this example is not the best use case. Anyway, the point is: If you are using native SQL with Hibernate, mind the session state and caches.

ClojureScript Routing and Templating with Secretary and Enfocus

A good while ago I was looking for good ways to do client-side routing and templating in ClojureScript. I investigated using a bunch of JavaScript frameworks from ClojureScript, of which Angular probably gave the most promising results, but it still felt a bit dirty and heavy. I even implemented my own routing/templating mechanism based on Pedestal and goog.History, but something still felt wrong.

Things have changed and today there’s a lot of buzz about React-based libraries like Reagent and Om. I suspect that React on the front end with a bunch of “native” ClojureScript libraries may be a better way to go.

Before I get there though, I want to revisit routing and templating. Let’s see how we can marry together two nice libraries: Secretary for routing and Enfocus for templating.

Let’s say our app has two screens which fill the entire page. There are no various “fragments” to compose the page from yet. We want to see one page when we navigate to /#/add and another at /#/browse. The “browse” page will be a little bit more advanced and support path parameters. For example, for /#/browse/Stuff we want to parse the “Stuff” and display a header with this word.

The main HTML could look like:

<!DOCTYPE html>
<html>
<body>
	<div class="container-fluid">
		<div id="view">Loading...</div>
	</div>

	<script src="js/main.js"></script>
</body>
</html>

Then we have two templates.

add.html:

<h1>Add things</h1>
<form>
  <!-- boring, omitted -->
</form>

browse.html:

<h1></h1>
<div>
  <!-- boring, omitted -->
</div>

Now, all we want to do is fill the #view element on the main page with one of the templates whenever the location changes. The complete code for this is below.

(ns my.main
  (:require [secretary.core :as secretary :include-macros true :refer [defroute]]
            [goog.events :as events]
            [enfocus.core :as ef])
  (:require-macros [enfocus.macros :as em])
  (:import goog.History
           goog.History.EventType))

(em/deftemplate view-add "templates/add.html" [])

(em/deftemplate view-browse "templates/browse.html" [category]
  ["h1"] (ef/content category))

(defroute "/" []
  (.setToken (History.) "/add"))

(defroute "/add" []
  (ef/at 
    ["#view"] (ef/content (view-add))))

(defroute "/browse/:category" [category]
  (ef/at 
    ["#view"] (ef/content (view-browse category))))

(doto (History.)
  (goog.events/listen
    EventType/NAVIGATE 
    #(em/wait-for-load (secretary/dispatch! (.-token %))))
  (.setEnabled true))

What’s going on?

  1. We define two Enfocus templates. view-add is trivial and simply returns the entire template. view-browse is a bit more interesting: given a category name, it alters the template by replacing the content of the h1 tag with that name.
  2. Then we define Secretary routes that actually use those templates. All they do for now is replace the content of the #view element with the template. In the case of the “browse” route, it passes the category name parsed from the path to the template.
  3. There is a default route that redirects from / to /add. It doesn’t lead to example.com/add, but only sets the fragment: example.com/#/add.
  4. Finally, we plug Secretary into goog.History. I’m not sure why it’s not there out of the box, but it’s straightforward enough.
  5. Note that in the history handler there is the em/wait-for-load call. It’s necessary for Enfocus if you load templates with AJAX calls.

That’s it, very simple and straightforward.

Update: Fixed placement of em/wait-for-load, many thanks to Adrian!

“Version Control with Git, 2nd Edition” by Jon Loeliger, Matthew McCullough; O’Reilly Media

Version Control with Git

There are reasons why Git has become so popular, but the first encounter with it can be a bit overwhelming. Even if you kind of learn how to do the basic things, it’s not uncommon to feel like you’re only scratching the surface. The typical reaction when something slightly less typical is needed often sounds like: “There be dragons!”

Here comes “Version Control with Git” by Jon Loeliger and Matthew McCullough.

It starts with a good explanation of the basic concepts of Git. It explains all the building blocks of Git and the internal organization of a repository. It slowly introduces the basic commands, each time explaining very well how a change is reflected in the repository or what a command really operates on.

Distribution, collaboration, merging and so on are introduced fairly late, but by that time the reader will have understood the core so well that everything just falls into place and is immediately understandable. Finally, it also shows some more arcane features and commands that are probably rarely used, but knowing that they are there and having the book handy for when the time comes doesn’t hurt.

Last but not least, it explains common usage patterns as well as things that can be done outside the typical path, with appropriate warnings about possible negative impact.

This book is a must-read for all Git users. It’s usable on all levels, from absolute newbie to someone who feels fairly proficient with Git. I’ve been using Git daily for quite a while, and it really helped me understand what is going on. Everything is very accessible, with plenty of examples as small and practical as possible, as well as some images.

Direct Server HTTP Calls in Protractor

When you’re running end-to-end tests, chances are that sometimes you need to set up the system before running the actual test code. It can involve cleaning up after previous executions, going through some data setup “wizard” or just calling the raw server API directly. Here’s how you can do it with Protractor.

Protractor is a slick piece of technology that makes end-to-end testing pretty enjoyable. It wires together Node, Selenium (via WebDriverJS) and Jasmine, and on top of that it provides some very useful extensions for testing Angular apps and improving areas where Selenium and Jasmine are lacking.

To make this concrete, let’s say that we want to execute two calls to the server before interacting with the application. One of them removes everything from the database, the other kicks off a procedure that fills it with some well-known initial state. Let’s write some naive code for it.

Using an HTTP Client

var request = require('request');

describe("Sample test", function() {
    beforeEach(function() {
        var jar = request.jar();
        var req = request.defaults({
            jar : jar
        });

        function post(url, params) {
            console.log("Calling", url);
            req.post(browser.baseUrl + url, params, function(error, message) {
                console.log("Done call to", url);
            });
        }

        function purge() {
            post('api/v1/setup/purge', {
                qs : {
                    key : browser.params.purgeSecret
                }
            });
        }

        function setupCommon() {
            post('api/v1/setup/test');
        }
        
        purge();
        setupCommon();
    });

    it("should do something", function() {
        expect(2).toEqual(2);
    });
});

Since we’re running on Node, we can (and will) use its libraries in our tests. Here I’m using request, a popular HTTP client with the right level of abstraction, built-in support for cookies etc. I don’t need cookies for this test case – but in real life you often do (e.g. log in as some admin user to interact with the API), so I left that in.

What we want to achieve is running the “purge” call first, then the data setup, then move on to the actual test case. However, in this shape it doesn’t work. When I run the tests, I get:

Starting selenium standalone server...
Selenium standalone server started at http://192.168.15.120:58033/wd/hub
Calling api/v1/setup/purge
Calling api/v1/setup/test
.

Finished in 0.063 seconds
1 test, 1 assertion, 0 failures

Done call to api/v1/setup/purge
Done call to api/v1/setup/test
Shutting down selenium standalone server.

It’s all wrong! First it starts the “purge”, then it starts the data setup without waiting for purge to complete, then it runs the test (the little dot in the middle), and the server calls finish some time later.

Making It Sequential

Well, that one was easy – the HTTP client is asynchronous, so that was to be expected. That’s nothing new, and finding a useful synchronous HTTP client for Node isn’t that easy. We don’t need one anyway.

One way to make this sequential is to use callbacks. Call purge, then data setup in its callback, then the actual test code in its callback. Luckily, we don’t need to visit the callback hell either.

The answer is promises. WebDriverJS has nice built-in support for promises. It also has the concept of control flows. The idea is that you can register functions that return promises on the control flow, and the driver will take care of chaining them together.

Finally, on top of that Protractor bridges the gap to Jasmine. It patches the assertions to “understand” promises and plugs them into the control flow.

Here’s how we can improve our code:

var request = require('request');

describe("Sample test", function() {
    beforeEach(function() {
        var jar = request.jar();
        var req = request.defaults({
            jar : jar
        });
        
        function post(url, params) {
            var defer = protractor.promise.defer();
            console.log("Calling", url);
            req.post(browser.baseUrl + url, params, function(error, message) {
                console.log("Done call to", url);
                if (error || message.statusCode >= 400) {
                    defer.reject({
                        error : error,
                        message : message
                    });
                } else {
                    defer.fulfill(message);
                }
            });
            return defer.promise;
        }
		


        function purge() {
            return post('api/v1/setup/purge', {
                qs : {
                    key : browser.params.purgeSecret
                }
            });
        }

        function setupCommon() {
            return post('api/v1/setup/test');
        }
		
        var flow = protractor.promise.controlFlow();
        flow.execute(purge);
        flow.execute(setupCommon);
    });

    it("should do something", function() {
        expect(2).toEqual(2);
    });
});

Now the post function is a bit more complicated. First it creates a deferred object. Then it kicks off the request to the server, providing a callback that fulfills or rejects the promise on the deferred. Eventually it returns the promise. Note that purge and setupCommon now return promises.

Finally, instead of calling those functions directly, we get access to the control flow and push those two promise-returning functions onto it.

When executed, it prints:

Starting selenium standalone server...
Selenium standalone server started at http://192.168.15.120:53491/wd/hub
Calling api/v1/setup/purge
Done call to api/v1/setup/purge
Calling api/v1/setup/test
Done call to api/v1/setup/test
.

Finished in 1.018 seconds
1 test, 1 assertion, 0 failures

Shutting down selenium standalone server.

Ta-da! Purge, then setup, then run the test (again, that little lonely dot).

One more thing worth noting here is that the control flow not only takes care of executing the functions in sequence, but also understands the promises well enough to fail the test as soon as any of them is rejected. Once again, that is something that would be quite messy to achieve with callbacks.

In real life you would put that HTTP client wrapper in a separate module and just use it wherever you need it. Let’s leave that as an exercise.

“RESTful Java with JAX-RS 2.0, 2nd Edition” by Bill Burke; O’Reilly Media

RESTful Java with JAX-RS 2.0

REST is all the rage now (not without a reason), and in the Java world the standard API for it is JAX-RS (under the JEE umbrella). “RESTful Java with JAX-RS 2.0” is the second edition of Bill Burke’s book on the JAX-RS API. Bill Burke is the creator of RESTEasy and a member of the committee that designed JAX-RS.

The book is divided into two parts, with over a dozen short chapters in each. The first part includes a very nice introduction to REST, a great systematic reference to the API, and finally a few words on integration with various frameworks, security, caching etc. The second part is basically a workbook – there is downloadable code with a few examples for each chapter, and these chapters are essentially a detailed walkthrough.

The author is careful not to get ahead of himself and starts quite slowly, introducing the more advanced, automated and magical features step by step. As a result, the book is a great introduction for complete newcomers. But it doesn’t stop there – it also discusses all the more advanced features later on, with the same depth and clarity. That makes it a great reference and a cookbook that you’re likely to come back to as you use the API in your work.

Everything is well thought out and executed. It’s a very easy read, with each chapter stating the problem it’s trying to solve, following up with a presentation of the relevant part of the API and a number of practical examples. If for some reason you need more, you’re free to explore the workbook or the complete running code.

Highly recommended.

Note on edition: I read it on Kindle, no issues at all.

“Mastering Web Application Development with AngularJS” (Book Review)

While the first demos and tutorials of AngularJS make a very good impression, using it on your own in real-life applications quickly leads to confusion and frustration. You soon discover that the documentation falls short of explaining what is really going on, especially in the more advanced areas. It does not do a very good job of showing idiomatic usage either – with proper separation of responsibilities, use of services and directives, etc.

“Mastering Web Application Development with AngularJS” by Paweł Kozłowski and Peter Darwin is a really good resource to fill those gaps. It starts with a decent explanation of what AngularJS is all about: how the DOM is a kind of skeleton behind the application, or in other words how application state is directly reflected in the DOM. Right after that it introduces unit testing, and from that point on everything is demonstrated not only with the “production” code, but also with accompanying test suites.

Then it starts to dig a bit deeper – from filters, communication with the back-end and navigation, through writing custom directives, to performance. While the beginning seems a bit slow, the chapters on directives are really detailed, have plenty of great examples and do an outstanding job of explaining this difficult subject. Actually, I would say that the whole book may be a bit too advanced for beginners, but even if you have some experience with Angular, it is well worth reading for the directives alone.

The entire book is organized as a systematic “reference”, with each chapter dedicated to one aspect of the framework: binding and filters, communication with the server, forms, navigation and routing, directives, internationalization, build/deployment, and so on. There is also a complete, non-trivial application available on GitHub and referenced throughout the book. Each and every aspect gets a very accessible and complete explanation: theory and rationale, working code, as well as test suites.

In other words, the book is not a simplistic tutorial, but a detailed study that takes a reasonably complex application and dissects it one “dimension” at a time. You don’t need to study the entire application while reading the book, but it’s a great complementary material that demonstrates how the pieces fit together and is a ready-to-use cookbook of some sort.

If there is anything missing, I would say it’s information on idiomatic usage: How you are supposed to structure your application, divide it into modules and services, and so on. Not that it’s completely missing from the book, but a bit of a bird’s-eye view would be nice as well.

All in all, it’s definitely worth reading. Detailed, non-trivial, doing a great job at explaining the “why’ and demonstrating the “how”.

(I got the book directly from Packt and read it on Kindle – nothing to complain about in this edition, everything readable and comprehensible.)

The future may just as well be RESTful

Chris Zheng has just published an article on “Why the future is NOT RESTful”. It made a bit of a splash, but I think it’s based on false assumptions and quite wrong. Here’s why.

Chris observes that the client takes over more and more responsibilities from the server. He suggests that the server is slowly becoming just a database frontend with authorization. I think that view is very wrong, if not dangerous.

The server will never be a dumb database frontend, except in trivial CRUD applications. If you have more than one user in the system working with the same thing concurrently, you have to have some coordination on the server side. If you have any business logic at all, you have to have it on the server side.

The client is becoming thicker, fine. It may have a database, offline state and a lot of logic. But that does not mean the server is losing any of its thickness in the domain model. We’re only moving the presentation side (all of it), enhancing it, maybe duplicating the domain logic. Ultimately we cannot trust the client about anything: it’s all in the hands of the users, exposed to reverse engineering, all kinds of forgery and so on. I can send whatever I want to the server with curl.

Next point: with REST you can’t have security. I’m young, green and new to the game, but is that really true? Does REST mean no authorization, no results filtered for each user? If Jim can see some “accounts” in the system and Jessica some others, does that make it impossible to return different results for GET /account depending on the context?

To take it a step further: we’re not talking about the very monolithic view here, are we? REST doesn’t mean that you can only have one representation of an “account”, with only one GET/POST/PUT on it. You can still have more representations, each tailored for a specific use case. Getting a list of Jimmy’s friends would be a completely different endpoint from managing user permissions, account settings and so on, even if they’re all “accounts”.

Another point Chris makes is that for some reason having one resource per thing is bad, but there’s little explanation for it except for a restaurant-counters analogy. It’s hard to argue when there are no real arguments on the other side, but…

Having concrete, well-defined resources that each represent one thing doesn’t really sound like a bad idea to me. If anything, it sounds like ideal interface segregation and single responsibility. Are those not desirable anymore? Going back to blurry restaurant analogies – I may go for fries or rice, I do order dessert from a different part of the menu than the main course, and I may even order wine from a different menu altogether. I may be served by more than one waiter. Heck, sometimes I may even go for a buffet!

To sum up, REST is more than dumb CRUD, and the server is much more than an authorizing database frontend. The popularity of REST is partly a matter of fashion; it’s just another way to solve the problem, and by no means is it a silver bullet. But still, the future may just as well be RESTful.

“Beautiful REST + JSON APIs” by Les Hazlewood (notes from the talk)

Here’s a summary of a few interesting technical details from a very good presentation on designing REST APIs by Les Hazlewood. Many interesting, elegant and not so obvious solutions here.

  • Keep resources coarse-grained, especially if you’re designing a public API and you don’t know the use cases.
  • Resources are either collections or single instances. They’re always an object, a noun. Don’t mix the URLs with behavior. /getAccount and /createDirectory can easily explode to /getAllAccounts, /searchAccounts, /createLdapDirectory…
  • Collection is at /accounts, instance at /accounts/a1b2c3. Create, read, update and delete with GET/POST/PUT/PATCH/DELETE… Everyone knows, but anyway.
  • Media types:
    • application/json – regular JSON.
    • application/foo+json – JSON of type foo. In my understanding, roughly corresponding to XML schema or DTD.
    • application/foo+json;application – JSON of type foo, where the response type (entity format?) is application.
    • application/json, text/plain – JSON preferred, plain text acceptable.
  • Versioning:
    • https://api.foo.com/v1
    • Media-Type application/json+foo;application&v=1
  • HREF:
    • Use instead of IDs
    • Sample use:
      GET /accounts/a11b223
      200 OK
      {
        "href": "https://api.foo.com/v1/accounts/a11b223",
        "givenName": "Tony",
        "surname": "Stark",
        "directory": {
          // Instance reference
          "href": "https://api.foo.com/v1/directories/ffb554"
        },
        "groups": {
          // Collection reference
          "href": "https://api.foo.com/v1/accounts/a11b223/groups"
        }
      }
      
    • Reference (link) expansion:
      GET /accounts/a11b223?expand=directory
      {
        "href": "https://api.foo.com/v1/accounts/a11b223",
        "givenName": "Tony",
        "surname": "Stark",
        "directory": {
          "href": "https://api.foo.com/v1/directories/ffb554",
          "name": "Avengers",
          "creationDate": "2013-08-08T14:55:12Z"
        }  
      }
      
    • Partial representations:
      GET /accounts/a11b223?fields=givenName,surname,directory(name)
      
    • Pagination:
      GET .../applications?offset=50&limit=25
      {
        "href": ".../applications",
        "offset": 50,
        "limit": 25,
        "first": { 
          "href": ".../applications?offset=0"
        },
        "prev": {
          "href": ".../applications?offset=25"
        },
        "items": [
          {"href": "..."},
          {"href": "..."}
        ]
      }
      
  • Many-to-many – represent as another, special resource, e.g. /groupMembership. Not so obvious potential benefits: If you make it a resource, it’s easy to reference from either end of the association. If you want, you can easily add data to the association (e.g. creation date, responsible user, whatever).
  • Error handling – credit to Twilio.
    POST /directories
    409 Conflict
    {
      "status": 409,
      "code": 40924, // because HTTP has too few codes
      "property": "name",
      // Ready-to-use user-friendly message:
      "message": "A directory named Avengers already exists",
      // Dev-friendly message, can be stacktrace etc.
      "developerMessage": "A directory named Avengers already 
        exists. If you have a stale local cache, please expire
        it now.",
      "moreInfo": "http://foo.com/docs/api/errors/40924"
    }
    
  • Security – many interesting points here, but one particularly valuable. Authorize on content, not URL. URLs can change and may diverge from security configuration…

It’s too bad that the presenter doesn’t really leave the happy CRUD path. I’d really like to hear his recommendation or examples on more action-oriented use cases (validate or reset user password, approve order, what have we).

The entire presentation is a bit longer than that. It talks quite a lot about why you would want to use REST and JSON, as well as about security, caching and other issues.

Clojure on Pedestal

Yesterday I gave a two-hour talk at Lambda Lounge Kraków on Pedestal (and some ClojureScript).

I talked only about the client side, and in that mostly about the dataflow engine and how it connects to rendering and services. We also had some unplanned discussion on testing.

Most of the talk was actually a tutorial, developing a basic monitoring application in a few steps. It probably doesn’t add much over the tutorial, but hopefully it was a faster introduction.

If you’re interested in resources:

Now that I’m on the Internet, I guess I have to explain myself. Yes, I am aware that there are better ways than that jQuery, that I could have created a library for time series, and that it would not run after advanced Closure compilation. It’s not even a perfect Pedestal app. But I think it did the job of explaining the concepts behind Pedestal without drowning the audience in details, using basic pieces that everyone was familiar with. Let’s say that I left ironing out the wrinkles as an exercise.

Despite my lack of experience, from my point of view it went fairly well. Always an interesting experience. If you’ve been there, feel free to tell me what you think.

Strange Loop 2013

This year I had the pleasure to be at the Strange Loop conference in Saint Louis. Here’s a brief recap of what was happening there.

Preconf: Emerging Languages Camp

On the first day I sat in about a dozen presentations related to “emerging languages”. The good ones:

  • Daniel Gregoire – Gershwin: stack-based, concatenative Clojure. There were a few presentations of stack-based languages here, but in my opinion this one was the best. It was the first time I actually heard about such languages. Not very practical, but quite an interesting exercise nonetheless.
  • Bento Miso – Daimio: a language for sharing. An attempt at creating a language for writing web plugins, with sandboxing, clear and isolated execution spaces, and interactions defined via in/out ports… A bit too hard to follow, though.
  • Andreas Rumpf – Nimrod: a very good presentation of a quite interesting and well-thought-out language intended mainly for game development. Reportedly fast, very expressive, with a decent compiler offering macros and some serious compile-time optimizations… Worth a look.
  • Walter Wilson – Axiomatic Languages. Interesting, very theoretical approach to programming. Define programs in terms of axioms and logical reasoning. Want to sort a list? No problem, just define what it means for elements to be ordered and what a permutation is, then say that the desired output is a permutation that is ordered. Rather academic.
  • Tracy Harms – J. The joke says it’s been an emerging language for 23 years. I’ve never been able to get it (not that I tried much), and this presentation was a very good introduction. It included and decently explained a lot of the syntax as well as the underlying flow and concepts. Quite unusual and interesting.
  • Bodil Stokke – BODOL. This gem was one of the wildest things that I’ve seen not only on this conference, but also in my professional life. Ponies, pattern matching, logic programming, lambdas, monads, types, parsing, programming language theory, live coding… All running from Emacs.

I also attended two unsessions.

  • One was a live recording session of the Think Distributed podcast. Four smart folks took questions from the audience and shared their thoughts on testing, programming, handling failures and so on in a distributed environment. Worthwhile.
  • The other unsession was largely based on the she++ documentary on bringing women into programming (and technology in general). An important problem that reappeared a few more times at the conference. It’s worth mentioning, though, that Strange Loop had an unusually high number of female speakers. They might have been preferred in some ways, but they nonetheless delivered many awesome talks and gender did not really matter.

Day One

  • Jenny Finkel – Machine Learning for Relevance and Serendipity. In this opening keynote we got to learn some of the challenges behind Prismatic. (If you haven’t seen Prismatic yet, do it now. It’s awesome.). Very good insights on the discovery problem, machine learning, collective intelligence, user behavior, statistics…
  • Jason Gilman – Visualization Driven Development. One more stab at the problem of using visual aids in programming. This time more like: Tired of waiting for the tools? Roll your own, very custom for the code you’re currently working with.
  • Philipp Haller – Scala Async. I don’t really do Scala, but I liked this presentation a lot. Solid, elegant presentation of upcoming features of Scala for async programming (inspired by F# and C#). Lots of good code, easy to follow examples and great explanation of the matter.
  • Craig Andera – Real-World Datomic. A little bit of a case study, much insight into design and usage of the database. We got to learn about designing and representing the domain model, all the index types and their use cases and some other internals of Datomic. Great talk.
  • Rich Hickey – Clojure core.async. Very, very good presentation not only on core.async (which I’ve never actually used before), but also on asynchronicity in general, CSP, performance, code expressiveness and cohesion… Not so much syntax and case studies, more of a solid explanation of reasons and usage of the library. Plus Rich is as brilliant a speaker as he is a creator.
  • Craig Muth – Xiki: GUI and Text Interfaces Are Converging. Demonstration of a nice… engine (?) for doing all kinds of things from plain text files or just a text editor: GUI prototyping, using git, databases, rails, file system navigation. Very impressive demos, but is it practical? Seems like it may need a good deal of Emacs-fu or a few dozen keyboard shortcuts to do effectively.
  • Joseph Wilk – Creative Machines. The presenter shared many different attempts to get the computer to create music (as well as some examples of work done by others on this topic), all of it trying to answer whether machines can be creative, what it means to be creative, how human creativity works and whether it can be reproduced by a machine. Passionate, well-delivered and thought-provoking, even though I do not fully agree with the conclusions.
  • Jen Myers – Making Software Development Make Sense to Everyone. Very passionate keynote on encouraging women to technology. Talking about iconic heroines of the past (Sally Ride, Nichelle Nichols), as well as something closer to our field and time. Jen presented the “little”, things that she is directly involved with at the moment (Dev Bootcamp, Girl Develop It), as well as shared her vision of a future where hopefully it no longer will be necessary.

At night there was a wild party in the City Museum. Warehouse turned to museum with lots and lots of industrial or fossilized attractions: High and twisted slides, caves, hanging bus, ball pit, crazy places to crawl and walk and lie and sit everywhere (ceilings, floors, going in all directions). Everything adult sized, and boy were those geeks happy as kids to do all this stuff. Last but not least, it was accompanied by very appropriate cave/jazz/dubstep band Moon Hooch. Words can’t really describe it, you had to experience it.

Day Two

  • Martin Odersky – The Trouble With Types. The first half of this keynote was a dense study of type systems in various languages, including the tradeoffs of dynamic vs. static and strong vs. weak. Martin did a great job explaining why Scala looks the way it does, and why he thinks a strong, static type system is the way to go. The second half delved deep into Scala and to me was too abstract, or required too deep a knowledge of Scala. Still, Odersky is a great speaker and thinker. Bonus points for the joke that Rich Hickey could take those slides and give the same talk, and for treating us to Bach.
  • Avi Bryant – Add all the things: Abstract algebra meets analytics. Very good presentation with very practical examples of how thinking in basic mathematical terms can make software development easier. It showed how you can apply the group theory to data aggregation and storage. It also had a few interesting tricks generally related to the area of data collection and analytics. Great presenter.
  • Crista Lopes – Exercises in Style. Take one problem (word count in a text file) and solve it in one language in several different styles, each time deliberately taking on some very specific constraints: spaghetti script, procedural, functional, continuation-passing style… Great exercises, worth a look (a much bigger book is coming out shortly). Unfortunately, it was too slow in the beginning and had to cut a few examples later.
  • Brenton Ashworth – Web Apps in Clojure and ClojureScript with Pedestal. Good talk demonstrating and explaining Pedestal. Might have been a little bit too abstract, but I guess you can only fit so much in one hour. Included an impressive dashboard demo (great user experience and data visualizations).
  • David Pollak – Getting Pushy: Pushing data from server to browser. The creator of Lift, appropriately dressed in a lab coat, did a very good job talking about client-server (or server-client?) interaction in web apps. It involved communication in both directions, nicely encapsulated in one accessible piece that solves all the pain of possible failure modes in HTTP and network itself. Included very slick demos in AngularJS and Lift, as well as a little experiment in Clojure (Plugh) based on core.async and clang. Very good.
  • Josh Bloch, Bob Lee – Java Puzzlers, Strange Loop Edition. A handful of entertaining puzzles that can often surprise even veteran Java developers. The Java specification seems to have an infinite number of crazy corners, apparently created only so Josh can have fun with the puzzlers.
  • Chris Granger – Finding a way out. The inventor of Light Table unveiled another interesting project supposed to make programming different and more accessible: Aurora. It made for some very slick and impressive demos, but it’s far from finished or usable as an end-to-end product. This time it looks like the target is not a developer of complex apps, but rather the “spreadsheet programmer”. Really nice, but not sure if very practical, and it would be great to actually see a finished, polished environment (Aurora or Light Table).
  • Douglas Hofstadter – What is a strange loop and what is it like to be one? Very good closing keynote, related to Hofstadter’s books. Impressive speech on life, perception, abstraction and consciousness.
  • David Stutz – Thrown for a Loop: A Carnival of Consciousness. Performance/theater very closely related to the preceding speech and Hofstadter’s works. Good deal of discussion of the material and immaterial, self-consciousness and imagination. Appropriately included variations of Goedel theorem, Escher graphics (plus a good deal of Emacs) and live play of Bach fugues by a brass quintet. Plus some very pleasant acrobatics. Very spectacular and unusual, I wish my English sucked less so I could get more of it.

Final Thoughts

I’m using the words “very good”, “great” and “impressive” all the time. It’s boring, but it doesn’t even do justice to the quality of talks and the whole conference. I had very high expectations for the whole thing, and I tend to be hard to please, yet the reality was simply a dream come true.

In some ways the conference was surprisingly minimalistic – I did not even get a notepad or a printed program, no sponsor adverts, nothing except the ID badge, breakfast at the hotel and a cold catered lunch. On the other hand, we got the wild party at the City Museum, the conference venue was very spectacular and comfortable, not to mention the speaker list. I do not mind the tradeoff at all! That’s the way to do it – cut the semi-important stuff to a minimum and spend all the money on what really matters and becomes a truly unforgettable experience. Hats off to Alex Miller for putting it all together.