
Human Error?

I’ve just watched Sidney Dekker’s “System Failure, Human Error: Who’s to Blame” talk from DevOpsDays Brisbane 2014. It’s a very good and worthwhile talk, though there is some noise.

You can watch it here:

It covers a number of interesting stories and publications from the last 100 years of history related to failures and disasters, their causes and prevention.

Very quick summary from memory (but the video surely has more depth):

  • Shit happens. Why?
  • Due to human physical, mental or moral weaknesses – a claim from the early 20th century, repeated to this day.
  • One approach (equated to MBA): these weak and stupid people need to be told what to do by the more enlightened elites.
  • Bad apples – 20% of people are responsible for 80% of accidents. Just find them and hunt them down? No, because it’s impossible to account for the different conditions of every case. Maybe the 20% of bus drivers with the most accidents drive in the busy city center? Maybe the 20% of doctors with the most patient deaths are infant surgeons – how can we compare them to GPs?
  • Detailed step-by-step procedures and checklists are very rarely possible. When they are, though, they can be very valuable. This happens mostly in industries and cases backed by long and thorough research – think piloting airplanes and space shuttles, surgery etc.
  • Breakthrough: Maybe these humans are not to blame? Maybe the failures are really a result of bad design, conditions, routine, inconvenience?
  • Can disasters be predicted and prevented?
  • Look for deviations – “bad” things that are accepted or worked around until they become the norm.
  • Look for early signs of trouble.
  • Design so that it’s harder to do the wrong thing, and easier and more convenient to do the right thing.

A number of stories follow. Now, this is a talk from a DevOps conference, and there are many takeaways in that area. But it is clearly applicable outside DevOps, and even outside software development. It’s everywhere!

  • The most robust software is tolerant, self-healing and forgiving. Things will fail for technical reasons (because physics), and they will have bugs. Predict them when possible and put in countermeasures to isolate failures and recover from them. Don’t assume your own omnipotence and don’t blame others. See also the Systems that Run Forever Self-heal and Scale talk by Joe Armstrong and have a look at the awesome Release It! book by Michael Nygard.
  • Make it easy for your software to do the right thing. Don’t randomly spread config over 10 different places in 3 different engines. Don’t require anyone to stand on two toes of their left foot in the right phase of the moon to do a deployment. Make it mostly run by itself and Just Work, with installation and configuration as straightforward as possible.
  • Make it hard to do the wrong thing. If you have a “kill switch” or “drop database” anywhere, put many guards around it (see the sketch below this list). Maybe it shouldn’t even be enabled in production? Maybe it should require a special piece of config, some secret key, something very hard to trigger accidentally? Don’t just put in a red button and blame the operator for pressing it. We’re all on the same team and ultimately our goal is to help our clients and users win.
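For instance, a guard around a destructive operation could look like the hypothetical sketch below. The property name and confirmation token are illustrative, not from any specific product; the point is that pressing the red button must take more than one accidental step.

    public class DangerZone {
        public static void dropDatabase(String confirmationToken) {
            // Guard 1: disabled unless explicitly enabled for this environment
            // (reads the hypothetical system property danger.dropDatabase.enabled).
            if (!Boolean.getBoolean("danger.dropDatabase.enabled")) {
                throw new IllegalStateException("dropDatabase is disabled in this environment");
            }
            // Guard 2: the caller must repeat an explicit, hard-to-fake confirmation.
            if (!"DROP-ALL-DATA".equals(confirmationToken)) {
                throw new IllegalArgumentException("confirmation token mismatch");
            }
            // ...only now perform the destructive action.
        }
    }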

The same principles apply to user interface design. Don’t randomly put in a bunch of forms and expect the users to do the right thing. If they have a workflow, learn it and tailor the solution that way. Make it harder for end users to make mistakes – separate opposite actions in the GUI, make the “negative” actions harder to execute.

Have a look at Gmail’s compose window. Note how the “Send” button is big and prominent, and how far away and small the trash icon is. There is no way you could accidentally press one when you meant the other.

Actually, isn’t all that true for all the products that we really like to use, and the most successful ones?

“Cooking for Geeks” by Jeff Potter; O’Reilly Media

And now for something completely different… cooking!

Cooking for Geeks

I’ve always been intrigued by “Cooking for Geeks”. I came across it several times, and when I finally saw it in the O’Reilly Blogger Reviews program I couldn’t resist.

This book is true to its title – it explains the principles of cooking in a slightly different way. It explains the basic tastes and some ideas of balance and composition. It talks a lot about food consistency and “doneness” in terms of chemical reactions between various components and in response to temperature over time. It shows how the basic principles of cooking and baking work from that perspective. Finally, it has some great points on hardware and food safety.

All this narrative is interleaved with recipes, placed right after the related material: sauces in the part on taste composition; pizza, bread and cakes in the chapters on baking, and so on. They are pretty exceptional – often discussing a few ways to approach a problem, or things to pay attention to, why they matter and what depends on them. Finally, there are some interviews with chefs, geeks and cooking enthusiasts.

The language is very light, though sometimes quite information-dense. I learned a ton from it – a lot of very basic stuff that every homegrown cook does intuitively without ever knowing why it works that way. Some things may have been too advanced or too abstract, though: at times I felt it was a bit too abstract for me, while at the same time it was probably too basic for experts. I wish there were more of the recipes, which really do a great job of explaining things on real examples.

All in all, I think it was worth the read. Entertaining and enlightening at the same time, even if not all content is immediately interesting to everyone.

The paper edition could be a better pick. It looks beautiful, and you could keep it handy and even make annotations in it.

DevDay 2012

On October 5, 2012 I attended the second edition of DevDay, a one-day conference sponsored and organized by ABB in Krakow.

The Awesome

Scott Hanselman was the first speaker, and also one of the main reasons why I went to the conference – even though I only knew him from This Developer’s Life.

His “Scaling Yourself” talk was probably the best productivity talk I’ve seen so far. Part of it was tricks and techniques I already knew, such as GTD and Pomodoro. Apart from that, my notes include:

  • The fewer things you do, the more of each you can do.
  • By doing something you are likely to get more of it. If you’re available on weekends and after hours, you will be expected to do it. If you’re available for calls about work on vacation, you will be called. Even if you do work at such times, set that email to go out at 9 AM on the next business morning.
  • Avoid “guilt systems” such as a long collection of recorded TV shows – or books, or unread articles. Or whatever it is that you collect and want to get to, but that eventually forms a big pile you only feel guilty about.
  • Sort your information streams by importance and limit usage. Some are obvious, like doing less Twitter or Facebook. Some are less obvious, like basic email filters. Combine that with techniques that help you root out distractions, such as Pomodoro or RescueTime.
  • Use information aggregators: blogs or sites that repost articles, mashups etc., instead of subscribing to 100 different blogs and news sites.
  • Do not multitask, except for things that go well together. For example, exercise while watching TV or listening to podcasts. What’s more, use activities you want to do to motivate the things you should do. For instance, watch TV only as long as you keep moving on the treadmill.
  • Plan your work: Find three things you want to do today, this week and this year, and do them. It helps you focus on goals. Hint: email and Twitter probably won’t be on the list.
  • Plan your work: Plan your own work sprints, execute, and finally perform retrospectives. This can be applied to work, but also to all kinds of personal activities.
  • Synchronize to paper. Don’t limit your space to one screen when you can print or write/draw on as many sheets of paper as you want. Also, paper notebooks can be used in different conditions and never run out of battery.

Finally, Scott is a great speaker. Lots and lots of content served in a perfect way, sauced with some pretty good jokes. Delicious.

That’s just a bunch of quick notes. Even if you think you’ve heard enough on the topic of productivity, go watch the presentation on Scott’s site. Now.

The Good

I enjoyed the “Why You Should Talk to Strangers” talk by Martin Mazur. Half of it was about social interactions: “us versus them” divisions, the difficulties one can face when approaching complete strangers in public, and the amount of new things you can learn from them if you break the ice. The rest was really about polyglot programming: learning Eiffel, Haskell or Ruby and applying some concepts and ideas back to C#. Largely unknown territory, but the talk still resonated really well with me.

Antek Piechnik gave a good talk on continuous delivery at an extreme scale: where each new team member pushes code to production during their first week, and each commit to master can trigger a deployment. I found the ideas on project organization pretty controversial, though: a totally flat team, a large pool of features, and everyone working on what they want, how they want. Sounds interesting, but it’s only possible if the team consists exclusively of experienced A++ programmers who also share the same vision, agree on tools and techniques, and so on.

Finally, I liked Greg Young’s talk on getting productive in a project in 24 hours. The talk was nothing like the subject, though – it was basically an introduction to code analysis: afferent/efferent coupling, test coverage and cyclomatic complexity, as well as data mining the VCS. Very clear and down-to-earth, discussing some tools and practical examples for each concept.

I particularly liked the points on code coverage. I knew that a method with 20 possible paths can have 100% line coverage from 2 tests and still be poorly tested. Greg made a good point explaining that coverage of a method gets weaker as the number of methods/collaborators between the test and that method grows, or when the methods in between have high cyclomatic complexity. In such cases it’s really accidental coverage that in no way guarantees the code is safe.
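A hypothetical illustration of the first point (my own, not Greg’s example): the two tests below together touch every line of the method, yet only two of its four paths ever run.

    class Discounts {
        // Two independent branches -> four possible paths through the method.
        int price(int base, boolean member, boolean sale) {
            int p = base;
            if (member) p = p * 9 / 10; // touched by test 1
            if (sale)   p = p * 8 / 10; // touched by test 2
            return p;
        }
    }

    class DiscountsTest {
        public static void main(String[] args) { // run with java -ea to enable asserts
            Discounts d = new Discounts();
            // These two tests together give 100% line coverage...
            assert d.price(100, true, false) == 90;
            assert d.price(100, false, true) == 80;
            // ...but the member+sale path and the no-discount path never ran:
            // accidental coverage that proves much less than the number suggests.
        }
    }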

Greg repeatedly stressed that all these concepts are only tools. They indicate interesting areas that may be trouble spots and may need attention. But one can never say that cyclomatic complexity X means good code and Y means bad code (and the same is true for all the other metrics).

The Rest

Rob Ashton’s talk on JavaScript was alright, but not spectacular. I did not learn much that was new. I know the language is here to stay, for better or worse. You can patch some gaps with jslint/jshint and others with CoffeeScript, but for how widespread the language is, the tooling is really patchy and barely existent.

I skipped Mark Rendle’s talk on Simple.Data/Simple.Web. Not my area.

I really did not like Sebastien Lambla’s talk on HTTP caching. Noise-to-signal ratio approaching infinity, filled with poor jokes and irrelevant comments. Little substance, discussed chaotically and patchily.

Wrapping Up

All in all, DevDay was a good conference. A really good selection of speakers at a free conference, with free lunch, coffee and snacks. No recruiters, no stands, just the participants and speakers. I wish it had at least two tracks and more focus – I’m not sure if it’s a generic developer conference or a .NET event (I went for the former, but felt the atmosphere of the latter).

Learning to Fail

Back at university, when I dealt with a lot of low-level problem solving and very basic libraries and constructs, I learned to pay attention to what can possibly go wrong. A lot. Implementing reliable, hang-proof communication over plain sockets? I remember it to this day: a trivial loop of “core logic” and a ton of guards around it.

Now I suspect I am not the only person who has got so used to all the convenient higher-level abstractions that they began to forget this approach. The thing is, real software is a little more complex, and the fact that our libraries deal with most low-level problems for us doesn’t mean there are no reasons to fail.

Software

As I’m reading “Release It!” by Michael T. Nygard, I keep nodding in agreement: Been there, done this, suffered that. I’ve just started, but it’s already shown quite a few interesting examples of failure and error handling.

Michael describes a spectacular outage of an airline system. Its experienced designers expected many kinds of failures and avoided many obvious issues. There was a nice layered architecture, with proper redundancy on every level: from clients and terminals, through servers, down to the database. All was well, yet during routine database maintenance the entire system just hung. It did not kill anyone, but delayed flights and serious financial losses have an impact too.

The root cause turned out to be one swallowed exception on the servers talking to the database, thrown by the JDBC driver when the virtual IP of the database server was remapped. If you don’t have proper handling for such situations, one such leak can lock up the entire server as all of its threads wait for a connection or for each other. Since there were no proper timeouts anywhere in the server or above it, eventually everything hung.
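In Java terms the lesson boils down to something like this sketch (my own illustration, not the airline’s code): never swallow the exception, and put a timeout on every blocking call so a sick database fails fast instead of parking threads forever.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class GuardedQuery {
        // Declares the exception instead of swallowing it: a layer that can
        // release the connection and report the failure decides what to do.
        public static int ping(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.setQueryTimeout(5); // seconds; fail fast instead of hanging a thread
                try (ResultSet rs = st.executeQuery("SELECT 1")) {
                    rs.next();
                    return rs.getInt(1);
                }
            }
            // Note what is absent: no catch (SQLException e) {} block that would
            // hide the error and leave the connection pool poisoned.
        }
    }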

Now it’s easy to say: It’s obvious, thou shalt not swallow exceptions, you moron, and walk on. Or is it?

The thing is, an unexpected or improperly handled error can always happen. In hardware. Or a third-party component. Or a core library of your programming language. Or even you or your colleague can screw up and fail to predict something. It. Just. Happens.

Real Life

Let’s take a look at two examples from real life.

Everyone gets in the car thinking: I’m an awesome driver, accidents happen but not to me. Yet somehow we are grateful for airbags, carefully designed crumple zones, and all kinds of automatic systems that prevent or mitigate the effects of accidents.

If you were offered two cars at the same cost, which would you choose? One is in pimp-my-ride style, with extremely comfortable seats, satellite TV, bright pink wheels and all sorts of unessential features. But it breaks down every so often depending on its mood or the moon cycle, and would certainly kill you if you hit a hedgehog. The other is just comfortable enough, completely boring, with no cool features to show off at all. But it will serve you for 500,000 kilometers without a single breakdown and save your life when you hit a tree. Obvious, right?

Another example. My brother-in-law happens to be a construction manager at a pretty big power plant. He recently took me on a trip and explained some basics on how it works, and one thing really struck me.

The power station consists of a dozen separate generation units and is designed to survive all kinds of failures. I was impressed, and still am, that in the power plant business it’s normal to say stuff like: if this block goes dark, this and this happens, that one takes over, whatever. No big deal. Let’s put it in perspective: a damn complicated piece of engineering that can detect any potentially dangerous condition, raise an alarm, shut down and fail over just like that – from small and trivial things like variations in pressure or temperature, up to conditions that could blow the whole thing up. And it is so reliable that when people talk about such conditions, rare and severe as they are, they say it in the same tone as “in case of rain the picnic will be held at Ms. Johnson’s”.

Software Again

In his “After the Disaster” post, Uncle Bob asked: “How many times per day do you put your life in the hands of an ‘if’ statement written by some twenty-two year old at three in the morning, while strung out on vodka and redbull?”

I wish it was a rhetorical question.

We are pressed hard to focus on adding shiny new features, as fast as possible. That’s what makes our bosses and their bosses shine, and what brings money to the table. And it’s not only them: even we, the developers, naturally take the most pride in all those features and find them the most exciting part of our work.

Remember that we’re here to serve. While pumping out features is fun, remember that people simply rely on you. Even if you don’t directly cause death or injury, your outages can still affect lives. Think more like a car or power station designer; your position is really closer to theirs than to a lone hippie building a little wobbly shack for himself.

When an outage happens and causes financial loss, you will be the one to blame. If that reasoning does not work, do it for yourself – pay attention now to avoid pain in the future, be it regular panic calls at 3 AM or your boss yelling at you.

More Stuff

Michael T. Nygard ends that airline example with very valuable advice. Obvious as it may seem, it feels different once you realize it and engrave it deep in your mind. Expect failure everywhere, and plan for it. Even if your tools handle some failures, they can’t do everything for you. Even if you have at least two of each thing (no single point of failure), you can still suffer from bad design. Be paranoid. Place crumple zones on every integration point with other systems, and even between different components of your system, in order to prevent cracks from propagating. Optimistic monoliths fail hard.
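One of the crumple zones the book describes is the Circuit Breaker. Here is a minimal sketch of the idea (my own reconstruction, not code from the book): after a few consecutive failures, stop calling the sick dependency for a while and fail fast instead.

    import java.util.concurrent.Callable;

    public class CircuitBreaker {
        private final int failureThreshold;
        private final long openMillis;
        private int consecutiveFailures = 0;
        private long openedAt = -1;

        public CircuitBreaker(int failureThreshold, long openMillis) {
            this.failureThreshold = failureThreshold;
            this.openMillis = openMillis;
        }

        public synchronized <T> T call(Callable<T> remoteCall) throws Exception {
            if (openedAt >= 0) {
                if (System.currentTimeMillis() - openedAt < openMillis) {
                    // Fail fast: don't park another thread on a sick dependency.
                    throw new IllegalStateException("circuit open - failing fast");
                }
                openedAt = -1; // half-open: let one trial call through
            }
            try {
                T result = remoteCall.call();
                consecutiveFailures = 0; // success closes the circuit again
                return result;
            } catch (Exception e) {
                if (++consecutiveFailures >= failureThreshold) {
                    openedAt = System.currentTimeMillis(); // trip the breaker
                }
                throw e;
            }
        }
    }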

Want something more concrete? Go read “Release It!”, it’s full of great and concrete examples. There’s a reason why it fits in a book and not in a blog post.

Culture Kills (or Wins)

This post is about one of the things that everyone is aware of to some degree. It feels familiar, but the picture becomes a lot sharper once you put it in a proper perspective.

The Project

There is an older project, created years ago – perhaps in 2000 or 2005. At the time it was running a single service on a single server for 100 users. The architecture and tools were adequate: one project, one interface, one process. Some shortcuts – that’s fine, you need to get something out to get the money in. Now-standard tools and techniques like TDD, IoC/DI, app servers etc. were nowhere to be seen – either because they were not needed at the time, or because they did not even exist in a reasonable form (Spring, Guice, JEE in 2000 or 2005?).

Five or ten years later, the load had gone up by a few orders of magnitude, and so had the number of features. The codebase kept growing and growing, with new features added all the time.

Now, let’s consider two situations.

Bad Becomes Worse

What if the architecture and process remained the same from the very beginning? A single project (now many megabytes of source code). A single process with core logic driven by eight-KLOC-long abominations. Many interesting twists related to “temporary” shortcuts from the past. No IoC. No DI. No TDD. Libraries from 2000 or 2005. No agile.

It could be for different reasons. Maybe the developers are poor, careless souls who have not improved themselves in all those years, and are not aware that different ways exist (or neglect them). Or maybe they know there are better ways, but are drowning in feature requests.

There is little turnover in the team. A few core members persist, able to navigate this sea of spaghetti. They are still able to pump out new features – slower and slower, but still. That’s probably the only thing keeping the project alive. They neglect change because the thing still kind of works. Why fix something that is not broken? Why spend money on improving something that’s working?

Now we have a fairly fossilized team and project. I dare say that in this shape it can only get worse. Even if the product itself were somewhat interesting and stable, would you like to change jobs to join the team and work on it? Dealing with tons of legacy code, in a culture that fears change, with no modern tools at your disposal? With no way to learn anything?

Right.

Very good developers usually already have a job and there is no way they would quit it for this. You will not be able to hire them unless you pay an insane amount of money. And even then, money is not as good a motivator as genuine passion and interest.

Who would do it? Only people with poor skills or little experience, the desperate, or those who don’t give a damn. They will take forever to get up to speed in this ever-growing mudball. And because they’re not top class, chances are the project won’t get any better.

We get a nasty negative feedback loop. Bad code and culture encourages more bad code. Mediocre new hires make it even worse. And so the spiral continues.

Good Gets Better

In the second scenario, the team has some caring craftsmen. They constantly read, learn, think, explore and experiment. They observe their product and process and improve both as they recognize more adequate tools and techniques. Somewhere along the path they broke the system down into modules and, instead of a monolithic mudball, got an extensible service-oriented architecture. They understood inversion of control and brought it in together with a DI container, refactoring the god classes before they grew out of control. They raised test coverage. In short, they constantly evaluate what they’re doing and how, figure out how to make it better, and put that into practice.

Now this team can hire pretty much anyone they like. They may decide to hire inexperienced people with the right attitude and train them. But they are also able to attract enthusiasts who are well above average and who will make things even better.

It creates a sweet positive feedback loop. Great culture never loses the edge and it attracts people who can only make it better.

Quality Matters

That’s why quality and refactoring matter. It’s OK to take a shortcut to get something out. It’s OK to use basic tools for a basic job. But if you never improve, the project will stagnate and rot away.

Sure, fulfilling business needs is important. Having few bugs is important. Avoiding damage (to people or the business) is important. But in the long run, if you just keep cranking out features and never retrospect or pay down technical debt, it will become a nasty, ever-slowing grind. If you’re lucky, it will just get slower and require some babysitting in production and emergency bug fixing. If you’re less lucky, it will become inoperable and completely unmaintainable when some of the persistent spaghetti wranglers leave or get hit by a truck.

Are We Doomed?

To end this sermon on a positive note: while the feedback loops are strong, they are not unbreakable. Culture change is hard in either direction, but possible. If the “bad” team or its management realizes the situation in time and starts improving, they may be able to shift to the positive loop. Introduce slack time or retrospectives, start discussions, and improve slowly but regularly. And if for whatever reason you abandon good practices – letting leaders go or drowning them up to the neck in work – the drift towards the negative loop will begin.

Software for Use

Here’s a confession of a full-time software developer: I hate most software. With passion.

Why I Hate Software

Software developers and the people around the process are often very self-centered and care more about having a good time than designing a useful product. They add a ton of cool but useless and buggy features. They create their own layers of frameworks and reinvent everything every time, because writing code is so much more fun than reading, reusing or improving it.

They don’t care about edge cases, bugs, rare conditions and so on. They don’t care about performance. They don’t care about usability. They don’t care about anything but themselves.

Examples? Firefox, which has to be killed with the task manager because it slows to a crawl during the day even on the most powerful hardware. Linux, which never really cared to (or managed to) solve the issues with drivers for end-user hardware. Google Maps, showing me tons of hotel and restaurant names instead of street names – the exact opposite of what I want when planning a trip. Eclipse or its plugins, which require me to kill the IDE from the task manager, waste some more time, and eventually wipe out the entire workspace, recreate it and reconfigure it.

All the applications with tons of forms, popups, dialogs and whatnot. Every error message that is a page long, with a stacktrace, a cryptic code and whatever other internal stuff in it. All the bugs and issues in open source software, which is made in free time for fun, rarely addressing edge cases or issues affecting a few percent of users, because those aren’t fun.

It’s common among developers to hate and misunderstand the user. It’s common even among helpdesk and support staff and other people who actually deal with end users. In Polish there is the wordplay “użyszkodnik”, a marriage of “użytkownik” (user) and “szkodnik” (pest).

What Software Really Is About

Let me tell you a secret.

The only purpose of software is to serve. We don’t live in a vacuum; we are always paid by someone who has a problem to solve. We are only paid for two reasons: to save someone money, or to let them earn more money. All that stakeholders and users care about is solving their problems.

I’ve spent quite a few years on one fairly large project that is critical to most operations of a corporation. They have a few thousand field workers and a few dozen managers above them, and only a handful of people responsible for the software powering all this. Important as it is, the development team is a tiny part of the entire company.

Whenever I design a form, a report, an email or anything else the end user will ever see, the first and most important thing to do is: get in their shoes. Understand what they really need and what problem they are trying to solve. See how we can provide it to them in a way that is as simple, concise, self-explanatory and usable as possible. Only then can we start thinking about the code and the entire backend, and even then the most important thing to keep in mind is the end user.

We’re not writing software for ourselves. Most of the time we’re not writing it for educated and exceptionally intelligent geeks either. We write it for housewives, grandmas, unqualified workers, accountants, ladies at bookshops or insurance companies, all kinds of business people.

We write it for people who don’t care about software at all and do not have a thorough understanding of it. Nor do they care how good a time you were having while creating it. They just want to get the job done.

You’re Doing It Wrong

If someone has to ask or even think about how something works, it’s your failure. If they perform some crazy ritual like rebooting the computer or a piece of software, or wiping out a work directory, that’s your fault. If they have to go through five dialogs for a job that could be done with two clicks, or are forced to switch between windows when there is a better way, it’s your failure. When they go fetch some coffee while a report they run 5 times a day is running, it’s your fault. If there is a sequence of actions or form entries that can blow everything up – a little “don’t touch this” red button – it’s your fault. Not the end user’s.

It’s not uncommon to see a sign in Polish offices that reads (sadly, literally): “Due to introduction of a computer system, our operations are much slower. We are sorry for the inconvenience.” Now, that’s a huge, epic failure.

Better Ways

That’s quite abstract, so let me bring up a few examples.

IKEA. I know furniture does not seem as complicated as software, but it’s not trivial either. It takes some effort to package a cabinet or a chest of drawers into a cardboard box so that it can be assembled by the end user. They could deliver you some wood and a picture of a cabinet, and blame you for not knowing how to turn one into the other. They could deliver a bunch of needlessly complicated parts without a manual, and blame the user again.

They know they need to sell and have returning customers, not just feel good themselves and blame others.

What they do instead is carefully design every single part and deliver a manual with large, clear pictures and not a single line of text. And it’s completely fool-proof and obvious, so that even such a carpentry ignoramus as you can assemble it.

LEGO. Some sets have thousands of pieces and are pretty complex. So complex that it would be extremely difficult even for you, a craftsman proficient in building stuff, to reproduce them.

Again, they could deliver 5,000 pieces and a single picture to you and put the blame on you for being unable to figure it out. Again, that’s not what they do. They want to sell and they want you to return. So they deliver a 200-page-long manual full of pictures, so detailed and fool-proof that even a child can follow it.

There are good examples in the software world as well. StackOverflow is nice, but only for a certain kind of user. It’s great for Internet geeks who get the concept of upvotes, gamification, focusing on tiny narrow questions rather than wider discussion, etc. Much less so for all kinds of scientists and, you know, regular people, who also seem to be part of the intended audience of StackExchange.

Google search and maps (for address search, intuitiveness and performance) and DuckDuckGo are pretty good. Wolfram Alpha. Skyscanner and Hipmunk. Much of the fool-proof Apple hardware and software.

In other words, when you know what it does and how to use it the first time you see it, and it Just Works, it’s great.

Conclusion

Successful startups know it. They want to sell, and if they make people think or overly complicate something, people will just walk on by. I guess many startups fail because they don’t realize it. Many established brands try to learn this from startups, simplifying and streamlining their UIs (Amazon, MS Office, eBay…). It’s high time we applied it to all kinds of software, including internal corporate stuff and open source.

After all, we’re only here to serve and solve problems of real people.

That’s the way you do it.

IDEs of the Future

I suspect that by now everyone has seen Bret Victor’s “Inventing on Principle” talk. If you haven’t, here it is:

Bret Victor – Inventing on Principle from CUSEC on Vimeo.

I found it great and inspiring, not only for the interactive features but also for the moral parts. I had seen some of Bret’s previous inventions around WorryDream.com. I wish I had been taught science like this. I try to teach like this, and this is definitely the way I am going to try to teach my children.

Back to the point, though. Cool as they were, Bret’s interactive examples made me ask myself whether it could work with something more complicated (or less contrived?). I was not quite sure.

Today Prismatic served me Chris Granger’s Light Table – a similar concept, even inspired by Bret, applied to an IDE. Take a look at this video:

The video tells a lot, but if you prefer text you can read more at Chris’ blog.

Now, this is something that probably could work. Maybe not exactly as shown here, maybe not quite so minimalistic, but it has some ideas that I would really like to see in my IDE today.

  • Showing and searching documentation this way seems absolutely possible. Eclipse does not do it for me yet, though.
  • I would really love to see this kind of view where you focus on a single chunk of code (a class, method or function) and can automatically see all the referenced bits (and only them) and navigate back and forth. I imagine a navigable graph that takes me between places, showing the contents of more than one file at a time, based not on files but on “chunks” like functions or classes. It does not seem very far off either, and could be as life-changing as multiple desktops or monitors (if not more).
  • Finally, the interactive debugging. It looks great, and I can see how it would work on a functional language like Clojure. It would be one hell of an effort to get it working for Java, though, with all the encapsulation and baroque ceremony.

All in all, very inspiring ideas. I really hope my IDE does some of those things one day. Keep up the good work, guys!

33rd Degree 2012

After the great success of the first edition of 33rd Degree, I did not really need anyone to convince me to participate this year. I can honestly say that I’ve been waiting for it for a year, and it definitely was worth it.

Day 1

The conference opened with three keynotes. The first speaker was Raffi Krikorian from Twitter, who spent a good deal of time explaining what Twitter really is: a gigantic event pump that consumes massive amounts of messages and broadcasts them as fast as it can. He also provided a very coarse-grained look at some pieces of their architecture, including heavy use of parallel scatter-gather algorithms even for fairly basic tasks like rendering a timeline. Another interesting bit was their own Finagle library for asynchronous networking, with all the fancy stuff: failure detection, failover/retry, service discovery, load balancing etc. Worth noting is that Raffi did not say a bad word about Rails, but only explained why it did not scale to their level.

The second keynote was Ken Sipe talking about the complexity of complexity. I am not sure who was first, but I’m under the impression that this talk was in many ways similar to Rich Hickey’s Simple Made Easy, except that it was a lot fluffier. Rich had a very clear point to make and he made it perfectly, with witty and merciless criticism of some flavors of agile as a little cherry on top.

The third keynote was Venkat Subramaniam on Facts and Fallacies of Everyday Software Development. Addressed to pointy-haired bosses (including the one inside each of us), it challenged some common opinions, like trusting “proven technologies” (read: fossils or large vendors), and advocated polyglot programming (he can’t help but do it even when he talks about “pure Java”). Hardly anything new, but Venkat’s beloved style still made it an entertaining talk.

After those keynotes we got smaller presentations in parallel tracks, with some hard decisions to make. I wish I had gone to Matthew McCullough’s git workshop. Instead I wasted some time on the JFrog presentation by Frederic Simon (I had expected some interesting stuff on continuous integration and delivery). Sadek Drobi’s talk on Play and non-blocking IO was quite OK, but I did not really buy the product. It may be because I’m not into Play, but I found the solution somewhat overengineered and much less appealing than Finagle.

The last technical talk of the day for me was Wojciech Seliga’s piece on testing around JIRA. Those guys are quality nuts, but I can honestly say I would feel really safe and comfortable in that environment. He explained all the kinds of tests that they have: regular “boring” unit tests, integration tests (including ones talking to other Atlassian products), functional tests and platform tests. Even though most of the functional tests run against an API (and only some on Selenium), testing the whole system takes many hours on a 70-worker CI cluster, and they still do it on each and every commit. That is mostly because of the platform and performance tests, where they meticulously test against all kinds of platforms: combinations of OS, architecture, DB and whatnot.

Wojciech openly admitted that sometimes it takes them days to make a build green, but they are still very rigorous about it. He also shared some good tricks. Tests are very neatly organized into categories, and you can drill down to see what exactly is failing and why. Failing tests go into quarantine, so that developers cannot shrug a red build off as a known false negative while other issues are actually making it fail. Once something goes into quarantine, it’s either fixed or eradicated; there is zero tolerance for failure.

Day 2

I started the second day with Barry O’Reilly’s talk on agile and lean startups. I expected much fluff, but I actually learned a few interesting things. We all know the agile/lean cycle of (roughly speaking) iteratively inventing, creating, testing and evaluating, and we got to hear about it here as well. What was interesting and new to me, though, was how much you can do in the market research field. Two good examples. First, create very detailed portraits of your potential users, to the point where you can actually imagine each person as if you had walked in their shoes – feel their needs, think about how they may want to use your product – eventually leading to detailed sketches and scenarios. Consider several distinct portraits and see what functionality and user experience emerges. The second idea, which sounds crazy to my introverted self, is actually going to the street, cafes etc. and asking strangers about their needs and opinions. Reportedly they love it (“Building a new Facebook? I wanna be a part of it!”).

Then I listened to Matthew McCullough’s presentation on economic games in software (and life, actually). Perfectly prepared, rich and fluent – easily one of the best speakers I have seen in action (even when he’s coding, like last year on git and Hadoop). Matthew touched on a few topics. Money is not the only (and not the most important) part of the equation: it may be better to live in a nice place with your family and friends around you than to be paid $300/hour and work at the South Pole. It goes the other way round as well: if you hire people, make sure you motivate them in other ways – create a great environment, listen to their needs etc. Speaking of money, he also explained some tricks that are used on us every day – such as shaping a restaurant menu or a software license palette so that it includes some ridiculously overpriced items that no one would buy, only to make you buy other overpriced (but cheaper) products.

Enough soft talks – the next one was Venkat Subramaniam talking about concurrency, reportedly in pure Java. He started off with a seemingly trivial synchronization issue that is very hard to get right in Java and easily generates a ton of boilerplate. Then he went on to resolve it in a few lines of Clojure (surprise, surprise), explained the concept of STM, and went back to Java, resolving the original problem using this little concurrency library (yes, Clojure in Java). Very witty and entertaining, and even though I know some Clojure, I wasn’t aware how easy it is to get it to work for you in Java. It really can be used as just an accessible concurrency library!
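For the curious, the trick probably looks something like the sketch below – my own reconstruction of the idea, not Venkat’s actual demo code. Clojure’s Ref and LockingTransaction classes live on the regular Clojure jar and are callable from plain Java; conflicting transactions simply retry.

    import clojure.lang.LockingTransaction;
    import clojure.lang.Ref;
    import java.util.concurrent.Callable;

    public class StmTransfer {
        public static void main(String[] args) throws Exception {
            final Ref from = new Ref(100);
            final Ref to = new Ref(0);

            // All coordinated changes happen atomically inside a transaction;
            // no explicit locks, no lock-ordering bugs, no boilerplate.
            LockingTransaction.runInTransaction(new Callable<Object>() {
                public Object call() {
                    int amount = 40;
                    from.set((Integer) from.deref() - amount);
                    to.set((Integer) to.deref() + amount);
                    return null;
                }
            });

            System.out.println(from.deref() + " / " + to.deref()); // 60 / 40
        }
    }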

Next on my schedule was Sławek Sobótka’s talk on basic software engineering patterns. I enjoy most of the work he publishes on his blog, as well as his live presentations. This was no exception, even if most of the stuff wasn’t new to me. In short, it was a gentle introduction to some ideas behind DDD and modeling: from basic building blocks (value objects, entities, aggregates) up to operations (services) and customization (policies), all neatly organized into clear layers and levels. Professional, clear, fluent and to the point.
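To make two of those building blocks concrete, here is a tiny sketch of my own (not from the talk): a value object is immutable and compared by value, while an entity is compared by identity and may change state.

    import java.util.Objects;

    // Value object: immutable, equality by value.
    final class Money {
        final long cents;
        final String currency;
        Money(long cents, String currency) { this.cents = cents; this.currency = currency; }
        Money add(Money other) {
            if (!currency.equals(other.currency)) throw new IllegalArgumentException("currency mismatch");
            return new Money(cents + other.cents, currency); // returns a new value, never mutates
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Money)) return false;
            Money m = (Money) o;
            return m.cents == cents && m.currency.equals(currency);
        }
        @Override public int hashCode() { return Objects.hash(cents, currency); }
    }

    // Entity: identity survives state changes, equality by id.
    class Invoice {
        final long id;
        Money total; // mutable state; the id is what makes it "the same" invoice
        Invoice(long id, Money total) { this.id = id; this.total = total; }
        @Override public boolean equals(Object o) {
            return (o instanceof Invoice) && ((Invoice) o).id == id;
        }
        @Override public int hashCode() { return (int) (id ^ (id >>> 32)); }
    }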

Then I saw Ken Sipe’s presentation on web security, which was basically a run through the OWASP Top Ten threats. Unlike the keynote, this one was little fluff, much stuff, and it taught me something about common security issues. I knew most of the threats, but had never really gone through the OWASP list myself. A very good and thorough presentation, complemented by personal stories and interesting digressions.

Venkat Subramaniam showing Scala in a terribly overcrowded room was the last thing on my agenda for the day. I’ve never done anything in this language, and I found this presentation a very gentle and interesting introduction. It looks like something between the powerful but chaotic Groovy (which feels more like a scripting language to me) and the old and baroque Java. I could actually give it a try, though I have yet to think of an applicable project.

Day 3

I kicked off the last day with Jacek Laskowski’s talk on Clojure. It clearly was not possible to show all the interesting corners of the language, but I think he did a good job of explaining the very basics as well as some concurrency concepts. It could have included more details or examples of why we would want to use those concurrency constructs, but it still managed to spark some interest (and not scare people off), and that alone is a big win.

Then I listened to Bartosz Majsak talking about Arquillian – a JBoss library for running higher-level tests using mock classes in a managed container (demonstrated with Selenium-driven acceptance tests). Quite interesting – I may give it a try when I get to play with JEE webapps.

The last strictly technical talk of the day was Nate Schutta on HTML5. Nate asked the audience which parts of the stack they cared about, and after the introduction went over what we chose: canvas, client-side data storage and worker threads. A very good, fluent talk, explaining the foundations and history behind HTML5, complemented by practical examples.

Finally we got three keynotes. The first one was Nate Schutta on code craft. Just as you may expect from a keynote, it was a mile-wide and an inch-deep talk about code and work quality. Keep it simple, keep it short, write tests, don’t reinvent the wheel, get and respect feedback… you know it. Professional and interesting, but immediately forgettable. Things to take away: PMD and Clover – run them regularly; radiate information and encourage others to participate in improvement. Culture kills.

The second keynote was Jurgen Appelo telling us how to change the world. It was very well prepared, starting with a witty and masochistic self-introduction, and then went on to explain a 4-tier model of influencing people:

  • Work iteratively and reflect on your progress, for example using the classic Plan-Do-Check-Act approach.
  • Understand and drive individuals. Make them aware of the idea or product. Make them desire it by making it challenging, interesting or otherwise appealing. Finally, pass on knowledge and abilities.
  • Affect social interactions and culture. Understand that each successful adoption has several stages: the innovators and early adopters, who are personally interested in getting your product or idea (crazy people who are ready to drive around the city looking for the last iPad); the majority, or skeptics, whose needs are largely different (more interested in the ability to watch a movie on the train than in having the latest and coolest hardware); and finally the laggards, who can undo your efforts if you stop early.
  • Shape the environment to reinforce all of the above: Radiate information, create a strong cultural identity, use incentives, infrastructure and institutions that encourage or guard the right direction.

There was much more to it than this shallow summary, and there is a reason why it fits in a 19-page booklet (downloadable).

The last keynote was Uncle Bob Martin demanding professionalism. Let me sum it up. He started off with a weird off-topic discussion of why people see more colors than dogs and fewer than bees. Then came a good part: creating software resembles engineering, but not the way we popularly believe. “Regular” engineers and architects create very detailed documents, and their work is relatively cheap compared to the manufacturing and execution. It’s often said that programmers are at the “manufacturing” stage, while actually everything they do is engineering: they produce very detailed documents (source code), and that’s a huge effort compared to execution (compilation and running). So the cost model is inverted.

Then Uncle Bob slowly got to the point. Our computers are many orders of magnitude more powerful than those of the distant past, yet we use very similar constructs. Execution on modern hardware is practically instant and free. Uncle Bob asked where this got us, and someone replied “Angry Birds”. Up to this point I very much agree: it’s extremely sad how we waste all that power on bloated frameworks or worthless entertainment. But the point Uncle Bob tried to make was quite different: since hardware and execution are so cheap, we need to keep the model inverted by making execution as fast as possible, and that is professionalism. Write tests, don’t test through the GUI, avoid the database.

Let me repeat: hardware is blazingly fast, so our primary concern should be to keep the build fast, so that most of the time is spent on coding, not on compiling and deploying. Am I the only one who finds that self-contradictory? The issue is not that we create useless junk. It’s not that we miss business requirements. It’s not that we suck at estimation. It’s to keep the cost model inverted. Actually, when Uncle Bob encouraged the audience to ask questions, someone asked: “How can we estimate better?” At this point he laughed and left the stage.

One thing worth taking away: good architecture is what makes decisions about infrastructure deferrable to the last stages of the project. Keep expanding the core, mocking where necessary, and think of the database or frameworks last.
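In practice that deferral can be as simple as the hypothetical sketch below (my own illustration, not Uncle Bob’s code): the core logic depends on an interface, tests run against an in-memory fake, and the real database adapter can arrive at the very end.

    import java.util.HashMap;
    import java.util.Map;

    // The core depends on an abstraction, not on a database.
    interface OrderRepository {
        void save(String id, int total);
        Integer findTotal(String id);
    }

    // Test double: milliseconds instead of minutes, no schema, no JDBC.
    class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Integer> store = new HashMap<String, Integer>();
        public void save(String id, int total) { store.put(id, total); }
        public Integer findTotal(String id) { return store.get(id); }
    }

    class OrderService {
        private final OrderRepository repo;
        OrderService(OrderRepository repo) { this.repo = repo; }
        void placeOrder(String id, int total) {
            if (total <= 0) throw new IllegalArgumentException("total must be positive");
            repo.save(id, total);
        }
    }

    class OrderServiceTest {
        public static void main(String[] args) { // run with java -ea to enable asserts
            OrderRepository repo = new InMemoryOrderRepository();
            new OrderService(repo).placeOrder("order-1", 120);
            assert repo.findTotal("order-1") == 120; // fast feedback on the core rule
        }
    }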

Final Word

All in all, I would say this edition was as good as the last one, if not better. I paid more attention to technical talks, and some of them paid off. One thing that was definitely worse: it was terribly overcrowded! It was next to impossible to walk the halls or even the large exhibition area, not to mention lunch or the bathrooms. I don’t think making the conference larger in the same venue was a good idea.

Two Stories of Ignaz Semmelweis

Here are two versions of the story of Ignaz Semmelweis. You may have heard it, as it’s quite popular in the programming and craftsmanship community. I heard the two versions about a year apart, and together they’re even more interesting.

First Story

Ignaz Semmelweis was a Hungarian physician. He believed that by washing their hands consistently and well enough, doctors can significantly reduce the infection and mortality rate among mothers and their newborn children.

Unfortunately, those findings were in conflict with what most doctors believed at the time. They were prejudiced and had their old, established point of view and would not listen to the poor man.

Semmelweis did not manage to convince his peers of his findings. He died a miserable death in an asylum.

Second Story

Ignaz Semmelweis was a Hungarian physician. He believed that by washing their hands consistently and well enough, doctors can significantly reduce the infection and mortality rate among mothers and their newborn children.

Unfortunately, in the process of achieving his superior results, he managed to alienate or offend his entire profession. Things eventually got so bad that his colleagues started to deliberately avoid washing their hands just to defy him.

Semmelweis was a genius but he was also a lunatic, and that made him a failed genius. He died a miserable death in an asylum.

Which is True?

I heard the first story at last year’s 33rd Degree. The way my inner wannabe-craftsman understood it at the time was: a poor guy ahead of his time was tormented to death by his prejudiced, petrified community. He was right, and his stubborn peers made his life miserable.

I read the second version in Apprenticeship Patterns by Dave Hoover and Adewale Oshineye. Here the lesson is completely different. There is no way you can get your point across if you alienate or offend your peers. You have to be very patient and careful, especially if your ideas do not go hand in hand with what the community believes at the time.

Even if your peers are willing to listen, and you have convincing evidence or arguments, you can never play the one and only enlightened man in the world. It goes the other way round, too. Learn to listen, even if some ideas don’t exactly go hand in hand with your point of view. See if there is any value you can take from them. And even if you disagree, have some respect and don’t play the “we know better, be gone ye stupid lunatic” role.

I suspect both versions of the story are true. People are full of judgement and prejudice. Such is human nature. It’s rarely a wall that you can take down with a ram. More often you find yourself sitting on a giant elephant and trying to convince it to go in a specific direction. The only way you can succeed is to prepare the way – cut a path through the jungle and point the elephant into the opening.

This elephant rider metaphor comes from an awesome talk by Venkat Subramaniam. Heck, I would say his presence alone makes 33rd Degree 2012 a must-see.

Apprenticeship Patterns

Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye had been lying on my bookshelf for quite a while. I expected a largely repetitive, buzz-driven soft read. It turned out to be quite a surprise.

Yes, Apprenticeship Patterns is a book about software craftsmanship. However, it’s very far from what wannabe-craftsmen say: “We’re all artists, why don’t people admire us the way we deserve.” Actually, it’s the exact opposite.

Learning is a very long road. Accept your ignorance as you begin; seek sources to learn from, kindred spirits, mentors. Get your hands dirty and learn by practice. Work with people, code breakable toys, read constantly, and practice, practice, practice. It is your responsibility alone to learn and improve your skills, to diversify and deepen your knowledge. Don’t repeat the buzz; get your hands dirty and get to work. Finally, share what you have learned with those behind you on the path, and create communities where people can motivate each other and learn together. And don’t lose your motivation and goals along the road.

Now, the previous paragraph does look pretty fluffy. That’s just a bird’s-eye view, though, and those tend to lack detail. The book itself is very down-to-earth, “dirty” and concrete. It’s an inspiring collection of thoughts, ideas and tricks on self-improvement.

I’ve seen quite a few talks and read a few articles on craftsmanship, and none of them were anywhere near as concrete and complete as this book. It doesn’t only apply to software; in fact, I believe it largely applies to any other area of life that involves learning.

Highly recommended.