Monthly Archives: December 2011

Apprenticeship Patterns

Apprenticeship Patterns: Guidance for the Aspiring Software Craftsman by Dave Hoover and Adewale Oshineye had been lying on my bookshelf for quite a while. I expected a largely repetitive, buzzword-driven soft read. It turned out to be quite a surprise.

Yes, Apprenticeship Patterns is a book about software craftsmanship. However, it’s very far from the wannabe-craftsman’s lament: “We’re all artists, why don’t people admire us the way we deserve?” Actually, it’s the exact opposite.

Learning is a very long road. Accept your ignorance as you begin, and seek sources to learn from: kindred spirits, mentors. Get your hands dirty and learn by practice. Work with people, code breakable toys, read constantly, and practice, practice, practice. It is your responsibility alone to learn and improve your skills, to diversify and deepen your knowledge. Don’t just repeat the buzz; roll up your sleeves and get to work. Finally, share what you have learned with those behind you on the path, and create communities where people can motivate each other and learn together. And don’t lose your motivation and goals along the way.

Now, the previous paragraph does look pretty fluffy. That’s just a bird’s-eye view, though, and those tend to lack detail. The book itself is very down-to-earth, “dirty” and concrete. It’s an inspiring collection of thoughts, ideas and tricks on self-improvement.

I’ve seen quite a few talks and read a few articles on craftsmanship, and none of them came anywhere near this book in concreteness and completeness. It doesn’t apply only to software, either. In fact, I believe it largely applies to any area of life that involves learning.

Highly recommended.

Careful with that PrintWriter

Let’s assume we have a nice abstraction over email messages, simply called Email: something that can write itself to a Writer and provides some other functionality (e.g. operating on the inbox).

Now, take this piece of code:

public void saveEmail(Email email) {
	FileOutputStream fos = null;
	try {
		fos = new FileOutputStream("file.txt");
		PrintWriter pw = new PrintWriter(fos);
		email.writeContentTo(pw);
		pw.flush();
		email.deleteMessageFromInbox();
	} catch (IOException e) {
		log.error("Saving email", e);
	} finally {
		IOUtils.closeQuietly(fos);
	}
}

Question: What happens if for whatever reason the PrintWriter cannot write to disk? Say, your drive just filled up and you can’t squeeze a byte in there?

To my surprise, nothing got logged, and the messages were removed from the inbox.

Unfortunately, I learned this the hard way. The code above is only a piece of a larger subsystem, and it took me quite a while to figure out what was going on. Used to dealing mostly with streams, I skipped this area and checked everything else first. If PrintWriter behaved like a stream, it would have thrown an exception and deleteMessageFromInbox would never have been reached.

All the streams around files work like that, so why would this one be different? Well, PrintWriter is completely different. Eventually I checked its Javadoc and saw this:

“Methods in this class never throw I/O exceptions, although some of its constructors may. The client may inquire as to whether any errors have occurred by invoking checkError().”

At first I was furious to discover that it’s all due to yet another inconsistency in core Java. Now I’m only puzzled and a bit angry. Most of the core library takes the opposite approach: it throws checked exceptions on every occasion and forces you to deal with them explicitly. It’s not unusual to see 3 or 4 of them in a single method signature.

One could say it’s all right there in the Javadoc. Correct. Still, it is confusing to have two fundamentally different approaches to error handling in the same corner of the library. If you get used to dealing with one of them, you naturally expect the same from the other. Instead, it turns out you always have to look carefully at signatures and docs, because even in core Java there is no single convention.
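
The practical consequence: with PrintWriter you have to ask about errors explicitly, via checkError(). A minimal sketch of a fixed version, keeping the hypothetical Email, log and IOUtils from the snippet above:

public void saveEmail(Email email) {
	PrintWriter pw = null;
	try {
		pw = new PrintWriter(new FileOutputStream("file.txt"));
		email.writeContentTo(pw);
		// checkError() flushes the writer and reports whether any write
		// so far has failed; this flag is the only error signal we get.
		if (pw.checkError()) {
			log.error("Saving email failed, leaving message in inbox");
			return;
		}
		email.deleteMessageFromInbox();
	} catch (IOException e) {
		log.error("Saving email", e);
	} finally {
		IOUtils.closeQuietly(pw);
	}
}

Closing the writer also closes the underlying stream, so closeQuietly(pw) is enough here.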

Related: try/catch/throw is an antipattern – an interesting read, even if a bit too zealous.

TDD on Existing Code

I recently read two blog posts about writing tests for existing (often “legacy”) code. Jacek Laskowski describes (in Polish) the moment when he realized that TDD literally means development driven by tests, starting with tests, and that as such it cannot be applied to an existing codebase.

I can think of several cases where you can do TDD on existing, even not-so-pretty, legacy code.

Introducing Changes

When you’re about to add a feature to existing code, start with a test for that single feature. When you see a bug report, write a test that reproduces it. Sure, it probably won’t cover much more than that one feature or bug, but it can drive your development and guard against regressions in the future.
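
For instance, a bug report like “an empty subject breaks parsing” could be pinned down with a test before any fix is made. A sketch in JUnit; EmailParser and getSubject() are made-up names, just to show the shape:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class EmailParserTest {

	// Written straight from the bug report, before touching the code:
	// it fails now, drives the fix, and guards against regressions later.
	@Test
	public void shouldParseEmailWithEmptySubject() {
		Email email = new EmailParser().parse("From: a@b.com\nSubject:\n\nHello");
		assertEquals("", email.getSubject());
	}
}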

Understanding / Documenting Code

When you learn a library, you can try spikes in the form of unit tests. That can be much more thorough and beneficial in the long run than a “breakable toy” that you throw away after use. You can keep such tests as documentation or ready-to-use examples, and they may even protect you against bugs or incompatibilities introduced in new versions of the library.
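
For example, a spike on core Java’s SimpleDateFormat could be kept as a test like this (just a sketch, though the lenient-parsing behavior it pins down is real):

import java.text.SimpleDateFormat;
import java.util.Date;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class SimpleDateFormatLearningTest {

	// An executable note from a spike: it documents what we learned about
	// lenient parsing and will fail if a future version changes it.
	@Test
	public void lenientParsingRollsOverOutOfRangeMonths() throws Exception {
		SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd");
		// Month 13 silently rolls over to January of the next year.
		Date parsed = format.parse("2011-13-01");
		assertEquals(format.parse("2012-01-01"), parsed);
	}
}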

You can also try the same trick on legacy code. As Tomek Kaczanowski points out, those tests will often be high-level, integration or end-to-end tests. That’s better than nothing and can be a good starting point for refactoring.

Is that TDD?

One could say that this is not test-driven development. I would argue that the whole point of TDD is not a fanatical red-green-blue cycle. It is introducing small, fast, focused (as much as possible…), automated tests that become a “live” specification and documentation, and protect you from regressions.

Yes, there is a shift in focus. There is no “red” phase. Moreover, in a way you write tests after the code, even though the benefits left after the process are the same.

I’ve spent a few years on a fairly large project full of legacy code. And I mean legacy, sometimes in the facepalm way.

Whenever I start a piece of work, be it a bug or a feature, I try to think about tests first. If it’s a rusty legacy area, I may spend a while just understanding the codebase. As I do, I leave tests for the existing code behind as breadcrumbs. Very often they reveal weaknesses and beg for refactoring. Sometimes I do the refactoring in place (if it’s very important, or easy); other times I leave it for the future.

Sometimes I do know the area I need to deal with, but it is pretty convoluted. Again, I may spend quite a while trying to write the first test for my new piece of work. But as I design this test, I carefully lay out and understand all the collaborators and think about the flow. Think of it: designing a test alone can help you understand the codebase and the task at hand in much more depth, and feel much safer about what you’re trying to do.
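
As an illustration, such a first test might look like the sketch below. All the names are hypothetical, with Mockito standing in for the collaborators:

import org.junit.Test;
import static org.mockito.Mockito.*;

public class InvoiceServiceTest {

	// Just wiring this test up forces us to name every collaborator and
	// understand the flow before changing anything. InvoiceService,
	// TaxCalculator and InvoiceRepository are made-up names.
	@Test
	public void shouldStoreInvoiceWithCalculatedTax() {
		TaxCalculator calculator = mock(TaxCalculator.class);
		InvoiceRepository repository = mock(InvoiceRepository.class);
		when(calculator.taxFor(100.0)).thenReturn(23.0);

		new InvoiceService(calculator, repository).issueInvoice(100.0);

		verify(repository).save(any(Invoice.class));
	}
}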

Once the first test is in, new tests are usually much easier and we’re back in the fancy red-green-blue groove.

There is much more depth to it. TDD is not limited to designing new code in greenfield projects. Tests can also help you understand all the dependencies and conditions in an existing environment. In other words: think carefully before hacking.

I would say it’s just as beneficial there as on a greenfield project, if not more so.