Tag Archives: Testing

Validating Class/Package Dependencies with Classycle

Classycle is a very nice analyzer and dependency checker for class and package dependencies.

It lets you define package groups (components, layers) and express unwanted dependencies such as cycles, or dependencies between particular packages. For example, you can specify that you want no package cycles and no dependencies from com.foo.domain.* on com.foo.api.* – all in a very human-friendly, concise format.
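
For example, a definition file for the rules just mentioned could look more or less like this (a sketch following Classycle’s dependency definition syntax; the set names and packages are made up):

show allResults

[domain] = com.foo.domain.*
[api] = com.foo.api.*

check absenceOfPackageCycles > 1 in com.foo.*
check [domain] independentOf [api]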

Then you kick off the analyzer (it comes with an Ant task and a standalone command line tool) and it produces a report with violations.

There are a number of other tools out there: JDepend, Sonar, JArchitect and so on. So why Classycle?

  • It’s free (BSD license).
  • It’s fast.
  • It’s powerful and expressive. The rules take just a few lines of easily readable text.
  • It integrates with build tools very well. We have it running as part of the build script, for every build. It’s really just another automated test. Thanks to that the project structure is probably the cleanest one I’ve worked with so far.

Gradle Plugin

Thanks to having an Ant task Classycle is very easy to integrate with Gradle, with one caveat: The official build is not in Maven Central, and the only build that is there does not include the Ant task.

Gradle itself uses Classycle via a script plugin, buried somewhere in its project structure. They published Classycle on their own repository, but it’s an older version that doesn’t support Java 8.

Inspired by that, we wrote our own plugin and made it available for everyone with minimum effort. It’s available on the Gradle Plugin Portal and on GitHub.

In order to use it, all you need to do is:

  • Add the plugin to your project:
    plugins { id "pl.squirrel.classycle" version "1.1" }
    
  • Create a Classycle definition file for each source set you want covered, at src/test/resources/classycle-${sourceSet.name}.txt:

    show allResults
    
    {package} = com.example
    check absenceOfPackageCycles > 1 in ${package}.*
    
  • Congratulations, that’s all it takes to integrate Classycle with your Gradle build! Now you have the following tasks:
    # For each source set that has the dependency definition file:
    classycleMain, classycleTest, ... 
    
    # Analyze all source sets in one hit:
    classycle
    
    # Also part of the check task:
    check
    

See the Plugin Portal and GitHub for more information. Happy validating!

Direct Server HTTP Calls in Protractor

When you’re running end-to-end tests, chances are that sometimes you need to set up the system before running the actual test code. It can involve cleaning up after previous executions, going through some data setup “wizard” or just calling the raw server API directly. Here’s how you can do it with Protractor.

Protractor is a slick piece of technology that makes end-to-end testing pretty enjoyable. It wires together Node, Selenium (via WebDriverJS) and Jasmine, and on top of that it provides some very useful extensions for testing Angular apps and improving areas where Selenium and Jasmine are lacking.

To make this concrete, let’s say that we want to execute two calls to the server before interacting with the application. One of them removes everything from the database, the other kicks off a procedure that fills it with some well-known initial state. Let’s write some naive code for it.

Using an HTTP Client

var request = require('request');

describe("Sample test", function() {
    beforeEach(function() {
        var jar = request.jar();
        var req = request.defaults({
            jar : jar
        });

        function post(url, params) {
            console.log("Calling", url);
            req.post(browser.baseUrl + url, params, function(error, message) {
                console.log("Done call to", url);
            });
        }

        function purge() {
            post('api/v1/setup/purge', {
                qs : {
                    key : browser.params.purgeSecret
                }
            });
        }

        function setupCommon() {
            post('api/v1/setup/test');
        }
        
        purge();
        setupCommon();
    });

    it("should do something", function() {
        expect(2).toEqual(2);
    });
});

Since we’re running on Node, we can (and will) use its libraries in our tests. Here I’m using request, a popular HTTP client with the right level of abstraction, built-in support for cookies etc. I don’t need cookies for this test case – but in real life you often do (e.g. log in as some admin user to interact with the API), so I left that in.

What we want to achieve is running the “purge” call first, then the data setup, then move on to the actual test case. However, in this shape it doesn’t work. When I run the tests, I get:

Starting selenium standalone server...
Selenium standalone server started at http://192.168.15.120:58033/wd/hub
Calling api/v1/setup/purge
Calling api/v1/setup/test
.

Finished in 0.063 seconds
1 test, 1 assertion, 0 failures

Done call to api/v1/setup/purge
Done call to api/v1/setup/test
Shutting down selenium standalone server.

It’s all wrong! First it starts the “purge”, then it starts the data setup without waiting for purge to complete, then it runs the test (the little dot in the middle), and the server calls finish some time later.

Making It Sequential

Well, that one was easy – the HTTP client is asynchronous, so that was to be expected. That’s nothing new, and finding a useful synchronous HTTP client for Node isn’t that easy. We don’t need one anyway.

One way to make this sequential is to use callbacks: call purge, then the data setup in its callback, then the actual test code in its callback. Luckily, we don’t need to descend into callback hell either.

The answer is promises. WebDriverJS has nice built-in support for promises. It also has the concept of control flows. The idea is that you can register functions that return promises on the control flow, and the driver will take care of chaining them together.

Finally, on top of that Protractor bridges the gap to Jasmine. It patches the assertions to “understand” promises and plugs them into the control flow.

Here’s how we can improve our code:

var request = require('request');

describe("Sample test", function() {
    beforeEach(function() {
        var jar = request.jar();
        var req = request.defaults({
            jar : jar
        });
        
        function post(url, params) {
            var defer = protractor.promise.defer();
            console.log("Calling", url);
            req.post(browser.baseUrl + url, params, function(error, message) {
                console.log("Done call to", url);
                if (error || message.statusCode >= 400) {
                    defer.reject({
                        error : error,
                        message : message
                    });
                } else {
                    defer.fulfill(message);
                }
            });
            return defer.promise;
        }

        function purge() {
            return post('api/v1/setup/purge', {
                qs : {
                    key : browser.params.purgeSecret
                }
            });
        }

        function setupCommon() {
            return post('api/v1/setup/test');
        }

        var flow = protractor.promise.controlFlow();
        flow.execute(purge);
        flow.execute(setupCommon);
    });

    it("should do something", function() {
        expect(2).toEqual(2);
    });
});

Now the post function is a bit more complicated. First it initializes a deferred object. Then it kicks off the request to the server, providing a callback that fulfills or rejects the promise on the deferred. Eventually it returns the promise. Note that purge and setupCommon now return promises.

Finally, instead of calling those functions directly, we get access to the control flow and push those two promise-returning functions onto it.

When executed, it prints:

Starting selenium standalone server...
Selenium standalone server started at http://192.168.15.120:53491/wd/hub
Calling api/v1/setup/purge
Done call to api/v1/setup/purge
Calling api/v1/setup/test
Done call to api/v1/setup/test
.

Finished in 1.018 seconds
1 test, 1 assertion, 0 failures

Shutting down selenium standalone server.

Ta-da! Purge, then setup, then run the test (again, that little lonely dot).

One more thing worth noting here is that the control flow not only takes care of executing the promises in sequence, but also understands them well enough to fail the test as soon as any of the promises is rejected. Once again, something that would be quite messy to achieve with callbacks.

In real life you would put that HTTP client wrapper in a separate module and just use it wherever you need it. Let’s leave that as an exercise.

Testing Spring & Hibernate Without XML

I’m very keen on the improvements in Spring 3 that eventually let you move away from XML into plain Java configuration, with proper support from the IDE and compiler. It doesn’t change the fact that Spring is a huge suite, and sometimes finding the thing you need can take a while.

XML-free unit tests around Hibernate are one such thing. I knew it was possible, but it took me more than 5 minutes to find all the pieces, so here I am writing it down.

I am going to initialize all my beans in a @Configuration class like this:

@Configuration
@EnableTransactionManagement
public class TestRepositoryConfig {
	@Bean
	public DataSource dataSource() {
		return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.H2)
				.setName("Nuts").build();
	}

	@Bean
	public LocalSessionFactoryBean sessionFactoryBean() {
		LocalSessionFactoryBean result = new LocalSessionFactoryBean();
		result.setDataSource(dataSource());
		result.setPackagesToScan(new String[] { "pl.squirrel.testnoxml.entity" });

		Properties properties = new Properties();
		properties.setProperty("hibernate.hbm2ddl.auto", "create-drop");
		result.setHibernateProperties(properties);
		return result;
	}

	@Bean
	public SessionFactory sessionFactory() {
		return sessionFactoryBean().getObject();
	}

	@Bean
	public HibernateTransactionManager transactionManager() {
		HibernateTransactionManager man = new HibernateTransactionManager();
		man.setSessionFactory(sessionFactory());
		return man;
	}

	@Bean
	public OrderRepository orderRepo() {
		return new OrderRepository();
	}
}

… and my test can look like this:

@RunWith(SpringJUnit4ClassRunner.class)
@TransactionConfiguration(defaultRollback = true)
@ContextConfiguration(classes = { TestRepositoryConfig.class })
@Transactional
public class OrderRepositoryTest {
	@Autowired
	private OrderRepository repo;

	@Autowired
	private SessionFactory sessionFactory;

	@Test
	public void testPersistOrderWithItems() {
		Session s = sessionFactory.getCurrentSession();

		Product chestnut = new Product("Chestnut", "2.50");
		s.save(chestnut);
		Product hazelnut = new Product("Hazelnut", "5.59");
		s.save(hazelnut);

		Order order = new Order();
		order.addLine(chestnut, 20);
		order.addLine(hazelnut, 150);

		repo.saveOrder(order);
		s.flush();

		Order persistent = (Order) s.createCriteria(Order.class).uniqueResult();
		Assert.assertNotSame(0, persistent.getId());
		Assert.assertEquals(new OrderLine(chestnut, 20), persistent
				.getOrderLines().get(0));
		Assert.assertEquals(new OrderLine(hazelnut, 150), persistent
				.getOrderLines().get(1));
	}
}

There are a few details worth noting here, though:

  1. I marked the test @Transactional, so that I can access the Session directly. In this scenario, @EnableTransactionManagement on the @Configuration seems to have no effect, as the test is wrapped in a transaction anyway.
  2. If the test is not marked as @Transactional (sensible when it only uses @Transactional components), the transaction seems to always be committed regardless of @TransactionConfiguration settings.
  3. If the test is marked as @Transactional, @TransactionConfiguration seems to be applied by default. Even if it’s omitted, the transaction will be rolled back at the end of the test; if you want it committed, you need @TransactionConfiguration(defaultRollback = false) – see the sketch after this list.
  4. This probably goes without saying, but the @Configuration for tests is probably different from production. Here it uses an embedded H2 database; for a real application I would use a test database on the same engine as production.
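
For instance, here is a minimal sketch (not from the original post) of a commit variant: with defaultRollback = false, the framework-managed transaction is committed after each test method, so the saved data stays in the database. It reuses the Order and OrderRepository classes from the listings above:

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = { TestRepositoryConfig.class })
@TransactionConfiguration(defaultRollback = false)
@Transactional
public class OrderRepositoryCommitTest {
	@Autowired
	private OrderRepository repo;

	@Test
	public void testPersistOrderAndKeepIt() {
		// no rollback here - unlike the tests above, this order
		// stays in the database after the test finishes
		repo.saveOrder(new Order());
	}
}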

That’s it, just those two Java classes. No XML or twisted dependencies. Take a look at my GitHub repository for the complete code.

TDD on Existing Code

I recently read two blog posts about writing tests for existing (often “legacy”) code. Jacek Laskowski describes (in Polish) the moment when he realized that TDD actually means development driven by tests, starting with tests, and as such cannot be applied to an existing codebase.

I can think of at least several cases where you can do TDD on existing, even not-so-pretty legacy code.

Introducing Changes

When you’re about to add a feature to existing code, start with a test for that single feature. When you see a bug report, write a test for it. Sure, it probably won’t cover much more than this feature or bug, but it can drive your development and guard against regression in the future.
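
For example, a bug-report test can be as small as this minimal sketch (InvoiceNumberFormatter and the bug number are made up for illustration):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class InvoiceNumberFormatterTest {
	// Bug #1234 (hypothetical): short invoice numbers were rendered unpadded.
	// Written first, this test fails until the bug is fixed, then guards
	// against regression.
	@Test
	public void padsInvoiceNumberToSixDigits() {
		assertEquals("000042", new InvoiceNumberFormatter().format(42));
	}
}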

Understanding / Documenting Code

When you learn a library, you can try spikes in the form of unit tests. They may be much more exhaustive and beneficial in the future than a “breakable toy” that you throw away after use. You can use them as documentation or ready-to-use examples, and they may even protect you against bugs or incompatibilities introduced in new versions of the library.
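
For example, such a learning test can pin down the behavior you rely on. A sketch using plain JDK classes, though the same idea applies to any third-party library:

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

import org.junit.Test;

public class LinkedHashMapLearningTest {
	// A learning test kept around as documentation: it records the library
	// behavior we depend on and will fail if a new version breaks it.
	@Test
	public void preservesInsertionOrder() {
		Map<String, Integer> map = new LinkedHashMap<String, Integer>();
		map.put("b", 2);
		map.put("a", 1);
		assertEquals(Arrays.asList("b", "a"), new ArrayList<String>(map.keySet()));
	}
}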

You can also try the same trick on legacy code. As Tomek Kaczanowski points out, those tests will often be high-level, integration or end-to-end tests. That’s better than nothing and can be a good starting point for refactoring.

Is that TDD?

One could say that this is not test-driven development. I would argue that the whole point of TDD is not a fanatical red-green-blue cycle. It is introducing small, fast, focused (as much as possible…), automated tests that become “live” specification and documentation, and protect you from regressions.

Yes, there is a focus shift. There is no “red” phase. Moreover, in a way you write tests after code, even though the benefits left after the process are the same.

I’ve spent a few years on a fairly large project full of legacy code. And I mean legacy, sometimes in the facepalm way.

Whenever I start a piece of work, be it a bug or a feature, I try to think about tests. If it’s a rusty, legacy area, I may spend a while understanding the codebase. As I do, I may leave tests for existing code as breadcrumbs. Very often they reveal weaknesses and beg for refactoring. Sometimes I may do the refactoring in place (if it’s very important, or easy), other times leave it for the future.

Sometimes I do know the area that I need to deal with, but it is pretty convoluted. Again, I may spend quite a while trying to write the first test for my new piece of work. But as I design this test, I carefully lay out and understand all the collaborators and think about the flow. Think about it: designing a test alone can help you understand the codebase and the task at hand in much more depth, and feel much safer about what you’re trying to do.

Once the first test is in, new tests are usually much easier and we’re back in the fancy red-green-blue groove.

There is much more depth to it. TDD is not limited to designing new code in greenfield projects. Tests can also help you understand all the dependencies and conditions in an existing environment. In other words, think carefully before hacking.

I would say it’s at least as beneficial there as on a greenfield project, if not more so.

Do Leave Failing Tests

Thou shalt do thy coding in small increments. No work item may be longer than one day. And if thy work is unfinished at the end of the day, cut it off, and cast it from thee. We have heard it, believe it and are ready to die for it.

Always?

Here’s an example. Recently I spent over two weeks on a single development task. It had several moving parts that I had to create from scratch, carefully designing each of them and their interactions. Together they form a system that must be as robust as possible, otherwise our users won’t be able to use our application at all (hello, Web Start!). Finally, I tried to make it as transparent and as maintainable as possible.

I took the iterative approach. Create a spike or two. Lay out some high-level design and test. Implement some detail. Another spike. Another detail. An a-ha moment, step back and rewrite. You know how it goes.

Now, it was an all-or-nothing piece of functionality. Firstly, it was critical that everything fit together perfectly and there were no holes – I just had to grasp the whole thing, even though it took days. Secondly, there were no parts that made much sense individually. Design and interfaces varied wildly, as is often the case in the young, unstable stage of development.

Sometimes your task is just like that. It simply is too big for one day or even one week, and too critical and cohesive to be partitioned.

Here’s some advice.

When you have to leave, what is the best moment to stop? It’s when you’re done with the part you were working on, right? And then, when you get back to it on Monday, you spend a few lazy, sleepy hours trying to figure out where you left off 3 days ago? Wrong!

When you’re leaving, leave a failing test. Code that fails to compile. A bug that jumps at you and tries to bite your head off as soon as you launch. When you return, you’ll see the fire and jump right into action. You will know exactly where you left off, and it won’t take long to figure out what to do next.
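
For example, the last thing you commit before the weekend could be as blunt as this (a sketch; ReconnectPolicy is a made-up name):

import static org.junit.Assert.fail;

import org.junit.Test;

public class ReconnectPolicyTest {
	// A deliberate breadcrumb: this test stays red until Monday's work is
	// done, and tells you exactly where to pick up.
	@Test
	public void retriesWithBackoffAfterConnectionLoss() {
		fail("TODO: implement backoff in ReconnectPolicy.next()");
	}
}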

Thanks to Tynan for helping me realize it. Though his post is a general lifestyle advice, I think it can be applied to software engineering as well.

TDD in Clojure: Mocking & Stubbing

A few minutes into my first real TDD trip in Clojure I discovered there are two reasonable ways I can do tests: mocking or black-box testing. I decided to go for mocking, and so I discovered clojure.contrib.mock. I found the docs fairly confusing, but finally understood them with a little help from this article and some research.

Verify Calls to a Function

Assuming we have a function that computes the square of a number, we want to write another function for the square of a sum.

(defn square [x] (* x x))

Our test can look like this:

(ns squirrel.test.core
  (:use [clojure.test])
  (:use [clojure.contrib.mock]))

(deftest test-square-of-sum
  (expect [square (has-args [3])]
    (square-of-sum 2 1)))

This use of expect asserts that when we execute the inner form (square-of-sum 2 1), it calls square with an argument equal to 3. However, it does not execute square itself. The only thing that this test checks is whether square got called. In particular, it does not check what (square-of-sum 2 1) returns. We’ll get back to stubbing in a moment.

Stubbing

Let’s modify our test to also assert the final result:

(deftest test-square-of-sum
  (expect [square (has-args [3])]
    (is (= 9 (square-of-sum 2 1)))))

When the test runs, it fails because square-of-sum returns nil. The reason is that expect replaces square with a stub which by default doesn’t return anything.

To have stub return a concrete value, we can use the returns function:

(deftest test-square-of-sum
  (expect [square (returns 9 (has-args [3]))]
    (is (= 9 (square-of-sum 2 1)))))

Voila. Now the test passes.

To recap, what this tiny test does is:

  • Create a stub for square which returns 9 for an argument of 3.
  • Assert that square-of-sum calls this stub with argument 3.
  • Assert that square-of-sum eventually returns the correct value.

Quite a lot for such a tiny test.

Expectation Hash

You may be wondering what exactly the second argument in each binding pair is. The Clojure docs call it an expectation hash.

Each function that operates on an expectation hash, such as has-args or returns, has two overloaded versions. One of them only takes a value or predicate and returns a new expectation hash. Examples include (returns 9) or (has-args [3]).

The other version takes an expectation hash as the second argument. These versions are used to pass the expectation hash through a chain of decorators. Order of decoration does not matter, so (returns 9 (has-args [3])) is effectively the same as (has-args [3] (returns 9)).