Asynchronous tests with GPars (Osoco test gallery, part 4)

If you have written multithreaded code in Groovy or Grails, you have probably stumbled upon the GPars library. It simplifies parallel processing – sometimes it’s as easy as wrapping a code fragment in a GPars closure, changing the iteration method (in our example from each to eachParallel) and passing the closure to a GPars pool. The library implementation handles the executor pool creation and adds new methods to collections.
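As a minimal, self-contained sketch of that pattern (the @Grab coordinates, GPars version and pool size are illustrative):

```groovy
// Fetch GPars at script startup; the version is illustrative
@Grab('org.codehaus.gpars:gpars:1.2.1')
import groovyx.gpars.GParsPool

// Wrap the fragment in withPool and switch each -> eachParallel
GParsPool.withPool(4) {
    def results = [].asSynchronized()   // collected from multiple threads
    (1..8).eachParallel { n ->
        results << n * n                // runs on the pool's threads
    }
    assert results.sort() == [1, 4, 9, 16, 25, 36, 49, 64]
}
```

Inside the withPool block, GPars decorates collections with the parallel variants (eachParallel, collectParallel, etc.), so the surrounding code barely changes.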

Whenever I deliberately add multithreading, I always separate responsibilities (processing logic and concurrency handling). I wrap the single-threaded class with a new one that handles the parallel execution (creating a pool, managing threads, handling cancellation, etc.):

class MultithreadedProcessorService {
    ProcessorService processorService
    def threadPoolSize = ConfigurationHolder.config.threadPool.size

    void processMultithreaded(batches) {
        GParsPool.withPool(threadPoolSize) {
            batches.eachParallel { batch ->
                processorService.process(batch)
            }
        }
    }
}

class ProcessorService {
    void process(batch) {
        // some processing
    }
}

To test the multithreaded class you can use Spock interactions. As opposed to Groovy and Grails mocks, they are thread-safe, so you won’t get any weird, unstable results:

class MultithreadedProcessorServiceSpec extends UnitSpec {
    private MultithreadedProcessorService multithreadedProcessorService
    // Using Spock mocks because they are thread-safe
    private ProcessorService processorService = Mock(ProcessorService)

    def setup() {
        mockConfig('threadPool.size=2')
        multithreadedProcessorService = new MultithreadedProcessorService(
            processorService: processorService
        )
    }

    def 'processes batches using multiple threads'() {
        given:
        def batches = [[0, 1], [2, 3, 4], [5]]
        def processedBatches = markedAsNotProcessed(batches)

        when:
        multithreadedProcessorService.processMultithreaded(batches)

        then:
        batches.size() * processorService.process({ batch ->
            processedBatches[batch] = true
            batch in batches
        })
        assertAllProcessed(processedBatches)

    }

    private markedAsNotProcessed(batches) {
        batches.inject([:]) { processed, batch ->
            processed[batch] = false
            processed
        }
    }

    private void assertAllProcessed(batches) {
        assert batches*.value == [true] * batches.size()
    }
}

This post series presents the best of Osoco’s tests – tests that were tricky or that we are simply proud of. You can find the runnable source code for this test and more in the Grails Test Gallery project shared on GitHub.

Grails base persistence specification (Osoco test gallery, part 3)

In our projects we include at least one integration test per domain class. We want to assure ourselves that the GORM mapping is correct and that the class is persistable in the target database.

The test plan is simple:

  • create a valid persistable sample of the domain object
  • save it
  • clear Hibernate first level cache (i.e. flush and clear current session)
  • retrieve the object once again and compare it with the previously saved object; the two must be equal according to equals() but must not be the same object reference

We put the steps above into a base abstract specification. In concrete specifications we merely implement the factory method that spawns a persistable object instance.

abstract class PersistenceSpec extends IntegrationSpec {
    SessionFactory sessionFactory

    protected abstract createPersistableDomainObject()
    
    def 'persistable domain object should be able to be saved and retrieved'() {
        given:
        def persistableDomainObject = createPersistableDomainObject()

        when:
        def savedDomainObject = persistableDomainObject.save()

        then:
        savedDomainObject.id != null

        when:
        clearFirstLevelCache()

        def retrievedDomainObject = persistableDomainObject.class.get(savedDomainObject.id)

        then:
        savedDomainObject == retrievedDomainObject
        !savedDomainObject.is(retrievedDomainObject)
    }

    private clearFirstLevelCache() {
        sessionFactory.currentSession.flush()
        sessionFactory.currentSession.clear()
    }
}

This post series presents the best of Osoco’s tests – tests that were tricky or that we are simply proud of. You can find the runnable source code for this test and more in the Grails Test Gallery project shared on GitHub.

Grails equals and hashCode testing (Osoco test gallery, part 2)

If you work with GORM and your objects are held in collections, you should implement the equals and hashCode methods. Like everything else, they should be thoroughly tested to verify that they obey the equals/hashCode contract. At Osoco we created the equals-hashcode-test Grails plugin that provides a base Spock specification for exactly that.

In your equals and hashCode tests you simply extend the EqualsHashCodeSpec and provide:

  • a factory method createDomainObjectToCompare that will create a fresh object for the comparison
  • a map of modified properties included in equals and hashCode
  • optionally a map of modified properties ignored in both methods
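A hypothetical concrete specification might look roughly like this. The Book class, its properties and the exact method names expected by the plugin are illustrative assumptions – check the plugin documentation for the real contract:

```groovy
class BookEqualsHashCodeSpec extends EqualsHashCodeSpec {

    // factory method: a fresh, equal-by-value instance for each comparison
    protected createDomainObjectToCompare() {
        new Book(isbn: '84-7432-047-1', title: 'La colmena')
    }

    // properties included in equals/hashCode - changing any of them
    // must yield a non-equal object (names/values are hypothetical)
    protected Map modifiedPropertiesIncludedInEqualsAndHashCode() {
        [isbn: '84-339-0225-5']
    }

    // properties ignored by both methods - changing them
    // must still yield an equal object
    protected Map modifiedPropertiesIgnoredInEqualsAndHashCode() {
        [title: 'Another title']
    }
}
```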

Additionally, in the case of inheritance, we check whether the equals method complies with the symmetry rule. We do it in an extra feature method added to the spec:

def 'equals relation must be symmetric with inheritance'() {
    given:
    def content = new Content()

    and:
    def book = new Book()

    expect:
    (content == book) == (book == content)
}

You may say that in Grails 2.0 and Groovy 1.8 there is the @EqualsAndHashCode AST transformation and you don’t need to write tests for the generated methods. I agree up to a point. I think you can trust the Groovy developers, so you don’t have to test the methods for reflexivity, symmetry, transitivity, consistency and ‘non-nullity’. You still have to test, however, that the annotation configuration includes the properties that actually account for the logical equality between objects.
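For reference, the Groovy 1.8 transformation with an explicit includes list looks like this (the Book class and its properties are illustrative):

```groovy
import groovy.transform.EqualsAndHashCode

// only isbn accounts for logical equality; title is ignored
@EqualsAndHashCode(includes = 'isbn')
class Book {
    String isbn
    String title
}

def a = new Book(isbn: '123', title: 'First edition')
def b = new Book(isbn: '123', title: 'Second edition')
assert a == b && a.hashCode() == b.hashCode()   // title is irrelevant
assert a != new Book(isbn: '456', title: 'First edition')
```

It is precisely the includes (or excludes) configuration that deserves a test – a property accidentally left out of the list silently changes the equality semantics.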

In future releases of the plugin we will surely simplify the base specification and remove some checks, but we will certainly continue testing both methods (with our plugin it’s dirt-cheap!).


This post series presents the best of Osoco’s tests – tests that were tricky or that we are simply proud of. You can find the runnable source code for this test and more in the Grails Test Gallery project shared on GitHub.

JMS queue listener integration tests with Grails (Osoco test gallery, part 1)

In our application we have to consume JMS messages. We installed the Grails JMS plugin and wrote a listener for this task. So as not to violate the single responsibility principle, our consumer receives a message and routes it to a MessageProcessorService for further processing.

class QueueListenerService {
    static final QUEUE_NAME = 'aQueue'
    static exposes = ['jms']

    MessageProcessorService messageProcessorService

    @Queue(name = QueueListenerService.QUEUE_NAME)
    void onMessage(msg) {
        messageProcessorService.process(msg)
    }
}

MessageProcessorService is not relevant for this test; we have a stub for it:

class MessageProcessorService {
    void process(msg) {
        log.info("Processing message ${msg}")
    }
}

We use Spock for testing. We want to verify that our listener:

  • receives a message and passes it to further processing
  • the message is returned to the queue if an exception is thrown (assuming that we have configured a redelivery policy)

In this kind of spec we have to deal with asynchrony. We are going to send a message from the spec, but it will be received by another thread. We synchronize the test and listener threads with a signal (a CountDownLatch) that indicates that the JMS message has been received and passed to the processor. We initialize the latch at the beginning of the test. Then we send the message and block until the counter reaches zero (meaning that our mock has received the message).
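Stripped of all JMS machinery, the latch handshake works like this (plain Groovy, with a second thread standing in for the listener):

```groovy
import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

def messageReceived = new CountDownLatch(1)
def processed = false

// simulates the listener thread that consumes the JMS message
Thread.start {
    processed = true
    messageReceived.countDown()   // signal: message handled
}

// the test thread blocks here until the signal arrives (or 5 s pass)
assert messageReceived.await(5, TimeUnit.SECONDS)
assert processed   // countDown/await also establishes a happens-before edge
```

The await call both blocks the test thread and guarantees that writes made before countDown are visible afterwards, so no extra synchronization of the flag is needed.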

Last but not least, we want to check whether the queue has been emptied (first test method) or not (second test method). Once again we have to handle the asynchronous behaviour of JMS. After receiving the message we start polling the queue, checking whether the given condition (queue empty or not) is fulfilled. We establish a 5-second timeout – it should be enough for the JMS broker to handle a processing exception (see the Timeout helper class).

So, ladies and gentlemen, the final code:

class QueueListenerServiceSpec extends IntegrationSpec {
    private static final QUEUE_RECEPTION_TIMEOUT_SEC = 5
    private static final QUEUE_POLL_TIMEOUT_MILLIS = 5000
    private static final QUEUE_POLL_INTERVAL_MILLIS = 500

    QueueListenerService queueListenerService
    JmsService jmsService

    private messageReceived = new CountDownLatch(1)
    private messageProcessor = Mock(MessageProcessorService)

    def setup() {
        queueListenerService.messageProcessorService = messageProcessor
    }

    def 'receives a message and passes it to further processing'() {
        when:
        jmsService.send(QueueListenerService.QUEUE_NAME, message())
        messageReceived.await(QUEUE_RECEPTION_TIMEOUT_SEC, SECONDS)

        then:
        1 * messageProcessor.process({ Map msg ->
            messageReceived.countDown()
            msg == message()
        })

        assertQueue(empty())
    }

    def 'message is returned to the queue if an exception is thrown'() {
        given:
        messageProcessor.process(_) >> {
            messageReceived.countDown()
            throw new IllegalStateException('Processing error')
        }

        when:
        jmsService.send(QueueListenerService.QUEUE_NAME, message())
        messageReceived.await(QUEUE_RECEPTION_TIMEOUT_SEC, SECONDS)

        then:
        assertQueue(notEmpty())
    }

    private message() {
        [key: 'a value']
    }

    private void assertQueue(condition) {
        def timeout = new Timeout(QUEUE_POLL_TIMEOUT_MILLIS)
        while (!condition.fulfilled()) {
            if (timeout.hasTimedOut()) {
                throw new AssertionError(condition.describeFailure())
            }

            sleep(QUEUE_POLL_INTERVAL_MILLIS)
        }
    }

    private empty() {
        [
            fulfilled: { jmsService.browse(QueueListenerService.QUEUE_NAME) == []},
            describeFailure: 'Expected queue to be empty'
        ]
    }

    private notEmpty() {
        [
            fulfilled: { jmsService.browse(QueueListenerService.QUEUE_NAME) != []},
            describeFailure: 'Expected queue to be NOT empty'
        ]
    }
}

class Timeout {
    private endTime

    Timeout(duration) {
        endTime = System.currentTimeMillis() + duration
    }

    def hasTimedOut() {
        timeRemaining() <= 0
    }

    def timeRemaining() {
        endTime - System.currentTimeMillis()
    }
}
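The Timeout helper is plain Groovy and easy to exercise on its own (the class is repeated here so the snippet is self-contained; the 200 ms budget is arbitrary):

```groovy
class Timeout {
    private endTime

    Timeout(duration) {
        endTime = System.currentTimeMillis() + duration
    }

    def hasTimedOut() {
        timeRemaining() <= 0
    }

    def timeRemaining() {
        endTime - System.currentTimeMillis()
    }
}

def timeout = new Timeout(200)   // 200 ms budget
assert !timeout.hasTimedOut()
sleep 250                        // outlive the budget
assert timeout.hasTimedOut()
```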

This post series presents the best of Osoco’s tests – tests that were tricky or that we are simply proud of. You can find the runnable source code for this test and more in the Grails Test Gallery project shared on GitHub.

Testing a legacy Java application with Groovy, Spock, Spring Test and Unitils

Why Groovy and Spock?

I was thinking how to retrofit legacy Java application with automated tests.

Why not use Groovy? It has a powerful syntax that lets you do more with less code. Fixture generation becomes easier – Groovy has language-level support for lists and maps. Objects with many attributes can be created using constructors with named parameters. Complex object hierarchies? No problem – write your own builder or use an implementation that Groovy ships with (e.g. for handling XML). Closures simplify the creation of test helper methods (implementing the template method pattern is trivial with a closure). And what is most important – all these features can be used with existing Java code.

Next step was to choose a testing library.

I was planning to write integration tests interacting with a database. The application made extensive use of the Spring IoC container and declarative transactions. Persistence was implemented with Hibernate using the Spring ORM module. So the natural choice was JUnit 4 with the Spring Test framework (for application context injection and transaction management in test methods).

I wanted to write BDD-style tests. My goal was to describe system features through specifications. That would give me a clear benefit – documentation and examples of the behaviour of the legacy code. I also wanted the tests integrated with Spring. So my choice fell on Spock (here you can find a short and concise introduction to Spock).

Why Spock? You’ll find strong arguments on the Spock wiki. Additionally, it integrates seamlessly with Spring and Spring Test. I liked the separation of the four test phases through blocks (given, when, then and cleanup).

Spock and Spring Test test drive

To test Spock capabilities I created a very simple project based on the legacy system code. It uses Hibernate for persistence, Spring IoC and ORM. You can find the complete source code on GitHub:

https://github.com/mgryszko/blog-spock-spring-unitils

Spock integrates with Spring via an extension. The documentation is short but to the point. Don’t forget to check the examples on GitHub.

To use Spring Test annotations in your specification, first include spock-spring.jar on your classpath (e.g. as a Maven dependency):

<dependency>
    <groupId>org.spockframework</groupId>
    <artifactId>spock-spring</artifactId>
    <version>0.5-groovy-1.7</version>
    <scope>test</scope>
</dependency>

Then tell Spring which configuration files should be used to build the application context. This is done just as in a normal JUnit 4 Spring test – simply place @ContextConfiguration on the specification. Spock will intercept the specification lifecycle method calls (setup, cleanup, setupSpec, cleanupSpec) and delegate them to the SpringTestContextManager. From then on you can use the @Autowired and @Resource annotations to inject dependencies. To execute a feature method in a transaction, mark it with @Transactional, as if it were a JUnit test method.

But wait, this is an integration test and I want to populate the database with test data! Clearly DbUnit comes to mind. Let’s use it.

The easiest way to set up a persistent fixture is to use the Spock Unitils extension. The Unitils library provides a simple way to load DbUnit fixtures (it can also create the Spring application context and inject Spring dependencies, which overlaps with the functionality offered by Spring Test).

First include spock-unitils.jar on your classpath (in my sample as a Maven dependency; I explicitly included unitils-dbunit in order to exclude some dependencies provided by other artifacts used in the project):

<dependency>
	<groupId>org.spockframework</groupId>
	<artifactId>spock-unitils</artifactId>
	<version>${spock.version}</version>
	<scope>test</scope>
</dependency>
<dependency>
	<groupId>org.unitils</groupId>
	<artifactId>unitils-dbunit</artifactId>
	<version>${unitils.version}</version>
	<scope>test</scope>
	<exclusions>
		<exclusion>
			<groupId>commons-logging</groupId>
			<artifactId>commons-logging</artifactId>
		</exclusion>
		<!-- Spock already includes JUnit -->
		<exclusion>
			<groupId>junit</groupId>
			<artifactId>junit</artifactId>
		</exclusion>
		<!-- due to Hibernate 3.2 dependency -->
		<exclusion>
			<groupId>org.unitils</groupId>
			<artifactId>unitils-dbmaintainer</artifactId>
		</exclusion>
		<!-- caused by DBUnit dependency -->
		<exclusion>
			<groupId>org.slf4j</groupId>
			<artifactId>slf4j-nop</artifactId>
		</exclusion>
	</exclusions>
</dependency>

Then, put the @UnitilsSupport annotation on the specification to enable Unitils support. To populate the database with test data before every feature method, annotate your specification with @DataSet. By default, Unitils will look for an .xml data file in the same directory (package) as the specification (net/gryszko/spock/dao/BankDaoSpec.xml).

The complete specification with the Spring and Unitils features is listed below:

@ContextConfiguration(locations = ["classpath:/resources.xml"])
@UnitilsSupport
@DataSet
class BankDaoSpec extends Specification {

  @Autowired
  private BankDao dao

  @Transactional
  def "finds a bank by name"() {
    setup:
    def bankName = 'MBank'

    when:
    Bank bank = dao.findByName(bankName)

    then:
    bank.name == bankName
  }
}

Note on Spring and Unitils transaction synchronization

When Unitils (with DbUnit) and Spring are used together, there are two ways to handle transactions:

  1. transaction handling is disabled for Unitils in unitils.properties (DatabaseModule.Transactional.value.default=disabled)
  2. Unitils handles transactions using the Spring PlatformTransactionManager

In the second case, there are two options of configuring the datasource for Spring and Unitils:

  1. Spring and Unitils use separate datasources. Unitils creates its datasource based on the configuration from unitils.properties. The Spring datasource depends on the implementation used (in the simplest case it could be org.springframework.jdbc.datasource.SimpleDriverDataSource)
  2. Spring and Unitils share the datasource. Unitils creates its datasource based on the configuration from unitils.properties. The datasource is then used in the application context through UnitilsDataSourceFactoryBean
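A sketch of the shared-datasource wiring (the second option) in the Spring context might look like this; the bean ids and the choice of transaction manager are illustrative:

```xml
<!-- Unitils builds the actual DataSource from unitils.properties;
     this factory bean merely exposes it to the Spring context -->
<bean id="dataSource"
      class="org.unitils.database.UnitilsDataSourceFactoryBean"/>

<!-- both the application and the DbUnit operations now run
     against the same DataSource -->
<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
```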

What implications does the choice of datasource configuration have?

If Unitils transaction handling is disabled, DbUnit operations use the default database and connection transaction settings. In my example this worked well with both HSQLDB and MySQL.

If Unitils transaction handling is enabled and two separate datasources are configured for Unitils and Spring, then the PlatformTransactionManager synchronizes on two datasources (the first datasource is used for DbUnit operations, the second one for the application persistence). This results in two parallel transactions, which works well for MySQL but not for HSQLDB (deadlock).

If Unitils and Spring share the same datasource, then the PlatformTransactionManager synchronizes on it. A transaction is started for the test method. When a DbUnit operation is about to be performed, that transaction is suspended and a new one is started. After the DbUnit operation finishes, the original transaction (for the test method) is resumed. This works well for both MySQL and HSQLDB.

Notice that even if Spring uses a HibernateTransactionManager and Unitils a DataSourceTransactionManager, they synchronize on the same resource (the datasource) via the TransactionSynchronizationManager.

Wrap up

Writing tests in Groovy is a lot of fun. With Spock they are well structured and more readable than pure JUnit tests. There are already a lot of extensions. As of version 0.5, Spock integrates with the Maven and Gradle build tools and with the Grails, Guice, Spring, Tapestry and Unitils frameworks.

For a quick DbUnit integration, Unitils meets expectations. For more sophisticated scenarios it has some drawbacks: you cannot define shared database fixtures (the dataset is loaded before every test method) and you have to be careful with the transaction configuration (in your application and for Unitils). Depending on the application and database configuration, your tests can deadlock.

A solution to these shortcomings would be a custom @DataSet annotation working with a DbUnit test execution listener and participating in application transactions. But this is a subject for another blog post…