Spying with Jasmine

I really like using Jasmine to write unit tests in JavaScript. It’s easy to use, and the way the expects are written feels really natural. I’ve made a lot of use of spies in the tests I’ve written recently, and they’re also pretty awesome. I’ve used them in a few different ways:
* As a basic mock object – to verify that a method is being called or not called
* To control the value that’s being returned
* To fire a callback function
* To check the arguments that are being passed to a method are what I expect

Of course you could do all of this by hand-crafting mock objects, but using Jasmine spies usually means less code, and the tests become far more readable. The other advantage of spies is that Jasmine removes them at the end of each test, which avoids the problem we had previously of mocks in one test overwriting real objects that we needed for another test.
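For example, a spy set up with spyOn inside a spec only lives for that spec. Here’s a minimal sketch of that behaviour (console.log is just standing in for any real method you might spy on):

it("replaces the real method for this spec only", function() {

    spyOn(console, 'log');

    console.log("this call is swallowed by the spy");

    expect(console.log).toHaveBeenCalled();
    // once this spec finishes, Jasmine restores the original console.log

});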

Verifying that a method is or isn’t called

There are two ways to create a spy in Jasmine:
* spyOn
* jasmine.createSpy

spyOn can only be used when the method already exists on the object, whereas jasmine.createSpy returns a brand new function that you can attach wherever you need it.
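To make the difference concrete, here’s a minimal sketch (the object and names are made up for illustration):

it("shows the two ways to create a spy", function() {

    var realObject = { save: function() {} };

    spyOn(realObject, 'save');                      // replaces the existing save method with a spy
    var standalone = jasmine.createSpy("Save spy"); // a brand new spy function, attach it wherever you like

    realObject.save();
    standalone();

    expect(realObject.save).toHaveBeenCalled();
    expect(standalone).toHaveBeenCalled();

});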

Take this example of an object that takes a jQuery element and toggles its display property:

var Toggleable = function(element) {

    this.toggle = function() {
        if (currentlyVisible()) {
            hide();
        } else {
            show();
        }
    };

    function currentlyVisible() {
        return element.css("display") === "block";
    }

    function show() {
        element.show();
    }

    function hide() {
        element.hide();
    }

};

When testing the toggle function, I could use spyOn with a real jQuery object, or jasmine.createSpy to create a fake one. Below are two different ways to write a test that verifies that the jQuery show method is called on the element when it’s currently hidden.

it("can create a method for you", function() {

    var fakeElement = {};
    fakeElement.css = function() {
    };
    fakeElement.show = jasmine.createSpy("Show spy");

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

it("can spy on an existing method", function() {

    var fakeElement = $("<div style='display:none'></div>");
    spyOn(fakeElement, 'show');

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

I could also use spies to verify that a method hasn’t been called. For example, I could add another expect to the second test above:

it("can tell you when a method has - and hasn't - been called", function() {

    var fakeElement = $("<div style='display:none'></div>");
    spyOn(fakeElement, 'show');
    spyOn(fakeElement, 'hide');

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();
    expect(fakeElement.hide).not.toHaveBeenCalled();

});

Using a Spy to control the return value

In the first test above, I had to stub the css function, since it’s also called inside toggle(). I could use a spy for this too – I don’t care whether or not it’s called, but it makes the code slightly more readable. Here’s the same test with the two spies:

it("can create a method for you", function() {
	
    var fakeElement = {};
    fakeElement.css = jasmine.createSpy("CSS spy").andReturn("none");
    fakeElement.show = jasmine.createSpy("Show spy");

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

There isn’t a built-in way to return different values depending on the arguments that are passed into a spy, but it is possible using andCallFake(). Here’s the same test (again – sorry, it’s getting a bit boring now, isn’t it?) rewritten with andCallFake:

it("can create a method for you with some logic", function() {
	
    var fakeElement = {};
    fakeElement.css = jasmine.createSpy("CSS spy").andCallFake(function(property) {
    	if (property === "display") {
        	return "none";
        }
    });

    fakeElement.show = jasmine.createSpy("Show spy");

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

Firing a Callback

Quite regularly I want to unit test methods that call out to another method and pass it a callback – for example, making an ajax request and setting the html of an element based on what comes back.

var DataContainer = function(element) {

    this.loadData = function() {
        $.ajax({
            url: 'http://postposttechnical.com/',
            context: document.body,
            success: putDataInElement
        });
    };

    function putDataInElement(data) {
        element.html(data);
    }

};

In this case, I can replace $.ajax with a spy that fires the success callback via andCallFake, then use toHaveBeenCalled to verify the method inside the callback that I’m interested in:

it("can call a callback that's passed", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy().andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalled();
});

The ajax spy is passed the options object containing the callback that sets the html, and calls that callback with our fake data. Here’s a pretty contrived example that makes sure we’re setting the html of the element, and not the text. This is more useful when the code has if / else branches and you want to validate that the correct methods are being called.

it("can tell if a method has been called or not", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");
    fakeElement.text = jasmine.createSpy("text for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy().andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalled();
    expect(fakeElement.text).not.toHaveBeenCalled();
});

Verifying Arguments

In the example above, I might accidentally forget to pass the data into the element.html() call. We can test for that too, using the Jasmine matcher toHaveBeenCalledWith.

Here’s how that might work in the test above:

it("can tell if a method has been called or not and check the parameters", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");
    fakeElement.text = jasmine.createSpy("text for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy().andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalledWith(fakeData);
});

toHaveBeenCalledWith passes if there has been at least one call with the matching arguments, even if the spy was called more than once. To use it, you must supply all of the arguments in the right order – which may not always be possible. Consider checking that the ajax call itself was made with the right options – the success callback is an internal function that I can’t reference from the test. However, I can use jasmine.any instead – here’s how it works:

it("can check the arguments passed to a function", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy("Ajax Spy").andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect($.ajax).toHaveBeenCalledWith({
        url : 'http://postposttechnical.com/',
        context : document.body,
        success : jasmine.any(Function)
    });

});
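One detail worth noting: because any single matching call satisfies toHaveBeenCalledWith, a spy that has been called several times will still pass as long as one of those calls matches. A quick sketch of my own (not one of the repo examples):

it("matches any one of several calls", function() {

    var spy = jasmine.createSpy("Multi-call spy");

    spy("first");
    spy("second");

    expect(spy).toHaveBeenCalledWith("first"); // passes even though a later call had different arguments

});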

All of the code examples here can be found on GitHub.

Simple Design and Testing Conference

The Simple Design and Testing conference consists of two days of Open Space discussions among a small group, primarily developers with a keen interest in Agile. It kicked off in London on March 12th with lightning talks for introductions, before lining up the Open Space sessions.

On Day 2, Liz Keogh held two sessions that were closer to presentations than discussions: Feature Injection and The Evil Hat. Both were excellent, and having a couple of more structured sessions made quite a nice contrast to the Open Space discussions.

Tell Don’t Ask

Tom Crayford kicked off the session by asking: what if all methods returned void? There was a lot of discussion about the patterns followed in the “GOOS” book (Growing Object-Oriented Software, Guided by Tests) before the group moved on to talking more about functional programming. Although I didn’t contribute much, the discussion was interesting – I’d love to try a coding kata one day with all methods returning void; it sounds like an interesting challenge and I’d like to see how it turns out :)
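For anyone who hasn’t come across Tell Don’t Ask, here’s a rough sketch of my own (not from the session) of the difference, using a made-up account object:

var price = 25;

// "Ask" style: pull the state out of the object and make the decision here
var askAccount = { balance: 100 };
if (askAccount.balance >= price) {
    askAccount.balance -= price;
}

// "Tell" style: the method returns nothing; the object makes the decision itself
var tellAccount = {
    balance: 100,
    debit: function(amount) {
        if (this.balance >= amount) {
            this.balance -= amount;
        }
    }
};
tellAccount.debit(price);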

Writing Tests against a Legacy Codebase is a Good Thing

I was looking for good ways to convince people that doing testing – unit and acceptance tests – against a legacy codebase is a Good Thing. I thoroughly enjoyed the discussion and plan to post more about it later, but what I found most interesting was the difference of opinion even within the group. Coming around to really, truly seeing and believing in the value of writing and maintaining automated tests can be a very long process.

Speculative Coding

Many teams and organizations still build software a layer at a time, with teams divided along technology and architectural lines. Entire APIs are built before the client that will consume them is started, with work being wasted because we speculate as to what will ultimately be needed. Finding ways to get fast feedback such as attacking the most complex and interesting scenarios first can help to avoid this waste.

Wicked Problems

I’m still a bit of a newbie when it comes to Systems Thinking, and I hadn’t heard of Wicked Problems. A wicked problem is one that has no definitive solution: you don’t know when you’re done, and you can’t experiment – every possible solution is a one-shot operation. Like testing software. You never know the best ways to test the software until you’ve done it. Liz pointed out that the same could be said of analysis. It was an interesting way of looking at things, and I definitely left the session with even more respect for testers.

Simplicity Debt

If we try to keep things simple, do we just defer complexity to a later date? Complex problems usually require complex solutions. We quickly realized that we didn’t really have a common understanding of “Simple”. Does it mean the least amount of effort, or the most elegant solution? Is simple really the opposite of complex, or do we actually mean the opposite of compound? I think we all came away from that session thinking wow, simplicity really is quite complex.

Feature Injection

Liz explains feature injection as a pull system to derive the requirements for the features we want by starting with the vision for a project. To meet the vision, we must achieve goals, which require the system to have certain capabilities. Features work together to form capabilities, and features are further decomposed into stories. Liz suggested that stories having true business value in their own right is often a myth – the main value of stories is that they can be completed on their own and provide a means to get feedback as fast as possible. Most stories can’t be shipped alone.

The other really useful insight from this session was that we often talk about stories from the point of view of the user, but in actual fact most of them are not for the user’s benefit. Does the user really want to see adverts on the page? If we think about the story from the stakeholder’s point of view, it’s far easier to articulate the goal – As a stakeholder, I want adverts on the page, so that when users look at the page I make money.

Why do we estimate?

Estimates, Daniel Temme proposes, are lies. There’s no value in lies, so why do we persist in trying to estimate all of our work? Why can’t we just put the work on the board and pull it in, and it’s done when it’s done? It’d probably be done faster if we weren’t wasting time estimating (and complaining about it). There are companies out there like Forward and DRW who are doing this successfully, blogging and talking about it. I haven’t made up my mind where I stand yet on this one.

Liz’s Evil Hat

I thoroughly enjoyed the Evil Hat talk. I think I just like hearing other people’s horror stories, and Liz has some great ones. It’s also a really interesting point of view to take, which is that most measurements and metrics are really just opportunities for people to game the system. Even if you don’t do it deliberately, that little devil is always on your shoulder, whispering in your ear.

Types of Testing

This session was born out of confusion in a previous one between terms like functional, integration and acceptance tests. In the end, I think we were in general agreement that functional, acceptance and scenario tests are pretty much all the same thing, while integration tests usually exercise the interaction between two or more objects, but not the entire system. Most important of all is that your team has a common understanding of what you mean. Your functional test by any other name would still be green and shiny :)

Jasmine Testing Dojo

We attempted to run a Jasmine testing dojo later in the afternoon but ran into technological challenges (Liz had the only projector adapter, and she’d left for lunch …). Still, we managed to write a few tests and use some spies! I’d love to create a short Jasmine testing workshop to run at a user group in future, so watch this space!

Retrospective

We all agreed the weekend had been a great success, although next time hopefully we can get more publicity and a slightly larger group. Finally, off home to rest my brain and have a well-deserved glass of wine …