A “Live Coding” experiment …

My colleague Rachel Laycock and I prepared a presentation in April on testing JavaScript with Jasmine. It was based on work we had done earlier this year introducing Jasmine to a team of JavaScript developers who were working on a large and complex JavaScript application with little or no unit testing already in place (I wrote about this here and here).

For our first “practice run”, we prepared demonstration code and tests based on the Bowling Game Kata and presented it to a group of fellow ThoughtWorkers. We demonstrated the tests running, and at the request of the audience, changed the code to break one of the tests and demonstrate the error messages.

Feedback from the group was that the pace of the presentation was fairly fast, and the most exciting bit was when we actually changed the code “live”.

More recently, our team at my current client was asked whether we could fill in for a lunchtime Tech Talk at short notice, so I dug out the Jasmine presentation again (sadly, without Rachel, who is thousands of miles away enjoying the delights of Bangalore as a trainer at ThoughtWorks University!).

With only a few hours to adapt the presentation, I decided, based on the previous feedback, to write all of the code and tests live during the presentation. Brave maybe; slightly reckless and risky for sure! I ran through the code I planned to write only once, with another colleague, Chris, and stumbled in a number of places – particularly in the code I wanted to write to demonstrate Jasmine spies – but overall I was happy that it would be good enough. I also decided to try to recruit audience members who might be willing to write some of the code for me.

Learnings from how it went

Overall, I was happy with the decision to code live – the first half of the presentation went very well. When I reached the code around spies, the tests went red, and I couldn’t multitask well enough to keep presenting while finding and fixing the bugs in both the code and the tests, so I abandoned the real code and carried on using just the slides and test code. I also received feedback that the act of writing the tests first was a powerful demonstration of test-driven development, which the audience found interesting.

I also found it very difficult to keep talking and keep it interesting while I was either writing code or somebody else was writing a test, so there were a few short silences.

Lastly, I was aware that some of the code I was writing was not optimal, but I didn’t feel that I had the time or enough focus to refactor it (particularly when it wasn’t working anyway).

I think the presentation would work better with two people and a little more practice of the code. This way, one person could talk while the other focused on writing a test or making it pass. However, I would definitely keep the live coding aspect.

I fell back on the short preparation time as an excuse when things didn’t quite go to plan, but I hope a development audience would usually be fairly forgiving if you can’t quite write code right the first time around!

The presentation and code are available from GitHub. I hope to present it again at the London Java Community in July.

Spying with Jasmine

I really like using Jasmine to write unit tests in JavaScript. It’s easy to use, and the way the expects are written feels really natural. I’ve made a lot of use of Spies in tests I’ve written recently, and they’re also pretty awesome. I’ve used them in a few different ways:
* As a basic mock object – to verify that a method is being called or not called
* To control the value that’s being returned
* To fire a callback function
* To check the arguments that are being passed to a method are what I expect

Of course, you could do all of this by hand-crafting mock objects, but using Jasmine spies usually means less code, and the tests become far more readable. The other advantage of using spies is that Jasmine removes them at the end of each test, which avoids the problem we had previously of mocks in one test overwriting real objects that we needed for another test.
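As a minimal sketch of that clean-up behaviour (using a hypothetical calculator object, not code from the project), a spy installed with spyOn in one spec is gone by the time the next spec runs:

describe("automatic spy removal", function() {

    // A hypothetical object, purely for illustration
    var calculator = {
        add: function(a, b) { return a + b; }
    };

    it("uses the spy within the spec that created it", function() {
        spyOn(calculator, 'add').andReturn(42);
        expect(calculator.add(1, 1)).toBe(42);
    });

    it("sees the real method again in the next spec", function() {
        // Jasmine restored the original add after the previous spec finished
        expect(calculator.add(1, 1)).toBe(2);
    });

});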

Verifying a method is called or not

There are two ways to create a spy in Jasmine:
* spyOn
* jasmine.createSpy

spyOn can only be used when the method already exists on the object, whereas jasmine.createSpy returns a brand-new function that you can attach wherever you need it.

Take this example of an object that takes a jQuery element and toggles its display property:

var Toggleable = function(element) {

    this.toggle = function() {
        if (currentlyVisible()) {
            hide();
        } else {
            show();
        }
    };

    function currentlyVisible() {
        return element.css("display") === "block";
    }

    function show() {
        element.show();
    }

    function hide() {
        element.hide();
    }

};

When testing the toggle function, I could use spyOn with a real jQuery object, or jasmine.createSpy to create a fake one. Below are two different ways to write a test verifying that the jQuery show method is called on the element if it’s currently hidden.

it("can create a method for you", function() {

    var fakeElement = {};
    fakeElement.css = function() {
    };
    fakeElement.show = jasmine.createSpy("Show spy");

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

it("can spy on an existing method", function() {

    var fakeElement = $("<div style='display:none'></div>");
    spyOn(fakeElement, 'show');

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

I could also use spies to verify that a method hasn’t been called. For example, I could add another expect to the second test above:

it("can tell you when a method has - and hasn't - been called", function() {

    var fakeElement = $("<div style='display:none'></div>");
    spyOn(fakeElement, 'show');
    spyOn(fakeElement, 'hide');

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();
    expect(fakeElement.hide).not.toHaveBeenCalled();

});

Using a Spy to control the return value

In the first test above, I had to stub the css function, since that was also being called by toggle(). I could use a spy for this too – although I don’t care whether or not it’s called, a spy might make the code slightly more readable. Here’s the same test with the two spies:

it("can create a method for you", function() {
	
    var fakeElement = {};
    fakeElement.css = jasmine.createSpy("CSS spy").andReturn("none");
    fakeElement.show = jasmine.createSpy("Show spy");

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

There isn’t a really nice way to return different values depending on the arguments passed to a spy, but it is possible using andCallFake(). Here’s the same test (again – sorry, it’s getting a bit boring now, isn’t it?) rewritten with andCallFake:

it("can create a method for you with some logic", function() {
	
    var fakeElement = {};
    fakeElement.css = jasmine.createSpy("CSS spy").andCallFake(function(property) {
    	if (property === "display") {
        	return "none";
        }
    });

    fakeElement.show = jasmine.createSpy("Show spy");

    var toggleable = new Toggleable(fakeElement);

    toggleable.toggle();

    expect(fakeElement.show).toHaveBeenCalled();

});

Firing a Callback

Quite regularly, I want to unit test a method that calls out to another method and passes it a callback – for example, making an ajax request and setting the html of an element based on what’s returned.

var DataContainer = function(element) {

    this.loadData = function() {
        $.ajax({
            url: 'http://postposttechnical.com/',
            context: document.body,
            success: putDataInElement
        });
    };

    function putDataInElement(data) {
        element.html(data);
    }

};

In this case, I could use andCallFake to have a spy invoke the callback, then spy on the method inside the callback that I’m interested in and check it with toHaveBeenCalled:

it("can call a callback that's passed", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy().andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalled();
});

The ajax spy receives the parameters object containing the success callback, and calls the callback with our fake data. Here’s a pretty contrived example that makes sure we are setting the html of the element, and not the text. The pattern is more useful when there are if / else branches in the code and you want to validate that the correct methods are being called.

it("can tell if a method has been called or not", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");
    fakeElement.text = jasmine.createSpy("text for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy().andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalled();
    expect(fakeElement.text).not.toHaveBeenCalled();
});
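One thing worth flagging in these two tests: assigning jasmine.createSpy() directly to $.ajax replaces the real function outright, and nothing puts it back when the spec finishes. Since it’s spyOn that gets the automatic clean-up mentioned earlier, a variation like this sketch may be safer:

it("can fire the callback by spying on the real $.ajax", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";

    // spyOn replaces $.ajax for this spec only; Jasmine restores it afterwards
    spyOn($, 'ajax').andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalled();
});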

Verifying Arguments

In the example above, I might accidentally forget to pass the data into the element.html() call. We can also test for that using the Jasmine function toHaveBeenCalledWith.

Here’s how that might work in the test above:

it("can tell if a method has been called or not and check the parameters", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");
    fakeElement.text = jasmine.createSpy("text for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy().andCallFake(function(params) {
    	params.success(fakeData);
    });

    container.loadData();

    expect(fakeElement.html).toHaveBeenCalledWith(fakeData);
});

toHaveBeenCalledWith passes if there has been at least one call with the matching arguments, even if there was more than one call. To use it, you must supply all of the arguments, in the right order – which may not always be possible. Consider checking that the ajax call itself was made: the success callback is an internal function that I can’t recreate here. However, I can use jasmine.any instead – here’s how it works:

it("can check the arguments passed to a function", function() {

    var fakeElement = {};
    fakeElement.html = jasmine.createSpy("html for fake element");

    var container = new DataContainer(fakeElement);

    var fakeData = "This will be the new html";
    $.ajax = jasmine.createSpy("Ajax Spy").andCallFake(function(params) {
        params.success(fakeData);
    });

    container.loadData();

    expect($.ajax).toHaveBeenCalledWith({
        url : 'http://postposttechnical.com/',
        context : document.body,
        success : jasmine.any(Function)
    });

});

All of the code examples here can be found on GitHub.

Simple Design and Testing Conference

The Simple Design and Testing conference consists of two days of Open Space discussions among a small group, primarily developers with a keen interest in Agile. It kicked off in London on March 12th with lightning talks for introductions, before lining up the Open Space sessions.

On Day 2, Liz Keogh held two sessions that were closer to presentations than discussions: Feature Injection and The Evil Hat. Both were excellent, and it was quite a nice contrast to the open space sessions to have a couple that were more structured.

Tell Don’t Ask

Tom Crayford kicked off the session by asking: what if all methods returned void? There was a lot of discussion about the patterns followed in the “GOOS” book before the group moved on to talking more about functional programming. Although I didn’t contribute much, the discussion was interesting – I’d love to try a coding kata one day with all methods returning void; it sounds like an interesting challenge and I’d like to see how it turns out :)

Writing Tests against a Legacy Codebase is a Good Thing

I was looking for good ways to convince people that doing testing – unit and acceptance tests – against a legacy codebase is a Good Thing. I thoroughly enjoyed the discussion and plan to post more about it later, but what I found most interesting was the difference of opinion even within the group. Coming around to really, truly seeing and believing in the value of writing and maintaining automated tests can be a very long process.

Speculative Coding

Many teams and organizations still build software a layer at a time, with teams divided along technology and architectural lines. Entire APIs are built before the client that will consume them is started, with work being wasted because we speculate as to what will ultimately be needed. Finding ways to get fast feedback such as attacking the most complex and interesting scenarios first can help to avoid this waste.

Wicked Problems

I’m still a bit of a newbie when it comes to Systems Thinking, and I hadn’t heard of Wicked Problems. A wicked problem is one that has no definitive solution: you don’t know when you’re done, and you can’t experiment, because every attempted solution is a one-shot operation. Like testing software – you never know the best way to test the software until you’ve done it. Liz pointed out that the same could be said of analysis. It was an interesting way of looking at things, and I definitely left the session with even more respect for testers.

Simplicity Debt

If we try to keep things simple, do we just defer complexity to a later date? Complex problems usually require complex solutions. We quickly realized that we didn’t really have a common understanding of “Simple”. Does it mean the least amount of effort, or the most elegant solution? Is simple really the opposite of complex, or do we actually mean the opposite of compound? I think we all came away from that session thinking wow, simplicity really is quite complex.

Feature Injection

Liz explains feature injection as a pull system to derive the requirements for the features we want by starting with the vision for a project. To meet the vision, we must achieve goals, which require the system to have certain capabilities. Features work together to form capabilities, and features are further decomposed into stories. Liz suggested that stories having true business value in their own right is often a myth – the main value of stories is that they can be completed on their own and provide a means to get feedback as fast as possible. Most stories can’t be shipped alone.

The other really useful insight from this session was that we often talk about stories from the point of view of the user, but in actual fact most of them are not for the user’s benefit. Does the user really want to see adverts on the page? If we think about the story from the stakeholder’s point of view, it’s far easier to articulate the goal – As a stakeholder, I want adverts on the page, so that when users look at the page I make money.

Why do we estimate?

Estimates, Daniel Temme proposes, are lies. There’s no value in lies, so why do we persist in trying to estimate all of our work? Why can’t we just put the work on the board and pull it in, and it’s done when it’s done? It’d probably be done faster if we weren’t wasting time estimating (and complaining about it). There are companies out there like Forward and DRW who are doing this successfully, blogging and talking about it. I haven’t made up my mind where I stand yet on this one.

Liz’s Evil Hat

I thoroughly enjoyed the Evil Hat talk. I think I just like hearing other people’s horror stories, and Liz has some great ones. It’s also a really interesting point of view to take, which is that most measurements and metrics are really just opportunities for people to game the system. Even if you don’t do it deliberately, that little devil is always on your shoulder, whispering in your ear.

Types of Testing

This session was born out of confusion in a previous one between terms like functional, integration and acceptance tests. In the end, I think we were in general agreement that functional, acceptance and scenario tests are pretty much all the same thing, while integration tests usually exercise the interaction between two or more objects, but not the entire system. Ultimately, the most important thing is that your team have a common understanding of what you mean. Your functional test by any other name would still be green and shiny :)

Jasmine Testing Dojo

We attempted to run a Jasmine testing dojo later in the afternoon, but ran into technological challenges (Liz had the only projector adapter, and she’d left for lunch …). Still, we managed to write a few tests and use some spies! I’d love to create a short Jasmine testing workshop to run at a user group in future, so watch this space!

Retrospective

We all agreed the weekend had been a great success, although for next time hopefully we can get more publicity and a slightly larger group. Finally, off home to rest my brain and have a well deserved glass of wine …

Dynamic dependencies in Jasmine

Despite having worked with JavaScript on most projects, and having unit tested it in the past, I’ve been thrown by some of the issues I’ve experienced recently. I’ve been working to bring a fairly large existing JavaScript codebase – with no unit tests – under test. We chose Jasmine, a BDD testing framework that superseded JSpec. It’s proven easy to get to grips with and very pleasant to use; however, as our unit test suite has grown, I’ve tripped over the dynamic nature of JavaScript.

Jasmine tests are divided into ‘describe’ blocks, which can be nested, and each describe block can have a ‘beforeEach’. However, any code that runs in the beforeEach – and indeed, in any of the tests – can affect the later tests running in the same suite. In particular: creating stubs.

The code we’re writing tests around has a high number of dependencies within it, which means we’ve needed to create rather a lot of stubs for each test. So what happens if I create a stub of an object for one test, then later try to run a test against the real object? It runs against the stub.

Supposing I had a Bookshelf class that looked like this:

var MyLibrary = {};

MyLibrary.BookShelf = function() {

    var books = [];

    this.addBook = function (isbn) {
        books.push(new MyLibrary.Book(isbn));
    };

    this.findBooksBy = function (author) {
        var i,
            matchingBooks = [];

        for (i = 0; i < books.length; i++) {
            var book = books[i];
            if (book.isWrittenBy(author)) {
                matchingBooks.push(book);
            }
        }

        return matchingBooks;
    };

};

And supposing my Book class did something when it initialised itself that I didn’t want it to do within the test:

MyLibrary.Book = function(isbn) {
    var _title;
    var _author;
    var _isbn = isbn;

    this.isWrittenBy = function(author) {
        return author === _author;
    };

    var init = function() {
        $.getJSON("/BookDetails?isbn=" + _isbn, function(data) {
            _author = data.author;
            _title = data.title;
        });
    };

    init();
};

I could stub the Book to avoid the init function from running within my bookshelf tests:


describe("Bookshelf", function() {

    it("should find all the books by a given author", function() {

        MyLibrary.Book = function() {
            this.isWrittenBy = function() {return true;}
        };

        var shelf = new MyLibrary.BookShelf();
        shelf.addBook("1234");

        var books = shelf.findBooksBy("somebody");

        expect(books.length).toBe(1);

    });

});

Then I might want to write a test for the Book class. That’s tricky … but I could override the jQuery function:

describe("Book", function() {

    it("should return false if the author does not match", function() {

        $.getJSON = function(url, callback) {
            callback({ author: "Jo Cranford", title: "Post Post Technical" });
        }

        var myBook = new MyLibrary.Book("1234")

        expect(myBook.isWrittenBy("Not Jo Cranford")).toBe(false);
    });

});

If I now run the Bookshelf test followed by the Book test, the Book test will fail, because MyLibrary.Book was redefined in my first test.

This may seem obvious, and in this rather contrived example, very easy to fix – but it’s also very easy to overlook, especially if the stub is coming from another file. With a team of several developers, code with lots of dependencies, and a suite of tests that’s growing fairly quickly, it can cause quite a bit of annoyance!
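One manual safeguard – a sketch of an approach we could have taken, rather than what we actually did – is to remember the real constructor and put it back in an afterEach, so the stub can’t leak out of the suite:

describe("Bookshelf", function() {

    var realBook;

    beforeEach(function() {
        realBook = MyLibrary.Book;      // remember the real constructor
        MyLibrary.Book = function() {   // install the stub
            this.isWrittenBy = function() { return true; };
        };
    });

    afterEach(function() {
        MyLibrary.Book = realBook;      // restore it for later suites
    });

    it("should find all the books by a given author", function() {
        var shelf = new MyLibrary.BookShelf();
        shelf.addBook("1234");

        expect(shelf.findBooksBy("somebody").length).toBe(1);
    });

});

That works, but it relies on every developer remembering to write the afterEach.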

We’ve solved the problem for now by adapting the build to run each spec file in its own sandbox. Each spec file tests one object, and so far this approach is working. However, it does cause the build to run slower, which is fine right now since Jasmine is so fast and we don’t have that many tests, but it may become more painful in the future.

The longer term solution is of course to avoid writing code, even JavaScript code, with nasty dependencies and use Dependency Injection instead :)
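As a rough sketch of what that might look like here (the createBook factory parameter is my own invention, not part of the real codebase), BookShelf could be handed a way to create Books instead of reaching for the global:

MyLibrary.BookShelf = function(createBook) {

    var books = [];

    this.addBook = function (isbn) {
        books.push(createBook(isbn));
    };

    // findBooksBy as before ...
};

// Production code supplies the real constructor ...
var shelf = new MyLibrary.BookShelf(function(isbn) {
    return new MyLibrary.Book(isbn);
});

// ... and a test supplies a stub, without ever touching a global
var testShelf = new MyLibrary.BookShelf(function() {
    return { isWrittenBy: function() { return true; } };
});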

Example code is on GitHub here.