Upgrading to RSpec 3

I recently upgraded our Rails application from RSpec 2.99 to RSpec 3, which is currently in its second beta. I was expecting the upgrade to be nice and straightforward, but sadly it was not! This was partly because we had been using RR for mocking and stubbing, and its syntax is not compatible with RSpec 3.

In the process of upgrading I learned a bunch about the new RSpec “allow” syntax, which in my opinion is far nicer than the RR one we were using.

Here’s how to stub:

allow(thing).to receive(:method).with(args).and_return(other_thing)

Mocking can be done in the same way by substituting “expect” for “allow” – although in most cases the tests read better if you stub the method and then test that the stub received it, using the following matcher:

expect(thing).to have_received(:method).with(args)

Note that this is different from the previous RR syntax, which was expect(thing).to have_received.method(args).

You can also use argument matchers, for example:

expect(thing).to have_received(:method).with(a_kind_of(Class))

And you can verify how many times a method was called:

expect(object).to have_received(:method).at_least(:once)
expect(object).to have_received(:method).exactly(3).times

The .with(args) and .and_return(other_thing) parts are optional. You can also supply a block to provide a substitute implementation:

allow(thing).to receive(:method) { |args| block }

Or call through to the original method:

allow(thing).to receive(:method).and_call_original

Another thing we used fairly often was any_instance_of. This is now cleaner (RR used to take a block):

allow_any_instance_of(Class).to receive(:method).and_return(value)
allow_any_instance_of(Class).to receive(:method) { |instance, args| block }

If you pass a block, the first argument the block receives is the instance the method was called on.
In RSpec 3, be_false and be_true are deprecated. Instead, use eq false or eq true. You can use be in place of eq, but when the test fails you get a longer error message, pointing out that the failure may be down to comparing object identity, which is irrelevant and kind of annoying.

Using RSpec mocks means that we can create new mock or stub objects using double("ClassOrName") rather than Object.new, which results in tidier error messages and clearer test code.

Stubbing a chain of methods can also be handy – I only found one place where we used it, but it is useful when chaining together methods to search models.

allow(object).to receive_message_chain(:method1, :method2)

More info:

  1. https://relishapp.com/rspec/rspec-mocks/docs
  2. https://github.com/rspec/rspec-mocks

Update: it turns out I was missing a configuration option in RSpec. It should have worked with RR by doing this:

RSpec.configure do |rspec|
  rspec.mock_with :rr
end

Thanks Myron for clearing this up :)

Simple Design and Testing Conference

The Simple Design and Testing conference consists of two days of Open Space discussions between a small group, primarily developers with a keen interest in Agile. It kicked off in London on March 12th with lightning talks for introductions, before lining up the Open Space sessions.

On Day 2, Liz Keogh held two sessions that were closer to presentations than discussions: Feature Injection and The Evil Hat. Both were excellent, and it was quite a nice contrast to the open space sessions to have a couple that were more structured.

Tell Don’t Ask

Tom Crayford kicked off the session by asking what if all methods returned void? There was a lot of discussion about the patterns followed in the “GOOS” book before the group moved on to talking more about functional programming. Although I didn’t contribute much, the discussion was interesting – I’d love to try a coding kata one day with all methods returning void; it sounds like an interesting challenge and I’d like to see how it turns out :)

Writing Tests against a Legacy Codebase is a Good Thing

I was looking for good ways to convince people that doing testing – unit and acceptance tests – against a legacy codebase is a Good Thing. I thoroughly enjoyed the discussion and plan to post more about it later, but what I found most interesting was the difference of opinion even within the group. Coming around to really, truly seeing and believing in the value of writing and maintaining automated tests can be a very long process.

Speculative Coding

Many teams and organizations still build software a layer at a time, with teams divided along technology and architectural lines. Entire APIs are built before the client that will consume them is started, with work being wasted because we speculate as to what will ultimately be needed. Finding ways to get fast feedback such as attacking the most complex and interesting scenarios first can help to avoid this waste.

Wicked Problems

I’m still a bit of a newbie when it comes to Systems Thinking, and I hadn’t heard of Wicked Problems. A wicked problem is one that doesn’t have a definitive solution: you don’t know when you’re done, and you can’t experiment, because every possible solution is a one-shot operation. Like testing software – you never know the best ways to test the software until you’ve done it. Liz pointed out that the same could be said of analysis. It was an interesting way of looking at things, and I definitely left the session with even more respect for testers.

Simplicity Debt

If we try to keep things simple, do we just defer complexity to a later date? Complex problems usually require complex solutions. We quickly realized that we didn’t really have a common understanding of “Simple”. Does it mean the least amount of effort, or the most elegant solution? Is simple really the opposite of complex, or do we actually mean the opposite of compound? I think we all came away from that session thinking wow, simplicity really is quite complex.

Feature Injection

Liz explains feature injection as a pull system to derive the requirements for the features we want by starting with the vision for a project. To meet the vision, we must achieve goals, which require the system to have certain capabilities. Features work together to form capabilities, and features are further decomposed into stories. Liz suggested that stories having true business value in their own right is often a myth – the main value of stories is that they can be completed on their own and provide a means to get feedback as fast as possible. Most stories can’t be shipped alone.

The other really useful insight from this session was that we often talk about stories from the point of view of the user, but in actual fact most of them are not for the user’s benefit. Does the user really want to see adverts on the page? If we think about the story from the stakeholder’s point of view, it’s far easier to articulate the goal – As a stakeholder, I want adverts on the page, so that when users look at the page I make money.

Why do we estimate?

Estimates, Daniel Temme proposes, are lies. There’s no value in lies, so why do we persist in trying to estimate all of our work? Why can’t we just put the work on the board and pull it in, and it’s done when it’s done? It’d probably be done faster if we weren’t wasting time estimating (and complaining about it). There are companies out there like Forward and DRW who are doing this successfully, blogging and talking about it. I haven’t made up my mind where I stand yet on this one.

Liz’s Evil Hat

I thoroughly enjoyed the Evil Hat talk. I think I just like hearing other people’s horror stories, and Liz has some great ones. It’s also a really interesting point of view to take, which is that most measurements and metrics are really just opportunities for people to game the system. Even if you don’t do it deliberately, that little devil is always on your shoulder, whispering in your ear.

Types of Testing

This session had been born out of a confusion in a previous one between terms for tests like functional, integration and acceptance. I think in the end, we were in general agreement that functional and acceptance tests and scenario tests are pretty much all the same thing. Integration tests usually test the interaction between two or more objects, but not the entire system. In the end, I think the most important thing is that your team have a common understanding of what you mean. Your functional test by any other name would still be green and shiny :)

Jasmine Testing Dojo

We attempted to run a Jasmine testing dojo later in the afternoon, but ran into technological challenges (Liz had the only projector adapter, and she’d left for lunch …). Still, we managed to write a few tests and use some spies! I’d love to create a short Jasmine testing workshop to run at a user group in future, so watch this space!

Retrospective

We all agreed the weekend had been a great success, although for next time hopefully we can get more publicity and a slightly larger group. Finally, off home to rest my brain and have a well deserved glass of wine …

Agile 2006: Agile Testing – Brian Marick

Sunday 16:15-17:00

Summary

One role of an agile tester is to ask questions and catch the error and exception conditions. Testers generally focus on what goes wrong. They have a familiarity with bugs, they look at bugs, learn from them, and generalize. In agile projects, developers should get better at anticipating bugs.

With test-driven development, the UI can be created first, then the logic implemented. The sequence is:
Write a failing test -> make it pass -> clean up the code.

Cleaning the code is very important.
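That sequence can be sketched in a few lines; here’s a toy example using Minitest, which ships with Ruby (the fizz function is invented for illustration):

```ruby
require "minitest/autorun"

# Step 2: the simplest code that makes the test pass.
def fizz(n)
  (n % 3).zero? ? "Fizz" : n.to_s
end

# Step 1: the failing test is written first (it fails until fizz exists).
class FizzTest < Minitest::Test
  def test_multiples_of_three
    assert_equal "Fizz", fizz(9)
  end

  def test_other_numbers
    assert_equal "4", fizz(4)
  end
end

# Step 3: with the tests green, clean up the code, re-running after each change.
```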

Exploratory testing or rapid testing is another form of agile testing.