Agile 2006: The Test-Driven Development Pairing Game – Peter Provost, Mitch Lacey, Brad Wilson

Thursday 15:30 – 17:00

Abstract

A new twist that combines test-driven development, or test-driven design, with pair programming to create a game.

Summary

This was the only session of the entire conference that I had decided in advance there was no way I was going to miss. I’d heard about the technique and was really interested to find out more so that I could take it home. I wasn’t disappointed – Peter and Brad have so much energy and passion, and their constant banter made the session really entertaining.

It was a live demonstration, and the example they chose (building a simple bank account object with unit tests) worked really well – I intend to use the same example when I demonstrate the technique back to my team.

There are three legs to the development “stool”:

  • Test-driven design
  • Real time code review
  • Refactoring

Developers want to get the most out of their time, and traditional pair programming can be boring for the navigator. The driver is the tactician and the navigator should be the strategist, but in practice the navigator is often thinking about email, lunch, or what to do next.

TDD evolves the design of the system. In TDD, a test is written before writing the code, then just enough code is written to pass the test, then the design is improved: write – run – refactor.

Refactoring improves the design without changing the functionality – if a change needs new tests, the behaviour has changed, so it isn’t a refactoring.
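To make the cycle concrete, here is a minimal sketch in the spirit of the bank account example from the demo – written in Python with unittest rather than the presenters’ actual stack, with class and method names of my own invention:

    import unittest


    class Account:
        """Step 2: just enough code to make the test below go green."""

        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount


    class AccountTests(unittest.TestCase):
        def test_deposit_increases_balance(self):
            # Step 1: written first - it compiled but stayed red until
            # Account.deposit above was filled in.
            account = Account()
            account.deposit(100)
            self.assertEqual(100, account.balance)

        # Step 3: with the bar green, improve the design (rename, extract,
        # remove duplication) without adding behaviour or new tests.


    if __name__ == "__main__":
        unittest.main()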

In the TDD Pairing Game, the keyboard is passed at every step, which means it goes back and forth very rapidly – each step should take no more than 3–4 minutes. The pair negotiate over any disagreements, so the design emerges through negotiation, and the test writing drives the design.

  • Step 1 – write a test that compiles but doesn’t pass. Pass the keyboard.
  • Step 2 – write just enough code to make the test go green. Pass the keyboard.
  • Step 3 – either write another test or go back and do one refactoring, then pass the keyboard again (see the sketch below).
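Continuing the same hypothetical bank account sketch (repeated here so it stands alone), one round of the game might look like this – the comments mark where the keyboard changes hands:

    import unittest


    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

        # Step 2 - the partner writes just enough code to go green,
        # then passes the keyboard back.
        def withdraw(self, amount):
            self.balance -= amount


    class AccountTests(unittest.TestCase):
        # Step 1 - one developer writes a test that compiles but fails,
        # then passes the keyboard.
        def test_withdraw_reduces_balance(self):
            account = Account()
            account.deposit(100)
            account.withdraw(30)
            self.assertEqual(70, account.balance)

        # Step 3 - the first developer chooses: another failing test
        # (overdrafts, say), or one refactoring, before passing again.


    if __name__ == "__main__":
        unittest.main()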

The rules of the game are simple:

  • If the test light is red, the next step must be to make it go green.
  • If the test light is green, choose whether to write a passing test, write a failing test, do a refactoring, or stop and think strategically about the system design – diagramming it on the whiteboard, for example.
  • If the system doesn’t compile, the keyboard can’t be passed.
  • Don’t write too much code – throw exceptions for anything that isn’t implemented yet (see the stub sketch after this list). You’ll be surprised where it leads you when you hold back.
  • Write one test at a time to evolve the design.
  • No email, no phones with email, no IM.
  • Check in as often as possible – at every green light, if it’s cheap enough. Source control is a big giant undo button.
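As a hedged illustration of the “throw exceptions” rule (my own example, not the session’s code): stubbing an unimplemented member keeps the code compiling – so the keyboard can still be passed – without writing speculative logic.

    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

        def transfer_to(self, other_account, amount):
            # Stubbed so the code still compiles and the keyboard can be
            # passed; a future test will force a real implementation.
            raise NotImplementedError("transfer_to is not implemented yet")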

Tests have three separate code “paragraphs”: arrange, act, assert. Setup and teardown are considered evil – factory methods can be used instead. Whether or not we unit test edge cases is a design question.
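A sketch of that shape (again my own Python illustration, not the presenters’ code): the three “paragraphs” are separated by blank lines, and a small factory method replaces a setup fixture.

    import unittest


    class Account:
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            self.balance -= amount


    def account_with_balance(balance):
        # Factory method instead of setup/teardown: each test asks
        # explicitly for the object it needs, so nothing is hidden.
        account = Account()
        account.deposit(balance)
        return account


    class AccountTests(unittest.TestCase):
        def test_withdraw_reduces_balance(self):
            # Arrange
            account = account_with_balance(100)

            # Act
            account.withdraw(30)

            # Assert
            self.assertEqual(70, account.balance)


    if __name__ == "__main__":
        unittest.main()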

Ideas for new tests are kept as a list of comments, which evolves as you start coding. If a test gets too big, comment it out, then go back and refactor.
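For instance (again my own sketch), the list might simply live as a comment block at the top of the test class:

    import unittest


    class AccountTests(unittest.TestCase):
        # Test ideas - a running list that evolves as the coding starts:
        #   - deposit increases the balance
        #   - withdrawing more than the balance
        #   - transfers between two accounts
        # Ideas become real tests (or get deleted) one at a time; a test
        # that grows too big gets commented out and refactored later.
        pass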

Recommended further reading

Avoid Setup and Teardown – http://www.agileprogrammer.com/oneagilecoder/archive/2005/07/23/6261.aspx

Agile 2006: Refactoring Games – Alan Harriman, Joshua Kerievsky

Monday 11:00 – 12:30

Abstract

There are many different types of code smells, refactorings, and quality criteria. The goal of this session was to use games to learn several techniques related to refactoring.

Summary

All of the games are played with a card deck produced by Industrial Logic, containing green Quality cards, yellow Code Smell cards, and blue Refactoring cards. The deck is fairly extensive – some examples are included below. Each card also contains an explanation of the quality, smell, or refactoring.

Overall, this was a decent, eye-opening session that introduced a number of concepts in an interesting way. I do intend to try these with my team and report back on their success!

Game 1 – Rating the value of different types of refactoring using Refactoring War
This game reminded me of Top Trumps. Each player has a set of Refactoring cards and turns over the top card; the team rates each card that’s showing. The player whose card scores the highest keeps all of the cards.

The cards are scored on two criteria: frequency of use and design impact. The team works together to assign a score between 1 and 5 for each criterion and then adds the two together – a card rated 4 for frequency and 3 for design impact scores 7, for example.

The “war” occurs when the team thinks that two or more cards are of equal highest value. In this case, players holding these cards flip over a new card on top and repeat the exercise.

The objective is simply to get teams discussing the different types of refactoring, becoming more familiar with the techniques, and starting to recognize when they can be applied. Without more background in refactoring, the game feels strange the first time it’s played, but over time it would probably be a useful exercise for keeping a team thinking about the code it writes, recognizing smells, and applying refactoring techniques.

Game 2 – Identifying relevant refactorings in Refactoring Hunt
Players study several smelly code snippets and evaluate which of the refactoring cards could be applied to clean up the code. Each card can be applied to one piece of code or set aside if it doesn’t apply to any. Once all of the cards have been assigned, the team reviews the suggested solution.

The objective of the game is to help the team start to recognize how to apply refactorings. There was a lot of disagreement over whether the suggested solution was right, which wasn’t as constructive as it could have been. A better variation might be to say that there’s no “right” solution and simply let teams compare and discuss their own.

Game 3 – Relating code smells to qualities in Qualitaire
The team lays out the large green Quality cards, then decides for each Code Smell card: “Which quality would be the best fix for code that suffers from this smell?” Again, once the game is complete, the team reviews the suggested solution.

The objective of this game is to familiarize the team with code qualities and to identify the smells that prevent good-quality code. As in Refactoring Hunt, the discussions were the most valuable part of the game, and having a suggested solution caused more disagreement than good discussion.

Example cards
Qualities: highly cohesive, loosely coupled, no duplication, simple, testable.
Refactorings: Substitute algorithm, unify interfaces, change reference to value.
Smells: Oddball solution, duplicated code, long parameter list, long method, feature envy.

Key points to apply

  • The team can learn to refactor in a structured way by familiarization with common code smells, qualities, and specific refactoring techniques.
  • The refactoring games are a great way of introducing this information to the team.
  • The games would have the most value if they focused on discussing solutions, rather than steering the team towards one “right” answer.