Legacy-code retreat - Rainsberger's Trivia



8:30 AM on a Saturday, right after a three-day work retreat. My phone alarm reminds me that I have to unset it for weekends. I facepalm internally while turning the noise off, and see a calendar notification: "You have a legacy-code retreat, like, RITE NAO". Your hero wakes up and manages to make it to Codurance HQ by 9:30. Halima greets me with a smile as usual, and pairs me with someone who was coding alone. Pheeew!

(But seriously, only devs are masochistic enough to hold events on a Saturday at 8:30!)

Join us for a hands on Session, full day of Legacy Code retreat with @londonswcraft https://t.co/qv7xtUGHWf @codurance
It is the day long version of the two hours session we ran at @TECHKNOWDay London 2018 #Techknowday

— Haly (@hkoundi) 8 April 2018

TL;DR

You don't have to deal with me; you can find the contents of the retreat here. And the page of the event is here.

Basically:

  • Identify inputs and outputs, generate a golden master and run a test that compares the input/output of the code with your golden master.
  • Refactor by extracting methods and using better naming when possible.
  • Eliminate as many conditionals as you can. If you can't, use guard clauses.
  • Write unit and characterization tests until you can get rid of the golden master.
  • You can now add new features (ideally test-driving them) or change the code.

If you want more detail though, keep reading!

The format

The retreat was about working on a piece of legacy code and learning different techniques to bring it to a state where you don't have any fear of changing, deleting or adding new code.

The code was Rainsberger's fork of a very basic Trivia game with dummy questions. It decides the winner randomly, and it also prints a lot of stuff to the standard output. The repo comes with versions of the app in several languages.

There were six 45-minute sessions, each starting with an introduction to a specific technique and ending with a small retro followed by a break. During the exercises, the Codurance team members were around to answer questions.

We had to find a pair for each session, then change pair and language for the next one, pretty much as in the Global Day of Coderetreat (an event where you work on Conway's Game of Life with a different constraint in every session).

At the end of the day we did a retro for the whole retreat.

Session 1 - Understanding the code

Language used: Ruby

This was mostly about reading the code and taking notes on the business rules and behaviour of the system. I enjoyed it, as it was very visual and creative. We also ran the code in the terminal and played with it, which made it easy to identify the inputs and outputs of the system.

Session 2 - Writing a Golden Master

Language used: Ruby

The Golden Master is a technique to use when there is a clear input and output at the system level. So the main things to look for are IO (printing to stdout/stderr or to a file, for example) and random logic; everything beyond that is internal state.

This is the fastest way to get end-to-end tests that cover the whole system, but it's just a starting point: a safety net that freezes the behaviour of the system under test (SUT) and observes what it's doing. It is not a replacement for unit tests.

Steps:

  1. Find clear outputs of the system: this Trivia game prints a lot of stuff to stdout, and it generates dice numbers randomly.

  2. Find a way to capture the output of the system without changing the production code: in our case, you could redirect the stream of the console to an in-memory stream or a text file.

  3. Find a pattern in the outputs: Is it text? Is it a data tree? In this case we have text and random numbers.

  4. Generate enough random inputs and persist each input/output pair: This is very similar to property-based testing. You generate inputs pseudo-randomly and inject them to produce the outputs. The inputs have to cover most of the system, but the tests should still run in seconds, so this is where you decide whether to generate 1,000, 10,000 or a million entries. Then you persist the input/output pairs (this is your golden master) and run your test suite against the system under test (SUT), whose outputs have to match your golden master fixtures. In the Trivia example, the random dice numbers are the inputs and the stuff printed to stdout is the output. We created the golden master using a random number generator with a custom seed: for every seed we ran the production code and redirected the output to a fixture text file named after the seed (see the sketch after this list).

  5. Write a system test to check the SUT against the persisted golden master: in our case we wrote tests that used the seeds as inputs and asserted that the output matched the fixtures. At this stage we also checked the speed of the test suite.

  6. Commit the test: Once you are green, you commit the test and fixtures so that you can always revert to a working state.

  7. Check test behaviour and coverage: Are all branches covered? If we change the code, do the tests go red? If not, we cannot refactor that code! We would have to add more input/output fixtures to cover that part. We fixed this, and it allowed us to refactor later without fear.
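
Putting steps 4 and 5 together, here is a minimal sketch in Python (the repo ships a Python version too, although we used Ruby). The `trivia.play(rand)` entry point is an assumption: a hypothetical wrapper that runs one full game with the given random generator and prints to stdout.

```python
# golden_master.py - a sketch of steps 4-5. `trivia.play` is hypothetical:
# a wrapper that runs one full game using `rand` for the dice rolls.
import io
import random
from contextlib import redirect_stdout
from pathlib import Path

import trivia  # assumed wrapper around the untouched production code

FIXTURES = Path("fixtures")
SEEDS = range(1000)  # enough runs to hit most branches, still fast

def run_game(seed: int) -> str:
    """Run one game with a seeded RNG, capturing everything it prints."""
    buffer = io.StringIO()
    with redirect_stdout(buffer):
        trivia.play(random.Random(seed))
    return buffer.getvalue()

def generate_golden_master() -> None:
    """Step 4: persist one fixture per seed (run once, then commit)."""
    FIXTURES.mkdir(exist_ok=True)
    for seed in SEEDS:
        (FIXTURES / f"{seed}.txt").write_text(run_game(seed))

def test_against_golden_master():
    """Step 5: the SUT must reproduce every persisted fixture exactly."""
    for seed in SEEDS:
        expected = (FIXTURES / f"{seed}.txt").read_text()
        assert run_game(seed) == expected, f"behaviour changed for seed {seed}"
```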

Session 3 - Refactoring: Extracting and Renaming

Language used: Python

Refactoring is improving the design of the code without changing its behaviour.

This session was about finding a snippet of code that we understood and extracting it into a method, and about finding better names for the concepts in the game. After every refactoring, we ran the golden master to make sure we didn't break anything.

We had the constraint of extracting only query methods: as opposed to command methods, they don't change the state of the system, they just return a value.
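
As a hypothetical example (simplified, not the actual kata code), a winning condition buried inline in the game loop can be pulled out as a named query:

```python
class Game:
    def __init__(self):
        self.purses = [0] * 6
        self.current_player = 0

    # Before, callers checked `self.purses[self.current_player] == 6` inline.
    # The extracted query reads state and returns a value; it changes nothing,
    # so it is safe to extract and easy to verify with the golden master.
    def current_player_has_won(self) -> bool:
        return self.purses[self.current_player] == 6
```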

All I can say is, this session felt sooooo good.

Session 4 - Refactoring: Simplifying Conditionals

Language used: Python

In this session we had to get rid of conditionals, for example by removing an else branch and using a guard clause at the top of a method. A guard clause checks the special cases first and returns early; otherwise execution falls through to the normal path. Martin Fowler explains it better.

Again, we ran the golden master after every change.
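
For instance (a made-up snippet loosely inspired by the kata's penalty-box logic, not its actual code), guard clauses flatten the nesting like this:

```python
# Before: the special case wraps everything, and the normal path hides
# inside an else branch.
def announce_roll(in_penalty_box: bool, roll: int) -> str:
    if in_penalty_box:
        if roll % 2 == 0:
            return "stuck in the penalty box"
        else:
            return f"getting out! rolled a {roll}"
    else:
        return f"rolled a {roll}"

# After: guard clauses handle the special cases first and return early;
# no else branches remain and the normal path reads flat.
def announce_roll(in_penalty_box: bool, roll: int) -> str:
    if not in_penalty_box:
        return f"rolled a {roll}"
    if roll % 2 == 0:
        return "stuck in the penalty box"
    return f"getting out! rolled a {roll}"
```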

Session 5 - Characterisation and Unit Tests

Language used: Java

Here we focused on a subset of the code. For the characterisation tests we tried to test one specific behaviour at a time. Testing behaviour doesn't necessarily mean a 1:1 mapping between tests and methods, though; that's where the unit tests came in.

For these tests we wrote a test that we knew would fail, ran it, let the failure tell us what the actual behaviour was, and then updated the test with that output.
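
In Python terms (we were in Java; the module and attribute names here are assumptions based on the kata's Python version), the flow looks something like this:

```python
from trivia import Game  # the kata's code; the module name is an assumption

def test_adding_one_player():
    game = Game()
    game.add("Chet")
    # 1. Start with a deliberately wrong expectation (say, == 0) and run it.
    # 2. The failure message tells you the actual behaviour.
    # 3. Pin that actual behaviour as the expectation:
    assert game.how_many_players == 1
```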

At this stage we found most of the bugs in the code. For example, nobody was calling the method that checked whether the game was playable, i.e. whether there were at least two players. With no players at all, the code would throw an error; with just one player, however, you could play and it would work!

Session 6 - Introducing new Features

Language used: Java

The suggestion was to add a new category, but we realized this would break the golden master and require changing a lot of code. So we went with the other thing we could do: fixing a bug.

We fixed the number-of-players bug and added more unit tests for it.
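
Sketched in Python for consistency with the other snippets (we were in Java, and `is_playable` follows the kata's Python naming, so treat the identifiers as assumptions):

```python
from trivia import Game  # module and method names are assumptions

def test_not_playable_with_one_player():
    game = Game()
    game.add("Ada")
    # The game needs at least two players; before the fix, nothing
    # in the game loop ever called this check.
    assert not game.is_playable()

def test_playable_with_two_players():
    game = Game()
    game.add("Ada")
    game.add("Grace")
    assert game.is_playable()
```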

This gave us the confidence to address the new-category task. Should we regenerate the golden master for it? We didn't like that idea, so we started working on a new Category class.

This worry about breaking the golden master was only there because in 45 minutes we didn't have enough time to reach 100% test coverage. In real life, before this step you would already have gotten rid of the golden master and would have tests that you trust.

Wrap up

We mentioned Michael Feathers' book, Working Effectively with Legacy Code, a lot, and we applied some of the techniques in it.

I also met Ann Witbrock, whom I had been talking to sporadically on Twitter. Now we know each other in person! She wrote a good summary of the event in a Twitter thread:

Good day in a legacy code workshop at @codurance - exercising all the @mfeathers techniques and pairing with lovely people in slightly forgotten languages. Very tasty lunch too, thanks all!

— E-quipper (@annwitbrock) 21 April 2018

Retro

What went well

  • I was an hour late but didn't miss much and they weren't too mad at me!
  • I enjoyed switching languages!
  • Lunch was provided by Codurance, and it was plentiful and filling
  • I found friends that I always bump into at other events and got to pair with them!
  • I still remember Java!
  • The discussions with my pairs about how to do things were really cool
  • I finally understand the golden master, how to generate it, and how to use it!

What went wrong

  • We didn't manage to finish the golden master, but in the next session I paired with someone who did.
  • 45 minutes didn't feel like enough time, but the focus was on practice, not on finishing!
  • It's not clear from the session what to do when you want to change behaviour and would have to break the golden master. They just said that breaking the golden master is something you should never have to do
  • Some people complained that this example was much easier than real life projects
  • One of the male Codurance team members of the morning sessions was systematically interrupting Halima while she was trying to explain the sessions, which was very annoying because I wanted to hear the end of her sentences. I should have raised my hand and said something. No excuses. Shame on me.

What I learned

  • I was a bit confused at first: I thought I wouldn't have to do the custom-seed part, as I could just fake the random generator in the tests and make it return what I wanted. The problem is that several random generations were happening and we needed all possible combinations, so seeding was the way to go.
  • I am very obsessed with clean and readable code, so my eyes cried blood a lot, and it was hard to contain the urge to clean, refactor or even rewrite the whole thing from scratch. I think I'm getting better at resisting and embracing chaos as the years go by, though.
  • The people who did it in JavaScript had problems because it is not possible to seed Math.random()! I wish I had paired with them to see how they solved this, but JavaScript and I are not good friends.
  • The golden master is so fast to set up. Covering all the code with behaviour and unit tests would have taken us really long, and this is a small app; imagine a real-world one.

What would I do differently

  • Try new languages. I defaulted to my pair's language, but that was Ruby, Python and Java, which I already know. I should have asked the attendees whether any of them was not using -insert languages I already know-.
