Friday, September 7, 2012

Making the Programming Pain Stop

Johannes Brodwall is organizing a panel debate at the upcoming JavaZone 2012 conference under the title “Making the programming pain stop”. I'm not on the panel, but here are my ideas about what's causing programming pain and how we can stop it.

Let's start by defining programming pain. I think it's what we feel when things are not what they're supposed to be. I'm thinking mainly of frameworks or tools that make us write extra code or unnecessary lines in configuration files. Or having to maintain code we can't understand (whether we're fixing bugs or adding new features) because the method and variable names don't make sense, there's virtually no documentation (or far too much of it), and unit tests are either absent or testing the wrong things. I think we've all been there. If not, just pick a project of your own from two or three years ago and see for yourself.

Now how can we make the programming pain stop? Johannes Brodwall has two suggestions: either fire all architects and project managers, or ask the developers to “grow the **** up”. I think the former is unrealistic, and wouldn't make a big difference anyway. (Sorry to break the news to you, architects and project managers.) And to all the developers smiling right now: if it did make a difference, it would probably be for the worse, because I really think the big problem is that developers indeed need to grow up and get their act together.

For one thing, I'm amazed that there are still so many developers out there who think that writing automated unit tests is a waste of time. I can accept that “pure” TDD maybe doesn't work for you, and that you prefer to write your automated unit tests after you write your source code; I'll be very suspicious about the quality of both your source code and your unit tests, but it's still better than no automated unit tests at all.

I know many project managers have to take part of the blame on this one too. Some of them still think automated unit testing is just gold plating. In my experience, it's very hard to write unit tests for anything close to gold plating. How would you do that, write a unit test for functionality you're going to add “just in case” or because it's nice to have? The unit test will either reveal that the functionality you want to add is useless, or that it isn't gold plating at all. I think one of the main reasons why TDD speeds up development (and it does) is that it keeps developers from gold plating their code.

But there is more that developers should start doing. Unit testing and TDD by themselves are not enough. Aim for high test coverage, not just system-wide but in every single class you write, and think hard about why you really can't unit test the parts that aren't covered by automated unit tests yet. Use static code analysis tools with a sensible set of rules, and be strict about them. And if you're ready for it, have a look at mutation testing.
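As a hedged illustration of what mutation testing adds on top of plain coverage (the class and its numbers are invented for the example), consider a single boundary condition tested with JUnit 4. The first two tests already give full line and branch coverage; only the third one kills the mutant that a tool like PIT would generate by turning >= into >.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountCalculatorTest {

        // Hypothetical class under test: 10% discount from 100 euro upwards.
        static class DiscountCalculator {
            double discountFor(double orderTotal) {
                return orderTotal >= 100.0 ? orderTotal * 0.10 : 0.0;
            }
        }

        private final DiscountCalculator calculator = new DiscountCalculator();

        // These two tests already give 100% line and branch coverage...
        @Test
        public void noDiscountBelowTheThreshold() {
            assertEquals(0.0, calculator.discountFor(99.99), 0.0001);
        }

        @Test
        public void discountAboveTheThreshold() {
            assertEquals(15.0, calculator.discountFor(150.0), 0.0001);
        }

        // ...but only this one kills the mutant that turns >= into >.
        @Test
        public void discountExactlyAtTheThreshold() {
            assertEquals(10.0, calculator.discountFor(100.0), 0.0001);
        }
    }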

There are other things too that developers are often sloppy about. Is it really that hard to pick good names for all your methods and variables? You can afford to spend ten seconds on every name, and if you can't come up with a good name after ten seconds, ask yourself whether you need the method or the variable at all. The fact that you can't come up with a good name may be an indication that you don't know what you're doing. Spend some time writing documentation, but don't spend time writing incorrect, incomplete or unnecessary documentation. And that includes writing a sensible message when you commit your code to your version control system.
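To make that concrete, a made-up before-and-after (the domain and the formula are hypothetical, but the difference in readability is not):

    public class InterestCalculator {

        // Before: the name tells you nothing, so every caller has to read the body.
        public double calc(double a, double b) {
            return a * b / 4;
        }

        // After ten seconds of thought: the intent is visible at the call site.
        public double quarterlyInterestOn(double principal, double yearlyRate) {
            return principal * yearlyRate / 4;
        }
    }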

While we're talking about version control systems: merge your code to the right branch(es) straight away. Don't even consider doing it later (like when you'll have more time; really?) or when you can do all the merges in one go. It's not going to work, you'll have lots of conflicts, and since you'll be out of context, you'll probably spend more time fixing things than if you had done it right away. And sending a merge job to one of your colleagues is like asking to be fired on the spot.

Update your issue tracking system as soon as you start working on a new task, and every time it changes status. Add comments that will help testers test the task, and chances are they'll understand much faster why your task really is done, instead of sending it back to you because they couldn't figure out what's changed and how it should be tested.

Finally, if you're one of the hot shots in your company developing a framework or some services that will be used by other developers, how about some sensible defaults? Do I really have to specify that all my numbers are decimal? (Oh, by the way, this text uses the Latin alphabet.) And if you're a developer working on a project where all the numbers are octal, write a convenience method instead of spreading the number eight all over your code.
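A minimal sketch of the kind of convenience method I mean, assuming a hypothetical project where permission masks arrive as octal strings; the class and method names are invented:

    public final class Permissions {

        private Permissions() {
        }

        // The one place that knows these numbers are octal, instead of
        // Integer.parseInt(value, 8) being scattered all over the code base.
        public static int parsePermissionMask(String octalValue) {
            return Integer.parseInt(octalValue, 8);
        }
    }

    // A caller never has to mention the number eight:
    //   int mask = Permissions.parsePermissionMask("0755");  // 493 in decimal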

All the things listed above cause a lot of programming pain, not just for your colleagues, but for yourself too. Except for the TDD part, no architect and no project manager is involved in any of this, and no architect or project manager will ever stop you from doing these things correctly. So why don't you? It's amazing how much time you can save on boring stuff if you put in a few extra seconds to do it immediately and correctly.

Thursday, April 12, 2012

Testing Better than the TSA

Yesterday I came across a post titled “Testing like the TSA” by David at 37signals. He makes a case against over-testing, and argues that developers who fall in love with TDD have a tendency to over-test in the beginning. Of course, when somebody learns or discovers a new tool, there's always the danger of using it too much; in fact, I'd say it's part of the learning process to find out where the new tool is applicable and where it isn't. But judging from the seven don'ts of testing David lists in his post, my impression is that he should rather try to do more testing than less. Here are my comments:

1. Don’t aim for 100% coverage.
Sure, but then again, what is 100% test coverage? In any system there will be places where it doesn't make sense to write tests, because the code is trivial or because writing the tests would take more effort than you could ever save if one of them caught a bug. Typical examples of the latter are the interfaces, like the GUI or the main entry method on the command line. But at other times, 100% line coverage isn't enough, and even 100% branch coverage won't do, because it still leaves some important paths through your system untested. And to be honest, I've found myself writing unit tests to check logging messages, because they were a vital part of the system (there's a sketch of how below).

I know something is wrong when a whole system reports 100% test coverage, but the core classes should be pretty close to 100%. As a consequence, I do aim for 100% test coverage, even though I know I won't reach it. It's just like aiming for the bull's eye even when you're not good at darts. If you're not aiming for 100% (or the bull's eye), then what are you aiming for, and what will the result be?
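About those logging messages: below is a hedged sketch of how I'd check them in plain Java with JUnit 4. The logger name, the message, and the production call it stands in for are all invented; the idea is simply to register a handler that collects whatever gets logged and assert on it afterwards.

    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.logging.Handler;
    import java.util.logging.Level;
    import java.util.logging.LogRecord;
    import java.util.logging.Logger;

    import org.junit.Test;

    public class AuditLoggingTest {

        // Collects every record the logger publishes during the test.
        static class CapturingHandler extends Handler {
            final List<LogRecord> records = new ArrayList<LogRecord>();

            @Override
            public void publish(LogRecord record) {
                records.add(record);
            }

            @Override
            public void flush() {
            }

            @Override
            public void close() {
            }
        }

        @Test
        public void logsAWarningWhenAPaymentIsRejected() {
            Logger logger = Logger.getLogger("payments.audit");
            CapturingHandler handler = new CapturingHandler();
            logger.addHandler(handler);
            logger.setLevel(Level.ALL);
            try {
                // Stand-in for the production call that is supposed to do the logging.
                logger.warning("Payment rejected for order 42");

                boolean found = false;
                for (LogRecord record : handler.records) {
                    if (record.getLevel() == Level.WARNING
                            && record.getMessage().contains("Payment rejected")) {
                        found = true;
                    }
                }
                assertTrue("Expected a warning about the rejected payment", found);
            } finally {
                logger.removeHandler(handler);
            }
        }
    }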

2. Code-to-test ratios above 1:2 is a smell, above 1:3 is a stink.
That just doesn't make sense to me. I'd say a code-to-test ratio below 1:2 sounds very wrong. Just think of a single function point, a condition for instance: you need at least one positive and one negative case to be able to say your code does what it's supposed to do, and that's really just the minimum. If you're doing unit testing right, you should probably have something between two and five unit tests per function point. The sketch below shows how quickly that adds up.
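A deliberately tiny, made-up example: one line of production code with a single condition already calls for two tests, and the test code ends up several times the size of the code it exercises.

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    public class AgeCheckTest {

        // One function point: a single condition.
        static boolean isAdult(int age) {
            return age >= 18;
        }

        // The bare minimum: one negative and one positive case.
        @Test
        public void seventeenIsNotAnAdult() {
            assertFalse(isAdult(17));
        }

        @Test
        public void eighteenIsAnAdult() {
            assertTrue(isAdult(18));
        }
    }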

3. You’re probably doing it wrong if testing is taking more than 1/3 of your time.
4. You’re definitely doing it wrong if it’s taking up more than half.
That's probably right, but only because your test code should be much simpler than the system you're trying to implement.

5. Don’t test standard Active Record associations, validations, or scopes. Reserve integration testing for issues arising from the integration of separate elements (aka don’t integration test things that can be unit tested instead). 
I don't know about those Active Record associations (I'm a Java programmer), but I agree that integration testing should be reserved for actual integration issues, and that things that can be unit tested should be unit tested instead.

6. Don’t use Cucumber unless you live in the magic kingdom of non-programmers-writing-tests (and send me a bottle of fairy dust if you’re there!)
I agree on this one. I like the idea of Cucumber, and I've seen some great talks about it at conferences, but I've never seen it used in a real system and have no idea how I would ever be able to use it in any system at all.

7. Don’t force yourself to test-first every controller, model, and view (my ratio is typically 20% test-first, 80% test-after).
I do force myself, and I know it works for me, but I also know it doesn't work for everybody. I can live with that (some can't). As a consequence, though, my ratio is pretty different. In a typical project, where I can write code exactly the way I want, I break the TDD rule about writing tests first in a different way than David does: I often find myself writing two or three unit tests before I start programming, especially when I start working on a new component or a new feature. Sometimes I have to write a second or a third unit test to get the first one right, e.g. to be able to decide what the component's interface should look like. I find that easier than getting it wrong the first time and having to start refactoring everything five minutes later. I guess this makes my ratio something like 40% multiple tests first, 40% single test first, and 20% tests after. The reason I still write a number of unit tests after the code is that test coverage reports and mutation testing often point me to branches and paths in my code that haven't been tested well enough. The sketch below shows what those first few tests typically look like for me.
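A hedged sketch of what I mean, using an invented example (a bit of the Roman numerals kata): the two tests were written before any production code existed, because together they force a decision about the interface; the small implementation shown alongside them is just what those tests eventually drove out.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class RomanNumeralTest {

        // Written first: this test forced the decision that the conversion
        // is a static method taking an int and returning a String.
        @Test
        public void convertsOneToI() {
            assertEquals("I", RomanNumeral.fromArabic(1));
        }

        // Written right after, still before any production code: it showed
        // that the interface also has to cope with subtractive notation.
        @Test
        public void convertsFourToIV() {
            assertEquals("IV", RomanNumeral.fromArabic(4));
        }

        // The minimal implementation the two tests above drove out.
        static class RomanNumeral {
            static String fromArabic(int number) {
                int[] values = {10, 9, 5, 4, 1};
                String[] symbols = {"X", "IX", "V", "IV", "I"};
                StringBuilder result = new StringBuilder();
                for (int i = 0; i < values.length; i++) {
                    while (number >= values[i]) {
                        result.append(symbols[i]);
                        number -= values[i];
                    }
                }
                return result.toString();
            }
        }
    }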

I don't like testing TSA-style either, i.e. writing lots of tests just to get a high test coverage number. That's coverage theater, as David puts it, and it usually leads only to a large number of brittle tests that break down after the first refactoring. But we should aim high when we write unit tests, and try to do better than the TSA. I'd rather put in another five minutes to get the tests right than spend an hour four months later trying to find out where the bug is hiding…