Peppermint Spencer has been on something of a hiatus since that last post about the seaside. We have been terribly busy taking a well-earned excursion to France, not to mention some crucial top-level parenting. Oh, and trying to push a large test department in a direction it doesn’t want to go.
The cure to the perennial problem of how to find time to blog amongst all this comes in the form of an Android app. Yes, this post comes direct to you from our trusty HTC Desire, where we’re currently enjoying a coach ride up the M40 prior to beer and curry in Oxford.
Anyway, enough of such pleasantries. My topic today is a matter of great pith and gravity: document addiction. You might never have heard the term, but it’s my attempt to hang a name on something that has blighted at least one or two of the test teams I’ve worked in.
The meaning of the term should, I hope, be fairly obvious. The background is an article I read a while back which coined the term “test addiction” to refer to the situation, no doubt familiar to a lot of us, where a team continually runs large numbers of manual tests, most of which always pass and so add little or no value. Worse, they leave no time to spend on more useful QA activities. These tests are often just there so that the daily status report can have a lot of green in it. They also cover our backs if it all goes south when the release is deployed, because the test team Did Its Job and Has The Metrics To Prove It.
Document addiction, then, is a variation on the same theme, only instead of running tests that don’t find defects, we’re writing documents that, bluntly, no one needs. They might be snazzy, breathtakingly erudite, and stored with loving care in a well-organised industry-standard version control system – hell, people might even read them occasionally – but they don’t fulfil a document’s sole raison d’être: to give the reader the information he needs.
So how does this happen? Well, here’s my story, which serves as an example.
Like many entering the world of testing, I soon realised that many employers won’t give your CV a second look unless it has the word ISEB on it. So I spent three pleasant days at a very nice venue in rural Cheshire taking the Foundation course. It had a steam room and self-service ice cream machine, so I was happy. The training manager got to spend some of her budget, so she was happy. I eventually got a better job that paid more, so my girlfriend was happy. So far, so good.
But of the things we covered on that course, I’d say only about a quarter was new material I could apply to the job I was doing. Of the rest, a quarter consisted of things I already knew from studying Computer Science at university, another quarter was irrelevant because it dealt with things like code coverage, which is more the developer’s problem than the tester’s, and the final and most dangerous quarter was all about the IEEE 829 standard: aka test documentation. It’s this last part that led me to bad habits that have taken five years and half a dozen different jobs to kick out of my system. Ironically, and I doubt I’m alone in this, it’s also the part I took on board most thoroughly. After all, it’s a darn sight easier to produce a templated, paint-by-numbers document than it is to think seriously about ways to improve your testing.
Let’s not beat about the bush here: documentation is important, vital even, to the health of any project, but there is a big difference between documentation and documents. Documents for their own sake are worse than useless. For that reason I have serious beef with 829. Here’s why:
First, it teaches us to express our plans in prose. Prose is ambiguous, imprecise, and it’s all too easy for poor content to be hidden by pretty formatting and judicious use of the correct buzzwords. The fact remains that if you have a complex system or test to explain, you can usually do it better with a mindmap, flowchart, input-output grid or annotated screenshot. There’s no rule that says testers aren’t allowed to use their creativity, so use your imagination and remember: just because prose is the default option doesn’t mean it’s the best. Incidentally, this applies just as much to business analysts as it does to testers.
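To make that a bit more concrete, here’s a purely made-up sketch of what I mean by an input-output grid. The discount rules and the function are invented for illustration, and I’ve written the grid as data feeding a pytest test simply because that’s a compact way to show it here; the same rows would sit just as happily in a spreadsheet or on a wiki page:

```python
# A hypothetical input-output grid for an imaginary discount rule, written
# as data rather than prose. Each row reads: given this basket total and
# this customer type, we expect this discount.
import pytest


def discount(basket_total, customer_type):
    """Invented stand-in for whatever system you're actually testing."""
    if customer_type == "staff":
        return 20
    if basket_total >= 100:
        return 10
    return 0


GRID = [
    # basket_total, customer_type, expected_discount
    (50,  "standard", 0),
    (100, "standard", 10),
    (250, "standard", 10),
    (10,  "staff",    20),
]


@pytest.mark.parametrize("basket_total, customer_type, expected", GRID)
def test_discount_grid(basket_total, customer_type, expected):
    assert discount(basket_total, customer_type) == expected
```

The point isn’t the automation; it’s that every row states its inputs and its expected outcome at a glance, with nowhere for ambiguity to hide.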
Second, 829 lays out a large suite of documents, most of which are very similar, while others are obscure to the point where it took me years to work out what they were actually for: Test Item Transmittal Report, anyone? I can see why it’s tempting to follow the standard to the letter: on one hand, we all instinctively want to stick to standards, because they are good insurance against failure (if you did what the book told you to do but it still went wrong, nobody can blame you, right?) and on the other hand, there’s an implicit “more is more” principle: big thick documents can’t fail to impress the management. Trust me: I’ve been there. The fact is, though, that most of the time either no one’s reading your 40-page Master Test Plan, or they are only reading it to pass the time because their own job bores them witless.
But most of all, I’m still mildly surprised that the Foundation course, at least back then, taught 829 as the only method for documenting test work. No alternatives were mentioned, and it wasn’t really made clear that, in most projects, only a little of it is useful.
Trying to “tick all the boxes” by producing all the prescribed documents (particularly on brownfield projects where the team already has well-established routines and methods) either results in a highly detailed but totally pointless description of the status quo, or in an attempt to document away problems by producing a set of aspirations for how we’d do things in an ideal world with unlimited budget. But if all you’re doing is writing down how things are already being done, why bother? And if your test plan starts to look like a manifesto for a brave new world in which you can finally get down to business with that expensive automation tool that management will never fork out for, you’re wasting your time. Either way, if nuggets of genuinely new and useful information are buried amongst acres of boilerplate text, your audience will miss them.
I’ll go further: in today’s IT departments, a text-based test plan is no place for things like dates and milestones; these belong on a Gantt chart or an Agile release roadmap, where they are highly visible and easily altered when deadlines move or scope changes. Nor is it the right place for information about who is in the team and what the reporting structure looks like: that kind of thing will be out of date as soon as you have a leaver or a joiner.
I have worked for a couple of teams that didn’t use textual test plans at all. Now, one of them was a car crash, but that was because of poor management and ludicrous deadlines, so we’ll ignore it. The other, though, didn’t need them, because all the information was already available in Jira, on the departmental wiki, or in the test cases themselves. And that brings me to the key point: when you’re writing something, keep asking yourself two questions. First, is this content already available somewhere else? If the answer is yes, leave it out, or just include a reference or link to it. Second, does my target audience really need to know this? If it’s a no, well, you know what to do. Be ruthless, because the less noise you produce, the more likely you are to be heard when you say something really important.
If you really must produce a master test plan, use a format that expresses the facts cleanly and minimises waffle. Don’t be afraid to use diagrams, or even avoid Word completely and use a spreadsheet or a mind map.
Remember that you add value by breaking software, not by churning out fat Word documents to try and prove to managers that you’re doing your job.
So go forth, be inquisitive, think of the crazy edge cases others miss. Most of all, don’t ever stop asking “what if?” Because, for me, that’s the most important talent a tester can have.