Last time I discussed what I called “the death spiral”, where I claimed that losing focus on the true need gets transformed into unclear or badly stated system requirements.
If we stop for a moment and think about how we usually “find errors” in our requirements, we trust what’s known as the peer review technique. We typically generate a requirement specification, or an export from our requirements database of a subset of the requirements ready for review and approval, then invite some people and hope for the best (that is, of course, that we don’t get massacred by senior colleagues).
Peer review is the evaluation of work by one or more people of similar competence to the producers of the work (peers). It constitutes a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility (source: Wikipedia).
But do peer reviews really work, and beyond that, do they work for reviewing requirements?
My point is that peer review is impossible to define in operational terms. Peer review is thus like poetry, love, or justice. But it is something to do with a document (or in this case a set of requirements) being scrutinized by a third party—who is neither the author nor the person making a judgement on whether to approve the content.
But who is a peer? Somebody doing exactly the same kind of work (in which case he or she is probably already involved)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is review? Somebody saying “The document looks all right to me”, which is sadly what peer review sometimes seems to be? Or somebody poring over the document (or set of requirements), asking for data and rationale, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.
If we look at peer review as a phenomenon, we find evidence that peer reviews are borderline useless, a product of “this is what we have always done” or, even worse, “this is what I am told to do”. The process itself is often ill regulated and, in my experience, not well suited for reviewing requirements; it can turn ugly and insulting:
“The process is unjust, unaccountable … often insulting, usually ignorant, occasionally foolish, and frequently wrong. We are correcting minors and missing majors.” – Richard Horton
A study made by IBM actually supports this idea:
Of ALL errors later found in a system, 70% originate in the requirements phase. Ok, now hold on for a moment! We typically don’t write the requirements once, but rather in smaller iterations. That’s fine, since this study looks at all requirements written at the system level, regardless of whether they were formulated at the beginning of the project or at requirements freeze.
But ONLY 4% (yes, you read that correctly) were found through the various reviews. The rest were handed to the next person or group in the design food chain to chew on.
Of that pitiful fraction of defects, I am pretty sure that the majority were of the “spelling error” character, or simple semantics like a forgotten comma. A rather famous requirements guru said:
“Fixing typographical and grammatical errors is useful because any changes that enhance effective communication are valuable. However, this should be done before sending the document out for broad review. Otherwise, reviewers can trip on these superficial errors and fail to spot the big defects that lie underneath.” – Karl Wiegers
Ok, so what can we do? Well, I would say that it comes down to the old classics:
- Proper knowledge of the importance of correct, complete and consistently formulated requirements. This applies at all levels, from management, which must enforce good practices, to requirements authors, who need the skills; and
- Tools and techniques for writing high-quality requirements about the right things.
When it comes to tooling, I must say that the focus so far has been on requirements management tools like IBM DOORS or Jama, but some innovative tools have appeared that help you write high-quality requirements. Jama has some really nice features for collaborative reviewing and online work, but you still won’t get any help with semantics or syntax. A bad requirement will still be bad, since you don’t get any help with the writing part of the problem, but you make it easier to review and to spot lazy reviewers who just say “it’s fine” after a short glance.
One vendor of really cool tools is the REUSE Company, with their tools Verification Studio and RAT (that’s right, like the rat in the movie Ratatouille, where the young chef has a small rat under his chef’s hat instructing him on how to prepare meals). These tools guide you into writing correct requirements and use AI and reasoning to assess how complete and consistent a specification is.
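To give a feel for what this kind of automated checking does, here is a minimal sketch of a rule-based requirements linter in Python. The word list and checks below are my own illustrative assumptions, loosely inspired by common requirements-writing guidance; real tools like RAT perform far deeper semantic and consistency analysis than this toy.

```python
import re

# Illustrative rule set only -- a handful of ambiguity patterns often
# flagged in requirements-writing guidance (not taken from any real tool).
AMBIGUOUS = {"fast", "user-friendly", "appropriate", "as needed",
             "sufficient", "easy", "etc.", "and/or"}

def lint_requirement(text: str) -> list[str]:
    """Return a list of findings for a single requirement sentence."""
    findings = []
    lowered = text.lower()
    # Flag vague, untestable terms.
    for word in sorted(AMBIGUOUS):
        if word in lowered:
            findings.append(f"ambiguous term: '{word}'")
    # A binding requirement is usually expected to use "shall".
    if "shall" not in lowered:
        findings.append("no 'shall' -- is this a binding requirement?")
    # Two "shall"s in one sentence hints at a compound requirement.
    if re.search(r"\bshall\b.*\bshall\b", lowered):
        findings.append("possible compound requirement -- consider splitting")
    return findings

print(lint_requirement("The system should be fast and user-friendly."))
```

Even this crude sketch catches the kind of untestable wording (“fast”, “user-friendly”, no “shall”) that human reviewers routinely wave through, which is exactly the dirt you want washed out before the review meeting.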
These tools cannot replace human engineering, but they take you pretty far and for sure wash out all that dirt before you go into a peer review.
Check out these tools on: https://www.reusecompany.com/verification-studio
So PLEASE stop relying on those peer reviews and start taking the requirements work more seriously from the start. Get help in writing high-quality requirements. It will cost your project lots of hard work, frustration and a heap of money to fix it later in the design process if you don’t.
Next time I will explain what happens to those newly reviewed (but still bad) requirements when they are put to the test by the design teams. Why are more errors introduced than spotted in that phase, you might ask?
Over and out!