
Passive voice requirements – A pain in your Buttocks!?

Why can “passive voice” requirements actually become a nightmare?

If you are like me and like clear and precise requirements, with as little room for interpretation as possible, you might try to wash out some bad language in your requirements. Perhaps you review a set of IT system requirements at the highest level and find a requirement written like this:

“All critical errors shall be logged”.

You might continue and think: “That’s a bit vague, but I guess that the reader of the requirement understands this, and the requirement is under a specific section of the set named System management, so it should be fine. I need to discuss with the author what a critical error is, and we need to investigate how to verify that these errors defined as critical get logged.”

But you might also be a nerd like me and immediately say: “No, we must not have any passive voice requirements in our set of requirements, period!” It’s also in violation of the INCOSE Guide for Writing Requirements, i.e. rule R2 – Use active voice. But the author of the requirement is most likely not aware of that guide or quality rule in the first place…
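To make rule R2 a bit more concrete: in requirements English, passive voice almost always surfaces as “shall be” followed by a past participle. Below is a minimal sketch of a detector built on that observation (a toy heuristic of my own, not taken from the INCOSE guide or from any real tool):

```python
import re

# Toy heuristic: "shall/must/will be <past participle>" usually signals
# a passive voice requirement with no stated actor. This is a sketch of
# my own, not a real grammar check: irregular participles ("written",
# "sent") and odd sentence forms will slip through or false-positive.
PASSIVE = re.compile(r"\b(shall|must|will)\s+be\s+\w+(ed|en)\b", re.IGNORECASE)

def flag_passive(requirements):
    """Return the requirements that look passive, i.e. name no actor."""
    return [req for req in requirements if PASSIVE.search(req)]

if __name__ == "__main__":
    reqs = [
        "All critical errors shall be logged.",
        "The monitoring function shall log all critical errors.",
        "Lights shall be integrated in the dashboard.",
    ]
    for req in flag_passive(reqs):
        print("Possible passive voice, actor missing:", req)
```

Anything flagged by a check like this is a candidate for the WHO and WHY questions we will get to below.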

Ok, fine. But what does this mean, and why is it so dangerous to allow passive voice requirements?

Does it really matter if I write a passive voice requirement? I understand it, and so should everyone else. Besides, it’s a short, high-level system requirement and we will get back to the verification details later. These details will come when I start to specify the subordinate level of requirements derived from this one, or when I prepare a design solution that will take care of the requirement.

INCOSE says in rule R2 mentioned above that “requirements not stating the subject must be avoided”. What’s that? I have written that “errors shall be logged”. The word errors is my subject, right?

No, it actually is not. When looking into the example sentence in more depth, it becomes quite clear that it lacks a proper SUBJECT: the entity responsible for the action. The sentence does not explain WHO will actually log the errors that the writer of the requirement says must be logged. Let’s look at this figure:

From this figure it is quite easy to understand that the solution to a passive voice requirement can be either a system function of some sort or a service provided by someone outside of the system.

The same applies to ALL types of passive voice requirements:

Lights shall be integrated in the dashboard… OK, fair enough, but is this a requirement saying that we require someone to integrate the lights, or is the purpose something else?

Perhaps the lights are supposed to be hidden (integrated) in the dashboard and the requirement is trying to capture this?!

The ONLY solution if you ever encounter requirements written in passive voice (and you will) is to go back to the author (or the stakeholder with the need) and ask what the purpose of the requirement in question is. Ask: WHY do you want to integrate these lights? Or WHY do you have to log critical errors?

The answers to these questions will help you rewrite the requirement so that it captures the intention and purpose behind its existence. Because if you don’t do this, you will live with the impression that everything is fine and that development is on its way towards a solution that will fulfil your requirement. The problem(s) will emerge with the first prototype, or even worse, during verification of the system against the requirement. By then it is LATE in the process, and you will have to manage the probable delay or the added cost of a late design change.
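To make this concrete, the logging requirement from the beginning could end up like this once the author has answered the WHY question (the actor and the rationale here are my assumptions; only the author or stakeholder can confirm them):

“The system monitoring function shall log all errors classified as critical, so that maintenance staff can diagnose failures in the field.”

Now there is a subject doing the logging and a purpose that can be discussed, challenged and verified.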

So, PLEASE don’t ever write OR accept requirements written in passive voice. You have no idea what the result will be, and it will for sure at some point in time pop up and bite you in the butt!

One of the solutions can be to use requirement patterns, butt (hehe) more on that another time….


Fifty shades of requirements, part two – The peer-review folly

Last time I discussed what I called “the death spiral”, where I claimed that losing focus on the true need gets transformed into unclear or badly stated system requirements.

If we for a moment start to think about how we usually “find errors” in our requirements, we trust what’s known as the peer review technique. We typically generate a requirement specification, or an export of a sub-set of the requirements from our requirements database, ready for review and approval, and then we invite some people and hope for the best (that is, of course, that we don’t get massacred by senior colleagues).


Peer review is the evaluation of work by one or more people of similar competence to the producers of the work (peers). It constitutes a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility. (Source: Wikipedia)

But do peer reviews really work, and above that, do they work for reviewing requirements?

My point is that peer review is impossible to define in operational terms. Peer review is thus like poetry, love, or justice. But it is something to do with a document (or in this case a set of requirements) being scrutinized by a third party: someone who is neither the author nor the person making a judgement on whether to approve the content.

But who is a peer? Somebody doing exactly the same kind of work (in which case he or she is probably already involved)? Somebody in the same discipline? Somebody who is an expert on methodology? And what is a review? Somebody saying “The document looks all right to me”, which is sadly what peer review sometimes seems to be? Or somebody poring over the document (or set of requirements), asking for data and rationale, repeating analyses, checking all the references, and making detailed suggestions for improvement? Such a review is vanishingly rare.

If we start to look at the peer review as a phenomenon, we find evidence that peer reviews are borderline useless, a product of “this is what we have always done” or, even worse, “this is what I am told to do”. The process itself is often ill regulated, in my experience not well suited for reviewing requirements, and it can turn ugly and insulting:

“The process is unjust, unaccountable … often insulting, usually ignorant, occasionally foolish, and frequently wrong. We are correcting minors and missing majors.” – Richard Horton

Research by IBM actually supports this idea:

[Figure: requirements failure graphs]

Of ALL errors that are later found in a system, 70% originate from the requirements phase. Ok, now hold on for a moment! We typically don’t write the requirements once, but rather in smaller iterations. That’s fine, since this study looks at all requirements written on the system level, regardless of whether they were formulated in the beginning of the project or at requirement freeze.

But ONLY 4% (yes, you read this correctly) were found through the different reviews. The rest were given to the next person or group in the design food chain to chew on.

Of that pitiful fraction of defects, I am pretty sure that the majority were of the “spelling error” character, or simple semantics like a forgotten comma. A rather famous requirements guru said:

“Fixing typographical and grammatical errors is useful because any changes that enhance effective communication are valuable. However, this should be done before sending the document out for broad review. Otherwise, reviewers can trip on these superficial errors and fail to spot the big defects that lie underneath.” – Karl Wiegers

Ok, so what can we do? Well, I would say that it comes down to the old classical things like:

  • Having proper knowledge of the importance of correct, complete and consistently formulated requirements. This goes for all levels, from management, which must enforce good practices, to requirements authors, who need the skills; and
  • Having tools and techniques to write high quality requirements, on the right things, of course.

When it comes to tooling, I must say that the focus so far has been on requirements management tools like IBM DOORS or Jama, but some innovative tools have appeared that help you write high quality requirements. Jama has some really nice features for collaborative reviewing and online work, but you still get no help on semantics or syntax. A bad requirement will likely stay just as bad, since you get no help with the writing part of the problem, but you do make it easier to review and to spot lazy reviewers who just say “it’s fine” after a short glance.

One vendor of really cool tools is the REUSE Company, with their tools Verification Studio and RAT (that’s correct, like the rat in the movie Ratatouille, where the young chef has a small rat under his chef’s hat instructing him on how to prepare meals). These tools will guide you in writing correct requirements and use AI and reasoning to assess how complete and consistent a specification is.


These tools cannot replace human engineering, but they take you pretty far and will for sure wash out all that dirt before you go into a peer review.

Check out these tools on: https://www.reusecompany.com/verification-studio
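Just to give a feeling for the kind of dirt that can be washed out automatically, here is a toy sketch of a pre-review scrub. It is entirely my own illustration and assumes nothing about how Verification Studio or RAT actually work; it simply flags some classic weak words before a human reviewer ever sees the specification:

```python
# Toy pre-review scrub: flag classic weak words in requirement text so a
# human reviewer does not waste attention on them. Purely illustrative;
# real tools use semantic analysis, not a naive word list with substring
# matching like this one.
WEAK_WORDS = ["appropriate", "adequate", "user-friendly", "as required",
              "if possible", "fast", "easy", "etc", "tbd"]

def scrub(requirement: str) -> list[str]:
    """Return the weak words/phrases found in one requirement."""
    text = requirement.lower()
    return [word for word in WEAK_WORDS if word in text]

if __name__ == "__main__":
    reqs = [
        "The system shall respond fast to user input.",
        "The system shall provide appropriate error messages.",
        "The system shall log all critical errors within 1 second.",
    ]
    for req in reqs:
        hits = scrub(req)
        if hits:
            print(f"Weak wording {hits}: {req}")
```

Run before the review, even a naive scrub like this lets the human reviewers spend their attention on the big defects Wiegers talks about, instead of on the commas.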

So PLEASE stop relying on those peer reviews and start taking the requirements work more seriously from the start. Get help in writing high quality requirements. If you don’t, it will cost your project lots of hard work, frustration and a heap of money to fix things later in the design process.

Next time I will explain what happens to those newly reviewed (but still bad) requirements when they are put to the test by the design teams. Why, you might wonder, are more errors introduced than spotted in that phase?

Over and out!