Poll : do you do peer reviews / code reviews / design reviews

Post by Shane williams » Wed, 13 Apr 2011 22:04:33

I recently read on Phil Koopman's blog
that peer reviews should find approximately half of all errors, with
testing finding the other half.

Where I work we do very few peer reviews of software. We have mostly
experienced people and my boss thinks code reviews cost time with not
much benefit.

I'm wondering how many embedded software places consistently do code
reviews of software and how many do little or none?

Does the level of review vary according to the experience/ ability of
the programmer?

Does the design and logic/ correctness of the code get reviewed or
just the form/ style?


Post by NeedClever » Thu, 14 Apr 2011 00:18:46


In the software industry, QA is assigned to the most junior people.
In all other industries, QA is assigned to the most senior people,
because they are the only ones with enough experience to spot
mistakes. IMnsHO, this is a large contributor to the rampant suckage
in the software industry.

Oh, and management can be such asses.


True story:

At one contract, the code was peer-reviewed using a very formal
style. A fair number of problems were spotted during the review.
This helped make a better product, and the total time (review+coding
+testing) was less than the typical time (coding+testing) without a
review.
On a later project, I asked about doing the same review. I was told
that not only did they not do code reviews, but that they had never
done them in the past.

It turns out that the first product (the one that did get the review)
was never seen by the QA department again. Every other product was
constantly getting tested and fixed; this one was just being sold.

Because it had never been back to QA, they forgot all about it, and
the review that made that possible. BTW, it was one of their biggest
selling products.

My conclusion? Review good, management silly.




Post by D Yuniskis » Thu, 14 Apr 2011 01:39:29

Hi Shane,

On 4/12/2011 6:04 AM, Shane williams wrote:

The single most important item is a formal specification
for "The Code" (i.e., whatever it is you are reviewing/testing).
If you don't have a document that unambiguously *defines*
what it is you are building, then how can you agree on what
it is supposed to do or *how* it is supposed to do it?

The beauty of specifications is they give you a chance to
challenge the design *before* you have undertaken it.
As I've said before re: the "a-word", it's amazing just
how many things tend NOT to get "explicitly stated" when
coming up with a new product/design/etc.! The specification
is your chance to "what-if" the process -- any time the
"client" (whether that is a third party, "Marketing",
"Management", "Special Customer", etc.) can't tell you
what *must*/should happen in those "what-if" scenarios,
you know the concept hasn't been completely thought out
and you have potential pitfalls.

For "products", I have found writing the User Manual *first*
to be an excellent way to tackle the specification -- it's
a lot easier to express behaviors in that more colloquial
format than a *stiff*, "formal specification". And, since
a good user manual should cover all possibilities (anything
unspecified represents a potential call for customer support),
it helps you focus on the fringe areas (where a large portion
of problems lie).

E.g., what happens if there is no SD card installed in the device?
What happens if the SD card fills up? What happens if the batteries
fail in the middle of _________? What happens if the user types
" " for the name of the file?

Look at it from the other end -- what sort of *quality* are you
delivering? Are you continuously patching up bugs that weren't
caught previously? Are you adding features that, with a bit of
forethought, you should have added in the first place? Are your
customers happy (are you gaining or losing them? would you like
*more* of them??)?

If you are turning out "good product", then your "experienced people"
*seem* to be doing a good (enough) job. OTOH, will they be there
forever? What happens when/if they move on (elsewhere or within the
organization) and are replaced with people "of a different calibre"?

You may find it beneficial to put this sort of thing in place now
EVEN IF IT DOESN'T APPEAR YOU NEED IT in order to begin establishing
that sort of culture in your workplace. "Suddenly" *telling* people
they now have to open up their code to inspection and criticism from
others can be a very "jolting" experience. Programmers, IME, tend
to be very possessive about "their code" (forgetting that it isn't
*theirs* but, rather, The Company's!).

I think a lot depends on the size of the company, the market it
serves, management attitudes, self-confidence of the coders, etc.

Style, IMO, is something that should be handled by an automated tool.
Anything showing up *at* a code review should have already been
pretty-printed by whatever tool enforces your coding guidelines, etc.
(so everyone knows what to expect *visually*)

IME, you want your code review to focus on:
- *whether* you are meeting the requirements laid out in the spec
- how *well* you are doing that (marginally vs. thoroughly)
- the quality of algorithms chosen
- potential problem areas (unforeseen scenarios, etc.)
- boundary conditions that might not be safeguarded

You don't want to waste a lot of time looking at style issues
that your tooling should already have caught.
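As a concrete instance of the "boundary conditions" bullet above, here's a minimal C sketch (the function and its contract are hypothetical) of the kind of safeguard a reviewer should be hunting for -- does the copy always leave room for the terminator?

```c
#include <stddef.h>

/* Copy src into a fixed-size buffer.  The naive loop condition
   (i < dst_size) would write the terminator one past the end when
   src is too long; a boundary-focused reviewer asks "what happens
   when src has exactly dst_size characters?" */
void copy_reviewed(char *dst, size_t dst_size, const char *src)
{
    size_t i;
    for (i = 0; i + 1 < dst_size && src[i] != '\0'; i++)
        dst[i] = src[i];
    dst[i] = '\0';   /* terminator always fits: i <= dst_size - 1 */
}
```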


Post by tim... » Thu, 14 Apr 2011 02:52:54


In the software industry, QA is assigned to the most junior people.
In all other industries, QA is assigned to the most senior person.


Not helped by the view of their peers.

When I first started work, the experienced bods on the team used to proclaim
that QA were all people who had failed to make the grade as an engineer.

With attitudes like that why would any self respecting engineer want to go
near a job in QA?



Post by Tim Wescott » Thu, 14 Apr 2011 03:09:05

Getting new tires when you have no tread on your old ones costs money,
as do brake jobs.

I worked for nine years at a place, long enough to go from "don't waste
time on that quality stuff", to "quality is king", and back. So I can't
answer -- but when we did _all_ the right stuff, our code hit production
and stayed there, instead of bouncing back in our faces.

Yes, but even an inexperienced programmer can add value reviewing an
experienced programmer's code.

It works best if you can assign roles to different people: Paul the
Punctuation Nut gets assigned to check style, Susan the Demon of Type
Correctness checks structure, "No Race Condition" Ralph checks for
operational issues, etc.

That way you don't have five people hypothesizing a conflict between
your serial ISR and your parser, while no one notices that you're trying
to stuff 20 significant bits into a 16-bit integer.
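That 20-bits-into-16 bug is easy to sketch in C. The cast silently drops the top four bits; a saturating version (one possible resolution of such a review comment, not the only one -- function names here are made up) makes the range limit explicit:

```c
#include <stdint.h>

/* The bug from the text: a 20-bit value (e.g. an oversampled ADC
   count) stuffed into a 16-bit integer.  The cast silently wraps. */
uint16_t store_raw(uint32_t adc_count)
{
    return (uint16_t)adc_count;        /* 0x12345 quietly becomes 0x2345 */
}

/* A review-friendly alternative: saturate instead of wrapping, so
   out-of-range inputs clamp to the largest representable value. */
uint16_t store_saturating(uint32_t adc_count)
{
    return adc_count > UINT16_MAX ? UINT16_MAX : (uint16_t)adc_count;
}
```

A reviewer assigned to type correctness (Susan, in the scheme above) is exactly the person who spots the first version.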


Tim Wescott
Wescott Design Services

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.yqcomputer.com/


Post by Paul E. Bennett » Thu, 14 Apr 2011 04:23:52

Review effort should start at the receipt of the customer's requirements
specification. The first thing to review is the customer requirements
documents. This should then lead to a technical specification which is fully
reviewed to ensure it delivers the (corrected) customer requirements. Follow
this through with the hardware/software design which is also reviewed. From
here, building the hardware should lead you into some testing effort, and
likewise, the software coding should be reviewed and tested as well. Once
you have the system together and tested, review the test results before you
kick it out the door and make sure that all the specified documentation is
present in order to pass the final review.

It would take a very sneaky bug to survive that level of in-depth,
persistent, thorough review and testing. So, with the provision that you do a
proper review at each of the review steps (plenty of them), then you can see
that getting half of the errors out at each review would reduce the number
of bugs enormously.
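The arithmetic behind that claim: if each review/test stage removes roughly half of the remaining defects, n stages leave about (1/2)^n of the original count. A toy C sketch of this idealized model (real removal rates vary widely, of course):

```c
/* Idealized defect model: each review/test stage finds ~half of
   whatever defects remain, so n stages leave initial / 2^n. */
unsigned bugs_remaining(unsigned initial, unsigned stages)
{
    while (stages--)
        initial /= 2;    /* one stage: half the remaining bugs found */
    return initial;
}
```

With, say, 1000 initial defects and the five review points Paul lists (requirements, specification, design, code, test results), the model leaves only a few dozen.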

So I am very definitely with Phil Koopman and Don Yuniskis on the
desirability of plenty of opportunities to review everything from start to
finish.
Paul E. Bennett...............<email:// XXXX@XXXXX.COM >
Forth based HIDECS Consultancy
Mob: +44 (0)7811-639972
Tel: +44 (0)1235-510979
Going Forth Safely ..... EBA. www.electric-boat-association.org.uk..


Post by Reinhard Kopka » Fri, 15 Apr 2011 20:41:53

On Tue, 12 Apr 2011 09:39:29 -0700, D Yuniskis

We do software from small embedded up to server and backend. The
"lower" parts (embedded uC) do the most code reviews. It lessens
farther up. But they have the highest device numbers and the most
problems correcting an error afterwards.

But enough errors hide in those individual lines. There are very
few programmers, if any, who don't make errors at that level --
especially when the code is seldom used, or the error is by luck
masked by some other part of the program. So simple tests won't find
them.

I learned to do code reviews just after coding, before much testing has
been done. So there will probably still be errors in the code.

Some problems will come to the mind of the presenter just by
explaining it to the others. Some by questions from "outsiders" which
never came up before.

And there's another benefit from code review, at least if programmers
from the same domain are involved. More people know the code and may
work with it if needed in the future.



Post by D Yuniskis » Fri, 15 Apr 2011 23:51:30

Hi Reinhard,

On 4/14/2011 4:41 AM, Reinhard Kopka wrote:

Code running in the servers, typically, is a lot "easier"
to correct (update). They tend to have better user
interfaces, connectivity, peripherals, etc.

But, I see little reason to differentiate between them
when it comes to testing, reviews, etc. A bug is a bug
regardless of where it manifests. Customer cares little
about where the bug resides -- he just knows the product
isn't doing what he *wants* it to do...

Of course! But, it takes a *lot* more time to go through
line-by-line rechecking all these little details. Look
at loop conditions, etc. -- "off by one". You don't want
to tie up *several* people doing the work that one was
*supposed* to have done, competently.
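The "off by one" in loop conditions looks like this in C (a hypothetical example, not code from the thread) -- the broken form `i <= n` reads one element past the end of the array, which is exactly the detail line-by-line rechecking has to catch:

```c
/* Sum the first n elements of an array.  The classic fencepost bug
   is writing the loop condition as i <= n, which reads a[n] --
   one element past the end. */
int sum_first_n(const int *a, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)   /* reviewer checks: < , not <= */
        total += a[i];
    return total;
}
```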

[I guess it depends on the caliber of the people you have
working for/with you]

Test each *module*. You should know what each "routine" is
expected to do. It's easy to look at a *documented* interface
and hypothesize the sorts of things that might have been
omitted or poorly implemented. E.g., what if this is invoked
with a NULL pointer? A cursory look through the code will
tell you *if* the programmer even tried to address that case.

I like to test while writing. My code will often have comments
in it describing boundary conditions that could otherwise have
tripped it up.
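A documented interface makes that NULL-pointer question answerable at a glance. A minimal C sketch (the function and its contract are hypothetical): the comment *is* the contract, so a reviewer can check the code against it, and a tester can write cases from the comment alone:

```c
#include <stddef.h>

/* Documented contract: returns the length of s, or -1 if s is NULL.
   The review question "what if this is invoked with a NULL pointer?"
   now has a stated answer that the code either honors or doesn't. */
int safe_length(const char *s)
{
    if (s == NULL)
        return -1;          /* documented failure mode, not a crash */
    int n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}
```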

If you document interfaces, someone else can put together test
cases to stress your code as soon as you think it is "done".
This impacts the way you design your system -- since you want
to be able to automate as much of the testing as possible,
you don't mix interactive stuff with things that don't need
to be interactive.

There is nothing wrong with being "chagrined" by a peer. I designed
a processor many years ago while my best friend wrote the code for it.
When it was built and ready for test, **nothing** worked. We each
had boundless confidence in ourselves *and* the other party! So,
the fact that it didn't do *anything* had us stumped.

The problem was eventually traced to different assumptions about
addressing -- the hardware could only address "words" (since
everything *was* a word) so the address field in all instructions
expected a "word address". My friend, being used to writing in
more conventional environments, had assumed *byte* addresses
(so, all of his addresses had a rightmost bit of '0' -- my
hardware had already made that *assumption*).

We *both* were chagrined at our assumptions -- though neither
of us could realistically fault the other :-/
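The mismatch in that anecdote can be sketched in C (a hypothetical 16-bit-word machine, not the actual hardware from the story): making the byte-to-word conversion an explicit, checked step is the sort of thing that surfaces the hidden assumption before power-on.

```c
#include <stdint.h>

#define WORD_SIZE 2u   /* bytes per word on this hypothetical machine */

/* Convert a toolchain-style byte address to a hardware word address.
   Returns 0 on success, -1 if the byte address isn't word-aligned.
   Writing this down forces both parties to agree on which kind of
   address crosses the interface. */
int byte_to_word_addr(uint32_t byte_addr, uint32_t *word_addr)
{
    if (byte_addr % WORD_SIZE != 0)
        return -1;                       /* misaligned: reject loudly */
    *word_addr = byte_addr / WORD_SIZE;  /* drop the redundant low bit */
    return 0;
}
```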

I find it most (self)educational to explain things to non-technical
people. They ask questions that others would *assume* you had
already addressed. The very act of trying to relate, by analogy,
something to them causes you to examine your design in a level of
detail that you might otherwise have glossed over.

This is the easiest way to "sell" code reviews to Management (who
often see this activity as "not PRODUCING anything").