Whiteboards for Everyone!

Do you like designing on whiteboards?  I do.   Colorful markers against a clean, white surface inspire all kinds of creativity and fun.

Recently David Crossett of Ready Receipts gave me a great tip.  He told me that instead of going to your local OfficeBOX superstore and paying $200 for a 4×8 whiteboard, just hit Home Depot instead and get a $12 piece of showerboard.  It works just as well, and if you need a smaller size, they will cut it for you on site at no additional charge!  At that price, you can line your walls with thinking space.  Power to the consumer–thanks, David!

Mike J. Berry
www.RedRockResearch.com

Software Development Best Practices – Software Requirements Management

I recently hosted Red Rock Research’s second weekly software development best practices seminar for the general public.  Our topic was Software Requirements Management.

Requirements management is perhaps the most controversial topic in software development.  Everyone seems to have their own technique.  It is also the most important skill set for the overall success of a software project, statistically more important than development skills (Standish CHAOS Report, 2009).

Let me say that another way, because this principle is not intuitive: if you want to improve the performance of your development projects, improve the skill sets of the business analysts who generate requirements.  Statistically, this boosts a project’s outcome more than improvement in any other skill-based area.

Many published requirements management techniques exist, and yet in a $220 billion industry with a project failure/delay rate of 64%, it appears that most of these published techniques are not embraced.

Our seminar covered eliciting, prioritizing, validating, and documenting a requirements baseline.  We discussed the progression from system context diagrams, UML actors, use cases, data-flow diagrams, High-Level Overview diagrams, and High-Level Design diagrams to the Software Requirements Specification document.  We also talked briefly about the Concept of Operations document and the System Design Description document.

We discussed the difference between a plan-based documentation stack and a minimized Agile-development documentation stack, which would be generated during a Sprint Zero.  (Yes, by the way, you DO create documentation for Agile projects!)

We discussed techniques to control scope creep after the requirements baseline, and then techniques for dealing with what I call ‘approval noise.’

What puzzles me most about this topic is an entrenchment I encounter occasionally, as expressed by one of the seminar participants.  He stated, after the seminar, that all of this was interesting in a textbook-like manner, but that he felt none of it was practically applicable.

I asked him to explain how his company performs requirements practices, and he said, “Well, we have nothing written.  We have everything in our head and we just talk across the cubicles.”  He then told me he was frustrated by some additional items he had been asked to add to his project that morning, because the project was supposed to have been completed two weeks ago.  He also told me that the owner of his organization wished they had a structured approach to software project management, and that, oh, by the way, many of the programmers were given layoff notices at the beginning of the week because the company is failing.

Hmm, it’s almost as if the problem is not properly in focus.  Downstream problems are caused by upstream actions or omissions.  I mean no disrespect; I just wish to point out the obvious: if companies like this would adopt upstream structure, they would benefit from downstream success.

You see, the problem that proper requirements practices solve is not at the development-effort level; it is at the project management, estimation, budgeting, and strategic planning level, in other words, at the business level.

Software-centric business practices become predictable, and executives can be proactive, when projects consume only the time estimated for them.  Projects will consume the time estimated if they include all of the functionality needed for a desired level of business value, and those functions are identified in whole at the beginning of the project.

This way the software project time frames and feature sets can be included accurately in the estimation, budgeting, resource planning, and strategic planning of a company.  Scope creep will be minimal, and the whole company will benefit from a predictable project delivery process.

Without proper requirements skills, entire feature sets get missed upstream and need to be added ‘at the last moment’ downstream, the risk of rework increases drastically, and recurring cycles of this erode the project manager’s and the development team’s credibility in the eyes of the executive team and the waiting customers.  In worst-case scenarios, this can lead to layoffs and, finally, company failures.

If you haven’t been trained on proper requirements management techniques, you are holding your organization at risk.  Attend our next three-day Software Requirements Management training course, held September 7-9 in SLC.

Mike J. Berry, PMP, CSM, CSPM
www.RedRockResearch.com

Software Development Best Practices – Software Estimation


Red Rock Research held the first of our weekly series of seminars on software development best practices yesterday at the Miller Campus – Professional Development Center.  Our topic was Software Estimation.

We covered the typical informal methods: Fuzzy Logic, Wide-band Delphi, Planning Poker, and the primary formal methods: Function Point counting, the Putnam Model, COCOMO II, and COSMIC-FFP.   We also discussed how to estimate the percent of defects still in your application at the time of release.
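
For readers who want to try one of the formal methods themselves, here is a minimal sketch of Basic COCOMO, the simpler 1981 predecessor of the COCOMO II model we covered in the seminar.  The coefficients are Boehm’s published values for ‘organic’ projects, and the 32 KLOC project size is just an illustration:

# Basic COCOMO (Boehm, 1981) -- a simpler predecessor of COCOMO II.
# The coefficients below are the published values for 'organic' projects.
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    """Return (effort in person-months, schedule in calendar months)."""
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

effort, schedule = basic_cocomo(32)   # e.g., a 32 KLOC project
print(f"Effort:   {effort:.0f} person-months")
print(f"Schedule: {schedule:.1f} months")
print(f"Avg team: {effort / schedule:.1f} people")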

Along with ‘how’ to estimate software projects accurately, we discussed how to manage the expectations of the executive team and the investors, who typically want everything now.  Chris Perry, from the Utah IEEE CS chapter, was in attendance and said, “All the things you’re talking about I’ve been living for the past 10 years!”

Join us this Thursday, July 16 for our 2-hour seminar on Software Requirements Management.  The cost is only $10!

Mike J. Berry
www.RedRockResearch.com

Don’t miss these Software Development Best Practice Workshops…

I’m hosting weekly Software Development Best Practice workshops each Thursday during the next four weeks.  These are held during work hours, so ask your manager/VP/CIO; perhaps they would like to come along.  The topics are different each week.

This is basically a summary of my three-day courses that I am now offering.  I’m giving the info away to get some attention in the valley.  Each workshop is from 3:00 – 5:00 pm Thursday afternoon at the Miller Campus – Professional Development Center.  This represents a tremendous value, as I have put over 3000 hours of research into the material and consumed over 100 industry books.

Topics

Software Estimation – July 9th

Software Requirements Management – July 16th

Software Quality Systems Management – July 23rd

Software Development Life Cycle (SDLC) Management – July 30th

Event Calendar and Info

http://www.utahtechcouncil.org/Events/Community-Events/Community-Calendar.aspx

Hope to see you there!

Mike J. Berry
www.RedRockResearch.com

How to compute % defects removed from release candidate code

Recently someone on StackOverflow.com asked me to explain how to compute the defect removal rate for release candidate software.  There are two methods for producing this number and I teach both in several of my seminars, but I’ll explain the simpler method in this post…

Lawrence Putnam presented this model in his 1992 book, Measures for Excellence.  His book reads more like a math text than a software development guide, and it suffers from an unfortunate formula typo that has led to widespread confusion about his models in the industry, but I will explain his defect removal rate calculation process here.  (I hired a math wizard to examine his data and correct the formula!)

1. For a typical project, code is produced at a rate that resembles a Rayleigh curve.  A Rayleigh curve looks like a bell curve with a long tail.  See my ASCII graphics below:

||||
|||||||||||
|||||||||||||||||
|||||||||||||||||||||||

2. Error ‘creation’ typically happens in parallel with, and in proportion to, code creation.  So, you can think of errors created (or injected) into code as a smaller Rayleigh curve:

||||
|||+++|||||
||||+++++|||||
||||+++++++||||||||

where ‘|’ represents code, and ‘+’ represents errors

3. Therefore, as defects are found, their ‘detection rate’ will also follow a Rayleigh curve.  At some point your defect discovery rate will peak and then start to lessen.  This peak, or apex, occurs when about 40% of the volume of the Rayleigh curve has accumulated (see the note after step 4).

4. So, when your defect discovery rate peaks and starts to diminish, treat the defects found so far as 40% of all defects in the code, then use regression analysis to calculate how many defects are still in the code and not yet found.
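
(For the curious: a Rayleigh curve with scale parameter σ peaks at t = σ, and the cumulative area under the curve up to that point is 1 − e^(−1/2), or about 39%, which is where the ‘about 40%’ figure in step 3 comes from.)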

By regression analysis I mean that if you found 37 defects at the apex, after three weeks of testing, you know two things.  First, 37 is about 40% of the defects in the code, so the code contains roughly (37 × 100/40) ≈ 93 errors total.  Second, since a Rayleigh curve has essentially run its course by about three times the time of its peak, total testing time will be about 9 weeks, which works out to an average of about 10 defects found per week.

Of course, this assumes complete code coverage and a constant rate of testing.
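
If it helps, here is the same arithmetic as a small Python sketch.  The 37-defect, three-week figures are just the example numbers from above, and the ‘testing runs about three times as long as the time to the peak’ rule is my reading of the Rayleigh shape rather than Putnam’s own formula:

# A minimal sketch of the peak-based estimate described above.
def estimate_defects(defects_found_at_peak, weeks_to_peak, peak_fraction=0.40):
    """Project total defects and testing duration from the Rayleigh-curve peak.

    peak_fraction: the share of all defects typically found by the time the
    weekly discovery rate peaks (about 40% for a Rayleigh curve).
    """
    total_defects = defects_found_at_peak / peak_fraction
    remaining = total_defects - defects_found_at_peak
    # A Rayleigh curve has essentially run its course by about three times
    # the time of its peak, so total testing time is roughly 3x the peak time.
    total_weeks = 3 * weeks_to_peak
    avg_rate = total_defects / total_weeks
    return total_defects, remaining, total_weeks, avg_rate

total, remaining, weeks, rate = estimate_defects(37, 3)
print(f"Estimated {total:.1f} defects total, {remaining:.1f} not yet found")
print(f"Roughly {weeks} weeks of testing, averaging {rate:.1f} defects found per week")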

Hope this is clear.

Mike J. Berry
www.RedRockResearch.com

A Free Software Requirements Specification Template (SRS)!

Need a good software requirements specification (SRS) template?  Use an industry-standard SRS.  Can’t find one?  Well, now you have one: get it here for free.  Enjoy!

Mike J. Berry
www.RedRockResearch.com

The Three P’s of a Quality Management System

A Quality Management System, sometimes referred to as a Total Quality Management (TQM) System, is a simple concept that will dramatically improve software production quality over time.

Companies that don’t have a quality system commonly find themselves reacting to production and support issues caused by steps that were omitted.

A simple rule of thumb is to ask yourself how many fires your development team has put out this month.  If any come to mind, then chances are you don’t have a proper quality management system in place, and should read on…

I remember early in my career I struggled to get my employees to follow our procedures.  Whenever we’d encounter a production problem with our software, it would inevitably be a result of someone not having completely followed an established procedure.

We would have a big discussion about what should have happened, and about how “we can’t forget to do that next time,” yet we’d experience the same omission later.

I would get frustrated because I could never seem to find a way to get my team accountable for following our established procedures–until I discovered the “Quality Management System.”

A Quality Management System has the following three elements (the Three P’s!):

  1. Process (documented–most of us have processes or procedures we are supposed to follow.)
  2. Proof (a separate checklist, or “receipt” that the process was followed for each software release.)
  3. Process-Improvement (a discussion, and then an addition or adjustment to the documented process.)

Most companies have an established–and hopefully documented–software development process.  (If you don’t, you can download one from my website for Waterfall or Agile here.)  This is the first ‘P’ and should be in place at every established development shop.

A great question to ask the team is “How do you know the process was followed for each release?”  This is where you may get the deer in the headlights response.  This is the second ‘P’ and is the piece missing from most software development shops.

Think of this ‘Proof’ document as a checklist accompanying each software release.  The checklist would include every major step in the documented process, the names of team members performing specific functions, and the locations of final source code, test scripts, install files, etc.  The checklist would also require a series of quality checks, for example: Were requirements signed off by the customer, stakeholder, tester, and developer?  Was the help file updated with the new release number and appropriate functionality?  Was the source code checked in?  Where is it located?
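
To make the second ‘P’ concrete, here is a sketch of what such a checklist might look like if you captured it as structured data.  The field names, items, and signatures are my own illustration, not a prescribed format:

# A hypothetical release checklist captured as plain Python data.
# Field names, items, and signatures are illustrative only.
release_checklist = {
    "release": "2.4.1",
    "items": [
        {"check": "Requirements signed off by customer, stakeholder, tester, and developer",
         "done": True, "signed_by": "B. Analyst"},
        {"check": "Help file updated with new release number and functionality",
         "done": True, "signed_by": "T. Writer"},
        {"check": "Source code checked in, location recorded",
         "done": True, "signed_by": "D. Developer"},
        {"check": "Test scripts and install files archived",
         "done": False, "signed_by": None},
    ],
}

# The 'receipt' check: refuse to ship until every item is done and signed.
outstanding = [item["check"] for item in release_checklist["items"]
               if not (item["done"] and item["signed_by"])]
if outstanding:
    print("Release blocked.  Outstanding items:")
    for check in outstanding:
        print(" -", check)
else:
    print("All items signed off; release may proceed.")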

As problems occur, items would be added to the checklist so that the product is protected against a similar failure in the future.

The governing principle here is that one particular problem might broadside the development team once, but after the process is improved, that problem should never occur again.

For example, you might have a stored procedure that goes into production without a “Go” statement at the end.  After the error is discovered, and fixed in production, your team should have a discussion and conclude that a checkbox needs to be added to the quality document stating “All Stored Procedures Confirmed to have ‘Go’ at the end.”

From that point on, whenever a stored procedure is moved into production, the developer presenting it must check for ‘Go’ statements at the end and then sign their name at the bottom of the checklist.
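
A check like this is also easy to automate alongside the checklist.  Here is a minimal sketch (the folder layout and file naming are assumptions on my part) that flags any SQL script in a release folder whose last line is not a GO batch separator:

# Hypothetical sketch: flag .sql scripts that do not end with a GO statement.
from pathlib import Path

def scripts_missing_go(release_dir):
    """Return the .sql files under release_dir whose last non-blank line is not 'GO'."""
    offenders = []
    for script in Path(release_dir).rglob("*.sql"):
        lines = [line.strip() for line in script.read_text().splitlines() if line.strip()]
        if not lines or lines[-1].upper() != "GO":
            offenders.append(script)
    return offenders

for script in scripts_missing_go("releases/2.4.1/sql"):
    print(f"Missing GO statement: {script}")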

This is the difference between process improvement, and hope.  Many companies view process improvement as a discussion and some verbal affirmations.  What they are really doing is “hoping.”

Actually, the “act” of process improvement is physically altering a written process or procedure.  This is the real definition of process improvement–the third ‘P.’

The final endpoint of a quality management system is to achieve excellence.  I’ve heard excellence defined once as “Crisp execution of established procedures.”

You can’t have excellence without procedures, proof, and process-improvement.

Mike J. Berry
www.RedRockResearch.com