Whiteboards for Everyone!

Do you like designing on whiteboards?  I do.   Colorful markers against a clean, white surface inspire all kinds of creativity and fun.

Recently David Crossett of Ready Receipts gave me a great tip. He told me that instead of going to your local OfficeBOX superstore and paying $200 for a 4×8 whiteboard, just hit Home Depot instead and grab a $12 piece of showerboard. It works just as well, and if you need a smaller size they will cut it for you on site at no additional charge! At that price, you can line your walls with thinking space. Power to the consumer. Thanks, David!

Mike J. Berry
www.RedRockResearch.com

Software Development Best Practices – Software Requirements Management

I recently hosted Red Rock Research's second weekly software development best practices seminar for the general public. Our topic was Software Requirements Management.

Requirements management is perhaps the most controversial topic in software development. Everyone seems to have their own technique. It is also the most important skill-set to the overall success of a software project, statistically more important than development skills (Standish CHAOS Report, 2009).

Let me say that another way, because this principle is not intuitive: if you want to improve the performance of your development projects, improve the skill-sets of the business analysts who generate your requirements. Statistically, this boosts a project's outcome more than improvement in any other skill-based area.

Many published requirements management techniques exist, and yet in a $220 billion industry with a project failure/delay rate of 64%, it appears that most of these published techniques are not embraced.

Our seminar covered eliciting, prioritizing, validating, and documenting a requirements baseline. We discussed the progression of system context diagrams, UML actors, use cases, data-flow diagrams, High-Level Overview diagrams, High-Level Design diagrams, and finally the Software Requirements Specification document. We talked briefly about a Concept of Operations document and a System Design Description document.

We discussed the difference between a plan-based documentation stack and a minimized Agile-development documentation stack, which would be generated during a Sprint Zero. (Yes, BTW, you DO create documentation for Agile projects!) We discussed techniques to control scope creep after the requirements baseline, and then discussed techniques for dealing with what I call 'approval noise.'

What puzzles me the most about this topic is an entrenchment I encounter occasionally, as expressed by one of the seminar participants. He stated, after the seminar, that all of this was interesting in a textbook-like manner, but that he felt none of it was practically applicable.

I asked him to explain how his company performs requirements practices and he said, "Well, we have nothing written. We have everything in our head and we just talk across the cubicles." He then told me he was frustrated at some additional items he was asked to add to his project that morning, because the project was supposed to have been completed two weeks ago. He also told me that the owner of his organization wished they had a structured approach to software project management, and that, oh, by the way, many of the programmers were given layoff notices at the beginning of the week because the company is failing.

Hmm, it's almost as if the problem is not properly in focus. Downstream problems are caused by upstream actions or omissions. I mean no disrespect; I just wish to point out the obvious: if companies like this would adopt upstream structure, they would benefit from downstream success.

You see, the problem proper requirements practices solve is not at the development-effort level. It is at the project management, estimation, budgeting, and strategy-planning level: the business level. Software-centric business practices become predictable, and executives can be proactive, if their projects consume the time estimated. Projects will consume the time estimated if they include all of the functionality needed for a desired level of business value, and those functions are identified in whole at the beginning of the project.

This way the software project time-frames and feature-sets can be included accurately in the estimation, budgeting, resource planning, and strategic planning of a company. This way, scope creep will be minimal, and the whole company will benefit from a predictable project delivery process.

Without proper requirements skills, entire feature-sets get missed upstream and need to be added 'at the last moment' downstream, the risk of re-work increases drastically, and recurring cycles of this erode the project manager's and the development team's credibility in the eyes of the executive team and the waiting customers. In worst-case scenarios, this can lead to layoffs and finally company failures.

If you haven't been trained on proper requirements management techniques, you are holding your organization at risk. Attend our next three-day Software Requirements Management training course, held September 7-9 in SLC.

Mike J. Berry, PMP, CSM, CSPM
www.RedRockResearch.com

How to compute % defects removed from release candidate code

Recently someone on StackOverflow.com asked me to explain how to compute the defect removal rate for release candidate software.  There are two methods for producing this number and I teach both in several of my seminars, but I’ll explain the simpler method in this post…

Lawrence Putnam presented this model in his 1992 book, Measures for Excellence. His book reads more like a math text than a software development guide, and it suffers from an unfortunate formula typo which has led to widespread confusion about his models in the industry, but I will explain his defect removal rate calculation process. (I hired a math wizard to examine his data and correct the formula!)

1. For a typical project, code is produced at a rate which resembles a Rayleigh curve. A Rayleigh curve looks like a bell curve with a long tail. See my ASCII graphics below:

||||
|||||||||||
|||||||||||||||||
|||||||||||||||||||||||

2. Error 'creation' typically happens in parallel with, and in proportion to, code creation. So you can think of errors created (or injected) into code as a smaller Rayleigh curve:

||||
|||+++|||||
||||+++++|||||
||||+++++++||||||||

where ‘|’ represents code, and ‘+’ represents errors

3. Therefore, as defects are found, their 'detection rate' will also follow a Rayleigh curve. At some point your defect discovery rate will peak and then start to lessen. This peak, or apex, occurs at about 40% of the volume of a Rayleigh curve.
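Incidentally, that 40% figure is not arbitrary; it falls out of the Rayleigh math itself. Here is a quick sketch (my gloss, not Putnam's derivation), assuming the detection rate follows a textbook Rayleigh density with scale parameter sigma:

    f(t) = \frac{t}{\sigma^2}\,e^{-t^2/(2\sigma^2)}, \qquad F(t) = 1 - e^{-t^2/(2\sigma^2)}
    % f(t) peaks at t = \sigma, so the fraction of all defects
    % found by the time the discovery rate peaks is
    F(\sigma) = 1 - e^{-1/2} \approx 0.39 \approx 40\%

So 'about 40%' is really 39.35%, which is close enough for estimation purposes.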

4. So, when your defect discovery rate peaks and starts to diminish, treat the defects found so far as 40% of all the defects in the code, then use regression analysis to calculate how many defects are still in the code, not yet found.

By regression analysis I mean: if you found 37 defects by the apex after three weeks of testing, you know two things. First, 37 ≈ 40% of the defects in the code, so the code contains ~(37 × 100/40) ≈ 93 errors total. Second, a Rayleigh curve has essentially run its course by about three times its peak time, so total testing time will be about 9 weeks, which works out to an average of roughly 10.3 defects found per week.

Of course, this assumes complete code coverage and a constant rate of testing.
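If it helps, here is a minimal Python sketch of the same arithmetic. The function name and the two constants are mine, chosen to match the heuristics above; they are not from Putnam's book:

    # Estimate total defects and testing duration from the defect-discovery
    # apex, using the Rayleigh-curve heuristics described above.

    APEX_FRACTION = 0.40   # ~40% of all defects are found by the discovery peak
    TAIL_MULTIPLE = 3.0    # a Rayleigh curve is ~99% complete by 3x the peak time

    def estimate_from_apex(defects_at_apex, weeks_to_apex):
        total_defects = defects_at_apex / APEX_FRACTION
        total_weeks = weeks_to_apex * TAIL_MULTIPLE
        remaining = total_defects - defects_at_apex
        return total_defects, total_weeks, remaining

    # The worked example above: 37 defects found by the apex at week 3.
    total, weeks, remaining = estimate_from_apex(37, 3)
    print(f"~{total:.1f} defects total, ~{remaining:.1f} still latent, "
          f"~{weeks:.0f} weeks of testing, ~{total / weeks:.1f} found per week")

For the worked example this prints roughly 92.5 total defects, 9 weeks of testing, and about 10.3 defects found per week, consistent with the figures above.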

Hope this is clear.

Mike J. Berry
www.RedRockResearch.com

A Free Software Requirements Specification Template (SRS)!

Need a good software requirements specification (SRS) template? Use an industry-standard SRS. Can't find one? Well, now you have one: get it here for free. Enjoy!

Mike J. Berry
www.RedRockResearch.com

25 Most Dangerous Information Security Programming Errors

Want to visit ground zero for data security? Experts from SANS, MITRE, SAFECode, EMC, Juniper, Microsoft, Nokia, SAP, Symantec, and the U.S. Department of Homeland Security's National Cyber Security Division last week presented a list of the Top 25 Most Dangerous (Information Security) Programming Errors. Expect to see future government and big-money RFPs mandate that these items be addressed.

Mike J. Berry
www.RedRockResearch.com

Why you should stop using SQL Server 2000+ (even though it’s a superior product!)

SQL Server 2005 is fantastic.  SQL Server 2000 was wonderful.  SQL Server 7 was OK.  I hear SQL Server 2008 will be even better…

…but wait a minute. Really, SQL Server 2000 does everything I need. So do Oracle versions 6, 7, 8, 9, and 10! So do PostgreSQL and MySQL. So what gives?

Don't get me wrong. I grew up in the Microsoft garden. I still have a VB for DOS development kit, and I have used VB 3, 4, 5, 6, and .NET 2003 and 2005. These are superb products. But the madness accumulates with a new release every two years. Microsoft is now forced to offer a downgrade option from Vista to XP because folks are getting fed up.

One client of mine is busy upgrading their flagship product from SQL Server 2000 to SQL Server 2005. Why? Not because of any new features. Not because of a better price. Not for really any reason at all, except for the fact that their customers are asking for it by name.

Because SQL Server (and Oracle) are highly visible on the public's radar, the trade journals devote volumes of marketing copy to the new bells and whistles each release contains. As developers, however, we all know that perhaps the management UI is better, and 64-bit is great, but basically the engine worked fine and met all of our needs several versions earlier.

So my client is spending $500,000 and seven or eight months upgrading their core product, and all of the internal support tools that must work with it, to SQL 2005. Again, why? To 'add value to the customer experience,' meaning that the salesperson can say, "yes, it works with SQL 2005."

You see, in the real world, non-technical or semi-technical people often make the final purchasing decisions for enterprise software. They grasp for 'clues' that should separate an inferior product from a superior one, and, well, version 2005 should be better than 2000, right? Therefore, a product based on 2005 is better than a product based on 2000! Right? In reality, somebody must pay for that $500,000, so that same customer is misleading themselves into demanding a higher price point.

A better model would be to choose a database that is solid but not on the customer's radar. That way, they would have no 'journal information' about the database and would not be pressuring you to spend $500,000 every other year to keep up with Microsoft.

PostgreSQL, MySQL, Interbase, etc. are all robust databases with 64-bit support now.  If your software is for an internal client (your own company), this whole dynamic shouldn’t affect you, but if you make a commercial client-server product, there’s no doubt you’ve experienced this already.

Mike J Berry
www.RedRockResearch.com