Software Development Best Practices – Software Requirements Management

I recently hosted Red Rock Research's second weekly software development best practices seminar for the general public.  Our topic was Software Requirements Management.

Requirements management is perhaps the most controversial topic in software development.  Everyone seems to have their own technique.  It is also the most important skill-set to the overall success of a software project, statistically more important than development skills (Standish CHAOS Report, 2009).  Let me say that another way, because this principle is not intuitive: if you want to improve the performance of your development projects, improve the skill-sets of the business analysts who generate requirements.  Statistically, this boosts a project's outcome more than any other skill-based area.

Many published requirements management techniques exist, and yet in a $220 billion industry with a project failure/delay rate of 64%, it appears that most of these published techniques are not embraced.

Our seminar covered Eliciting, Prioritizing, Validating, and Documenting a requirements baseline.  We discussed the progression of system context diagrams, UML actors, use cases, data-flow diagrams, High-Level Overview diagrams, High-Level Design diagrams, and finally the Software Requirements Specification document.  We talked briefly about a Concept of Operations document and a System Design Description document.

We discussed the difference between a plan-based documentation stack and a minimized Agile-development documentation stack, which would be generated during a Sprint Zero.  (Yes, by the way, you DO create documentation for Agile projects!)

We discussed techniques to control scope creep after the requirements baseline, and then discussed techniques for dealing with what I call 'approval noise.'

What puzzles me the most about this topic is an entrenchment I encounter occasionally, as expressed by one of the seminar participants.  He stated, after the seminar, that all of this was interesting in a textbook-like manner, but that he felt none of it was practically applicable.

I asked him to explain how his company performs requirements practices and he said, "Well, we have nothing written.  We have everything in our head and we just talk across the cubicles."  He then told me he was frustrated by some additional items he was asked to add to his project that morning, because the project was supposed to have been completed two weeks ago.  He also told me that the owner of his organization wished they had a structured approach to software project management, and that, oh, by the way, many of the programmers were given layoff notices at the beginning of the week because the company is failing.

Hmm, it's almost as if the problem is not properly in focus.  Downstream problems are caused by upstream actions or omissions.  I mean no disrespect; I just wish to point out the obvious: if companies like this would adopt upstream structure, they would benefit from downstream success.

You see, the problem that proper requirements practices solve is not at the development-effort level; it is at the project management, estimation, budget, and strategy planning level, in other words, the business level.  Software-centric business practices become predictable, and executives can be proactive, if their projects properly consume the time estimated.  Projects will consume the time estimated if they include all of the functionality needed for a desired level of business value, and those functions are identified in whole at the beginning of the project.
This way the software project time-frames and feature-sets can be included accurately in the estimation, budgeting, resource planning, and strategic planning of a company.  Scope creep will be minimal, and the whole company will benefit from a predictable project delivery process.

Without proper requirements skills, entire feature-sets get missed upstream and need to be added 'at the last moment' downstream, the risk of re-work increases drastically, and recurring cycles of this erode the credibility of project managers and the development team in the eyes of the executive team and the waiting customers.  In worst-case scenarios, this can lead to layoffs and, finally, company failure.

If you haven't been trained on proper requirements management techniques, you are holding your organization at risk.  Attend our next three-day Software Requirements Management training course, held September 7-9 in Salt Lake City.

Mike J. Berry, PMP, CSM, CSPM
www.RedRockResearch.com

Book Review: The Book of Five Rings

Recently, while I was attending the '09 Agile Roots conference in Salt Lake City, UT, Alistair Cockburn, the keynote speaker, referenced Miyamoto Musashi's 17th-century book The Book of Five Rings.

I like Asian philosophy (and swords and such) so I picked up the book and read it.  The book was written in 1643 by an undefeated Japanese samurai master who was so effective he was rumoured to have spent the latter part of his career entering sword-fights purposely without a weapon.  Although meant as a battlefield manual, the book has gained popularity as a handbook for conducting business in the 21st century.

The book was translated into English by Thomas Cleary at some point, and the edition I read was published in 2005.  Although titled The Book of Five Rings, the book is actually a compilation of five scrolls.

The Earth Scroll: Musashi talks about how a straight path levels the contours of the Earth and how various occupations provide life-improving principles.  He talks about observing patterns and learning from them.  Certainly a great primer for any business trying to get across the chasm.

The Water Scroll: Here Musashi talks about how water conforms to the shape of its container.  He suggests a separation of one's inward mind from its outward posture, maintaining that control over one's mind must not be relinquished to outward circumstances.  He translates these philosophies into about 80 pages of sword-fighting techniques.  An interesting modern parallel is found in Jim Collins's book Good to Great, where he talks about how the most successful companies are able to say 'No' and not be influenced by immediate but non-strategic opportunities.

The Fire Scroll: As with any book written by a 17th-century samurai master, you'd expect a core discussion of combat strategy.  The fire scroll is full of combat strategies, positioning, and pre-emptive theory.  Very interesting.  Did anyone notice how Apple's announcement of the latest iPhone came about one day after the Palm Pre was officially launched, killing its market blitz?  No coincidence there.

The Wind Scroll: The wind scroll contains a directive to study and be aware of your opponent's techniques.  Translated into business-speak, this means one should always study one's competitors.  Be aware of new offerings, partnerships, markets, etc. that they pursue.  Emphasis is placed on observing rhythms and strategically harmonizing, or dis-harmonizing, with them as appropriate.

Finally, The Emptiness Scroll:  This scroll discusses the value of escaping personal biases.  Emphasis is placed on not lingering on past situations and being able to adjust quickly to new scenarios.

Overall I found this book ‘enlightening’ to read.  If you like metaphors and inferences, or sword-fighting, then you will enjoy this book.

Mike J. Berry
www.RedRockResearch.com

Two Days with Alistair Cockburn


I recently attended an Agile Development Product Owner class taught by Alistair Cockburn.  The content was excellent.  He taught us about the proper perspectives an Agile Product Owner needs in order to interact successfully with the project sponsors, users, and the development team.

Alistair Cockburn has authored several books on Agile development and is one of the original signers of the Agile Manifesto.

I would describe Alistair's classroom environment as squirrely and fun.  We built user stories out of the Rumpelstiltskin and Cinderella stories (from the original, not-for-children European versions, full of violence and gore!).

We also discussed the differences between Use Cases and User Stories.  I was happy to hear he prefers Use Cases, because so do I.

All class attendees had already been through the ScrumMaster course, so as we executed sprints against our product backlog, it was interesting to see how many attendees actually sought the sponsors' and users' feedback during the iterations without being reminded.

Overall it was an educational and enjoyable experience.

Mike J. Berry
www.RedRockResearch.com

Software Development Best Practices – Software Estimation


Red Rock Research held the first in a weekly series of seminars on software development best practices yesterday at the Miller Campus – Professional Development Center.  Our topic was Software Estimation.

We covered the typical informal methods (Fuzzy Logic, Wide-band Delphi, and Planning Poker) and the primary formal methods (Function Point counting, the Putnam Model, COCOMO II, and COSMIC-FFP).  We also discussed how to estimate the percentage of defects still in your application at the time of release.
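
To give a flavor of what a formal estimation model looks like, here is a minimal sketch of the basic COCOMO 81 effort equation, the 1981 predecessor of the COCOMO II model we covered.  The coefficients are the published organic-mode values; the function name and the 32 KLOC example are just my own illustration.

def cocomo81_organic(kloc):
    """Basic COCOMO 81, organic mode: returns (person-months, calendar months)."""
    effort = 2.4 * kloc ** 1.05      # effort in person-months
    schedule = 2.5 * effort ** 0.38  # schedule in calendar months
    return effort, schedule

effort, schedule = cocomo81_organic(32)   # e.g., a 32 KLOC project
print("~%.0f person-months over ~%.0f months" % (effort, schedule))
# prints: ~91 person-months over ~14 months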

Along with 'how' to estimate software projects accurately, we discussed how to manage the expectations of the executive team and the investors, who typically want everything now.  Chris Perry, from the Utah IEEE CS chapter, was in attendance and said, "All the things you're talking about, I've been living for the past 10 years!"

Join us this Thursday, July 16 for our 2-hour seminar on Software Requirements Management.  The cost is only $10!

Mike J. Berry
www.RedRockResearch.com

Don’t miss these Software Development Best Practice Workshops…

I’m hosting weekly Software Development Best Practice workshops each Thursday during the next four weeks.  These are held during work hours so ask your manager/VP/CIO and perhaps they would like to come along.  The topics are different each week.

This is basically a summary of the three-day courses I am now offering.  I'm giving the info away to get some attention in the valley.  Each workshop runs from 3:00 to 5:00 pm Thursday afternoon at the Miller Campus – Professional Development Center.  This represents a tremendous value, as I have put over 3,000 hours of research into the material and consumed over 100 industry books.

Topics

Software Estimation – July 9th

Software Requirements Management – July 16th

Software Quality Systems Management – July 23rd

Software Development Life Cycle (SDLC) Management – July 30th

Event Calendar and Info

http://www.utahtechcouncil.org/Events/Community-Events/Community-Calendar.aspx

Hope to see you there!

Mike J. Berry
www.RedRockResearch.com

How to compute % defects removed from release candidate code

Recently someone on StackOverflow.com asked me to explain how to compute the defect removal rate for release candidate software.  There are two methods for producing this number and I teach both in several of my seminars, but I’ll explain the simpler method in this post…

Lawrence Putnam presented this model in his 1992 book Measures for Excellence.  His book reads more like a math text than a software development guide, and it suffers from an unfortunate formula typo which has led to widespread confusion about his models in the industry, but I will explain his defect removal rate calculation process.  (I hired a math wizard to examine his data and correct the formula!)

1. For a typical project, code is produced at a rate that resembles a Rayleigh curve.  A Rayleigh curve looks like a bell curve with a long tail.  See my ASCII graphics below:

||||
|||||||||||
|||||||||||||||||
|||||||||||||||||||||||

2. Error 'creation' typically happens in parallel with, and in proportion to, code creation.  So you can think of errors created (or injected) into the code as a smaller Rayleigh curve:

||||
|||+++|||||
||||+++++|||||
||||+++++++||||||||

where ‘|’ represents code, and ‘+’ represents errors
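
To make that picture concrete, here is a small numeric sketch of a Rayleigh curve in Python.  The peak-at-week-3 figure and the scaling are my own illustration, not Putnam's numbers:

import math

def rayleigh_rate(week, peak_week):
    """Rayleigh density with its peak (mode) at peak_week."""
    return (week / peak_week ** 2) * math.exp(-week ** 2 / (2 * peak_week ** 2))

# Rough weekly profile of code (or defect) production, peaking at week 3
for week in range(1, 10):
    print("week %d: %s" % (week, "|" * round(100 * rayleigh_rate(week, 3))))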

3. Therefore, as defects are found, their 'detection rate' will also follow a Rayleigh curve.  At some point your defect discovery rate will peak and then start to lessen.  By this peak, or apex, you have covered about 40% of the volume (area) of the Rayleigh curve, meaning about 40% of the total defects have been found.

4. So, when your defect rate peaks and starts to diminish, treat the defects found so far as 40% of all defects, then use regression analysis to calculate how many defects are still in the code and not yet found.

By regression analysis I mean: if you found 37 defects at the apex after three weeks of testing, you know two things.  First, 37 is about 40% of the defects in the code, so the code contains roughly (37 * 100/40) = ~93 defects in total.  Second, because the apex of a Rayleigh curve falls at roughly one-third of the total testing duration, total testing time will be about 9 weeks, which works out to finding an average of about 10 defects per week.

Of course, this assumes complete code coverage and a constant rate of testing.
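
Here is a minimal sketch of that back-of-the-envelope projection in Python.  The 40%-at-peak figure comes from the Rayleigh model above; the "total duration is about three times the time-to-peak" rule is my own restatement of how the 9-week figure falls out of the same curve.

import math

def estimate_remaining_defects(defects_found_at_peak, weeks_to_peak):
    """Rough projection once the weekly defect-discovery rate has peaked."""
    total_defects = math.ceil(defects_found_at_peak / 0.40)  # 37 / 0.40 -> ~93 total
    remaining = total_defects - defects_found_at_peak        # ~56 still latent
    total_weeks = 3 * weeks_to_peak                          # ~9 weeks of testing overall
    avg_rate = total_defects / total_weeks                   # ~10 defects found per week
    return total_defects, remaining, total_weeks, avg_rate

total, remaining, weeks, rate = estimate_remaining_defects(37, 3)
print("~%d defects total, ~%d still latent, ~%d weeks of testing (~%.1f found/week)"
      % (total, remaining, weeks, rate))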

Hope this is clear.

Mike J. Berry
www.RedRockResearch.com

A Free Software Requirements Specification Template (SRS)!

Need a good software requirements specification (SRS) template?  Use an industry-standard SRS.  Can't find one?  Well, now you have one: get it here for free.  Enjoy!

Mike J. Berry
www.RedRockResearch.com

25 Most Dangerous Information Security Programming Errors

Want to visit ground zero for data security?  Experts from SANS, MITRE, SAFECode, EMC, Juniper, Microsoft, Nokia, SAP, Symantec, and the U.S. Department of Homeland Security's National Cyber Security Division last week presented a listing of the Top 25 Most Dangerous (Information Security) Programming Errors.  Expect future government and big-money RFPs to mandate that these items be addressed.

Mike J. Berry
www.RedRockResearch.com

Anatomy of an Execution Plan

Have you been challenged with performing a high-risk task, such as upgrading a prominent server?

Here’s an execution plan template that you can use to guide you.

I. Executive Summary
Brief overview of intended event.

II. Review of Discovery
Details of the efforts made to research what is listed in the following sections.  Meetings, vendor consultations, online resources, and conventional wisdom can be included.

III. Pre-Upgrade Procedures
Steps identified to be taken before the event.

IV. Upgrade Procedures
Steps identified to be taken during the event.

V. Post-Upgrade Procedures
Steps identified to be taken after the event.

VI. Test Plan
Verification procedures to confirm the event was a success.  This section should define the success criteria.

VII. Rollback Plan
What to do in case the worst happens.

VIII. Situational Awareness Plan
After-the-event steps to validate the success of the event with the system's business users.  This should include two-way communication between your group and the business users: announcing the success and providing contact information so they can reach you in case there is still a problem.

IX. Risk-Management plan
A plan listing risks associated with the steps above and recommendations as to how to lower those risks.

X. Schedule
If the event spans many hours or days, you may want to draft a schedule for the benefit of all involved.  Include on the schedule the 'rollback point,' which is the latest time a rollback could be successfully performed.  Your success criteria would have to be met by this point to avoid a rollback.

Be sure the Execution Plan is in a checklist format, not a bullet-list format.  Require participants in the event to check off completed checklist items and sign off on the sections they are responsible for.

For critical, high-risk areas (e.g., setting up replication), you may want to require two individuals to perform the checklist steps and sign their names when that section is complete.

If you like, add a ‘lessons learned’ section to be completed later, and keep a copy of the execution plan for historical purposes.

Mike J. Berry
www.RedRockResearch.com

Excellence over Heroics

I value Excellence over Heroics.

‘Excellence’ can be defined as “the crisp execution of established procedures.”  Think about that for a minute.

Do you know of a software development shop where several prominent developers often stay up late into the night, or come in regularly over the weekend to solve high-profile problems, or put out urgent mission-critical fires?

The thrill of delivering when the whole company's reputation is at stake can be addictive.  I remember once staying up 37 hours in a row to deliver an EDI package for a bankers' convention.  I was successful, delivering the application just before it was to be demoed.  I went home and slept for 24 hours straight afterwards.

The problem with 'Heroics' is that the hero is compensating for the effects of a broken process.  Think about that for a minute.

If heroes are needed to make a software development project successful, then really something upstream is broken.

Most problems requiring heroics at the end of a project stem from improper effort estimations, inability to control scope, inadequate project tracking transparency, mismanaged Q/A scheduling, unnecessary gold-plating, or inadequate communication between the development team and the project users/stakeholders.

A well-organized development group hums along like a well-oiled machine.  Proper project scoping, analysis, design deconstruction, estimating, tracking, and healthy communication between development and the users/stakeholders will bring the excellence that trumps heroics.

Hey, I hear that Microsoft is looking for some Heroes.

Mike J. Berry
www.RedRockResearch.com
