Software Development Best Practices - Software Requirements Management

I recently hosted Red Rock Research's second weekly software development best practices seminar for the general public.  Our topic was Software Requirements Management.

Requirements management is perhaps the most controversial topic in software development.  Everyone seems to have their own technique.  It is also the most important skill-set–statistically more important than development skills–to the overall success of a software project (Standish CHAOS Report, 2009).

Let me say that another way, because this principle is not intuitive: if you want to improve the performance of your development projects, improve the skill-sets of the business analysts who generate your requirements.  Statistically, this gives more of a boost to a project's outcome than improvement in any other skill-based area.

Many published requirements management techniques exist, and yet in a $220 billion industry with a project failure/delay rate of 64%, it appears that most of these published techniques are not embraced.

Our seminar covered eliciting, prioritizing, validating, and documenting a requirements baseline.  We discussed the progression of system context diagrams, UML actors, use cases, data-flow diagrams, High-Level Overview diagrams, High-Level Design diagrams, and finally the Software Requirements Specification document.  We also talked briefly about the Concept of Operations document and the System Design Description document.

We discussed the difference between a plan-based documentation stack and a minimized Agile-development documentation stack–which would be generated during a Sprint Zero.  (Yes, by the way, you DO create documentation for Agile projects!)

We discussed techniques to control scope creep after the requirements baseline, and then techniques for dealing with what I call 'approval noise.'

What puzzles me most about this topic is an entrenchment I encounter occasionally, as expressed by one of the seminar participants.  He stated, after the seminar, that all of this was interesting in a textbook-like manner, but that he felt none of it was practically applicable.

I asked him to explain how his company performs requirements practices, and he said, "Well, we have nothing written.  We have everything in our heads and we just talk across the cubicles."  He then told me he was frustrated by some additional items he had been asked that morning to add to a project that was supposed to be completed two weeks ago.  He also told me that the owner of his organization wished they had a structured approach to software project management, and that–oh, by the way–many of the programmers were given layoff notices at the beginning of the week because the company is failing.

Hmm, it's almost as if the problem is not properly in focus.  Downstream problems are caused by upstream actions or omissions.  I mean no disrespect; I just wish to point out the obvious: if companies like this would adopt upstream structure, they would benefit from downstream success.

You see, the problem that proper requirements practices solve is not at the development-effort level; it is at the project management, estimation, budgeting, and strategic planning–or business–level.

Software-centric business practices become predictable, and executives can be proactive, when projects consume only the time estimated for them.  Projects will consume the time estimated if they include all of the functionality needed for the desired level of business value, and if those functions are identified in whole at the beginning of the project.

This way, software project time-frames and feature-sets can be included accurately in the estimation, budgeting, resource planning, and strategic planning of a company.  This way, scope creep will be minimal, and the whole company will benefit from a predictable project delivery process.

Without proper requirements skills, entire feature-sets get missed upstream and need to be added 'at the last moment' downstream, the risk of re-work increases drastically, and recurring cycles of this erode the project manager's and the development team's credibility in the eyes of the executive team and the waiting customers.  In worst-case scenarios, this can lead to layoffs and, finally, company failure.

If you haven't been trained on proper requirements management techniques, you are putting your organization at risk.  Attend our next three-day Software Requirements Management training course, held September 7-9 in SLC.

Mike J. Berry, PMP, CSM, CSPM
www.RedRockResearch.com


How to compute % defects removed from release candidate code

Recently someone on StackOverflow.com asked me to explain how to compute the defect removal rate for release candidate software.  There are two methods for producing this number and I teach both in several of my seminars, but I’ll explain the simpler method in this post…

Lawrence Putnam presented this model in his 1992 book Measures for Excellence.  His book reads more like a math text than a software development guide, and it suffers from an unfortunate formula typo which has led to widespread confusion about his models in the industry, but I will explain his defect removal rate calculation process.  (I hired a math wizard to examine his data and correct the formula!)

1. For a typical project, code is produced at a rate that resembles a Rayleigh curve.  A Rayleigh curve looks like a bell curve with a long tail.  See my ASCII graphics below:

||||
|||||||||||
|||||||||||||||||
|||||||||||||||||||||||

2. Error 'creation' typically happens in parallel with, and proportional to, code creation.  So you can think of errors created (or injected) into the code as a smaller Rayleigh curve:

||||
|||+++|||||
||||+++++|||||
||||+++++++||||||||

where ‘|’ represents code, and ‘+’ represents errors

3. Therefore, as defects are found, their 'detection rate' will also follow a Rayleigh curve.  At some point your defect discovery rate will peak and then start to lessen.  This peak, or apex, occurs when about 40% of the total volume under the Rayleigh curve has accumulated.

4. So, when your defect discovery rate peaks and starts to diminish, treat the defects found so far as 40% of all defects, then use regression analysis to estimate how many defects are still in the code and not yet found.

By regression analysis I mean that if you found 37 defects at the apex after three weeks of testing, you know two things: 37 is about 40% of the defects in the code, so the code contains roughly (37 * 100/40) = ~93 errors total; and, projecting the rest of the curve forward, total testing time will be about 9 weeks, which works out to an average of roughly 10 defects found per week.

Of course, this assumes complete code coverage and a constant rate of testing.
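
Here is a minimal sketch of this calculation in Python.  The 40%-at-apex rule and the 37-defect example come from the discussion above; the function names and the release-candidate count of 81 are hypothetical illustrations of mine, not Putnam's published formulas.

# Sketch: estimate total latent defects and percent removed, assuming the
# defect-discovery apex marks roughly 40% of all defects (per the post above).

def estimate_total_defects(defects_found_at_apex, apex_fraction=0.40):
    """Project the total defects in the code from the count found at the apex."""
    return defects_found_at_apex / apex_fraction

def percent_defects_removed(defects_found_so_far, total_defects):
    """Percent of the estimated latent defects already removed."""
    return 100.0 * defects_found_so_far / total_defects

# Worked example matching the post: 37 defects found by the apex at week 3.
total = estimate_total_defects(37)            # ~93 defects total
at_apex = percent_defects_removed(37, total)  # ~40% removed so far

# At release-candidate time, plug in the cumulative count found so far.
# The 81 here is a made-up example, not a number from the post.
at_rc = percent_defects_removed(81, total)    # ~88% removed

print(f"Estimated total defects: {total:.0f}")
print(f"Percent removed at apex: {at_apex:.0f}%")
print(f"Percent removed at release candidate: {at_rc:.0f}%")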

Hope this is clear.

Mike J. Berry
www.RedRockResearch.com


A Free Software Requirements Specification Template (SRS)!

Need a good software requirements specification (SRS) template?  Use an industry-standard SRS.  Can't find one?  Well, now you have one: get it here for free.  Enjoy!

Mike J. Berry
www.RedRockResearch.com
Software Development Process Guidance


25 Most Dangerous Information Security Programming Errors

Want to visit ground zero for data security?  Experts from SANS, MITRE, SAFECode, EMC, Juniper, Microsoft, Nokia, SAP, Symantec, and the U.S. Department of Homeland Security's National Cyber Security Division last week presented a listing of the Top 25 Most Dangerous (Information Security) Programming Errors.  Expect to see future government and big-money RFPs mandate that these items be addressed.

Mike J. Berry
www.RedRockResearch.com


Excellence over Heroics

I value Excellence over Heroics.

‘Excellence’ can be defined as “the crisp execution of established procedures.”  Think about that for a minute.

Do you know of a software development shop where several prominent developers often stay up late into the night, or come in regularly over the weekend to solve high-profile problems, or put out urgent mission-critical fires?

The thrill of delivering when the whole company's reputation is at stake can be addictive.  I remember once staying up 37 hours in a row to deliver an EDI package for a bankers' convention.  I was successful, delivering the application just before it was to be demo'd.  Afterwards, I went home and slept for 24 hours straight.

The problem with 'Heroics' is that the hero is compensating for the effects of a broken process.  Think about that for a minute.

If heroes are needed to make a software development project successful, then really something upstream is broken.

Most problems requiring heroics at the end of a project stem from improper effort estimations, inability to control scope, inadequate project tracking transparency, mismanaged Q/A scheduling, unnecessary gold-plating, or inadequate communication between the development team and the project users/stakeholders.

A well-organized development group hums along like a well-oiled machine.  Proper project scoping, analysis, design deconstruction, estimating, tracking, and healthy communication between developers and the users/stakeholders will bring the excellence that trumps heroics.

Hey, I hear that Microsoft is looking for some Heroes.

Mike J. Berry
www.RedRockResearch.com


Software Production Support

In a conversation once, a friend jokingly described his inability to play racquetball against more seasoned players: “They are playing racquetball, while I am just hitting a ball around the room.”

I’ll borrow that reference and apply it to Software Production Support.

Is your Software Production Support group “playing racquetball,” or are they “just hitting a ball around the room?”

From a distance the two can look like the same activity.  On closer inspection, however, one is much more organized, elegant, patterned, and proactive–while the other is only reactive.

Finding order in all the chaos separates the effective from the ineffective.

There are three particular areas your Software Production Support team should focus on:

1. Maintaining Systems
2. Managing Customer Expectations
3. Becoming a Quick-Reaction Force

1. Maintaining Systems:

Think of your production servers like a fleet of cars.  In a fleet plan, the company sends every car to get an oil change after x number of miles, a tire rotation after y number of miles, and a general tune-up, fluid change, etc. after z number of miles.  This pattern repeats for the life of each car the fleet manager services.

How often are your server hard drives defragmented?  How often are the transaction logs backed up?  How often are the indexes rebuilt and the statistics updated?

How often are memory settings adjusted for performance? Latest patches applied? How often are your servers checked to see if there any impending disk space issues?

To maximize system performance, create a “fleet plan” for your servers which checks all of these items at regular intervals.
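
As a minimal illustration, a fleet plan can be as simple as a table of checks and intervals that a scheduled job walks through.  This is a sketch only; the task names and intervals below are hypothetical placeholders, not a prescription:

# Sketch of a server "fleet plan": recurring maintenance checks and intervals.
# The specific tasks and intervals here are illustrative placeholders.
from datetime import date, timedelta

FLEET_PLAN = [
    # (task, interval in days)
    ("Back up transaction logs",            1),
    ("Check free disk space",               1),
    ("Rebuild indexes / update statistics", 7),
    ("Apply latest patches",               30),
    ("Defragment data drives",             90),
    ("Review memory/performance settings", 90),
]

def due_tasks(last_done: dict, today: date):
    """Return the tasks whose interval has elapsed since they were last done."""
    due = []
    for task, interval_days in FLEET_PLAN:
        last = last_done.get(task)
        if last is None or today - last >= timedelta(days=interval_days):
            due.append(task)
    return due

# Example: nothing recorded yet, so every check comes back as due.
print(due_tasks({}, date.today()))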

2. Managing Customer Expectations:

If a server fails, do you know which systems depend on it? If a database goes corrupt, do you know which applications need it, and which corresponding business units will be impacted when that happens?

Do you have a way to communicate to those groups immediately?

Create a dependency map for your products.  A dependency map illustrates which servers host which databases, which databases are used by which applications, and finally the names, numbers, and email groups of the business users affected by a given server or database failure.  This will enable your team to proactively manage your customers' expectations.  You can notify them before they have to notify you.
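
Here is a minimal sketch of a dependency map expressed as a data structure.  The server, database, application, and contact names are made up for illustration:

# Sketch of a dependency map: server -> databases -> applications -> affected
# business contacts.  All names below are hypothetical examples.
DEPENDENCY_MAP = {
    "db-server-01": {
        "OrdersDB": {
            "applications": ["Order Entry", "Shipping Portal"],
            "business_contacts": ["orders-team@example.com", "555-0101"],
        },
        "BillingDB": {
            "applications": ["Invoicing"],
            "business_contacts": ["billing-team@example.com"],
        },
    },
}

def who_to_notify(server: str) -> set:
    """Collect everyone affected if the given server goes down."""
    contacts = set()
    for db in DEPENDENCY_MAP.get(server, {}).values():
        contacts.update(db["business_contacts"])
    return contacts

print(who_to_notify("db-server-01"))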

3. Becoming a Quick-Reaction Force:

SWAT teams, fire stations, and ambulance services all have something in common: they are ready to take action at a moment's notice.

They have the information they need available to them, and additional services available with a simple call.

Do your products have support information organized and readily available?  Do you have the names and numbers of your account representative for each third-party product or tool you support?  Do you have the product-support phone numbers and your support plan credentials readily available?

Do you know who knows what about each application in your enterprise?  Who programmed it originally?  Who has supported it lately?  Which business units use it?  Where is the source code located?

Keeping information about each system updated in a central location should also be part of your “fleet plan.”

Another effective tool for a Quick-Reaction group is a monitoring system: something that indicates the overall health of each of your production servers.  Is disk space running low?  Will the system reply to a ping?  Is SQL Agent running?  Is that required Windows service up and running?  Monitoring tools like Nagios can do this for you.
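
If you don't have a full monitoring tool in place yet, even a small scheduled script can cover the basics.  This is a rough sketch only; the host names and the free-space threshold are placeholders, and a real deployment would lean on a tool like Nagios:

# Minimal health-check sketch: ping reachability and free disk space.
# Host names and the 10% threshold are illustrative placeholders.
import shutil
import subprocess

HOSTS = ["app-server-01", "db-server-01"]

def host_responds_to_ping(host: str) -> bool:
    """Return True if the host answers a single ping.
    ("-c" is the Unix count flag; on Windows use "-n" instead.)"""
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
    return result.returncode == 0

def local_disk_ok(path: str = "/", min_free_fraction: float = 0.10) -> bool:
    """Return True if the volume holding `path` has at least min_free_fraction free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total >= min_free_fraction

for host in HOSTS:
    print(host, "up" if host_responds_to_ping(host) else "DOWN")
print("local disk ok:", local_disk_ok())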

Another great idea is to keep a lessons-learned log for each component you support.  Track problems, fixes to problems, assumptions to be confirmed, and ways to test if the component is functioning properly.

With all of these pieces in place, your production support will be much more effective.

So, think about it…is your Software Production Support team playing racquetball, or are they just hitting a ball around a room?

Mike J. Berry
www.RedRockResearch.com


The Three P’s of a Quality Management System

A Quality Management System, sometimes referred to as a Total Quality Management (TQM) System, is a simple concept that will dramatically improve software production quality over time.

Companies that don't have a quality system commonly find themselves reacting to production and support issues caused by steps that were omitted or overlooked.

A simple rule of thumb is to ask yourself how many fires your development team has put out this month.  If any come to mind, then chances are you don’t have a proper quality management system in place, and should read on…

I remember early in my career I struggled to get my employees to follow our procedures.  Whenever we’d encounter a production problem with our software, it would inevitably be a result of someone not having completely followed an established procedure.

We would have a big discussion about what should have happened, and about how “we can’t forget to do that next time,” yet we’d experience the same omission later.

I would get frustrated because I could never seem to find a way to get my team accountable for following our established procedures–until I discovered the “Quality Management System.”

A Quality Management System has the following three elements (the Three P’s!):

  1. Process (documented–most of us have processes or procedures we are supposed to follow.)
  2. Proof (a separate checklist, or “receipt” that the process was followed for each software release.)
  3. Process-Improvement (a discussion, and then an addition or adjustment to the documented process.)

Most companies have an established–and hopefully documented–software development process.  (If you don't, you can download one from my website for Waterfall or Agile here.)  This is the first 'P' and should be in place at every established development shop.

A great question to ask the team is, "How do you know the process was followed for each release?"  This is where you may get the deer-in-the-headlights response.  This is the second 'P' and is the piece missing from most software development shops.

Think of this 'Proof' document as a checklist accompanying each software release.  The checklist would include every major step in the documented process, the names of the team members performing specific functions, and the locations of the final source code, test scripts, install files, etc.  The checklist would also require a series of quality checks.  For example: Were the requirements signed off by the customer, stakeholder, tester, and developer?  Was the help file updated with the new release number and the appropriate functionality?  Was the source code checked in?  Where is it located?

As problems occur, items would be added to the checklist so that the product is protected against a similar failure in the future.

The governing principle here is that one particular problem might broadside the development team once, but after the process is improved, that problem should never occur again.

For example, you might have a stored procedure that goes into production without a “Go” statement at the end.  After the error is discovered, and fixed in production, your team should have a discussion and conclude that a checkbox needs to be added to the quality document stating “All Stored Procedures Confirmed to have ‘Go’ at the end.”

From that point on, whenever a stored procedure is moved into production, the developer presenting it must check for ‘Go’ statements at the end and then sign their name at the bottom of the checklist.
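
A checklist item like that can even be automated.  Here is a hedged sketch that scans a folder of .sql release scripts and flags any file that does not end with a GO batch separator; the folder name is a placeholder:

# Sketch: flag .sql scripts that do not end with a "GO" batch separator.
# The directory name is a placeholder for wherever release scripts live.
from pathlib import Path

def missing_trailing_go(script_dir: str):
    """Return the .sql files whose last non-blank line is not 'GO'."""
    offenders = []
    for path in Path(script_dir).glob("*.sql"):
        lines = [ln.strip() for ln in path.read_text().splitlines() if ln.strip()]
        if not lines or lines[-1].upper() != "GO":
            offenders.append(path.name)
    return offenders

print(missing_trailing_go("release_scripts"))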

This is the difference between process improvement, and hope.  Many companies view process improvement as a discussion and some verbal affirmations.  What they are really doing is “hoping.”

Actually, the “act” of process improvement is physically altering a written process or procedure.  This is the real definition of process improvement–the third ‘P.’

The final endpoint of a quality management system is to achieve excellence.  I’ve heard excellence defined once as “Crisp execution of established procedures.”

You can’t have excellence without procedures, proof, and process-improvement.

Mike J. Berry
www.RedRockResearch.com


The Bat-Phone

Do you have one of those executives who harasses you for status updates on projects, yet never attends the status update meetings?

Perhaps they call you, email you, or stop by your office, wanting to know the latest on project X?

Is this behavior efficient?  What suggestions do you have about how to convey project status within your organization?

Mike J. Berry
www.RedRockResearch.com


Three-dimensional value systems


What is a value system?

As of late, corporations have discovered that mission statements are only somewhat helpful in providing direction to a company.  Being strategic in nature, they don't provide enough detail to govern the tactical decisions made by corporate employees on a daily basis.

To answer this need, value-statements and value-systems have come into vogue.  Many companies now have value-statements to underscore their mission statements.

Just as some mission statements are more effective than others, some value-systems are more effective than others.

The simple approach to establishing corporate, department, or team values is to get everyone together in a room and have them suggest values the team should adopt.  Voting happens, and the group commits to the agreed-upon values.

After one of these sessions, the group might come up with a list like:

  • respect
  • trust
  • excellence
  • high performance

This list is a start, but it represents only a one-dimensional value system.  These values, by themselves, really don't convey any context or weight.

A more effective approach is a two-dimensional value system, which provides greater context.  For example, you could say your group values:

  • respect over cynicism
  • trust over hope
  • excellence over heroics
  • high-performance over sub-optimization

These comparison value statements provide direction and context.  This represents a two-dimensional value system, and it is more effective than a simple list of values.

A three-dimensional value system is a prioritized list of these comparison statements.  For example, you could say your group values these statements in this order:

  1. trust over hope
  2. excellence over heroics
  3. high-performance over sub-optimization
  4. respect over cynicism

This list shows that trust is the highest factor in inter-departmental dynamics.  It shows that excellence is more important than high-performance (so no cutting corners!), and that the group values trust, excellence, and high-performance more than respect.

Every group will have its own values and its own priorities, but putting a three-dimensional value system in place with your team is a great step forward in building functional team cohesion.

Once it is in place, a reward system can be built around your value system to promote its effectiveness.

Mike J Berry
www.RedRockResearch.com


Great Mission Statements

Jack Welch, in his book, Winning, talks about how to create great mission statements.

He says most mission statements are dull, uninspired, and even unhelpful.  Most groups write their mission statement to describe only what they are in business to do.  While this is not wrong, it creates a whole bunch of mission statements that all look the same among competitors, and are not really valuable.

Welch suggests that a good mission statement not only describes what the company is in business to do, but how they are going to succeed at it.

For example, “We are going to sell lots of chickens” is not as effective as “We are going to sell lots of chickens by growing the largest free-range chickens and advertising their value to the industry.”

Following his logic, I did some research and found some interesting comparisons:

Ford Motor Company of Europe's mission statement (I couldn't find the U.S. mission statement anywhere online) is:

“Our Mission: we are a global, diverse family with a proud heritage, passionately committed to providing outstanding products and services.”

OK, so Ford’s mission is noble, but there is no explanation as to how they will succeed at their mission.  Compare this to Toyota’s mission statement:

“To sustain profitable growth by providing the best customer experience and dealer support.”

Toyota’s mission statement expresses their intention to make money by providing the best customer experience and dealer support.

Indeed, their mission statement tells what they are doing and how they will succeed.  This is an example of an effective mission statement.

There is a business principle at hand here: ambiguity is the enemy of progress.  It's nice that Ford wants to provide outstanding products and services, but their mission statement gives no formula or direction as to how they plan to do this.

Toyota states it will succeed by providing the best customer experience and dealer support.   Are they succeeding at this?

In 2007, Toyota became the largest seller of cars in America.  As customers, we vote with our money.  It seems, then, that they are providing the best customer experience and are fulfilling their mission statement.

On a lighter note, Enron’s mission statement is/was:

“Respect, Integrity, Communication and Excellence.”

Mike J Berry
www.RedRockResearch.com
