It may be wasteful to do walkthroughs or consensus reviews unless a document has successfully exited from Inspection. Otherwise you may be wasting people's time by giving them documents of unknown quality, which probably contain far too many opportunities for misunderstanding, for learning the wrong things and for reaching agreement about the wrong things.
Inspection is not an alternative to walkthroughs for training, or to reviews for consensus. In some cases it is a prerequisite. The different processes have different purposes. You cannot expect to remove faults effectively with walkthroughs, reviews or distribution of documents for comment. However, in other cases it may be wasteful to Inspect documents which have not yet 'settled down' technically. Spending time searching for and removing faults in large chunks which are later discarded is not a good idea. In this case it may be better to aim for approximate consensus documents. The educational walkthrough could occur either before or after Inspection.
4.7.3 Statistical quality improvement
The fundamental difference between Inspection and other review methods is that Inspection provides a tool to help improve the entire development process, through a well-known quality engineering method, statistical process control [Godfrey, 1986].
This means that the data which is gathered and analyzed as part of the Inspection process - data on faults, and the hours spent correcting them - is used to analyze the entire software engineering process. Widespread weaknesses in a work process can be found and corrected. Experimental improvements to work processes can be confirmed by the Inspection metrics - and confidently spread to other software engineers.
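As an illustration, fault density and average fix time can be tracked per document type to spot process weaknesses. The following is a minimal sketch in Python; the log format, field names and figures are all invented for the example, not taken from the text.

```python
# Minimal sketch of statistical process analysis over Inspection data.
# The log format, fields and numbers below are hypothetical.
from collections import defaultdict

# Each record: (document_type, faults_found, pages_inspected, fix_hours)
inspection_log = [
    ("requirements", 12, 10, 6.0),
    ("design",        8,  8, 5.5),
    ("design",       15, 12, 9.0),
    ("code",         20, 15, 4.0),
    ("requirements",  9,  7, 4.5),
]

totals = defaultdict(lambda: [0, 0, 0.0])  # faults, pages, fix_hours
for doc_type, faults, pages, hours in inspection_log:
    totals[doc_type][0] += faults
    totals[doc_type][1] += pages
    totals[doc_type][2] += hours

# Fault density per page highlights which work process is weakest;
# average fix time shows where correction is most expensive.
for doc_type, (faults, pages, hours) in totals.items():
    print(f"{doc_type:12s} density={faults / pages:.2f} faults/page, "
          f"avg fix={hours / faults:.2f} h/fault")
```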
4.7.4 Comparison of Inspection and testing
Inspection and testing both aim at evaluating and improving the quality of the software engineering product before it reaches the customers. The purpose of both is to find and then fix errors, faults and other potential problems.
Inspection and testing can both be applied early in software development, although Inspection can be applied earlier than test. Both Inspection and test, applied early, can identify faults which can then be fixed when it is still much cheaper to do so.
Inspection and testing can be done well or badly. If they are done badly, they will not be effective at finding faults, and this causes problems at later stages: test execution and operational use.
We need to learn from both Inspection and test experiences. Inspection and testing should both ideally (but all too rarely in practice) produce product-fault metrics and process-improvement metrics, which can be used to evaluate the software development process. Data should be kept on faults found in Inspection, faults found in testing, and faults that escaped both Inspection and test and were only discovered in the field. This data would reflect frequency, document location, severity, cost of finding, and cost of fixing.
There is a trade-off between fixing and preventing. The metrics should be used to fine-tune the balance between the investment in the fault detection and fault prevention techniques used. The cost of Inspection, test design, and test running should be compared with the cost of fixing the faults at the time they were found, in order to arrive at the most cost-effective software development process.
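For instance, the cost per fault of each detection point can be compared directly. The sketch below uses invented effort and fix-time figures purely to show the arithmetic; real values would come from the metrics described above.

```python
# Illustrative cost-per-fault comparison for two detection points.
# All figures are invented for the example, not measured data.
inspection = {"effort_hours": 120, "faults_found": 60, "fix_hours_per_fault": 0.5}
testing    = {"effort_hours": 200, "faults_found": 40, "fix_hours_per_fault": 4.0}

def cost_per_fault(stage):
    # Total cost per fault = effort to find it + effort to fix it,
    # with finding effort spread over all faults found at that stage.
    find = stage["effort_hours"] / stage["faults_found"]
    return find + stage["fix_hours_per_fault"]

print(f"Inspection: {cost_per_fault(inspection):.1f} h/fault")
print(f"Testing:    {cost_per_fault(testing):.1f} h/fault")
```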
4.7.5 Differences between Inspection and testing
Inspection can be used long before executable code is available to run tests. Inspection can be applied much earlier than dynamic testing, but it can also be applied earlier than test design activities. Tests can only be designed when a requirements or design specification has been written, since that specification is the source for knowing the expected result of a test execution.
The one key thing that testing does and Inspection does not is to evaluate the software while it is actually performing its function in its intended (or simulated) environment. Inspection can only examine static documents and models; testing can evaluate the product in operation.
Inspection, particularly the process improvement aspect, is concerned with preventing software engineers from inserting any form of fault into what they write. The information gained from faults found in running tests could be used in the same way, but this is rare in practice.
4.7.6 Benefits of Inspection
Opinion is divided over whether Inspection is a worthwhile element of any product development process. Critics argue that it is costly, demands too much 'up front' time, and is unnecessarily bureaucratic. Supporters claim that the eventual savings and benefits outweigh the costs and that the short-term investment is crucial for long-term savings.
In IT-Start's Developers Guide (1989), it is estimated that 'The cost of non-quality software typically accounts for 30% of development costs and 50% to 60% of the lifecycle costs'. Faults, then, are costly, and this cost increases the later they are discovered. Inspection applied to all software is arguably the prime technique for reducing defect levels (in some cases to virtually zero defects) and for providing an increased maturity level through the use of Inspection metrics. The savings can be substantial.
Direct savings
Development productivity is improved.
Fagan, in his original article, reported a 23% increase in 'coding productivity alone' using Inspection [Fagan, 1976, IBM Systems Journal, p 187]. He later reported further gains with the introduction of moderator training, design and code change control, and test fault tracking.
Development timescale is reduced.
Considering only the development timescales, typical net savings for project development are 35% to 50%.
Cost and time taken for testing is reduced.
Inspection reduces the number of faults still in place when testing starts because they have been removed at an earlier stage. Testing therefore runs more smoothly, there is less debugging and rework and the testing phase is shorter. At most sites Inspection eliminates 50% to 90% of the faults in the development process before test execution starts.
Lifetime costs are reduced and software reliability increased.
Inspection can be expected to reduce total system maintenance costs due to failure reduction and improvement in document intelligibility, therefore providing a more competitive product.
Indirect savings
Management benefits.
Through Inspection, managers can expect access to relevant facts and figures about their software engineering environment, meaning they will be able to identify problems earlier and understand the payoff for dealing with these problems.
Deadline benefits.
Although it cannot guarantee that an unreasonable deadline will be met, through quality and cost metrics Inspection can give early warning of impending problems, helping avoid the temptation of inadequate correction nearer the deadline.
Organizational and people benefits.
For software professionals Inspection means their work is of better quality and more maintainable. Furthermore, they can expect to live under less intense deadline pressure. Their work should be more appreciated by management, and their company's products will gain a competitive edge.
4.7.7 Costs of Inspection
The cost of running an Inspection is approximately 10% - 15% of the development budget. This percentage is about the same as other walkthrough and review methods. However, Inspection finds far more faults for the time spent and the upstream costs can be justified by the benefits of early detection and the lower maintenance costs that result.
As mentioned earlier, the costs of Inspection include additional 'up front' time in the development process and increased time spent by authors writing documents they know will be Inspected. Implementing and running Inspections will involve long-term costs in new areas. An organization will find that time and money go on:
Inspection leader training.
Management training.
Management of the Inspection leaders.
Metric analysis.
Experimentation with new techniques to try to improve Inspection results.
Planning, checking and meeting activity: the entire Inspection process itself.
Quality improvement: the work of the process improvement teams.
The company may also find it effective to consider computerized tools for documentation and consistency checking. Another good investment might be improved meeting rooms or sound insulation so members of the Inspection team can concentrate during checking.
4.7.8 Product Inspection steps
The Inspection process is initiated with a request for Inspection by the author or owner of a task product document.
The Inspection leader checks the document against entry criteria, reducing the probability of wasting resources on a product destined to fail.
The Inspection objectives and tactics are planned. Practical details are decided upon and the leader develops a master plan for the team.
A kickoff meeting is held to ensure that the checkers are aware of their individual roles and the ultimate targets of the Inspection process.
Checkers work independently on the product document, using source documents, rules, procedures and checklists. Potential faults are identified and recorded.
A logging meeting is convened, during which potential faults and issues requiring explanation, identified by individual checkers, are logged. The checkers now work as a team, aiming to discover further faults. Finally, suggestions for improving the process itself are logged.
An editor (usually the author) is given the log of issues to resolve. Faults are now classified as such, and a request for permission to make the corrections and improvements to the product is made to the document's owner. Footnotes might be added to avoid misinterpretation. The editor may also make further process improvement suggestions.
The leader ensures that the editor has taken action to correct all known faults, although the leader need not check the actual corrections.
The exit process is performed by the Inspection leader, who uses both generic and application-specific exit criteria.
The Inspection process is closed and the product made available with an estimate of the remaining faults in a 'warning label'.
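These steps can be pictured as a simple pipeline with entry and exit gates. The following Python sketch is a hypothetical illustration only: the step names follow the text, but the function, criteria and data are invented.

```python
# Hypothetical sketch of the product Inspection steps as a pipeline.
# Step names follow the text; functions and data are invented.

STEPS = [
    "request",      # author/owner requests Inspection
    "entry_check",  # leader checks the document against entry criteria
    "planning",     # leader plans objectives, tactics and the master plan
    "kickoff",      # team briefed on roles and targets
    "checking",     # checkers work independently with sources, rules, checklists
    "logging",      # potential faults and improvement suggestions logged
    "editing",      # editor resolves issues and classifies faults
    "follow_up",    # leader verifies action taken on all known faults
    "exit_check",   # leader applies generic and specific exit criteria
    "close",        # product released with an estimate of remaining faults
]

def run_inspection(document, entry_ok, exit_ok):
    """Walk the steps, aborting early if entry or exit criteria fail."""
    for step in STEPS:
        if step == "entry_check" and not entry_ok(document):
            return "rejected at entry: document not ready for Inspection"
        if step == "exit_check" and not exit_ok(document):
            return "failed exit: too many estimated remaining faults"
    return "closed: product released with 'warning label' estimate"

# Example: entry requires a manageable document; exit caps remaining faults.
doc = {"pages": 12, "est_remaining_faults_per_page": 0.2}
print(run_inspection(doc,
                     entry_ok=lambda d: d["pages"] <= 20,
                     exit_ok=lambda d: d["est_remaining_faults_per_page"] < 0.25))
```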
Exercise
Comparison between Various Techniques
Take a few moments to complete the following table.
4.9.2 McCabe’s complexity metric
McCabe's complexity metric is a measure of the complexity of a module's decision structure. It is the number of linearly independent paths and therefore the minimum number of paths that should be tested. The metric can be calculated in three different ways: as the number of decisions plus one; as the number of 'holes' or connected regions (bearing in mind that there is a fictitious link between the entry and exit of the program); or thirdly by the equation:
M = L - N + 2P

where:
L = the number of links in the graph
N = the number of nodes in the graph
P = the number of disconnected parts of the graph
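To make the equation concrete, the following Python sketch computes M for a small control-flow graph. The graph (a single IF-THEN-ELSE) and the function name are invented for illustration; links are treated as undirected when counting the connected parts P.

```python
# Sketch computing McCabe's metric M = L - N + 2P for a control-flow graph.
# The example graph (a single IF-THEN-ELSE) is invented for illustration.

def mccabe(nodes, links):
    """M = L - N + 2P, with P found by undirected connected components."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)  # treat links as undirected for component counting
    seen, parts = set(), 0
    for n in nodes:
        if n not in seen:
            parts += 1
            stack = [n]
            while stack:
                m = stack.pop()
                if m not in seen:
                    seen.add(m)
                    stack.extend(adj[m] - seen)
    return len(links) - len(nodes) + 2 * parts

# IF-THEN-ELSE: entry branches to then/else, both rejoin at exit.
nodes = ["entry", "then", "else", "exit"]
links = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
print(mccabe(nodes, links))  # 4 links - 4 nodes + 2*1 = 2 (one decision + 1)
```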
Despite its simplicity, the McCabe metric is based on deep properties of program structure. Its greatest advantage is that it is almost as easy to calculate as the 'lines of code' metric, yet it correlates considerably better with faults and with the difficulty of testing.
McCabe advises partitioning programs where complexity is greater than 10, and this has been supported by studies such as Walsh's, which found that the 23% of routines with an M value greater than 10 contained 53% of the faults. There does appear to be a discontinuous jump in the fault rate around M = 10. As an alternative to partitioning, others have suggested that the resources for development and testing should be allocated in relation to the McCabe measure of complexity, giving greater attention to modules that exceed this value.
The weakness of the McCabe metric lies in the assumption that faults are proportional to decision complexity - in other words, that processing complexity and database structure, amongst other things, are irrelevant. Equally, it does not distinguish between different kinds of decisions. A simple IF-THEN-ELSE statement is treated the same as a relatively complicated loop, yet we intuitively know that the loop is likely to have more faults. Also, CASE statements are treated the same as nested IF statements, which is again counterintuitive.