Exercise
Configuration management - 1
Make a list of items that you think the Test Manager should insist are placed under configuration management control.
Exercise
Configuration management - 2
There are many points to consider when implementing CM. We have summarized them into the following three categories:
CM Processes
The framework that specifies how our CM system is to operate and what it is to encompass.
Roles & Responsibilities
Who does what and at what time?
CM Records
The type of records we keep and the manner in which we keep and maintain them.
Quite a short list, you might say. Using the information we have learned so far, try to construct a minimal Configuration Management Plan. Do not try to expand the processes required, but give them suitable titles in an appropriate sequence.
Additionally, for every process you identify, try to match it to one or more segments of the CM Bubble diagram.
5.11 Test estimation, monitoring and control
Test estimation
The effort required to perform the activities specified in the high-level test plan must be calculated in advance. You must remember to allocate time for designing and writing the test scripts as well as estimating the test execution time. If you are going to use test automation, there will be a steep learning curve for new people and you must allow for this as well. If your tests are going to run on multiple test environments, add in extra time here too. Finally, never expect to complete all of the testing in one cycle: there will be faults to fix, and tests will have to be re-run. Decide how many test cycles you will require and try to estimate the amount of re-work (fault fixing and re-testing time).
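The factors above can be combined into a rough estimation formula. The sketch below is only an illustration of the arithmetic; the function name, parameters and all the figures are invented for the example, not drawn from the text.

```python
# Hypothetical sketch of the estimation factors described above.
# All figures (hours, cycle count, rework ratio) are invented.

def estimate_test_effort(script_design_hours, execution_hours_per_env,
                         environments, cycles, rework_ratio,
                         automation_learning_hours=0):
    """Return a rough total-effort figure in hours."""
    # Execution repeats on every environment in every cycle.
    execution = execution_hours_per_env * environments * cycles
    # Rework covers fault fixing and re-testing between cycles.
    rework = execution * rework_ratio
    return (script_design_hours + automation_learning_hours
            + execution + rework)

total = estimate_test_effort(script_design_hours=80,
                             execution_hours_per_env=40,
                             environments=2,
                             cycles=3,
                             rework_ratio=0.25,
                             automation_learning_hours=24)
print(total)  # 80 + 24 + 240 + 60 = 404.0
```

Note how the multiple environments and multiple cycles multiply the execution time, which is why they dominate the estimate.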
Test monitoring
Many test efforts fail despite wonderful plans. One reason is that the test team is so engrossed in the detailed testing effort (working long hours, finding many faults) that it does not have time to monitor progress. Monitoring, however, is vitally important if the project is to remain on track (e.g. use a weekly status report).
Exercise
Try to list what you think are useful measures for tracking test progress.
The Test Manager will have specified some exit (or completion) criteria in the master test plan and will use the monitoring mechanism to help judge when the test effort should be concluded. The test manager may have to report on deviations from the project/test plans, such as running out of time before the completion criteria have been achieved.
Test control - in order to achieve the necessary test completion criteria, it may be necessary to re-allocate resources, change the test schedule, increase or reduce the number of test environments, employ more testers, etc.
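A weekly status check against the exit criteria might look like the sketch below. The metric names and thresholds (95% of tests run, 90% pass rate, no open severity-1 incidents) are assumptions made for the example; they are not taken from the text.

```python
# Sketch of checking monitoring data against hypothetical exit criteria.
# Thresholds and metric names are assumptions for illustration.

def exit_criteria_met(run, planned, passed, open_severity_1):
    coverage = run / planned                  # proportion of planned tests run
    pass_rate = passed / run if run else 0.0  # proportion of run tests passing
    return coverage >= 0.95 and pass_rate >= 0.90 and open_severity_1 == 0

weekly_status = {"run": 190, "planned": 200,
                 "passed": 180, "open_severity_1": 1}
print(exit_criteria_met(**weekly_status))  # False: a severity-1 incident is open
```

A report like this makes deviations visible early, which is exactly when control actions (re-allocating resources, changing the schedule) are still cheap.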
5.12 Incident Management
An incident is any significant, unplanned event that occurs during testing that requires subsequent investigation and/or correction. Incidents are raised when expected and actual test results differ.
5.12.1 What is an incident?
You may now be thinking that incidents are simply another name for faults, but this is not the case. We cannot determine at the time an incident occurs whether there is really a fault in the software, whether the environment was perhaps set up incorrectly, or whether in fact the test script was incorrect. Therefore we log the incident and move on to the next test activity.
5.12.2 Incidents and the test process
An incident occurs whenever an error, query or problem arises during the test process. There must be procedures in place to ensure accurate capture of all incidents. Incident recording begins as soon as testing is introduced into the system's development life cycle. The first incidents raised are therefore against documentation; as the project proceeds, incidents will be raised against database designs, and eventually against the program code of the system under test.
5.12.3 Incident logging
Incidents should be logged when someone other than the author of the product under test performs the testing. When describing an incident, diplomacy is required to avoid unnecessary conflict between the different teams involved in the testing process (e.g. developers and testers). Typically, the information logged for an incident will include:
- Name of tester(s), date/time of incident, software under test ID
- Expected and actual results
- Any error messages
- Test environment
- Summary description
- Detailed description, including anything deemed relevant to reproducing/fixing the potential fault (and continuing with work)
- Scope
- Test case reference
- Severity (e.g. showstopper, unacceptable, survivable, trivial)
- Priority (e.g. fix immediately, fix by release date, fix in next release)
- Classification
- Status (e.g. opened, fixed, inspected, retested, closed)
- Resolution code (what was done to fix the fault)
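One way to see how these fields hang together is as a single record structure. The sketch below is illustrative only: the field names, the severity/priority encodings and the example data are assumptions, not a prescribed format.

```python
# Illustrative record for the incident fields listed above.
# Field names and enumerations are assumptions for the sketch.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentRecord:
    tester: str
    software_id: str
    expected_result: str
    actual_result: str
    summary: str
    test_case_ref: str
    severity: int           # 1=showstopper .. 4=cosmetic
    priority: int           # 1=fix immediately .. 4=no plan to fix
    status: str = "opened"  # opened, fixed, inspected, retested, closed
    raised_at: datetime = field(default_factory=datetime.now)

incident = IncidentRecord("A. Tester", "PAYROLL-2.1",
                          "Total = 100.00", "Total = 0.00",
                          "Bill printed for 0.00", "TC-042",
                          severity=2, priority=1)
print(incident.status)  # opened
```

Capturing every incident in a uniform structure like this is what later makes the tracking, grading and analysis described below possible.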
Incidents must be graded to identify their severity and to improve the quality of reporting information. Many companies use a simple approach, such as a numeric scale of 1 to 4, or high, medium and low. Beizer has devised a list and weighting for faults as follows:
1 Mild - poor alignment, spelling, etc.
2 Moderate - misleading information, redundant information
3 Annoying - bills for 0.00, truncation of name fields, etc.
4 Disturbing - legitimate actions refused; sometimes it works, sometimes not
5 Serious - loss of important material; system loses track of data, records, etc.
6 Very serious - the mis-posting of transactions
7 Extreme - frequent and widespread mis-postings
8 Intolerable - long-term errors from which it is difficult or impossible to recover
9 Catastrophic - total system failure or out-of-control actions
10 Infectious - other systems are being brought down
In practice, in the commercial world at least, this list is over the top, and many companies use a simple approach such as a numeric scale of 1 to 4, as outlined below:
1 Showstopper - very serious fault, including GPF, assertion failure or complete system hang
2 Unacceptable - serious fault where the software does not meet business requirements and there is no workaround
3 Survivable - fault that has an easy workaround; may involve partial manual operation
4 Cosmetic - covers trivial faults like screen layouts, colors, alignments, etc.
Note that incident priority is not the same as severity. Priority relates to how soon the fault will be fixed and is often classified as follows:
1. Fix immediately.
2. Fix before the software is released.
3. Fix in time for the following release.
4. No plan to fix.
It is quite possible to have a severity 1 priority 4 incident and vice versa although the majority of severity 1 and 2 faults are likely to be assigned a priority of 1 or 2 using the above scheme.
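The independence of the two scales shows up naturally when ordering a fix queue. In the sketch below (all data invented), fix order is driven by priority, with severity only as a tie-breaker, so a cosmetic-but-urgent incident outranks a severe-but-deferred one.

```python
# Severity and priority are independent, as the text notes.
# Triage ordering: priority first, severity as tie-breaker. Data invented.

incidents = [
    {"id": "INC-1", "severity": 1, "priority": 4},  # severe but deferred
    {"id": "INC-2", "severity": 4, "priority": 1},  # cosmetic but urgent
    {"id": "INC-3", "severity": 2, "priority": 1},
]

fix_order = sorted(incidents, key=lambda i: (i["priority"], i["severity"]))
print([i["id"] for i in fix_order])  # ['INC-3', 'INC-2', 'INC-1']
```

INC-1 is the severity-1 priority-4 case from the text: the worst fault in the list, yet last in the queue.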
5.12.4 Tracking and analysis
Incidents should be tracked from inception through various stages to eventual close-out and resolution. There should be a central repository holding the details of all incidents.
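Tracking "from inception through various stages to eventual close-out" amounts to enforcing a status life cycle. The sketch below uses the status values listed earlier (opened, fixed, inspected, retested, closed); the allowed-transition table, including the re-open path when a re-test fails, is an assumption for illustration.

```python
# Minimal sketch of an incident status life cycle.
# The transition table is an assumption for illustration.

ALLOWED = {
    "opened":    {"fixed"},
    "fixed":     {"inspected"},
    "inspected": {"retested"},
    "retested":  {"closed", "opened"},  # re-test may fail and reopen
    "closed":    set(),
}

def advance(history, new_status):
    """Append new_status to the history if the transition is legal."""
    if new_status not in ALLOWED[history[-1]]:
        raise ValueError(f"illegal transition {history[-1]} -> {new_status}")
    history.append(new_status)
    return history

log = ["opened"]
for step in ("fixed", "inspected", "retested", "closed"):
    advance(log, step)
print(log)  # ['opened', 'fixed', 'inspected', 'retested', 'closed']
```

Keeping the full history list, rather than just the current status, is what provides the traceability and audit trail the next paragraph calls for.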
For management information purposes, it is important to record the history of each incident. Incident history logs must be raised at each stage as the incident is tracked through to resolution, for traceability and audit purposes. This also allows the formal documentation of the incidents (and the departments that own them) at a particular point in time.
Typically, entry and exit criteria take the form of the number of incidents outstanding by severity. For this reason it is imperative to have a corporate standard for the severity levels of incidents.
Incidents are often analyzed to monitor the test process and to aid in test process improvement. It is often useful to look at a sample of incidents and try to determine the root cause.
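Root-cause analysis of a sample can be as simple as tallying the causes and ranking them, a basic Pareto view. The cause labels and counts below are invented for the sketch.

```python
# Tally root causes from a sample of incidents and rank them.
# Cause labels and sample data are invented for illustration.
from collections import Counter

sample_causes = ["requirements", "code", "code", "test script",
                 "environment", "code", "requirements"]
ranked = Counter(sample_causes).most_common()
print(ranked)
# [('code', 3), ('requirements', 2), ('test script', 1), ('environment', 1)]
```

If most incidents trace back to one cause (here, code), that is where process-improvement effort pays off first.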
5.13 Standards for testing
There are now many standards for testing, classified as QA standards, industry-specific standards and testing standards. These are briefly explained in this section. QA standards simply specify that testing should be performed, while industry-specific standards specify what level of testing to perform. Testing standards specify how to perform testing.
Ideally testing standards should be referenced from the other two.
The following table gives some illustrative examples of what we mean:
Type                          Standard
QA standards                  ISO 9000
Industry-specific standards   Railway signaling standard
Testing standards             BS 7925-1, BS 7925-2
5.14 Summary
In module five you have learnt that the Test Manager faces an extremely difficult challenge in managing the test team and estimating and controlling a particular test effort for a project. In particular you can now:
Suggest five different ways in which a test team might be organized.
Describe at least five different roles that a test team might have.
Explain why the number of test cycles and re-work costs are important factors in estimating.
Describe at least three ways that a test effort can be monitored.
List three methods of controlling the test effort to achieve the necessary completion criteria.
Prioritize incidents.
Understand the importance of logging all incidents.
Understand the need for tracking and analysis of incidents.