Software Testing Basics
What is Software Testing?
There are various definitions of testing --
"Testing is the process of executing a program with the intent to certify its quality" -- Mills
"Testing is the process of executing a program with the intent of finding errors/failures/faults" -- Myers
To get a good definition of testing we need to combine elements of the Mills and Myers definitions --
"Testing is the process of exercising software to detect errors and to verify that it satisfies specified functional & non-functional requirements."
A more scientific definition of testing is given by Cem Kaner -- "Testing is a technical investigation of the product under test conducted to provide stakeholders with quality-related information."
Testing Terminology
Error : A human action that produces an incorrect result
Fault : A manifestation of an error in software
Failure : A fault, if encountered, may cause a failure, which is a deviation of the software from its expected delivery or service
Defect : The departure of a quality characteristic from its specified value that results in a product or service not satisfying its normal usage requirements
Reliability : The probability that the software will not cause failure of a system for a specified time under specified conditions
Objectives of Testing
Testing is a quality filter. Its objective is to detect errors, not to prove that there are no errors in the product or program. Testing can show the presence of errors, but it cannot prove their absence. Some specific objectives of testing are:
· Find important bugs to get them fixed
· Assess the quality of the product
· Help managers make release decisions
· Block premature product releases
· Help predict and control costs of product support
· Check interoperability with other products
· Find safe scenarios for use of the product
· Assess conformance to specifications
· Certify that the product meets a particular standard
· Ensure the testing process meets accountability standards
· Minimize the risk of safety-related lawsuits
· Help clients improve product quality and testability
· Evaluate the product for a third party
Different objectives will require different testing strategies and will lead to different test documentation and results.
Limitations of Testing
Testing is a very important activity in the Software Development Life Cycle (SDLC) and a major contributor to the success of a product. Yet testing has some limitations:
· Testing cannot be used to build quality. The quality of the software depends on its design; it can be built into the software by using good software engineering practices for analyzing, designing, coding and testing.
· If something is designed and coded badly, testing cannot correct it beyond a point.
· Testing is expensive because it appears at the end of the process, though static testing techniques (reviews and walkthroughs) help a lot in overcoming this limitation.
· Testing cannot be exhaustive, and selective testing cannot detect all errors.
· Testing itself may contain errors, or the testing tools used may have defects.
· Testing cannot prevent errors; it only detects them.
Economics of Testing
The cost of faults escalates as we move the product towards field use. Early test design can prevent fault multiplication at different stages of development. Analysis of specifications during test preparation often brings faults in specifications to light. The cost of testing is generally lower than the cost associated with major faults (losing customer base and market share, the cost of releasing fixed software versions, business risks, etc.), although few organizations have figures to confirm this. Therefore testing is used as a method of risk assessment and must be prioritized for those areas of greatest risk to the business.
How Much Testing is Enough?
It is possible to do enough testing, but determining how much is enough is difficult. Simply doing what is planned is not sufficient, since it leaves open the question of how much should be planned. What is enough testing can only be confirmed by assessing the results of testing. If many faults are found with a set of planned tests, it is likely that more tests will be required to assure that the required level of software quality is achieved. On the other hand, if very few faults are found with the planned set of tests, then (provided the planned tests can be confirmed as being of good quality) no more tests will be required.
-- Saying that enough testing is done when the customers or end-users are happy is a bit late, even though it is a good measure of the success of testing. It may not be the best test stopping criterion to use if you have very demanding end-users who are never happy!
-- Why not stop testing when you have proved that the system works? It is not possible to prove that a system works without exhaustive testing (which is totally impractical for real systems).
-- Have you tested enough when you are confident that the system works correctly? This may be a reasonable test stopping criterion, but we need to understand how well justified that confidence is. It is easy to give yourself false confidence in the quality of a system if you do not do good testing.
Ultimately, the answer to “How much testing is enough?” is “It depends!”. It depends on risk: the risk of missing faults, of incurring high failure costs, of losing credibility and market share. All of these suggest that more testing is better. However, it also depends on the risk of missing a market window and the risk of over-testing (doing ineffective testing), which suggest that less testing may be better.
We should use risk to determine where to place the emphasis when testing by prioritizing our test cases. Different criteria can be used to prioritize testing including complexity, criticality, visibility and reliability.
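As a minimal sketch of the idea above, the named criteria (complexity, criticality, visibility, reliability) can be combined into a risk score used to order test cases. The weights and the sample scores are illustrative assumptions, not part of any standard.

```python
# Risk-based test prioritization sketch -- weights and scores are assumptions.
def risk_score(test, weights=None):
    """Combine the prioritization criteria into a single risk score."""
    weights = weights or {"complexity": 1.0, "criticality": 2.0,
                          "visibility": 1.0, "reliability": 1.0}
    return sum(w * test.get(criterion, 0) for criterion, w in weights.items())

def prioritize(tests):
    """Order test cases so the highest-risk ones are executed first."""
    return sorted(tests, key=risk_score, reverse=True)

tests = [
    {"name": "report_footer", "criticality": 1, "visibility": 1,
     "complexity": 1, "reliability": 2},
    {"name": "login", "criticality": 5, "visibility": 4,
     "complexity": 2, "reliability": 3},
]
ordered = [t["name"] for t in prioritize(tests)]  # "login" comes first
```

Weighting criticality more heavily, as here, is one common choice; in practice the weights should reflect the business risks identified during planning.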
Testing Measures Software Quality
Testing in itself does not improve quality; it is merely a measure of quality. We don’t know how good the software is until we have run some tests. Once we have run some good tests we can state how many faults we have found (of each severity level) and may also be able to predict how many faults remain (of each severity level). Quality can be measured by testing the relevant factors such as correctness, reliability, usability, maintainability, reusability, testability etc.
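As a small sketch of the measure described above, the faults found by a test run can be tallied per severity level. The severity labels and counts are illustrative data, not real results.

```python
from collections import Counter

# Severity of each fault logged during a test run (illustrative data).
faults_found = ["high", "low", "medium", "high", "low", "low"]

# Quality as measured by testing: faults found at each severity level.
by_severity = Counter(faults_found)  # e.g. by_severity["high"] == 2
```

Counts like these feed the release decision: comparing them against predictions of remaining faults indicates how much more testing may be needed.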
Other Factors That Influence Testing
Other factors that affect our decision on how much testing to perform include possible contractual obligations. For example, a contract between a customer and a software supplier for a bespoke system may require the supplier to achieve 100% statement coverage. Similarly, legal requirements may impose a particular degree of thoroughness in testing, although it is more likely that any legal requirements will require detailed records to be kept (this could add to the administration costs of the testing).
Fundamental Test Process
The Fundamental Test Process comprises five activities: Planning, Specification, Execution, Recording, and Checking for Test Completion. The test process always begins with Test Planning and ends with Checking for Test Completion. Any and all of the activities may be repeated (or at least revisited) since a number of iterations may be required before the completion criteria defined during the Test Planning activity are met. One activity does not have to be finished before another is started; later activities for one test case may occur before earlier activities for another. The five activities are described in more detail below.
Planning :
The basic philosophy is to plan well. All good testing is based upon good test planning. There should already be an overall test strategy and possibly a project test plan in place.
This Test Planning activity produces a test plan specific to a level of testing (e.g. system testing). These test level specific test plans should state how the test strategy and project test plan apply to that level of testing and state any exceptions to them. When producing a test plan, clearly define the scope of the testing and state all the assumptions being made. Identify any other software required before testing can commence (e.g. stubs & drivers, word processor, spreadsheet package or other 3rd party software) and state the completion criteria to be used to determine when this level of testing is complete. Example completion criteria are ---
· 100% statement coverage;
· 100% requirement coverage;
· all screens / dialogue boxes / error messages seen;
· 100% of test cases have been run;
· 100% of high severity faults fixed;
· 80% of low & medium severity faults fixed;
· maximum of 50 known faults remain;
· maximum of 10 high severity faults predicted;
· time has run out;
· testing budget is used up.
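A hedged sketch of how a few of the criteria above could be checked mechanically against the statistics of a test run. The field names (`test_cases_run_pct` and so on) are assumptions made for illustration.

```python
def completion_met(stats):
    """Check a subset of the example completion criteria listed above."""
    return (stats["test_cases_run_pct"] >= 100        # 100% of test cases run
            and stats["high_severity_open"] == 0      # high severity faults fixed
            and stats["low_med_fixed_pct"] >= 80      # 80% of low & medium fixed
            and stats["known_faults_remaining"] <= 50)  # at most 50 known faults

# Illustrative figures from a finished test run.
stats = {"test_cases_run_pct": 100, "high_severity_open": 0,
         "low_med_fixed_pct": 85, "known_faults_remaining": 12}
done = completion_met(stats)
```

In this sketch `done` is true; if even one criterion fails (say, an open high severity fault), the level of testing is not complete and more test cases must be specified.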
Specification :
The fundamental test process describes this activity as designing the test cases using the techniques selected during planning. For each test case, specify its objective, the initial state of the software, the input sequence and the expected outcome. Specification can be considered as three separate tasks:
· Identify test conditions — determines ‘what’ is to be tested
· Design test cases — determines ‘how’ the ‘whats’ (test conditions) are going to be exercised
· Build test cases — implementation of the test cases (scripts, data, etc.).
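As a sketch, the four elements a test case specification names above (objective, initial state, input sequence, expected outcome) can be captured in a simple structure. The bank-account example is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class TestCaseSpec:
    objective: str          # why this test case exists
    initial_state: dict     # state of the software before the test
    inputs: list            # the input sequence to apply
    expected_outcome: str   # what the software should do

# Illustrative specification for one designed test case.
tc = TestCaseSpec(
    objective="Reject a withdrawal that exceeds the balance",
    initial_state={"balance": 100},
    inputs=["withdraw 150"],
    expected_outcome="error: insufficient funds",
)
```

Building the test case then means turning such a specification into executable scripts and test data.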
Execution:
The purpose of this activity is to execute all of the test cases (though not necessarily all in one go). This can be done either manually or with the use of a test execution automation tool (providing the test cases have been designed and built as automated test cases in the previous stage).
The order in which the test cases are executed is significant. The most important test cases should be executed first. In general, the most important test cases are the ones that are most likely to find the most serious faults, but may also be those that concentrate on the most important parts of the system.

There are a few situations in which we may not wish to execute all of the test cases. When testing just fault fixes we may select a subset of test cases that focus on the fix and any likely impacted areas (most likely all the test cases will have been run in a previous test effort). If too many faults are found by the first few tests we may decide that it is not worth executing the rest of them (at least until the faults found so far have been fixed). In practice, time pressures may mean that there is time to execute only a subset of the specified test cases. In this case it is particularly important to have prioritized the test cases to ensure that at least the most important ones are executed.
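A minimal sketch of that last situation: under time pressure, only the highest-priority subset of the specified test cases is executed. The `priority` field and the budget are illustrative assumptions.

```python
def execute_subset(test_cases, budget):
    """Run at most `budget` cases, most important first; return names run."""
    ordered = sorted(test_cases, key=lambda t: t["priority"], reverse=True)
    return [t["name"] for t in ordered[:budget]]

# Illustrative test cases with assumed priority scores.
cases = [{"name": "audit_trail", "priority": 2},
         {"name": "payment", "priority": 9},
         {"name": "login", "priority": 8}]
ran = execute_subset(cases, budget=2)  # only the two most important run
```

Because the cases were prioritized beforehand, cutting the run short still exercises the areas of greatest risk.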
Recording :
In practice the Test Recording activity is done in parallel with Test Execution. To start with we need to record the versions of the software under test and the test specification being used. Then for each test case we should record the actual outcome and the test coverage levels achieved for those measures specified as test completion criteria in the test plan. In this way we will be marking off our progress. The test record is also referred to as the “test log”, but “test record” is the terminology used in the testing world. Note that this has nothing to do with the recording or capturing of test inputs that some test tools perform!
The actual outcome should be compared against the expected outcome and any discrepancy found logged and analyzed in order to establish where the fault lies. It may be that the test case was not executed correctly in which case it should be repeated. The fault may lie in the environment set-up or be the result of using the wrong version of software under test. The fault may also lie in the specification of the test case: for example, the expected outcome could be wrong. Of course the fault may also be in the software under test! In these cases the fault should be fixed and the test case executed again.
The records made should be detailed enough to provide an unambiguous account of the testing carried out. They may be used to establish that the testing was carried out according to the plan.
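The recording steps above can be sketched as a simple log entry: note the versions of the software under test and the test specification, then compare actual against expected outcome and flag any discrepancy for analysis. All field names here are illustrative assumptions.

```python
def record(log, sut_version, spec_version, case_id, expected, actual):
    """Append one test record entry; flag a discrepancy for analysis."""
    entry = {"sut_version": sut_version,    # version of software under test
             "spec_version": spec_version,  # version of test specification
             "case": case_id,
             "expected": expected,
             "actual": actual,
             "status": "pass" if actual == expected else "discrepancy"}
    log.append(entry)
    return entry

log = []
record(log, sut_version="2.1.0", spec_version="1.3", case_id="TC-7",
       expected="error: insufficient funds", actual="balance -50")
```

A "discrepancy" status does not yet say where the fault lies; as described above, analysis must decide whether it is in the execution, the environment, the test specification, or the software itself.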
Checking For Test Completion :
This activity has the purpose of checking the records against the completion criteria specified in the test plan. If these criteria are not met, it will be necessary to go back to the specification stage to specify more test cases to meet the completion criteria. There are many different types of coverage measure and different coverage measures apply to different levels of testing.
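As an illustration of one such coverage measure, statement coverage can be computed from the run's records and compared against a criterion like the 100% target mentioned earlier. The statement numbers and target are made-up example values.

```python
def statement_coverage_pct(executed_statements, total_statements):
    """Percentage of statements exercised by the test run."""
    return 100.0 * len(executed_statements) / total_statements

# Illustrative figures: statements 1, 2, 4 and 5 of 5 were executed.
coverage = statement_coverage_pct(executed_statements={1, 2, 4, 5},
                                  total_statements=5)
criterion_met = coverage >= 100.0  # 80% coverage: more test cases needed
```

When the criterion is not met, as here, the process loops back to the Specification activity to design test cases that exercise the uncovered statements.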
--- Yogini Kale