Tuesday, July 1, 2008

Test Categories and Objectives


Usability Testing


- Interactivity (pull-down menus, buttons)
- Layout
- Readability
- Aesthetics
- Display characteristics
- Time sensitivity
- Personalization


•Using specialized test labs, a rigorous testing process is conducted to gather quantitative and qualitative data on the effectiveness of user interfaces
•Representative or actual users are asked to perform several key tasks under close observation, both by live observers and through video recording
•During and at the end of the session, users evaluate the product based on their experiences


Recovery Testing


A system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. When recovery is automatic (performed by the system itself), the following are evaluated for correctness:
- reinitialization
- checkpointing mechanisms
- data recovery
- restarts
•This test confirms that the program recovers from expected or unexpected events. Events can include a shortage of disk space or an unexpected loss of communication
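
For illustration, a recovery test of this kind can be automated. The sketch below is a hypothetical Python example: the Checkpointer class, the file layout, and the simulated crash are assumptions made for the illustration, not part of any particular product.

    # Minimal sketch of a recovery test against a checkpoint-based restart.
    import json, os, tempfile

    class Checkpointer:
        """Persists progress so a restarted run can resume instead of starting over."""
        def __init__(self, path):
            self.path = path

        def save(self, state):
            with open(self.path, "w") as f:
                json.dump(state, f)

        def load(self):
            if not os.path.exists(self.path):
                return {"processed": 0}
            with open(self.path) as f:
                return json.load(f)

    def process_items(items, ckpt, fail_at=None):
        state = ckpt.load()
        for i in range(state["processed"], len(items)):
            if fail_at is not None and i == fail_at:
                raise RuntimeError("simulated crash")   # forced failure
            state["processed"] = i + 1
            ckpt.save(state)                            # checkpoint after each item
        return state

    def test_restart_resumes_from_checkpoint():
        with tempfile.TemporaryDirectory() as d:
            ckpt = Checkpointer(os.path.join(d, "ckpt.json"))
            items = list(range(10))
            try:
                process_items(items, ckpt, fail_at=5)   # force the failure mid-run
            except RuntimeError:
                pass
            # Restart: the run must resume at item 5, not reprocess items 0-4.
            state = process_items(items, ckpt)
            assert state["processed"] == len(items)

    test_restart_resumes_from_checkpoint()
    print("recovery test passed")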


Documentation Testing


This testing is done to ensure the validity and usability of the documentation
•It covers user manuals, help screens, and installation and release notes
•The purpose is to find out whether the documentation matches the product and vice versa
•A well-tested manual helps train users and support staff faster
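
One narrow, automatable slice of "documentation matches the product" is checking code-level examples. The sketch below uses Python's standard doctest module on an invented discount() function; manuals, help screens and release notes still need manual review against the product.

    # doctest runs the examples embedded in docstrings and reports any
    # example whose documented output no longer matches the product.
    import doctest

    def discount(price, percent):
        """Return the price after applying a percentage discount.

        >>> discount(100.0, 10)
        90.0
        >>> discount(80.0, 25)
        60.0
        """
        return price * (1 - percent / 100)

    if __name__ == "__main__":
        failures, tests = doctest.testmod()
        print(f"{tests} documented examples checked, {failures} mismatched the product")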


Configuration test

•Attempts to uncover errors that are specific to a particular client or server environment.
•Create a cross-reference matrix defining all probable operating systems, browsers, hardware platforms and communication protocols
•Test to uncover errors associated with each possible configuration
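
As a sketch of that cross-reference matrix, the combinations can be enumerated programmatically; the operating systems, browsers and protocols below are illustrative values, not a real support list.

    # Enumerate the configuration matrix as the cross product of each dimension.
    from itertools import product

    operating_systems = ["Windows", "Linux", "macOS"]
    browsers = ["Chrome", "Firefox", "Edge"]
    protocols = ["HTTP/1.1", "HTTP/2"]

    configurations = list(product(operating_systems, browsers, protocols))

    for os_name, browser, protocol in configurations:
        # In practice each row becomes a test environment to provision and
        # run the suite against; here we only enumerate the combinations.
        print(f"{os_name:8} | {browser:8} | {protocol}")

    print(f"{len(configurations)} configurations to cover")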


Regression Testing


•Regression Testing is the testing of software after a modification has been made to ensure the reliability of each software release.
•Testing after changes have been made, to ensure that the changes did not introduce any new errors into the system
•It applies to systems in production undergoing change as well as to systems under development
•Re-execution of some subset of tests that have already been conducted
•The regression test suite contains:
- A sample of tests that will exercise all software functions
- Tests that focus on software functions that are likely to be affected by the change
- Tests for software components that have been changed
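
A minimal sketch of how such a subset might be selected, assuming a hypothetical mapping from test names to the components they exercise: rerun every test that touches a changed component, plus a small sample across everything else.

    # Pick regression tests: affected tests plus a broad sample of the rest.
    import random

    TEST_TO_COMPONENTS = {
        "test_login":       {"auth"},
        "test_logout":      {"auth"},
        "test_checkout":    {"cart", "payments"},
        "test_search":      {"catalog"},
        "test_add_to_cart": {"cart"},
    }

    def select_regression_tests(changed_components, sample_size=1, seed=0):
        changed = set(changed_components)
        affected = [t for t, comps in TEST_TO_COMPONENTS.items() if comps & changed]
        others = [t for t in TEST_TO_COMPONENTS if t not in affected]
        random.Random(seed).shuffle(others)
        return affected + others[:sample_size]   # affected tests + a broad sample

    print(select_regression_tests({"cart"}))
    # e.g. ['test_checkout', 'test_add_to_cart', 'test_login']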


User Acceptance Testing


•A test executed by the end user(s), in an environment simulating the operational environment as closely as possible, that should demonstrate that the developed system meets the functional and quality requirements
•It is not a responsibility of the developing organization


Acceptance Testing


•To test whether or not the right system has been created
•Usually carried out by the end user
•Two types are:
ALPHA TESTING: generally carried out in the presence of the developer, at the developer's site
BETA TESTING: done at the customer's site, with no developer on site


Exploratory Testing


Also known as “Random” testing or “Ad-hoc” testing
•Exploratory testing is simultaneous learning, test design, and test execution (James Bach)
•A methodical approach and style is desirable


Exploratory Testing - Tips


•Test design crafting
•Careful observation
•Critical thinking
•Diverse ideas
•Pooling resources (knowledge, learnings)

What Is a Test Strategy?


•It provides a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time and resources will be required.
•It must incorporate test planning, test case design, test execution, and the collection and evaluation of the resulting data


Debugging


•Occurs as a consequence of successful testing
•Is an action that results in the removal of the error
•Results of the test give a “symptomatic” indication of the software problem

•The symptom and the cause may be geographically remote
•The symptom may be caused by human error
•The symptom may be the result of a timing problem rather than a processing problem
•The symptom may be intermittent


First step in fixing a broken program is getting it to fail in a repeatable manner. – T.Duff


•Testing is a structured process that identifies an error’s “symptoms”
•Debugging is a diagnostic process that identifies an error’s “cause”

Methods/approaches used for debugging:
• Brute force
• Cause elimination – Induction or deduction
• Backtracking


Examples of debugging by brute force:
1. Studying storage dumps (usually a crude display of storage locations)
2. Invoking run-time traces
3. Scattering print statements through the code
4. Using automated debugging tools
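
A small Python sketch of what two of these tactics look like in practice; the buggy average() function is contrived, and sys.settrace is used here as a stand-in for a run-time trace facility.

    # Brute force: scattered print statements plus a run-time line trace.
    import sys

    def average(values):
        total = 0
        for v in values:
            total += v
            print(f"DEBUG: v={v} total={total}")     # scattered print statement
        return total / (len(values) - 1)             # bug: off-by-one denominator

    def trace_lines(frame, event, arg):
        # Run-time trace: report every line executed in traced frames.
        if event == "line":
            print(f"TRACE: line {frame.f_lineno} in {frame.f_code.co_name}")
        return trace_lines

    sys.settrace(trace_lines)
    print("result:", average([2, 4, 6]))             # expected 4.0, prints 6.0
    sys.settrace(None)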


Cause elimination

Debugging by induction:
1. Locate data about what the program did correctly and incorrectly
2. Organize the data
3. Devise a hypothesis about the cause of the error
4. Prove the hypothesis

Debugging by deduction:
1. Enumerate the possible causes of the error
2. Eliminate causes one by one until the actual cause remains, then prove it
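
One mechanical form of elimination is to shrink the failing input until only the part that causes the failure remains (a simple flavour of delta debugging). The parse() function below is a contrived example that fails whenever the input contains a tab character.

    # Cause elimination by shrinking the failing input.
    def parse(text):
        if "\t" in text:
            raise ValueError("unexpected tab")
        return text.split()

    def fails(text):
        try:
            parse(text)
            return False
        except ValueError:
            return True

    def minimize(text):
        """Repeatedly drop characters that are not needed to reproduce the failure."""
        i = 0
        while i < len(text):
            candidate = text[:i] + text[i + 1:]
            if fails(candidate):
                text = candidate      # this character was irrelevant: eliminate it
            else:
                i += 1                # this character is needed; keep it
        return text

    failing_input = "alpha beta\tgamma delta"
    print(repr(minimize(failing_input)))  # shrinks to the minimal cause: '\t'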


Debugging by Backtracking

Beginning at the place where the symptom is uncovered, the source code is traced backward until the site of the cause is found


Error Analysis - When correcting the error, ask these three questions:
- Is the cause of the bug reproduced in another part of the program?
- What "next bug" might be introduced by the fix I'm about to make?
- What could we have done to prevent this bug in the first place?


Metrics


•Why measure?
- Tracking projects against plan
- Taking timely corrective actions
- Getting early warnings
- Basis for setting benchmarks
- Basis for driving process improvements
- Tracking process performance against business objectives


Testing Metrics


Defect Density
•Total defect density = (total number of defects, including both impact and non-impact defects, found in all phases + post-delivery defects) / size
Average Defect Age
•Average defect age = (sum of ((defect detection phase number – defect injection phase number) * number of defects detected in that phase)) / (total number of defects to date)
Defect Removal Efficiency
•DRE = 100 * No. of pre-delivery defects / Total No. of Defects
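
A minimal sketch of computing the three metrics above; the sample figures are made up, and "size" is taken in KLOC here, though any consistent size measure works.

    # Defect density, average defect age, and defect removal efficiency.
    def total_defect_density(defects_all_phases, post_delivery_defects, size):
        return (defects_all_phases + post_delivery_defects) / size

    def average_defect_age(defects_by_phase):
        """defects_by_phase: list of (injection_phase_no, detection_phase_no, count)."""
        weighted = sum((det - inj) * n for inj, det, n in defects_by_phase)
        total = sum(n for _, _, n in defects_by_phase)
        return weighted / total

    def defect_removal_efficiency(pre_delivery_defects, total_defects):
        return 100 * pre_delivery_defects / total_defects

    print(total_defect_density(120, 5, 25.0))                  # 5.0 defects/KLOC
    print(average_defect_age([(1, 2, 10), (1, 4, 5), (3, 4, 20)]))  # ~1.29 phases
    print(defect_removal_efficiency(120, 125))                 # 96.0 %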

Review Effectiveness
•Review effectiveness = 100 * total number of defects found in review / total number of defects
Cost of finding a defect in review (CFDR)
•Cost of finding a defect in reviews = total effort spent on reviews / number of defects found in reviews
Cost of finding a defect in testing (CFDT)
•Cost of finding a defect in testing = total effort spent on testing / number of defects found in testing
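
The same kind of sketch for review effectiveness and the two cost-per-defect metrics; the effort figures (person-hours) and defect counts are illustrative.

    # Review effectiveness and cost per defect found in review/testing.
    def review_effectiveness(defects_in_review, total_defects):
        return 100 * defects_in_review / total_defects

    def cost_per_defect(effort_spent, defects_found):
        return effort_spent / defects_found

    print(review_effectiveness(40, 125))    # 32.0 %
    print(cost_per_defect(80.0, 40))        # CFDR: 2.0 hours per review defect
    print(cost_per_defect(300.0, 85))       # CFDT: ~3.5 hours per testing defect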


Cost of Quality
•% Cost of quality = (total effort spent on prevention + total effort spent on appraisal + total effort spent on failure or rework) * 100 / (total effort spent on the project)
•Failure cost = effort spent on fixing or reworking pre-delivery defects + (3 * effort spent on fixing or reworking post-delivery defects)
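
A worked sketch of the cost-of-quality calculation, using the 3x weighting of post-delivery rework from the definition above; all effort figures are invented.

    # Cost of quality as a percentage of total project effort.
    def failure_cost(pre_delivery_fix_effort, post_delivery_fix_effort):
        # Post-delivery rework is weighted 3x, as in the definition above.
        return pre_delivery_fix_effort + 3 * post_delivery_fix_effort

    def cost_of_quality_pct(prevention, appraisal, failure, total_project_effort):
        return (prevention + appraisal + failure) * 100 / total_project_effort

    fail = failure_cost(pre_delivery_fix_effort=60.0, post_delivery_fix_effort=10.0)
    print(fail)                                            # 90.0 hours
    print(cost_of_quality_pct(40.0, 120.0, fail, 1000.0))  # 25.0 %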

Test Case Effectiveness
•Test Case Effectiveness = # of defects detected using the test cases * 100/ total # of defects detected in testing
•This metric defines the effectiveness of the test cases, measured in terms of the number of defects found in testing using those test cases
•Source of data:
- Defect data and number of test cases from PMS
P.S.: This metric is mainly applicable to V&V projects


Test Case Adequacy
•Test Case Adequacy = No. of actual Test cases * 100 / No. of test cases estimated
•This metric compares the number of actual test cases created against the number estimated at the end of the test case preparation phase
•The estimated number of test cases is based on baseline figures and then added to ePMS
•The number of actual test cases is also derived from ePMS
P.S.: This metric is mainly applicable to V&V projects
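
A small sketch combining the two test-case metrics (effectiveness, above, and adequacy); the counts are illustrative and would normally come from the defect and test repositories (PMS/ePMS).

    # Test case effectiveness and test case adequacy as percentages.
    def test_case_effectiveness(defects_found_by_test_cases, total_defects_in_testing):
        return defects_found_by_test_cases * 100 / total_defects_in_testing

    def test_case_adequacy(actual_test_cases, estimated_test_cases):
        return actual_test_cases * 100 / estimated_test_cases

    print(test_case_effectiveness(72, 85))   # ~84.7 %
    print(test_case_adequacy(230, 250))      # 92.0 %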


Defect Detection Index
• Defect Detection Index = # of defects detected in each phase / total # of defects planned to be detected in each phase
•This is a measure of actual versus planned defects at the end of each phase
•Source of data:
- Defect data from PMS
P.S.: This metric is mainly applicable to V&V projects
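
A minimal sketch of the defect detection index computed per phase; the phase names and counts are illustrative.

    # Actual defects found per phase versus the number planned for that phase.
    planned = {"requirements": 10, "design": 20, "coding": 40, "testing": 30}
    actual  = {"requirements": 8,  "design": 22, "coding": 35, "testing": 33}

    for phase in planned:
        ddi = actual[phase] / planned[phase]
        print(f"{phase:12} DDI = {ddi:.2f}")   # 1.00 means detection went as planned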
