
Tuesday, July 1, 2008

Test Categories and objectives



Usability Testing

- Interactivity (pull-down menus, buttons)
- Layout
- Readability
- Aesthetics
- Display characteristics
- Time sensitivity
- Personalization


•Using specialized test labs, a rigorous testing process is conducted to gather quantitative and qualitative data on the effectiveness of user interfaces
•Representative or actual users are asked to perform several key tasks under close observation, both by live observers and through video recording
•During and at the end of the session, users evaluate the product based on their experiences


Recovery Testing


A system test that forces the software to fail in a variety of ways and checks that recovery is properly performed. The following are evaluated for correctness:
- automatic recovery (performed by the system itself)
- reinitialization
- checkpointing mechanisms
- data recovery
- restarts
•This test confirms that the program recovers from expected or unexpected events. Events can include a shortage of disk space or an unexpected loss of communication


Documentation Testing


This testing is done to ensure the validity and usability of the documentation
•This includes user manuals, help screens, installation and release notes
•The purpose is to find out whether the documentation matches the product and vice versa
•A well-tested manual helps to train users and support staff faster


Configuration test

•Attempts to uncover errors that are specific to a particular client or server environment.
•Create a cross reference matrix defining all probable operating systems, browsers, hardware platforms and communication protocols.
•Test to uncover errors associated with each possible configuration


Regression Testing


•Regression Testing is the testing of software after a modification has been made to ensure the reliability of each software release.
•Testing after changes have been made to ensure that changes did not introduce any new errors into the system.
•It applies to systems in production undergoing change as well as to systems under development
•Re-execution of some subset of tests that have already been conducted
•The regression test suite contains:
- Sample of tests that will exercise all software functions
- Tests that focus on software functions that are likely to be affected by the change
- Tests for software components that have been changed


User Acceptance Testing


•A test executed by the end user(s) in an environment that simulates the operational environment as closely as possible; it should demonstrate that the developed system meets the functional and quality requirements
•Not a responsibility of the Developing Organization


Acceptance Testing


•To test whether or not the right system has been created
•Usually carried out by the end user
•The two types are:
ALPHA TESTING: Generally done in the presence of the developer, at the developer's site
BETA TESTING: Done at the customer's site with no developer on site


Exploratory Testing


Also known as “Random” testing or “Ad-hoc” testing
•Exploratory testing is simultaneous learning, test design, and test execution (James Bach)
•A methodical approach and style is desirable


Exploratory Testing - Tips


•Test design Crafting
•Careful Observation
•Critical thinking
•Diverse Ideas
•Pooling resources (knowledge, learnings)

What Is a Test Strategy?


•It provides a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time and resources will be required.
• It must incorporate test planning, test case design, test execution and resultant data collection and evaluation


Debugging


•Occurs as a consequence of successful testing
•Is an action that results in the removal of the error
•Results of the test give a “symptomatic” indication of the software problem

•The symptoms and the cause may be geographically remote.
•The symptoms may be caused by human errors.
•The symptom may be caused by a timing problem rather than a processing problem.
•The symptom may be intermittent.


First step in fixing a broken program is getting it to fail in a repeatable manner. – T.Duff


•Testing is a structured process that identifies an error’s “symptoms”
•Debugging is a diagnostic process that identifies an error’s “cause”

Methods/Approach used for debugging :
• Brute force
• Cause elimination – Induction or deduction
• Backtracking


Examples of debugging by brute force:
1. By studying storage dumps (usually a crude display of storage locations)
2. By invoking run-time traces
3. By scattering print statements
4. By using automated debugging tools


Cause elimination
Debugging by Induction
1. Locate data about what the program did correctly/incorrectly
2. Organize the data
3. Devise a hypothesis about the cause of the error
4. Prove the hypothesis
Debugging by Deduction
1. Enumerate the possible causes of the error
2. Eliminate each cause of the error


Debugging by Backtracking

Beginning at the place where the symptom is uncovered, the source code is traced backward until the site of the cause is found


Error Analysis - When correcting the error, ask these three questions:
- Is the cause of the bug reproduced in another part of the program?
- What “next bug” might be introduced by the fix that I’m about to make?
- What could we have done to prevent this bug in the first place?


Metrics


•Why Measure?
- Tracking projects against plan
- Taking timely corrective actions
- Getting early warnings
- Basis for setting benchmarks
- Basis for driving process improvements
- Tracking process performance against business objectives


Testing Metrics


Defect Density
•Total Defect density = (Total number of defects including both impact and non-impact, found in all the phases + Post delivery defects)/Size
Average Defect Age
•Average Defect age = (Sum of ((Defect detection phase number – defect injection phase number) * No of defects detected in the defect detection phase))/(Total Number of defects till date)
Defect Removal Efficiency
•DRE = 100 * No. of pre-delivery defects / Total No. of Defects
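A small worked example (all numbers invented purely for illustration): suppose a 40 KLOC product has 110 pre-delivery defects and 10 post-delivery defects.
- Total defect density = (110 + 10) / 40 = 3 defects per KLOC
- DRE = 100 * 110 / 120 = 91.7%
For average defect age, suppose 4 defects injected in phase 1 (requirements) and 6 defects injected in phase 2 (design) are all detected in phase 3 (coding): Average defect age = ((3 - 1) * 4 + (3 - 2) * 6) / 10 = 1.4 phases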

Review Effectiveness
•Review Effectiveness = 100 * Total no. of defects found in review / Total no. of defects
Cost of finding a defect in review(CFDR)
•Cost of finding a defect in reviews = (Total efforts spent on reviews / No. of defects found in reviews)
Cost of finding a defect in testing(CFDT)
•Cost of finding a defect in testing = (Total efforts spent on testing / defects found in testing)


Cost of Quality
•% Cost of Quality = (Total efforts spent on Prevention + Total efforts spent on Appraisal + Total efforts spent on failure or rework)*100/(Total efforts spent on project)
•Failure cost = Efforts spent on fixing or reworking the pre-delivery defects + (3 * efforts spent on fixing or reworking the post-delivery defects)
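As a worked illustration (effort figures invented): suppose a project spends 50 person-hours on prevention, 100 on appraisal, 60 fixing pre-delivery defects and 10 fixing post-delivery defects, out of 1000 person-hours total. Then Failure cost = 60 + (3 * 10) = 90, and % Cost of Quality = (50 + 100 + 90) * 100 / 1000 = 24%.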

Test Case Effectiveness
•Test Case Effectiveness = # of defects detected using the test cases * 100/ total # of defects detected in testing
•This metric measures the effectiveness of the test cases in terms of the number of defects found in testing using those test cases
•Source of Data
- Defect data and number of test cases from PMS
P.S.: - These metrics are mainly applicable to V&V projects


Test Case Adequacy
•Test Case Adequacy = No. of actual Test cases * 100 / No. of test cases estimated
•This metric compares the number of actual test cases created against the test cases estimated at the end of the test case preparation phase
•The estimated number of test cases is based on baseline figures and then added to ePMS
•The number of actual test cases is also derived from ePMS
P.S.: These metrics are mainly applicable to V&V projects




Defect Detection Index
• Defect Detection Index = # of defects detected in each phase / total # of defects planned to be detected in each phase
•This is a measure of actual v/s planned defects at the end of each phase
•Source
- Defect data from PMS
P.S.: - These metrics are mainly applicable to V&V projects

Objectives of Testing & Testing Techniques

Objectives of Testing

To find the greatest possible number of errors with a manageable amount of effort applied over a realistic time span, using a finite number of test cases.


What Does Software testing Reveal?

1. Errors
2. Requirements conformance or the lack of it
3. Performance
4. An indication of quality


Testing Techniques


•Static Testing - Testing software without executing it on a computer; involves only examination/review and evaluation.
•Dynamic Testing - Testing software by executing it.


Types Of Testing Techniques


Static Testing:
- Review
- Code inspection
- Walkthrough
- Desk check

Dynamic Testing:
- White box
- Black box


Static Testing


•Static Testing is a process of reviewing the work product and reviewing is done using a checklist.
•Static Testing helps weed out many errors/bugs at an early stage
•Static Testing lays strict emphasis on conforming to specifications.
•Static Testing can discover dead code, infinite loops, uninitialized and unused variables, and standards violations, and is effective in finding 30-70% of errors
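For instance, a review or static analysis pass could flag defects such as the ones in this purely illustrative C snippet (the function is invented for the example):

int example(int n)
{
    int total;                    /* uninitialized variable */
    for (int i = 0; i < n; )      /* loop counter never incremented: potential infinite loop */
    {
        total += i;
    }
    return total;
    total = 0;                    /* dead code: a statement after return is never reached */
}

None of these defects require running the program to detect, which is the point of static testing.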


Static Testing Methods
• Self Review
• Code Inspection
• Walk Through
• Desk Checking


Code Review Checklist
•Data Reference Errors
•Data Declaration Errors
•Computation errors
•Comparison errors
•Control Flow errors
•Interface errors
•Input/output errors


Code Inspection
•Code inspection is a set of procedures and error detection techniques for group code reading.
•It involves reading or visual inspection of a program by a team of people, hence it is a group activity.
•The objective is to find errors, not to find solutions to the errors
•An inspection team usually consists of:
o A moderator
o A programmer
o The program designer
o A test specialist


Code Inspection Procedure

•The moderator distributes the program’s listing and design specification to the group well in advance of the inspection session
•During the inspection
• The programmer narrates the logic of the program, statement by statement
• During the discourse, questions are raised and pursued to determine whether errors exist
• The program is analyzed against a checklist of historically common programming errors

Code inspection helps in:
- Detecting defects
- Checking conformance to standards/specifications
- Verifying that requirements are transformed into the product


Walkthroughs
•Code walkthrough is a set of procedures and error detection techniques for group code reading
•Like code inspection, it is also a group activity
•A walkthrough meeting involves three to five people, including a moderator, a secretary (responsible for recording all the errors found) and a person who plays the role of test engineer


Desk Checking
•Human error detection technique
•Viewed as a one person inspection or walkthrough
•A person reads a program and checks it with respect to an error list and/or walks test data through it


Dynamic Testing


White Box Test Techniques
• Code Coverage
  - Statement Coverage
  - Decision Coverage
  - Condition Coverage
  - Loop Testing
• Code Complexity
  - Cyclomatic Complexity
• Memory Leakage


Black Box Test Techniques
• Equivalence Partitioning
• Boundary Value Analysis
• Use Case / UML
• Error Guessing
• Cause-Effect Graphing
• State Transition Testing


White Box Test Techniques


•White box testing is logic-driven and permits the test engineer to examine the internal structure of the program
•Examine paths in the implementation
•Make sure that each statement, decision branch,or path is tested with at least one test case
•Desirable to use tools to analyze and track Coverage
•White box testing is also known as structural, glass-box and clear-box


White Box Test Techniques
• Code Coverage
  - Statement Coverage
  - Decision Coverage
  - Condition Coverage
  - Loop Testing
• Code Complexity
• Memory Leakage

Code Coverage


•Measure the degree to which the test cases exercise or cover the logic (source code) of the program
•Types
• Statement Coverage
• Decision Coverage
• Condition Coverage
• Loop Testing


Statement Coverage


•Test cases must be such that every statement in the program is traversed at least once
•Consider the following snippet of code:

void procedure(int a, int b, int x)
{
    if ((a > 1) && (b == 0))
    {
        x = x / a;
    }
    if ((a == 2) || (x > 1))
    {
        x = x + 1;
    }
}
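For this snippet a single test case is enough to achieve statement coverage. For example (values chosen purely for illustration), calling procedure(2, 0, 3) makes the first decision true (x becomes 3 / 2 = 1) and the second decision true (a == 2), so every statement is executed at least once.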

Decision Coverage


Test cases must be such that each decision has a true and false outcome at least once. If we consider the same example as before, we need at least two test cases to execute the true and false outcome of the decisions at least once
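Continuing with the same illustrative snippet, one possible pair is procedure(3, 0, 3), which drives the first decision true and the second decision false (x becomes 1, so neither a == 2 nor x > 1 holds), and procedure(2, 1, 1), which drives the first decision false and the second decision true.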



Condition Coverage


•Test cases are written such that each condition in a decision takes on all possible outcomes at least once.
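In the earlier snippet the individual conditions are a > 1, b == 0, a == 2 and x > 1. One illustrative pair of test cases that makes each condition both true and false at least once is procedure(2, 0, 4), where all four conditions are true (x becomes 2, so x > 1 still holds at the second decision), and procedure(1, 1, 1), where all four conditions are false.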


Loop Testing

•Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs (see the example after the list)
•Types of loops
• Simple Loop
• Nested Loop
• Concatenated Loop
• Spaghetti (unstructured) Loop
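As a sketch of how a simple loop is typically exercised (the loop body and bound n below are placeholders, not from the original text):

/* Illustrative simple loop; n is the maximum number of passes */
for (int i = 0; i < n; i++)
{
    process(i);
}
/* Commonly suggested tests for a simple loop: skip the loop entirely (0 passes),
   1 pass, 2 passes, m passes where m < n, and n-1, n and n+1 passes */

Nested and concatenated loops typically extend this idea, working from the innermost loop outward.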


Basis Path Testing


•Basis Path Testing is a white box testing technique that enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining the basis set of execution paths.
• Test Cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.


Flow Graph


•Main tool for test case identification
•Shows the relationship between program segments; a segment is a sequence of statements with the property that if the first statement of the sequence is executed, then all the other statements in the sequence will also be executed
•Nodes represent one program segment
•Areas bounded by edges and nodes are called regions
•An independent path is any path through the program that introduces at least one new set of processing statements or a new condition


Cyclomatic Complexity


•Cyclomatic Complexity is a software metric that provides a quantitative measure of logical complexity of a program
•When Used in the context of the basis path testing method, value for cyclomatic complexity defines number of independent paths in basis set of a program
•Also provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once
•Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity


Calculating Cyclomatic Complexity


•The cyclomatic complexity of a software module is calculated from a flow graph of the module when used in the context of the basis path testing method
•Cyclomatic complexity V(G) is calculated in one of three ways:
1. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes of the graph
2. V(G) = P + 1, where P is the number of predicate nodes
3. V(G) = R, where R is the number of regions in the graph
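Applying this to the procedure() snippet used earlier (treating each compound condition as a single decision, which is one common convention): the flow graph has 5 nodes (first decision, x = x / a, second decision, x = x + 1, exit) and 6 edges, so V(G) = 6 - 5 + 2 = 3; it has 2 predicate nodes, so V(G) = 2 + 1 = 3; and it encloses 3 regions. The basis set therefore contains 3 independent paths, and at least 3 test cases are needed to cover them.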


Cyclomatic Complexity – Risk Evaluation


Cyclomatic Complexity    Risk Evaluation
1-10                     A simple program, without much risk
11-20                    More complex, moderate risk
21-50                    Complex, high-risk program
Greater than 50          Highly complex, very high-risk program


Memory Leak


•Memory leak is present whenever a program loses track of memory.
•Memory leaks are among the most common types of defect and are difficult to detect
•They can lead to performance degradation or a deadlock condition
•Memory leak detection tools help to identify
• memory allocated but not deallocated
• uninitialized memory locations
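A minimal C illustration of the first case (the function name and buffer size are invented for the example):

#include <stdlib.h>

void build_report(void)
{
    char *buffer = malloc(1024);   /* memory allocated on the heap */
    if (buffer == NULL)
        return;
    /* ... buffer is used here ... */
    /* missing free(buffer): when the function returns, the pointer is lost
       and the allocation can never be released - a memory leak */
}

Called repeatedly (for example in a long-running server), such a function slowly exhausts memory, which is why leak detection tools track allocations that are never deallocated.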



Black Box Test Techniques


•Black box is data-driven, or input/output-driven testing
•The Test Engineer is completely unconcerned about the internal behavior and structure of program
•Black box testing is also known as behavioral, functional, opaque-box and closed-box


Black Box Test Techniques


Tests are designed to answer the following questions:
•How is functional validity tested ?
•What classes of input will make good test cases?
•Is the system particularly sensitive to certain input values?
•What effect will specific combinations of data have on system operations?


Black Box Test Techniques


•Equivalence Partitioning
•Boundary Value Analysis
•Error Guessing
•Cause Effect Graphing
•State transition testing


Equivalence Partitioning


•This method divides the input domain of a program into categories of data for deriving test cases.
•Identify equivalence classes - the input ranges which are treated the same by the software
- Valid classes: legal input ranges
- Invalid classes: illegal or out of range input values
•The aim is to group and minimize the number of test cases required to cover these input conditions


Assumption:
•If one value in a group works, all will work
•One from each partition is better than all from one
•Thus it consists of two steps:
- Identify the equivalence classes
- Write test cases for each class

Examples of types of equivalence classes
•1. If an input condition specifies a continuous range of values, there is one valid class and two invalid classes
Example: The input variable is a mortgage applicant’s income. The valid range is $1000/mo. to $75,000/mo.
- Valid class: {1000 <= income <= 75,000}
- Invalid classes: {income < 1000}, {income > 75,000}

2. If an input condition specifies that a variable, say count, can take a range of values (1 - 999), identify:
- one valid equivalence class (1 <= count <= 999)
- two invalid classes (count < 1 and count > 999)

3. If a “must be” condition is required, there is one valid equivalence class and one invalid class
Example: The mortgage applicant must be a person.
- Valid class: {person}
- Invalid classes:{corporation, ...anything else...}

Example
If we have to test the function int Max(int a, int b), the equivalence classes for its arguments (assuming 16-bit int values) will be:

Argument    Valid values                  Invalid values
a           -32768 <= value <= 32767      value < -32768, value > 32767
b           -32768 <= value <= 32767      value < -32768, value > 32767
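Picking one representative value from each class (the specific numbers are arbitrary), a minimal test selection could be: Max(100, -200) covering both valid classes, Max(-40000, 5) and Max(40000, 5) covering the invalid classes of a, and Max(5, -40000) and Max(5, 40000) covering the invalid classes of b.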

Boundary Value Analysis


•“Bugs lurk in corners and congregate at boundaries …..”
Boris Beizer
•Boundary Conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes.
•Boundary value analysis is a test case design technique that complements Equivalence partitioning
•Test cases at the boundary of each input include the values at the boundary, just below the boundary and just above the boundary

From the previous example, with the valid equivalence class (1 <= count <= 999), boundary value analysis suggests test cases such as count = 0, count = 1, count = 2, count = 998, count = 999 and count = 1000


Error Guessing


•Based on experience and intuition one may add more test cases to those derived by following other methodologies.
•It is an ad hoc approach
•The basis of this approach is that, in general, people have a knack for “smelling out” errors


Error Guessing


•Make a list of possible errors or error-prone situations and then develop test cases based on the list.
•Defect history is useful: the kinds of defects that have occurred in the past are likely to be the kinds that occur in the future.
• Some examples :
• Empty or null lists/strings
• Zero occurrences
• Blanks or null character in strings
• Negative numbers


•Example : Suppose we have to test the login screen of an application. An experienced test engineer may immediately see if the password typed in the password field can be copied to a text field which may cause a breach in the security of the application.
•Error guessing situations for testing a sorting subroutine:
- The input list is empty
- The input list contains only one entry
- All entries in the list have the same value
- The input list is already sorted

Cause Effect Graphing


•A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects.
•It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
• Steps:
- Identify the causes and effects from the specification
- Develop the cause effect diagram
- Create a decision table.
- Develop test cases from the decision table.


Insurance policy renewal example
•An insurance agency has the following norms for adjusting premiums for its policy holders
•If age <= 30 and no claim has been made, the premium increase will be 200, else 500
•For any age, if the number of claims made is 1 to 4, the premium increase will be 1000
•If one or more claims have been made, send a warning letter; if 5 or more claims have been made, cancel the policy
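A decision table built from one possible reading of these norms (the interpretation and layout are illustrative, not from the original text):

Conditions              R1   R2   R3   R4
Age <= 30               Y    N    -    -
Claims = 0              Y    Y    N    N
Claims 1 to 4           -    -    Y    N
Claims >= 5             -    -    N    Y
Actions
Premium increase 200    X
Premium increase 500         X
Premium increase 1000             X
Warning letter                    X    X
Cancel policy                          X

Each rule (column) then yields at least one test case; for example, a 25-year-old policy holder with no claims should see a premium increase of 200 (rule R1).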


State Transition Testing


A testing technique that aids in validating the various states a program passes through as it moves from one visible state to another.
Menu System Example
•The program starts with an introductory menu. As an option is selected, the program changes state and displays a new menu. Eventually it displays some information or a data input screen.
•Each option in each menu should be tested to validate that each selection takes us to the state we should reach next
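A minimal sketch of how such menu states and expected transitions might be tabulated for testing (the states, options and transitions below are invented for illustration):

#include <stdio.h>

/* Illustrative menu states */
enum state { MAIN_MENU, REPORTS_MENU, DATA_ENTRY, NUM_STATES };

#define NUM_OPTIONS 2
#define INVALID -1

/* expected[s][o] = state expected after choosing option o in state s */
static const int expected[NUM_STATES][NUM_OPTIONS] = {
    /* MAIN_MENU    */ { REPORTS_MENU, DATA_ENTRY },
    /* REPORTS_MENU */ { MAIN_MENU,    INVALID    },
    /* DATA_ENTRY   */ { MAIN_MENU,    INVALID    }
};

int main(void)
{
    /* State transition testing walks every (state, option) pair and compares
       the state actually reached with the expected state in the table */
    for (int s = 0; s < NUM_STATES; s++)
        for (int o = 0; o < NUM_OPTIONS; o++)
            printf("state %d, option %d -> expected state %d\n", s, o, expected[s][o]);
    return 0;
}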