Tuesday, July 1, 2008

Test Design & Technique - Summary


•The test case design techniques discussed so far need to be combined to form an overall strategy
•Each technique contributes a set of useful test cases, but none of them by itself produces a thorough set of test cases


Use of the following design strategy may not guarantee that all errors are found, but it represents a reasonable compromise:
1. If the specification contains combinations of input conditions, start with cause-effect graphing
2. Identify valid and invalid equivalence classes for input and output, and supplement the test cases
3. Use boundary value analysis
4. Use error guessing to add additional test cases


SDLC and V Model


•There are distinct test phases that take place in each software life cycle activity
•This is easiest to visualize through the well-known Waterfall model of development and the V-model of testing
•The V proceeds from left to right, depicting the basic sequence of development and testing activities

V Model

The model is valuable because it highlights the existence of several levels or phases of testing and depicts the way each relates to a different development phase.


Why Write Test Cases Before Coding?


•When adding a new feature or enhancing an existing solution, writing test cases forces you to think about what the code is supposed to accomplish.
•You end up with a clean and simple design that does exactly what you expect it to do.


Testing Phases


Unit testing is code-based and performed primarily by developers to demonstrate that their smallest pieces of executable code function suitably.
Integration testing demonstrates that two or more units or other integrations work together properly, and tends to focus on the interfaces specified in low-level design.


Unit Testing


•The most 'micro' scale of testing, used to test particular functions, procedures or code modules. Also called module testing.
•Typically done by the programmer rather than by Test Engineers, as it requires detailed knowledge of the internal program design and code.
•The purpose is to discover discrepancies between the unit's specification and its actual behavior.
•Testing a form, a class or a stored procedure is an example of unit testing


Integration Testing


•Testing of combined parts of an application to determine whether they function together correctly.
•The three main elements are interfaces, module combinations and global data structures.
•Attempts to find discrepancies between the program and its external specification (the program's description from the point of view of the outside world).
•Testing whether the components of a module are integrated properly is an example of integration testing


Integration Testing


•Modules are integrated in two ways:
A) Non-incremental Testing (Big Bang Testing)
Each module is tested independently and, at the end, all modules are combined to form the application.
B) Incremental Testing
Incremental integration is achieved in one of two ways:
1) Top-down approach
2) Bottom-up approach


•Top-Down Incremental Module Integration:
The top module is tested first. Once testing of the top module is done, one of the next-level modules is added and tested. This continues until the last module at the lowest level has been tested.


Integration can proceed depth-first or breadth-first.
Top-down testing
•The main control module is used as a test driver
•Stubs are substituted for all components directly subordinate to the main control module
•Depending on the approach chosen, subordinate stubs are replaced by actual components


•Bottom Up Incremental Module Integration:
Firstly module at the lowest level is tested first. Once testing of that module is done then any one of the next level modules is added to it and tested. This continues till top most module is added to rest all and tested.


Bottom-Up testing
•Low-level components are combined into clusters (builds) that perform a specific sub-function.
•A driver is written to coordinate test case input and output.
•Drivers are removed and clusters are combined, moving upward in the program structure (a minimal stub/driver sketch follows below).
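To make the scaffolding concrete, here is a minimal C sketch of both pieces: a stub that stands in for a not-yet-integrated subordinate module (top-down), and a driver that feeds a low-level unit its test input (bottom-up). All module names here are invented for illustration.

#include <stdio.h>

/* Low-level unit assumed to be real, working code */
int compute_tax(int amount)
{
    return amount / 10;
}

/* Top-down: a stub replaces a subordinate module that has not
   been integrated yet and returns a fixed, predictable value */
int fetch_discount_stub(int customer_id)
{
    (void)customer_id;
    return 5;
}

/* Bottom-up: a driver coordinates test case input and output
   for the low-level unit */
int main(void)
{
    if (compute_tax(100) == 10)
        printf("compute_tax: PASS\n");
    else
        printf("compute_tax: FAIL\n");
    return 0;
}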


Verification and Validation


Verification
• Verification refers to the set of activities that ensure the software correctly implements a specific function.
• The purpose of verification is to answer: Are we building the product right?
Example: code and document reviews, inspections, walkthroughs


Validation
•The purpose of validation is to answer: Are we building the right product?
•Validation refers to a different set of activities, which ensure that the software that has been built is traceable to customer requirements.


Validation
•After each validation test has been conducted, one of two possible conditions exists:
1. The function or performance characteristics conform to specification and are accepted, or
2. They deviate from specification and a deficiency list is created.
Example: a series of black box tests that demonstrate conformity with requirements.

System Testing


•Test the software in the real environment in which it is to operate (hardware, people, information, etc.)
•Observe how the system performs in its target environment, for example in terms of speed, with large volumes of data, or with many users all making multiple requests


•Test how secure the system is and how the system can recover if a fault is encountered in the middle of processing
•System Testing, by definition, is impossible if the project has not produced a written set of measurable objectives for its product.


Types of System Testing


•Performance
•Volume
•Background
•Stress
•Security
•Usability
•Recovery
•Documentation
•Configuration
•Installation



Performance Testing

Performance is the behavior of the system with respect to goals for time, space, cost and reliability.
Performance objectives (a small measurement sketch follows below):
• Throughput: the number of tasks completed per unit time. Indicates how much work has been done within an interval.
• Response time: the time elapsed between input arrival and output delivery.
• Utilization: the percentage of time a component (CPU, channel, storage, file server) is busy.
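As a minimal sketch of how response time and throughput can be measured, the C fragment below times a placeholder operation_under_test(); the function and the task count are assumptions for illustration, and clock() measures processor time rather than wall-clock time.

#include <stdio.h>
#include <time.h>

void operation_under_test(void)
{
    volatile long sum = 0;              /* stand-in for real work */
    for (long i = 0; i < 1000000; i++)
        sum += i;
}

int main(void)
{
    const int tasks = 100;
    clock_t start = clock();
    for (int i = 0; i < tasks; i++)
        operation_under_test();
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("response time per task: %f s\n", elapsed / tasks);
    printf("throughput: %f tasks/s\n", tasks / elapsed);
    return 0;
}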


•The objective of performance testing is to devise test cases that attempt to show that the program does not satisfy its performance objectives.
•To ensure that the system is responsive to user interaction and handles extreme loading without unacceptable operational degradation.
•To test response time and reliability under increased user traffic.
•To identify which components are responsible for performance degradation and what usage characteristics cause the degradation to occur


Volume Testing


This testing subjects the program to heavy volumes of data. For example:
• A compiler would be fed a very large source program to compile.
• An operating system's job queue would be filled to full capacity.
• A file system would be fed enough data to cause the program to switch from one volume to another.


Stress Testing


Stress testing involves subjecting the program to heavy loads or stresses. The idea is to try to “break” the system; that is, we want to see what happens when the system is pushed beyond its design limits.
It is not the same as volume testing: heavy stress is a peak volume of data encountered over a short time.


•In stress testing, a considerable load is generated as quickly as possible in order to stress the application and find the maximum number of concurrent users the application can support
•Stress tests execute a system in a manner that demands resources in abnormal quantity, frequency, or volume
Example:
1. Generate 5 interrupts when the average rate is 2 or 3
2. Increase the input data rate
3. Test cases that require maximum memory

Stress tests should answer the following questions:
•Does the system degrade gently, or does the server shut down?
•Are appropriate messages displayed? E.g. “Server not available”
•Are transactions lost as capacity is exceeded?
•Are certain functions discontinued as capacity reaches the 80 or 90 percent level?

Security Testing


•Security testing verifies that protection mechanisms built into the system will protect it from improper penetration.
•Security testing is the process of executing test cases that attempt to subvert the program’s security checks


Example:
•One tries to break the operating system’s memory protection mechanisms
•One tries to subvert the DBMS’s data security mechanisms
•The role of the developer is to make the cost of penetration greater than the value of the information that would be obtained


Localization Testing


Localization translates the product UI and occasionally changes some settings to make it suitable for another region.
•The test effort during localization testing focuses on:
- Areas affected during localization, namely the UI and content
- Culture/locale-specific, language-specific and region-specific areas


Usability Testing- Human Computer Interactions


Usability is
•The effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in a particular environment (ISO 9241-11)
•Effective: accomplishes the user’s goal
•Efficient: accomplishes the goal quickly
•Satisfying: the user enjoys the experience

Objectives of Testing & Testing Techniques

Objectives of Testing

To find the greatest possible number of errors with a manageable amount of effort applied over a realistic time span, using a finite number of test cases.


What Does Software Testing Reveal?

1. Errors
2. Requirements conformance or the lack of it
3. Performance
4. An indication of quality


Testing Techniques


•Static Testing - Testing software without executing it on a computer. Involves only examination/review and evaluation.
•Dynamic Testing - Testing software by executing it.


Types Of Testing Techniques


Static Testing:
• Review
• Code inspection
• Walkthrough
• Desk check

Dynamic Testing:
• White box
• Black box


Static Testing


•Static Testing is the process of reviewing a work product, using a checklist.
•Static Testing helps weed out many errors/bugs at an early stage
•Static Testing lays strict emphasis on conformance to specifications.
•Static Testing can discover dead code, infinite loops, uninitialized and unused variables, and standards violations, and is effective in finding 30-70% of errors


Static Testing Methods
• Self Review
• Code Inspection
• Walk Through
• Desk Checking


Code Review Checklist
•Data reference errors
•Data declaration errors
•Computation errors
•Comparison errors
•Control flow errors
•Interface errors
•Input/output errors


Code Inspection
•Code inspection is a set of procedures and error-detection techniques for group code reading.
•It involves reading or visual inspection of a program by a team of people, hence it is a group activity.
•The objective is to find errors, not solutions to the errors
•An inspection team usually consists of:
o A moderator
o A programmer
o The program designer
o A test specialist


Code Inspection Procedure

•The moderator distributes the program’s listing and design specification to the group well in advance of the inspection session
•During the inspection:
• The programmer narrates the logic of the program, statement by statement
• During the discourse, questions are raised and pursued to determine whether errors exist
• The program is analyzed against a checklist of historically common programming errors

Code inspection helps in:
• Detecting defects
• Checking conformance to standards/specifications
• Verifying that requirements are correctly transformed into the product


Walkthroughs
•A code walkthrough is a set of procedures and error-detection techniques for group code reading
•Like code inspection, it is a group activity
•A walkthrough meeting involves three to five people: one acts as moderator, one as secretary (responsible for recording all the errors found), and one plays the role of Test Engineer


Desk Checking
•A human error-detection technique
•Viewed as a one-person inspection or walkthrough
•A person reads a program and checks it against an error list and/or walks test data through it


Dynamic Testing


White Box Test Techniques
• Code Coverage
  - Statement Coverage
  - Decision Coverage
  - Condition Coverage
  - Loop Testing
• Code Complexity
  - Cyclomatic Complexity
• Memory Leakage


Black Box Test Techniques
• Equivalence Partitioning
• Boundary Value Analysis
• Use Case / UML
• Error Guessing
• Cause-Effect Graphing
• State Transition Testing


White Box Test Techniques


•White box testing is logic-driven and permits the Test Engineer to examine the internal structure of the program
•Examine paths in the implementation
•Make sure that each statement, decision branch, or path is tested by at least one test case
•It is desirable to use tools to analyze and track coverage
•White box testing is also known as structural, glass-box and clear-box testing


White Box Test Techniques
• Code Coverage
  - Statement Coverage
  - Decision Coverage
  - Condition Coverage
  - Loop Testing
• Code Complexity
• Memory Leakage

Code Coverage


•Measures the degree to which the test cases exercise or cover the logic (source code) of the program
•Types:
• Statement Coverage
• Decision Coverage
• Condition Coverage
• Loop Testing


Statement Coverage


•Test cases must be such that every statement in the program is executed at least once
•Consider the following snippet of code:

void procedure(int a, int b, int x)
{
    if ((a > 1) && (b == 0)) {
        x = x / a;
    }
    if ((a == 2) || (x > 1)) {
        x = x + 1;
    }
}
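For this snippet, a single test case such as a=2, b=0, x=3 (values chosen for illustration) executes every statement: the first decision is true, so x becomes 1 after the integer division, and the second decision is true because a equals 2.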

Decision Coverage


Test cases must be such that each decision has both a true and a false outcome at least once. If we consider the same example as before, we need at least two test cases to exercise the true and false outcomes of both decisions (see the worked pair below).
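For instance (values chosen for illustration): a=3, b=0, x=3 makes the first decision true and the second false (after x = x/a, x is 1 and a is not 2), while a=2, b=1, x=1 makes the first decision false and the second true.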



Condition Coverage


•Test cases are written such that each condition in a decision takes on all possible outcomes at least once.
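Continuing the same snippet, the four conditions are a>1, b==0, a==2 and x>1. One possible pair of test cases is (a=2, b=0, x=4), which drives all four conditions true (x is 2 when the second decision is evaluated), and (a=1, b=1, x=1), which drives all four false.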


Loop Testing

•Loop testing is a white box testing technique that focuses exclusively on the validity of loop constructs (a common guideline follows below)
•Types of loops:
• Simple Loop
• Nested Loop
• Concatenated Loop
• Spaghetti Loop
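For a simple loop that allows a maximum of n passes, a commonly cited guideline (e.g. Pressman’s) is to test with 0, 1 and 2 passes, a typical number m of passes (m < n), and n-1, n and n+1 passes.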


Basis Path Testing


•Basis Path Testing is a white box testing technique that enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining the basis set of execution paths.
• Test Cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing.


Flow Graph


•The main tool for test case identification
•Shows the relationship between program segments; a segment is a sequence of statements having the property that if the first member of the sequence is executed, then all other statements in that sequence will also be executed
•Each node represents one program segment
•Areas bounded by edges and nodes are called regions
•An independent path is any path through the program that introduces at least one new set of processing statements or a new condition


Cyclomatic Complexity


•Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program
•When used in the context of the basis path testing method, the value of cyclomatic complexity defines the number of independent paths in the basis set of a program
•It also provides an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once
•Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity


Calculating Cyclomatic Complexity


•The cyclomatic complexity of a software module is calculated from a flow graph of the module when used in the context of the basis path testing method
•Cyclomatic complexity V(G) is calculated in one of three ways:
1. V(G) = E - N + 2, where E is the number of edges and N is the number of nodes of the graph
2. V(G) = P + 1, where P is the number of predicate nodes
3. V(G) = R, where R is the number of regions in the graph
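As a worked example (one way of drawing the graph, treating each compound decision as a single predicate node): the two-decision procedure from the statement coverage example has a flow graph with 5 nodes and 6 edges, so V(G) = 6 - 5 + 2 = 3; equivalently, with 2 predicate nodes, V(G) = 2 + 1 = 3, and the graph has 3 regions (two bounded plus the outer one). The basis set therefore contains 3 independent paths.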


Cyclomatic Complexity – Risk Evaluation


Cyclomatic Complexity    Risk Evaluation
1-10                     A simple program, without much risk
11-20                    More complex, moderate risk
21-50                    Complex, high-risk program
Greater than 50          Highly complex, very high-risk program


Memory Leak


•A memory leak is present whenever a program loses track of memory it has allocated.
•Memory leaks are among the most common types of defects and are difficult to detect
•They can lead to performance degradation or a deadlock condition
•Memory leak detection tools help to identify:
• memory allocated but not deallocated
• uninitialized memory locations
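A minimal C illustration of the first case (the function and sizes are invented for illustration; tools such as Valgrind flag exactly this pattern):

#include <stdlib.h>
#include <string.h>

void leaky(void)
{
    char *buf = malloc(100);          /* memory is allocated ... */
    if (buf == NULL)
        return;
    strcpy(buf, "some data");
    /* ... but never freed: when leaky() returns, the only pointer
       to the block is gone and the program has lost track of it */
}

int main(void)
{
    for (int i = 0; i < 1000; i++)
        leaky();                      /* leaks 100 bytes per call */
    return 0;
}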



Black Box Test Techniques


•Black box testing is data-driven, or input/output-driven, testing
•The Test Engineer is completely unconcerned with the internal behavior and structure of the program
•Black box testing is also known as behavioral, functional, opaque-box and closed-box testing


Black Box Test Techniques


Tests are designed to answer the following questions:
•How is functional validity tested?
•What classes of input will make good test cases?
•Is the system particularly sensitive to certain input values?
•What effect will specific combinations of data have on system operation?


Black Box Test Techniques


•Equivalence Partitioning
•Boundary Value Analysis
•Error Guessing
•Cause Effect Graphing
•State transition testing


Equivalence Partitioning


•This method divides the input domain of a program into categories of data from which test cases are derived.
•Identify equivalence classes - the input ranges that are treated the same by the software
- Valid classes: legal input ranges
- Invalid classes: illegal or out-of-range input values
•The aim is to group inputs and so minimize the number of test cases required to cover these input conditions


Assumption:
•If one value in a group works, all will work
•One from each partition is better than all from one
•Thus it consists of two steps:
- Identify the equivalence classes
- Write test cases for each class

Examples of types of equivalence classes
•1. If an input condition specifies a continuous range of values, there is one valid class and two invalid classes
Example: The input variable is a mortgage applicant’s income. The valid range is $1000/mo. to $75,000/mo.
- Valid class: {1000 <= income <= 75,000}
- Invalid classes: {income < 1000}, {income > 75,000}

2. If an input condition specifies that a variable, say count, can take a range of values (1 - 999),
identify one valid equivalence class (1 <= count <= 999) and two invalid classes (count < 1 and count > 999)

3. If a “must be” condition is required, there is one valid equivalence class and one invalid class
Example: The mortgage applicant must be a person.
- Valid class: {person}
- Invalid classes:{corporation, ...anything else...}

Example
If we have to test the function int Max(int a, int b), the equivalence classes for the arguments of the function will be:

Argument   Valid Values                Invalid Values
A          -32768 <= value <= 32767    value < -32768, value > 32767
B          -32768 <= value <= 32767    value < -32768, value > 32767
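A minimal sketch of test cases drawn from these classes (Max and the chosen representative values are illustrative; the 16-bit bounds follow the table above):

#include <stdio.h>

int Max(int a, int b)
{
    return (a > b) ? a : b;
}

int main(void)
{
    /* one representative value from each valid class */
    printf("%s\n", Max(100, -200) == 100 ? "PASS" : "FAIL");
    /* representatives at the edges of the valid classes */
    printf("%s\n", Max(-32768, 32767) == 32767 ? "PASS" : "FAIL");
    return 0;
}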

Boundary Value Analysis


•“Bugs lurk in corners and congregate at boundaries …..”
Boris Beizer
•Boundary conditions are those situations directly on, above, and beneath the edges of input equivalence classes and output equivalence classes.
•Boundary value analysis is a test case design technique that complements equivalence partitioning
•Test at the boundary of each input: include the values at the boundary, just below the boundary and just above the boundary

From the previous example, we have the valid equivalence class (1 <= count <= 999). Boundary value analysis gives test cases at count = 0, count = 1, count = 2, count = 998, count = 999 and count = 1000.


Error Guessing


•Based on experience and intuition, one may add more test cases to those derived by following other methodologies.
•It is an ad hoc approach
•The basis of this approach is that, in general, people have a knack for “smelling out” errors


Error Guessing


•Make a list of possible errors or error-prone situations and then develop test cases based on the list.
•Defect history is useful: there is a good probability that the kinds of defects that have occurred in the past are the kinds that will occur in the future.
• Some examples:
• Empty or null lists/strings
• Zero occurrences
• Blanks or null characters in strings
• Negative numbers


•Example: Suppose we have to test the login screen of an application. An experienced test engineer may immediately check whether the password typed in the password field can be copied to a text field, which could cause a breach in the security of the application.
•Error guessing situations for a sorting subroutine:
- The input list is empty
- The input list contains only one entry
- All entries in the list have the same value
- The input list is already sorted

Cause Effect Graphing


•A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases by logically relating causes to effects.
•It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
• Steps:
- Identify the causes and effects from the specification
- Develop the cause-effect diagram
- Create a decision table
- Develop test cases from the decision table


Insurance policy renewal example
•An insurance agency has the following norms for setting premiums for its policy holders:
•If age <= 30 and no claim has been made, the premium increase will be 200, else 500
•For any age, if the number of claims made is 1 to 4, the premium increase will be 1000
•If one or more claims have been made, send a warning letter; if 5 or more claims have been made, cancel the policy
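One possible decision table for these norms (reading the ambiguous “else 500” clause as “age > 30, no claims” is an assumption):

Causes                      R1   R2   R3   R4
age <= 30                   Y    N    -    -
claims = 0                  Y    Y    N    N
claims between 1 and 4      -    -    Y    N
claims >= 5                 -    -    N    Y

Effects
increase premium by 200     X
increase premium by 500          X
increase premium by 1000              X
send warning letter                   X    X
cancel policy                              X

Each column (rule) then becomes at least one test case.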


State Transition Testing


A testing technique that helps validate the various states of a program as it moves from one visible state to another.
Menu System Example
•The program starts with an introductory menu. As an option is selected, the program changes state and displays a new menu. Eventually it displays some information or a data input screen.
•Each option in each menu should be tested to validate that each selection made takes us to the state we should reach next (a small sketch follows below)
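A minimal C sketch of the menu example as a state machine; the states, options and transitions are invented for illustration:

#include <stdio.h>

typedef enum { INTRO_MENU, SUB_MENU, DATA_ENTRY } State;

/* next_state encodes the expected transitions of the menu system */
State next_state(State s, char option)
{
    switch (s) {
    case INTRO_MENU: return (option == '1') ? SUB_MENU   : INTRO_MENU;
    case SUB_MENU:   return (option == '2') ? DATA_ENTRY : SUB_MENU;
    default:         return s;
    }
}

int main(void)
{
    /* one state-transition test: verify that a selection
       takes us to the state we should reach next */
    State s = next_state(INTRO_MENU, '1');
    printf("%s\n", s == SUB_MENU ? "PASS" : "FAIL");
    return 0;
}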

Principles of Testing

Economics of Testing
Economics is both the driving force and the limiting factor:
Driving - the earlier errors are discovered and removed in the lifecycle, the lower the cost of their removal
Limiting - testing must end when the economic returns cease to make it worthwhile, i.e. when the costs of the testing process significantly outweigh the returns
Exhaustive Testing

•Testing every possible input against every possible output
•One could use every possible input condition as a test case

Is Exhaustive Testing feasible?

E.g. a COBOL compiler:
• It is impossible to create test cases to represent all valid COBOL programs.
• It is impossible to create test cases for all invalid COBOL programs.
•The compiler also has to be tested to see that it does not do what it is not supposed to do, e.g. successfully compile a syntactically incorrect program
•Exhaustive testing is hence impossible
•Implication: one cannot test a program completely to guarantee that it is error-free
Economics
•The objective is therefore to find the maximum number of errors with a finite number of test cases

Limitations of Software Testing

Even if we could generate every input, run the tests, and evaluate the output, we would not detect all faults:
•Correctness is not fully checked: the programmer may have misinterpreted the specs, and the specs may have misinterpreted the requirements
•There is no way to find missing paths due to coding errors

Psychology of Testing

•Test Engineers pursue defects, not people
•Don’t assume that no errors will be found
•Test for the valid and expected as well as the invalid and unexpected
•The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section
•Testing is extremely creative and intellectually challenging

Test Case

•“A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.”
(…IEEE)
In other words, a planned sequence of actions (with the objective of finding errors)


A Good Test-Case
•Has a high probability of detecting error(s)
•Test cases help us discover information (.. Kaner)
e.g. of information objectives
• Help managers make ship / no-ship decisions.
• Minimize technical support costs.
• Assess conformance to specification.
• Minimize safety-related lawsuit risk.
• Verify correctness of the product.


Other Terminologies

Test Suite – A set of individual test cases/scenarios that are executed as a package, in a particular sequence, to test a particular aspect.
E.g. a test suite for a GUI or a test suite for functionality
Test Cycle – A test cycle consists of a series of test suites comprising a complete execution set, from the initial setup of the test environment through reporting and clean-up.
E.g. an integration test cycle or a regression test cycle