Wednesday, 10 August 2011

Testing Types



Black-box Testing: 

A test technique that focuses on testing the functionality of the
program component or application against its specifications, without knowledge of how the
system is constructed.
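
For example, a black-box test of a hypothetical is_leap_year function derives its
cases purely from the specification ("divisible by 4, except century years, which
must be divisible by 400"), never from the code. A minimal sketch in Python; the
implementation appears only so the example runs:

    # In real black-box testing the implementation is hidden from the tester;
    # it is shown here only so the sketch is runnable.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    # Test cases come from the specification alone:
    assert is_leap_year(2000)        # century divisible by 400 -> leap
    assert not is_leap_year(1900)    # century not divisible by 400 -> not leap
    assert is_leap_year(1996)        # divisible by 4 -> leap
    assert not is_leap_year(1997)    # not divisible by 4 -> not leap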


Boundary value analysis: 

A data selection technique in which test data is chosen from the
"boundaries" of the input or output domain classes, data structures and procedure
parameters. Choices often include the actual minimum and maximum boundary values, the
maximum value plus or minus one and the minimum value plus or minus one.
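
As a sketch, for a hypothetical input field specified to accept 1 to 100, boundary
value analysis would select:

    # Hypothetical domain: a field that accepts integers 1..100.
    MIN, MAX = 1, 100
    boundary_values = [MIN - 1, MIN, MIN + 1, MAX - 1, MAX, MAX + 1]
    # -> [0, 1, 2, 99, 100, 101]: minimum and maximum, each plus or minus one.

    def accepts(value):
        # Illustrative validator for the 1..100 domain.
        return MIN <= value <= MAX

    for v in boundary_values:
        print(v, accepts(v))   # 0 and 101 should be rejected, the rest accepted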


Branch Testing:  

A test method that requires that each possible branch of each decision be
executed at least once.
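
A minimal sketch, using an illustrative function with a single decision: a
branch-adequate suite must execute both directions of that decision.

    def classify(amount):
        if amount >= 1000:      # one decision, two branches
            return "large"
        return "small"

    # Branch testing: at least one test per branch direction.
    assert classify(1500) == "large"   # true branch
    assert classify(10) == "small"     # false branch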


Brainstorming:
A group process for generating creative and diverse ideas.

Bug: 

A catchall term for all software defects or errors.

Certification testing: 
Acceptance of software by an authorized agent after the software
has been validated by the agent or after its validity has been demonstrated to the agent.


Checkpoint (or verification point):

The expected behaviour of the application, which must be validated against the
actual behaviour after a certain action has been performed on the
application.
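
In an automated test this is typically an assertion comparing expected behaviour
with observed behaviour; a sketch with illustrative names:

    def get_page_title():
        # Stand-in for reading the actual behaviour from the application
        # after an action (e.g. logging in) has been performed.
        return "Welcome, admin"

    expected = "Welcome, admin"   # specified (expected) behaviour
    actual = get_page_title()     # observed (actual) behaviour

    # Checkpoint / verification point:
    assert actual == expected, "Checkpoint failed: page title mismatch"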


Client:
 
The customer who pays for the product and receives the benefit from its use.


Condition Coverage:
 
A white-box testing technique that measures the number or
percentage of condition outcomes covered by the test cases designed. 100% condition
coverage would indicate that every possible outcome of each condition had been executed at
least once during testing.
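
A sketch with illustrative names: the decision below contains two conditions, and
the suite drives each condition to both outcomes.

    def can_ship(in_stock, paid):
        if in_stock and paid:    # one decision, two conditions
            return True
        return False

    # Across the suite, each condition takes both True and False:
    assert can_ship(True, True) is True     # in_stock=T, paid=T
    assert can_ship(False, True) is False   # in_stock=F
    assert can_ship(True, False) is False   # paid=F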


Configuration Management Tools

Tools that are used to keep track of changes made to systems and all related artifacts.
These are also known as version control tools.


Configuration testing:
 
Testing of an application on all supported hardware and software
platforms. This may include various combinations of hardware types, configuration settings
and software versions.


Completeness:
 
A product is said to be complete if it has met all requirements.

Consistency: 

Adherence to a given set of rules.

Correctness: 

The extent to which software is free from design and coding defects. It is
also the extent to which software meets the specified requirements and user objectives.


Cost of Quality: 

Money spent above and beyond expected production costs to ensure that
the product the customer receives is a quality product. The cost of quality includes
prevention, appraisal, and correction or repair costs.



Conversion Testing: 

Validates the effectiveness of data conversion processes, including
field-to-field mapping and data translation.
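
A sketch of such a test, with hypothetical field names: converted records are
compared against their expected form.

    # Hypothetical mapping from a legacy record to the new schema.
    def convert(legacy):
        return {
            "customer_name": legacy["cust_nm"].strip().title(),  # translation
            "balance_cents": int(round(legacy["bal"] * 100)),    # field mapping
        }

    legacy_record = {"cust_nm": " jane doe ", "bal": 12.5}
    assert convert(legacy_record) == {"customer_name": "Jane Doe",
                                      "balance_cents": 1250}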



Customer: 

The individual or organization, internal or external to the producing organization,
that receives the product.


Cyclomatic complexity: 

A measure of a program's logical complexity: the number of decision statements in the
program plus one.
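
A worked example under that simplified definition: the illustrative function below
contains three decision statements, so its cyclomatic complexity is 3 + 1 = 4.

    def grade(score):
        if score >= 90:      # decision 1
            return "A"
        elif score >= 75:    # decision 2
            return "B"
        elif score >= 60:    # decision 3
            return "C"
        return "F"
    # 3 decisions + 1 = 4, which is also the number of independent
    # paths (A, B, C, F) a test suite would need to cover.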

Debugging: 
The process of analysing and correcting syntactic, logic and other errors
identified during testing.


Decision Coverage: 

A white-box testing technique that measures the number or
percentage of decision directions executed by the test cases designed. 100% decision
coverage would indicate that every decision direction had been executed at least once during
testing. Alternatively, each logical path through the program can be tested.
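
A sketch of measuring decision coverage by hand for a single decision with two
directions (all names illustrative):

    directions_executed = set()
    DIRECTIONS_TOTAL = 2   # true and false outcomes of the one decision

    def check(x):
        if x > 0:
            directions_executed.add("true")
            return "positive"
        directions_executed.add("false")
        return "non-positive"

    check(5)   # only the true direction has run so far
    print(len(directions_executed) / DIRECTIONS_TOTAL)   # 0.5 -> 50% coverage
    check(-1)  # now the false direction has run as well
    print(len(directions_executed) / DIRECTIONS_TOTAL)   # 1.0 -> 100% coverage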


Decision Table
A tool for documenting the unique combinations of conditions and associated results in
order to derive unique test cases for validation testing.
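
For instance, a hypothetical discount rule documented as a decision table, where
each rule (column) becomes one validation test case:

    # Conditions          Rule 1  Rule 2  Rule 3  Rule 4
    # member?             Y       Y       N       N
    # order >= 100?       Y       N       Y       N
    # discount (result)   15%     5%      5%      0%
    test_cases = [
        {"member": True,  "order": 150, "expected": 0.15},  # Rule 1
        {"member": True,  "order": 50,  "expected": 0.05},  # Rule 2
        {"member": False, "order": 150, "expected": 0.05},  # Rule 3
        {"member": False, "order": 50,  "expected": 0.00},  # Rule 4
    ]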


Defect Tracking Tools
Tools for documenting defects as they are found during testing and for
tracking their status through to resolution.


Desk Check: 

A verification technique conducted by the author of the artifact to verify the
completeness of their own work. This technique does not involve anyone else.


Dynamic Analysis: 

Analysis performed by executing the program code. Dynamic analysis
executes or simulates a development-phase product and detects errors by analyzing the
product's response to sets of input data.


Entrance Criteria: 

Required conditions and standards for work product quality that must be
present or met for entry into the next stage of the software development process.


Equivalence Partitioning: 

A test technique that utilizes a subset of data that is
representative of a larger class. This is done in place of exhaustively testing
every value in the larger class of data.
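
As a sketch, for a hypothetical input that accepts 1 to 100, three equivalence
classes suffice, each represented by one value:

    # Hypothetical domain: valid values are 1..100.
    representatives = {
        "below range (invalid)": -5,
        "in range (valid)": 50,
        "above range (invalid)": 200,
    }
    # One value per class stands in for exhaustively testing every value,
    # on the assumption that all members of a class behave the same way.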


Error or defect: 

1. A discrepancy between a computed, observed or measured value or
condition and the true, specified or theoretically correct value or condition.
2. A human action that results in software containing a fault (e.g., omission or
misinterpretation of user requirements in a software specification, or incorrect
translation or omission of a requirement in the design specification).


Error Guessing: 

A test data selection technique for picking values that seem likely to cause
defects. This technique is based on the theory that test cases and test data can be
developed from the intuition and experience of the tester.
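
For a hypothetical routine that parses a quantity field, an experienced tester
might guess inputs such as:

    guessed_inputs = [
        "",        # empty input
        "0",       # zero
        "-1",      # negative number
        " 1 ",     # surrounding whitespace
        "1.5",     # non-integer
        "9" * 20,  # overflow-sized value
        None,      # missing value
    ]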


Exhaustive Testing: 

Executing the program through all possible combinations of values for
program variables.
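
A sketch of why exhaustive testing is rarely practical: even three tiny,
illustrative input domains multiply quickly.

    from itertools import product

    sizes = ["S", "M", "L"]
    colours = ["red", "blue"]
    quantities = range(1, 11)

    combinations = list(product(sizes, colours, quantities))
    print(len(combinations))   # 3 * 2 * 10 = 60 cases for even this tiny domain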


Exit criteria: 

Standards for work product quality which block the promotion of incomplete
or defective work products to subsequent stages of the software development process.
