What Are Software Testing Types? How Many Are There?

Software Testing Types

 

This post is a software testing glossary. Not every term will be relevant to you, but it is a complete list. Have a glance and pick out the ones you need.

A

 

Acceptance testing:

Testing with respect to user needs, requirements, and business processes to determine whether or not a system satisfies the acceptance criteria, and to decide whether or not to accept the system.

Accessibility testing:

Testing to determine the ease by which users with disabilities can use a component or system.

Accuracy testing:

The process of testing to determine the accuracy of a software product.

Ad hoc testing:

Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

Agile testing:

Testing practice for a project using agile software development methodologies, incorporating techniques and methods such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm. See also test-driven development.

Alpha testing:

Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Analytical testing:

Testing based on a systematic analysis of e.g., product risks or requirements.

API testing:

Testing performed by submitting commands to the software under test using programming interfaces of the application directly.
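
To make this concrete, here is a minimal sketch in Python using the third-party requests library; the endpoint URL and response fields are hypothetical stand-ins for a real API.

```python
# Minimal API-testing sketch (pytest style); the endpoint and the
# expected response fields below are hypothetical.
import requests

def test_get_user_returns_expected_fields():
    response = requests.get("https://api.example.com/users/42")
    assert response.status_code == 200
    body = response.json()
    assert "id" in body and "name" in body
```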

Attack-based testing:

An experience-based testing technique that uses software attacks to induce failures, particularly security related failures.

 

B

 

Back-to-back testing:

Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.
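
As an illustration, the sketch below runs two variants of a square-root routine on the same inputs and compares the outputs; the hand-rolled Newton variant is invented for the example.

```python
import math

def sqrt_newton(x, iterations=20):
    # Variant 1: a hand-rolled Newton's method approximation.
    guess = x or 1.0
    for _ in range(iterations):
        guess = 0.5 * (guess + x / guess)
    return guess

def test_back_to_back_sqrt():
    # Variant 2 is the standard library; discrepancies would be analyzed.
    for x in [0.25, 1.0, 2.0, 9.0, 144.0]:
        assert abs(sqrt_newton(x) - math.sqrt(x)) < 1e-9
```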

Beta testing:

Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Big-bang testing:

An integration testing approach in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

 Black box testing:

Testing, either functional or non-functional, without reference to the internal structure of the component or system.

 Bottom-up testing:

An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. See also integration testing.

Branch testing:

A white box test design technique in which test cases are designed to execute branches.
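
A minimal sketch: the function below contains a single decision, and two test cases suffice to execute both branches (the function is a made-up example).

```python
def classify(n):
    # One decision, two branches.
    if n < 0:
        return "negative"
    return "non-negative"

def test_negative_branch():
    assert classify(-1) == "negative"

def test_non_negative_branch():
    assert classify(0) == "non-negative"
```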

 Build verification test:

A set of automated tests which validates the integrity of each new build and verifies its key/core functionality, stability and testability. It is an industry practice when a high frequency of build releases occurs (e.g., agile projects) and it is run on every new build before the build is released for further testing. See also regression testing, smoke test.

 Business process-based testing:

An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

 

C

 

Capture/playback:

A test automation approach where inputs to the test object are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e., replayed).

Checklist-based testing:

An experience-based test design technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.

Clear-box testing:

Also known as white-box testing. See white-box testing.

CLI testing:

Testing performed by submitting commands to the software under test using a dedicated command-line interface.
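
A minimal sketch using only the Python standard library, with echo standing in for the program under test (assumes a POSIX-like environment):

```python
import subprocess

# Run the command, then assert on its exit code and output.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
assert result.returncode == 0
assert result.stdout.strip() == "hello"
```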

Code-based testing:

See white-box testing.

Combinatorial testing:

A means to identify a suitable subset of test combinations to achieve a predetermined level of coverage when testing an object with multiple parameters and where those parameters themselves each have several values, which gives rise to more combinations than are feasible to test in the time allowed. See also classification tree method, n-wise testing, pairwise testing, orthogonal array testing.

Compatibility testing:

See interoperability testing.

Complete testing:

See exhaustive testing.

Compliance testing:

The process of testing to determine the compliance of the component or system.

Component integration testing:

Testing performed to expose defects in the interfaces and interaction between integrated components.

Component testing:

The testing of individual software components.

Concurrency testing:

Testing to determine how the occurrence of two or more activities within the same interval of time, achieved either by interleaving the activities or by simultaneous execution, is handled by the component or system.

Condition testing:

A white box test design technique in which test cases are designed to execute condition outcomes.

Configuration testing:

See portability testing.

Confirmation testing:

Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Conformance testing:

See compliance testing.

Consultative testing:

Testing driven by the advice and guidance of appropriate experts from outside the test team (e.g., technology experts and/or business domain experts).

Control flow testing:

An approach to structure-based testing in which test cases are designed to execute specific sequences of events. Various techniques exist for control flow testing, e.g., decision testing, condition testing, and path testing, that each have their specific approach and level of control flow coverage. See also decision testing, condition testing, path testing.

Conversion testing:

Testing of software used to convert data from existing systems for use in replacement systems.

Critical Testing Processes:

A content-based model for test process improvement built around twelve critical processes. These include highly visible processes, by which peers and management judge competence, and mission-critical processes, in which performance affects the company’s profits and reputation. See also content-based model.

D

 

Data-driven testing:

A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools.
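
For example, using pytest's parametrize feature, each table row becomes one executed test case; the discount function is a hypothetical system under test.

```python
import pytest

def discount(amount):
    # Hypothetical system under test: 5% below 1000, 10% from 1000 up.
    rate = 0.10 if amount >= 1000 else 0.05
    return round(amount * rate, 2)

# The data table: each row is (input, expected result).
@pytest.mark.parametrize("amount,expected", [
    (0, 0.0),
    (100, 5.0),
    (1000, 100.0),
])
def test_discount(amount, expected):
    assert discount(amount) == expected
```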

Data flow testing:

A white box test design technique in which test cases are designed to execute definition-use pairs of variables.

Data integrity testing:

See database integrity testing.

Database integrity testing:

Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes and data rules function as expected and that during access to the database, data is not corrupted or unexpectedly deleted, updated or created.

Decision condition testing:

A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

Decision table testing:

A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
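
A sketch in which each rule (column) of a small decision table becomes one test case; the discount policy is invented for illustration.

```python
# Decision table for a hypothetical discount policy:
#            R1     R2     R3     R4
# member:    yes    yes    no     no
# coupon:    yes    no     yes    no
# rate:      0.20   0.10   0.05   0.00
RULES = [
    (True,  True,  0.20),
    (True,  False, 0.10),
    (False, True,  0.05),
    (False, False, 0.00),
]

def rate_for(member, coupon):
    # Hypothetical implementation under test.
    if member:
        return 0.20 if coupon else 0.10
    return 0.05 if coupon else 0.00

def test_all_rules():
    for member, coupon, expected in RULES:
        assert rate_for(member, coupon) == expected
```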

Decision testing:

A white box test design technique in which test cases are designed to execute decision outcomes.

Design-based testing:

An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

Development testing:

Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.

Dirty testing:

See negative testing.

Documentation testing:

Testing the quality of the documentation, e.g. user guide or installation guide.

Dynamic testing:

Testing that involves the execution of the software of a component or system.

E

 

Efficiency testing:

The process of testing to determine the efficiency of a software product.

Elementary comparison testing:

A black box test design technique in which test cases are designed to execute combinations of inputs using the concept of modified condition decision coverage.

Exhaustive testing:

A test approach in which the test suite comprises all combinations of input values and preconditions.

Experience-based testing:

Testing based on the tester’s experience, knowledge and intuition.

Exploratory testing:

An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

F

 

Factory acceptance testing:

Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether or not a component or system satisfies the requirements, normally including hardware as well as software. See also alpha testing.

Failover testing:

Testing by simulating failure modes or actually causing failures in a controlled environment. Following a failure, the failover mechanism is tested to ensure that data is not lost or corrupted and that any agreed service levels are maintained (e.g., function availability or response times). See also recoverability testing.

Field testing:

See beta testing.

Finite state testing:

See state transition testing.

Functional testing:

Testing based on an analysis of the specification of the functionality of a component or system. See also black box testing.

Functionality testing:

The process of testing to determine the functionality of a software product.

G

 

Glass box testing:

See white box testing.

GUI testing:

Testing performed by interacting with the software under test via the graphical user interface.

H

 

Hardware-software integration testing:

Testing performed to expose defects in the interfaces and interaction between hardware and software components. See also integration testing.

 I

 

Incremental testing:

Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

Independence of testing:

Separation of responsibilities, which encourages the accomplishment of objective testing.

Insourced testing:

Testing performed by people who are co-located with the project team but are not fellow employees.

Installability testing:

The process of testing the installability of a software product. See also portability testing.

Integration testing:

Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Interface testing:

An integration test type that is concerned with testing the interfaces between components or systems.

Interoperability testing:

The process of testing to determine the interoperability of a software product. See also functionality testing.

Invalid testing:

Testing using input values that should be rejected by the component or system. See also error tolerance, negative testing.

Isolation testing:

Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.
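
For instance, a checkout component that normally calls a payment gateway can be tested in isolation by substituting a stub; all names below are hypothetical.

```python
class StubGateway:
    # Stands in for the real payment gateway; returns a canned response.
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

def checkout(cart_total, gateway):
    # Component under test, isolated from the real gateway.
    receipt = gateway.charge(cart_total)
    return receipt["status"] == "ok"

assert checkout(9.99, StubGateway()) is True
```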

K

 

Keyword-driven testing:

A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data-driven testing.
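
A minimal sketch of the idea: rows of (keyword, argument) pairs, as they might come from a data file, are interpreted by a small supporting script; the calculator and its keywords are invented.

```python
class Calculator:
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n
    def clear(self):
        self.value = 0

def run_keywords(rows):
    # The control script: interprets each keyword against the test object.
    calc = Calculator()
    for keyword, arg in rows:
        if keyword == "add":
            calc.add(int(arg))
        elif keyword == "clear":
            calc.clear()
        elif keyword == "check":
            assert calc.value == int(arg)
        else:
            raise ValueError(f"unknown keyword: {keyword}")

# Test data as it might appear in a data file.
run_keywords([("add", "2"), ("add", "3"), ("check", "5"),
              ("clear", ""), ("check", "0")])
```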

L

 

Load testing:

A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
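
A toy sketch of the idea using threads, where handle_request simulates a single service call and the worker count models the number of parallel users; a real load test would use a dedicated tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(_):
    time.sleep(0.01)  # simulated service latency

# Step the load upward and record how long each level takes.
for users in (1, 10, 50):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        list(pool.map(handle_request, range(users * 10)))
    print(f"{users:>3} parallel users: {time.perf_counter() - start:.2f}s")
```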

M

 

Maintainability testing:

The process of testing to determine the maintainability of a software product.

Maintenance testing:

Testing the changes to an operational system or the impact of a changed environment to an operational system.

Methodical testing:

Testing based on a standard set of tests, e.g., a checklist, a quality standard, or a set of generalized test cases.

Migration testing:

See conversion testing.

Model-based testing:

Testing based on a model of the component or system under test, e.g., reliability growth models, usage models such as operational profiles, or behavioral models such as decision tables or state transition diagrams.

Modified condition decision testing:

A white box test design technique in which test cases are designed to execute single condition outcomes that independently affect a decision outcome.
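
For the decision `a and b`, three test cases are enough to show that each condition independently affects the outcome; the access-control function is a made-up example.

```python
def grant_access(authenticated, authorized):
    return authenticated and authorized

assert grant_access(True, True) is True    # baseline: decision is true
assert grant_access(False, True) is False  # flipping only `authenticated` flips the decision
assert grant_access(True, False) is False  # flipping only `authorized` flips the decision
```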

Module testing:

See component testing.

Monkey testing:

Testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used. 
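
A sketch of the spirit of it: feed the function under test large amounts of random input and check only that it never crashes; parse_age is a hypothetical function under test.

```python
import random
import string

def parse_age(text):
    # Hypothetical function under test: returns an int in range, else None.
    try:
        age = int(text)
    except ValueError:
        return None
    return age if 0 <= age <= 150 else None

random.seed(1)  # reproducible randomness
for _ in range(10_000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    parse_age(junk)  # must not raise, whatever the input
```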

Multiple condition testing:

A white box test design technique in which test cases are designed to execute combinations of single condition outcomes (within one statement). 

Mutation testing:

See back-to-back testing.

N

 

 N-switch testing:

A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions.

 N-wise testing:

A black box test design technique in which test cases are designed to execute all possible discrete combinations of any set of n input parameters.

Negative testing:

Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.

Neighborhood integration testing:

A form of integration testing where all of the nodes that connect to a given node are the basis for the integration testing.

 Non-functional testing:

Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

O

 

Operational acceptance testing:

Operational testing in the acceptance test phase, typically performed in a (simulated) operational environment by operations and/or systems administration staff focusing on operational aspects, e.g. recoverability, resource-behavior, installability and technical compliance.

 Operational profile testing:

Statistical testing using a model of system operations (short duration tasks) and their probability of typical use. 

Operational testing:

Testing conducted to evaluate a component or system in its operational environment.

Orthogonal array testing:

A systematic way of testing all-pair combinations of variables using orthogonal arrays. It significantly reduces the number of all combinations of variables to test all pair combinations.

Outsourced testing:

Testing performed by people who are not co-located with the project team and are not fellow employees. 

P

 

Pair testing:

Two persons, e.g. two testers, a developer and a tester, or an end-user and a tester, working together to find defects. Typically, they share one computer and trade control of it while testing.

Pairwise integration testing:

A form of integration testing that targets pairs of components that work together, as shown in a call graph.

Pairwise testing:

A black box test design technique in which test cases are designed to execute all possible discrete combinations of each pair of input parameters.
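
For example, three parameters with two values each give eight full-factorial combinations, yet the four rows below cover every pair of values, as the check verifies; the parameter values are invented.

```python
from itertools import combinations, product

PARAMS = [
    ["chrome", "firefox"],   # browser
    ["linux", "windows"],    # operating system
    ["en", "de"],            # locale
]

# Four hand-picked rows instead of all eight combinations.
SUITE = [
    ("chrome",  "linux",   "en"),
    ("chrome",  "windows", "de"),
    ("firefox", "linux",   "de"),
    ("firefox", "windows", "en"),
]

def covers_all_pairs(suite):
    # Every value pair between any two parameters must appear in some row.
    for i, j in combinations(range(len(PARAMS)), 2):
        needed = set(product(PARAMS[i], PARAMS[j]))
        seen = {(row[i], row[j]) for row in suite}
        if needed - seen:
            return False
    return True

assert covers_all_pairs(SUITE)
```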

Pareto analysis:

A statistical technique in decision making that is used for selection of a limited number of factors that produce significant overall effect. In terms of quality improvement, a large majority of problems (80%) are produced by a few key causes (20%).

Path testing:

A white box test design technique in which test cases are designed to execute paths.

Performance testing:

The process of testing to determine the performance of a software product.

Portability testing:

The process of testing to determine the portability of a software product.

Procedure testing:

Testing aimed at ensuring that the component or system can operate in conjunction with new or existing users’ business procedures or operational procedures.

Process-compliant testing:

Testing that follows a set of defined processes, e.g., defined by an external party such as a standards committee.

 Process-driven testing:

A scripting technique where scripts are structured into scenarios which represent use cases of the software under test. The scripts can be parameterized with test data. 

R

 

Random testing:

A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance. 

Reactive testing:

Testing that dynamically responds to the actual system under test and test results being obtained. Typically reactive testing has a reduced planning cycle and the design and implementation test phases are not carried out until the test object is received.

Recoverability testing:

The process of testing to determine the recoverability of a software product. See also reliability testing.

Regression-averse testing:

Testing using various techniques to manage the risk of regression, e.g., by designing reusable testware and by extensive automation of testing at one or more test levels.

Regression testing:

Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. 

Reliability testing:

The process of testing to determine the reliability of a software product. 

Requirements-based testing:

An approach to testing in which test cases are designed based on test objectives and test conditions derived from requirements, e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.

 Resource utilization testing:

The process of testing to determine the resource-utilization of a software product. 

Re-testing: 

The process of re-running test cases against fixed defects to verify that the fixes work as required. See also confirmation testing.

Risk-based testing:

An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process.

Robustness testing:

Testing to determine the robustness of the software product.

S 

 

Safety testing:

Testing to determine the safety of a software product. 

Scalability testing:

Testing to determine the scalability of the software product. 

Scripted testing:

Test execution carried out by following a previously documented sequence of tests. 

Security testing:

Testing to determine the security of the software product.  

Session-based testing:

An approach to testing in which test activities are planned as uninterrupted sessions of test design and execution, often used in conjunction with exploratory testing. 

Specification-based testing:

See black box testing. 

Standard-compliant testing:

Testing that complies with a set of requirements defined by a standard, e.g., an industry testing standard or a standard for testing safety-critical systems.

State transition testing:

A black box test design technique in which test cases are designed to execute valid and invalid state transitions. 
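
A sketch with a tiny document-workflow state machine: valid transitions come from the table, and anything else must be rejected; states and events are invented.

```python
TRANSITIONS = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
}

def next_state(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"invalid transition: {state} --{event}--> ?")
    return TRANSITIONS[(state, event)]

# Valid transitions.
assert next_state("draft", "submit") == "review"
assert next_state("review", "approve") == "published"

# An invalid transition must be rejected.
try:
    next_state("draft", "approve")
except ValueError:
    pass
else:
    raise AssertionError("invalid transition was accepted")
```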

Statement testing:

A white box test design technique in which test cases are designed to execute statements.

Statistical testing:

A test design technique in which a model of the statistical distribution of the input is used to construct the representative test cases.

Stress testing:

A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers. [After IEEE 610]

Structural testing:

See white-box testing.

 Suitability testing:

The process of testing to determine the suitability of a software product. 

Syntax testing:

A black box test design technique in which test cases are designed based upon the definition of the input domain and/or output domain. 

System integration testing:

Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data Interchange, Internet). 

System testing:

The process of testing an integrated system to verify that it meets specified requirements.

T

 

 Thread testing:

An approach to component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy. 

Top-down testing:

An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. See also integration testing.

U

 

Unit testing:

See component testing.

 Usability testing:

Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions. 

Use case testing:

A black box test design technique in which test cases are designed to execute scenarios of use cases. 

User acceptance testing:

See acceptance testing.

User story testing:

A black box test design technique in which test cases are designed based on user stories to verify their correct implementation. See also user story.

V

 

Volume testing:

Testing where the system is subjected to large volumes of data.

W

 

White-box testing:

Testing based on an analysis of the internal structure of the component or system.

Reference: International Software Testing Qualifications Board – Glossary