A statistical testing approach to determine which of two systems or components performs better.
The unintended termination of the execution of a component or system prior to completion.
A use case in which some actors with malicious intent are causing harm to the system or to other actors.
The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.
acceptance test-driven development
A collaborative approach to development in which the team and customers use the customers' own domain language to understand their requirements, which forms the basis for testing a component or system.
A test level that focuses on determining whether to accept the system.
The degree to which a component or system can be used by people with the widest range of characteristics and capabilities to achieve a specified goal in a specified context of use.
The process of obtaining user account information based on trial and error with the intention of using that information in a security attack.
The degree to which the actions of an entity can be traced uniquely to that entity.
The behavior produced/observed when a component or system is tested.
ad hoc review
A review technique performed informally without a structured process.
ad hoc testing
Informal testing performed without test analysis and test design.
The degree to which a component or system can be adapted for different or evolving hardware, software or other operational or usage environments.
An input to an ML model created by applying small perturbations to a working example that results in the model outputting an incorrect result with high confidence.
A test technique based on the attempted creation and execution of adversarial examples to identify defects in an ML model.
A statement on the values that underpin Agile software development. The values are: individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, responding to change over following a plan.
Agile software development
A group of software development methodologies based on iterative incremental development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams.
agile testing quadrants
A classification model of test types/levels in four quadrants, relating them to two dimensions of test goals: supporting the team vs. critiquing the product, and technology-facing vs. business-facing.
A type of acceptance testing that is performed in the developer’s test environment by roles outside the development organization.
analytical test strategy
A test strategy whereby the test team analyzes the test basis to identify the test conditions to cover.
The degree to which an assessment can be made for a component or system of either the impact of one or more intended changes, the diagnosis of deficiencies or causes of failures, or the identification of parts to be modified.
Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc., or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation.
Software that is used to detect and inhibit malware.
Repeated action, process, structure or reusable solution that initially appears to be beneficial and is commonly used but is ineffective and/or counterproductive in practice.
Testing performed by submitting requests to the test object using its application programming interface.
application programming interface
A type of interface in which the components or systems involved exchange information in a defined formal structure.
The degree to which users can recognize whether a component or system is appropriate for their needs.
A condition that does not contain logical operators.
A path or means by which an attacker can gain access to a system for malicious purposes.
A person or process that attempts to access data, functions or other restricted areas of the system without authorization, potentially with malicious intent.
Testing to determine if the game music and sound effects will engage the user in the game and enhance the game play.
An independent examination of a work product or process performed by a third party to assess whether it complies with specifications, standards, contractual agreements, or other criteria.
A procedure determining whether a person or a process is, in fact, who or what it is declared to be.
The degree to which the identity of a subject or resource can be proved to be the one claimed.
Permission given to a user or process to access resources.
automation code defect density
Defect density of a component of the test automation code.
automotive safety integrity level
One of four levels specifying the necessary ISO 26262 requirements and safety measures for an item or element, applied to avoid an unreasonable residual risk.
A process reference model and an associated process assessment model in the automotive industry.
The degree to which a component or system is operational and accessible when required for use.
Testing to compare two or more variants of a test item or a simulation model of the same test item by executing the same test cases on all variants and comparing the results.
A collaborative approach to development in which the team is focusing on delivering expected behavior of a component or system for the customer, which forms the basis for testing.
A type of acceptance testing performed at an external site to the developer’s test environment by roles outside the development organization.
black-box test technique
A test technique based on an analysis of the specification of a component or system.
A network of compromised computers, called bots or robots, which is controlled by a third party and used to transmit malware or spam, or to launch attacks.
A minimum or maximum value of an ordered equivalence partition.
boundary value analysis
A black-box test technique in which test cases are designed based on boundary values.
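As an illustration, a minimal sketch of deriving boundary-value tests (the 18–65 age range is a hypothetical requirement, not taken from the glossary):

```python
def boundary_values(low, high):
    """Two-value boundary analysis for an ordered partition [low, high]:
    each boundary plus its nearest neighbor outside the partition."""
    return [low - 1, low, high, high + 1]

# Hypothetical requirement: valid ages are 18..65 inclusive.
tests = boundary_values(18, 65)
# -> [17, 18, 65, 66]: invalid, valid, valid, invalid
```

Three-value variants add the nearest neighbor inside the partition as well; which variant applies depends on the organization's test policy.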
A transfer of control from a decision point.
An approach to testing in which gamification and awards for defects found are used as a motivator.
build verification test
An automated test that validates the integrity of each new build and verifies its key/core functionality, stability, and testability.
Capability Maturity Model Integration
A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance.
The degree to which the maximum limits of a component or system parameter meet requirements.
Testing to evaluate the capacity of a system.
A test automation approach in which inputs to the test object are recorded during manual testing to generate automated test scripts that can be executed later.
A graphical representation used to organize and display the interrelationships of various possible root causes of a problem. Possible causes of a real or potential defect or failure are organized in categories and subcategories in a horizontal tree-structure, with the (potential) defect or failure as the root node.
A graphical representation of logical relationships between inputs (causes) and their associated outputs (effects).
A black-box test technique in which test cases are designed from cause-effect graphs.
The process of confirming that a component, system or person complies with specified requirements.
A type of testing initiated by modification to a component or system.
A review technique guided by a list of questions or required attributes.
An experience-based test technique whereby the experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified.
A tree diagram representing test data domains of a test object.
classification tree technique
A black-box test technique in which test cases are designed using a classification tree.
Testing performed by submitting commands to the software under test using a dedicated command-line interface.
A system in which the controlling action or input is dependent on the output or changes in output.
A type of security attack performed by inserting malicious code at an interface into an application to exploit poor handling of untrusted data.
A standard that describes the characteristics of a design or a design description of data or program components.
The degree to which a component or system can perform its required functions while sharing an environment and resources with other components or systems without a negative impact on any of them.
A black-box test technique in which test cases are designed to exercise specific combinations of values of several parameters.
A type of interface in which the information is passed in form of command lines.
A type of product developed in an identical format for a large number of customers in the general market.
The degree to which a component or system can exchange information with other components or systems, and/or perform its required functions while sharing the same hardware or software environment.
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
Adherence of a work product to standards, conventions or regulations in laws and similar prescriptions.
Testing to determine the compliance of the component or system.
A part of a system that can be tested in isolation.
component integration testing
Testing in which the test items are interfaces and interactions between integrated components.
A test level that focuses on individual hardware or software components.
The practice of determining how a security attack has succeeded and assessing the damage caused.
The simultaneous execution of multiple independent threads by a component or system.
Testing to evaluate if a component or system involving concurrency behaves as specified.
The coverage of condition outcomes.
A white-box test technique in which test cases are designed to exercise outcomes of atomic conditions.
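A minimal sketch of condition testing for a decision with two atomic conditions (the `discount_applies` function is a hypothetical test object):

```python
def discount_applies(is_member: bool, total: float) -> bool:
    # One decision built from two atomic conditions
    return is_member and total > 100

# Condition coverage requires each atomic condition to evaluate to
# both True and False at least once; two tests can suffice here:
cases = [
    (True, 50.0),    # is_member=True,  total>100=False
    (False, 150.0),  # is_member=False, total>100=True
]
results = [discount_applies(m, t) for m, t in cases]
# -> [False, False]
```

Note that these two tests achieve 100% condition coverage yet exercise only one decision outcome, which is why condition coverage does not subsume decision coverage.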
In managing project risks, the period of time within which a contingency action must be implemented in order to be effective in reducing the impact of the risk.
The degree to which a component or system ensures that data are accessible only to those authorized to have access.
An aggregation of work products that is designated for configuration management and treated as a single entity in the configuration management process.
A discipline applying technical and administrative direction and surveillance to identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify that it complies with specified requirements.
A type of change-related testing performed after fixing a defect to confirm that a failure caused by that defect does not reoccur.
The degree to which a component or system can connect to other components or systems.
consultative test strategy
A test strategy whereby the test team relies on the input of one or more key stakeholders to determine the details of the strategy.
context of use
Users, tasks, equipment (hardware, software and materials), and the physical and social environments in which a software product is used.
An automated software development procedure that merges, integrates and tests all changes as soon as they are committed.
An approach that involves a process of testing early, testing often, test everywhere, and automate to obtain feedback on the business risks associated with a software release candidate as rapidly as possible.
contractual acceptance testing
A type of acceptance testing performed to verify whether a system satisfies its contractual requirements.
A statistical process control tool used to monitor a process and determine whether it is statistically controlled. It graphically depicts the average value and the upper and lower control limits (the highest and lowest values) of a process.
The sequence in which operations are performed by a business process, component or system.
control flow analysis
A type of static analysis based on a representation of unique paths for executing a component or system.
control flow testing
A white-box test technique in which test cases are designed based on control flows.
A metric that shows progress toward a defined criterion, e.g., convergence of the total number of tests executed to the total number of tests planned for execution.
cost of quality
The total costs incurred on quality activities and issues and often split into prevention costs, appraisal costs, internal failure costs and external failure costs.
The degree to which specified coverage items have been determined or have been exercised by a test suite expressed as a percentage.
The criteria to define the coverage items required to reach a test objective.
An attribute or combination of attributes derived from one or more test conditions by using a test technique.
Critical Testing Processes
A content-based model for test process improvement built around twelve critical processes. These include highly visible processes, by which peers and management judge competence, and mission-critical processes, in which performance affects the company's profits and reputation.
The degree to which a website or web application can function across different browsers and degrade gracefully when browser features are absent or lacking.
A vulnerability that allows attackers to inject malicious code into an otherwise benign website.
An approach to testing in which testing is distributed to a large group of testers.
A software tool developed specifically for a set of users or customers.
The maximum number of linear, independent paths through a program.
A representation of dynamic measurements of operational performance for some organization or activity, using metrics represented via metaphors such as visual dials, counters, and other devices resembling those on the dashboard of an automobile, so that the effects of events or activities can be easily understood and related to operational goals.
data flow analysis
A type of static analysis based on the lifecycle of variables.
Data transformation that makes it difficult for a human to recognize the original data.
The protection of personally identifiable information or otherwise sensitive information from undesired disclosure.
A scripting technique that uses data files to contain the test data and expected results needed to execute the test scripts.
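A minimal data-driven sketch: the test data and expected results live in a data file (simulated here with an in-memory CSV), separate from the script logic. The `add` function stands in for a hypothetical test object:

```python
import csv
import io

# Hypothetical data file: one test per row, inputs plus expected result.
data_file = io.StringIO(
    "a,b,expected\n"
    "2,3,5\n"
    "10,-4,6\n"
)

def add(a, b):  # trivial stand-in for the test object
    return a + b

failures = []
for row in csv.DictReader(data_file):
    actual = add(int(row["a"]), int(row["b"]))
    if actual != int(row["expected"]):
        failures.append(row)

print(f"{len(failures)} failing rows")  # 0 failing rows
```

New tests are added by editing the data file only; the script itself stays unchanged.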
The process of finding, analyzing and removing the causes of failures in a component or system.
A type of statement in which a choice between two or more possible outcomes controls which set of actions will result.
The coverage of decision outcomes.
decision table testing
A black-box test technique in which test cases are designed to exercise the combinations of conditions and the resulting actions shown in a decision table.
A white-box test technique in which test cases are designed to execute decision outcomes.
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.
The number of defects per unit size of a work product.
Defect Detection Percentage
The number of defects found by a test level, divided by the number found by that test level and any other means afterwards.
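The definition above can be written as a simple formula; the figures below are hypothetical:

```python
def defect_detection_percentage(found_by_level: int, found_later: int) -> float:
    """DDP = defects found by a test level /
             (defects found by that level + defects found afterwards), as %."""
    return 100 * found_by_level / (found_by_level + found_later)

# Hypothetical figures: system testing found 80 defects,
# and 20 more were found afterwards (later levels or production).
print(defect_detection_percentage(80, 20))  # 80.0
```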
The process of recognizing, recording, classifying, investigating, resolving and disposing of defects.
defect management committee
A cross-functional team of stakeholders who manage reported defects from initial detection to ultimate resolution (defect removal, defect deferral, or report cancellation). In some cases, the same team as the configuration control board.
Documentation of the occurrence, nature, and status of a defect.
A list of categories designed to identify and classify defects.
defect-based test technique
A test technique in which test cases are developed from what is known about a specific defect type.
The association of a definition of a variable with the subsequent use of that variable.
A physical or logical subnetwork that contains and exposes an organization’s external-facing services to an untrusted network, commonly the Internet.
denial of service
A security attack that is intended to overload the system with requests such that legitimate requests cannot be serviced.
A type of testing in which test suites are executed on physical or virtual devices.
A temporary component or tool that replaces another component and controls or calls a test item in isolation.
The process of evaluating a component or system based on its behavior during execution.
Testing that involves the execution of the test item.
The extent to which correct and complete goals are achieved.
The degree to which resources are expended in relation to results achieved.
Software used during testing that mimics the behavior of hardware.
The process of encoding information so that only authorized parties can retrieve the original information, usually by means of a specific decryption key or process.
A type of testing in which business processes are tested from start to finish under production-like circumstances.
Testing to determine the stability of a system under a significant load over a significant period of time within the system’s operational context.
The set of conditions for officially starting a defined task.
An abstraction of the real environment of a component or system including other components, processes, and environment conditions, in a real-time simulation.
A large user story that cannot be delivered as defined within a single iteration or is large enough that it can be split into smaller user stories.
A subset of the value domain of a variable within a component or system in which all values are expected to be treated the same based on the specification.
A black-box test technique in which test cases are designed to exercise equivalence partitions by using one representative member of each partition.
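A minimal sketch of equivalence partitioning; the shipping-cost specification and partition boundaries are hypothetical:

```python
# Hypothetical specification: cost depends on weight partitions
#   invalid: w <= 0 | standard: 0 < w <= 20 | heavy: w > 20
def shipping_cost(weight):
    if weight <= 0:
        raise ValueError("invalid weight")
    return 5.0 if weight <= 20 else 12.0

# One representative member per partition:
representatives = {"invalid": -3, "standard": 10, "heavy": 50}

assert shipping_cost(representatives["standard"]) == 5.0
assert shipping_cost(representatives["heavy"]) == 12.0
try:
    shipping_cost(representatives["invalid"])
except ValueError:
    pass  # expected for the invalid partition
```

Any value in a partition is assumed to be treated the same, so one representative per partition covers it; combining this with boundary value analysis strengthens the selection.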
equivalent manual test effort
Effort required for running tests manually.
A human action that produces an incorrect result.
A test technique in which tests are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes.
A defect that was not detected by a testing activity that is supposed to find that defect.
A security tester using hacker techniques.
European Foundation for Quality Management excellence model
A non-prescriptive framework for an organization’s quality management system based on five ‘Enabling’ criteria and four ‘Results’ criteria.
A test approach in which the test suite comprises all combinations of input values and preconditions.
The set of conditions for officially completing a defined task.
The observable predicted behavior of a test item under specified conditions based on its test basis.
experience-based test technique
A test technique only based on the tester’s experience, knowledge and intuition.
Testing based on the tester’s experience, knowledge and intuition.
expert usability review
An informal usability review in which the reviewers are experts. Experts can be usability experts or subject matter experts, or both.
An approach to testing whereby the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests.
The status of a test result in which the actual result does not match the expected result.
The backup operational mode in which the functions of a system that becomes unavailable are assumed by a secondary system.
An event in which a component or system does not perform a required function within specified limits.
The physical or functional manifestation of a failure.
Failure Mode and Effect Analysis
A systematic approach to risk identification and analysis of identifying possible modes of failure and attempting to prevent their occurrence.
The ratio of the number of failures of a given category to a given unit of measure.
A test result which fails to identify the presence of a defect that is actually present in the test object.
A test result in which a defect is reported although no such defect actually exists in the test object.
The process of intentionally adding a defect to a component or system to determine whether it can detect and possibly recover from it.
The process of intentionally adding defects to a component or system to monitor the rate of detection and removal, and to estimate the number of defects remaining.
The degree to which a component or system operates as intended despite the presence of hardware or software faults.
Fault Tree Analysis
A technique used to analyze the causes of faults (defects). The technique visually models how logical relationships between failures, human errors, and external events can combine to cause specific faults to occur.
An iterative and incremental software development process driven from a client-valued functionality (feature) perspective. Feature-driven development is mostly used in Agile software development.
A type of testing conducted to evaluate the system behavior under productive connectivity conditions in the field.
A result of an evaluation that identifies some important issue, problem, or opportunity.
A component or set of components that controls incoming and outgoing network traffic based on predetermined security rules.
follow-up test case
A test case generated by applying a metamorphic relation to a source test case during metamorphic testing.
A type of review that follows a defined process with a formally documented output.
A type of evaluation designed and used to improve the quality of a component or system, especially when it is still being designed.
The degree to which the functions facilitate the accomplishment of specified tasks and objectives.
The degree to which the set of functions covers all the specified tasks and user objectives.
The degree to which a component or system provides the correct results with the needed degree of precision.
The absence of unreasonable risk due to hazards caused by malfunctioning behavior of electrical/electronic (E/E) systems.
The degree to which a component or system provides functions that meet stated and implied needs when used under specified conditions.
Testing performed to evaluate if a component or system satisfies functional requirements.
A software testing technique used to discover security vulnerabilities by inputting massive amounts of random data, called fuzz, to the component or system.
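A minimal fuzzing sketch: random byte strings are fed to a toy length-prefixed parser (a hypothetical test object), and any exception other than the specified rejection counts as a finding. Real fuzzers add coverage guidance and input mutation:

```python
import random

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy test object: first byte gives the payload length."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    if length > len(data) - 1:
        raise ValueError("length exceeds payload")
    return data[1:1 + length]

random.seed(42)  # reproducible fuzz run
crashes = 0
for _ in range(1000):
    fuzz = bytes(random.randrange(256) for _ in range(random.randrange(8)))
    try:
        parse_length_prefixed(fuzz)
    except ValueError:
        pass  # rejected cleanly, as specified
    except Exception:
        crashes += 1  # unexpected failure worth investigating

print(f"{crashes} unexpected crashes")
```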
generic test automation architecture
Representation of the layers, components, and interfaces of a test automation architecture, allowing for a structured and modular approach to implement test automation.
graphical user interface
A type of interface that allows users to interact with a component or system through graphical icons and visual indicators.
Testing performed by interacting with the software under test via the graphical user interface.
A person or organization who is actively involved in security attacks, usually with malicious intent.
hardware in the loop
Dynamic testing performed using real hardware with integrated software in a simulated environment.
Transformation of a variable-length string of characters into a usually shorter, fixed-length value or key. Hashed values, or hashes, are commonly used in table or database lookups. Cryptographic hash functions are used to secure data.
A generally recognized rule of thumb that helps to achieve a goal.
An evaluation of a work product that uses a heuristic.
high-level test case
A test case with abstract preconditions, input data, expected results, postconditions, and actions (where applicable).
An approach to design that aims to make software products more usable by focusing on the use of the software products and applying human factors, ergonomics, and usability knowledge and techniques.
A pointer within a web page that leads to other web pages.
hyperlink test tool
A tool used to check that no broken hyperlinks are present on a website.
An organizational improvement model that serves as a roadmap for initiating, planning, and implementing improvement actions. The IDEAL model is named for the five phases it describes: initiating, diagnosing, establishing, acting, and learning.
The identification of all work products affected by a change, including an estimate of the resources needed to accomplish the change.
incremental development model
A type of software development lifecycle model in which the component or system is developed through a series of increments.
independence of testing
Separation of responsibilities, which encourages the accomplishment of objective testing.
independent test lab
An organization responsible for testing and certifying that the software, hardware, firmware, platform, and operating system follow all the jurisdictional rules for each location where the product will be used.
A type of review that does not follow a defined process and has no formally documented output.
Measures that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation. These measures include providing for restoration of information systems by incorporating protection, detection, and reaction capabilities.
input data testing
A test level that focuses on the quality of the data used for training and prediction by ML models.
A security threat originating from within the organization, often by an authorized system user.
Testing performed by people who are co-located with the project team but are not fellow employees.
A type of formal review to identify issues in a work product, which provides measurement to improve the review process and the software development process.
The degree to which a component or system can be successfully installed and/or uninstalled in a specified environment.
A test level that focuses on interactions between components or systems.
The degree to which a component or system allows only authorized access and modification to a component, a system or data.
A type of integration testing performed to determine whether components or systems pass data and control correctly to one another.
The degree to which two or more components or systems can exchange information and use the information that has been exchanged.
intrusion detection system
A system which monitors activities on the 7 layers of the OSI model from network to application level, to detect violations of the security policy.
iterative development model
A type of software development lifecycle model in which the component or system is developed through a series of repeated cycles.
A scripting technique in which test scripts contain high-level keywords and supporting files that contain low-level scripts that implement those keywords.
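A minimal keyword-driven sketch: the high-level script lists keywords and arguments only, while the low-level implementations live elsewhere (the keyword names here are hypothetical):

```python
# Low-level scripts implementing the keywords:
def open_account(state, name):
    state[name] = 0

def deposit(state, name, amount):
    state[name] += int(amount)

KEYWORDS = {"open_account": open_account, "deposit": deposit}

# High-level test script: keywords plus arguments, no implementation detail.
script = [
    ("open_account", "alice"),
    ("deposit", "alice", "40"),
    ("deposit", "alice", "60"),
]

state = {}
for keyword, *args in script:
    KEYWORDS[keyword](state, *args)

print(state["alice"])  # 100
```

Domain experts can then write or review test scripts without touching the keyword implementations.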
The degree to which a component or system can be used by specified users to achieve specified goals of learning with satisfaction and freedom from risk in a specified context of use.
level of intrusion
The level to which a test object is modified by adjusting it for testability.
level test plan
A test plan that typically addresses one test level.
A simple scripting technique without any control structure in the test scripts.
The process of simulating a defined set of activities at a specified load to be submitted to a component or system.
A tool that generates a load for a system under test.
The control and execution of load generation, and performance monitoring and reporting of the component or system.
Documentation defining a designated number of virtual users who process a defined set of transactions in a specified time period that a component or system being tested may experience in production.
A type of performance testing conducted to evaluate the behavior of a component or system under varying loads, usually between anticipated conditions of low, typical, and peak usage.
low-level test case
A test case with concrete values for preconditions, input data, expected results, postconditions, and a detailed description of actions (where applicable).
The degree to which a component or system can be modified by the intended maintainers.
The process of modifying a component or system after delivery to correct defects, improve quality characteristics, or adapt to a changed environment.
Testing the changes to an operational system or the impact of a changed environment to an operational system.
Software that is intended to harm a system or its components.
Static analysis aiming to detect and remove malicious code received at an interface.
A systematic evaluation of software acquisition, supply, development, operation, or maintenance process, performed by or on behalf of management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of management approaches to achieve fitness for purpose.
A view of quality measured by the degree that a product or service conforms to its intended design and requirements based on the process used.
master test plan
A test plan that is used to coordinate multiple test levels or test types.
Testing to determine the correctness of the pay table implementation, the random number generator results, and the return to player computations.
(1) The capability of an organization with respect to the effectiveness and efficiency of its processes and work practices. (2) The degree to which a component or system meets needs for reliability under normal operation.
A structured collection of elements that describe certain aspects of maturity in an organization, and aid in the definition and understanding of an organization’s processes.
Any model used in model-based testing.
mean time between failures
The average time between consecutive failures of a component or system.
mean time to failure
The average time from the start of operation to a failure for a component or system.
mean time to repair
The average time a component or system will take to recover from a failure.
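The three reliability metrics above can be computed from a failure log; the observations below are hypothetical, and MTBF = MTTF + MTTR is the usual approximation for repairable systems:

```python
# Hypothetical failure log: operating time until each failure (hours)
# and the repair time that followed.
uptimes = [120.0, 80.0, 100.0]   # time-to-failure observations
repairs = [2.0, 4.0, 3.0]        # time-to-repair observations

mttf = sum(uptimes) / len(uptimes)   # mean time to failure
mttr = sum(repairs) / len(repairs)   # mean time to repair
mtbf = mttf + mttr                   # mean time between failures
                                     # (approximation: MTBF = MTTF + MTTR)
print(mttf, mttr, mtbf)  # 100.0 3.0 103.0
```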
The process of assigning a number or category to an entity to describe an attribute of that entity.
A memory access failure due to a defect in a program’s dynamic store allocation logic that causes it to fail to release memory after it has finished using it.
A description of how a change in the test inputs from the source test case to the follow-up test case affects a change in the expected outputs from the source test case to the follow-up test case.
A test technique in which the inputs and expected results are extrapolated from a passing test case using a metamorphic relation.
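A minimal sketch using a well-known metamorphic relation for sine, sin(x) = sin(π − x): no oracle for the exact output value is needed, only the relation between source and follow-up results is checked:

```python
import math

# Metamorphic relation: sin(x) == sin(pi - x)
source_inputs = [0.3, 1.1, 2.5]  # passing source test cases
violations = 0
for x in source_inputs:
    follow_up = math.pi - x  # follow-up test case from the relation
    if not math.isclose(math.sin(x), math.sin(follow_up)):
        violations += 1

print(violations)  # 0
```

The same idea applies wherever exact expected results are hard to state, e.g. search engines (adding a filter must not enlarge the result set).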
A table containing different test approaches, testing techniques and test types that are required depending on the Automotive Safety Integrity Level (ASIL) and on the context of the test object.
methodical test strategy
A test strategy whereby the test team uses a pre-determined set of test conditions such as a quality standard, a checklist, or a collection of generalized, logical test conditions which may relate to a particular domain, application or type of testing.
A measurement scale and the method used for measurement.
ML functional performance
The degree to which an ML model meets ML functional performance criteria.
ML functional performance criteria
Criteria based on ML functional performance metrics used as a basis for model evaluation, tuning and testing.
ML functional performance metrics
A set of measures that relate to the functional correctness of an ML system.
An implementation of machine learning (ML) that generates a prediction, classification or recommendation based on input data.
ML model testing
A test level that focuses on the ability of an ML model to meet required ML functional performance criteria and non-functional criteria.
The coverage of model elements.
model in the loop
Dynamic testing performed using a simulation model of the system in a simulated environment.
model-based test strategy
A test strategy whereby the test team derives testware from models.
Testing based on or involving models.
(1) The person responsible for running review meetings. (2) The person who performs a usability test session.
The degree to which a component or system can be modified without degrading its quality.
modified condition/decision coverage
The coverage of all outcomes of the atomic conditions that independently affect the overall decision outcome.
modified condition/decision testing
A white-box test technique in which test cases are designed to exercise outcomes of atomic conditions that independently affect a decision outcome.
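As an illustrative sketch, for a hypothetical decision with three atomic conditions, an MC/DC test set needs only n + 1 = 4 test cases rather than all 2³ = 8 combinations:

```python
# Hypothetical decision under test: d(a, b, c) = a and (b or c)
def d(a, b, c):
    return a and (b or c)

# Each pair below differs in exactly one atomic condition and flips the
# decision outcome, demonstrating that the condition independently
# affects the decision.
tests = [
    (True, True, False),   # with row 2: shows 'a' is independent
    (False, True, False),  # with row 1: shows 'b' is independent
    (True, False, True),   # with row 4: shows 'c' is independent
    (True, False, False),
]
results = [d(*t) for t in tests]
assert results == [True, False, True, False]
```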
The degree to which a system is composed of discrete components such that a change to one component has minimal impact on other components.
A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyzes the behavior of the component or system.
Testing to determine if many players can simultaneously interact with the casino game world, with computer-controlled opponents, game servers, and with each other, as expected according to the game design.
multiple condition coverage
The coverage of all possible combinations of all single condition outcomes within one statement.
multiple condition testing
A white-box test technique in which test cases are designed to exercise outcome combinations of atomic conditions.
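Unlike MC/DC, multiple condition testing exercises all 2ⁿ combinations of the atomic conditions. A minimal sketch for a hypothetical two-condition decision:

```python
from itertools import product

# Hypothetical decision with two atomic conditions.
def decision(a, b):
    return a or b

# Multiple condition coverage requires all 2**n combinations;
# for two conditions that is four test cases.
combos = list(product([False, True], repeat=2))
outcomes = {c: decision(*c) for c in combos}
assert len(combos) == 4
assert outcomes[(False, False)] is False
```

The combinatorial growth in 2ⁿ is one reason MC/DC is often preferred for decisions with many atomic conditions.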
Myers-Briggs Type Indicator
An indicator of psychological preference representing the different personalities and communication styles of people.
The coverage of sequences of N+1 transitions.
Testing a component or system in a way for which it was not intended to be used.
neighborhood integration testing
A type of integration testing in which all of the nodes that connect to a given node are the basis for the integration testing.
A sub-network with a defined level of trust. For example, the Internet or a public zone would be considered to be untrusted.
The coverage of activated neurons in the neural network for a set of tests.
Testing performed to evaluate that a component or system complies with non-functional requirements.
The degree to which actions or events can be proven to have taken place, so that the actions or events cannot be repudiated later.
Model-based test approach whereby test cases are generated into a repository for future execution.
Model-based test approach whereby test cases are generated and executed simultaneously.
A software tool that is available to all potential users in source code form, usually via the internet. Its users are permitted, usually under license, to study, change, improve and, at times, to distribute the software.
A system in which controlling action or input is independent of the output or changes in output.
The degree to which a component or system has attributes that make it easy to operate and control.
operational acceptance testing
A type of acceptance testing performed to determine whether operations and/or systems administration staff can accept a system.
An actual or predicted pattern of use of the component or system.
The process of developing and implementing an operational profile.
Testing performed by people who are not co-located with the project team and are not fellow employees.
An approach in which two team members simultaneously collaborate on testing a work product.
pairwise integration testing
A type of integration testing that targets pairs of components that work together as shown in a call graph.
A black-box test technique in which test cases are designed to exercise all possible discrete combinations of values for each pair of parameters.
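A sketch of the underlying idea, with three hypothetical two-valued parameters: four test cases suffice to cover all twelve parameter-value pairs, versus eight for exhaustive combination:

```python
from itertools import combinations

# Hypothetical parameters, two values each: 2**3 = 8 full combinations,
# but these 4 cases already cover every pair of parameter values.
cases = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covered_pairs(cases):
    pairs = set()
    for case in cases:
        for (i, vi), (j, vj) in combinations(enumerate(case), 2):
            pairs.add((i, vi, j, vj))
    return pairs

# 3 parameter pairs x 4 value combinations each = 12 distinct pairs.
assert len(covered_pairs(cases)) == 12
```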
par sheet testing
Testing to determine that the game returns the correct mathematical results to the screen, to the players’ accounts, and to the casino account.
Decision rules used to determine whether a test item has passed or failed.
The status of a test result in which the actual result matches the expected result.
A security attack recovering secret passwords stored in a computer system or transmitted over a network.
A sequence of consecutive edges in a directed graph.
A white-box test technique in which test cases are designed to execute paths in a control flow graph.
A review performed by others with the same abilities to create the work product.
A testing technique aiming to exploit security vulnerabilities (known or unknown) to gain unauthorized access.
The degree to which a component or system uses time, resources and capacity when accomplishing its designated functions.
A metric that supports the judgment of process performance.
Testing to determine the performance efficiency of a component or system.
performance testing tool
A test tool that generates load for a designated test item and that measures and records its performance during test execution.
A review technique in which a work product is evaluated from the perspective of different stakeholders with the purpose to derive other work products.
A security attack intended to redirect a website’s traffic to a fraudulent website without the user’s knowledge or consent.
The percentage of defects that are removed in the same phase of the software lifecycle in which they were introduced.
An attempt to acquire personal or sensitive information by masquerading as a trustworthy entity in an electronic communication.
A consensus-based estimation technique, mostly used to estimate effort or relative size of user stories in Agile software development. It is a variation of the Wideband Delphi method using a deck of cards with values representing the units in which the team estimates.
player perspective testing
Testing done by testers from a player’s perspective to validate player satisfaction.
A data item that specifies the location of another data item.
The degree to which a component or system can be transferred from one hardware, software or other operational or usage environment to another.
A type of testing to ensure that the release is performed correctly and the application can be deployed.
The expected state of a test item and its environment at the end of test case execution.
The required state of a test item and its environment prior to test case execution.
The level of (business) importance assigned to an item, e.g., defect.
A systematic approach to risk-based testing that employs product risk identification and analysis to create a product risk matrix based on likelihood and impact. Term is derived from Product RISk MAnagement.
An unintended change in behavior of a component or system caused by measuring it.
A disciplined evaluation of an organization’s software processes against a reference model.
A framework in which processes of the same nature are classified into an overall model.
process-compliant test strategy
A test strategy whereby the test team follows a set of predefined processes, whereby the processes address such items as documentation, the proper identification and use of the test basis and test oracle(s), and the organization of the test team.
A scripting technique where scripts are structured into scenarios which represent use cases of the software under test. The scripts can be parameterized with test data.
A risk impacting the quality of a product.
A view of quality measured by the degree that well-defined quality characteristics are met.
A risk that impacts project success.
A type of testing to confirm that sensors can detect nearby objects without physical contact.
An independently derived variant of the test item used to generate results, which are compared with the results of the original test item based on the same test inputs.
The degree to which a component or system satisfies the stated and implied needs of its various stakeholders.
Activities focused on providing confidence that quality requirements will be fulfilled.
A category of quality attributes that bears on work product quality.
A set of activities designed to evaluate the quality of a component or system.
quality function deployment
A facilitated workshop technique that helps determine critical characteristics for new product development.
The process of establishing and directing a quality policy, quality objectives, quality planning, quality control, quality assurance, and quality improvement for an organization.
A product risk related to a quality characteristic.
A matrix describing the participation by various roles in completing tasks or deliverables for a project or process. It is especially useful in clarifying roles and responsibilities. RACI is an acronym derived from the four key responsibilities most typically used: Responsible, Accountable, Consulted, and Informed.
A technique for decreasing the load on a system in a measurable and controlled way.
A technique for increasing the load on a system in a measurable and controlled way.
A black-box test technique in which test cases are designed by generating random independent inputs to match an operational profile.
reactive test strategy
A test strategy whereby the test team waits to design and implement tests until the software is received, reacting to the actual system under test.
Testing that dynamically responds to the behavior of the test object and to test results being obtained.
The exploration of a target area aiming to gain information that can be useful for an attack.
The degree to which a component or system can recover the data directly affected by an interruption or a failure and re-establish the desired state of the component or system.
A type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software.
regression-averse test strategy
A test strategy whereby the test team applies various techniques to manage the risk of regression such as functional and/or non-functional regression test automation at one or more levels.
regulatory acceptance testing
A type of acceptance testing performed to verify whether a system conforms to relevant laws, policies and regulations.
The degree to which a component or system performs specified functions under specified conditions for a specified period of time.
reliability growth model
A model that shows the growth in reliability over time of a component or system as a result of defect removal.
remote test lab
A facility that provides remote access to a test environment.
The degree to which a component or system can replace another specified component or system for the same purpose in the same environment.
A provision that contains criteria to be fulfilled.
An approach to testing in which test cases are designed based on requirements.
The degree to which the amounts and types of resources used by a component or system, when performing its functions, meet requirements.
A regular event in which team members discuss results, review their practices, and identify ways to improve.
The degree to which a work product can be used in more than one system, or in building other work products.
A type of static testing in which a work product or process is evaluated by one or more individuals to detect defects or to provide improvements.
A document describing the approach, resources and schedule of intended review activities. It identifies, amongst others: documents and code to be reviewed, review types to be used, participants, as well as entry and exit criteria to be applied in case of formal reviews, and the rationale for their choice. It is a record of the review planning process.
A participant in a review who identifies issues in the work product.
A factor that could result in future negative consequences.
The overall process of risk identification and risk assessment.
The process to examine identified risks and determine the risk level.
The process of finding, recognizing and describing risks.
The damage that will be caused if the risk becomes an actual outcome or event.
The qualitative or quantitative measure of a risk defined by impact and likelihood.
The probability that a risk will become an actual outcome or event.
The process for handling risks.
The process through which decisions are reached and protective measures are implemented for reducing or maintaining risks to specified levels.
Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.
A review technique in which a work product is evaluated from the perspective of different stakeholders.
A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.
root cause analysis
An analysis technique aimed at identifying the root causes of defects. By directing corrective measures at root causes, it is hoped that the likelihood of defect recurrence will be minimized.
S.M.A.R.T. goal methodology
A methodology whereby objectives are defined very specifically rather than generically. SMART is an acronym derived from the attributes of the objective to be defined: Specific, Measurable, Attainable, Relevant and Timely.
safety integrity level
The level of risk reduction provided by a safety function, related to the frequency and severity of perceived hazards.
A cryptographic technique that adds random data (salt) to the user data prior to hashing.
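A minimal sketch using Python's standard `hashlib` (production systems would instead use a dedicated password-hashing function such as bcrypt, scrypt or Argon2, which salt and stretch by design):

```python
import hashlib
import os

# A sketch of salting: a random salt per user, stored alongside the hash.
def hash_password(password, salt=None):
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

# The same password hashes differently under different salts, which
# defeats precomputed (rainbow-table) lookups.
s1, h1 = hash_password("secret")
s2, h2 = hash_password("secret")
assert h1 != h2

# Verification recomputes the hash with the stored salt.
assert hash_password("secret", s1)[1] == h1
```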
The degree to which a component or system can be adjusted for changing capacity.
Testing to determine the scalability of the software product.
A review technique in which a work product is evaluated to determine its ability to address specific scenarios.
A person who records information at a review meeting.
A person who executes security attacks that have been created by other hackers rather than creating one’s own attacks.
Testing (manual or automated) that follows a test script.
The degree to which a component or system protects its data and resources against unauthorized access or use and secures unobstructed access and use for its legitimate users.
An attempt to gain unauthorized access to a component or system, resources, information, or an attempt to compromise system integrity.
An audit evaluating an organization’s security processes and infrastructure.
A high-level document describing the principles, approach and major objectives of the organization regarding security.
A set of steps required to implement the security policy and the steps to be taken in response to a security incident.
A quality risk related to security.
Testing to determine the security of the software product.
A weakness in the system that could allow for a successful security attack.
sequential development model
A type of software development lifecycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.
A technique to enable virtual delivery of services which are deployed, accessed and managed remotely.
session-based test management
A method for measuring and managing session-based testing.
An approach in which test activities are planned as test sessions.
The degree of impact that a defect has on the development or operation of a component or system.
A programming language/interpreter technique for evaluating compound conditions in which a condition on one side of a logical operator may not be evaluated if the condition on the other side is sufficient to determine the final outcome.
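For example, in Python both `and` and `or` short-circuit:

```python
calls = []

def check(name, value):
    calls.append(name)
    return value

# With 'and', the right operand is skipped when the left is already False.
result = check("left", False) and check("right", True)
assert result is False
assert calls == ["left"]          # "right" was never evaluated

# A common use: guard against errors in the second operand.
items = []
safe = len(items) > 0 and items[0] == "x"   # no IndexError is raised
assert safe is False
```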
sign change coverage
The coverage of neurons activated with both positive and negative activation values in a neural network for a set of tests.
The coverage achieved if by changing the sign of each neuron it can be shown to individually cause one neuron in the next layer to change sign while all other neurons in the next layer do not change sign for a set of tests.
A component or system used during testing which behaves or operates like a given component or system.
A test suite that covers the main functionality of a component or system to determine whether it works properly before planned testing begins.
An attempt to trick someone into revealing information (e.g., a password) that can be used to attack systems or networks.
software development lifecycle
The activities performed at each stage in software development, and how they relate to one another logically and chronologically.
software in the loop
Dynamic testing performed using real software in a simulated environment or with experimental hardware.
The period of time that begins when a software product is conceived and ends when the software is no longer available for use. The software lifecycle typically includes a concept phase, requirements phase, design phase, implementation phase, test phase, installation and checkout phase, operation and maintenance phase, and sometimes, retirement phase. Note these phases may overlap or be performed iteratively.
software process improvement
A program of activities designed to improve the performance and maturity of the organization’s software processes and the results of such a program.
software qualification test
Testing performed on completed, integrated software to provide evidence for compliance with software requirements.
Software Usability Measurement Inventory
A questionnaire-based usability testing tool that measures and benchmarks user experience.
source test case
A test case that passed and is used as the basis of follow-up test cases in metamorphic testing.
specification by example
A development technique in which the specification is defined by examples.
Testing to determine the ability of a system to recover from sudden bursts of peak loads and return to a steady state.
A type of code injection in the structured query language (SQL).
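A minimal sketch using Python's built-in `sqlite3`, contrasting a vulnerable concatenated query with a parameterized one (the table and data are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, secret TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Malicious input crafted to alter the query's structure.
user_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE clause.
rows = con.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
assert len(rows) == 1            # a row leaks despite the bogus name

# Safe: a parameterized query treats the input as data, not as SQL.
rows = con.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
assert rows == []
```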
Formal, possibly mandatory, set of requirements developed and used to prescribe consistent approaches to the way of working or to provide guidelines (e.g., ISO/IEC standards, IEEE standards, and organizational standards).
standard-compliant test strategy
A test strategy whereby the test team follows a standard. Standards followed may be valid e.g., for a country (legislation standards), a business domain (domain standards), or internally (organizational standards).
state transition testing
A black-box test technique in which test cases are designed to exercise elements of a state transition model.
An entity in a programming language, which is typically the smallest indivisible unit of execution.
The coverage of executable statements.
A white-box test technique in which test cases are designed to execute statements.
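An illustrative sketch with a hypothetical test object:

```python
# Hypothetical test object with four executable statements.
def classify(n):
    kind = "small"          # statement 1
    if n > 100:             # statement 2
        kind = "large"      # statement 3
    return kind             # statement 4

# A single test case with n=5 executes statements 1, 2 and 4,
# giving 3/4 = 75% statement coverage. Adding a case with n=200
# also executes statement 3, reaching 100%.
assert classify(5) == "small"
assert classify(200) == "large"
```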
The process of evaluating a component or system without executing it, based on its form, structure, content, or documentation.
Testing that does not involve the execution of a test item.
A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers.
Coverage measures based on the internal structure of a component or system.
A scripting technique that builds and utilizes a library of reusable (parts of) scripts.
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
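A minimal sketch (the component and stub names are hypothetical):

```python
# Component under test depends on a called component (e.g., a rate service).
def total_price(amount, rate_service):
    return amount * (1 + rate_service.tax_rate())

# A stub: a special-purpose stand-in for the called component that
# returns a canned answer, so the caller can be tested in isolation.
class TaxRateStub:
    def tax_rate(self):
        return 0.25

assert total_price(100, TaxRateStub()) == 125.0
```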
A type of evaluation designed and used to gather conclusions about the quality of a component or system, especially when a substantial part of it has completed design.
The step-by-step process of reducing the security vulnerabilities of a system by applying a security policy and different layers of protection.
system integration testing
A test level that focuses on interactions between systems.
system of systems
Multiple heterogeneous, distributed systems that are embedded in networks at multiple levels and in multiple interconnected domains, addressing large-scale inter-disciplinary common problems and purposes, usually without a common management structure.
system qualification test
Testing performed on the completed, integrated system of software components, hardware components, and mechanics to provide evidence for compliance with system requirements and that the complete system is ready for delivery.
A test level that focuses on verifying that a system as a whole meets specified requirements.
The amount of data passing through a component or system in a given time period.
system under test
A type of test object that is a system.
System Usability Scale
A simple, ten-item attitude scale giving a global view of subjective assessments of usability.
Systematic Test and Evaluation Process
A structured testing methodology also used as a content-based model for improving the testing process. It does not require that improvements occur in a specific order.
A formal review by technical experts who examine the quality of a work product and identify discrepancies from specifications and standards.
A set of one or more test cases.
test adaptation layer
The layer in a test automation architecture which provides the necessary code to adapt test scripts on an abstract level to the various components, configuration or interfaces of the SUT.
The activity that identifies test conditions by analyzing the test basis.
The implementation of the test strategy for a specific project.
(1) A person who provides guidance and strategic direction for a test organization and for its relationship with other disciplines. (2) A person who defines the way testing is structured for a given system, including topics such as test tools and test data management.
The use of software to perform or support test activities.
test automation architecture
An instantiation of the generic test automation architecture to define the architecture of a test automation solution, i.e., its layers, components, services and interfaces.
test automation engineer
A person who is responsible for the design, implementation and maintenance of a test automation architecture as well as the technical evolution of the resulting test automation solution.
test automation framework
A tool that provides an environment for test automation. It usually includes a test harness and test libraries.
test automation manager
A person who is responsible for the planning and supervision of the development and evolution of a test automation solution.
test automation solution
A realization/implementation of a test automation architecture, i.e., a combination of components implementing a specific test automation assignment. The components may include commercial off-the-shelf test tools, test automation frameworks, as well as test hardware.
test automation strategy
A high-level plan to achieve long-term objectives of test automation under given boundary conditions.
The body of knowledge used as the basis for test analysis and design.
A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
test case explosion
The disproportionate growth of the number of test cases with growing size of the test basis, when using a certain test design technique. Test case explosion may also happen when applying the test design technique systematically for the first time.
Documentation of the goal or objective for a test session.
During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report.
The activity that makes testware available for later use, leaves test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.
test completion report
A type of test report produced at completion milestones that provides an evaluation of the corresponding test items against exit criteria.
A testable aspect of a component or system identified as a basis for testing.
The activity that develops and applies corrective actions to get a test project on track when it deviates from what was planned.
An instance of the test process against a single identifiable version of the test object.
Data needed for test execution.
test data preparation
The activity to select data from existing databases or create, generate, manipulate and edit data for testing.
test definition layer
The layer in a generic test automation architecture which supports test implementation by supporting the definition of test suites and/or test cases, e.g., by offering templates or guidelines.
The activity that derives and specifies test cases from test conditions.
A senior manager who manages test managers.
An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
An approximation related to various aspects of testing.
The activity that runs a test on a component or system producing actual results.
test execution automation
The use of software, e.g., capture/playback tools, to control the execution of tests, the comparison of actual results to expected results, the setting up of test preconditions, and other test control and reporting functions.
test execution layer
The layer in a generic test automation architecture which supports the execution of test suites and/or test cases.
test execution schedule
A schedule for the execution of test suites within a test cycle.
test execution tool
A test tool that executes tests against a designated test item and evaluates the outcomes against expected results and postconditions.
test generation layer
The layer in a generic test automation architecture which supports manual or automated design of test suites and/or test cases.
A collection of stubs and drivers needed to execute a test suite.
A customized software interface that enables automated testing of a test object.
The activity that prepares the testware needed for test execution based on test analysis and design.
test improvement plan
A plan for achieving organizational test process improvement objectives based on a thorough understanding of the current strengths and weaknesses of the organization’s test processes and test process assets.
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office environment and procedures.
A part of a test object used in the test process.
On large projects, the person who reports to the test manager and is responsible for project management of a particular test level or a particular set of testing activities.
A specific instantiation of a test process.
A chronological record of relevant details about the execution of tests.
The activity of creating a test log.
The planning, scheduling, estimating, monitoring, reporting, control and completion of test activities.
test management tool
A tool that supports test management.
The person responsible for project management of testing activities, resources, and evaluation of a test object.
Test Maturity Model integration
A five-level staged framework for test process improvement, related to the Capability Maturity Model Integration (CMMI), that describes the key elements of an effective test process.
The purpose of testing for an organization, often documented as part of the test policy.
A model describing testware that is used for testing a component or a system under test.
The activity that checks the status of testing activities, identifies any variances from planned or expected, and reports status to stakeholders.
The work product to be tested.
The reason or purpose of testing.
A source to determine an expected result to compare with the actual result of the system under test.
Documentation describing the test objectives to be achieved and the means and the schedule for achieving them, organized to coordinate testing activities.
The activity of establishing or updating a test plan.
Test Point Analysis
A formula-based test estimation method that builds on function point analysis.
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap up activities post execution.
The set of interrelated activities comprising of test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.
test process group
A collection of specialists who facilitate the definition, maintenance, and improvement of the test processes used by an organization.
test process improvement
A program of activities undertaken to improve the performance and maturity of the organization’s test processes.
test process improvement manifesto
A statement that echoes the Agile manifesto, and defines values for improving the test process.
test process improver
A person implementing improvements in the test process based on a test improvement plan.
test progress report
A type of test report produced at regular intervals about the progress of test activities against a baseline, risks, and alternatives requiring a decision.
A graphical model representing the relationship of the amount of testing per level, with more at the bottom than at the top.
Documentation summarizing test activities and results.
Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.
The consequence/outcome of the execution of a test.
The execution of a test suite on a specific version of the test object.
A list of activities, tasks or events of the test process, identifying their intended start and finish dates and/or times, and interdependencies.
A sequence of instructions for the execution of a test.
test selection criteria
The criteria used to guide the generation of test cases or to select test cases in order to limit the size of a test.
An uninterrupted period of time spent in executing tests.
The complete documentation of the test design, test cases, and test scripts for a specific test item.
Documentation aligned with the test policy that describes the generic requirements for testing and details how to perform testing within an organization.
A set of test scripts or test procedures to be executed in a specific test run.
A procedure used to define test conditions, design test cases, and specify test data.
A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.
A software development technique in which the test cases are developed, and often automated, and then the software is developed incrementally to pass those test cases.
An approach to software development in which the test cases are designed and implemented before the associated component or system is developed.
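A minimal sketch of the red-green cycle (the `slugify` function is hypothetical):

```python
# Step 1 (red): write the test first; it fails because slugify
# does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: run the test; it now passes.
test_slugify()
```

The cycle then repeats: add a failing test for the next small increment of behavior, make it pass, and refactor.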
The degree to which test conditions can be established for a component or system, and tests can be performed to determine whether those test conditions have been met.
A person who performs testing.
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of a component or system and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Work products produced during the test process for use in planning, designing, executing, evaluating and reporting on testing.
think aloud usability testing
A usability testing technique where test participants share their thoughts with the moderator and observers by thinking aloud while they solve usability test tasks. Thinking aloud is useful for understanding the test participant.
The amount of time required by a user to determine and execute the next action in a sequence of actions.
The coverage of neurons exceeding a threshold activation value in a neural network for a set of tests.
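As an illustration of this metric, the following minimal sketch computes threshold coverage for a toy network whose per-test neuron activations are given as plain dictionaries. The neuron names, test ids, and the 0.5 threshold are invented for the example and are not part of the glossary definition.

```python
def threshold_coverage(activations_per_test, threshold=0.5):
    """Fraction of neurons whose activation exceeded `threshold`
    in at least one test. `activations_per_test` maps test ids to
    {neuron_name: activation_value} dicts."""
    all_neurons = set()
    covered = set()
    for activations in activations_per_test.values():
        for neuron, value in activations.items():
            all_neurons.add(neuron)
            if value > threshold:
                covered.add(neuron)
    return len(covered) / len(all_neurons)

# Two tests over a three-neuron layer: n1 and n2 exceed the
# threshold in at least one test, n3 never does.
tests = {
    "t1": {"n1": 0.9, "n2": 0.1, "n3": 0.2},
    "t2": {"n1": 0.3, "n2": 0.7, "n3": 0.4},
}
print(threshold_coverage(tests))  # 2 of 3 neurons covered
```

Adding a test that drives n3 above the threshold would raise the coverage to 1.0, which is how such a metric guides test selection for ML models.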
The degree to which a component or system can perform its required functions within required response times, processing times and throughput rates.
Total Quality Management
An organization-wide management approach to quality based on employee participation to achieve long-term success through customer satisfaction.
A set of exploratory tests organized around a special focus.
A continuous business-driven framework for test process improvement that describes the key elements of an effective and efficient test process.
The degree to which a relationship can be established between two or more work products.
A two-dimensional table which correlates two entities (e.g., requirements and test cases). The table allows the links between the two entities to be traced in both directions, enabling the determination of coverage achieved and the assessment of the impact of proposed changes.
A view of quality based on the perception and feeling of individuals.
unit test framework
A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities.
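A minimal sketch using Python's built-in unittest framework: the component under test is isolated from its real dependency by a hand-written stub that returns a canned answer. All class and method names here are invented for illustration.

```python
import unittest

class TaxServiceStub:
    """Stub replacing a real tax-service dependency, so the
    component can be tested in isolation."""
    def rate_for(self, region):
        return 0.2  # fixed canned answer

class PriceCalculator:
    """The component under test."""
    def __init__(self, tax_service):
        self.tax_service = tax_service

    def gross(self, net, region):
        return net * (1 + self.tax_service.rate_for(region))

class PriceCalculatorTest(unittest.TestCase):
    def test_gross_applies_tax_rate(self):
        calc = PriceCalculator(TaxServiceStub())
        self.assertAlmostEqual(calc.gross(100.0, "EU"), 120.0)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The framework discovers the test case, runs it, and reports the result; the stub plays the role the definition assigns to "suitable stubs and drivers".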
The degree to which a component or system can be used by specified users to achieve specified goals in a specified context of use.
A process through which information about the usability of a system is gathered in order to improve the system (known as formative evaluation) or to assess the merit or worth of a system (known as summative evaluation).
A test facility in which unintrusive observation of participant reactions and responses to software takes place.
A requirement on the usability of a component or system.
usability test participant
A representative user who solves typical tasks in a usability test.
usability test script
A document specifying a sequence of actions for the execution of a usability test. It is used by the moderator to keep track of briefing and pre-session interview questions, usability test tasks, and post-session interview questions.
usability test session
A test session in usability testing in which a usability test participant is executing tests, moderated by a moderator and observed by a number of observers.
usability test task
A usability test execution activity specified by the moderator that needs to be accomplished by a usability test participant within a given period of time.
Testing to evaluate the degree to which the system can be used by specified users with effectiveness, efficiency and satisfaction in a specified context of use.
use case testing
A black-box test technique in which test cases are designed to exercise use case behaviors.
user acceptance testing
A type of acceptance testing performed to determine if intended users accept the system.
user error protection
The degree to which a component or system protects users against making errors.
A person’s perceptions and responses resulting from the use or anticipated use of a software product.
All components of a system that provide information and controls for the user to accomplish specific tasks with the system.
user interface aesthetics
The degree to which a user interface enables pleasing and satisfying interaction for the user.
user interface guideline
A low-level, specific rule or recommendation for user interface design that leaves little room for interpretation so designers implement it similarly. It is often used to ensure consistency in the appearance and behavior of the user interface of the systems produced by an organization.
A user or business requirement expressed in one sentence of everyday or business language that captures the functionality a user needs and the reason behind it, along with any non-functional criteria, and that includes acceptance criteria.
user story testing
A black-box test technique in which test cases are designed based on user stories to verify their correct implementation.
A usability evaluation in which a representative sample of users is asked to record subjective evaluations in a questionnaire, based on their experience of using a component or system.
user-agent based testing
A type of testing in which a test client is used to switch the user agent string and identify itself as a different client while executing test suites.
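The switch can be sketched with the standard library alone: the test client builds requests whose User-Agent header identifies it as different clients. The user-agent strings and URL below are hypothetical, and no network call is made.

```python
from urllib.request import Request

# Hypothetical user-agent strings for the clients under test.
USER_AGENTS = {
    "desktop-chrome": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0",
    "mobile-safari": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) Safari/604.1",
}

def build_request(url, client):
    """Build a request that identifies itself as `client`."""
    req = Request(url)
    req.add_header("User-Agent", USER_AGENTS[client])
    return req

req = build_request("https://example.com/", "mobile-safari")
# urllib stores header keys capitalized as "User-agent".
print(req.get_header("User-agent"))
```

In an actual test suite, the same test cases would be executed once per entry in the user-agent table, so the server's client-specific behavior can be exercised from a single test client.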
A view of quality measured by the degree that the needs, wants, and desires of a user are met.
A sequential software development lifecycle model describing a one-for-one relationship between major phases of software development from business requirements specification to delivery, and corresponding test levels from acceptance testing to component testing.
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
value change coverage
The coverage of neurons activated where their activation values differ by more than a change amount in the neural network for a set of tests.
A view of quality measured by the ratio of the cost to the value received from a product or service.
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
virtual test environment
A test environment in which one or more parts are digitally simulated.
A simulation of activities performed according to a user operational profile.
Testing that uses image recognition to interact with GUI objects.
A static analyzer that is used to detect particular security vulnerabilities in the code.
A type of review in which an author leads members of the review through a work product and the members ask questions and make comments about possible issues.
Web Content Accessibility Guidelines
A part of a series of web accessibility guidelines published by the Web Accessibility Initiative (WAI) of the World Wide Web Consortium (W3C), the main international standards organization for the internet. They consist of a set of guidelines for making content accessible, primarily for people with disabilities.
Website Analysis and Measurement Inventory
A commercial website analysis service providing a questionnaire for measuring user experience and assessing delivery of business goals online.
white-box test technique
A test technique based only on the internal structure of a component or system.
Testing based on an analysis of the internal structure of the component or system.
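The contrast with black-box techniques can be shown in a small sketch: the tests below are derived from the decision points inside `classify` rather than from its specification, so that every branch is exercised. The function and its inputs are invented for the example.

```python
def classify(x):
    """Component under test with three branches."""
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"

# White-box test selection: one input per branch found by reading
# the code's internal structure.
branch_tests = {-5: "negative", 0: "zero", 7: "positive"}
for value, expected in branch_tests.items():
    assert classify(value) == expected
print("all branches exercised")
```

Removing any one of the three inputs would leave a branch unexecuted, which a structural coverage measure (e.g., branch coverage) would report as a gap.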
An expert-based test estimation technique that aims to make an accurate estimate using the collective wisdom of the team members.
A pointer that references a location that is out of scope for that pointer or that does not exist.
XiL test environment
A generalized term for dynamic testing in different virtual test environments.