Overview

TMMi has a staged architecture for process improvement. It contains stages, or levels, through which an organization passes as its testing process evolves from one that is ad hoc and unmanaged to one that is managed, defined, measured, and finally in a state of continuous improvement, referred to as optimization. Achieving each stage ensures that an adequate foundation has been laid for the next stage. The internal structure of the TMMi is rich in testing practices that can be learned and applied in a systematic way to support a quality testing process that improves in incremental steps. There are five levels in the TMMi that prescribe a maturity hierarchy and an evolutionary path to test process improvement. Each level has a set of process areas that an organization needs to implement to achieve maturity at that level. Experience has shown that organizations do best when they focus their test process improvement efforts on a manageable number of process areas at a time, and that those areas require increasing sophistication as the organization improves. Because each maturity level forms a necessary foundation for the next level, trying to skip a maturity level is usually counter-productive. However, one must keep in mind that test process improvement efforts should always focus on the needs of the organization in the context of its business environment, and process areas at higher maturity levels may address the current needs of an organization or project. For example, organizations seeking to move from maturity level 1 to maturity level 2 are frequently encouraged to establish a test group, which is addressed by the Test Organization process area at maturity level 3. Although a test group is not a necessary characteristic of a TMMi level 2 organization, it can be a useful part of the organization’s approach to achieve TMMi maturity level 2.

Figure 1: TMMi maturity levels and process areas (source: TMMi syllabus)

The process areas for each maturity level of the TMMi are shown in figure 1. They are fully described later in separate chapters, whilst each level is explained below along with a brief description of the characteristics of an organization at each TMMi level. The description will introduce the reader to the evolutionary path prescribed in the TMMi for test process improvement. Note that the TMMi does not have a specific process area dedicated to test tools and/or test automation. Within TMMi, test tools are treated as a supporting resource (for practices) and are therefore part of the process area where they provide support, e.g., applying a test design tool is a supporting test practice within the process area Test Design and Execution at TMMi level 2, and applying a performance testing tool is a supporting test practice within the process area Non-functional Testing at TMMi level 3.


Level 1

At TMMi level 1, testing is a chaotic, undefined process and is often considered a part of debugging. The organization usually does not provide a stable environment to support the processes. Success in these organizations depends on the competence and heroics of the people in the organization and not the use of proven processes. Tests are developed in an ad hoc way after coding is completed. Testing and debugging are interleaved to get the bugs out of the system. The objective of testing at this level is to show that the software runs without major failures. Products are released without adequate visibility regarding quality and risks. In the field, the product often does not fulfil user needs, is not stable, and/or is too slow. Within testing there is a lack of resources, tools and well-educated staff. At TMMi level 1 there are no defined process areas. Maturity level 1 organizations are characterized by a tendency to over-commit, abandonment of processes in times of crisis, and an inability to repeat their successes. In addition, products tend not to be released on time, budgets are overrun, and delivered quality does not meet expectations.

Level 2

At TMMi level 2, testing becomes a managed process and is clearly separated from debugging. The process discipline reflected by maturity level 2 helps to ensure that proven practices are retained during times of stress. However, testing is still perceived by many stakeholders as being a project phase that follows coding. In the context of improving the test process, a company-wide or program-wide test strategy is established. Test plans are also developed. Within the test plan a test approach is defined, whereby the approach is based on the result of a product risk assessment. Risk management techniques are used to identify the product risks based on documented requirements. The test plan defines what testing is required, when, how and by whom. Commitments are established with stakeholders and revised as needed. Testing is monitored and controlled to ensure it is going according to plan and actions can be taken if deviations occur. The status of the work products and the delivery of testing services are visible to management. Test design techniques are applied for deriving and selecting test cases from specifications. However, testing may still start relatively late in the development lifecycle, e.g., during the design or even during the coding phase. In TMMi level 2 testing is multi-level: there are component, integration, system and acceptance test levels. For each identified test level there are specific testing objectives defined in the organization-wide or program-wide test strategy. The processes of testing and debugging are differentiated. The main objective of testing in a TMMi level 2 organization is to verify that the product satisfies the specified requirements. Many quality problems at this TMMi level occur because testing occurs late in the development lifecycle. Defects are propagated from the requirements and design into code. There are no formal review programs as yet to address this important issue. Post-code, execution-based testing is still considered by many stakeholders to be the primary testing activity.

Level 3

At TMMi level 3, testing is no longer confined to a phase that follows coding. It is fully integrated into the development lifecycle and the associated milestones. Test planning is done at an early project stage, e.g., during the requirements phase, and is documented in a master test plan. The development of a master test plan builds on the test planning skills and commitments acquired at TMMi level 2. The organization's set of standard test processes, which is the basis for maturity level 3, is established and improved over time. A test organization and a specific test training program exist, and testing is perceived as being a profession. Test process improvement is fully institutionalized as part of the test organization’s accepted practices. Organizations at level 3 understand the importance of reviews in quality control; a formal review program is implemented, although not yet fully linked to the dynamic testing process. Reviews take place across the lifecycle. Test professionals are involved in reviews of requirements specifications. Whereas the test designs at TMMi level 2 focus mainly on functionality testing, test designs and test techniques are expanded at level 3 to include non-functional testing, e.g., usability and/or reliability, depending on the business objectives. A critical distinction between TMMi maturity levels 2 and 3 is the scope of the standards, process descriptions, and procedures. At maturity level 2 these may be quite different in each specific instance, e.g., on a particular project. At maturity level 3 these are tailored from the organization’s set of standard processes to suit a particular project or organizational unit and are therefore more consistent, except for the differences allowed by the tailoring guidelines. This tailoring also enables valid comparisons between different implementations of a defined process and easier movement of staff between projects. Another critical distinction is that at maturity level 3, processes are typically described more rigorously than at maturity level 2. Consequently, at maturity level 3 the organization must revisit the maturity level 2 process areas.

Level 4

Achieving the goals of TMMi levels 2 and 3 has the benefit of putting in place a technical, managerial, and staffing infrastructure capable of thorough testing and providing support for test process improvement. With this infrastructure in place, testing can become a measured process to encourage further growth and accomplishment. In TMMi level 4 organizations, testing is a thoroughly defined, well-founded and measurable process. Testing is perceived as evaluation; it consists of all lifecycle activities concerned with checking products and related work products. An organization-wide test measurement program is put into place that can be used to evaluate the quality of the testing process, to assess productivity, and to monitor improvements. Measures are incorporated into the organization’s measurement repository to support fact-based decision making. A test measurement program also supports predictions relating to test performance and cost. With respect to product quality, the presence of a measurement program allows an organization to implement a product quality evaluation process by defining quality needs, quality attributes and quality metrics. (Work) products are evaluated using quantitative criteria for quality attributes such as reliability, usability and maintainability. Product quality is understood in quantitative terms and is managed to the defined objectives throughout the lifecycle. Reviews and inspections are considered to be part of the test process and are used to measure product quality early in the lifecycle and to formally control quality gates. Peer reviews as a defect detection technique are transformed into a product quality measurement technique, in line with the process area Product Quality Evaluation. TMMi level 4 also covers establishing a coordinated test approach between peer reviews (static testing) and dynamic testing, and the use of peer review results and data to optimize the test approach, with the objective of making testing both more effective and more efficient. Peer reviews are now fully integrated with the dynamic testing process, e.g., as part of the test strategy, test plan and test approach.

Level 5

The achievement of all previous test improvement goals at levels 1 through 4 of the TMMi has created an organizational infrastructure for testing that supports a completely defined and measured process. At TMMi maturity level 5, an organization is capable of continually improving its processes based on a quantitative understanding of statistically controlled processes. Improving test process performance is carried out through incremental and innovative process and technological improvements. The testing methods and techniques are constantly being optimized and there is a continuous focus on fine-tuning and process improvement. An optimizing test process, as defined by the TMMi, is one that is:

• managed, defined, measured, efficient and effective
• statistically controlled and predictable
• focused on defect prevention
• supported by automation as much as is deemed an effective use of resources
• able to support technology transfer from the industry to the organization
• able to support re-use of test assets
• focused on process change to achieve continuous improvement

To support the continuous improvement of the test process infrastructure, and to identify, plan and implement test improvements, a permanent test process improvement group is formally established and is staffed by members who have received specialized training to increase the level of their skills and knowledge required for the success of the group. In many organizations this group is called a Test Process Group. Support for a Test Process Group formally begins at TMMi level 3 when the test organization is introduced. At TMMi levels 4 and 5, the responsibilities grow as more high-level practices are introduced, e.g., identifying reusable test (process) assets and developing and maintaining the test (process) asset library. The Defect Prevention process area is established to identify and analyze common causes of defects across the development lifecycle and define actions to prevent similar defects from occurring in the future. Outliers to test process performance, identified as part of process quality control, are analyzed to address their causes as part of Defect Prevention. The test process is now statistically managed by means of the Quality Control process area. Statistical sampling, measurements of confidence levels, trustworthiness, and reliability drive the test process. The test process is characterized by sampling-based quality measurements. At TMMi level 5, the Test Process Optimization process area introduces mechanisms to fine-tune and continuously improve testing. There is an established procedure to identify process enhancements as well as to select and evaluate new testing technologies. Tools support the test process as much as is effective during test design, test execution, regression testing, test case management, defect collection and analysis, etc. Process and testware reuse across the organization is also common practice and is supported by a test (process) asset library. The three TMMi level 5 process areas, Defect Prevention, Quality Control and Test Process Optimization, all provide support for continuous process improvement. In fact, the three process areas are highly interrelated. For example, Defect Prevention supports Quality Control, e.g., by analyzing outliers to process performance and by implementing practices for defect causal analysis and prevention of defect re-occurrence.
Quality Control contributes to Test Process Optimization, and Test Process Optimization supports both Defect Prevention and Quality Control, for example by implementing the test improvement proposals. All of these process areas are, in turn, supported by the practices that were acquired when the lower-level process areas were implemented. At TMMi level 5, testing is a process with the objective of preventing defects.

Structure of the TMMi

The structure of the TMMi is largely based on the structure of the CMMI. This is a major benefit because many people and organizations are already familiar with the CMMI structure. The CMMI structure makes a clear distinction between components that are required (goals) and components that are recommended (specific practices, example work products, etc.) to implement. This aspect is also included in the TMMi. In this chapter, the components and structure of the TMMi are described, together with the support provided by the CMMI to a TMMi implementation.

Required, Expected and Informative Components

The various components are grouped into three categories: required, expected and informative.

Required Components

Required components describe what an organization must achieve to satisfy a process area. This achievement must be visibly implemented in an organization's processes. The required components in TMMi are the specific and generic goals. Goal satisfaction is used in assessments as the basis for deciding if a process area has been achieved and satisfied.

Expected Components

Expected components describe what an organization will typically implement to achieve a required component. Expected components guide those who implement improvements or perform assessments. Expected components include both specific and generic practices. Either the practices as described or acceptable alternatives to the practices must be present in the planned and implemented processes of the organization, before goals can be considered satisfied.

Informative Components

Informative components provide details that help organizations get started in thinking about how to approach the required and expected components. Sub-practices, example work products, notes, examples, and references are all informative model components.

Components of the TMMi

The required and expected components of the TMMi model, and the relationships between them, are summarized in figure 2.

Figure 2: TMMi structure and components (source: TMMi syllabus)

The following sections provide a description of the components. Note that the TMMi also provides a specific glossary of terms. The terms used in the glossary are largely re-used from the international test terminology standard developed by the International Software Testing Qualifications Board (ISTQB): Standard glossary of terms used in Software Testing [ISTQB].

Maturity Levels

A maturity level within the TMMi can be regarded as a degree of organizational test process quality. It is defined as an evolutionary plateau of test process improvement. Each level progressively develops an important part of the organization's test processes. There are five maturity levels within the TMMi. Each maturity level defines what to implement in order to achieve the given level. The higher the maturity level the organization achieves, the more mature the test process of the organization is. To reach a particular maturity level, an organization must satisfy all of the appropriate goals (both specific and generic) of the process areas at the specific level and also those at earlier maturity levels. Note that all organizations possess a minimum of TMMi level 1, as this level does not contain any goals that must be satisfied.

Process Areas

As stated, with the exception of level 1, each maturity level consists of several process areas that indicate where an organization should focus to improve its test process. Process areas identify the issues that must be addressed to achieve a maturity level. Each process area identifies a cluster of test-related activities. When the practices are all performed, a significant improvement in activities related to that area will be made. In the TMMi, only those process areas that are considered to be key determinants of test process capability are identified. All process areas of the maturity level and the lower maturity levels must be satisfied for a maturity level to be considered achieved. For example, if an organization is at TMMi level 3, it has satisfied all of the process areas at both TMMi level 2 and TMMi level 3.

Specific Goals

A specific goal describes the unique characteristic that must be present to satisfy the process area. A specific goal is a required model component and is used in assessments to help determine whether a process area is satisfied.

Generic Goals

Generic goals appear near the end of a process area and are called 'generic' because the same goal statement appears in all process areas. A generic goal describes the characteristics that must be present to institutionalize the processes that implement a process area. A generic goal is a required model component and is used in assessments to help determine whether a process area is satisfied.

Specific Practices

A specific practice is the description of an activity that is considered important in achieving the associated specific goal. The specific practice describes the activities expected to result in achievement of the specific goals of a process area. A specific practice is an expected model component.

Sub-practices

A sub-practice is a detailed description that provides guidance for interpreting and implementing a specific practice. Sub-practices may be worded as if prescriptive, but are actually an informative component meant only to provide ideas that may be useful for test process improvement.

  • PA 2.1 Test Policy and Strategy

    Purpose
    The purpose of the Test Policy and Strategy process area is to develop and establish a test policy, and an organization-wide or program-wide test strategy in which the test levels are unambiguously defined. To measure test performance, test performance indicators are introduced.
    Notes
    When an organization wants to improve its test process, it should first clearly define a test policy. The test policy defines the organization's overall test objectives, goals and strategic views regarding testing. It is important for the test policy to be aligned with the overall business (quality) policy of the organization. A test policy is necessary to attain a common view of testing and its objectives between all stakeholders within an organization. This common view is required to align test (process improvement) activities throughout the organization. The test policy should address testing activities for both new development and maintenance projects. Within the test policy the objectives for test process improvement should be stated. These objectives will subsequently be translated into a set of key test performance indicators. The test policy and the accompanying performance indicators provide a clear direction, and a means to communicate expected and achieved levels of test performance. The performance indicators must show the value of testing and test process improvement to the stakeholders. The test performance indicators will provide a quantitative indication of whether the organization is improving and achieving the defined set of test (improvement) goals.
    Based upon the test policy a test strategy will be defined. The test strategy covers the generic test requirements for an organization or program (one or more projects). The test strategy addresses the generic product risks and presents a process for mitigating those risks in accordance with the testing policy. Preparation of the test strategy starts by performing a generic product risk assessment analyzing the products being developed within a program or organization.
    The test strategy serves as a starting point for the testing activities within projects. The projects are set up in accordance with the organization-wide or program-wide test strategy. A typical test strategy will include a description of the test levels that are to be applied, for example: unit, integration, system and acceptance test. For each test level, at a minimum, the objectives, responsibilities, main tasks and entry/exit criteria are defined. When a test strategy is defined and followed, less overlap between the test levels is likely to occur, leading to a more efficient test process. Also, since the test objectives and approach of the various levels are aligned, fewer holes are likely to remain, leading to a more effective test process.
    Note that test policy and test strategy modification is usually required as an organization's test process evolves and moves up the levels of the TMMi.
    Scope
    The process area Test Policy and Strategy involves the definition and deployment of a test policy and test strategy at an organizational level. Within the test strategy, test levels are identified. For each test level, at a minimum, test objectives, responsibilities, main tasks and entry/exit criteria are defined. To measure test performance and the accomplishment of test (improvement) objectives, test performance indicators are defined and implemented.

    1. Establish a Test Policy

      A test policy, aligned with the business (quality) policy, is established and agreed upon by the stakeholders.

      1. Define test goals
      2. Define and maintain test goals based upon business needs and objectives.

        Example work products:

        1. Test goals

        Sub-practices :

        1. Study business needs and objectives
        2. Examples of business needs and objectives to be studied include the following:

          • Mission statement
          • Business and user needs regarding the products
          • Business drivers
          • Main goals of a quality program
          • Business (quality) policy
          • Type of business, e.g., risk level of products being developed
        3. Provide feedback for clarifying business needs and objectives as necessary
        4. Define test goals traceable to business needs and objectives
        5. Examples of test goals include the following:

          • Validate products for "fit-for-use"
          • Prevent defects from occurring in operation
          • Verify compliance to external standards
          • Provide visibility regarding product quality
          • Shorten test execution lead-time
        6. Review the test goals with stakeholders
        7. Revisit and revise the test goals as appropriate, e.g., on a yearly basis
      3. Define test policy
      4. A test policy, aligned with the business (quality) policy, is defined based on the test goals and agreed upon by the stakeholders.

        Example work products:

        1. Test policy

        Sub-practices :

        1. Define the test policy based on the defined test goals
        2. Examples of typical statements to be part of a test policy include the following:

          • A definition of testing
          • A definition of debugging (fault localization and repair)
          • Basic views regarding testing and the testing profession
          • The objectives and added value of testing
          • The quality levels to be achieved
          • The level of independence of the test organization
          • A high level test process definition
          • The key responsibilities of testing
          • The organizational approach to and objectives of test process improvement
        3. Clearly separate testing from debugging within the test policy
        4. Review the test policy with stakeholders
        5. Define and establish ownership for test policy
        6. Revisit and revise the test policy as appropriate, e.g., on a yearly basis
      5. Distribute the test policy to stakeholders
      6. The test policy and test goals are presented and explained to stakeholders inside and outside testing.

        Example work products:

        1. Deployment plan
        2. Test policy presentation
        3. Examples of distribution mechanisms include the following:

          • Documenting it in a handbook (quality system)
          • Presenting in project and/or departmental meetings
          • Referencing it via posters on the wall
          • Making it part of the departmental introduction program
          • Providing access to it on a central web portal
    2. Establish a Test Strategy

      An organization-wide or program-wide test strategy that identifies and defines the test levels to be performed is established and deployed.

      1. Perform a generic product risk assessment
      2. A generic product risk assessment is performed to identify the typical critical areas for testing.

        Example work products:

        1. Generic product risk list, with a category and priority assigned to each risk

        Sub-practices :

        1. Identify and select stakeholders that need to contribute to the generic risk assessment
        2. Identify generic product risks using input from stakeholders
        3. Document the context and potential consequences of the generic product risk
        4. Identify the relevant stakeholders associated with each generic product risk
        5. Analyze the identified generic product risks using the predefined parameters, e.g., likelihood and impact
        6. Categorize and group generic product risks according to the defined risk categories
        7. Prioritize the generic product risks for mitigation
        8. Review and obtain agreement with stakeholders on the completeness, category and priority level of the generic product risks
        9. Revise the generic product risks as appropriate

        Note that product risk categories and parameters as defined in the Test Planning process area (SP 1.1 Define product risk categories and parameters) are largely re-used within this specific practice. Refer to SG 1 Perform a Product Risk Assessment from the process area Test Planning for more details on the practices for performing a product risk assessment.

      3. Define test strategy
      4. The test strategy is defined, identifying and defining the test levels. For each level, the objectives, responsibilities, main tasks, entry/exit criteria and so forth are defined.

        Example work products:

        1. Test strategy

        Sub-practices :

        1. Study test policy and goals
        2. Provide feedback for clarifying test policy and goals as necessary
        3. Define the test strategy providing clear linkage to the defined test policy and goals
        4. Examples of topics to be addressed as part of a test strategy include the following:

          • Generic risks of the products being developed
          • Overall test model (V-model, incremental lifecycle) to be employed as a way to mitigate the risks
          • Test levels (e.g., unit, integration, system and acceptance test)
          • Objectives, responsibilities and main tasks at each test level, for example:
            • For unit testing
              • Verifying that the unit operates as specified in the unit design
              • Achieving a certain level of code coverage
            • For integration testing
              • Verifying that the units together operate as specified in the global design
              • Verifying that the interfaces operate as specified in the interface specification
            • For system testing
              • Verifying that the system operates as specified in the requirements specification
              • Achieving a certain level of system requirements coverage
            • For acceptance testing
              • Verifying that the system satisfies defined acceptance criteria
              • Validating whether the system is 'fit for use'
              • Achieving a certain level of user requirements coverage
          • Test case design techniques to be used at each test level
          • Test types to be carried out at each test level
          • Entry and exit criteria for each test level
          • Standards that must be adhered to
          • Level of independence of testing
          • Environment in which the tests will be executed
          • Approach to automation at each test level
          • Approach to regression testing
        5. Review the test strategy with stakeholders
        6. Define and establish ownership for test strategy
        7. Revisit and revise the test strategy as appropriate, e.g., on a yearly basis

        Note that the test strategy will serve as a starting point for testing to be performed in a project. Each project can tailor the overall organizational strategy to its specific project needs. Any areas of noncompliance shall be clearly documented in the project’s test plan.

      5. Distribute the test strategy to stakeholders
      6. The test strategy is presented to and discussed with stakeholders inside and outside testing.

        Example work products:

        1. Deployment plan
        2. Test strategy presentation
        3. Examples of distribution mechanisms include the following:

          • Documenting it in a handbook and/or quality system
          • Presenting in project and/or departmental meetings
          • Referencing it via posters on the wall
          • Making it part of the departmental introduction program
          • Providing access to it on a central web portal
    3. Establish Test Performance Indicators

      A set of goal-oriented test process performance indicators to measure the quality of the test process is established and deployed.

      1. Define test performance indicators
      2. The test performance indicators are defined based upon the test policy and goals, including a procedure for data collection, storage and analysis.

        Example work products:

        1. Test performance indicators
        2. Data collection, storage, analysis and reporting procedures

        Sub-practices :

        1. Study test policy and goals, e.g., the objectives for test process improvement
        2. Provide feedback for clarifying test policy and goals as necessary
        3. Define the test performance indicators traceable to the test policy and goals
        4. Examples of test performance indicators include the following:

          • Test effort and cost
          • Test lead time
          • Number of defects found
          • Defect detection percentage
          • Test coverage
          • Test maturity level

          In general, the defined test performance indicators should relate to the business value of testing. The defect detection percentage, for example, is illustrated in the sketch following these sub-practices.

        5. Review the performance indicators with stakeholders
        6. Define and establish ownership for test performance indicators
        7. Specify how performance indicators will be obtained and stored
        8. Specify how performance indicators will be analyzed and reported
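
        The defect detection percentage mentioned in the examples above is typically calculated as the share of all known defects that were found by testing rather than after release. A minimal sketch, assuming the defect counts are available from the defect management system:

        ```python
        def defect_detection_percentage(found_in_testing: int, found_after_release: int) -> float:
            """DDP = defects found by testing / all defects found so far (testing + field), as a percentage."""
            total = found_in_testing + found_after_release
            if total == 0:
                return 0.0
            return 100.0 * found_in_testing / total

        # Example: 90 defects found during testing, 10 reported from the field after release.
        print(defect_detection_percentage(90, 10))  # -> 90.0
        ```
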
      3. Deploy test performance indicators
      4. Deploy the test performance indicators and provide measurement results for the identified test performance indicators to stakeholders.

        Example work products:

        1. Test performance indicator data
        2. Reports providing information regarding the test performance indicators

        Sub-practices :

        1. Obtain specified performance indicator data
        2. Analyze and interpret performance indicator data
        3. Manage and store performance indicator data and analysis results
        4. Report the performance indicator data to stakeholders on a periodic basis
        5. Assist stakeholders in understanding the results
        6. Examples of actions to assist in understanding the results include the following:

          • Discussing the results with relevant stakeholders
          • Providing contextual information that gives background and explanation
  • PA 2.2 Test Planning

    Purpose
    The purpose of Test Planning is to define a test approach based on the identified risks and the defined test strategy, and to establish and maintain well-founded plans for performing and managing the testing activities.
    Introductory Notes
    After confirmation of the test assignment, an overall study is carried out regarding the product to be tested, the project organization, the requirements, and the development process. As part of Test Planning, the test approach is defined based on the outcome of a product risk assessment and the defined test strategy. Depending on the priority and category of risks, it is decided which requirements of the product will be tested, to what degree, how and when. The objective is to provide the best possible coverage to the parts of the system with the highest risk. Based on the test approach, the work to be done is estimated, and as a result the proposed test approach is accompanied by clear cost information. The product risks, test approach and estimates are defined in close co-operation with the stakeholders rather than by the testing team alone. The test plan will comply with the test strategy, or explain any non-compliances.
    Within Test Planning, the test deliverables that are to be provided are identified, the resources that are needed are determined, and aspects relating to infrastructure are defined. In addition, test project risks regarding testing are identified. As a result the test plan will define what testing is required, when, how and by whom. Finally, the test plan document is developed and agreed to by the stakeholders. The test plan provides the basis for performing and controlling the testing activities. The test plan will usually need to be revised, using a formal change control process, as the project progresses to address changes in the requirements and commitments, inaccurate estimates, corrective actions, and (test) process changes.
    Scope
    The process area Test Planning involves performing a product risk assessment on the test object and defining a differentiated test approach based on the risks identified. It also involves developing estimates for the testing to be performed, establishing necessary commitments, and defining and maintaining the plan to guide and manage the testing. A test plan is required for each identified test level. At TMMi level 2 test plans are typically developed per test level. At TMMi level 3, within the process area Test Lifecycle and Integration, the master test plan is introduced as one of its goals.

    1. Perform a Product Risk Assessment

      A product risk assessment is performed to identify the critical areas for testing.

      1. Define product risk categories and parameters
      2. Product risk categories and parameters are defined that will be used during the product risk assessment.

        Example work products:

        1. Product risk categories lists
        2. Product risk evaluation and prioritization criteria

        Sub-practices :

        1. Determine product risk categories
        2. A reason for identifying product risk categories is to help in the future consolidation of the test tasks into test types in the test plans.

          Examples of product risk categories include the following:

          • Functional risks
          • Architectural risks
          • Non-functional risks, e.g., usability, efficiency, portability, maintainability, reliability
          • Change related risks, e.g., regression
        3. Define consistent criteria for evaluating and quantifying the product risk likelihood and impact levels
        4. Define thresholds for each product risk level

        Risk level is defined as the importance of a risk, determined by its characteristics (impact and likelihood). For each risk level, thresholds can be established to determine the acceptability or unacceptability of a product risk, to prioritize product risks, or to set a trigger for management action.
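
        TMMi does not prescribe how likelihood and impact are to be scored or combined. The sketch below shows one common scheme, multiplying ordinal likelihood and impact scores and mapping the result onto risk levels via thresholds; the scales and threshold values are illustrative assumptions, not part of the model.

        ```python
        # Illustrative scoring scheme; TMMi does not mandate these scales or thresholds.
        LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
        IMPACT = {"minor": 1, "moderate": 2, "critical": 3}

        # Thresholds mapping the likelihood x impact product onto risk levels.
        THRESHOLDS = [(6, "high"), (3, "medium"), (0, "low")]

        def risk_level(likelihood: str, impact: str) -> str:
            """Combine likelihood and impact scores and map the result onto a risk level."""
            score = LIKELIHOOD[likelihood] * IMPACT[impact]
            for minimum, level in THRESHOLDS:
                if score >= minimum:
                    return level
            return "low"

        print(risk_level("high", "critical"))  # -> "high"
        ```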

      3. Identify product risks
      4. Product risks are identified and documented.

        Example work products:

        1. Identified product risks

        Sub-practices :

        1. Identify and select stakeholders that need to contribute to the risk assessment
        2. Identify product risks using input from stakeholders and requirements documents
        3. Examples of product risk identification techniques include the following:

          • Risk workshops
          • Brainstorming
          • Expert interviews
          • Checklists
          • Lessons learned
        4. Document the background and potential consequences of the risk
        5. Identify the relevant stakeholders associated with each risk
        6. Review the identified product risks against the test assignment
      5. Analyze product risks
      6. Product risks are evaluated, categorized and prioritized using the predefined product risk categories and parameters.

        Example work products:

        1. Product risk list, with a category and priority assigned to each risk

        Sub-practices :

        1. Analyze the identified product risks using the predefined parameters, e.g., likelihood and impact
        2. Categorize and group product risks according to the defined risk categories
        3. Prioritize the product risks for mitigation
        4. Establish horizontal traceability between product risks and requirements to ensure that the source of product risks is documented
        5. Generate a requirements / product risks traceability matrix (a minimal sketch follows these sub-practices)
        6. Review and obtain agreement with stakeholders on the completeness, category and priority level of the product risks
        7. Revise the product risks as appropriate
        8. Examples of when product risks may need to be revised include the following:

          • New or changing requirements
          • Change of the software development approach
          • Lessons learned on quality issues in the project
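
        The horizontal traceability called for above can be recorded as a simple requirements / product risk matrix. A minimal sketch, using hypothetical requirement and risk identifiers:

        ```python
        # Illustrative traceability data; all identifiers are hypothetical.
        risk_to_requirements = {
            "R-01 incorrect interest calculation": ["REQ-004", "REQ-007"],
            "R-02 poor performance under load": ["REQ-012"],
        }

        # Build the inverse view: which product risks does each requirement give rise to?
        requirement_to_risks = {}
        for risk, requirements in risk_to_requirements.items():
            for req in requirements:
                requirement_to_risks.setdefault(req, []).append(risk)

        # A requirement without an associated risk (or vice versa) may indicate a gap in the assessment.
        print(requirement_to_risks["REQ-004"])  # -> ['R-01 incorrect interest calculation']
        ```
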
    2. Establish a Test Approach

      A test approach, based on identified product risks, is established and agreed upon.

      1. Identify items and features to be tested
      2. The items and features to be tested, and not to be tested, are identified based on the product risks.

        Example work products:

        1. List of items to be tested and not to be tested
        2. List of features to be tested and not to be tested

        Sub-practices :

        1. Break down the prioritized product risks into items to be tested and not to be tested
        2. Document the risk level and source documentation (test basis) for each identified item to be tested
        3. Break down the prioritized product risks into features to be tested and not to be tested
        4. Document the risk level and source documentation (test basis) for each identified feature to be tested
        5. Review with stakeholders the list of items and features to be tested and not to be tested
      3. Define the test approach
      4. The test approach is defined to mitigate the identified and prioritized product risks.

        Example work products:

        1. The approach, e.g., selected set of test design techniques, should be described in sufficient detail to support identification of major test tasks and estimation of the time required to do each one.

        Sub-practices :

        1. Select the test design techniques to be used. Multiple test design techniques are defined to provide adequate test coverage based on the defined product risks
        2. Criteria for selecting a test design technique include the following:

          • Type of system
          • Regulatory standards
          • Customer or contractual requirements
          • Level of risk
          • Type of risk
          • Documentation available
          • Knowledge of the testers
          • Time and budget
          • Development lifecycle
          • Previous experience with types of defects found
        3. Define the approach to review test work products
        4. Define the approach for re-testing
        5. Examples of approaches for re-testing include the following:

          • For all high risk test items a full re-test will take place re-executing the full test procedure
          • For all low risk test items the incidents are re-tested in isolation
        6. Define the approach for regression testing
        7. Examples of elements of a regression test approach include the following:

          • Focus of the regression testing, e.g., which items and/or features
          • Methods to select the test cases to be executed
          • Type of testing to be performed
          • Manual testing or using test automation tools
        8. Identify the supporting test tools to be used
        9. Identify significant constraints regarding the test approach
        10. Examples of constraints regarding the test approach include the following:

          • Test resource availability
          • Test environment features
          • Project deadlines
        11. Align the test approach with the defined organization-wide or program-wide test strategy
        12. Identify any non-compliance to the test strategy and its rationale
        13. Review the test approach with stakeholders
        14. Revise the test approach as appropriate
        15. Examples of when the test approach may need to be revised include the following:

          • New or changed priority level of product risks
          • Lessons learned after applying the test approach in the project
      5. Define entry criteria
      6. The entry criteria for testing are defined to prevent testing from starting under conditions that do not allow for a thorough test process.

        Example work products:

        1. Entry criteria per identified test level

        Sub-practices :

        1. Define a set of entry criteria related to the test process
        2. Examples of entry criteria related to the test process include the following:

          • The availability of a test summary report from the previous test level
          • The availability of a test environment according to requirements
          • The availability of documentation, e.g., test release notes, user manual, installation manual
        3. Define a set of entry criteria related to product quality
        4. Examples of entry criteria related to product quality include the following:

          • A successful intake test
          • No outstanding defects (of priority level X)
          • All outstanding defects have been analyzed
        5. Review the entry criteria with stakeholders, especially those responsible for meeting the entry criteria
      7. Define exit criteria
      8. The exit criteria for testing are defined to determine when testing is complete.

        Example work products:

        1. Exit criteria per identified test level

        Sub-practices :

        1. Define a set of exit criteria related to the test process
        2. Examples of exit criteria related to the test process include the following:

          • Percentage of tests prepared that have been executed (successfully)
          • Percentage of coverage for each test item, e.g., code coverage or requirements coverage
          • The availability of an approved test summary report
        3. Define a set of exit criteria related to product quality
        4. Examples of exit criteria related to product quality include the following:

          • All high priority product risks mitigated
          • Defect detection rate falls below a threshold
          • Number of outstanding defects (by priority level)
          • Percentage of software modules supported by an inspected design
        5. Review the exit criteria with stakeholders

        Note that the exit criteria of a test level should be aligned with the entry criteria of the subsequent test level. The sketch below illustrates how a set of exit criteria might be evaluated.
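
        As an illustration of how the exit criteria listed above might be checked at the end of a test level, the sketch below evaluates a few process-related and product-related criteria; the thresholds are illustrative assumptions that would normally be defined in the test plan.

        ```python
        # Thresholds are illustrative; in practice they are defined in the test plan.
        def exit_criteria_met(executed_pct: float,
                              requirements_coverage_pct: float,
                              open_high_priority_defects: int,
                              unmitigated_high_risks: int) -> bool:
            """Return True when all defined exit criteria for the test level are satisfied."""
            return (executed_pct >= 100.0 and
                    requirements_coverage_pct >= 90.0 and
                    open_high_priority_defects == 0 and
                    unmitigated_high_risks == 0)

        print(exit_criteria_met(100.0, 95.0, 0, 0))  # -> True: this test level can be concluded
        ```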

      9. Define suspension and resumption criteria
      10. Criteria are defined that will be used to suspend and resume all or a portion of the test tasks on the test items and/or features.

        Example work products:

        1. Suspension criteria
        2. Resumption criteria

        Sub-practices :

        1. Specify the suspension criteria used to suspend all or a portion of the test tasks on the test items and/or features
        2. Examples of suspension criteria include the following:

          • Number of critical defects
          • Number of non-reproducible defects
          • Issues with test execution due to the test environments
        3. Specify the resumption criteria used to specify the test tasks that must be repeated when the criteria that caused the suspension are removed
    3. Establish Test Estimates

      Well-founded test estimates are established and maintained for use in discussing the test approach with stakeholders and in planning the testing activities.

      1. Establish a top-level work breakdown structure
      2. Establish a top-level work breakdown structure (WBS) to clearly define the scope of the testing to be performed and, thereby, the scope for the test estimate.

        Example work products:

        1. Test work products list
        2. Test tasks to be performed
        3. Work breakdown structure

        Sub-practices :

        1. Identify test work products to be developed based on the defined test approach
        2. Identify test work products that will be acquired externally
        3. Identify test work products that will be re-used
        4. Identify test tasks to be performed related to the test work products
        5. Identify indirect test tasks to be performed such as test management, meetings, configuration management, etc.

        Note that the WBS should also take into account tasks for implementing the test environment requirements. Refer to the Test Environment process area for more information on this topic.

      3. Define test lifecycle
      4. Define the test lifecycle phases on which to scope the planning effort.

        Example work products:

        1. Test lifecycle phases definition
        2. Test milestones

        Sub-practices :

        1. Define test lifecycle phases. At a minimum a test planning, test preparation and test execution phase are distinguished
        2. Schedule the test preparation phase such that it starts immediately upon the completion of the test basis
        3. Align the top-level work breakdown structure with the defined test lifecycle
        4. Identify major milestones for each test lifecycle phase

        Note that understanding the lifecycle is crucial in determining the scope of the test planning effort and the timing of the initial planning, as well as the timing and criteria (at critical milestones) for replanning.

      5. Determine estimates for test effort and cost
      6. Estimate the test effort and cost for the test work products to be created and testing tasks to be performed based on the estimation rationale.

        Example work products:

        1. Attribute estimates of test work products and test tasks
        2. Test effort estimates
        3. Test cost estimates

        Sub-practices :

        1. Determine and maintain estimates of the attributes of the test work products and test tasks
        2. Examples of attributes used to estimate test work products and test tasks include the following:

          • Size, e.g., number of test cases, number of pages, number of test points, volume of test data, number of requirements
          • Complexity of related test item, e.g., cyclomatic number
          • Level of re-use
          • Priority level of related product risk

          Note that appropriate methods (e.g., validated models or historical data) should be used to determine the attributes of the test work products and test tasks that will be used to estimate the resource requirements.

        3. Study (technical) factors that can influence the test estimate
        4. Examples of factors that can influence the test estimate include the following:

          • Usage of test tools
          • Quality of earlier test levels
          • Quality of test basis
          • Development environment
          • Test environment
          • Availability of re-usable testware from previous projects
          • Knowledge and skill level of testers
        5. Select models and/or historical data that will be used to transform the attributes of the test work products and test tasks into estimates of the effort and cost
        6. Examples of models that can be used for test estimation include the following:

          • Test Point Analysis [TMap]
          • Three-point estimate (illustrated in the sketch after these sub-practices)
          • Wide Band Delphi [Veenendaal]
          • Ratio of development effort versus test effort
        7. Include supporting infrastructure needs when estimating test effort and cost
        8. Examples of supporting infrastructure needs include the following:

          • Test environment
          • Critical computer resources
          • Office environment
          • Test tools
        9. Estimate test effort and cost using models and/or historical data
        10. Document assumptions made in deriving the estimates
        11. Record the test estimation data, including the associated information needed to reconstruct the estimates
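
        Of the estimation models listed above, the three-point estimate is the simplest to illustrate: a weighted average of optimistic, most likely and pessimistic effort figures. The PERT-style weighting shown below is one common convention rather than something mandated by TMMi, and the figures are illustrative.

        ```python
        def three_point_estimate(optimistic: float, most_likely: float, pessimistic: float) -> float:
            """PERT-style weighted average of three effort estimates (e.g., in person-days)."""
            return (optimistic + 4 * most_likely + pessimistic) / 6

        # Example: test execution estimated at 20 / 30 / 58 person-days.
        print(three_point_estimate(20, 30, 58))  # -> 33.0
        ```
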
    4. Develop a Test Plan

      A test plan is established and maintained as the basis for managing testing and communication to stakeholders.

      1. Establish the test schedule
      2. The test schedule, with predefined stages of manageable size, is established and maintained based on the developed test estimate and defined test lifecycle.

        Example work products:

        1. Test schedule

        Sub-practices :

        1. Identify test scheduling constraints such as task duration, resources, and inputs needed
        2. Identify test task dependencies (see the dependency-ordering sketch after these sub-practices)
        3. Define the test schedule (timing of testing activities, test lifecycle phases and test milestones)
        4. Document assumptions made in defining the test schedule
        5. Establish corrective action criteria for determining what constitutes a significant deviation from the test plan that may indicate a need for rescheduling
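
        The test task dependencies identified above can be treated as a small directed graph so that the schedule orders each task after the tasks it depends on. A minimal sketch using Python's standard graphlib module; the task names are illustrative.

        ```python
        from graphlib import TopologicalSorter

        # Illustrative dependencies: each task maps to the set of tasks it depends on.
        dependencies = {
            "test design": {"test planning"},
            "test environment setup": {"test planning"},
            "test execution": {"test design", "test environment setup"},
            "test summary report": {"test execution"},
        }

        # Order the tasks so that every task is scheduled after its prerequisites.
        print(list(TopologicalSorter(dependencies).static_order()))
        # e.g. ['test planning', 'test design', 'test environment setup', 'test execution', 'test summary report']
        ```
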
      3. Plan for test staffing
      4. A plan is created for the availability of the necessary test staff resources who have the required knowledge and skills to perform the testing.

        Example work products:

        1. Staffing requirements
        2. Inventory of skill needs
        3. Staffing and new hire plan
        4. Test training plan

        Sub-practices :

        1. Determine staffing requirements based on the work breakdown structure, test estimate and test schedule
        2. Identify knowledge and skills needed to perform the test tasks
        3. Assess the knowledge and skills available
        4. Select mechanisms for providing needed knowledge and skills
        5. Examples of mechanisms include the following:

          • In-house training
          • External training
          • Coaching
          • External skill acquisition
        6. Incorporate selected mechanisms into the test plan
      5. Plan stakeholder involvement
      6. A plan is created for the involvement of the identified stakeholders.
        Stakeholders are identified from all phases of the test lifecycle by identifying the types of people and functions needed during the testing activities. Stakeholders are also identified by their relevance and the degree of interaction for the specific testing activities. A two-dimensional matrix with stakeholders along one axis and testing activities along the other axis is convenient for accomplishing this identification; a minimal sketch of such a matrix follows the example work products.

        Example work products:

        1. Stakeholder involvement plan
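
        A minimal sketch of the two-dimensional stakeholder/activity matrix described above; the stakeholder names, activities and involvement levels are illustrative assumptions.

        ```python
        # Rows: stakeholders; columns: testing activities; cells: expected degree of involvement.
        activities = ["risk assessment", "test design", "test execution", "acceptance"]

        involvement = {
            "business analyst": {"risk assessment": "consulted", "acceptance": "responsible"},
            "developer": {"test design": "consulted", "test execution": "supports"},
            "test engineer": {a: "responsible" for a in ["risk assessment", "test design", "test execution"]},
        }

        for stakeholder, row in involvement.items():
            cells = [row.get(activity, "-") for activity in activities]
            print(f"{stakeholder:17s} " + " | ".join(f"{cell:11s}" for cell in cells))
        ```
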
      7. Identify test project risks
      8. The test project risks associated with testing are identified, analyzed and documented.

        Example work products:

        1. Identified test project risks
        2. Prioritized test project risk list
        3. Test project risk mitigation plans

        Sub-practices :

        1. Identify test project risks
        2. Examples of project risk identification techniques include the following:

          • Brainstorming
          • Expert interviews
          • Checklists
        3. Analyze the identified test project risks in terms of likelihood and impact
        4. Prioritize the analyzed test project risks
        5. Review and obtain agreement with stakeholders on the completeness and priority level of the documented test project risks
        6. Define contingencies and mitigation actions for the (high priority) test project risks
        7. Revise the test project risks as appropriate
        8. Examples of when test project risks may need to be revised include:

          • When new test project risks are identified
          • When the likelihood of a test project risk changes
          • When test project risks are retired
          • When testing circumstances change significantly
      9. Establish the test plan
      10. The test plan is established and maintained as a basis for managing testing and guiding the communication with the stakeholders.
        The results of previous practices are documented in an overall test plan, tying together the information in a logical manner.

        Example work products:

        1. Test plan
        2. Examples of elements of a test plan include the following [after IEEE 829]:

          • Test plan identifier
          • An overall introduction
          • Non-compliances with the test strategy and the rationale
          • Items to be tested (including priority level) and not to be tested
          • Features to be tested (including priority level) and not to be tested
          • Test approach (e.g., test design techniques)
          • Entry and exit criteria
          • Suspension and resumption criteria
          • Test milestones and work products
          • Test lifecycle and tasks
          • Environmental needs and requirements (including office environment)
          • Staffing and training needs
          • Stakeholder involvement
          • Test estimate
          • Test schedule
          • Test project risks and contingencies

        Refer to the Test Environment process area for information on environmental needs and requirements.

    5. Obtain Commitment to the Test Plan

      Commitments to the test plan are established and maintained.

      1. Review test plan
      2. Review the test plan (and possibly other plans that affect testing) to achieve a common understanding of the test commitments.

        Example work products:

        1. Test plan review log

        Sub-practices :

        1. Organize reviews with stakeholders to facilitate their understanding of the test commitments.
      3. Reconcile work and resource levels
      4. Adjust the test plan to reconcile available and estimated resources.

        Example work products:

        1. Revised test approach and corresponding estimation parameters
        2. Renegotiated test budgets
        3. Revised test schedules
        4. Revised product risk list
        5. Renegotiated stakeholder agreements

        Sub-practices :

        1. Discuss differences between estimates and available resources with stakeholders
        2. Reconcile any differences between estimates and available resources

        Note that reconciliation is typically accomplished by lowering or deferring technical performance, negotiating more resources, finding ways to increase productivity, changing the scope of the project such as removing features, outsourcing, adjusting staff skill mix, or revising the schedule.

      5. Obtain test plan commitments
      6. Obtain commitments from relevant stakeholders responsible for performing and supporting the execution of the test plan.

        Example work products:

        1. Documented requests for commitments
        2. Documented commitments

        Sub-practices :

        1. Identify needed support and negotiate commitments for that support with relevant stakeholders
        2. Note that the WBS can be used as a checklist for ensuring that commitments are obtained for all tasks. The plan for stakeholders' interaction should identify all parties from whom commitments should be obtained.

        3. Document all organizational commitments, both full and provisional
        4. Review internal commitments with senior management as appropriate
        5. Review external commitments with senior management as appropriate
  • PA 2.3 Test Monitoring and Control

    Purpose
    The purpose of Test Monitoring and Control is to provide an understanding of test progress and product quality so that appropriate corrective actions can be taken when test progress deviates significantly from plan or product quality deviates significantly from expectations.
    Introductory Notes
    The progress of testing and the quality of the products should both be monitored and controlled. The progress of the testing is monitored by comparing the status of actual test (work) products, tasks (including their attributes), effort, cost, and schedule to what is identified in the test plan. The quality of the product is monitored by means of indicators such as product risks mitigated, the number of defects found, number of open defects, and status against test exit criteria.
    Monitoring involves gathering the required (raw) data, e.g., from test logs and test incident reports, reviewing the raw data for validity and calculating the defined progress and product quality measures. Test summary reports should be written on a periodic and event-driven basis as a means to provide a common understanding of test progress and product quality. Since 'testing is the measurement of product quality' [Hetzel], the practices around product quality reporting are key to the success of this process area.
    Appropriate corrective actions should be taken when the test progress deviates from the plan or product quality deviates from expectations. These actions may require re-planning, which may include revising the original plan or additional mitigation activities based on the current plan. Corrective actions that influence the original committed plan should be agreed by the stakeholders.
    An essential part of test monitoring and control is test project risk management. Test project risk management is performed to identify and solve as early as possible major problems that undermine the test plan. When performing project risk management, it is also important to identify problems that are beyond the responsibility of testing. For instance, organizational budget cuts, delay of development work products or changed/added functionality can all significantly affect the test process. By building on the test project risks already documented in the test plan, test project risks are monitored and controlled and corrective actions are initiated as needed.
    Scope
    The process area Test Monitoring and Control involves monitoring the test progress and product quality against documented estimates, commitments, plans and expectations, reporting on test progress and product quality to stakeholders, taking control measures (e.g., corrective actions) when necessary, and managing the corrective actions to closure.

    1. Monitor Test Progress against Plan

      The actual progress and performance of testing is monitored and compared against the values in the test plan.

      1. Monitor test planning parameters
      2. Monitor the actual values of the test planning parameters against the test plan.

        Example work products:

        1. Records of test performance
        2. Records of significant deviations from plan

        Sub-practices:

        1. Monitor test progress against the test schedule
        2. Examples of progress monitoring typically include the following:

          • Periodically measuring the actual completion of test tasks, test (work) products and test milestones
          • Comparing actual completion of test tasks, test (work) products and test milestones against the test schedule documented in the test plan
          • Identifying significant deviations from the test schedule estimates in the test plan
        3. Monitor the test cost and expended test effort
        4. Examples of cost and effort monitoring typically include the following:

          • Periodically measuring the actual test costs and effort expended and staff assigned
          • Comparing actual test cost, effort and staffing to the estimates documented in the test plan
          • Identifying significant deviations from the test cost, effort and staffing in the test plan
        5. Monitor the attributes of the test work products and test tasks
        6. Refer to SP 3.3 Determine estimates of test effort and cost from the Test Planning process area for information about the attributes of test work products and test tasks.

          Examples of test work products and test task attributes monitoring typically include the following:

          • Periodically measuring the actual attributes of the test work products and test tasks, such as size or complexity
          • Comparing the actual attributes of the test work products and test tasks to the estimates documented in the test plan
          • Identifying significant deviations from the estimates in the test plan
        7. Monitor the knowledge and skills of test staff
        8. Examples of knowledge and skills monitoring typically include the following:

          • Periodically measuring the acquisition of knowledge and skills of test staff
          • Comparing actual training obtained to that documented in the test plan
        9. Document the significant deviations in the test planning parameters.
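
        To illustrate the monitoring sub-practices above, the sketch below compares actual effort, cost and task completion against planned values and flags deviations above a threshold. The figures and the 10% threshold are hypothetical assumptions; an organization would use the values and tolerances from its own test plan.

          # Illustrative sketch: flag significant deviations of actuals from the test plan.
          plan = {"effort_hours": 400, "cost": 30000, "tasks_completed": 20}
          actual = {"effort_hours": 460, "cost": 31000, "tasks_completed": 16}
          THRESHOLD = 0.10  # deviations larger than 10% are reported (assumed tolerance)

          def significant_deviations(plan, actual, threshold=THRESHOLD):
              """Yield (parameter, planned, actual, relative deviation) for large deviations."""
              for parameter, planned in plan.items():
                  deviation = (actual[parameter] - planned) / planned
                  if abs(deviation) > threshold:
                      yield parameter, planned, actual[parameter], deviation

          for parameter, planned, observed, deviation in significant_deviations(plan, actual):
              print(f"{parameter}: planned {planned}, actual {observed} ({deviation:+.0%})")
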
      3. Monitor test environment resources provided and used
      4. Monitor the test environment resources provided and used against those defined in the plan.

        Example work products:

        1. Records of test environment resources provided and used
        2. Records of significant deviations from plan

        Sub-practices:

        1. Monitor test environment resources provided against the plan
        2. Monitor the actual usage of the provided test environment resources against the plan
        3. Identify and document significant deviations from the estimates in the plan
      5. Monitor test commitments
      6. Monitor test commitments achieved against those identified in the test plan.

        Example work products:

        1. Records of commitment reviews

        Sub-practices:

        1. Regularly review commitments (both internal and external)
        2. Identify commitments that have not been satisfied or that are at significant risk of not being satisfied
        3. Document the results of the commitment reviews.
      7. Monitor test project risks
      8. Monitor test project risks against those identified in the test plan.

        Example work products:

        1. Updated test project risk list
        2. Records of project risk monitoring

        Sub-practices:

        1. Periodically review the test project risks in the context of the current status and circumstances
        2. Revise the documentation of the test project risks, as additional information becomes available, to incorporate any changes
        3. Communicate test project risk status to relevant stakeholders
      9. Monitor stakeholder involvement
      10. Monitor stakeholder involvement against expectations defined in the test plan.

        Once the stakeholders are identified and the extent of their involvement within testing is specified in the test plan, that involvement must be monitored to ensure that the appropriate interactions are occurring.

        Example work products:

        1. Records of stakeholder involvement

        Sub-practices:

        1. Periodically review the status of stakeholder involvement
        2. Identify and document significant issues and their impact
        3. Document the results of the stakeholder involvement status reviews
      11. Conduct test progress reviews
      12. Periodically review test progress, performance and issues.

        Progress reviews are held to keep stakeholders informed. They are often held both internally with test team members and externally with stakeholders outside testing, and are typically informal reviews conducted regularly, e.g., weekly, bi-weekly or monthly.

        Example work products:

        1. Test progress report
        2. Documented test progress review results, e.g. minutes of the progress meetings

        Sub-practices:

        1. Collect and analyze test progress monitoring measures
        2. Regularly communicate status on test progress and performance to stakeholders
        3. Examples of stakeholders typically include the following:

          • Project management
          • Business management
          • Test team members
        4. Regularly organize test progress review meetings with stakeholders
        5. Identify, document and discuss significant issues and deviations from the test plan
        6. Document change requests on test work products and major problems identified in test progress and performance
        7. Document the results of the reviews, e.g., decisions made and corrective actions defined
      13. Conduct test progress milestone reviews
      14. Review the accomplishments and progress of testing at selected test milestones.

        Test progress milestone reviews are planned during test planning and are typically formal reviews.

        Example work products:

        1. Test milestone report
        2. Documented milestone review results, e.g., minutes of the review meeting

        Sub-practices:

        1. Conduct test progress reviews at meaningful points in the test schedule, such as the completion of selected stages, with relevant stakeholders
        2. Communicate accomplishments and test progress and performance status to stakeholders
        3. Review the commitments, plan, status, and project risks of testing
        4. Review the test environment resources
        5. Identify, document and discuss significant test progress issues and their impacts
        6. Document the results of the reviews, action items, and decisions
        7. Update the test plan to reflect accomplishments and latest status
    2. Monitor Product Quality against Plan and Expectations

      Actual product quality is monitored against the quality measurements defined in the plan and the quality expectations, e.g., of the customer/user.

      1. Check against entry criteria
      2. At the start of the test execution phase, check the status against the entry criteria identified in the test plan.

        Example work products:

        1. Records of entry check

        Sub-practices:

        1. Check the status against the entry criteria identified in the test plan
        2. Identify and document significant deviations in compliance to entry criteria and initiate corrective action
      3. Monitor defects
      4. Monitor measures of defects found during testing against expectations.

        Example work products:

        1. Records of defect monitoring

        Sub-practices:

        1. Monitor measures on defects found and status against expectations
        2. Examples of useful defect measures include the following [Burnstein]:

          • Total number of defects (for a component, subsystem, system) outstanding at each defined priority level
          • Total number of defects found during the most recent test run at each defined priority level
          • Number of defects resolved/unresolved (for all levels of test)
          • Number of defects found for each given type
          • Number of defects causing failures of severity level greater than X
          • Number of defects/KLOC (“incident volume”)
          • Actual number versus estimated number of defects (based on historical data)
        3. Identify and document significant deviations from expectations for measures regarding defects found
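
        As an illustration of the defect measures listed above, the sketch below computes the number of open defects per priority level and the defects/KLOC figure from a hypothetical defect list and an assumed code size.

          # Illustrative computation of a few defect measures (hypothetical data).
          from collections import Counter

          defects = [
              {"id": 1, "priority": "high", "status": "open"},
              {"id": 2, "priority": "medium", "status": "closed"},
              {"id": 3, "priority": "high", "status": "open"},
              {"id": 4, "priority": "low", "status": "open"},
          ]
          KLOC = 12.5  # size of the test object in thousands of lines of code (assumed)

          open_by_priority = Counter(d["priority"] for d in defects if d["status"] == "open")
          defect_density = len(defects) / KLOC  # "incident volume": defects per KLOC

          print("Open defects per priority:", dict(open_by_priority))
          print(f"Defects/KLOC: {defect_density:.2f}")
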
      5. Monitor product risks
      6. Monitor product risks against those identified in the test plan.

        Example work products:

        1. Updated product risk list
        2. Records of product risk monitoring

        Sub-practices:

        1. Periodically review the product risks in the context of the current status and circumstances with a selected set of stakeholders
        2. Monitor changes and additions to the requirements to identify new or changed product risks
        3. Revise the documentation of the product risks as additional information becomes available to incorporate the change on likelihood, impact and/or priority status
        4. Monitor the (number of) product risks mitigated by testing against the mitigation stated in the plan
        5. Communicate product risk status to relevant stakeholders
      7. Monitor exit criteria
      8. Monitor the status of the exit criteria against those identified in the test plan.

        Example work products:

        1. Records of exit criteria monitoring

        Sub-practices:

        1. Monitor the test process related exit criteria, e.g., test coverage against plan
        2. Monitor the product quality related exit criteria against plan
        3. Identify and document significant deviations in exit criteria status from plan
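
        The sketch below illustrates how process-related and product-related exit criteria such as those above might be checked against current measures. Both the criteria and the measured values are hypothetical examples, not criteria prescribed by the model.

          # Illustrative exit criteria check (criteria and measured values are assumptions).
          exit_criteria = {
              "requirements coverage reached": lambda m: m["requirements_covered_pct"] >= 100,
              "no open high-priority defects": lambda m: m["open_high_priority_defects"] == 0,
              "all planned tests executed": lambda m: m["tests_run"] >= m["tests_planned"],
          }

          measures = {
              "requirements_covered_pct": 96,
              "open_high_priority_defects": 2,
              "tests_run": 180,
              "tests_planned": 200,
          }

          for name, criterion in exit_criteria.items():
              print(f"{name}: {'met' if criterion(measures) else 'NOT met'}")
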
      9. Monitor suspension and resumption criteria
      10. Monitor the status of the suspension and resumption criteria against those identified in the test plan.

        Example work products:

        1. Records of suspension criteria monitoring
        2. Records of resumption criteria monitoring

        Sub-practices:

        1. Monitor suspension criteria against those documented in the test plan
        2. Suspend testing if suspension criteria are met and initiate corrective action
        3. Monitor resumption criteria against those documented in the test plan
        4. Initiate the resumption of testing once the issues have been solved using the defined resumption criteria
      11. Conduct product quality reviews
      12. Periodically review product quality.

        Product quality reviews are conducted to keep stakeholders informed. They are often held both internally with test team members and externally with stakeholders outside testing, and are typically informal reviews conducted regularly, e.g., weekly, bi-weekly or monthly.

        Example work products:

        1. Product quality report
        2. Documented product quality review results, e.g., minutes of the product quality meetings

        Sub-practices:

        1. Collect and analyze product quality monitoring measures
        2. Regularly communicate status on product quality to stakeholders
        3. Examples of stakeholders typically include the following:

          • Project management
          • Business management
          • Test team members
        4. Regularly organize product quality review meetings with stakeholders
        5. Identify, document and discuss significant product quality issues and deviations from expectations and plan
        6. Document the results of the reviews, e.g., decisions made and corrective actions defined
      13. Conduct product quality milestone reviews
      14. Review product quality status at selected test milestones.

        Product quality milestone reviews are planned during test planning and are typically formal reviews.

        Example work products:

        1. Test milestone report
        2. Documented milestone review results, e.g., minutes of the review meeting

        Sub-practices:

        1. Conduct product quality reviews at meaningful points in the test schedule, such as the completion of selected stages, with relevant stakeholders
        2. Communicate product quality status to stakeholders by means of a formal product quality report
        3. Examples of elements of a product quality test report include the following [after IEEE 829]:

          • Identifier (and reference to test plan)
          • Management summary
          • Variances (against plan)
          • Comprehensive assessment
          • Summary of results
          • Evaluation
          • Summary of activities
          • Approvals
        4. Review the status regarding defects, product risks and exit criteria
        5. Identify and document significant product quality issues and their impacts
        6. Document the results of the reviews, action items, and decisions
        7. Update the test plan to reflect accomplishments and the latest status
    3. Manage Corrective Actions to Closure

      Corrective actions are managed to closure when test progress or product quality deviates significantly from the test plan or expectations.

      1. Analyze issues
      2. Collect and analyze the issues and determine corrective actions necessary to address them.

        Example work products:

        1. List of issues needing corrective actions

        Sub-practices:

        1. Gather issues for analysis
        2. Examples of issues to be gathered include the following:

          • Significant deviations in actual test planning parameters from estimates in the test plan
          • Commitments that have not been satisfied
          • Significant changes in test project risk status, e.g., possible late delivery and/or poor quality of test basis and/or test object
          • Stakeholder representation or involvement issues
          • Significant deviations in test environment implementation progress from plan
          • Number, severity and priority level of defects found
          • Status regarding exit criteria
          • Significant changes in product risks
        3. Analyze issues to determine need for corrective action
        4. Note that corrective action is required when the issue, if left unresolved, may prevent testing or even the project from meeting its objectives.

      3. Take corrective action
      4. Take corrective action as appropriate for the identified issues.

        Example work products:

        1. Corrective action plan

        Sub-practices:

        1. Determine and document the appropriate actions needed to address the identified issues
        2. Examples of potential actions include the following:

          • Re-negotiating commitments
          • Adding resources
          • Changing the test approach
          • Re-visiting the exit criteria
          • Deferring release date
          • Changing the scope of the project, e.g., delivering less functionality

          Note that many of the potential actions listed above will lead to a revised test plan.

        3. Review and get agreement with relevant stakeholders on the actions to be taken
        4. Re-negotiate commitments with stakeholders (both internally and externally)
      5. Manage corrective action
      6. Manage the corrective action to closure.

        Example work products:

        1. Corrective action results

        Sub-practices:

        1. Monitor corrective actions for completion
        2. Analyze results of corrective actions to determine the effectiveness of the corrective actions
        3. Report progress on status of corrective actions
  • PA 2.4 Test Design and Execution

    Purpose
    The purpose of Test Design and Execution is to improve the test process capability during test design and execution by establishing test design specifications, using test design techniques, performing a structured test execution process and managing test incidents to closure.
    Introductory Notes
    Structured testing implies that test design techniques are applied, possibly supported by tools. Test design techniques are used to derive and select test conditions and design test cases from requirements and design specifications. The test conditions and test cases are documented in a test specification. A test case consists of the description of the input values, execution preconditions, expected results and execution post conditions. At a later stage, as more information becomes available regarding the implementation, the test cases are translated into test procedures. In a test procedure, also referred to as a manual test script, the specific test actions and checks are arranged in an executable sequence. Specific test data required to be able to run the test procedure is created. The tests will subsequently be executed using these test procedures.
    The test design and execution activities follow the test approach as defined in the test plan. The specific test design techniques applied (e.g., black box, white box or experience-based) are based on the level and type of product risks identified during test planning.
    During the test execution stage, incidents are found and incident reports are written. Incidents are logged using an incident management system and are communicated to the stakeholders per established protocols. A basic incident classification scheme is established for incident management, and a procedure is put in place to handle the incident lifecycle process including managing each incident to closure.
    Scope
    The process area Test Design and Execution addresses the test preparation phase including the application of test design techniques to derive and select test conditions and test cases. It also addresses the creation of specific test data, the execution of the tests using documented test procedures and incident management.

    1. Perform Test Analysis and Design using Test Design Techniques

      During test analysis and design, the test approach is translated into tangible test conditions and test cases using test design techniques.

      1. Identify and prioritize test conditions
      2. Test conditions are identified and prioritized using test design techniques, based on an analysis of the test items as specified in the test basis.

        Example work products:

        1. Test basis issue log
        2. Test conditions
        3. Test design specification

        Sub-practices:

        1. Study and analyze the test basis (such as requirements, architecture, design, interface specifications and user manual)
        2. Discuss issues regarding the test basis with the document owner
        3. Select the most appropriate test design techniques in line with the documented test approach
        4. Examples of black box test design techniques include the following:

          • Equivalence Partitioning
          • Boundary Value Analysis
          • Decision Tables (Cause/Effect Graphing)
          • State Transition Testing

          Examples of white box test design techniques include the following:

          • Statement Testing
          • Decision (Branch) Testing
          • Condition Testing

          Note that in addition to black box and white box techniques, experience-based techniques such as exploratory testing can also be used, in which case the test design specification is documented by means of a test charter.

          Typically more than one test design technique is selected per test level in order to be able to differentiate the intensity of testing, e.g., number of test cases, based on the level of risk of the test items. In addition to using the risk level to prioritize testing, other factors influence the selection of test design techniques such as development lifecycle, quality of the test basis, skills and knowledge of the testers, contractual requirements and imposed standards.

        5. Derive the test conditions from the test basis using test design techniques (a small worked example of equivalence partitioning and boundary value analysis follows this list)
        6. Prioritize the test conditions based on identified product risks
        7. Document the test conditions in a test design specification, based on the test design specification standard
        8. Examples of elements of a test design specification include the following [after IEEE 829]:

          • Test design specification identifier
          • Items and/or features to be tested
          • Approach refinements
          • Test conditions
          • Pass/fail criteria
        9. Review the test design specifications with stakeholders
        10. Revise the test design specifications and test conditions as appropriate, e.g., whenever the requirements change
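
        As a small worked example of applying two of the black box techniques named above, the sketch below derives test values for a hypothetical input field that accepts integer ages from 18 to 65, using equivalence partitioning and two-value boundary value analysis. The field and its valid range are assumptions made for illustration only.

          # Illustrative equivalence partitioning and boundary value analysis
          # for a hypothetical input field accepting integer ages 18..65.
          VALID_MIN, VALID_MAX = 18, 65

          # Equivalence partitioning: one representative value per partition.
          partitions = {
              "below valid range (invalid)": VALID_MIN - 10,
              "within valid range (valid)": (VALID_MIN + VALID_MAX) // 2,
              "above valid range (invalid)": VALID_MAX + 10,
          }

          # Two-value boundary value analysis: values on and just outside each boundary.
          boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MAX, VALID_MAX + 1]

          def expected_result(age):
              """Expected behaviour of the (assumed) test object."""
              return "accepted" if VALID_MIN <= age <= VALID_MAX else "rejected"

          for name, value in partitions.items():
              print(f"partition {name}: input {value}, expected {expected_result(value)}")
          for value in boundary_values:
              print(f"boundary value: input {value}, expected {expected_result(value)}")
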
      3. Identify and prioritize test cases
      4. Test cases are identified and prioritized using test design techniques.

        Example work products:

        1. Test cases
        2. Test case specification

        Sub-practices:

        1. Derive the test cases from the test conditions using test design techniques. A test case consists of a set of input values, execution preconditions, expected results and execution post conditions.
        2. Prioritize the test cases based on identified product risks
        3. Document the test cases in a test case specification, based on the test case specification standard
        4. Examples of elements of a test case specification include the following [IEEE 829]:

          • Test case specification identifier
          • Items and/or features to be tested
          • Input specifications
          • Output specifications
          • Environmental needs
          • Special procedural requirements
          • Inter-case dependencies
        5. Review the test case specifications with stakeholders
        6. Revise the test case specifications as appropriate
      5. Identify necessary specific test data
      6. Specific test data necessary to support the test conditions and execution of test cases is identified.

        Example work products:

        1. Test data specification

        Sub-practices:

        1. Identify and specify the necessary specific test data required to implement and execute the test cases
        2. Document the necessary specific test data, possibly as part of the test case specification
      7. Maintain horizontal traceability with requirements
      8. Traceability between the requirements and the test conditions is established and maintained.

        Example work products:

        1. Requirements / test conditions traceability matrix

        Sub-practices:

        1. Maintain requirements traceability to ensure that the source of test conditions is documented
        2. Generate a requirements / test conditions traceability matrix
        3. Set up the traceability matrix such that monitoring of requirements coverage during test execution is facilitated
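
        A minimal sketch of such a requirements / test conditions traceability matrix, kept as a dictionary keyed by requirement; the requirement and test condition identifiers are hypothetical. During test execution the same structure can be extended with a pass/fail status per test condition to monitor requirements coverage.

          # Illustrative requirements / test conditions traceability matrix (hypothetical identifiers).
          traceability = {
              "REQ-001": ["TC-001", "TC-002"],
              "REQ-002": ["TC-003"],
              "REQ-003": [],  # not yet covered by any test condition
          }

          def uncovered_requirements(matrix):
              """Return requirements that have no associated test conditions."""
              return [requirement for requirement, conditions in matrix.items() if not conditions]

          print("Uncovered requirements:", uncovered_requirements(traceability))
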
    2. Perform Test Implementation

      During test implementation, the test procedures are developed and prioritized, including the intake test. Test data is created, and the test execution schedule is defined during this phase.

      1. Develop and prioritize test procedures
      2. Test procedures are developed and prioritized.

        Example work products:

        1. Test procedure specification
        2. Automated test script

        Sub-practices:

        1. Develop test procedures by combining the test cases in a particular order and including any other information needed for test execution
        2. Prioritize the test procedures based on identified product risks
        3. Document the test procedures in a test procedure specification, based on the test procedure specification standard
        4. Examples of elements of a test procedure specification include the following [IEEE 829]:

          • Test procedure specification identifier
          • Purpose
          • Special requirements (execution preconditions), e.g., dependencies on other test procedures
          • Procedure steps (test actions and checks)
        5. Review the test procedure specifications with stakeholders
        6. Revise the test procedure specifications as appropriate
        7. Optionally, the test procedures can be automated and translated into automated test scripts
      3. Create specific test data
      4. Specific test data, as specified during the test analysis and design activity, is created.

        Example work products:

        1. Specific test data

        Sub-practices:

        1. Create specific test data required to perform the tests as specified in the test procedures
        2. Archive the set of specific test data to allow the initial situation to be restored in the future
        3. Refer to SP 3.2 Perform test data management from the process area Test Environment for managing the created test data.

      5. Specify intake test procedure
      6. The intake test is specified. This test, sometimes called the confidence or smoke test, is used to decide at the beginning of test execution whether the test object is ready for detailed and further testing.

        Example work products:

        1. Intake checklist
        2. Intake test procedure specification

        Sub-practices:

        1. Define a list of checks to be executed during the intake test using the entry criteria as defined in the test plan as an input
        2. Examples of checks to be part of an intake test include the following:

          • All necessary major functions are accessible
          • Representative functions are accessible and working at least for the positive path case
          • Interfaces with other components or systems that will be tested are working
          • The documentation is complete for the available functionality, e.g., test release note, user manual, installation manual
        3. Develop the intake test procedure, based on the checks identified, by putting the checks (test cases) in an executable order and including any other information needed for test execution
        4. Document the intake test procedures in a test procedure specification, based on the test procedure specification standard
        5. Review the intake test procedure specification with stakeholders
        6. Revise the intake test procedure specification as appropriate.
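
        As an illustration of an intake (smoke) test built from such a checklist, the sketch below runs a list of check functions and reports a go/no-go decision. The individual checks are hypothetical placeholders; in practice each check would exercise the test object or inspect the delivered documentation.

          # Illustrative intake (smoke) test: run checklist items and decide go/no-go.
          def major_functions_accessible():
              return True  # placeholder: e.g., the application starts and main screens open

          def interfaces_reachable():
              return True  # placeholder: e.g., connected components or systems respond

          def release_documentation_complete():
              return False  # placeholder: e.g., the test release note is missing

          intake_checks = [major_functions_accessible, interfaces_reachable, release_documentation_complete]

          failed = [check.__name__ for check in intake_checks if not check()]
          print("ready for detailed testing" if not failed else f"not ready, failed checks: {failed}")
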
      7. Develop test execution schedule
      8. A test execution schedule is developed that describes the sequence in which the test procedures will be executed.

        Example work products:

        1. Test execution schedule

        Sub-practices:

        1. Investigate the dependencies between the test procedures
        2. Schedule the test procedures using their priority level as a main driver
        3. Assign a tester to perform the execution of a test procedure
        4. Review the test execution schedule with stakeholders
        5. Revise the test execution schedule as appropriate
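
        The sub-practices above order test procedures by dependency and priority. The sketch below schedules only procedures whose prerequisites have already been placed, choosing the highest priority among them first; the procedure identifiers, priorities (1 = highest) and dependencies are hypothetical.

          # Illustrative test execution scheduling: respect dependencies, then priority.
          procedures = {
              "TP-01": {"priority": 1, "depends_on": []},
              "TP-02": {"priority": 2, "depends_on": ["TP-01"]},
              "TP-03": {"priority": 1, "depends_on": ["TP-01"]},
              "TP-04": {"priority": 3, "depends_on": ["TP-02", "TP-03"]},
          }

          def build_schedule(procedures):
              """Return an execution order honouring dependencies, preferring high priority."""
              schedule, remaining = [], dict(procedures)
              while remaining:
                  ready = [name for name, info in remaining.items()
                           if all(dep in schedule for dep in info["depends_on"])]
                  if not ready:
                      raise ValueError("circular dependency between test procedures")
                  chosen = min(ready, key=lambda name: remaining[name]["priority"])
                  schedule.append(chosen)
                  del remaining[chosen]
              return schedule

          print(build_schedule(procedures))  # ['TP-01', 'TP-03', 'TP-02', 'TP-04']
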
    3. Perform Test Execution

      Tests are executed according to the previously specified test procedures and test schedule. Incidents are reported and test logs are written.

      1. Perform intake test
      2. Perform the intake test (confidence test) to decide whether the test object is ready for detailed and further testing.

        Example work products:

        1. Intake test log
        2. Incident reports

        Sub-practices:

        1. Perform the intake test (confidence test) using the documented intake test procedure to decide if the test object is ready for detailed and further testing
        2. Document the results of the intake test by means of a test log, based on the test log standard
        3. Log incidents when a discrepancy is observed

        Note that this practice is highly related to the practice SP 2.4 Perform test environment intake test from the process area Test Environment. The intake test on the test object and test environment can possibly be combined.

      3. Execute test cases
      4. According to the defined execution schedule, the test cases are run manually using documented test procedures and/or via test automation using pre-defined test scripts.

        Example work products:

        1. Test results

        Sub-practices:

        1. Execute the test cases using documented test procedures and/or test scripts
        2. Record actual results
        3. Compare actual results with expected results
        4. Repeat test activities after the receipt of a fix or change by performing re-testing (confirmation testing)
        5. Perform regression testing as appropriate.

        Note that some testing will be carried out informally using no pre-defined detailed test procedures, e.g., during exploratory testing or error guessing.

      5. Report test incidents
      6. Discrepancies between the actual and expected results are reported as test incidents.

        Example work products:

        1. Test incident reports

        Sub-practices:

        1. Log test incidents when a discrepancy is observed
        2. Analyze the test incident for further information on the problem
        3. Establish the cause of the test incident, e.g., system under test, test documentation, test data, test environment or test execution mistake
        4. Assign an initial priority and severity level to the test incident
        5. Formally report the test incident using an incident classification scheme
        6. Examples of elements of a test incident report include the following [IEEE 829]:

          • Test incident report identifier
          • Summary
          • Incident description (input, expected results, actual results, anomalies, date and time, test procedure step, environment, attempts to repeat, testers, observers)
          • Priority level
          • Severity level
        7. Review the test incident report with stakeholders
        8. Store the test incidents in a central repository
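
        For illustration, the incident report elements listed above can be captured in a simple record structure such as the one below. The priority and severity scales are assumed example values, not classifications prescribed by the model.

          # Illustrative incident report record following the IEEE 829-style elements above.
          from dataclasses import dataclass, field
          from datetime import datetime

          @dataclass
          class TestIncidentReport:
              identifier: str
              summary: str
              description: str        # input, expected/actual results, environment, etc.
              priority: str           # e.g., "high", "medium", "low" (assumed scale)
              severity: str           # e.g., "critical", "major", "minor" (assumed scale)
              status: str = "open"    # lifecycle state, e.g., open / fix / closed
              reported_at: datetime = field(default_factory=datetime.now)

          incident = TestIncidentReport(
              identifier="INC-0042",
              summary="Login fails for valid credentials",
              description="Expected: user logged in; actual: error message on submit",
              priority="high",
              severity="major",
          )
          print(incident)
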
      7. Write test log
      8. Test execution data is collected and documented by means of a test log, providing a chronological record of the test execution.

        Example work products:

        1. Test logs

        Sub-practices:

        1. Collect test execution data
        2. Document the test execution data by means of a test log, based on the test log standard
        3. Examples of elements of a test log include the following [IEEE 829]:

          • Test log identifier
          • Description (items being tested, environment in which the testing has been executed)
          • Activity and event entries (execution description, test results, anomalous events, incident report identifiers)
        4. Review the test log with stakeholders
    4. Manage Test Incidents to Closure

      Test incidents are managed and resolved as appropriate.

      1. Decide disposition of test incidents in configuration control board
      2. Appropriate actions on test incidents are decided upon by a configuration control board (CCB).

        Example work products:

        1. CCB meeting report, including a decision log regarding test incidents
        2. Updated incident report

        Sub-practices:

        1. Establish a CCB with participation of stakeholders, including testing
        2. Review and analyze the incidents found
        3. Revisit the priority and severity level of the test incident
        4. Determine actions to be taken for the test incidents found
        5. Examples of decisions that can be made include the following:

          • Rejected, incident is not a defect
          • Deferred, incident is declined for repair but may be dealt with during a later stage
          • Fix, incident is accepted and shall be repaired
        6. Record the decision including rationale and other relevant information in the incident database; the incident report is updated.
        7. Assign the incident to the appropriate group, e.g., development, to perform appropriate actions
      3. Perform appropriate action to fix the test incident
      4. Appropriate actions are taken to fix, re-test and close the test incidents or defer the incident(s) to a future release.

        Example work products:

        1. Test log (including test results)
        2. Updated incident report

        Sub-practices:

        1. Repair the incident which may involve updating documentation and/or software code
        2. Record information on the repair action in the incident database; the incident report is updated
        3. Perform re-testing, and possibly regression testing, to confirm the fix of the incident
        4. Record information on the re-testing action in the incident database; the incident report is updated
        5. Formally close the incident provided re-testing was successful
      5. Track the status of test incidents
      6. The status of the test incidents is tracked and appropriate actions are taken as needed.

        Example work products:

        1. CCB meeting report
        2. Incident status report

        Sub-practices:

        1. Provide status reports on incidents to stakeholders
        2. Examples of elements that are covered in an incident status report include the following:

          • Incidents opened during period XXXX-XXXX
          • Incidents closed during period XXXX-XXXX
          • Incidents remaining open for X or more weeks
        3. Discuss status reports in a CCB meeting
        4. Take appropriate action if needed, e.g., if an incident that needs repair has the same status for a longer than agreed period of time
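
        A minimal sketch of the incident status measures listed above: counting incidents opened and closed in a reporting period and incidents open longer than an agreed threshold. The incidents, the reporting period and the four-week threshold are hypothetical assumptions.

          # Illustrative incident status report (hypothetical incidents, period and threshold).
          from datetime import date, timedelta

          incidents = [
              {"id": "INC-1", "opened": date(2022, 4, 1), "closed": date(2022, 4, 20)},
              {"id": "INC-2", "opened": date(2022, 4, 15), "closed": None},
              {"id": "INC-3", "opened": date(2022, 2, 1), "closed": None},
          ]
          period_start, period_end = date(2022, 4, 1), date(2022, 4, 30)
          MAX_OPEN_AGE = timedelta(weeks=4)
          today = date(2022, 5, 2)

          opened_in_period = [i for i in incidents if period_start <= i["opened"] <= period_end]
          closed_in_period = [i for i in incidents
                              if i["closed"] and period_start <= i["closed"] <= period_end]
          long_open = [i for i in incidents
                       if i["closed"] is None and today - i["opened"] > MAX_OPEN_AGE]

          print(f"Incidents opened in period: {len(opened_in_period)}")
          print(f"Incidents closed in period: {len(closed_in_period)}")
          print(f"Open for more than 4 weeks: {[i['id'] for i in long_open]}")
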
  • PA 2.5 Test Environment

    Purpose
    The purpose of Test Environment is to establish and maintain an adequate environment, including test data, in which it is possible to execute the tests in a manageable and repeatable way.
    Introductory Notes
    A managed and controlled test environment is indispensable for any testing. It is also needed to obtain test results under conditions which are as close as possible to the 'real-life' situation. This is especially true for higher level testing, e.g., at system and acceptance test level. Furthermore, at any test level the reproducibility of test results should not be endangered by undesired or unknown changes in the test environment.
    Specification of test environment requirements is performed early in the project. This specification is reviewed to ensure its correctness, suitability, feasibility and accurate representation of a 'real-life' operational environment. Early test environment requirements specification has the advantage of providing more time to acquire and/or develop the required test environment and components such as simulators, stubs or drivers. The type of environment required will depend on the product to be tested and the test types, methods and techniques used.
    Availability of a test environment encompasses a number of issues that need to be addressed. For example, is it necessary for testing to have an environment per test level? A separate test environment per test team or per test level can be very expensive. It may be possible to share the same environment between testers and developers. If so, strict management and control are necessary as both testing and development activities are done in the same environment, which can easily impact progress negatively. When poorly managed, this situation can cause many problems ranging from conflicting reservations to people finding the environment in an unknown or undesired state when starting their activities.
    Finally test environment management also includes managing access to the test environment by providing log-in details, managing test data, providing and enforcing configuration management and providing technical support on progress disturbing issues during test execution.
    As part of the Test Environment process area, the requirements regarding generic test data, and the creation and management of the test data are also addressed. Whereas specific test data is defined during the test design and analysis activity, more generic test data is often defined and created as a separate activity. Generic test data is reused by many testers and provides overall background data that is needed to perform the system functions. Generic test data often consists of master data and some initial content for primary data. Sometimes timing requirements influence this activity.
    Scope
    The process area Test Environment addresses all activities for specifying test environment requirements, implementing the test environment and managing and controlling the test environment. Management and control of the test environment also includes aspects such as configuration management and ensuring availability. The Test Environment process area scope includes both the physical test environment and the test data.

    1. Develop Test Environment Requirements

      Stakeholder needs, expectations and constraints are collected and translated into test environment requirements.

      1. Elicit test environment needs
      2. Elicit test environment needs, expectations and constraints, including those for generic test data.

        Example work products:

        1. Test environment needs

        Sub-practices:

        1. Study the test approach and test plan for test environment implications
        2. Engage testing representatives to elicit test environment needs, expectations and constraints, including those for generic test data
        3. Examples of test environment needs include the following:

          • Network components
          • Software components, e.g., operating systems, firmware
          • Simulators, stubs and drivers
          • Supporting documentation, e.g., user guides, technical guides and installation manuals
          • Interfacing components or products
          • Tools to develop stubs and drivers
          • Test equipment
          • Requirements for multiple test environments
          • Generic test databases
          • Test data generators
          • Test data storage needs
          • Test data archive and restore facilities
        4. Document the test environment needs, expectations and constraints, including those for generic test data
      3. Develop the test environment requirements
      4. Transform the test environment needs into prioritized test environment requirements.

        Example work products:

        1. Prioritized test environment requirements
        2. Requirements allocation sheet

        Sub-practices:

        1. Translate the test environment needs, expectations and constraints, including those for generic test data, into documented test environment requirements
        2. Establish and maintain a prioritization of test environment requirements
        3. Having prioritized test environment requirements helps to determine scope. This prioritization ensures that requirements critical to the test environment are addressed quickly.

        4. Allocate test environment requirements to test environment components
      5. Analyze the test environment requirements
      6. Analyze the requirements to ensure they are necessary, sufficient and feasible.

        Example work products:

        1. Test environment requirements analysis report
        2. Test environment requirements review log
        3. Test environment project risks

        Sub-practices:

        1. Analyze test environment requirements to determine whether they fully support the test lifecycle and test approach
        2. Examples of practices to support the analysis of the test environment requirements include the following:

          • Mapping of test environment requirements to test levels
          • Mapping of test environment requirements to test types
        3. Identify key test environment requirements having a strong influence on cost, schedule or test performance
        4. Identify test environment requirements that can be implemented using existing or modified resources
        5. Analyze test environment requirements to ensure that they are complete, feasible and realizable
        6. Analyze test environment requirements to ensure that together they sufficiently represent the 'real-life' situation, especially for higher test levels
        7. Identify test project risks related to the test environment requirements
        8. Review the test environment requirements specification with stakeholders
        9. Revise the test environment requirements specification as appropriate
    2. Perform Test Environment Implementation

      The test environment requirements are implemented and the test environment is made available to be used during test execution.

      1. Implement the test environment
      2. Implement the test environment as specified in the test environment requirements specification and according to the defined plan.

        Example work products:

        1. Operational test environment
        2. Test results for test environment components

        Sub-practices:

        1. Implement the test environment as specified and according to the defined plan
        2. Adhere to applicable standards and criteria
        3. Perform testing on test environment components as appropriate
        4. Develop supporting documentation, e.g., installation, operation and maintenance documentation
        5. Revise the test environment components as necessary
        6. An example of when the test environment may need to be revised is when problems surface during implementation that could not be foreseen during requirements specification.

      3. Create generic test data
      4. Generic test data as specified in the test environment requirements specification is created.

        Example work products:

        1. Generic test data

        Sub-practices:

        1. Create generic test data required to support the execution of the tests
        2. Anonymize sensitive data in line with the policy when 'real-life' data is used as a source
        3. Archive the set of generic test data
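
        The anonymization sub-practice above can be illustrated with a small sketch that masks sensitive fields in 'real-life' records before they are used as generic test data. The field names and the hashing approach are assumptions; an actual implementation must follow the organization's data protection policy.

          # Illustrative anonymization of sensitive fields in source data reused as test data.
          import hashlib

          SENSITIVE_FIELDS = ("name", "email")  # assumed sensitive fields

          def anonymize(record):
              """Replace sensitive field values with a short, stable pseudonym."""
              masked = dict(record)
              for field_name in SENSITIVE_FIELDS:
                  if field_name in masked:
                      digest = hashlib.sha256(str(masked[field_name]).encode()).hexdigest()[:8]
                      masked[field_name] = f"anon-{digest}"
              return masked

          customer = {"name": "Jane Example", "email": "jane@example.com", "balance": 120.50}
          print(anonymize(customer))
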
      5. Specify test environment intake test procedure
      6. The test environment intake test (confidence test), to be used to decide whether the test environment is ready for testing, is specified.

        Example work products:

        1. Test environment intake checklist
        2. Test environment intake test procedure specification
        3. Test environment intake test procedure specification review log

        Sub-practices:

        1. Define a list of checks to be carried out during the intake test of the test environment
        2. Develop the test environment intake test procedure based on the checks identified by putting the checks (test cases) in an executable order and including any other information needed for performing the test environment intake test
        3. Document the test environment intake test procedure in a test procedure specification, based on the test procedure specification standard
        4. Review the test environment intake test procedure specification with stakeholders
        5. Revise the test environment intake test procedure as appropriate
        6. Note that this practice is highly related to the practice SP 2.3 Specify intake test procedure from the process area Test Design and Execution; the two intake test procedures can possibly be combined.

      7. Perform test environment intake test
      8. The test environment intake test (confidence test) is performed to determine whether the test environment is ready to be used for testing.

        Example work products:

        1. Test environment intake test log
        2. Incident reports

        Sub-practices:

        1. Perform the intake test (confidence test) using the documented intake test procedure to decide if the test environment is ready to be used for testing.
        2. Document the results of the test environment intake test by means of a test log, based on the test log standard
        3. Log incidents if a discrepancy is observed
        4. Refer to SP 3.3 Report test incidents from the process area Test Design and Execution for more information on incident logging.

          Note that this practice is highly related to the practice SP 3.1 Perform intake test from the process area Test Design and Execution and the intake test on the test object and test environment can possibly be combined.

    3. Manage and Control Test Environments

      Test environments are managed and controlled to allow for uninterrupted test execution.

      1. Perform systems management
      2. Systems management is performed on the test environments to effectively and efficiently support the test execution process.

        Example work products:

        1. System management log file
        2. Test logging

        Sub-practices:

        1. Install components needed, e.g., for a specific test session
        2. Manage access to the test environment by providing log-in details
        3. Provide technical support on progress disturbing issues during test execution
        4. Provide logging facilities, which can be used afterwards to analyze test results
      3. Perform test data management
      4. Test data is managed and controlled to effectively and efficiently support the test execution process.

        Example work products:

        1. Archived test data
        2. Test data management log file

        Sub-practices:

        1. Manage security and access to the test data
        2. Manage test data, e.g., with respect to storage resources needed
        3. Archive and restore test data and other files on a regular basis and, if necessary, in relation to a specific test session
      5. Coordinate the availability and usage of the test environments
      6. The availability and usage of the test environment by multiple groups is coordinated to achieve maximum efficiency.

        Example work products:

        1. Test environment reservation schedule

        Sub-practices:

        1. Set up a procedure for managing the usage of test environments by multiple groups
        2. Make documented reservations of the test environments in the reservation schedule
        3. Identify specific test environment components needed when making a reservation
        4. Discuss conflicting reservations with involved groups and stakeholders
        5. Define a test environment reservation schedule for the upcoming period
        6. Use the test environment during the reserved and assigned time-slot
        7. Decommission the test environment correctly after usage, e.g., by making sure it is in a known state and test files are removed
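
        To illustrate the reservation sub-practices above, the sketch below detects overlapping reservations of the same test environment so that conflicts can be discussed with the involved groups. The environments, teams and time slots are hypothetical.

          # Illustrative detection of conflicting test environment reservations (hypothetical data).
          from datetime import datetime
          from itertools import combinations

          reservations = [
              {"env": "SIT-1", "team": "team A", "start": datetime(2022, 6, 1, 9), "end": datetime(2022, 6, 1, 13)},
              {"env": "SIT-1", "team": "team B", "start": datetime(2022, 6, 1, 12), "end": datetime(2022, 6, 1, 17)},
              {"env": "UAT-1", "team": "team C", "start": datetime(2022, 6, 1, 9), "end": datetime(2022, 6, 1, 17)},
          ]

          def conflicts(reservations):
              """Return pairs of reservations for the same environment whose time slots overlap."""
              return [(a, b) for a, b in combinations(reservations, 2)
                      if a["env"] == b["env"] and a["start"] < b["end"] and b["start"] < a["end"]]

          for a, b in conflicts(reservations):
              print(f'Conflict on {a["env"]}: {a["team"]} and {b["team"]} overlap')
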
      7. Report and manage test environment incidents
      8. Problems that occur when using the test environment are formally reported as incidents and are managed to closure.

        Example work products:

        1. Test environment incident reports
        2. CCB meeting reports, including a decision log regarding test environment incidents

        Sub-practices:

        1. Log the test environment incident when a problem is observed
        2. Formally report the test environment incident using an incident classification scheme
        3. Manage test environment incidents to closure
        4. Refer to the Test Design and Execution process area for practices and sub-practices covering incident reporting and management.

Reference: TMMi Foundation (2018), “Test Maturity Model integration (TMMi®) Release 1.2”, [online] https://sjsi.org/materialy-tmmi/ [accessed: 19.05.2022].