Chapter 2 – Types of Performance Testing
- J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber, Dennis Rea
- Learn about various types of performance tests.
- Understand the values and benefits associated with each type of performance testing.
- Understand the potential disadvantages of each type of performance testing.
Performance testing is a generic term that can refer to many different types of performance-related testing, each of which addresses a specific problem area and provides its own benefits, risks, and challenges.
This chapter defines, describes, and outlines the benefits and project risks associated with several common types or categories of performance-related testing. It will also help you overcome the frequent misuse and misunderstanding of many of these terms, even within established teams.
How to Use This Chapter
Use this chapter to understand various types of performance-related testing. This will help your team decide which types of performance-related testing are most likely to add value to a given project based on current risks, concerns, or testing results. To get the most from this chapter:
- Use the “Key Types of Performance Testing” section to make a more informed decision about which type of testing is most relevant to your specific concerns, and to balance the trade-offs between different test types.
- Use the “Summary Matrix of Benefits by Key Performance Test Types” section to ensure that you consider not only the benefits of a particular type of test, but also the challenges and areas of concern that are unlikely to be addressed adequately by that type of performance test.
- Use the “Additional Concepts / Terms” section to become more aware of additional types of performance testing that may add value to your project, and to improve your ability to engage in conversations about performance testing with people outside of your specific context.
Performance testing is defined as the technical investigation done to determine or validate the speed, scalability, and/or stability characteristics of the product under test. Performance-related activities, such as testing and tuning, are concerned with achieving response times, throughput, and resource-utilization levels that meet the performance objectives for the application under test. Because performance testing is a general term that covers all of its various subsets, every value and benefit listed under other performance test types in this chapter can also be considered a potential benefit of performance testing in general.
Key Types of Performance Testing
The following are the most common types of performance testing for Web applications.
|Term ||Purpose ||Notes|
|Performance test ||To determine or validate speed, scalability, and/or stability. || A performance test is a technical investigation done to determine or validate the responsiveness, speed, scalability, and/or stability characteristics of the product under test. |
|Load test ||To verify application behavior under normal and peak load conditions. || Load testing is conducted to verify that your application can meet your desired performance objectives; these performance objectives are often specified in a service level agreement (SLA). A load test enables you to measure response times, throughput rates, and resource-utilization levels, and to identify your application’s breaking point, assuming that the breaking point occurs below the peak load condition.|
| || || Endurance testing is a subset of load testing. An endurance test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes anticipated during production operations over an extended period of time.|
| || || Endurance testing may be used to calculate Mean Time Between Failure (MTBF), Mean Time To Failure (MTTF), and similar metrics.|
|Stress test ||To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions. || The goal of stress testing is to reveal application bugs that surface only under high load conditions. These bugs can include such things as synchronization issues, race conditions, and memory leaks. Stress testing enables you to identify your application’s weak points, and shows how the application behaves under extreme load conditions.|
| || || Spike testing is a subset of stress testing. A spike test is a type of performance test focused on determining or validating the performance characteristics of the product under test when subjected to workload models and load volumes that repeatedly increase beyond anticipated production operations for short periods of time.|
|Capacity test ||To determine how many users and/or transactions a given system will support and still meet performance goals. || Capacity testing is conducted in conjunction with capacity planning, which you use to plan for future growth, such as an increased user base or increased volume of data. For example, to accommodate future loads, you need to know how many additional resources (such as processor capacity, memory usage, disk capacity, or network bandwidth) are necessary to support future usage levels.|
| || || Capacity testing helps you to identify a scaling strategy in order to determine whether you should scale up or scale out.|
The most common performance concerns related to Web applications are “Will it be fast enough?”, “Will it support all of my clients?”, “What happens if something goes wrong?”, and “What do I need to plan for when I get more customers?”. In casual conversation, most people associate “fast enough” with performance testing, “accommodate the current/expected user base” with load testing, “something going wrong” with stress testing, and “planning for future growth” with capacity testing. Collectively, these risks form the basis for the four key types of performance tests for Web applications.
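As a concrete illustration of the metrics a load test collects, the following sketch drives a fixed number of virtual users against a stand-in request function and reports response times and throughput. The request function, user counts, and timings here are illustrative assumptions; a real load test would call the actual application under test.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def send_request():
    # Stand-in for a real call to the application under test;
    # replace with an actual HTTP or API client call in practice.
    time.sleep(0.01)

def load_test(virtual_users, requests_per_user):
    """Drive a fixed workload and report response-time and throughput metrics."""
    timings = []

    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            send_request()
            timings.append(time.perf_counter() - start)

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(virtual_users):
            pool.submit(user_session)
    elapsed = time.perf_counter() - start  # pool exit waits for all sessions

    return {
        "requests": len(timings),
        "throughput_rps": len(timings) / elapsed,
        "avg_response_s": statistics.mean(timings),
        # 95th percentile: the 19th of 19 cut points when n=20
        "p95_response_s": statistics.quantiles(timings, n=20)[18],
    }
```

Comparing these numbers against your performance objectives (for example, an SLA response-time target) is what turns the measurement into a load test verdict.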
Summary Matrix of Benefits by Key Performance Test Types
|Term ||Benefits ||Challenges and Areas Not Addressed|
|Performance test || Determines the speed, scalability and stability characteristics of an application, thereby providing an input to making sound business decisions.|| May not detect some functional defects that only appear under load.|
| || Focuses on determining if the user of the system will be satisfied with the performance characteristics of the application.|| If not carefully designed and validated, may only be indicative of performance characteristics in a very small number of production scenarios.|
| || Identifies mismatches between performance-related expectations and reality.|| Unless tests are conducted on the production hardware, from the same machines the users will be using, there will always be a degree of uncertainty in the results.|
| || Supports tuning, capacity planning, and optimization efforts.|| |
|Load test || Determines the throughput required to support the anticipated peak production load.|| Is not designed to focus primarily on response speed.|
| || Determines the adequacy of a hardware environment.|| Results should only be used for comparison with other related load tests. |
| || Evaluates the adequacy of a load balancer.|| |
| || Detects concurrency issues.|| |
| || Detects functionality errors under load.|| |
| || Collects data for scalability and capacity-planning purposes.|| |
| || Helps to determine how many users the application can handle before performance is compromised.|| |
| || Helps to determine how much load the hardware can handle before resource utilization limits are exceeded.|| |
|Stress test || Determines if data can be corrupted by overstressing the system. || Because stress tests are unrealistic by design, some stakeholders may dismiss test results.|
| || Provides an estimate of how far beyond the target load an application can go before causing failures and errors in addition to slowness.|| It is often difficult to know how much stress is worth applying.|
| || Allows you to establish application-monitoring triggers to warn of impending failures.|| It is possible to cause application and/or network failures that may result in significant disruption if not isolated to the test environment.|
| || Ensures that security vulnerabilities are not opened up by stressful conditions.|| |
| || Determines the side effects of common hardware or supporting application failures.|| |
| || Helps to determine what kinds of failures are most valuable to plan for.|| |
|Capacity test || Provides information about how workload can be handled to meet business requirements. || Capacity model validation tests are complex to create.|
| || Provides actual data that capacity planners can use to validate or enhance their models and/or predictions.|| Not all aspects of a capacity-planning model can be validated through testing at a time when those aspects would provide the most value.|
| || Enables you to conduct various tests to compare capacity-planning models and/or predictions.|| |
| || Determines the current usage and capacity of the existing system to aid in capacity planning.|| |
| || Provides the usage and capacity trends of the existing system to aid in capacity planning.|| |
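The capacity-planning relationship between users, throughput, and response time can be sketched with Little's Law, N = X × (R + Z), where N is concurrent users, X is throughput, R is response time, and Z is user think time. The numbers below are purely illustrative, not measurements from any real system.

```python
def concurrent_users(throughput_rps, avg_response_s, avg_think_time_s):
    """Little's Law: N = X * (R + Z).

    Estimates how many concurrent users a measured throughput and
    response time correspond to, given an assumed think time.
    """
    return throughput_rps * (avg_response_s + avg_think_time_s)

# Illustrative example: 100 requests/s, 0.5 s responses, 4.5 s think time
# supports roughly 500 concurrent users.
estimate = concurrent_users(100, 0.5, 4.5)
```

Running this calculation against data from capacity tests at several load levels is one simple way to validate or refine a capacity-planning model.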
Although the potential benefits far outweigh the challenges related to performance testing, uncertainty over the relevance of the resulting data, rooted in the sheer impossibility of testing every reasonable combination of variables, scenarios, and situations, makes some organizations question the value of conducting performance testing at all. In practice, however, reasonable (not even rigorous) performance testing dramatically reduces the likelihood of catastrophic performance failures, particularly when the performance tests are also used to determine what to monitor in production, so that the team gets early warning signs if the application starts drifting toward a significant performance-related failure.
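One way to act on that monitoring point is to turn limits observed during testing into production alert triggers. The metric names and threshold values in this sketch are hypothetical assumptions; real values would come from your own load and stress test results.

```python
# Hypothetical thresholds derived from performance-test observations;
# substitute the limits measured in your own tests.
ALERT_THRESHOLDS = {
    "p95_response_s": 2.0,    # response time where users reported slowness
    "error_rate": 0.01,       # error rate at which failures began to cascade
    "cpu_utilization": 0.85,  # utilization beyond which throughput plateaued
}

def early_warnings(live_metrics, thresholds=ALERT_THRESHOLDS):
    """Return the names of live metrics that have drifted past test-derived limits."""
    return [name for name, limit in thresholds.items()
            if live_metrics.get(name, 0) > limit]
```

Feeding live production metrics through a check like this gives the team the early warning signs described above, before a drift becomes an outage.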
Additional Concepts / Terms
You will often see or hear the following terms when conducting performance testing. Some of these terms may be common in your organization, industry, or peer network, while others may not. These terms and concepts have been included because they are used frequently enough, and cause enough confusion, to make them worth knowing.
|Component test ||A component test is any performance test that targets an architectural component of the application. Commonly tested components include servers, databases, networks, firewalls, clients, and storage devices.|
|Investigation ||Investigation is an activity based on collecting information related to the speed, scalability, and/or stability characteristics of the product under test that may have value in determining or improving product quality. Investigation is frequently employed to prove or disprove hypotheses regarding the root cause of one or more observed performance issues.|
|Smoke test ||A smoke test is the initial run of a performance test to see if your application can perform its operations under a normal load. |
|Unit test ||In the context of performance testing, a unit test is any test that targets a module of code where that module is any logical subset of the entire existing code base of the application, with a focus on performance characteristics. Commonly tested modules include functions, procedures, routines, objects, methods, and classes. Performance unit tests are frequently created and conducted by the developer who wrote the module of code being tested.|
|Validation test ||A validation test compares the speed, scalability, and/or stability characteristics of the product under test against the expectations that have been set or presumed for that product.|
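A smoke test of the kind described above can be as simple as timing each key operation single-threaded, before any load is applied, and failing fast on errors. The `operations` mapping here is a hypothetical stand-in for real application calls.

```python
import time

def smoke_test(operations, runs=3):
    """Run each named operation a few times with a single user and record
    its worst observed duration; any exception aborts the run so that a
    broken build never reaches full load testing."""
    results = {}
    for name, op in operations.items():
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            op()  # raises on failure, which is exactly what we want here
            timings.append(time.perf_counter() - start)
        results[name] = max(timings)
    return results
```

If the smoke test passes and the worst-case timings look sane, it is reasonable to proceed to the heavier load, stress, and capacity tests described earlier in this chapter.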
Performance testing is a broad and complex activity that can take many forms, address many risks, and provide a wide range of value to an organization.
It is important to understand the different performance test types in order to reduce risks, minimize cost, and know when to apply the appropriate test over the course of a given performance-testing project. To apply different test types over the course of a performance test, you need to evaluate the following key points:
- The objectives of the performance test.
- The context of the performance test; for example, the resources involved, cost, and potential return on the testing effort.