The performance and availability of applications are of great importance. The speed of an application affects, among other things, user satisfaction, productivity and revenue. Unavailable applications damage a brand's image, revenue and productivity. In short: it is important to know in advance whether an application meets its performance requirements.
When we talk about performance tests before a go-live, we often hear the objection ‘it costs too much money and makes no sense because the application is not finished yet’. And even after going live, it is often thought that performance testing costs more than it yields. We think this is a misconception: the business case for performance testing is easy to make.
Profit from performance testing before going live
It is advisable to perform performance tests as early as the development phase of an application, even though the application is not yet finished. These tests check whether the foundation (or architecture) of the application is capable of delivering the required performance, and whether the application (up to that point) meets the performance requirements. For each change or intermediate release, it can be tested whether it affects performance or has unexpected side effects on other parts of the application (this process can also be automated). And if so, it is immediately clear where any deterioration comes from.
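The automated check per change mentioned above can be sketched in a few lines: time a representative operation, compare the result against a performance budget, and fail the build if the budget is exceeded. This is a minimal illustration, not a replacement for a real load-testing tool; the operation and the threshold below are hypothetical stand-ins.

```python
import time


def measure_latency(operation, runs=5):
    """Run `operation` several times and return the average latency in seconds."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)


def sample_operation():
    # Hypothetical stand-in for a real request to the application under test.
    sum(range(100_000))


# Assumed performance requirement (illustrative budget).
THRESHOLD_SECONDS = 0.5

avg = measure_latency(sample_operation)
assert avg < THRESHOLD_SECONDS, f"Performance regression: {avg:.3f}s exceeds budget"
print(f"average latency: {avg:.4f}s (budget {THRESHOLD_SECONDS}s)")
```

Running such a check on every change makes a deterioration visible in the release that introduced it, which is exactly what keeps the cause easy to locate.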
If no performance tests are performed during development, it is unclear whether the application will perform adequately at the point of delivery. And if performance after go-live turns out to be insufficient, it is unclear where the problem lies.
Costs versus benefits
This makes it possible to draw up a business case for performance testing: the costs of performance testing versus the costs of performance problems. The most direct costs of performance problems lie in finding the cause: man-hours from different areas of expertise are needed, tooling is needed, and the solution itself also involves costs (for example, scaling up capacity by adding hardware, or making changes to the application's code). Moreover, after the problem is solved, a performance test is often performed to verify the desired result. In addition, there are indirect costs such as the aforementioned damage to image, loss of productivity or even loss of revenue.
An example: one of our customers in the healthcare market invested around 30,000 euros in performance testing during a large migration process. The test showed that the expected hardware requirements were not necessary, which saved 30% of the hardware costs. This meant the test immediately paid for itself. Moreover, the new application chain went live with good performance, allowing the migration to be completed successfully.
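The break-even point in this example is easy to sketch with round numbers. The planned hardware budget below is an assumption for illustration; the source only states the test cost (30,000 euros) and the saving (30% of hardware costs).

```python
test_cost = 30_000            # euros invested in the performance test (from the example)
hardware_budget = 120_000     # assumed planned hardware spend (illustrative)
savings_rate = 0.30           # fraction of hardware costs avoided (from the example)

savings = savings_rate * hardware_budget
net_benefit = savings - test_cost

# The test pays for itself whenever the avoided hardware spend exceeds its cost,
# i.e. when hardware_budget > test_cost / savings_rate = 100,000 euros.
break_even_budget = test_cost / savings_rate

print(f"savings: {savings:,.0f} euros")              # → savings: 36,000 euros
print(f"net benefit: {net_benefit:,.0f} euros")      # → net benefit: 6,000 euros
print(f"break-even budget: {break_even_budget:,.0f} euros")  # → 100,000 euros
```

Note that this only covers the direct hardware saving; the indirect costs of a failed go-live (image damage, lost productivity and revenue) would make the case even stronger.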
Preventing is cheaper than curing
In our daily practice, we regularly see that organizations first have to experience the cost of a performance problem after go-live before they are convinced that 'preventing is cheaper than curing'. Experience shows that an overview of the impact of P1 incidents is sufficient for a solid business case for both performance testing and continuous performance monitoring.
Need help in drawing up the business case for performance testing? Of course, we are happy to help!