We see it often in our practice: a development party delivers a software package commissioned by the client and reports neatly on a list of quality aspects, each with a green check mark next to it: code quality, compliance, functional quality, performance, and so on. The operations party assembles the necessary hardware according to the developers' specifications and the new software package is installed. After a number of final functional tests, the application goes into production. Then come the drinks, a word of thanks from the client and the final payment to the supplier: everyone is happy.
But then…
After a few weeks, the first complaints come in: the application feels a bit slow. The situation slowly worsens until the application becomes unworkably slow. Restarting the servers seems to solve some of the problems, but they keep coming back. Regularly restarting the application and cleaning up data turn out to be ways to keep it stable, but managing the application becomes a daily concern. Suppliers point fingers at each other and escalations sour the working relationship. External consultants are called in to track down the cause, but once in production, the room for improvement turns out to be limited. Meanwhile, the relationship between the cooperating parties has deteriorated and the reputation of IT – or even of the brand – has been seriously damaged.
“Unfortunately, we often encounter this scenario in practice. And that’s a pity, because this could easily have been prevented!”
"But we did test, didn't we?"
How an application behaves under load can be tested with performance tests in a test environment before the application goes live. Anyone can put together and run such a test, but what use are the results if the test itself was not sound? The value of test results lies mainly in the quality of the test. A performance test is often seen as brute force unleashed on an application, but it is far more nuanced than that. Composing and performing a good performance test is specialist work.
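To give a rough idea of what such a test can look like (a sketch, not a prescription), the example below simulates visitors of a hypothetical web shop using the open-source Locust tool. The endpoints and task weights are made up for the illustration; the real ones should follow from how the application is actually used.

```python
from locust import HttpUser, task, between

class ShopVisitor(HttpUser):
    """Simulates one visitor; Locust spawns many of these concurrently."""
    wait_time = between(1, 3)  # think time between actions, in seconds

    @task(3)
    def browse_catalogue(self):
        # Hypothetical endpoint: most visitors spend their time browsing
        self.client.get("/products")

    @task(1)
    def view_product(self):
        # Hypothetical endpoint: fewer visitors open a product detail page
        self.client.get("/products/42")
```

Run this with a few hundred simulated users and you get response times and error rates per endpoint. Whether those numbers are acceptable depends entirely on the scenario you agreed on, which brings us to the next question.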
What scenario do you test?
A good performance test is characterised by the right composition of test scenarios, with which specific performance risks are deliberately targeted. For example: an application that is used daily by a small number of users must be fast, but does not need to withstand heavy load peaks. A website that is used in emergency situations is another story: it must be able to handle large numbers of users in a short period of time, so load capacity and scalability matter more than raw speed. Other questions may be: does the application recover on its own after being overloaded? And what happens if part of the infrastructure is temporarily unavailable? In short, carefully composed performance tests are carried out for specific scenarios, taking into account variables such as concurrency, server load, browser behaviour, speed, asynchronicity, etc.
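To make the "emergency peak" example concrete: in Locust such a scenario can be described as a load shape, so the test ramps up sharply and then holds the peak instead of applying a flat load. The numbers below are purely illustrative; the real figures should come from the agreed scenario (expected visitors, duration of the peak, and so on).

```python
from locust import LoadTestShape

class EmergencySpike(LoadTestShape):
    """Quiet baseline, then a sudden peak that is held long enough
    to observe stability and recovery. Illustrative numbers only."""

    def tick(self):
        run_time = self.get_run_time()
        if run_time < 60:
            return (50, 10)       # baseline: quiet everyday traffic
        if run_time < 600:
            return (2000, 200)    # sudden peak, held for roughly nine minutes
        return None               # end of the test
```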
Can anyone do it?
Performance testing: anyone can do it, right? The answer is 'no'. Performance testing is an important quality test that requires specialist knowledge and tooling. A well-executed performance test provides the knowledge needed to configure and scale the application correctly in a production environment. In some cases it also pays to automate performance tests in a DevOps environment.
Make clear agreements with the supplier about the expectations users have of the speed and usage of the application, and set explicit requirements for testing against them.
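One way to make such agreements enforceable is to translate them into concrete thresholds that an automated test run, for example in a DevOps pipeline, can check after every build. The sketch below is a simplified, hypothetical example: it assumes the load test has written per-request results to a CSV file with columns named duration_ms and failed, and it fails the pipeline step if the agreed 95th-percentile budget or error rate is exceeded. Both thresholds are placeholders; the real numbers belong in the agreement with the supplier.

```python
import csv
import statistics
import sys

# Hypothetical agreed requirements; the actual numbers belong in the contract.
P95_BUDGET_MS = 800      # 95% of requests must finish within 800 ms
MAX_ERROR_RATE = 0.01    # at most 1% of requests may fail

def check(results_file: str) -> int:
    durations, errors, total = [], 0, 0
    with open(results_file, newline="") as f:
        for row in csv.DictReader(f):   # assumed columns: duration_ms, failed
            total += 1
            durations.append(float(row["duration_ms"]))
            errors += int(row["failed"])

    p95 = statistics.quantiles(durations, n=20)[18]   # 95th percentile
    error_rate = errors / total if total else 1.0

    print(f"p95 = {p95:.0f} ms, error rate = {error_rate:.2%}")
    if p95 > P95_BUDGET_MS or error_rate > MAX_ERROR_RATE:
        return 1    # non-zero exit code fails the pipeline step
    return 0

if __name__ == "__main__":
    sys.exit(check(sys.argv[1]))
```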