The application that I’m working on – a REST API – has a full-fledged performance test suite that’s run for every release. That test suite is owned by a separate team responsible for performance testing. As a developer I limit myself to taking inputs from the performance tests and tuning the application wherever necessary. But I think there are some basics every performance test project should get right.
It is common knowledge that a basic performance test should measure both requests/second and response times. So the basic output would be a chart of response time against requests/second –

The Problem
The commitment to your customer might be something like “less than 500 ms response times for load up to 500 requests/second“. The performance test will generate a chart like the one above, and you can use it to show that the performance stays under 500 ms as committed for up to 500 req/s. The chart will also show that there is a disproportionate performance drop after a certain req/s, beyond which things get exponentially worse. You can decide whether or not to address this based on your budget and whether your users can be expected to push their workload past that problem point.
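To make that check concrete, here’s a small sketch in plain Python – the data points and field layout are made up for illustration – of how you could verify such a commitment against the numbers behind the chart instead of eyeballing it:

    # Minimal sketch: verify an SLA of "< 500 ms up to 500 req/s" against
    # measured data points. The data and thresholds are illustrative assumptions.
    SLA_MAX_LOAD_RPS = 500
    SLA_MAX_RESPONSE_MS = 500

    # Each tuple: (offered load in req/s, observed 95th-percentile response time in ms)
    measurements = [
        (100, 120), (200, 150), (300, 210),
        (400, 320), (500, 480), (600, 1900),
    ]

    def sla_met(points, max_rps, max_ms):
        """True if every point at or below the committed load stays within the response-time budget."""
        return all(ms <= max_ms for rps, ms in points if rps <= max_rps)

    print("SLA met:", sla_met(measurements, SLA_MAX_LOAD_RPS, SLA_MAX_RESPONSE_MS))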
Now my problem is: how do I define ‘requests/second’? In almost all applications there are vastly different types of requests. Executing just one type of request repeatedly and deriving a performance figure from it would be absolutely useless. Measuring all the different types of requests and then aggregating by average or median is also quite useless. In effect, the raw “req/s” attribute tells you nothing.
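Here’s a tiny, purely illustrative sketch – the endpoint names and timings are invented – of how an aggregate median over mixed request types hides a badly behaving endpoint:

    from statistics import median

    # Hypothetical response times (ms) for two very different request types.
    response_times_ms = {
        "get-product-by-id": [20, 25, 30, 22, 28],   # cheap read
        "create-order":      [900, 950, 1100, 980],  # talks to DB + payment service
    }

    # Aggregating everything into one number looks deceptively healthy.
    all_samples = [t for samples in response_times_ms.values() for t in samples]
    print("overall median:", median(all_samples), "ms")   # ~30 ms

    # Per-request-type numbers tell the real story.
    for name, samples in response_times_ms.items():
        print(f"{name}: median {median(samples)} ms")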
Define KPIs
You should define KPIs based on the functionality of the application. Instead of “req/s”, create meaningful attributes like “user logins per second”, “orders created per second” or “get-product-by-id requests per second”. You also need to define the metrics you measure against them – almost always this is response time, because that’s the one metric that affects users directly. But your application can have other metrics too, like the volume of logs produced, the number of network calls made, or the amount of RAM used. Many of these are rarely measured nowadays because it has become common to drastically oversize the hardware that runs the software. But still, some of them may be useful.
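For example, here’s a minimal sketch using Locust – the /login, /orders and /products/{id} endpoints, payloads and task weights are assumptions about a hypothetical shop API – where each business action is measured as its own KPI instead of one undifferentiated request stream:

    # Minimal Locust sketch: each @task is one business-level KPI
    # ("user logins per second", "orders created per second", ...),
    # so throughput and response times are reported per action.
    # The endpoints and payloads are hypothetical.
    from locust import HttpUser, task, between

    class ShopUser(HttpUser):
        wait_time = between(1, 3)  # think time between actions

        @task(1)
        def user_login(self):
            self.client.post("/login", json={"user": "alice", "password": "secret"},
                             name="user-logins")

        @task(3)
        def create_order(self):
            self.client.post("/orders", json={"product_id": 42, "quantity": 1},
                             name="orders-created")

        @task(10)
        def get_product_by_id(self):
            self.client.get("/products/42", name="get-product-by-id")

Because each call gets its own name, the results are grouped per business action, which maps directly onto KPIs like “orders created per second”.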
I believe the above point is one of the most ignored things in software development. You need to have clearly defined KPIs before you do your performance tests. Otherwise the performance testers will put whatever they feel like into the report, and you’ll end up with a vague, crowded document as your performance report.
Create an Environment
Second, you have to get the environment right. The infrastructure you use for performance testing should ideally be the same size as the one you use for production, and the data already present on the system should be production-sized too. For example, the number of products in the database will most likely affect how fast a single product can be retrieved. So you need a ‘seeding’ process that populates your performance test stack with production-like data.
It helps to build a ‘seeding program’ that generates test data from a configuration file. The config file might have settings like the number of users to create, the number of products to create, the number of orders each user already has, and so on. These settings can also be ranges – e.g., each user has 0 to 30 orders, distributed randomly. This might seem like overkill, but trust me, it will up your performance testing game to epic levels. It gives you the confidence to say you can performance test any given scenario and, consequently, future-proof your application.
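As a sketch of what I mean – the config keys, the seed.json file name and the create_* helpers are all hypothetical placeholders for whatever your application actually uses to insert data – the seeding program can be quite small:

    # Minimal sketch of a config-driven seeding program. The config keys, file
    # name and create_* helpers are placeholders for your app's real API or SQL.
    import json
    import random

    def seed(config_path="seed.json"):
        with open(config_path) as f:
            cfg = json.load(f)
        # e.g. {"users": 10000, "products": 50000, "orders_per_user": [0, 30]}

        for _ in range(cfg["products"]):
            create_product()

        lo, hi = cfg["orders_per_user"]
        for _ in range(cfg["users"]):
            user = create_user()
            # Each user gets a random number of pre-existing orders within the range.
            for _ in range(random.randint(lo, hi)):
                create_order(user)

    def create_product():
        ...  # insert a product via your app's API or directly into the DB

    def create_user():
        ...  # insert a user and return its identifier

    def create_order(user):
        ...  # insert an order for the given user

    if __name__ == "__main__":
        seed()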
Report to the Business
Write your report so that the business can understand it. Don’t make it a technical report like “response time vs. req/s”. Your report should tell stories – like “if the number of logged-in users increases by 50%, what happens to the response time for order registration?” If you started the process with clearly defined KPIs, this kind of report comes out naturally. And it communicates directly what needs to be done in the future, and what risks exist in the system.
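One way to get there is to run the same KPI-based test under a baseline scenario and a ‘what if’ scenario and phrase the comparison in business terms. A purely illustrative sketch – the numbers are made up:

    # Illustrative sketch: compare a baseline run against a "+50% logged-in users"
    # run and phrase the outcome in business language. The figures are invented.
    baseline = {"logged_in_users": 1000, "order_registration_p95_ms": 420}
    what_if  = {"logged_in_users": 1500, "order_registration_p95_ms": 610}

    user_increase = (what_if["logged_in_users"] / baseline["logged_in_users"] - 1) * 100
    print(
        f"If the number of logged-in users increases by {user_increase:.0f}%, "
        f"order registration slows from {baseline['order_registration_p95_ms']} ms "
        f"to {what_if['order_registration_p95_ms']} ms (95th percentile)."
    )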
“All requests must respond within a certain response time; if they don’t, just spend more money and throw more hardware at it” – this seems to have become a common substitute for a proper performance tune-up. Requests are not equal. One type of request might have to talk to the database. Another might have to wait for a service out on the internet. It’s simply wrong, and unhelpful, to lump all of an application’s requests together. Don’t do that. Go one step further and save some money by doing a proper performance analysis.