In August last year I wrote a blog post detailing some challenges I had come across whilst attempting to create a performance testing strategy at Craneware. You can read it here: Challenge I Face With Performance Testing.
Towards the end of that article I asked the following questions:
- Which toolset should we use to create our performance tests?
- How can we plug this into our Application Insights APM solution?
- How do we best integrate this into our CI/build process?
- Where should the tests be run from?
- Which tests should be included in a Definition of Done?
I’m now going to use this post to give an update on where I got to with answering these.
Which toolset should we use to create our performance tests?
I quickly figured out that there's no silver bullet when it comes to answering this question. A colleague and I evaluated a number of tools (Load Testing within Visual Studio, Artillery) but eventually settled on Apache JMeter.
It's open source, so we needed no business sign-off to use it, and it's one of the most widely adopted performance testing tools, so there was no lack of support on Stack Overflow and elsewhere. It was also easy to get up and running as it has a GUI.
As easy as it was to get tests up and running, there was still a lot to explore. I made sure we read and re-read the (very good) documentation. I figured out early on that it's really easy in performance testing to produce false results. I also had some frustrations around how much RAM and CPU JMeter used when running heavy load tests. I can't emphasise enough the importance of following these best practices when using JMeter.
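The single biggest of those best practices is to run load tests in non-GUI mode (the GUI is for building and debugging test plans, not executing them). A minimal sketch of a heavy-load run, assuming JMeter is on the PATH and the test plan file name is a placeholder:

```shell
# Non-GUI run with a raised JVM heap for heavier load tests.
#   -n        non-GUI mode
#   -t        the test plan (api_load_test.jmx is an example name)
#   -l        raw results log in CSV (.jtl) format
#   -e -o     generate the HTML dashboard report into report/ afterwards
# JVM_ARGS is honoured by the jmeter startup script; sizes are examples only.
JVM_ARGS="-Xms1g -Xmx4g" jmeter -n -t api_load_test.jmx -l results.jtl -e -o report/
```

Disabling listeners during the run and letting the dashboard do the reporting afterwards also keeps JMeter's own RAM and CPU footprint out of your measurements.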
Overall though, I’m happy with our choice. Teams found it easy to get to grips with after we created an initial test template and our performance test script repository is growing quite quickly.
How can we plug this into our Application Insights APM solution?
Still to be decided. The results we use come from a JMeter graph-generator plugin, and teams check CPU usage etc. in our APM solution while tests are running, but it's a very manual process.
How do we best integrate this into our CI/build process?
Also still to be decided. My colleague and I are going to suggest we use BlazeMeter to host our JMeter test scripts as I believe the benefits far outweigh the cost. We’ll be making a business proposal in the near future.
The alternative is to develop our own command-line tooling to create a build step in our API releases on TFS. This would take a lot of development effort both to create and to maintain.
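To give an idea of what that build step might look like: after a non-GUI JMeter run, a script can parse the .jtl results file and fail the build when the error rate crosses a threshold. This is an illustrative sketch, not our actual tooling; the file name, threshold and sample data are all made up.

```shell
# Sketch: fail a build when the error rate in a JMeter CSV results file
# (.jtl) exceeds a threshold. Everything here is illustrative.

check_error_rate() {
  # $1 = results file, $2 = max allowed error percentage
  pct=$(awk -F, '
    NR == 1 { for (i = 1; i <= NF; i++) if ($i == "success") col = i; next }
    { total++; if ($col != "true") errors++ }
    END { printf "%d", (errors / total) * 100 }
  ' "$1")
  echo "error rate: ${pct}%"
  [ "$pct" -le "$2" ]
}

# Tiny stand-in results file so the sketch runs end to end.
cat > /tmp/results.jtl <<'EOF'
timeStamp,elapsed,label,responseCode,success
1,100,Login,200,true
2,120,Login,200,true
3,110,Login,500,false
4,130,Login,200,true
EOF

check_error_rate /tmp/results.jtl 5 || echo "performance gate failed"
```

A TFS build step would call a script like this with the real results file and a per-API threshold, and the non-zero exit code would fail the release.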
Where should the tests be run from?
Currently our tests are run from our local machines, but we now also have several Azure-hosted VMs at our disposal for some of our heavier load tests. In our CD process the tests will run on VMs.
Which tests should be included in a Definition of Done?
We were successful in getting the following tests into the scrum teams’ Definition of Done. It’s a very small start, but it’s a start. We had no performance testing at all before the end of 2017 so I’m quite proud of what we achieved with the teams adopting these tests:
- Single user load test – does the API meet our NFR with a single user making requests? This could be used as a smoke test in Production in the future.
- Expected load test – teams define the number of users making requests before an API is developed. We then test with this number of users and ensure our NFRs are met.
- Maximum expected load test – as above, but we run with the maximum number of requests we can expect in our system.
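These three tests differ mainly in how many concurrent users they simulate, so a single test plan can cover all of them if the thread count is read from a JMeter property (e.g. `${__P(users,1)}` in the Thread Group). A sketch, with placeholder file names and user counts:

```shell
# One test plan, three Definition-of-Done runs; the thread count is passed
# per run with -J and read in the plan via ${__P(users,1)}. Numbers and
# file names are examples only.
jmeter -n -t api_test.jmx -Jusers=1   -l single_user.jtl     # single user load test
jmeter -n -t api_test.jmx -Jusers=50  -l expected_load.jtl   # expected load test
jmeter -n -t api_test.jmx -Jusers=200 -l max_expected.jtl    # maximum expected load test
```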
We're obviously missing a few types of performance test here, namely soak, spike and stress tests.
I think our biggest victory is that teams are now talking about performance from the very beginning of development, and re-working solutions based on performance concerns. This wasn't happening this time last year, which I'd call a real win, especially considering neither myself nor my colleague (who were responsible for delivering this strategy) are dedicated performance testing experts. We're testers within scrum teams who research this within our self-learning time.
We’ve now hired a Performance Engineer who will take the lead on further developing our performance testing strategy. I’m looking forward to working closely with her and learning from her experience.