IoT Performance Testing Tools
This agricultural technology (AgTech) company is revolutionising the livestock industry with its proprietary smart ear tag and information platform.
The ear tag incorporates GPS, an accelerometer, an ambient temperature sensor, Bluetooth, and satellite connectivity. These are used to collect various types of information about the animal, while on-tag analytics monitor whether its behaviour is following normal patterns.
The tag collects and transmits summarised data to low Earth orbit satellites, which relay it to a central data platform, where it is accessed by customer-authenticated software partners for visualisation and data analytics.
The information provided allows customers to optimise their operational decision making, improve detection of stolen and wandering livestock, and gain increased insights into animal welfare and health.
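To give a concrete, purely illustrative picture of the kind of summarised record involved, the sketch below shows what a single tag report might look like. The field names, units, and values are assumptions for illustration only, not the company's actual payload schema.

```python
# Hypothetical sketch of a summarised ear-tag record; field names and units
# are illustrative assumptions, not the company's actual payload schema.
from dataclasses import dataclass, asdict
import json


@dataclass
class TagSummary:
    tag_id: str              # unique ear-tag identifier
    window_start: str        # ISO-8601 start of the summarised period
    window_end: str          # ISO-8601 end of the summarised period
    lat: float               # last known GPS latitude
    lon: float               # last known GPS longitude
    avg_temp_c: float        # mean ambient temperature over the window
    activity_index: float    # accelerometer-derived activity score
    anomaly_flag: bool       # set by on-tag analytics when behaviour deviates


record = TagSummary(
    tag_id="TAG-000123",
    window_start="2023-05-01T06:00:00Z",
    window_end="2023-05-01T12:00:00Z",
    lat=-27.4705,
    lon=153.0260,
    avg_temp_c=24.8,
    activity_index=0.62,
    anomaly_flag=False,
)

# Compact JSON keeps the message small for the low-bandwidth satellite uplink.
payload = json.dumps(asdict(record), separators=(",", ":"))
print(payload)
```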
The Internet-of-Things (IoT) nature of the solution meant that performance testing it would not be easy or straightforward. Although the ear tags regularly communicate with a server like many other mobile devices, their usage pattern is quite different, which means the solution cannot simply be tested as a web or mobile application.
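As a rough illustration of that difference, a device-style workload consists of a large fleet of tags each submitting a small payload on a staggered schedule, rather than interactive users clicking through pages. The Python sketch below simulates such a pattern; the endpoint, payload shape, and volumes are hypothetical.

```python
# Minimal sketch of a device-style load pattern: many "tags" each posting a
# small summarised payload at infrequent, staggered intervals. The endpoint
# and payload shape are illustrative assumptions only.
import random
import threading
import time
import urllib.request

INGEST_URL = "https://example.invalid/api/ingest"   # hypothetical endpoint
TAG_COUNT = 100                                      # simulated ear tags
UPLOADS_PER_TAG = 3                                  # uploads per tag in this run


def simulate_tag(tag_id: int) -> None:
    for _ in range(UPLOADS_PER_TAG):
        # Devices report on a schedule with jitter, unlike interactive users.
        time.sleep(random.uniform(1.0, 5.0))
        body = ('{"tag_id": "TAG-%06d", "activity_index": %.2f}'
                % (tag_id, random.random())).encode()
        req = urllib.request.Request(
            INGEST_URL, data=body,
            headers={"Content-Type": "application/json"}, method="POST")
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                resp.read()
        except Exception:
            pass  # a real test would record the error for the error-rate report


threads = [threading.Thread(target=simulate_tag, args=(i,)) for i in range(TAG_COUNT)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```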
Another consideration is that a cloud-based platform is used for IoT application enablement and data management. Performance volume on the IoT application is expected to grow by over 700% for users and 2,998% for data in just four years, and that does not account for any other markets the solution may expand into during that time.
To ensure that the IoT solution mitigates identified risks and performs well, the computation and connectivity between the ear tags, data platform, and network infrastructure had to be thoroughly tested. Since the solution’s functionality and user base would continuously expand and grow, they also required a means to continuously improve and ensure its performance.
This called for a specific skillset, prompting the company to look externally for a reliable and trusted testing partner, one with years of knowledge and experience in embedding quality into IoT solutions.
Less than 1% error rate over the entire test.
Identified and solved an IoT database concurrency issue that impacted critical business processes.
Ensured robust performance for higher-than-normal loads, such as during the product’s launch.
System optimised to achieve throughput of 0.29 successful requests/second.
Ensured an average response time of 1.3 seconds for key system functions.
Up to 95% of submits below five seconds, ensuring future scalability and growth.
Cost efficiencies gained through optimal tool selection.
The company engaged Planit through the recommendation of an existing software partner. They were impressed by the structured simplicity of our performance engineering framework, which follows a “deliver once, deliver well” approach to achieve maximum quality, reduce duplication of effort, and save costs.
Our engagement began with discovery and planning. This consisted of an assessment of their test assets, tools, environments, applications, test data, and more to allow us to tailor a solution that meets their needs and solves their challenges.
During this phase, we uncovered several performance risks that needed to be addressed. One key consideration was the service level agreements (SLAs) for availability and response times with third-party providers, since these would determine how well the platform was placed for future scaling.
Other key risk areas we initially identified were:
With these considerations in mind, we constructed and implemented a customised performance testing solution. Our framework ensured it was built to be scalable and maintainable to meet their requirements in a cost-effective way, and to provide immediate results.
As part of this step, we assisted in selecting the right load generation tool based on their needs. A proof of concept was conducted to identify their requirements, understand the protocols involved, and shortlist tools to trial.
Apache JMeter was selected for its strong API and web UI functionality. Not only did it meet all their requirements, it also had the added benefit of being a free, open-source tool.
Since the company and its software partners used Azure DevOps, we implemented an automated performance test execution and reporting framework that harnessed it. Doing so would enable faster performance testing of code and uncover performance issues as quickly as possible. It was also designed to make intelligent use of secure and scalable IaaS virtual servers to meet the planned growth of up to 700% in data and expansion into global markets.
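A simplified sketch of what such a pipeline step can look like is shown below: JMeter is run in non-GUI mode (the -n, -t, and -l switches are its standard command-line options), the CSV results are parsed, and the step fails if the error rate or average response time breaches a threshold. The test plan name and thresholds are illustrative assumptions, not the company's actual configuration, and the sketch assumes JMeter is writing CSV results with a header row (its default in recent versions).

```python
# Sketch of a CI step that runs a JMeter test plan in non-GUI mode and fails
# the build when error rate or average response time breach a threshold.
# The plan name and thresholds are illustrative assumptions.
import csv
import subprocess
import sys

PLAN = "data_platform_load.jmx"      # hypothetical JMeter test plan
RESULTS = "results.jtl"              # JMeter writes CSV samples here
MAX_ERROR_RATE = 0.01                # e.g. the < 1% error-rate target
MAX_AVG_ELAPSED_MS = 1300            # e.g. the 1.3 second average target

# -n = non-GUI, -t = test plan, -l = results file (standard JMeter options).
subprocess.run(["jmeter", "-n", "-t", PLAN, "-l", RESULTS], check=True)

elapsed, failures = [], 0
with open(RESULTS, newline="") as fh:
    for row in csv.DictReader(fh):
        elapsed.append(int(row["elapsed"]))
        if row["success"].lower() != "true":
            failures += 1

if not elapsed:
    sys.exit("no samples recorded")

error_rate = failures / len(elapsed)
avg_ms = sum(elapsed) / len(elapsed)
print(f"samples={len(elapsed)} error_rate={error_rate:.2%} avg={avg_ms:.0f} ms")

# A non-zero exit code fails the Azure DevOps pipeline step.
if error_rate > MAX_ERROR_RATE or avg_ms > MAX_AVG_ELAPSED_MS:
    sys.exit(1)
```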
The following key areas were tested and evaluated for performance over a variety of scenarios:
Through our performance testing, we aimed to determine:
Non-functional baselines that we set for the data platform included:
Performance testing was securely implemented and optimised across the company’s virtual machines within our Continuous Performance Testing Framework. Two rounds of testing were executed, with test reports automatically generated after each round and presented to the company as interim reports.
At the end of the testing, a final performance summary report was presented. It outlined how the data platform performed against the targets we set for it, as well as the actual risks we uncovered, mitigations implemented, and recommendations to further mitigate risk.
Our insights into the performance of the data platform enabled the company to go live with confidence, knowing that their solution would run well for its customers. They also received a continuous performance solution to help them monitor performance and unlock further speed improvements.
The robustness and stability of the solution was assessed by comparing results across like-for-like tests, with an error rate of less than 1% registered across the whole testing period. Any errors uncovered were analysed and resolved after each test.
Through baseline performance testing, we identified a concurrency issue in the IoT database that impacted critical business processes. The IoT database vendor promptly released a patch to address the issue, and our subsequent retesting showed that the fix worked.
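This class of defect is typically surfaced with a concurrency probe: many workers updating the same record at the same time while the test checks for errors or lost updates. The generic sketch below uses SQLite purely for illustration; it is not the IoT database in question, and the schema is hypothetical.

```python
# Generic sketch of a database concurrency probe: many workers increment the
# same counter row concurrently, and we check for errors or lost updates.
# SQLite is used purely for illustration; it is not the actual IoT database.
import sqlite3
import threading

DB = "probe.db"
WORKERS = 20
UPDATES_PER_WORKER = 50

conn = sqlite3.connect(DB)
conn.execute("CREATE TABLE IF NOT EXISTS counter (id INTEGER PRIMARY KEY, value INTEGER)")
conn.execute("INSERT OR REPLACE INTO counter (id, value) VALUES (1, 0)")
conn.commit()
conn.close()

errors = []


def worker() -> None:
    local = sqlite3.connect(DB, timeout=5)
    for _ in range(UPDATES_PER_WORKER):
        try:
            # Atomic in-database increment avoids a read-modify-write race.
            local.execute("UPDATE counter SET value = value + 1 WHERE id = 1")
            local.commit()
        except sqlite3.OperationalError as exc:   # e.g. lock contention
            errors.append(str(exc))
    local.close()


threads = [threading.Thread(target=worker) for _ in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

check = sqlite3.connect(DB)
final = check.execute("SELECT value FROM counter WHERE id = 1").fetchone()[0]
check.close()
expected = WORKERS * UPDATES_PER_WORKER
print(f"expected={expected} actual={final} errors={len(errors)}")
```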
Subsequent performance testing showed that the system could handle expected higher-than-normal loads. Following all optimisations made to the system, a throughput of 0.29 successful requests per second was achieved, with key system functions responding in 1.3 seconds on average.
Stress testing highlighted that up to 95% of submits would complete below the agreed five seconds, indicating that the capacity of the system was well placed to handle growth into other regions of the globe. Having discovered that it took UK users up to 20 seconds to sign in to the data platform, compared to the two seconds or less experienced by Australian users, we were able to recommend deploying additional servers to reduce the impact of network latency.
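Headline figures such as the 1.3 second average and the proportion of submits under five seconds are simply the mean and 95th percentile of the recorded response times, which can be derived directly from the raw results. The small sketch below shows the calculation with illustrative values only.

```python
# Sketch of deriving the headline figures (average and 95th percentile) from
# a list of recorded response times; the sample values are illustrative only.
import statistics

response_times_s = [0.9, 1.1, 1.2, 1.3, 1.4, 1.6, 2.0, 3.8, 4.7, 6.2]

avg = statistics.mean(response_times_s)
p95 = statistics.quantiles(response_times_s, n=20)[18]  # 95th percentile

print(f"average = {avg:.2f} s")
print(f"95th percentile = {p95:.2f} s (target: below 5 s)")
```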
Our performance testing not only helped improve the data platform and better forecast its needs before customers started using it; it also provided the development team with a continuous feedback approach in which speed is considered throughout the entire lifecycle. By closely aligning our performance testing approach with the company’s continuous delivery and continuous deployment practice, their code can be created and tested quickly with speed in mind.
The benefit of this approach is that unit-level performance tests will uncover slow code, which can then be stopped from making its way into the application. Deployed code is also quickly tested for speed and resiliency, with results feeding back into the design of new features.
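One common way to realise this at the unit level is a time-budget assertion around a critical code path, so that a slow change fails the build. The pytest-style sketch below is a generic illustration rather than the company's actual tests; the function under test and the budget are assumptions.

```python
# Hedged sketch of a unit-level performance gate: a pytest-style test that
# fails when a critical function exceeds its agreed time budget. The function
# under test and the budget are illustrative assumptions.
import time


def summarise_tag_readings(readings):
    """Stand-in for a critical code path, e.g. summarising raw tag readings."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }


def test_summarise_within_budget():
    readings = [float(i % 40) for i in range(100_000)]
    start = time.perf_counter()
    summarise_tag_readings(readings)
    elapsed = time.perf_counter() - start
    # Budget chosen for illustration; a breach here stops slow code at the gate.
    assert elapsed < 0.05, f"too slow: {elapsed:.3f} s"
```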
By implementing the continuous performance testing approach it wanted for its delivery, the company is well positioned to automate more of its performance testing and reporting. These efficiencies will enable it to detect and fix performance issues early, before they become costly or time-consuming.