In today's fast-paced digital landscape, web applications must be fast, reliable, and responsive under heavy user loads. A robust load testing tool is critical for ensuring optimal application performance. This blog explores Locust, a popular and effective open source load testing tool that leverages Python to deliver scalable, user-friendly load tests for modern applications.
Load testing simulates user traffic to evaluate how applications perform under varying levels of demand. By using test scripts to simulate realistic scenarios, load testing identifies performance bottlenecks before they affect real users, helping applications remain responsive even during peak usage.
Load testing has evolved since the 1960s, when organizations began validating mainframe systems' capacity for critical business functions. The rapid growth of web applications in the late 1990s spurred the development of commercial tools like LoadRunner, enabling businesses to simulate traffic and identify bottlenecks. By the early 2000s, open source load testing tools like Apache JMeter offered cost-effective flexibility. However, modern cloud-native and distributed systems demanded lightweight, adaptable solutions.
In 2011, Carl Bystrom created Locust, an open source load testing tool written in Python. Its design emphasized three principles: writing test scripts in Python for readability, supporting distributed testing for scalability, and providing a real-time web interface for dynamic test control. Locust's open-source nature fostered rapid community contributions, making it a go-to choice for agile teams.
Today, Locust integrates seamlessly with DevOps workflows, CI/CD pipelines, and cloud environments. Its Python-based test scripts and distributed architecture make it ideal for testing API-driven and microservices-based applications, positioning Locust as a leading open source load testing tool.
Locust is a powerful open source load testing tool written in Python, offering simplicity and flexibility to developers and testers familiar with the language.
To get started with Locust-based load testing, first install Locust using pip, then define your test scenarios by writing a locustfile.py script.

Install Locust:
pip3 install locust
Validate your installation:
locust -V
Here is a simple example setup:
from locust import HttpUser, task

class HelloWorldUser(HttpUser):
    @task
    def hello_world(self):
        # Add a User-Agent header to mimic a browser request
        self.client.get("/login", headers={"User-Agent": "Mozilla/5.0"})
        self.client.get("/in/LoginHelp", headers={"User-Agent": "Mozilla/5.0"})
To run the test, simply execute:
locust -f locustfile.py --host=http://example.com
Total users to simulate: For distributed Locust runs, the initial number of simulated users should be at least the number of user classes multiplied by the number of worker nodes. In our case, we used 1 user class and 3 worker nodes.
Hatch rate: If the hatch rate (users spawned per second) is lower than the number of worker nodes, users are hatched in "bursts": every worker node hatches a single user, then sleeps for several seconds, hatches another user, sleeps, and so on.
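To make the two guidelines above concrete, here is a small illustrative sketch (plain Python, not Locust's actual scheduling code) of how a user count divides across workers and why a low hatch rate produces bursts:

```python
# Hypothetical sketch of user distribution across Locust workers.
# Function names are illustrative, not part of Locust's API.

def per_worker_split(total_users: int, workers: int) -> list[int]:
    """Split total_users as evenly as possible across workers."""
    base, extra = divmod(total_users, workers)
    return [base + (1 if i < extra else 0) for i in range(workers)]

# With 1 user class and 3 workers, 30 users split evenly:
print(per_worker_split(30, 3))   # [10, 10, 10]

# A hatch rate of 1 user/s shared across 3 workers means each worker
# spawns a new user only every 3 seconds, producing the "bursts"
# described above:
hatch_rate, workers = 1, 3
seconds_between_spawns_per_worker = workers / hatch_rate
print(seconds_between_spawns_per_worker)   # 3.0
```

This is why keeping the user count at or above (user classes × workers) avoids workers sitting idle between spawns.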
If the number of workers shown on the dashboard exceeds the number of worker nodes available, redeploy the dashboard with the required number of worker nodes/instances.
After swarming for a while, your dashboard will look something like this:
Requests: Total number of requests made so far
Fails: Number of requests that have failed
Median: 50th-percentile response time in ms
90%ile: 90th-percentile response time in ms
Average: Average response time in ms
Min: Minimum response time in ms
Max: Maximum response time in ms
Average size (bytes): Average response size in bytes
Current RPS: Current requests per second
Current Failures/s: Number of failed requests per second
Your graphs will look something like this:
These graphs can be downloaded using the download icon next to them.
You can download the data under the download data tab.
You can analyze the graphs based on response and volume metrics.
Average response time measures the average amount of time that passes between a client’s initial request and the last byte of a server’s response, including the delivery of HTML, images, CSS, JavaScript, and any other resources. It’s the most accurate standard measurement of the actual user experience.
Peak response time measures the roundtrip of a request/response cycle (RTT) but focuses on the longest cycle rather than taking an average. High peak response times help identify problematic anomalies.
Error rates measure the percentage of problematic requests compared to total requests. It’s not uncommon to have some errors with a high load, but obviously, error rates should be minimized to optimize the user experience.
Concurrent users measure how many virtual users are active at a given point in time. While similar to requests per second (see below), the difference is that each concurrent user can generate a high number of requests.
Requests per second measures the raw number of requests that are being sent to the server each second, including requests for HTML pages, CSS stylesheets, XML documents, JavaScript files, images, and other resources.
Throughput measures the amount of bandwidth, in kilobytes per second, consumed during the test. Low throughput could suggest the need to compress resources.
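The volume metrics above are simple ratios over the test run. A toy calculation, with aggregate numbers invented for illustration:

```python
# Toy calculation of error rate, requests per second, and throughput
# from aggregate load-test numbers (all values are made up).

total_requests = 12_000
failed_requests = 180
duration_s = 300          # a 5-minute test run
total_bytes = 9_600_000   # sum of all response sizes

error_rate_pct = 100 * failed_requests / total_requests
rps = total_requests / duration_s
throughput_kbps = total_bytes / 1024 / duration_s

print(f"Error rate: {error_rate_pct:.1f}%")        # 1.5%
print(f"RPS: {rps:.0f}")                           # 40
print(f"Throughput: {throughput_kbps:.2f} KB/s")   # 31.25 KB/s
```

Tracking these alongside the response-time metrics shows whether slowdowns coincide with rising request volume or rising error rates.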
As applications grow in complexity with microservices and cloud-native architectures, open source load testing remains vital, and emerging trends such as tighter CI/CD integration and cloud-scale distributed testing continue to shape Locust's role.
Locust is a powerful open source load testing tool that empowers developers, testers, and DevOps teams to ensure applications meet real-world demands. Its Python-based test scripts, built from @task-decorated methods and lifecycle hooks such as on_start, offer unmatched flexibility for simulating user behaviors. The real-time web interface and distributed testing capabilities make Locust ideal for agile and CI/CD environments. As an open source solution, Locust's active community ensures continuous improvement, making it a top choice for optimizing application performance and delivering exceptional user experiences in today's complex software landscape.