
Getting started with wrk and wrk2 benchmarking

This blog documents how to use the wrk and wrk2 benchmarking tools, covering installation, usage, and report generation. It also provides a simple Dockerfile containing both wrk and wrk2.

Nitika Agarwal
4 min read · Jan 10, 2021

Wrk

Wrk is a modern HTTP benchmarking tool, written in C, that can be used to test the performance of an API.

Source Code: https://github.com/wg/wrk

Installation

  1. Mac: brew install wrk
  2. Linux (Ubuntu/Debian):
sudo apt-get install build-essential libssl-dev git -y
git clone https://github.com/wg/wrk.git wrk
cd wrk
make
# move the executable to somewhere in your PATH, ex:
sudo cp wrk /usr/local/bin

Usage

Below are the options available when running a wrk benchmark:

Usage: wrk <options> <url>
Options:
  -c, --connections <N>  Connections to keep open
  -d, --duration    <T>  Duration of test
  -t, --threads     <N>  Number of threads to use

  -s, --script      <S>  Load Lua script file
  -H, --header      <H>  Add header to request
      --latency          Print latency statistics
      --timeout     <T>  Socket/request timeout
  -v, --version          Print version details

  Numeric arguments may include a SI unit (1k, 1M, 1G)
  Time arguments may include a time unit (2s, 2m, 2h)

For a GET API:

$ wrk -t5 -c55 -d5 --latency http://localhost:8080/users

For a POST API:

To benchmark a POST request, wrk requires a Lua script file that specifies the request headers and request payload.

$ wrk -t5 -c5 -d5 -s path/to/post.lua --latency http://localhost:8080/users

The post.lua script looks like the following:
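A minimal post.lua sets the method, body, and headers through the global `wrk` table that wrk exposes to scripts; the JSON payload and header values below are placeholders, not part of any real API:

```lua
-- post.lua: configure every request sent by wrk.
wrk.method = "POST"
wrk.body   = '{"name": "test-user", "email": "test@example.com"}'
wrk.headers["Content-Type"] = "application/json"
```

wrk loads this file via `-s path/to/post.lua` and applies these settings to each request it sends.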

Sample Report

The wrk report provides Latency and Requests/Sec statistics, along with the latency distribution by percentile.
Below is an example of a report generated by wrk:

$ wrk -t5 -c55 -d5 --latency http://localhost:8080/users
Running 5s test @ http://localhost:8080/users
  5 threads and 55 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.24ms   20.72ms 300.40ms   93.99%
    Req/Sec   467.36    116.67   727.00     70.21%
  Latency Distribution
     50%    1.07ms
     75%    1.16ms
     90%    1.55ms
     99%   65.66ms
  2243 requests in 5.01s, 6.07MB read
Requests/sec:    447.90
Transfer/sec:      1.21MB

Wrk2

Wrk2 is a constant-throughput, correct-latency-recording variant of wrk: it holds a fixed request rate and measures latency from each request's intended send time.

Source Code: https://github.com/giltene/wrk2
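The "correct latency recording" refers to coordinated omission: a closed-loop tool like wrk sends the next request only after the previous one completes, so a server stall is recorded as a single slow sample, while wrk2's constant schedule measures latency from each request's intended send time and so records the backlog the stall creates. The toy Python simulation below (a sketch, not code from either tool) makes the difference concrete:

```python
# Toy model of coordinated omission: a server that answers in 1 ms,
# but stalls for 1 second starting at t = 100 ms. Service times during
# the stall are collapsed for simplicity; this is an illustration only.

SERVICE_MS = 1.0
STALL_AT_MS = 100.0
STALL_END_MS = 1100.0   # stall lasts 1000 ms
INTERVAL_MS = 10.0      # intended constant rate: one request every 10 ms
TOTAL = 200

def respond(t_send):
    """Completion time for a request reaching the server at t_send."""
    if t_send < STALL_AT_MS:
        return t_send + SERVICE_MS
    return max(t_send + SERVICE_MS, STALL_END_MS)

# Closed-loop client (wrk-style): the next request goes out only after the
# previous one completes, so the stall shows up as one slow sample.
t, closed = 0.0, []
for _ in range(TOTAL):
    done = respond(t)
    closed.append(done - t)
    t = done

# Constant-rate client (wrk2-style): latency is measured from the intended
# send time, so every request scheduled during the stall records the delay.
rate = [respond(i * INTERVAL_MS) - i * INTERVAL_MS for i in range(TOTAL)]

p99 = lambda xs: sorted(xs)[int(0.99 * len(xs))]
print(f"closed-loop p99:   {p99(closed):.0f} ms")  # 1 ms - stall is hidden
print(f"constant-rate p99: {p99(rate):.0f} ms")    # 990 ms - stall is visible
```

Both clients see the same server, yet the closed-loop p99 looks healthy because the client simply stopped sending during the stall; this is exactly the gap wrk2's `--u_latency` vs `--latency` output exposes.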

Installation

  1. Mac:
brew tap jabley/homebrew-wrk2
brew install --HEAD wrk2

  2. Linux (Ubuntu/Debian):

sudo apt-get update
sudo apt-get install -y build-essential libssl-dev git zlib1g-dev
git clone https://github.com/giltene/wrk2.git
cd wrk2
make
# move the executable to somewhere in your PATH; the wrk2 build also
# produces a binary named wrk, so install it as wrk2 to avoid a clash:
sudo cp wrk /usr/local/bin/wrk2

Usage

wrk2 is used much like wrk, with one additional argument, -R, which specifies the constant rate (throughput) in requests/sec to generate against the API.

Usage: wrk2 <options> <url>
Options:
  -c, --connections <N>  Connections to keep open
  -d, --duration    <T>  Duration of test
  -t, --threads     <N>  Number of threads to use

  -s, --script      <S>  Load Lua script file
  -H, --header      <H>  Add header to request
  -L  --latency          Print latency statistics
  -U  --u_latency        Print uncorrected latency statistics
      --timeout     <T>  Socket/request timeout
  -B, --batch_latency    Measure latency of whole
                         batches of pipelined ops
                         (as opposed to each op)
  -v, --version          Print version details
  -R, --rate        <T>  work rate (throughput)
                         in requests/sec (total)
                         [Required Parameter]


Numeric arguments may include a SI unit (1k, 1M, 1G)
Time arguments may include a time unit (2s, 2m, 2h)

Sample Report

Below is an example of a report generated by wrk2:

$ wrk2 -t5 -c10 -d20 -R10 -L http://localhost:8080/users
Running 20s test @ http://localhost:8080/users
  5 threads and 10 connections
  Thread calibration: mean lat.: 167.044ms, rate sampling interval: 388ms
  Thread calibration: mean lat.: 178.368ms, rate sampling interval: 499ms
  Thread calibration: mean lat.: 167.169ms, rate sampling interval: 395ms
  Thread calibration: mean lat.: 165.825ms, rate sampling interval: 401ms
  Thread calibration: mean lat.: 166.924ms, rate sampling interval: 415ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   158.57ms   27.22ms 305.15ms   78.00%
    Req/Sec     1.83      2.04     5.00     84.48%
  Latency Distribution (HdrHistogram - Recorded Latency)
 50.000%  153.47ms
 75.000%  160.51ms
 90.000%  191.87ms
 99.000%  279.55ms
 99.900%  305.41ms
 99.990%  305.41ms
 99.999%  305.41ms
100.000%  305.41ms

  Detailed Percentile spectrum:
       Value   Percentile   TotalCount 1/(1-Percentile)

     122.751     0.000000            1         1.00
     132.735     0.100000           10         1.11
     141.567     0.200000           20         1.25
     145.023     0.300000           30         1.43
     148.863     0.400000           40         1.67
     153.471     0.500000           52         2.00
     153.983     0.550000           55         2.22
     155.903     0.600000           60         2.50
     158.079     0.650000           65         2.86
     158.719     0.700000           70         3.33
     160.511     0.750000           75         4.00
     166.143     0.775000           78         4.44
     166.527     0.800000           80         5.00
     170.879     0.825000           83         5.71
     177.535     0.850000           85         6.67
     189.951     0.875000           88         8.00
     190.463     0.887500           89         8.89
     191.871     0.900000           90        10.00
     196.351     0.912500           92        11.43
     197.503     0.925000           93        13.33
     198.015     0.937500           94        16.00
     200.447     0.943750           95        17.78
     200.447     0.950000           95        20.00
     207.231     0.956250           96        22.86
     208.767     0.962500           97        26.67
     208.767     0.968750           97        32.00
     209.535     0.971875           98        35.56
     209.535     0.975000           98        40.00
     209.535     0.978125           98        45.71
     279.551     0.981250           99        53.33
     279.551     0.984375           99        64.00
     279.551     0.985938           99        71.11
     279.551     0.987500           99        80.00
     279.551     0.989062           99        91.43
     305.407     0.990625          100       106.67
     305.407     1.000000          100          inf
#[Mean    =      158.572, StdDeviation   =       27.217]
#[Max     =      305.152, Total count    =          100]
#[Buckets =           27, SubBuckets     =         2048]
----------------------------------------------------------
  200 requests in 20.03s, 9.65MB read
Requests/sec:      9.99
Transfer/sec:    493.51KB

This output can be plotted via the HdrHistogram plotter (https://hdrhistogram.github.io/HdrHistogram/plotFiles.html) to obtain the latency graph.
The graph for the above report looks like the one below:

Latency By Percentile Distribution Graph

Using wrk and wrk2 via Docker

Here is a very simple Dockerfile that can be built and used for both wrk and wrk2.
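A minimal sketch of such a Dockerfile (assuming a Debian base image; the image name and paths here are illustrative) builds both tools from source. Since wrk2's build also produces a binary named wrk, it is installed as wrk2:

```dockerfile
# Build wrk and wrk2 from source on a Debian base image.
FROM debian:bullseye
RUN apt-get update && \
    apt-get install -y build-essential libssl-dev git zlib1g-dev && \
    rm -rf /var/lib/apt/lists/*

RUN git clone https://github.com/wg/wrk.git /tmp/wrk && \
    make -C /tmp/wrk && \
    cp /tmp/wrk/wrk /usr/local/bin/wrk

# wrk2's build output is also called wrk; install it under the name wrk2.
RUN git clone https://github.com/giltene/wrk2.git /tmp/wrk2 && \
    make -C /tmp/wrk2 && \
    cp /tmp/wrk2/wrk /usr/local/bin/wrk2

ENTRYPOINT ["wrk"]
```

A container from this image runs wrk by default; wrk2 can be selected with `--entrypoint wrk2`, e.g. `docker run --rm --network host wrk-bench -t5 -c55 -d5 --latency http://localhost:8080/users` (on Docker Desktop, use `host.docker.internal` instead of `localhost` to reach the host).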

The above Dockerfile can be built using:

$ docker build -t wrk-bench:latest .
