Please ApacheBench my Server

I'm looking to get a realistic requests-per-second reading for my server, but when I run ApacheBench locally I get an insanely high number (1851.64 [#/sec]), and since I'm on Windows I can't run the test from my personal machine. Can anyone use their personal machine (or a Linode not located on the NJ network) to ApacheBench my server and post the results?

ab -n 1000 -c 5 http://veenstra.ca/

Or perhaps someone knows how to get a realistic number locally on a Windows machine?

10 Replies

From a Linode in London:

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      5
Time taken for tests:   50.430 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6199344 bytes
HTML transferred:       5841987 bytes
Requests per second:    19.83 [#/sec] (mean)
Time per request:       252.150 [ms] (mean)
Time per request:       50.430 [ms] (mean, across all concurrent requests)
Transfer rate:          120.05 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       78   83   3.9     82      90
Processing:   159  169   8.0    166     207
Waiting:       81   86   4.1     84     124
Total:        238  252  11.9    248     288

Percentage of the requests served within a certain time (ms)
  50%    248
  66%    249
  75%    270
  80%    270
  90%    271
  95%    271
  98%    271
  99%    272
 100%    288 (longest request)

Ouch. Thank you!

I'd love to see another test from North America.

From a Linode in Fremont:

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      5
Time taken for tests:   48.646 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6199344 bytes
HTML transferred:       5841987 bytes
Requests per second:    20.56 [#/sec] (mean)
Time per request:       243.230 [ms] (mean)
Time per request:       48.646 [ms] (mean, across all concurrent requests)
Transfer rate:          124.45 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       76   80   2.9     82      86
Processing:   155  163   6.0    167     192
Waiting:       79   83   3.1     85     108
Total:        232  243   8.9    249     275

Percentage of the requests served within a certain time (ms)
  50%    249
  66%    250
  75%    252
  80%    252
  90%    253
  95%    253
  98%    253
  99%    254
 100%    275 (longest request)

Damn, isn't 20 rps fairly slow for an Nginx/PHP-FPM with APC server serving an extremely basic PHP page?

Any tips on how to improve this? Would config files help?

That's 72K/hour - isn't that fast enough?

For the foreseeable future, absolutely. I was just under the impression that the norm for a setup like this was at least ~100 rps.

Here are some benchmarks that led me to believe it would be faster:

http://blog.a2o.si/2009/06/24/apache-mod_php-compared-to-nginx-php-fpm/

Edit

Shit, the tests were executed on the same machine… :|

Sure, I'll hop on… From a Linode in Dallas:

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      5
Time taken for tests:   24.457 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6195000 bytes
HTML transferred:       5838000 bytes
Requests per second:    40.89 [#/sec] (mean)
Time per request:       122.284 [ms] (mean)
Time per request:       24.457 [ms] (mean, across all concurrent requests)
Transfer rate:          247.37 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       39   40   0.7     40      49
Processing:    81   82   0.9     82      91
Waiting:       42   43   0.7     42      48
Total:        121  122   1.4    122     137
WARNING: The median and mean for the waiting time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%    122
  66%    122
  75%    122
  80%    123
  90%    123
  95%    124
  98%    126
  99%    128
 100%    137 (longest request)

And from a server in Norway:

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      5
Time taken for tests:   180.077 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6195000 bytes
HTML transferred:       5838000 bytes
Requests per second:    5.55 [#/sec] (mean)
Time per request:       900.383 [ms] (mean)
Time per request:       180.077 [ms] (mean, across all concurrent requests)
Transfer rate:          33.60 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      122  323 429.3    253    3432
Processing:   260  577 271.8    528    3874
Waiting:      130  278 119.4    262    1802
Total:        387  900 537.7    788    4342

Percentage of the requests served within a certain time (ms)
  50%    788
  66%    934
  75%   1026
  80%   1072
  90%   1279
  95%   1563
  98%   3528
  99%   3821
 100%   4342 (longest request)

@refringe:

For the foreseeable future, absolutely. I was just under the impression that the norm for a setup like this was at least ~100 rps:
Well, you already know you can achieve that given enough simultaneous source requests based on your local testing. I think all you're seeing here is testing artifacts. Asking for a single remote test in a situation like this is not going to be all that useful.

While not perfect, local testing is generally a realistic way to identify your peak transaction capacity. It's true that any results are somewhat theoretical, and that your network pipe will cap them further depending on the size of the responses involved. You can estimate that upper bound from typical response sizes and your bandwidth. For example, estimating 6000 network-side bytes for your home page (and ignoring other references like CSS, which is unrealistic, but I'm too lazy to look them up) and the default 50Mbps limit for a Linode would yield a best case of about 1041 rps, assuming there are enough clients making requests to take latency out of the equation. Put another way: if you can hit ~1k rps in local testing, you should be able to satisfy enough requests to saturate your outbound pipe for home-page requests.
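The back-of-the-envelope version of that estimate, using my assumed numbers (6000 network-side bytes per response, Linode's default 50Mbps outbound cap):

```python
# Rough ceiling on requests/sec imposed by the outbound pipe.
# Both inputs are assumptions from the paragraph above, not measurements.
link_bps = 50_000_000        # 50 Mbit/s outbound cap
bytes_per_response = 6000    # approx. network-side size of the home page

max_rps = (link_bps / 8) / bytes_per_response
print(int(max_rps))          # ~1041 requests/sec, best case
```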

Any testing from a remote machine - especially from a single machine - is much more likely to be dominated by network factors such as round-trip time and bandwidth. You can pretty much see that in the responses here, where the test nodes that are further away (in terms of network topology) report lower figures. And for those with a longer RTT to your server, a concurrency level of 5 is unlikely to be enough to keep the pipe (bandwidth x latency) full.
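To see how the RTT caps a single remote tester: with concurrency c and a mean per-request time t (mostly RTT for distant clients), throughput tops out at roughly c / t. Plugging in the Dallas run's figures from earlier in the thread:

```python
# Latency-bound throughput model: rps ~= concurrency / mean request time.
# Numbers taken from the Dallas run above (Total mean ~122 ms, -c 5).
concurrency = 5
mean_request_time = 0.122    # seconds

max_rps = concurrency / mean_request_time
print(round(max_rps, 1))     # ~41 rps, which matches the ~40.9 rps measured
```

That the prediction lands on the measured number is a good sign the test was latency-bound, not server-bound.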

But if you'd like a nearby sample, here's one from another Newark Linode, which is probably as good as it gets for a remote, single-node test with minimal network overhead:

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      5
Time taken for tests:   1.63092 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6195000 bytes
HTML transferred:       5838000 bytes
Requests per second:    940.65 [#/sec] (mean)
Time per request:       5.315 [ms] (mean)
Time per request:       1.063 [ms] (mean, across all concurrent requests)
Transfer rate:          5690.01 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   1.2      0      13
Processing:     3    4   2.9      4      17
Waiting:        2    3   2.6      3      17
Total:          3    4   3.1      4      17

Percentage of the requests served within a certain time (ms)
  50%      4
  66%      4
  75%      5
  80%      5
  90%      9
  95%     13
  98%     15
  99%     16
 100%     17 (longest request)

As a comparison, here's the same test but from a Dallas Linode. Exact same test parameters, and clearly it's not like your machine has suddenly lost 96% of its performance:

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      5
Time taken for tests:   24.440339 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6195000 bytes
HTML transferred:       5838000 bytes
Requests per second:    40.92 [#/sec] (mean)
Time per request:       122.202 [ms] (mean)
Time per request:       24.440 [ms] (mean, across all concurrent requests)
Transfer rate:          247.50 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       39   39   0.2     39      40
Processing:    81   82   1.0     82      98
Waiting:       41   42   0.8     42      57
Total:        120  121   1.1    122     137

Percentage of the requests served within a certain time (ms)
  50%    122
  66%    122
  75%    122
  80%    122
  90%    122
  95%    123
  98%    123
  99%    124
 100%    137 (longest request)

Now, here's the same Dallas Linode, but with the concurrency level bumped up to 20 so I can do a better job of filling the pipe (latency x bandwidth):

Server Software:        nginx/0.7.65
Server Hostname:        veenstra.ca
Server Port:            80

Document Path:          /
Document Length:        5838 bytes

Concurrency Level:      20
Time taken for tests:   7.164382 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      6199344 bytes
HTML transferred:       5841987 bytes
Requests per second:    139.58 [#/sec] (mean)
Time per request:       143.288 [ms] (mean)
Time per request:       7.164 [ms] (mean, across all concurrent requests)
Transfer rate:          845.01 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       39   54 211.7     39    3039
Processing:    81   84  20.2     83     326
Waiting:       41   43  17.1     42     283
Total:        120  138 212.4    122    3122

Percentage of the requests served within a certain time (ms)
  50%    122
  66%    122
  75%    122
  80%    123
  90%    123
  95%    124
  98%    132
  99%    362
 100%   3122 (longest request)

And again, clearly your node hasn't changed (these tests were almost back to back), but by just having more requests in flight, I've done a better job of taking latency out of the equation…

I also ran a quick experiment, running the last Dallas test simultaneously with another New York node (one that gets ~40 rps with concurrency 5). In that case, the Dallas test actually got a result of 162 rps, while the New York node stayed at the same 42 rps it got when run alone. In other words, neither of them was coming close to the net capacity of your node.
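A quick sanity check on that claim: the combined load from both testers, against the (assumed) 50Mbps outbound cap, using the ~6200 bytes/response implied by the "Total transferred" figures above:

```python
# Combined bandwidth used by the simultaneous Dallas + New York tests,
# versus the assumed 50 Mbit/s outbound cap on the server's pipe.
combined_rps = 162 + 42
bytes_per_response = 6200    # ~6,199,344 total bytes / 1000 requests

used_bps = combined_rps * bytes_per_response * 8
print(used_bps / 1e6)        # ~10.1 Mbit/s: nowhere near a 50 Mbit/s pipe
```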

-- David

@db3l - nicely explained.

Fantastic! Thank you so much for not only running the tests, but for the wonderful explanation as well.
