Apache Bench results

I have a simple WordPress blog and I wanted to do a load test. Here are the results:

$ ab -n 10000 -c 10 http://www.mydomain.com/blog/
This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking www.mydomain.com (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Finished 10000 requests

Server Software:        Apache
Server Hostname:        www.mydomain.com
Server Port:            80

Document Path:          /blog/
Document Length:        39855 bytes

Concurrency Level:      10
Time taken for tests:   1901.884287 seconds
Complete requests:      10000
Failed requests:        15
   (Connect: 0, Length: 15, Exceptions: 0)
Write errors:           0
Non-2xx responses:      15
Total transferred:      402236465 bytes
HTML transferred:       397960245 bytes
Requests per second:    5.26 [#/sec] (mean)
Time per request:       1901.884 [ms] (mean)
Time per request:       190.188 [ms] (mean, across all concurrent requests)
Transfer rate:          206.54 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0      14
Processing:    19 1898 3497.9    339   41011
Waiting:       19 1283 2910.0    302   41011
Total:         19 1898 3497.9    339   41011

Percentage of the requests served within a certain time (ms)
  50%    339
  66%    450
  75%   1090
  80%   2567
  90%   7123
  95%   9399
  98%  12817
  99%  16084
 100%  41011 (longest request)

I'm no expert, but 5.26 requests/second is pretty bad, right? Any suggestions for improving the performance?

Server Info: Apache 2, PHP5 as FastCGI, MySQL db, APC for caching.

15 Replies

http://wordpress.org/extend/plugins/wp-super-cache/

@dcelasun:

ab -n 10000 -c 10 http://www.mydomain.com/blog/

10,000 requests take a while. Just to get an idea, try 100 requests first.

Make sure the client advertises support for compression (in case the server has it enabled), and try with fewer simultaneous connections and then more to see if that makes a difference - something like:

Two simultaneous client connections first:

ab -n 100 -c 2 -H 'Accept-Encoding: gzip' http://www.mydomain.com/blog/

and then with 10 client connections:

ab -n 100 -c 10 -H 'Accept-Encoding: gzip' http://www.mydomain.com/blog/

James

P.S. What's HemenKur, Duru Can?

Well, WP-super-cache is not an option as it messes things up really badly when used with APC. Would you guys recommend eAccelerator (or something else) over APC?

Zunzun, thanks I'll try those and post back.

Btw, HemenKur is an abandoned project of mine. It basically installs a .deb package on a system without apt. I coded it with Pardus Linux (http://www.pardus.org.tr/en) in mind, but several users reported that it works on Mandriva, openSUSE and PCLinuxOS as well. It was still at a pre-alpha stage when I simply couldn't find enough time to continue.

How did you find out about it?

I think I remember seeing a version of wp-super-cache somewhere that worked with APC. You might look for that.

I haven't run the numbers to back this up, but you'll probably get the single biggest performance boost out of super-cache. It's hard to beat serving static files, so I'd start with super-cache and then go from there.
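
If you want a rough ceiling to compare against, you could also benchmark a static file on the same box (the path here is just a placeholder):

ab -n 100 -c 10 http://www.mydomain.com/some-static-file.html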

Thanks btmorex, I'll look into that.

James, here are the test results.

Output for: ab -n 100 -c 2 -H 'Accept-Encoding: gzip' http://www.mydomain.com/blog/

Concurrency Level:      2
Time taken for tests:   9.917412 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      4028300 bytes
HTML transferred:       3985500 bytes
Requests per second:    10.08 [#/sec] (mean)
Time per request:       198.348 [ms] (mean)
Time per request:       99.174 [ms] (mean, across all concurrent requests)
Transfer rate:          396.58 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:   178  197   7.9    199     218
Waiting:      177  197   7.9    198     218
Total:        178  197   7.9    199     218

Percentage of the requests served within a certain time (ms)
  50%    199
  66%    202
  75%    203
  80%    204
  90%    207
  95%    211
  98%    216
  99%    218
 100%    218 (longest request)

Output for: ab -n 100 -c 10 -H 'Accept-Encoding: gzip' http://www.mydomain.com/blog/

Concurrency Level:      10
Time taken for tests:   16.495488 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      4028300 bytes
HTML transferred:       3985500 bytes
Requests per second:    6.06 [#/sec] (mean)
Time per request:       1649.549 [ms] (mean)
Time per request:       164.955 [ms] (mean, across all concurrent requests)
Transfer rate:          238.43 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   180 1407 2760.3    228   13833
Waiting:      179 1407 2760.2    227   13832
Total:        180 1407 2760.3    228   13833

Percentage of the requests served within a certain time (ms)
  50%    228
  66%    318
  75%    429
  80%   1737
  90%   5083
  95%   9577
  98%  10899
  99%  13833
 100%  13833 (longest request)

It seems like the "Time per request" value is way too high and "Requests per second" is too low. Any suggestions?

I've also tried enabling mod_deflate for gzip compression, but it didn't make a difference.
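
For reference, on a Debian/Ubuntu-style Apache enabling it looks roughly like this (the command names are an assumption about the layout - other distros configure it differently):

a2enmod deflate        # enable mod_deflate
apache2ctl graceful    # reload Apache so the change takes effect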

@dcelasun:

Btw, HemenKur is an abandoned project of mine.

How did you find out about it?

You can't get away from Google with a user name that distinct. Still living in Turkey? Heh heh heh…

James

@dcelasun:

I've also tried enabling mod_deflate for gzip compression, but it didn't make a difference.

Change 'Accept-Encoding: gzip' to 'Accept-Encoding: gzip,deflate' and try again with both the gzip and deflate options on - since you only enabled deflate.
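
So, something like:

ab -n 100 -c 2 -H 'Accept-Encoding: gzip,deflate' http://www.mydomain.com/blog/
ab -n 100 -c 10 -H 'Accept-Encoding: gzip,deflate' http://www.mydomain.com/blog/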

Your initial test showed that 2 requests at a time are OK, but with 10 at a time the server bogs down - that is how I see it, anyway.

James

@zunzun:

You can't get away from Google with a user name that distinct. Still living in Turkey?

Cyber stalker :P

With 2 connections:

Document Path:          /blog/
Document Length:        12024 bytes

Concurrency Level:      2
Time taken for tests:   10.138462 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      1253300 bytes
HTML transferred:       1202400 bytes
Requests per second:    9.86 [#/sec] (mean)
Time per request:       202.769 [ms] (mean)
Time per request:       101.385 [ms] (mean, across all concurrent requests)
Transfer rate:          120.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:   181  201   8.9    203     223
Waiting:      181  201   8.9    203     223
Total:        181  201   9.0    203     223

Percentage of the requests served within a certain time (ms)
  50%    203
  66%    205
  75%    207
  80%    209
  90%    213
  95%    216
  98%    223
  99%    223
 100%    223 (longest request)

With 10 connections:

Document Path:          /blog/
Document Length:        12024 bytes

Concurrency Level:      10
Time taken for tests:   22.611492 seconds
Complete requests:      100
Failed requests:        0
Write errors:           0
Total transferred:      1253300 bytes
HTML transferred:       1202400 bytes
Requests per second:    4.42 [#/sec] (mean)
Time per request:       2261.149 [ms] (mean)
Time per request:       226.115 [ms] (mean, across all concurrent requests)
Transfer rate:          54.09 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:   181 1738 3140.3    246   13333
Waiting:      181 1622 3138.2    226   13332
Total:        181 1738 3140.3    246   13333

Percentage of the requests served within a certain time (ms)
  50%    246
  66%    366
  75%   1795
  80%   2823
  90%   6092
  95%  11026
  98%  13056
  99%  13333
 100%  13333 (longest request)

Still terrible. Any suggestions?

@mwalling:

Cyber stalker :P

Yep, it's my first :D

@dcelasun:

Still terrible. Any suggestions?

Well, find out how many simultaneous connections you can have before performance degrades (2,3,4?). Then over SSH run the top command and see if you are CPU bound (default for top) or memory bound (press capital 'M' when top is running). Knowing what processes are slowing you down, and whether they are CPU or memory limited, might suggest a next step.
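
For example (the keystrokes below are for a stock procps top; replace the login with your own):

ssh you@www.mydomain.com    # log in to the server
top                         # sorted by CPU usage by default
# While top is running, press Shift+M to sort by memory usage instead,
# and Shift+P to go back to CPU sorting.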

James the Super Cyber Stalker

With "ab -n 100 -c 2" the php5 processes goes up to 95% cpu, but there's no package loss. 9.87 requests per second.

With "ab -n 100 -c 10" the spawned php5's go up to 300% cpu (all of'em summed up). RAM goes up to 70% as well. 5.07 requests per second.

What good is a server when it consumes 300% CPU with only 10 clients?

What do you think the reason is? Bad configuration? If so, where is it? Apache, php, APC?

@dcelasun:

What do you think the reason is? Bad configuration? If so, where is it? Apache, php, APC?

From what you reported, it's the PHP processes that are hitting a CPU limit - not an Apache problem or a memory problem. I'm not familiar with PHP, so I can't help you troubleshoot past this point, other than to recommend using a cache.

James

Thanks. I was using APC; now I've taken the advice above and installed xCache. The requests per second almost doubled. I'll keep testing and decide which cache to use.
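
(In case it helps anyone else: I double-checked which extension is actually loaded with something like the line below - note that the CLI php can read a different php.ini than the FastCGI one, so this is only a rough check:)

php -m | grep -iE 'apc|xcache'    # list loaded PHP modules, filtered for the opcode caches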
