Transfer rate seems very low

Hi there,

I am doing some load testing on a vanilla Rails app (one very basic model) to get an idea of what type of nodes I will need to run my app.

Node 1 - Ubuntu 10.04, running Rails 3.1 with Passenger and Apache

Node 2 - Ubuntu 10.04, Postgres 9.1

Both are 2 GB nodes (I upgraded from 512 MB to see if it was a memory issue).

Both are in Atlanta

The app feels snappy in the browser.

When I run an ab test like this:

ab -n 1000 -c 10 http://50.116.40.129/

I get the following output:

Concurrency Level:      10
Time taken for tests:   5.341 seconds
Complete requests:      62
Failed requests:        0
Write errors:           0
Total transferred:      185504 bytes
HTML transferred:       138880 bytes
Requests per second:    11.61 [#/sec] (mean)
Time per request:       861.410 [ms] (mean)
Time per request:       86.141 [ms] (mean, across all concurrent requests)
Transfer rate:          33.92 [Kbytes/sec] received

The req/sec and transfer rate seem very slow. I have talked with support, and they have verified that nothing is wrong with my server or the network between me and the Linode. I have no firewall, as I am just testing.

Support told me that it must be my app configuration, but I would think that transfer rate would be a function of the network and the machine, no?

Also, SSH seems very sluggish.

I'd love some insight/help sorting this out.

Thanks in advance.

7 Replies

@bohara:

Support told me that it must be my app configuration, but I would think that transfer rate would be a function of the network and the machine, no?

The transfer rate's limit is likely the app, not the uplink. The app just isn't pushing out any more than that.
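You can see that from the numbers themselves: ab's transfer rate is just the request rate multiplied by the average response size.

    185504 bytes / 62 requests  ≈ 2992 bytes per response
    11.61 req/s × 2992 bytes    ≈ 34737 bytes/s ≈ 33.9 KB/s

So the 33.92 KB/s figure is fully explained by the low request rate; there is no separate bandwidth problem to chase.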

Looks to me like it's the network connection from wherever you ran ab. I just did a quick test and got considerably better results:

ab -n 1000 -c 10 http://50.116.40.129/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 50.116.40.129 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests

Server Software:        Apache/2.2.14
Server Hostname:        50.116.40.129
Server Port:            80

Document Path:          /
Document Length:        2241 bytes

Concurrency Level:      10
Time taken for tests:   6.949 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      2993000 bytes
HTML transferred:       2241000 bytes
Requests per second:    143.91 [#/sec] (mean)
Time per request:       69.486 [ms] (mean)
Time per request:       6.949 [ms] (mean, across all concurrent requests)
Transfer rate:          420.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       22   22   0.4     22      27
Processing:    33   43  11.6     38     100
Waiting:       33   43  11.6     38     100
Total:         55   65  11.6     60     123

Percentage of the requests served within a certain time (ms)
  50%     60
  66%     63
  75%     66
  80%     69
  90%     86
  95%     94
  98%    101
  99%    106
 100%    123 (longest request)

The major limiting factor there seems to be requests per second. Are you testing from behind a NAT or some firewall that is doing rate limiting, or are you a significant distance away from your Linode?

Thanks for the replies.

I am not behind a firewall doing NAT, but I suppose I am pretty far from Atlanta, since I am in Chicago. My customers will be all over the country, but I am thinking of giving New Jersey a try.

I set up a new VPS in New Jersey and things are just as slow:

Concurrency Level:      10
Time taken for tests:   199.322 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      2538000 bytes
HTML transferred:       1786000 bytes
Requests per second:    5.02 [#/sec] (mean)
Time per request:       1993.222 [ms] (mean)
Time per request:       199.322 [ms] (mean, across all concurrent requests)
Transfer rate:          12.43 [Kbytes/sec] received

Super slow.

However, when I run the test from the server itself:

Concurrency Level:      10
Time taken for tests:   2.124 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      2538000 bytes
HTML transferred:       1786000 bytes
Requests per second:    470.77 [#/sec] (mean)
Time per request:       21.242 [ms] (mean)
Time per request:       2.124 [ms] (mean, across all concurrent requests)
Transfer rate:          1166.80 [Kbytes/sec] received

I get great results.

Even though I am running this last test from the local machine, it is still making a round trip, right?

So this must be my internet connection?

Thanks in advance

Running the ab test from a Dallas-based server:

Concurrency Level:      10
Time taken for tests:   10.392409 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      2993000 bytes
HTML transferred:       2241000 bytes
Requests per second:    96.22 [#/sec] (mean)
Time per request:       103.924 [ms] (mean)
Time per request:       10.392 [ms] (mean, across all concurrent requests)
Transfer rate:          281.17 [Kbytes/sec] received

Which is pretty respectable.

Does anyone have experience with an ISP limiting heavy request loads for things like ab testing? I use Comcast.
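One quick way to separate the home connection from the app itself is to time a bare request (the ping and curl options below are standard):

    # raw latency to the Linode
    ping -c 10 50.116.40.129

    # time a single HTTP request: connect time vs. total time
    curl -s -o /dev/null -w 'connect: %{time_connect}s  total: %{time_total}s\n' http://50.116.40.129/

If the connect time from home is far higher than from the Dallas box, the path (or ISP shaping) is the suspect rather than Rails.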

If you simply want to know how fast a given Linode is capable of serving requests with a specific application stack, you definitely want to get as much of the network out of the way as you can. Position your test source as close (network-wise) to the target Linode as possible.

For example, testing from a Newark Linode to an Atlanta Linode (so no home connections involved) still has a ~20 ms RTT. A concurrency of 1 will probably yield 25-50 req/s (1-2 RTTs) even if the server takes no time at all, simply due to network latency. You can crank up concurrency to amortize that latency, but then you're fighting latency as an unnecessary variable in your test. And all it takes is an occasional dropped or delayed packet, or a change in routing, to massively change the results.
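To put rough numbers on that (same ~20 ms RTT, zero server time assumed):

    1 RTT per request:  1000 ms / 20 ms = 50 req/s per concurrent connection
    2 RTTs per request: 1000 ms / 40 ms = 25 req/s per concurrent connection

So at a concurrency of 10 you would top out somewhere around 250-500 req/s on latency alone, before the server does any work at all.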

So instead I would suggest either (a) testing locally on the same server as the app, or (b) testing from a local server in the same DC; a sketch of (a) follows below. With (a), the test harness is competing with the application, and you have to watch whether you run into shared bottlenecks (CPU, I/O), so (b) is preferred; but if you don't have a spare box in the same DC, you'll need to spin one up for the test. It's not uncommon to have a development box mirroring a production box, so it can often serve as a useful platform for these tests. Even with (b), you'll want to make sure it's the application running out of steam and not the test box.
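A minimal sketch of (a), assuming a stock Ubuntu box (substitute your app's port if it isn't on 80):

    # hit the app over loopback so the physical network is out of the picture
    ab -n 1000 -c 10 http://127.0.0.1/

    # in a second terminal, watch for shared bottlenecks while the test runs
    vmstat 1     # CPU, run queue, swap activity
    top          # per-process view; check whether ab itself is eating a core

If ab and the app together peg the CPU, the loopback numbers will understate what the server can really do, which is another reason (b) is preferred.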

And don't forget to test relevant pages. The home page is the obvious starting point, but you should try to find "tough" pages (ones requiring a lot of work to generate) and/or pages likely to be frequently accessed, and test against those as well. Otherwise you might be testing a home page that is usually cached, with no inkling that all your other pages take 10x longer to render.
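For instance (the second path here is hypothetical; substitute whichever pages in your app do real database work):

    ab -n 1000 -c 10 http://50.116.40.129/          # cheap, possibly cached home page
    ab -n 1000 -c 10 http://50.116.40.129/reports   # hypothetical "tough" page

Comparing the two req/s figures shows how much headroom the easy number is hiding.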

Of course, even better is to have a few local boxes, both to avoid hitting a load-generation limit on a single test box and to better simulate multiple simultaneous users and how the application behaves under them. But at that point you're getting into diminishing returns for the effort involved. There are also testing services you may want to look into at that point, like Load Impact (mentioned in the forums before, though I have no experience with them), which can generate load from many network sources simultaneously.

Unless, that is, you really care about the rate a single user from a single location could achieve (e.g., you specifically want to take network latency into account), which, to be honest, is probably unnecessary. In real life, no single user is going to achieve such rates (individual users are unlikely to issue many concurrent requests), so I think your goal should be to identify the peak load you can support across a lot of simultaneous users, which is the upper bound for your application.

All in all, get a local test Linode (even if just for a day), run the tests, and if you get a req/s rate you're happy with, declare victory and go home :-) If not, tune/upgrade and re-test. Rinse, lather, repeat.

After that, just monitor your system resources and make sure you aren't running into a resource limit as you see actual usage.

– David
