How many hits/sec can a 512 deal with?
I'm launching an iPhone game that will be turn-based over the internet (think Draw Something, but with far, far less game data), and I'm using my Linode to handle all the server-side stuff. My setup is Ubuntu 10.04 with Apache and PHP.
All the server side scripts the game uses are PHP and the database is external on Amazon DynamoDB.
Could my Linode 512 cope with 1 request per second? How about 10, or 30? The PHP scripts will be around 1 kB or less, and the amount of data going in and out will be just bytes.
Obviously, if the game is successful and takes off, I'm prepared to migrate to a higher-spec Linode, but what can I expect to get out of my current setup? Any optimisation tips? Any advice would be very helpful. Thanks.
13 Replies
> database is external on Amazon DynamoDB.
That will be your bottleneck: communicating from your Linode to Amazon over the network will be the slowest part. Consider adding a caching layer on the Linode itself; it will allow you to serve many more requests.
What do you mean by a caching layer? If every request will be dynamic, how will this help, and can you point me in the right direction?
There will be a lot of latency making queries to a remote database over the internet (using Amazon DynamoDB from a Linode will be slow). Because of that, caching as much data locally as possible will help.
@figgy:
What do you mean by a caching layer? If every request will be dynamic, how will this help, and can you point me in the right direction?
Without knowing your application details I can't make specific comments, but if there's a lot of read-only or rarely modified data your app pulls, caching that locally will help a lot.
For example, suppose every request from user A results in you doing the equivalent of "SELECT * FROM user WHERE username='user A'", but the user table doesn't change unless you explicitly do an update. Then you should cache the response in PHP, doing something like this (not real code or a real caching mechanism, just illustrative):
if (empty($userCache[$username]))
{
    // cache miss: hit the remote database once, then reuse the result
    $userCache[$username] = GetUserFromDB($username);
}
$userDetails = $userCache[$username];
You would then also update the local user cache any time you updated the user's details. Personally, I'm not an expert on this stuff: I normally work on projects small enough that the database is local to the same server, and I rely on the database and filesystem caches. But if your database server, instead of being on the same machine, is running on some remote platform in a different city, then this sort of caching is something you need to think about.
There are a variety of tools that let you store persistent data in memory that lasts between requests. APC is popular because it acts as a PHP accelerator on top of giving you this caching functionality.
EDIT: I should note that APC's memory cache is local: it's only a good idea if you have a single web server. If you have multiple web servers distributing the load, then memcached (which is a distributed cache) is required to keep data consistent between servers.
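To make the pattern concrete, here's a minimal sketch using APC's apc_fetch()/apc_store(). The function name, key format, and 300-second TTL are all made up for illustration; when the APC extension isn't loaded, it falls back to a plain per-request array, so the expensive remote lookup still only happens on a cache miss.

```php
<?php
// Hypothetical cache helper: $loader is a closure that performs the
// real (slow, remote) database query. It only runs on a cache miss.
function cache_get($key, $loader, $ttl = 300)
{
    static $local = array();             // per-request fallback cache

    if (function_exists('apc_fetch')) {
        $value = apc_fetch($key, $hit);  // shared across requests
        if ($hit) {
            return $value;
        }
        $value = $loader();
        apc_store($key, $value, $ttl);   // persists between requests
        return $value;
    }

    if (!array_key_exists($key, $local)) {
        $local[$key] = $loader();        // e.g. the DynamoDB round trip
    }
    return $local[$key];
}

// Usage: the closure stands in for the real remote query.
$user = cache_get('user:alice', function () {
    return array('username' => 'alice', 'score' => 42);
});
```

The win is that repeated requests for the same user are served from local memory instead of paying the Linode-to-Amazon latency every time.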
@Guspaz:
A 512 can handle between zero and one million hits per second.
Not quite. Assuming the shortest possible request header:
GET / HTTP/1.1\n
Host: www.example.com\n
Accept: text/html\n\n
which comes to roughly 60 bytes, and let's say the response is
HTTP/1.0 200 OK\n
Date: Fri, 31 Dec 1999 23:59:59 GMT\n
Content-Type: text/html\n
Content-Length: 0\n\n
which comes to roughly 100 bytes, so call the minimum request-response cycle 160 bytes.
At 50 Mbps, which is 6.25 MB/s, divided by 160 bytes, that comes to circa 40k requests per second of dead traffic without content. And this doesn't include TCP ACKs and error resends (and it assumes full keepalive after the first request).
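The arithmetic above can be checked in a few lines (the 160-byte cycle is the rounded figure from the headers above):

```php
<?php
// 50 Mbps of "dead" request/response cycles at ~160 bytes each.
$bytesPerSec = 50 * 1000 * 1000 / 8;     // 6,250,000 bytes/s
$cycleBytes  = 160;                      // request + empty response
echo floor($bytesPerSec / $cycleBytes);  // prints 39062, i.e. ~40k/s
```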
knocks over the sarcasm sign and runs for his life
@Guspaz:
Linode can raise your limit (they raise the limit if the customer needs it)
:P
Yes, but I don't think they'll raise your limit to 500+ Mbps on a 512 node. And that's for the theoretical dead traffic. Any meaningful content payload would require much more.
@Azathoth:
@Guspaz: Linode can raise your limit (they raise the limit if the customer needs it)
:P Yes, but I don't think they'll raise your limit to 500+ Mbps on a 512 node. And that's for the theoretical dead traffic. Any meaningful content payload would require much more.
:wink:
Well, we're talking about roughly 160,000 gigabytes per month… If you throw $192,000 a year at Linode, I think they'll raise your limit to 500 Mbps for you
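Those figures roughly check out, assuming an overage rate in the neighbourhood of $0.10/GB (my guess at the going rate, not a quoted price):

```php
<?php
// 500 Mbps sustained over a 30-day month, converted to gigabytes.
$gbPerMonth = 500 / 8 * 60 * 60 * 24 * 30 / 1000;  // MB/s * secs / 1000
echo round($gbPerMonth);                // prints 162000, ~160,000 GB
echo "\n";
echo round($gbPerMonth * 0.10 * 12);    // ~$194,400/yr at $0.10/GB
```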
@Guspaz:
Well, we're talking about roughly 160,000 gigabytes per month… If you throw $192,000 a year at Linode, I think they'll raise your limit to 500 Mbps for you
;)
No, the question was how many hits per second a 512 node can sustain. You just turned that around back into 1M, and then readjusted Linode's plans to accommodate such a node at $192k/yr.
But even in that scenario, I still doubt Linode would do it for a 512 node. They'd sooner suggest you switch to a bigger node, if the hosts even have 10 Gbps NICs, because we're talking about half of a 1 Gbit NIC just for 1M hits of dead traffic, which would probably require switching readjustments.
Also, I doubt the free inbound would still apply at that level of traffic. And let's not forget there's a bandwidth cap on regular-sized nodes; the transfer allowance doesn't scale up beyond a 2 GB node, and I'd guess there's a reason for that.