massive short-term bandwidth needs
My client is a music company that has released an extremely popular album for download. Their existing server is swamped - it can't deliver the 60Mb/s of traffic from people trying to download this album (hundreds of simultaneous downloads), and the performance of their main web site is suffering.
I would like to move this download onto a linode, freeing up the main server for normal work.
How much bandwidth can a Linode 4096 handle (I'm interested in the per-second rate)? 60Mb/s is more than half of a FastEthernet connection - can we saturate a 100Mb/s link? Or do we have a gigabit uplink?
The situation is urgent, and I hope to get this set up within the next few hours, if this is a workable solution.
thanks, matt.
Open a ticket and you will probably have a reply within 5 minutes from support.
Really, if you're just serving a static file, create an account and run a LEMP StackScript to have it served via nginx:
http://www.linode.com/stackscripts/view/?StackScriptID=42
And you'll be up and running in about 20-30 minutes if you just want to point to an IP address.
btw, 4096 is not the largest plan
Why aren't you looking at one of the big CDN providers for something like this?
60Mb/s wouldn't be a problem. A few quick points, however:
1) Linodes, by default, are capped at 50Mb/s. This can easily be raised with a support ticket and proper justification.
2) You may want to split this up between multiple Linodes.
3) 60Mb/s sustained for a month will total out around 18TB. Make sure you understand the transfer fees involved with pushing that much content.
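(Rough math: 60Mb/s is 7.5MB/s, and 7.5MB/s × 86,400 seconds/day × 30 days comes to a little over 19,000GB - so the 18TB figure assumes that rate holds around the clock for the whole month.)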
I suggest you e-mail support@linode.com or open a support ticket if you already have an account.
Regards,
-Tom
EDIT: Wow… I was way too slow
One other thought though… even a simple DNS round-robin would cut the transfer requirements of any one particular server dramatically.
@waldo:
So you're not currently using Linode? By default, your node is restricted to 50mbps, but you can create a support ticket and ask them to increase that, or so I've read others have done.
I use Linode for my own servers, and those of two other clients, but this is one I inherited; they have a dedicated server set up elsewhere. (Their disk space needs are massive, far beyond anything in the published Linode service plans - currently about 1.2TB of audio).
@waldo:
Really, if you're just serving a static file, create an account and run a LEMP StackScript to have it served via nginx:
http://www.linode.com/stackscripts/view/?StackScriptID=42
What I plan to do is roll out a simple Apache install with static files protected by HTTP basic auth (.htaccess), and then generate a unique username/password for each customer with the right to this file. The file isn't public - these customers each paid the purchase price of the album - so we need to be sure there are no unauthorised downloads, as giving it away for free would violate our contracts with the artists. Using the customer's real name or email for the htaccess username will discourage them from posting the link in any public places.
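Concretely, the protection is just stock Apache basic auth - a minimal sketch, with the paths, realm name and usernames as placeholders only:

    # .htaccess in the album directory (hypothetical path)
    AuthType Basic
    AuthName "Album download"
    AuthUserFile /etc/apache2/album.htpasswd
    Require valid-user

and one htpasswd entry per customer, e.g.:

    htpasswd -c /etc/apache2/album.htpasswd first.customer@example.com   # -c only the first time, to create the file
    htpasswd /etc/apache2/album.htpasswd another.customer@example.com

(The vhost needs AllowOverride AuthConfig for the .htaccess file to be honoured.)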
> Why aren't you looking at one of the big CDN providers for something like this?
Linode is what I'm familiar with! I'm confident I can build a secure delivery system, with usernames that will appear in the access logs so we can detect abuse, in an hour or so.
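Checking for abuse is then a one-liner against the access log (path is whatever the vhost logs to; the authenticated username is the third field of the standard log format):

    awk '{print $3}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -20

Anyone with an implausible download count stands out immediately.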
Thanks for the tip - I'll be phoning my client in a few minutes to advise that we go with this plan, with the bandwidth caps raised.
@tasaro:
Hi Matt -
60Mb/s wouldn't be a problem. A few quick points, however:
1) Linodes, by default, are capped at 50Mb/s. This can easily be raised with a support ticket and proper justification.
2) You may want to split this up between multiple Linodes.
3) 60Mb/s sustained for a month will total out around 18TB. Make sure you understand the transfer fees involved with pushing that much content.
I suggest you e-mail support@linode.com or open a support ticket if you already have an account.
Regards,
-Tom
Thanks - I'm going to talk to the client in a few minutes and get permission to spend the money, then set up a 4096 and ask for the cap to be raised. We expect the traffic will die down after a week or so, and it's US-only, which means it will drop dramatically at night, so we might not even hit the monthly limit - but I've looked at your price for going over, and it's reasonable, so it's not a big deal if we do.
You mention multiple nodes - in terms of serving this content efficiently (it'll be flat files served up directly by Apache, no https, no CGI, PHP or any interpreted language), and effectively using the CPU, would it be better to have one big node or two small ones? I can easily have the same hostname point to both.
thanks, matt
Two smaller nodes have the same amount of memory as one larger node, but they have "twice" the CPU availability and disk i/o bandwidth ("twice" is a simplification, hence the quotes).
For the type of traffic you're talking about, I would throw several smaller nodes at it, just using a DNS round-robin to distribute the load across them.
If you're just throwing static files out there, an evented server like nginx would probably be a better idea. Implementing the authentication you mention above would be just as easy.
http://wiki.nginx.org/HttpAuthBasicModule
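A minimal sketch of the nginx side, if you go that way (server name and paths are placeholders):

    server {
        listen 80;
        server_name downloads.example.com;   # placeholder hostname
        root /srv;                           # album files under /srv/album/

        location /album/ {
            auth_basic "Album download";
            auth_basic_user_file /etc/nginx/album.htpasswd;
        }
    }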
@JshWright:
Two smaller nodes have the same amount of memory as one larger node, but they have "twice" the CPU availability and disk i/o bandwidth ("twice" is a simplification, hence the quotes).
For the type of traffic you're talking about, I would throw several smaller nodes at it, just using a DNS round-robin to distribute the load across them.
If you're just throwing static files out there, an evented server like nginx would probably be a better idea. Implementing the authentication you mention above would be just as easy.
http://wiki.nginx.org/HttpAuthBasicModule
Two nodes it'll be then - I can even locate them in different cities to spread the load. (Although with a simple DNS round robin, rather than a GeoIP lookup, it'll be random which server any given customer ends up on.)
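The round robin itself is nothing more than two A records on the same name - illustrative only, with documentation addresses standing in for the real Linode IPs:

    ; short TTL so a dead node can be pulled from rotation quickly
    downloads.example.com.   300   IN   A   203.0.113.10
    downloads.example.com.   300   IN   A   198.51.100.20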
Apache setup is as easy as "scp -r"-ing the existing Apache install tree over from one of my existing nodes, so I'll probably go with that, at least at first - but if it appears to be hitting some CPU limitation I could look into nginx in a few days.
thanks, matt.
@obs:
I strongly suggest using mpm_worker instead of the standard prefork; worker is threaded, so it will use less RAM, and since you aren't using PHP you don't need to worry about thread-safety problems.
Thanks - I hadn't known about that, and will research that option. I've always done prefork.
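From a first read of the docs, the worker tuning block would be along these lines - starting values only, not a tuned recommendation:

    # Apache 2.2 worker MPM (illustrative numbers)
    <IfModule mpm_worker_module>
        StartServers          4
        MinSpareThreads      25
        MaxSpareThreads      75
        ThreadsPerChild      25
        MaxClients          400
        MaxRequestsPerChild   0
    </IfModule>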
(This would have been a brilliant plan, if only our site had exploded in popularity at the beginning of the month rather than at the end.)
I'll probably set up a second server within my own account - as my personal server is well under quota, I've got about 600GB I can donate.
And the cost isn't excessive, really… $0.10 per GB is cheap, considering that each and every transaction this server makes will be to serve a paying customer.
EDIT:
> And the cost isn't excessive, really… $0.10 per GB is cheap, considering that each and every transaction this server makes will be to serve a paying customer.
Yeah, but those charges add up… I run a brick-and-mortar business where we accept credit cards. While the transaction fees are small, spread them across an entire month of revenue and thousands of charges and they add up to a lot… It's a bill I'd rather not have to pay. Well, to be honest, I don't mind paying for the service - I just wish the fees were even smaller.