Linode 512 much worse than shared hosting (SOLVED)

Just moved my forum over to the 512 package, and even though we barely have 100 users at any given time, things couldn't be worse. The server itself never passes 4-6% CPU, averages 130-160MB of RAM, and upstream bandwidth sits around 1-200K. I've tried every Apache config I could think of, but nothing seems to work.

Are these packages being oversold?

29 Replies

They most definitely are not being oversold.

It will be a configuration issue, on your part, that is causing the bad performance. Or bad webapp software. Or both.

You didn't tell us anything about your configuration, so there's not much we can do to help you.

Not a chance. Either you're badly misconfigured, or someone on your host is hogging resources extremely badly.

What kind of setup do you have? The tuning wizards here will need a little more information before they can diagnose the problem.

Sorry for the lack of info, been a long few days.

> Mysql

key_buffer = 16M

max_allowed_packet = 64M

thread_stack = 192K

thread_cache_size = 8

myisam-recover = BACKUP

max_connections = 50

table_cache = 1000

table_definition_cache = 1000

thread_concurrency = 12

query_cache_limit = 1M

query_cache_size = 12M

> Apache

StartServers 5

MinSpareServers 5

MaxSpareServers 5

MaxClients 150

MaxRequestsPerChild 1000

Was using the following for apache:

> StartServers 1

MinSpareServers 3

MaxSpareServers 5

MaxClients 50

MaxRequestsPerChild 1000

Same thing.

On the server I'm running vB4, MySQL, Postfix (outgoing only), and Apache.

You probably want to crank MaxClients waaaaay down (~20-25 or so), be sure KeepAliveTimeout is very low (1 second, vs. the default of 15 seconds), and ensure you're using adequate caching. 512 MB is not a huge amount of memory, and Apache's default configuration (for mpm_prefork, required to work around PHP's shortcomings) assumes you have an infinite amount of it.
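To see why the default is deadly on a small node, a quick back-of-envelope check helps (the per-child sizes below are typical ballpark figures, not measurements from the OP's box):

```python
def worst_case_mb(max_clients, mb_per_child):
    """Worst-case resident memory for mpm_prefork: every child busy."""
    return max_clients * mb_per_child

# A PHP-laden prefork child commonly weighs 20-30 MB.
print(worst_case_mb(150, 25))  # default-ish MaxClients: 3750 MB -> swap death on a 512
print(worst_case_mb(20, 25))   # tuned MaxClients: 500 MB -> fits the node
```

Once the worst case exceeds physical RAM, the box swaps, and every request queues behind disk I/O, which is exactly the "barely any CPU, terrible performance" symptom described above.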

Also be sure to use whatever caching is appropriate for your software, and consider mysqltuner.pl to make sure your MySQL configuration is optimal.
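A minimal mpm_prefork stanza along those lines might look like this (the values are illustrative starting points for a 512, not measured optima):

```apacheconf
<IfModule mpm_prefork_module>
    StartServers          3
    MinSpareServers       3
    MaxSpareServers       5
    MaxClients           20
    MaxRequestsPerChild 1000
</IfModule>

KeepAlive          On
KeepAliveTimeout   1
</IfModule-free directives above go in the main config>
```

Wait, ignore that stray last line; the two KeepAlive directives simply live outside the IfModule block:

```apacheconf
KeepAlive          On
KeepAliveTimeout   1
```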

Randomly picking any thread in this forum has a 50% chance of finding a thread about this exact problem. It's an Apache+PHP problem more than a Linode problem, but PHP is a common enough affliction that this issue is addressed every few hours.

What forum software are you using? Are you using any sort of caching?

~JW

@JshWright:

What forum software are you using? Are you using any sort of caching?

~JW

Using Vbulletin which is already a resource hog.

Just upgraded to the 768 today and the same timeout issues exist. At the moment I'm just going through all the error logs in case I've overlooked something obvious.

@johnson46:

Using Vbulletin which is already a resource hog.

Just upgraded to the 768 today and the same timeout issues exist. At the moment I'm just going through all the error logs in case I've overlooked something obvious.

You definitely have something misconfigured.

I run a vBulletin site that generally has 200+ users with 600+ during spikes, and I'm on a 2048…

(wait, I'm not done explaining, I realize you're on a smaller box).

When I first started it up, I noticed the same problems you had. Once you get Apache tuned, you'll have no problems. I'm going to downgrade to a 1024 or a 768 in the near future.

@johnson46:

Using Vbulletin which is already a resource hog.

Just upgraded to the 768 today and the same timeout issues exist. At the moment I'm just going through all the error logs in case I've overlooked something obvious.

Did you follow HoopyCat's instructions to lower your MaxClients significantly?

I'd suggest the following changes:

MaxClients to 20

KeepAliveTimeout to 2

@JshWright:

@johnson46:

Using Vbulletin which is already a resource hog.

Just upgraded to the 768 today and the same timeout issues exist. At the moment I'm just going through all the error logs in case I've overlooked something obvious.

Did you follow HoopyCat's instructions to lower your MaxClients significantly?

I'd suggest the following changes:

MaxClients to 20

KeepAliveTimeout to 2

Performance improved slightly. Now I'm looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.

@johnson46:

Performance improved slightly. Now I'm looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.

are you serving a lot of static content? or is it mostly the php pages?

did you install apc?

@glg:

@johnson46:

Performance improved slightly. Now I'm looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.

are you serving a lot of static content? or is it mostly the php pages?

did you install apc?

I would say it's about 50/50. I have APC & memcache installed, but the problem is way too many requests on each page load. First I'm going with nginx as a frontend for the static content, and possibly setting up a file server (or CDN) for the images. I'll post back with my results.

I am anxiously awaiting the solution to this problem.

I currently have shared hosting at HostGator and a VPS here at Linode. I had started with a 768 but moved to a 512 after Linode changed the resources included with each.

The 768 was overkill and the 512 continues to be much faster than anything that I have ever had with HostGator.

In my experience, the VPSes here respond as if they have very little load.

Jeff

Finally found the issue: my firewall rules for SYN attacks were a bit too restrictive. I've used a similar rule on other forums & blogs, but it seems vB needed a bit more wiggle room. Still plan on adding nginx as a reverse proxy and PHP-FPM for Apache. I'll try to compile a mini how-to with links to a few sources that helped me along the way.
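For anyone hitting the same wall: the OP didn't post the exact rule, but an overly tight SYN rate limit typically looks like the first pair below (all numbers here are hypothetical). A chatty app like vBulletin, which fires many parallel requests per page, blows through a small burst allowance and the extra SYNs get dropped, which the browser experiences as timeouts.

```sh
# Too strict for a busy forum: legitimate connection bursts get dropped
#   iptables -A INPUT -p tcp --syn -m limit --limit 1/s --limit-burst 3 -j ACCEPT
#   iptables -A INPUT -p tcp --syn -j DROP

# More forgiving (illustrative values only; tune to your traffic):
iptables -A INPUT -p tcp --syn -m limit --limit 25/s --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP
```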

@johnson46:

Performance improved slightly. Now I'm looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.

Consider using Squid as a front-end transparent proxy/cache, forwarding dynamic requests to Apache.

Apache processes tend to be large and have lingering closes which can swamp a server.

By letting a front end proxy handle the static stuff, Apache is freed up to do the few (percentage wise) dynamic requests.

On our servers, over 90% of requests to our servers are handled by Squid and never make it to Apache.

Or - even better - install varnish to serve all static requests and let nginx/apache2 serve dynamic requests. Squid just seems like a bad fix for a small problem.

@exiges:

@johnson46:

Performance improved slightly. Now I'm looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.

Consider using Squid as a front-end transparent proxy/cache, forwarding dynamic requests to Apache.

Apache processes tend to be large and have lingering closes which can swamp a server.

By letting a front end proxy handle the static stuff, Apache is freed up to do the few (percentage wise) dynamic requests.

On our servers, over 90% of requests to our servers are handled by Squid and never make it to Apache.

I'll look into it. Thanks for the recommendation.

Now that I've had a chance to remove a lot of the bloat left over from shared hosting, the forum is performing very well.

I'm still looking to scrap Apache altogether and replace it with nginx + php-fpm + memcache + APC. The tests so far have exceeded all expectations.

Here is my test set up:

Nginx - No Cache (Still looking for a solution)

PHP-FPM - APC

Vbulletin - Memcache

Wouldn't mind having squid or varnish in front to handle all non cached static requests.

Just glad it wasn't the lower hosting packages being overloaded. I've been a fan of Linode and this setup for a while now, and am pleased with the results.

Don't bother sticking anything in front of nginx. Nginx is perfectly capable of handling static requests very efficiently.

@JshWright:

Don't bother sticking anything in front of nginx. Nginx is perfectly capable of handling static requests very efficiently.

I'll test out both scenarios. I assume the following would be optimal:

Varnish --(Static)-> Nginx

Nginx --(Dynamic)-> php-fpm(apc)

Vbulletin -> Memcache

A static request coming in should be handled directly by Nginx.

A dynamic request would be proxied by Nginx to a PHP FastCGI process.
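A sketch of that split in nginx terms (the docroot path and FastCGI address are assumptions; adjust to your layout):

```nginx
server {
    listen 80;
    root /var/www/forum;

    # Static files: served straight off disk by nginx, with far-future expiry
    location ~* \.(css|js|gif|jpe?g|png|ico)$ {
        expires 30d;
    }

    # Dynamic requests: handed to the PHP FastCGI (php-fpm) pool
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

Nothing else needs to sit in front: nginx answers the static locations itself and only the `.php` locations ever touch PHP.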

@JshWright:

A static request coming in should be handled directly by Nginx.

A dynamic request would be proxied by Nginx to a PHP FastCGI process.

Posted the static order incorrectly :) Meant to put varnish ahead of nginx.

You won't gain anything by putting varnish in front of nginx.

@JshWright:

You won't gain anything by putting varnish in front of nginx.

Hopefully not so I have one less server to worry about.

I would have to agree. nginx is amazingly fast. I use it in front of tomcat on a couple sites to serve static content. I once did a load test using JMeter from my home machine and got several thousand requests per second with no appreciable CPU usage on the machine.

@vegardx:

Or - even better - install varnish to serve all static requests and let nginx/apache2 serve dynamic requests. Squid just seems like a bad fix for a small problem.

I wouldn't say it's a "bad fix" if the problem is a lack of static content caching. Overkill perhaps, as Squid is a very sophisticated bit of software, not just a cache: it can be used as a load balancer, to map subdomain requests to other ports, etc. If you think you may need some of those features in the future, it may be good to start out with Squid.

For simple caching though, varnish may be the simplest route.

@johnson46:

Here is my test set up:

Nginx - No Cache (Still looking for a solution)

PHP-fqm - APC

Vbulletin - Memcache

Wouldn't mind having squid or varnish in front to handle all non cached static requests.

I've noticed Nginx now has caching, and some say that it's faster than Varnish at serving static content..

http://wiki.nginx.org/NginxHttpProxyModule#proxy_cache_path
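For the curious, the proxy-cache feature linked above works roughly like this when nginx fronts a backend (the directive names come from that module; the paths, sizes, and backend address are placeholders):

```nginx
http {
    # On-disk cache: 10 MB of keys in shared memory, 200 MB of cached bodies
    proxy_cache_path /var/cache/nginx levels=1:2
                     keys_zone=static:10m max_size=200m inactive=60m;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;   # backend, e.g. Apache
            proxy_cache static;                 # use the zone defined above
            proxy_cache_valid 200 60m;          # cache successful responses for an hour
        }
    }
}
```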

Another reason nginx is best up front is that it handles gzip compression properly.

What exactly does varnish/squid do that's advantageous to put it before nginx? It caches stuff in memory.

What does the kernel do to disk accesses from nginx if it serves stuff directly? It caches stuff in memory.

So, I doubt there's any real advantage to putting varnish in front of nginx on the same box. Now, if Varnish is on a separate box that does nothing but varnish, sure, that can take some load off. But on the same machine, it's pointless.

@Guspaz:

What exactly does varnish/squid do that's advantageous to put it before nginx?

None, but there's a good reason for putting it in front of Apache (as per the OP).

IIRC, nginx can't act as a transparent cache/proxy

@exiges:

IIRC, nginx can't act as a transparent cache/proxy

Sure it can. Running nginx as a reverse proxy in front of Apache is a reasonably common deployment.
