Linode 512 much worse than shared hosting (SOLVED)
Are these packages being oversold?
29 Replies
It's most likely a configuration issue on your part that's causing the bad performance. Or bad webapp software. Or both.
You didn't tell us anything about your configuration, so there's not much we can do to help you.
What kind of setup do you have? The tuning wizards here will need a little more information before they can diagnose anything.
> Mysql
key_buffer = 16M
max_allowed_packet = 64M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 50
table_cache = 1000
table_definition_cache = 1000
thread_concurrency = 12
query_cache_limit = 1M
query_cache_size = 12M
> Apache
StartServers 5
MinSpareServers 5
MaxSpareServers 5
MaxClients 150
MaxRequestsPerChild 1000
Was using the following for apache:
> StartServers 1
MinSpareServers 3
MaxSpareServers 5
MaxClients 50
MaxRequestsPerChild 1000
Same thing.
On the server I'm running VB4, Mysql, Postfix (outgoing only) , Apache.
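For what it's worth, MaxClients 150 is far too high for a 512: each Apache+mod_php process serving vBulletin can easily be tens of MB, and once their combined size exceeds RAM the box swaps and requests time out. A rough sanity check (the per-process size and the memory reserved for MySQL/Postfix/OS are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope MaxClients estimate for a 512 MB Linode
# running Apache + mod_php alongside MySQL and Postfix.
total_ram_mb = 512
reserved_mb = 150          # MySQL, Postfix, OS caches (assumed)
apache_process_mb = 40     # typical mod_php process with vBulletin (assumed)

max_clients = (total_ram_mb - reserved_mb) // apache_process_mb
print(max_clients)  # 9 -- nowhere near the default MaxClients of 150
```

Even with generous assumptions, the result is an order of magnitude below 150, which is why lowering MaxClients is usually the first fix suggested.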
Also be sure to use whatever caching is appropriate for your software, and consider mysqltuner.pl
Randomly picking any thread in this forum has a 50% chance of finding a thread about this exact problem. It's an Apache+PHP problem more than a Linode problem, but PHP is a common enough affliction that this issue is addressed every few hours.
~JW
@JshWright:
What forum software are you using? Are you using any sort of caching?
~JW
Using Vbulletin which is already a resource hog.
Just upgraded to the 768MB today and the same timeout issues exist. At the moment I'm going through all the error logs in case I've overlooked something obvious.
@johnson46:
Using Vbulletin which is already a resource hog.
Just upgraded to the 768MB today and the same timeout issues exist. At the moment I'm going through all the error logs in case I've overlooked something obvious.
You definitely have something misconfigured.
I run a vBulletin site that generally has 200+ users with 600+ during spikes, and I'm on a 2048…
(wait, I'm not done explaining, I realize you're on a smaller box).
When I first started it up, I noticed the same problems you had. Once you get Apache tuned, you'll have no problems. I'm going to downgrade to a 1024 or a 768 in the near future.
@johnson46:
Using Vbulletin which is already a resource hog.
Just upgraded to the 768MB today and the same timeout issues exist. At the moment I'm going through all the error logs in case I've overlooked something obvious.
Did you follow HoopyCat's instructions to lower your MaxClients significantly?
I'd suggest the following changes:
MaxClients to 20
KeepAliveTimeout to 2
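In the prefork section of the Apache config, that suggestion would look something like the following; the directives other than MaxClients and KeepAliveTimeout are carried over as illustrative values, not recommendations:

```apache
# Hypothetical prefork tuning for a small Linode. Only MaxClients and
# KeepAliveTimeout reflect the suggestion above; the rest are examples.
KeepAlive On
KeepAliveTimeout 2

<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       5
    MaxClients           20
    MaxRequestsPerChild 1000
</IfModule>
```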
@JshWright:
@johnson46:Using Vbulletin which is already a resource hog.
Just upgraded to the 768MB today and the same timeout issues exist. At the moment I'm going through all the error logs in case I've overlooked something obvious.
Did you follow HoopyCat's instructions to lower your MaxClients significantly?
I'd suggest the following changes:
MaxClients to 20
KeepAliveTimeout to 2
Performance improved slightly. Just looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.
@johnson46:
Performance improved slightly. Just looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.
are you serving a lot of static content? or is it mostly the php pages?
did you install apc?
@glg:
@johnson46:Performance improved slightly. Just looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.
are you serving a lot of static content? or is it mostly the php pages?
did you install apc?
I would say it's about 50/50. I have APC & memcache installed, but the problem is way too many requests on each page load. First I'm going with nginx as a frontend for the static content, and possibly installing a file server (or CDN) for the images. I'll post back with my results.
I currently have shared hosting at HostGator and a VPS here at Linode. I had started with a 768 but moved to a 512 after Linode changed the resources included with each.
The 768 was overkill and the 512 continues to be much faster than anything that I have ever had with HostGator.
In my experience, the VPS here respond as if they have very little load.
Jeff
@johnson46:
Performance improved slightly. Just looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.
Consider using Squid as a front-end transparent proxy/cache, forwarding dynamic requests to Apache.
Apache processes tend to be large and have lingering closes which can swamp a server.
By letting a front end proxy handle the static stuff, Apache is freed up to do the few (percentage wise) dynamic requests.
On our servers, over 90% of requests to our servers are handled by Squid and never make it to Apache.
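A minimal sketch of that kind of accelerator setup in squid.conf, assuming Apache has been moved to port 8080 on the same box; the hostname is hypothetical:

```squid
# Squid listens on port 80 as an httpd accelerator and forwards
# cache misses to Apache on 127.0.0.1:8080.
http_port 80 accel defaultsite=www.example.com

cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache

acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access apache allow our_site
```

Static responses with sensible Expires/Cache-Control headers are then served from Squid's cache, and only uncached or dynamic requests reach Apache.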
@exiges:
@johnson46:Performance improved slightly. Just looking into other ways to reduce the number of requests per page load on the server. Nginx as a reverse proxy might be the solution.
Consider using Squid as a front-end transparent proxy/cache, forwarding dynamic requests to Apache.
Apache processes tend to be large and have lingering closes which can swamp a server.
By letting a front end proxy handle the static stuff, Apache is freed up to do the few (percentage wise) dynamic requests.
On our servers, over 90% of requests to our servers are handled by Squid and never make it to Apache.
I'll look into it. Thanks for the recommendation.
Now that I've had a chance to remove a lot of the bloat left over while on shared hosting, the forum is performing very well.
I'm still looking to scrap Apache altogether and replace it with nginx + php-fpm + memcache + APC. The tests so far have exceeded all expectations.
Here is my test set up:
Nginx - No Cache (Still looking for a solution)
PHP-FPM - APC
Vbulletin - Memcache
Wouldn't mind having squid or varnish in front to handle all non cached static requests.
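A minimal nginx server block for that kind of setup might look like the following; the domain, document root, and PHP-FPM socket path are assumptions:

```nginx
server {
    listen 80;
    server_name forum.example.com;
    root /var/www/forum;
    index index.php;

    # nginx serves static files directly, with long expiry headers
    location ~* \.(css|js|gif|jpe?g|png|ico)$ {
        expires 30d;
    }

    # dynamic requests are handed to PHP-FPM over FastCGI
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```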
Just glad it wasn't the lower hosting packages being overloaded. I've been a fan of Linode and this setup for a while now, and am pleased with the results.
@JshWright:
Don't bother sticking anything in front of nginx. Nginx is perfectly capable of handling static requests very efficiently.
I'll test out both scenarios. I assume the following would be optimal:
Varnish --(Static)-> Nginx
Nginx --(Dynamic)-> php-fpm(apc)
Vbulletin -> Memcache
A dynamic request would be proxied by Nginx to a PHP FastCGI process.
@JshWright:
A static request coming in should be handled directly by Nginx.
A dynamic request would be proxied by Nginx to a PHP FastCGI process.
I posted the static-request flow in the wrong order above.
@JshWright:
You won't gain anything by putting varnish in front of nginx.
Hopefully not, so I have one less server to worry about.
@vegardx:
Or - even better - install varnish to serve all static requests and let nginx/apache2 serve dynamic requests. Squid just seems like a bad fix for a small problem.
I wouldn't say it's a "bad fix" if the problem is a lack of static content caching. Overkill, perhaps, as Squid is a very sophisticated bit of software, not just a cache: it can be used as a load balancer, it can map subdomain requests to other ports, etc. If you think you may need some of those features in the future, it may be good to start out with Squid.
For simple caching though, varnish may be the simplest route.
@johnson46:
Here is my test set up:
Nginx - No Cache (Still looking for a solution)
PHP-FPM - APC
Vbulletin - Memcache
Wouldn't mind having squid or varnish in front to handle all non cached static requests.
I've noticed Nginx now has caching, and some say it's faster than Varnish at serving static content.
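The nginx caching in question would be its proxy_cache/fastcgi_cache support; a sketch of caching PHP-FPM responses with it (the zone name, cache path, socket path, and expiry times are assumptions):

```nginx
# In the http {} block: define a cache zone on disk with an in-memory key index.
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=phpcache:16m;

# Inside the server {} block: cache successful PHP responses briefly.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php-fpm.sock;

    fastcgi_cache phpcache;
    fastcgi_cache_key $scheme$host$request_uri;
    fastcgi_cache_valid 200 1m;
}
```

Note this caches dynamic output, which may not be appropriate for logged-in forum users; static files need no cache at all since nginx serves them straight off disk.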
What does the kernel do to disk accesses from nginx if it serves stuff directly? It caches stuff in memory.
So, I doubt there's any real advantage to putting varnish in front of nginx on the same box. Now, if Varnish is on a separate box that does nothing but varnish, sure, that can take some load off. But on the same machine, it's pointless.
@Guspaz:
What exactly does varnish/squid do that's advantageous to put it before nginx?
None, but there's a good reason for putting it in front of Apache (as per OP).
IIRC, nginx can't act as a transparent cache/proxy
@exiges:
IIRC, nginx can't act as a transparent cache/proxy
Sure it can. Running Nginx as a reverse proxy in front of Apache is a reasonably common deployment.