Intermittent 504s Using Nginx + PHP FastCGI

I have my Linode 360 set up to run Nginx and PHP FastCGI. I have around a dozen low-traffic Drupal sites and the total memory usage usually sits at around 160MB.

For the most part it's very fast and responsive. However, roughly every tenth page load returns a 504 Gateway Timeout. Since there's very little load on the server, I have to assume something in my configuration is less than optimal: perhaps too many (or too few) processes, or a buffer size setting that's off.
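
For what it's worth, I believe these are the Nginx directives that control how long it waits on the FastCGI backend before answering with a 504. I haven't set any of them, so the defaults below should apply (and raising them would probably just hide the real problem):

fastcgi_connect_timeout 60s;   # time allowed to establish a connection to the backend
fastcgi_send_timeout    60s;   # time allowed between two successive writes to the backend
fastcgi_read_timeout    60s;   # time allowed between two successive reads from the backend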

Here's my Nginx config:

user www-data;
worker_processes  2;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    server_names_hash_bucket_size 128;

    access_log  /var/log/nginx/access.log;

    sendfile        on;
    tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;
    gzip_comp_level 5;
    gzip_http_version 1.0;
    gzip_min_length 0;
    gzip_types text/plain text/html text/css image/x-icon application/x-javascript;
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

A typical per-site Nginx config looks like this:

server {
        listen 80;
        server_name www.findmud.com;
        rewrite ^/(.*) http://findmud.com/$1 permanent;
}

server {
        listen 80;
        server_name findmud.com;
        access_log /home/zeta/public_html/findmud.com/log/access.log;
        error_log /home/zeta/public_html/findmud.com/log/error.log;

        location / {
            root /home/zeta/public_html/findmud.com/public;
            index index.php index.html index.htm;
            if (!-e $request_filename) {
                rewrite ^/(.*)$ /index.php?q=$1 last;
            }
        }

        location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/zeta/public_html/findmud.com/public$fastcgi_script_name;
                include /etc/nginx/fastcgi.conf;
                fastcgi_param QUERY_STRING $query_string;
                fastcgi_param REQUEST_METHOD $request_method;
                fastcgi_param CONTENT_TYPE $content_type;
                fastcgi_param CONTENT_LENGTH $content_length;
        }

        location ~* \.(jpg|jpeg|gif|css|png|js|ico|pdf|zip|exe)$ {
            root /home/zeta/public_html/findmud.com/public;  # needed here too; root is otherwise only set inside location /
            access_log off;
            expires 30d;
        }

}

Here's my PHP FastCGI config:

START=yes
EXEC_AS_USER=(not shown)
FCGI_HOST=localhost
FCGI_PORT=9000
PHP_FCGI_CHILDREN=1
PHP_FCGI_MAX_REQUESTS=1000

PHP's memory_limit is set to 48M, and the server never really drops below 180MB free, so I'm clearly not running out of RAM.

What can I do to reduce the likelihood of a gateway timeout? If there are any other config files I should post please let me know.

4 Replies

@Xangis:

Here's my PHP FastCGI config:

...
PHP_FCGI_CHILDREN=1 
...

You only have one PHP process available to serve PHP pages. I'd try at least as many as you have Nginx worker processes (= 2).

See if that helps.
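
For example, in the config file you posted, that would just be (a sketch; tune from there):

PHP_FCGI_CHILDREN=2            # one PHP child per Nginx worker as a starting point
PHP_FCGI_MAX_REQUESTS=1000     # unchanged; each child is recycled after this many requests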

Thanks for the tip; I knew it'd be something obvious to someone who wasn't me.

As a general rule of thumb, should the number of children equal the number of Nginx workers, or is it OK to go higher (e.g., 4 or 8 if I have the extra RAM lying around)?

I'd run more Nginx processes than PHP processes, since not every HTTP request needs PHP processing.

I run about 10 Nginx worker instances and maybe 6 or 8 PHP instances; that works pretty well.

@oliver:

I'd run more Nginx processes than PHP processes, since not every HTTP request needs PHP processing.

I run about 10 Nginx worker instances and maybe 6 or 8 PHP instances; that works pretty well.

You're thinking in terms of Apache's prefork MPM, where each process can serve only one request at a time.

Nginx uses asynchronous, event-driven I/O and can handle far more connections per process. I have a Linode 720 running a moderately sized Drupal site (around 2.5 million page views / 70 million HTTP requests a month). With HTTP/1.1 keepalive set to 15 seconds, it currently has 189 connections hanging off it, and it runs on 5 PHP FastCGI processes and… a single Nginx worker process.

Since Nginx doesn't block on I/O, you don't need more than one worker process unless it's saturating a single CPU core.
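
As a rough sketch of that split (the numbers are illustrative, not tuned for your workload):

worker_processes  1;           # Nginx: one event-driven worker serves many connections

# and in the PHP FastCGI config:
PHP_FCGI_CHILDREN=5            # PHP blocks per request, so scale children with concurrent PHP work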
