NGINX 1.4 News: SPDY and PageSpeed

NGINX 1.4 arrives with experimental SPDY support, WebSocket proxying, and a gunzip filter, while Google's PageSpeed module for NGINX enters beta.

Since the release of NGINX 1.3 in May 2012, the community behind this growing web server has worked on a number of new features, most notably:

  • Experimental SPDY support. The module must be enabled with the --with-http_spdy_module configuration parameter; server push is not yet supported.
  • WebSocket proxying: HTTP/1.1 connections can be upgraded to WebSocket connections using HTTP/1.1’s protocol switching mechanism.
  • A gunzip filter for decompressing gzipped responses when the client does not support compression.
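In configuration terms, the three features look roughly like the following sketch (the certificate paths, location names and the backend upstream are placeholders, not part of any stock configuration):

```nginx
server {
    # Experimental SPDY – only available when NGINX was built
    # with --with-http_spdy_module
    listen 443 ssl spdy;
    ssl_certificate     /etc/ssl/example.pem;    # placeholder
    ssl_certificate_key /etc/ssl/example.key;    # placeholder

    # WebSocket proxying: forward HTTP/1.1 Upgrade handshakes
    location /chat/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://backend;               # placeholder upstream
    }

    # Decompress gzipped upstream responses for clients
    # that cannot gunzip themselves
    location /legacy/ {
        gunzip on;
        proxy_pass http://backend;
    }
}
```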

The complete list of enhancements, changes and bug fixes can be found in the Changes 1.4 log.

On the heels of the NGINX 1.4 release, Google has announced the beta of PageSpeed for NGINX, making over 40 optimization filters available to its users, such as image compression, JavaScript and CSS minification, and HTML rewriting.
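Enabling the module takes only a few directives; a minimal, illustrative configuration (the cache path and the filter selection are examples, not recommendations):

```nginx
pagespeed on;
# the cache path must exist and be writable by the nginx worker user
pagespeed FileCachePath /var/ngx_pagespeed_cache;
# a small, illustrative subset of the 40+ available filters
pagespeed EnableFilters combine_css,rewrite_javascript,recompress_images;
```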

According to Google, the ngx_pagespeed module has already been used in production by customers including MaxCDN, a CDN provider, which reported a reduction of “1.57 seconds from our average page load, dropped our bounce rate by 1%, and our exit percentage by 2.5%”. ZippyKid, a WordPress hosting provider, saw a “75% reduction in page sizes and a 50% improvement in page rendering speeds” after deploying PageSpeed for NGINX.

NGINX is currently #3 web server overall with 16.2% market share, and #2 with 31.9% of top 10,000 websites, according to W3Techs.

(via infoq.com)

Using NGINX, PHP-FPM+APC and Varnish to make WordPress Websites fly

WordPress is one of the most popular content platforms on the Internet. It powers a large share of newly launched websites and has a huge user community. Running a WordPress site enables editors to easily and regularly publish content which might very well end up on Hacker News – so let us make sure the web server is up to its job!

This setup will easily let you serve hundreds of thousands of users per day, including brief peaks of up to 100 concurrent users, on a ~$15/month machine with only 2 cores and 2GB of RAM.

Outrageous Assumptions

This guide assumes that you have a working knowledge of content management systems, web servers and their configurations. Additionally, you should be familiar with installing, starting and stopping services on a remote server via ssh. In short: if you know how to work a CLI, you’ll be fine.

Step 0/9: Use the Source, or: Enjoy the Repository

If you don’t need to adhere to a strict company security policy, use the Dotdeb repository for Debian to gain access to newer versions of NGINX and PHP. For Varnish, get the Varnish Debian repository. They are the easiest to use.

Step 1/9: The Little Engine that Could

Let’s start by firing up your favorite package management tool, potentially with Super Cow Powers, and install NGINX. The following configuration optimizes NGINX for WordPress. Put it into /etc/nginx/nginx.conf to make NGINX use both CPU cores, do proper gzipping and more. Note: single-line configurations shown here may extend over multiple lines for readability – take care when copying.

user www-data;
worker_processes 2;
pid /var/run/nginx.pid;

events {
    worker_connections 768;
    multi_accept on;
    use epoll;
}

http {

    # Let NGINX get the real client IP for its access logs
    set_real_ip_from 127.0.0.1;
    real_ip_header X-Forwarded-For;

    # Basic Settings
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 20;
    client_max_body_size 15m;
    client_body_timeout 60;
    client_header_timeout 60;
    client_body_buffer_size  1K;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
    send_timeout 60;
    reset_timedout_connection on;
    types_hash_max_size 2048;
    server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Logging Settings
    # access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Log Format
    log_format main '$remote_addr - $remote_user [$time_local] '
    '"$request" $status $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';

    # Gzip Settings
    gzip on;
    gzip_static on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_min_length 512;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/css text/javascript text/xml text/plain text/x-component 
    application/javascript application/x-javascript application/json 
    application/xml  application/rss+xml font/truetype application/x-font-ttf 
    font/opentype application/vnd.ms-fontobject image/svg+xml;

    # Virtual Host Configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

In case your site will make use of custom webfonts, NGINX may need some help to deliver them with the correct MIME type – otherwise it will default to application/octet-stream. Edit /etc/nginx/mime.types and check that the webfont types are set to the following:

    application/vnd.ms-fontobject           eot;
    application/x-font-ttf                  ttf;
    font/opentype                           otf;
    application/font-woff                   woff;

Step 2/9: Who Enters my Domain?

Now that you have done the basic setup for NGINX, let’s make it play nicely with your domain: remove the symlink to default from /etc/nginx/sites-enabled/, create a new configuration file at /etc/nginx/sites-available/ named yourdomain.tld and symlink to it at /etc/nginx/sites-enabled/yourdomain.tld. NGINX will use the new configuration file, which we will fill now. Put the following into /etc/nginx/sites-available/yourdomain.tld:

server {
    # Default server block blacklisting all unconfigured access
    listen [::]:8080 default_server;
    server_name _;
    return 444;
}

server {
    # Configure the domain that will run WordPress
    server_name yourdomain.tld;
    listen [::]:8080 deferred;
    port_in_redirect off;
    server_tokens off;
    autoindex off;

    client_max_body_size 15m;
    client_body_buffer_size 128k;

    # WordPress needs to be in the webroot of /var/www/ in this case
    root /var/www;
    index index.html index.htm index.php;
    try_files $uri $uri/ /index.php?q=$uri&$args;

    # Define default caching of 24h
    expires 86400s;
    add_header Pragma public;
    add_header Cache-Control "max-age=86400, public, must-revalidate, proxy-revalidate";

    # deliver a static 404
    error_page 404 /404.html;
    location  /404.html {
        internal;
    }

    # Deliver 404 instead of 403 "Forbidden"
    error_page 403 = 404;

    # Do not allow access to files giving away your WordPress version
    location ~ /(\.|wp-config\.php|readme\.html|license\.txt) {
        return 404;
    }

    # Add trailing slash to */wp-admin requests.
    rewrite /wp-admin$ $scheme://$host$uri/ permanent;

    # Don't log robots.txt requests
    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    # Rewrite for versioned CSS+JS via filemtime
    location ~* ^.+\.(css|js)$ {
        rewrite ^(.+)\.(\d+)\.(css|js)$ $1.$3 last;
        expires 31536000s;
        access_log off;
        log_not_found off;
        add_header Pragma public;
        add_header Cache-Control "max-age=31536000, public";
    }

    # Aggressive caching for static files
    # If you alter static files often, please use 
    # add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate";
    location ~* \.(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|
    gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|
    mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|ogv|otf|pdf|png|pot|pps|
    ppt|pptx|ra|ram|svg|svgz|swf|tar|t?gz|tif|tiff|ttf|wav|webm|wma|woff|
    wri|xla|xls|xlsx|xlt|xlw|zip)$ {
        expires 31536000s;
        access_log off;
        log_not_found off;
        add_header Pragma public;
        add_header Cache-Control "max-age=31536000, public";
    }

    # pass PHP scripts to Fastcgi listening on Unix socket
    # Do not process them if inside WP uploads directory
    # If using Multisite or a custom uploads directory,
    # please set the */uploads/* directory in the regex below
    location ~* (^(?!(?:(?!(php|inc)).)*/uploads/).*?(php)) {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass unix:/var/run/php-fpm.socket;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }

}

# Redirect all www. queries to non-www
# Change in case your site is to be available at "www.yourdomain.tld"
server {
    listen [::]:8080;
    server_name www.yourdomain.tld;
    rewrite ^ $scheme://yourdomain.tld$request_uri? permanent;
}

This configuration is for a single-site WordPress install in the /var/www/ webroot with a media upload limit of 15MB. Read the comments within the configuration if you want to change things: nginx -t is your friend when altering configuration files.
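The symlink dance from the beginning of this step can be sketched as follows. The commands below use a scratch directory named nginx-demo so they are safe to dry-run anywhere; on a real server, substitute /etc/nginx and reload NGINX afterwards:

```shell
# recreate the sites-available / sites-enabled layout in a scratch directory
mkdir -p nginx-demo/sites-available nginx-demo/sites-enabled
printf 'server { }\n' > nginx-demo/sites-available/yourdomain.tld

# remove the default symlink, then enable the new vhost
rm -f nginx-demo/sites-enabled/default
ln -s ../sites-available/yourdomain.tld nginx-demo/sites-enabled/yourdomain.tld

# on a real server you would now run: nginx -t && service nginx reload
ls -l nginx-demo/sites-enabled/
```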

Step 3/9: Let’s get the Elephant into the Room

Next, we’ll get PHP-FPM on board. Install it any way you like, preferably PHP version 5.4. Just make sure you get APC as well. After a successful installation, we’ll need to edit several configuration files. Let’s start with the well-known /etc/php5/fpm/php.ini. It’s a huge file, so instead of reproducing it here, please go through the following settings line by line and set each to the respective value in your php.ini:

short_open_tag = Off
ignore_user_abort = Off
post_max_size = 15M
upload_max_filesize = 15M
default_charset = "UTF-8"
allow_url_fopen = Off
default_socket_timeout = 30
mysql.allow_persistent = Off

At the very end of your php.ini, add the following block which will configure APC, the opcode cache:

[apc]
apc.max_file_size = "1M"
apc.localcache = "1"
apc.localcache.size = "256"
apc.shm_segments = "1"
apc.ttl = "3600"
apc.user_ttl = "7200"
apc.gc_ttl = "3600"
apc.cache_by_default = "1"
apc.filters = ""
apc.write_lock = "1"
apc.num_files_hint = "512"
apc.user_entries_hint = "4096"
apc.shm_size = "256M"
apc.mmap_file_mask = /tmp/apc.XXXXXX
apc.include_once_override = "0"
apc.file_update_protection = "2"
apc.canonicalize = "1"
apc.report_autofilter = "0"
apc.stat_ctime = "0"
;apc.stat = "0" should be used once you are finished with PHP file changes,
;as you must clear the APC cache to recompile already cached files.
;If you are still developing, set apc.stat to 1.
apc.stat = "0"

In your /etc/php5/fpm/php-fpm.conf, please set the following lines to their respective values:

pid = /var/run/php5-fpm.pid
error_log = /var/log/php5-fpm.log
emergency_restart_threshold = 5
emergency_restart_interval = 2
events.mechanism = epoll

Finally, in /etc/php5/fpm/pool.d/www.conf, do the same procedure again:

user = www-data
group = www-data
listen = /var/run/php-fpm.socket
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 50
pm.start_servers = 15
pm.min_spare_servers = 5
pm.max_spare_servers = 25
pm.process_idle_timeout = 60s
request_terminate_timeout = 30
security.limit_extensions = .php

and add the following to the end of it:

php_flag[display_errors] = off
php_admin_value[error_reporting] = 0
php_admin_value[error_log] = /var/log/php5-fpm.log
php_admin_flag[log_errors] = on
php_admin_value[memory_limit] = 128M

Kudos! You have just configured PHP to run on a high performance UNIX socket, spawn child processes to deal with requests and cache everything so that there’s less load on the system.

Step 4/9: Give Varnish a Polish

At this point, you already have a lean, mean webserving machine – albeit on port 8080. Now imagine ~95% of all your incoming traffic hitting an additional layer above which keeps static content in RAM. PHP won’t even have to process anything most of the time and your database will receive more load from editors adding new content than from queries caused by the frontend. That’s worth the extra mile, right? So let’s go install Varnish!

After installing, edit /etc/default/varnish to make Varnish use both cores, keep static content in RAM and apply proper timeouts:

DAEMON_OPTS="-a :80 \
    -T localhost:6082 \
    -f /etc/varnish/default.vcl \
    -u www-data -g www-data \
    -S /etc/varnish/secret \
    -p thread_pools=2 \
    -p thread_pool_min=25 \
    -p thread_pool_max=250 \
    -p thread_pool_add_delay=2 \
    -p session_linger=50 \
    -p sess_workspace=262144 \
    -p cli_timeout=40 \
    -s malloc,768m"

And for the very last step in this web server stack setup, put the following configuration into /etc/varnish/default.vcl – it’s well commented so you can see what each part does:

# We only have one backend to define: NGINX
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

# Only allow purging from specific IPs      
acl purge {
    "localhost";
    "127.0.0.1";
}

sub vcl_recv {
    # Handle compression correctly. Different browsers send different
    # "Accept-Encoding" headers, even though they mostly support the same
    # compression mechanisms. By consolidating compression headers into
    # a consistent format, we reduce the cache size and get more hits.
    # @see: http://varnish.projects.linpro.no/wiki/FAQ/Compression
    if (req.http.Accept-Encoding) {
        if (req.http.Accept-Encoding ~ "gzip") {
            # If the browser supports it, we'll use gzip.
            set req.http.Accept-Encoding = "gzip";
        }
        else if (req.http.Accept-Encoding ~ "deflate") {
            # Next, try deflate if it is supported.
            set req.http.Accept-Encoding = "deflate";
        }
        else {
            # Unknown algorithm. Remove it and send unencoded.
            unset req.http.Accept-Encoding;
        }
    }

    # Set client IP
    if (req.http.x-forwarded-for) {
        set req.http.X-Forwarded-For =
        req.http.X-Forwarded-For + ", " + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }

    # Check if we may purge (only localhost)
    if (req.request == "PURGE") {
        if (!client.ip ~ purge) {
            error 405 "Not allowed.";
        }
        return(lookup);
    }

    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE") {
            # /* Non-RFC2616 or CONNECT which is weird. */
            return (pipe);
    }

    if (req.request != "GET" && req.request != "HEAD") {
        # /* We only deal with GET and HEAD by default */
        return (pass);
    }

    # admin users always miss the cache
    if( req.url ~ "^/wp-(login|admin)" || 
        req.http.Cookie ~ "wordpress_logged_in_" ){
            return (pass);
    }

    # Remove cookies set by Google Analytics (pattern: '__utmABC')
    if (req.http.Cookie) {
        set req.http.Cookie = regsuball(req.http.Cookie,
            "(^|; ) *__utm.=[^;]+;? *", "\1");
        if (req.http.Cookie == "") {
            remove req.http.Cookie;
        }
    }

    # always pass through POST requests and those with basic auth
    if (req.http.Authorization || req.request == "POST") {
        return (pass);
    }

    # Do not cache these paths
    if (req.url ~ "^/wp-cron\.php$" ||
        req.url ~ "^/xmlrpc\.php$" ||
        req.url ~ "^/wp-admin/.*$" ||
        req.url ~ "^/wp-includes/.*$" ||
        req.url ~ "\?s=") {
            return (pass);
    }

    # Define the default grace period to serve cached content
    set req.grace = 30s;

    # By ignoring any other cookies, it is now ok to get a page
    unset req.http.Cookie;
    return (lookup);
}

sub vcl_fetch {
    # remove some headers we never want to see
    unset beresp.http.Server;
    unset beresp.http.X-Powered-By;

    # only allow cookies to be set if we're in admin area
    if( beresp.http.Set-Cookie && req.url !~ "^/wp-(login|admin)" ){
        unset beresp.http.Set-Cookie;
    }

    # don't cache response to posted requests or those with basic auth
    if ( req.request == "POST" || req.http.Authorization ) {
        return (hit_for_pass);
    }

    # don't cache search results
    if( req.url ~ "\?s=" ){
        return (hit_for_pass);
    }

    # If our backend returns a 500 status, this will reset the grace time
    # set in vcl_recv so that cached content will be served and
    # the unhealthy backend will not be hammered by requests
    if (beresp.status == 500) {
        set beresp.grace = 60s;
        return (restart);
    }

    # only cache status ok (this check must come after the 500 check,
    # otherwise the restart branch above would be unreachable)
    if ( beresp.status != 200 ) {
        return (hit_for_pass);
    }

    # GZip the cached content if possible
    if (beresp.http.content-type ~ "text") {
        set beresp.do_gzip = true;
    }

    # if nothing above matched it is now ok to cache the response
    set beresp.ttl = 24h;
    return (deliver);
}

sub vcl_deliver {
    # remove some headers added by varnish
    unset resp.http.Via;
    unset resp.http.X-Varnish;
}

sub vcl_hit {
    # Set up invalidation of the cache so purging gets done properly
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (deliver);
}

sub vcl_miss {
    # Set up invalidation of the cache so purging gets done properly
    if (req.request == "PURGE") {
        purge;
        error 200 "Purged.";
    }
    return (fetch);
}

sub vcl_error {
    if (obj.status == 503) {
        # set obj.http.location = req.http.Location;
        set obj.status = 404;
        set obj.response = "Not Found";
        return (deliver);
    }
}

This configuration will deliver high speed, gzipped content, caching aggressively while letting authenticated backend users view the live site uncached. It also protects the stack from unwelcome traffic while allowing cache purges from certain sources.
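For illustration, a purge issued from the machine itself is just an HTTP request using the PURGE method – the path below is hypothetical:

```
PURGE /hello-world/ HTTP/1.1
Host: yourdomain.tld
```

With the ACL above, Varnish answers 200 “Purged.” when the request comes from an address in the purge ACL, and 405 “Not allowed.” otherwise.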

Step 5/9: Code Is Poetry

If you didn’t already have WordPress installed before, do it now. There’s plenty of good guidance on this already. Should you come across odd reloads of the same step during install, restart the three services NGINX, PHP-FPM and Varnish on your machine.

After successful installation of WordPress, we’ll need to make it talk to the high performance webserver stack we’ve just built for it. Let’s start by reading, understanding and installing the NGINX compatibility plugin for WordPress. Don’t forget to follow the instructions on the plugin site. Now, WordPress can deal with NGINX.

Next, let’s make WordPress work with PHP’s opcode cache APC: read, understand and then install the Object-Cache plugin for WordPress.

The final step is to empower WordPress so it can purge the Varnish cache. Again, read, understand and only then install Pål-Kristian Hamre’s Varnish plugin for WordPress. Afterwards, go to its configuration page within the WordPress admin interface and set the following parameters:

Varnish Administration IP Address: 127.0.0.1
Varnish Administration Port: 80
Varnish Secret: (get it at /etc/varnish/secret)
Check: "Also purge all page navigation" and "Also purge all comment navigation"
Varnish Version: 3

And that’s all concerning the high performance webserver stack, folks! Your WordPress will now handle a great deal of traffic while keeping TTFB to a minimum. The times of high CPU load and disk I/O troubles are now past.

The Performance Golden Rule

Steve Souders’ Performance Golden Rule states that most performance gains can be achieved by optimizing the frontend rather than scaling the backend to deal with a huge number of HTTP requests. Or, as Patrick Meenan puts it, one should always question “the need to make the requests in the first place”. So let’s not stop at our high performance web server, but continue with a high performance frontend!

Step 6/9: Bust that Cache Wide Open!

If you have read the configuration files for NGINX, you will have noticed some lines about a cache-buster for CSS and JS files. It alters the link to your theme’s unified stylesheet and JS to something like style.1352707822.css, in which the number is the filemtime. Whenever you alter your files, their apparent filename within the site changes, so clients will download the newest version of those files without you having to alter any paths manually. Simply place the following into your theme’s functions.php:

/**
 * Automated cache-buster function via filemtime
 * Expects a URL whose path exists below DOCUMENT_ROOT and
 * echoes it with the file's mtime injected before the extension
 **/
function autoVer($url){
  $name = explode('.', $url);
  $lastext = array_pop($name);
  // insert the file modification time right before the extension
  array_push(
    $name,
    filemtime($_SERVER['DOCUMENT_ROOT'] . parse_url($url, PHP_URL_PATH)),
    $lastext);
  echo implode('.', $name);
}

Now use the autoVer function wherever you output CSS and JS URLs. Note that autoVer() echoes its result instead of returning it, so for APIs that expect a string – such as wp_register_style() – capture the output first:

# Add in your theme's functions.php:
ob_start();
autoVer(get_bloginfo('stylesheet_url'));
wp_register_style('style', ob_get_clean(), false, NULL, 'all');

# Add in your theme's header.php (inside the href attribute):
autoVer(get_bloginfo('stylesheet_url'))

Step 7/9: Avoid the wicked @import for WordPress child themes

WordPress child themes rely on @import in their CSS file to pull in the styles of their parent theme. Sadly, @import is a wicked load-blocker. Here’s a simple trick: if it’s a child theme, include two stylesheets via link in the website’s header, because stylesheets referenced via link can download in parallel. Remove the @import from your child theme’s CSS file and place the following PHP snippet into your header.php where you link your stylesheets:

<?php
    /*
     * Circumvent @import CSS for WordPress child themes
     * If we're in a child theme, build links for both parent and child CSS
     * This way, we can remove the @import from the child theme's style.css
     * CSS loaded via link can load simultaneously, while @import blocks loading
     * See: http://www.stevesouders.com/blog/2009/04/09/dont-use-import/
     */
    if(is_child_theme()) {
        echo '<link rel="stylesheet" href="';
        autoVer(get_bloginfo('template_url').'/style.css');
        echo '" />'."\n\t\t";
        echo '<link rel="stylesheet" href="';
        autoVer(get_bloginfo('stylesheet_url'));
        echo '" />'."\n";
    } else {
        echo '<link rel="stylesheet" href="';
        autoVer(get_bloginfo('stylesheet_url'));
        echo '" />'."\n";
    } 
?>

Additionally, you should use @media print within your unified stylesheet because, as Stoyan Stefanov has pointed out, browsers will always download all stylesheets no matter the medium.

Step 8/9: Unification Day

After combining your CSS into a unified stylesheet and making it load in a non-blocking manner, you should think about minifying it. There are plenty of decent minification tools around.

Also, you can enable loading JS libraries from Google and other CDNs there. No matter which libraries you use, unify and compress your local JS with e.g. the Google Closure Compiler. If you’re using Google Analytics, check out this excellent GA snippet by Mathias Bynens, which you can easily integrate into your unified JS. If you’re not using jQuery on the frontend, deregister its default inclusion via your theme’s functions.php.

The last thing to be minified is the website source code output by WordPress. There are several minification plugins, either standalone or integrated into the WP Super Cache or W3 Total Cache plugins. I recommend “WP HTML Compression”, especially when used together with the Roots Theme.

Step 9/9: Famous Last Words

In case the imperfections of the code outputted by WordPress keep you up at night, your performance optimizations don’t have to end here. Check out the Roots Theme, which finally brings decent relative URL structures, clean navigation menus and much more to WordPress. It’s not as easy as installing a common theme or plugin, but worth it.

If you ever dreamed of using Nicole Sullivan’s excellent OOCSS high performance CSS selectors with WordPress, check out PHP’s DOMDocument parser: use it to filter the output of WordPress and set classes on all relevant DOM objects so your source adheres to the OOCSS code standards. While parsing the entire page output as WordPress generates it may seem excessive, it finally gives you total control over syntax and defined classes, while editors can keep using the familiar WordPress interface to manage content. And with such a mighty & multi-layered caching system in place, the frontend won’t slow down at all.

So Long, and Thanks for All the Time

If you’ve reached the point of worrying about the performance implications of CSS selectors and the execution time of PHP’s DOMDocument, it is time to thank you. You have very likely managed to reduce the loading time of your WordPress site by more than 3 seconds. With ~10,000 visitors per day, that saves them about 8 hours of time collectively. That’s amazing and more than enough time to sit back and enjoy some of the season’s spirit – you’ve earned it!

(via calendar.perfplanet.com)

Rate Limiting With nginx

This article explains how to use the nginx HttpLimitReqModule to limit the number of requests for a given session. This is useful, for example, if your site is hammered by a bot making multiple requests per second and thus increasing your server load. With the HttpLimitReqModule you can define a rate limit, and visitors who exceed this rate get a 503 error.

I do not issue any guarantee that this will work for you!

1 Using The HttpLimitReqModule

Open your nginx.conf…

vi /etc/nginx/nginx.conf

… and define an area where the session states are stored – this must go inside the http {} container:

http {
    [...]
    limit_req_zone  $binary_remote_addr  zone=one:10m   rate=1r/s;
    [...]
}

This zone is called one and is allocated 10MB of storage. Instead of the variable $remote_addr, we use $binary_remote_addr, which reduces the size of the state to 64 bytes. About 16,000 states fit into a 1MB zone, so 10MB allow for about 160,000 states – this should be enough for your visitors. The rate is limited to one request per second. Please note that you must use integer values here, so if you’d like to set the limit to half a request per second, use 30r/m (30 requests per minute) instead.
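For example, a zone limited to half a request per second would be declared like this (the zone name half is a placeholder):

```nginx
limit_req_zone  $binary_remote_addr  zone=half:10m  rate=30r/m;
```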

To put this limit to work, we use the limit_req directive. You can use this directive in http {}, server {}, and location {} containers, but in my opinion it is most useful in location {}containers that pass requests to your app servers (PHP-FPM, mongrel, etc.) because otherwise, if you load a single page with lots of images, CSS, and JavaScript files, you would probably exceed the given rate limit with a single page request.

So let’s put this in a location ~ \.php$ {} container:

[...]
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                limit_req zone=one burst=5;
        }
[...]

limit_req zone=one burst=5; ties this rate limit to the session storage zone we defined before (zone=one), which means the limit is 1r/s. You can imagine burst as a kind of queue: if you exceed the rate limit, subsequent requests are delayed, and only when more requests are waiting in the queue than the burst parameter allows will you get a 503 error.

If you don’t want to use this queue (i.e. deliver a 503 immediately if someone exceeds the rate limit), you must use the nodelay option:

[...]
        location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                limit_req zone=one burst=5 nodelay;
        }
[...]

Don’t forget to reload nginx to make your changes take effect:

/etc/init.d/nginx reload

(via Howtoforge.com)

WordPress.Com Serves 70,000 Req/Sec And Over 15 Gbit/Sec Of Traffic Using NGINX

WordPress.com serves more than 33 million sites attracting over 339 million people and 3.4 billion pages each month. Since April 2008, page views on WordPress.com have grown about 4.4-fold. WordPress.com VIP hosts many popular sites including CNN’s Political Ticker, NFL, Time Inc.’s The Page, People Magazine’s Style Watch, corporate blogs for Flickr and KROQ, and many more. Automattic operates two thousand servers in twelve globally distributed data centers. WordPress.com customer data is instantly replicated between locations to provide an extremely reliable and fast web experience for hundreds of millions of visitors.

Problem

WordPress.com, which began in 2005, started on shared hosting, much like all of the WordPress.org sites. It was soon moved to a single dedicated server and then to two servers. In late 2005, WordPress.com opened to the public and by early 2006 had expanded to four web servers, with traffic being distributed using round robin DNS. Soon thereafter WordPress.com expanded to a second data center and then to a third. It quickly became apparent that round robin DNS wasn’t a viable long-term solution.

While hardware appliances like F5 BIG-IPs offered many features that WordPress.com required, the 5-member Automattic Systems Team decided to evaluate options built on existing open source software. Using open source software on commodity hardware provides the ultimate level of flexibility and also brings cost savings – “Purchasing a pair of capable hardware appliances in a failover configuration for a single datacenter may be a little expensive, but purchasing and servicing 10 sets for 10 data centers soon becomes very expensive.”

At first, the WordPress.com team chose Pound as a software load balancer because of its ease of use and built-in SSL support. After using Pound for about two years, WordPress.com required additional functionality and scalability, namely:

  • On-the-fly reconfiguration capabilities, without interrupting live traffic.
  • Better health check mechanisms, allowing the system to recover smoothly and gradually from a backend failure without overloading the application infrastructure with an unexpected flood of requests.
  • Better scalability—both requests per second, and the number of concurrent connections. Pound’s thread-based model wasn’t able to reliably handle over 1000 requests per second per load balancing instance.

Solution

In April 2008 Automattic converted all WordPress.com load balancers from Pound to NGINX. Automattic engineers had already been using NGINX for Gravatar for a few months and were impressed by its performance and scalability, so moving WordPress.com over was the natural next step. Before switching, Automattic evaluated several other products, including HAProxy and LVS. Here are some of the reasons why NGINX was chosen:

  • Easy, flexible and logical configuration.
  • Ability to reconfigure and upgrade NGINX instances on-the-fly, without dropping user requests.
  • Application request routing via FastCGI, uwsgi or SCGI protocols; NGINX can also serve static content directly from storage for additional performance optimization.
  • The only software tested that was capable of reliably handling over 10,000 requests per second of live traffic to WordPress applications from a single server.
  • NGINX’s memory and CPU footprints are minimal and predictable. After switching to NGINX, CPU usage on the load balancing servers dropped threefold.

Overall, WordPress.com serves about 70,000 req/sec and over 15 Gbit/sec of traffic from its NGINX-powered load balancers at peak, with plenty of room to grow. The hardware configuration is dual Xeon 5620 4-core CPUs with hyper-threading and 8-12GB of RAM, running Debian Linux 6.0. As part of its high-availability setup, WordPress.com previously used Wackamole/Spread but has recently started to migrate to Keepalived. Inbound requests are distributed evenly across the NGINX-based web acceleration and load balancing layer via round-robin DNS.

(via highscalability.com)

How To Set Up nginx As A Reverse Proxy For Apache2 On Ubuntu 12.04

nginx (pronounced “engine x”) is a free, open-source, high-performance HTTP server. nginx is known for its stability, rich feature set, simple configuration, and low resource consumption. This tutorial shows how you can set up nginx as a reverse proxy in front of an Apache2 web server on Ubuntu 12.04.

I do not issue any guarantee that this will work for you!

1 Preliminary Note

In this tutorial I use the hostname server1.example.com with the IP address 192.168.0.100. These settings might differ for you, so you have to replace them where appropriate.

I’m assuming that you have an existing Apache vhost (I will use example.com in this tutorial) that is listening on port 80 on the IP address 192.168.0.100 that you want to proxy through nginx.

Please note that this tutorial covers http only, not https (SSL).

2 Configuring Apache

The first thing we have to do is configure our Apache vhost to listen on localhost (127.0.0.1) on an unused port other than 80 (e.g. 8000). Open /etc/apache2/ports.conf…

vi /etc/apache2/ports.conf

… and modify the NameVirtualHost and Listen lines for port 80 to use port 8000:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default
# This is also true if you have upgraded from before 2.2.9-3 (i.e. from
# Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
# README.Debian.gz

NameVirtualHost *:8000
Listen 8000

<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to <VirtualHost *:443>
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
Listen 443
</IfModule>

<IfModule mod_gnutls.c>
Listen 443
</IfModule>

Next open the vhost configuration file (e.g. /etc/apache2/sites-available/example.com.vhost)…

vi /etc/apache2/sites-available/example.com.vhost

… and change the <VirtualHost> line to use the IP address 127.0.0.1 and the port 8000:

<VirtualHost 127.0.0.1:8000>
[...]

We will configure nginx as a transparent proxy, i.e., it will pass the original user’s IP address to the backend Apache in an X-Forwarded-For header. Of course, the backend Apache should log the original user’s IP address in its access logs instead of the IP address of nginx (127.0.0.1). There are two ways to achieve this:

1) We can modify the LogFormat line in /etc/apache2/apache2.conf and replace %h with %{X-Forwarded-For}i:

vi /etc/apache2/apache2.conf

[...]
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
[...]

2) On Debian/Ubuntu, we can install the Apache module libapache2-mod-rpaf which takes care of logging the correct IP address:

apt-get install libapache2-mod-rpaf

After all these changes, restart Apache:

/etc/init.d/apache2 restart

3 Configuring nginx

If nginx isn’t already installed, install it as follows:

apt-get install nginx

Create its system startup links and make sure it is started:

update-rc.d nginx defaults
/etc/init.d/nginx restart

It should now be listening on port 80.

Some standard proxy parameters are in the file /etc/nginx/proxy_params:

vi /etc/nginx/proxy_params

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

As we will include that file later on in the proxy part of our nginx vhost for example.com, you can add further proxy directives to that file if you like, e.g. as follows:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

client_max_body_size 100M;
client_body_buffer_size 1m;
proxy_intercept_errors on;
proxy_buffering on;
proxy_buffer_size 128k;
proxy_buffers 256 16k;
proxy_busy_buffers_size 256k;
proxy_temp_file_write_size 256k;
proxy_max_temp_file_size 0;
proxy_read_timeout 300;

Now create the example.com vhost for nginx – make sure it uses the same document root as the Apache vhost for example.com (e.g. /var/www/example.com/web) so that nginx can deliver static files directly without passing the request to Apache:

vi /etc/nginx/sites-available/example.com.vhost

server {
       listen 80;
       server_name www.example.com example.com;
       root /var/www/example.com/web;
       if ($http_host != "www.example.com") {
                 rewrite ^ http://www.example.com$request_uri permanent;
       }
       index index.php index.html;

       location / {
                proxy_pass http://localhost:8000;
                include /etc/nginx/proxy_params;
       }
}

This is a very simple configuration which would proxy all requests to Apache.

To enable that vhost, we create a symlink to it from the /etc/nginx/sites-enabled/ directory:

cd /etc/nginx/sites-enabled/
ln -s /etc/nginx/sites-available/example.com.vhost example.com.vhost

Reload nginx for the changes to take effect:

/etc/init.d/nginx reload

You can now type www.example.com into your browser, and you should see your web site, but this time delivered through nginx.

As I said before, this is a very simple configuration which proxies all requests to Apache. But because nginx is much faster than Apache at delivering static files (like JavaScript, CSS, images, PDF files, static HTML files, etc.), it is a good idea to let nginx serve these files directly. This can be done by adding a new location for these files, e.g. as follows:

server {
       listen 80;
       server_name www.example.com example.com;
       root /var/www/example.com/web;
       if ($http_host != "www.example.com") {
                 rewrite ^ http://www.example.com$request_uri permanent;
       }
       index index.php index.html;

       location / {
                proxy_pass http://localhost:8000;
                include /etc/nginx/proxy_params;
       }
       location ~* \.(js|css|jpg|jpeg|gif|png|svg|ico|pdf|html|htm)$ {
                # empty on purpose: nginx serves these files itself from the root
       }
}

Reload nginx:

/etc/init.d/nginx reload

You can even set an Expires HTTP header for these files so that browsers will cache these files (see Make Browsers Cache Static Files On nginx for more details):

server {
       listen 80;
       server_name www.example.com example.com;
       root /var/www/example.com/web;
       if ($http_host != "www.example.com") {
                 rewrite ^ http://www.example.com$request_uri permanent;
       }
       index index.php index.html;

       location / {
                proxy_pass http://localhost:8000;
                include /etc/nginx/proxy_params;
       }
       location ~* \.(js|css|jpg|jpeg|gif|png|svg|ico|pdf|html|htm)$ {
                expires      30d;
       }
}

We can now take this setup one step further by letting nginx serve as many requests as it can fulfill and only pass the remaining requests plus PHP files to Apache:

server {
       listen 80;
       server_name www.example.com example.com;
       root /var/www/example.com/web;
       if ($http_host != "www.example.com") {
                 rewrite ^ http://www.example.com$request_uri permanent;
       }
       index index.php index.html;

       location / {
                try_files $uri @proxy;
       }
       location ~* \.(js|css|jpg|jpeg|gif|png|svg|ico|pdf|html|htm)$ {
                expires      30d;
       }
       location @proxy {
                proxy_pass http://127.0.0.1:8000;
                include /etc/nginx/proxy_params;
       }
       location ~* \.php$ {
                proxy_pass http://127.0.0.1:8000;
                include /etc/nginx/proxy_params;
       }
}

Reload nginx:

/etc/init.d/nginx reload

Of course, you can fine-tune this setup even more, for example by using the nginx proxy_cache (if your application allows it – for example, you must make sure that captchas or shopping carts aren’t cached, and that logged-in users always get a fresh copy of the page) or if your application has a full page cache – nginx could access the full page cache directly in such a case (you can find an example in this tutorial: How To Speed Up Drupal 7.7 With Boost And nginx (Debian Squeeze)).
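As a starting point, a minimal proxy_cache setup might look like the following sketch. The zone name staticcache, the cache path, the sizes, and the session cookie name are assumptions you would adapt to your own application:

```nginx
# hypothetical sketch -- zone name, path, sizes, and cookie name are placeholders
http {
    # define a cache zone; the directory must exist and be writable by nginx
    proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=staticcache:10m max_size=500m inactive=60m;
}

server {
    location / {
        proxy_pass http://127.0.0.1:8000;
        include /etc/nginx/proxy_params;
        proxy_cache staticcache;
        proxy_cache_valid 200 301 302 10m;  # cache successful responses for ten minutes
        proxy_cache_bypass $cookie_session; # skip the cache for logged-in users (assumed cookie name)
    }
}
```

The proxy_cache_bypass line is exactly where application-specific rules (captchas, shopping carts, logged-in users) would go.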

(via Howtoforge.com)

Make Browsers Cache Static Files On nginx

This tutorial explains how you can configure nginx to set the Expires HTTP header and the max-age directive of the Cache-Control HTTP header of static files (such as images, CSS and Javascript files) to a date in the future so that these files will be cached by your visitors’ browsers. This saves bandwidth and makes your web site appear faster (if a user visits your site for a second time, static files will be fetched from the browser cache).

I do not issue any guarantee that this will work for you!

1 Preliminary Note

I’m assuming you have a working nginx setup, e.g. as shown in this tutorial: Installing Nginx With PHP5 (And PHP-FPM) And MySQL Support (LEMP) On Ubuntu 12.04 LTS 

2 Configuring nginx

The Expires HTTP header can be set with the help of the expires directive, which can be placed inside http {}, server {}, and location {} blocks, or in an if statement inside a location {} block. Usually you will use it in a location block for your static files, e.g. as follows:

location ~*  \.(jpg|jpeg|png|gif|ico|css|js)$ {
   expires 365d;
}

In the above example, all .jpg, .jpeg, .png, .gif, .ico, .css, and .js files get an Expires header with a date 365 days in the future from the browser access time. Therefore, you should make sure that the location {} block really matches only static files that can be cached by browsers.

Reload nginx after your changes:

/etc/init.d/nginx reload

You can use the following time settings with the expires directive:

  • off leaves the Expires and Cache-Control headers unmodified.
  • epoch sets the Expires header to 1 January, 1970 00:00:01 GMT.
  • max sets the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years.
  • A time without an @ prefix means an expiry time relative to the browser access time. A negative time can be specified, which sets the Cache-Control header to no-cache. Example: expires 10d; or expires 14w3d;
  • A time with an @ prefix specifies an absolute time-of-day expiry, written in either the form Hh or Hh:Mm, where H ranges from 0 to 24, and M ranges from 0 to 59. Example: expires @15:34;

You can use the following time units:

  • ms: milliseconds
  • s: seconds
  • m: minutes
  • h: hours
  • d: days
  • w: weeks
  • M: months (30 days)
  • y: years (365 days)

Examples: 1h30m for one hour thirty minutes, 1y6M for one year and six months.

Also note that if you use a far future Expires header you have to change the component’s filename whenever the component changes. Therefore it’s a good idea to version your files. For example, if you have a file javascript.js and want to modify it, you should add a version number to the file name of the modified file (e.g. javascript-1.1.js) so that browsers have to download it. If you don’t change the file name, browsers will load the (old) file from their cache.

Instead of basing the Expires header on the access time of the browser (e.g. expires 10d;), you can also base it on the modification date of a file (please note that this works only for real files that are stored on the hard drive!) by using the modified keyword which precedes the time:

expires modified 10d;

3 Testing

To test whether your configuration works, you can install the Live HTTP Headers plugin for Firefox and access a static file (e.g. an image) through Firefox. In the Live HTTP Headers output, you should now see an Expires header and a Cache-Control header with a max-age directive (max-age contains a value in seconds; for example, 31536000 means one year in the future).
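The max-age arithmetic is easy to check yourself; for example, one year expressed in seconds:

```shell
# 365 days in seconds, as sent in the Cache-Control max-age directive
echo $((365*24*3600))
# prints 31536000
```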


(via Howtoforge.com)

Configuring Your LEMP System (Linux, nginx, MySQL, PHP-FPM) For Maximum Performance

If you are using nginx as your web server, you are probably looking for a performance boost and better speed. nginx is fast by default, but you can optimize its performance and the performance of all the parts (like PHP and MySQL) that work together with nginx. Here is a small, non-exhaustive list of tips and tricks to configure your LEMP system (Linux, nginx, MySQL, PHP-FPM) for maximum performance. These tricks work for me, but your mileage may vary. Do not implement them all at once; apply them one by one and check what effect each modification has on your system’s performance.

I do not issue any guarantee that this will work for you!

1 Reducing Disk I/O By Mounting Partitions With noatime And nodiratime

Add noatime and nodiratime to your mount options in /etc/fstab:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# / was on /dev/sda2 during installation
UUID=9cc886cd-98f3-435a-9830-46b316e2a20e /               ext3    errors=remount-ro,noatime,nodiratime,usrjquota=quota.user,grpjquota=quota.group,jqfmt=vfsv0 0       1
# swap was on /dev/sda1 during installation
UUID=bba13162-121d-40a4-90a7-10f78a0097ae none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0

#Parallels Shared Folder mount
none         /media/psf   prl_fs   sync,nosuid,nodev,noatime,share,nofail     0       0

Remount the modified partitions as follows (make sure you use the correct mount point for each partition):

mount -o remount /

You can read more about this in this howto: Reducing Disk IO By Mounting Partitions With noatime

2 Tuning nginx

2.1 worker_processes

Make sure you use the correct number of worker_processes in your /etc/nginx/nginx.conf. This should equal the number of CPU cores shown in the output of

cat /proc/cpuinfo | grep processor

root@server1:~# cat /proc/cpuinfo | grep processor
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7
root@server1:~#

In this example, we have eight CPU cores, so we set

vi /etc/nginx/nginx.conf

[...]
worker_processes 8;
[...]
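Instead of counting the output lines by hand, grep can count them directly, which gives the value to use for worker_processes (newer nginx versions also accept worker_processes auto; to count automatically):

```shell
# count logical CPU cores; use the result as the worker_processes value
grep -c ^processor /proc/cpuinfo
```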

2.2 keepalive_timeout, sendfile, tcp_nopush, tcp_nodelay

Set keepalive_timeout to a sensible value like two seconds. Enable sendfile, tcp_nopush, and tcp_nodelay:

vi /etc/nginx/nginx.conf

[...]
http {
[...]
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 2;
        types_hash_max_size 2048;
        server_tokens off;
[...]
}
[...]

2.3 File Cache

Enable the nginx file cache:

vi /etc/nginx/nginx.conf

[...]
http {
[...]
        ##
        # File Cache Settings
        ##

        open_file_cache          max=5000  inactive=20s;
        open_file_cache_valid    30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors   on;
[...]
}
[...]

2.4 Enable Gzip Compression

You can read more about Gzip compression here: How To Save Traffic With nginx’s HttpGzipModule (Debian Squeeze)

vi /etc/nginx/nginx.conf

[...]
http {
[...]
        ##
        # Gzip Settings
        ##

        gzip on;
        gzip_static on;
        gzip_disable "msie6";
        gzip_http_version 1.1;
        gzip_vary on;
        gzip_comp_level 6;
        gzip_proxied any;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
        gzip_buffers 16 8k;
[...]
}
[...]

2.5 Enable The SSL Session Cache

If you serve https web sites, you should enable the SSL session cache:

vi /etc/nginx/nginx.conf

[...]
http {
[...]
        ssl_session_cache    shared:SSL:10m;
        ssl_session_timeout  10m;
        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;
[...]
}
[...]

2.6 Use The FastCGI Cache

If you have cacheable PHP content, you can use the nginx FastCGI cache to cache that content. In your nginx.conf, add a line similar to this one:

vi /etc/nginx/nginx.conf

[...]
http {
[...]
        fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m;
[...]
}
[...]

The cache directory /var/cache/nginx must exist and be writable for nginx:

mkdir /var/cache/nginx
chown www-data:www-data /var/cache/nginx

(By using tmpfs, you can even place the directory directly in your server’s memory, which provides another small speed advantage – take a look at this tutorial to learn more: Storing Files/Directories In Memory With tmpfs).
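For example, a hypothetical /etc/fstab entry that keeps the FastCGI cache in RAM might look like this (the size is an assumption; note that a tmpfs cache is lost on reboot, and you must chown the directory to www-data again after mounting):

```
tmpfs  /var/cache/nginx  tmpfs  defaults,size=256m  0  0
```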

In your vhost configuration, add the following block to your location ~ \.php$ {} section (you can modify it depending on when content should be cached and when not):

[...]
                # Setup var defaults
                set $no_cache "";
                # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
                if ($request_method !~ ^(GET|HEAD)$) {
                    set $no_cache "1";
                }
                # Drop no cache cookie if need be
                # (for some reason, add_header fails if included in prior if-block)
                if ($no_cache = "1") {
                    add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
                    add_header X-Microcachable "0";
                }
                # Bypass cache if no-cache cookie is set
                if ($http_cookie ~* "_mcnc") {
                            set $no_cache "1";
                }
                # Bypass cache if flag is set
                fastcgi_no_cache $no_cache;
                fastcgi_cache_bypass $no_cache;
                fastcgi_cache microcache;
                fastcgi_cache_key $scheme$host$request_uri$request_method;
                fastcgi_cache_valid 200 301 302 10m;
                fastcgi_cache_use_stale updating error timeout invalid_header http_500;
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass_header Cookie;
                fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
[...]

So the full location ~ \.php$ {} section could look as follows:

[...]
location ~ \.php$ {

                # Setup var defaults
                set $no_cache "";
                # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
                if ($request_method !~ ^(GET|HEAD)$) {
                    set $no_cache "1";
                }
                # Drop no cache cookie if need be
                # (for some reason, add_header fails if included in prior if-block)
                if ($no_cache = "1") {
                    add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
                    add_header X-Microcachable "0";
                }
                # Bypass cache if no-cache cookie is set
                if ($http_cookie ~* "_mcnc") {
                            set $no_cache "1";
                }
                # Bypass cache if flag is set
                fastcgi_no_cache $no_cache;
                fastcgi_cache_bypass $no_cache;
                fastcgi_cache microcache;
                fastcgi_cache_key $scheme$host$request_uri$request_method;
                fastcgi_cache_valid 200 301 302 10m;
                fastcgi_cache_use_stale updating error timeout invalid_header http_500;
                fastcgi_pass_header Set-Cookie;
                fastcgi_pass_header Cookie;
                fastcgi_ignore_headers Cache-Control Expires Set-Cookie;

                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/lib/php5-fpm/web1.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_intercept_errors on;
}
[...]

This would cache pages with the return codes 200, 301, and 302 for ten minutes.

You can read more about this topic here: Why You Should Always Use Nginx With Microcaching

2.7 Use FastCGI Buffers

In your vhost configuration, you can add the following lines to your location ~ \.php$ {} section:

[...]
                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 16k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
[...]

The full location ~ \.php$ {} section could look as follows:

[...]
location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/lib/php5-fpm/web1.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_intercept_errors on;

                fastcgi_buffer_size 128k;
                fastcgi_buffers 256 16k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_temp_file_write_size 256k;
                fastcgi_read_timeout 240;
}
[...]

2.8 Use memcached

nginx can read full pages directly from memcached. So if your web application is capable of storing full pages in memcached, nginx can serve them from there without touching PHP at all. An example configuration (in your vhost) would be as follows:

[...]
        location ~ \.php$ {
                set $no_cache "";
                if ($query_string ~ ".+") {
                        set $no_cache "1";
                }
                if ($request_method !~ ^(GET|HEAD)$ ) {
                        set $no_cache "1";
                }
                if ($request_uri ~ "nocache") {
                        set $no_cache "1";
                }
                if ($no_cache = "1") {
                        return 405;
                }

                set $memcached_key $host$request_uri;
                memcached_pass     127.0.0.1:11211;
                default_type text/html;
                error_page 404 405 502 = @php;
                expires epoch;
        }

        location @php {
                        try_files $uri =404;
                        include /etc/nginx/fastcgi_params;
                        fastcgi_pass unix:/var/lib/php5-fpm/web1.sock;
                        fastcgi_index index.php;
                        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                        fastcgi_param PATH_INFO $fastcgi_script_name;
                        fastcgi_intercept_errors on;
        }
[...]

It is important that your web application uses the same key for storing pages in memcached that nginx uses to fetch these pages from memcached (in this example it’s $host$request_uri), otherwise this will not work.

If you store lots of data in memcached, make sure you have allocated enough RAM to memcached, e.g.:

vi /etc/memcached.conf

[...]
# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 512
[...]

2.9 Make Browsers Cache Static Files With The expires Directive

Files (like images, CSS, JS, etc.) that don’t change often can be cached by the visitor’s browser by using the expires directive (see http://wiki.nginx.org/HttpHeadersModule#expires):

[...]
               location ~*  \.(jpg|jpeg|png|gif|ico)$ {
                         expires 365d;
               }
[...]

2.10 Disable Logging For Static Files

Normally it doesn’t make much sense to log images or CSS files in the access log. To reduce disk I/O, we can disable logging for such files, e.g. as follows:

[...]
               location ~*  \.(jpg|jpeg|png|gif|ico)$ {
                         log_not_found off;
                         access_log off;
               }
[...]

3 Tuning PHP-FPM

3.1 Use A PHP Opcode Cache Like Xcache Or APC

Make sure you have a PHP opcode cache such as Xcache or APC installed. On Debian/Ubuntu, Xcache can be installed as follows:

apt-get install php5-xcache

APC can be installed as follows:

apt-get install php-apc

Make sure you have just one installed (either Xcache or APC), not both. Reload PHP-FPM after the installation:

/etc/init.d/php5-fpm reload

3.2 Allocate Enough Memory To Xcache/APC

If you have lots of PHP scripts, you should probably raise the memory that is allocated to Xcache or APC. For Xcache, you can do this in /etc/php5/conf.d/xcache.ini:

vi /etc/php5/conf.d/xcache.ini

[...]
xcache.size  =                512M
[...]

Likewise for APC:

vi /etc/php5/conf.d/apc.ini

[...]
apc.shm_size="512"
[...]

Reload PHP-FPM after your modification:

/etc/init.d/php5-fpm reload

3.3 PHP-FPM Emergency Settings

This is more of a reliability setting than a performance setting: PHP-FPM can restart itself if it stops working:

vi /etc/php5/fpm/php-fpm.conf

[...]
; If this number of child processes exit with SIGSEGV or SIGBUS within the time
; interval set by emergency_restart_interval then FPM will restart. A value
; of '0' means 'Off'.
; Default Value: 0
emergency_restart_threshold = 10

; Interval of time used by emergency_restart_interval to determine when
; a graceful restart will be initiated.  This can be useful to work around
; accidental corruptions in an accelerator's shared memory.
; Available Units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
emergency_restart_interval = 1m

; Time limit for child processes to wait for a reaction on signals from master.
; Available units: s(econds), m(inutes), h(ours), or d(ays)
; Default Unit: seconds
; Default Value: 0
process_control_timeout = 10s
[...]

3.4 For PHP >= 5.3.9: Use The ondemand Process Manager

If you use PHP >= 5.3.9, you can use the ondemand process manager in a PHP-FPM pool instead of static or dynamic; this will save you some RAM:

[...]
pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 5s
[...]

3.5 Use Unix Sockets Instead Of TCP Sockets

To reduce networking overhead, you should configure your pools to use Unix sockets instead of TCP:

[...]
;listen = 127.0.0.1:9000
listen = /var/lib/php5-fpm/www.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
[...]

If you change this, you must of course adjust the location ~ \.php$ {} section in your nginx vhost to use the socket (fastcgi_pass unix:/var/lib/php5-fpm/www.sock; instead of fastcgi_pass 127.0.0.1:9000;):

[...]
location ~ \.php$ {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                ##fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/lib/php5-fpm/www.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_intercept_errors on;
}
[...]

3.6 Avoid 502 Bad Gateway Errors With Sockets On Busy Sites

If you use Unix sockets with PHP-FPM, you might encounter random 502 Bad Gateway errors on busy websites. To avoid this, we raise the maximum number of queued connections for a socket (the listen backlog). Open /etc/sysctl.conf…

vi /etc/sysctl.conf

… and set:

[...]
net.core.somaxconn = 4096
[...]

Run

sysctl -p

afterwards for the change to take effect.
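You can verify the active value afterwards; the number printed should match what you set:

```shell
# read the current listen backlog limit from the proc filesystem
cat /proc/sys/net/core/somaxconn
```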

4 Tuning MySQL

4.1 Optimize Your my.cnf

You should use scripts such as mysqltuner.pl or tuning-primer.sh (or both) to find out which settings you should adjust in your my.cnf file. One of the most important variables is query_cache_size and, if you use InnoDB tables, innodb_buffer_pool_size.

This is an example configuration from a test server with 16GB RAM, about 30 databases with 50% MyISAM tables and 50% InnoDB tables – this worked out quite well for database-driven test sites that were stressed with a benchmark tool (ab):

[...]
key_buffer = 256M

max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 100

table_open_cache = 16384
table_definition_cache = 8192

sort_buffer_size = 256K

read_buffer_size = 128K

read_rnd_buffer_size = 256K

myisam_sort_buffer_size = 64M
myisam_use_mmap = 1
thread_concurrency = 10
wait_timeout = 30

myisam-recover = BACKUP,FORCE

query_cache_limit = 10M
query_cache_size = 1024M
query_cache_type = 1

join_buffer_size = 4M

log_slow_queries        = /var/log/mysql/mysql-slow.log
long_query_time = 1

expire_logs_days        = 10
max_binlog_size         = 100M

innodb_buffer_pool_size = 2048M
innodb_log_file_size = 256M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 0
innodb_thread_concurrency = 8
innodb_read_io_threads = 64
innodb_write_io_threads = 64
innodb_io_capacity = 50000
innodb_flush_method = O_DIRECT
innodb_file_per_table
innodb_additional_mem_pool_size = 256M
transaction-isolation = READ-COMMITTED

innodb_support_xa = 0
innodb_commit_concurrency = 8
innodb_old_blocks_time = 1000
[...]

Please note: If you need ACID compliance, you must set innodb_flush_log_at_trx_commit to 1. You can find out more about this on http://dev.mysql.com/doc/refman/5.5/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit.

innodb_io_capacity should be set to high values only if you run MySQL on an SSD. If you use a conventional hard drive, it is better to leave that line out.

4.2 Use An SSD

You can get a big performance boost by using MySQL with a solid state disk (SSD) as this reduces disk I/O a lot. The easiest way to do this is by mounting the /var/lib/mysql directory to an SSD.
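A hypothetical /etc/fstab entry for such a setup could look like this (the device name and filesystem are assumptions; stop MySQL and copy the data directory over before switching):

```
/dev/sdb1  /var/lib/mysql  ext4  defaults,noatime  0  2
```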

5 Web Application Caching

Lots of web applications (such as WordPress with the WP Super Cache or W3 Total Cache plugins, Drupal with the Boost module, TYPO3 with the nc_staticfilecache extension) offer the possibility to create a full page cache which is stored on the hard drive and which can be accessed directly by nginx so that it can bypass the whole PHP-MySQL stack. This provides a huge performance boost.
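As an illustration, an nginx location that checks a WP Super Cache style file cache before falling back to PHP could be sketched like this (the cache path is an assumption based on WP Super Cache’s default layout; adjust it for your plugin):

```nginx
# hypothetical sketch -- adjust the cache path to your plugin's layout
location / {
    # serve the pre-generated HTML file if the plugin has cached this URL,
    # otherwise fall back to the normal WordPress front controller
    try_files /wp-content/cache/supercache/$http_host/$request_uri/index.html $uri $uri/ /index.php?$args;
}
```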

You can speed the static file cache up even more by placing it directly in the server’s memory with the tmpfs filesystem:

Storing Files/Directories In Memory With tmpfs

Of course, you can use tmpfs also for the nginx FastCGI cache from chapter 2.6.

(via howtoforge.com)