Using Nginx, SSI and Memcache to Make Your Web Applications Faster
5 Aug 2007

If you take a look at almost any web site, you will notice that most of its pages are pretty static in nature. Of course, the site could have some dynamic elements like a login field or link in the header, some customized menu items, and a few other things, but in many cases the page as a whole can be considered static.

When I started thinking about my sites from this point of view, I realized how great it would be to cache entire pages somewhere (in memcached, for example) and send them to users without any requests to my applications, which are pretty slow (compared to memcached ;-)) at content generation. Then I came up with the simple and really powerful idea I’ll describe in this article: cache entire pages of the site and use the application only to generate small partials of each page. This idea allows me to handle hundreds of queries on one server running a pretty slow (yes, it is slow even after all the optimizations on the MySQL side and lots of tweaks in the site’s code) Ruby on Rails application. Of course, the idea is universal and could be used with any back-end language, technology, or framework, because all of them are slower than memcached at content “generation”.

So, first of all, let me describe the tools I use in my solution (this does not mean you must use the same software – it is just an example):

  • Memcached – for serving all requests for cached content without regenerating it on every hit.
  • Nginx (with SSI enabled) – for handling all HTTP requests to the site and retrieving content from memcached or from the application back-end.
  • Ruby on Rails – as an example back-end application (it could be Python, PHP, Perl, Java, etc.).

And now, let me describe the generic request handling process with all these tools working together. Imagine a user hits a page on your site. The browser sends a request to your web server, which runs nginx as a frontend with a Rails-based application deployed behind it, as described in one of my previous posts or in other people’s posts on the net (nginx + mongrel). Nginx proxies the request to one of the mongrel back-end instances, and mongrel serves it.

That was the generic request handling process with a 2-tier web server deployment (frontend + backend). In this setup, all requests are handled by mongrel and Rails, which is pretty slow. So, we need to lighten up this process by removing unnecessary work from the Rails application. First of all, we need to change the nginx configuration so that it serves pages from memcached and asks the back-end mongrels only if a request’s result has not been cached yet:

# Defining mongrel cluster
upstream mongrel {
    server 127.0.0.1:8150;
    server 127.0.0.1:8151;
    server 127.0.0.1:8152;
    server 127.0.0.1:8153;
}

# Defining web server
server {
    listen 216.86.155.55:80;
    server_name domain.tld;

    # All dynamic requests will go here
    location / {
        default_type text/html;
           
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
           
        # All POST requests go to mongrel directly
        if ($request_method = POST) {
            proxy_pass http://mongrel;
            break;
        }
           
        # Tell nginx to try to fetch the key "yourproject:Action:$uri" from memcached, e.g. "yourproject:Action:/posts/1"
        set $memcached_key "yourproject:Action:$uri";
        memcached_pass localhost:11211;

        proxy_intercept_errors  on;
           
        # If the key is not found in memcached, or memcached is down, go to the /fallback location
        error_page 404 502 = /fallback$uri;
    }

    # This location is used only when the main location fails to serve the request
    location /fallback/ {
        # This means, that we can't get to this location from outside - only by internal redirect
        internal;
           
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;            
        proxy_redirect off;

        # Pass request to mongrel
        proxy_pass http://mongrel/;
    }

    # A static location served directly, without bothering the backend
    # (there should be more such static paths, or one regex-based location)
    location /images/ {
        root /rails/lazygeeks/public/current/public;
    }
}

Notice: this config is not optimal, but it shows what we are going to do here.

So, what can we see above? Nginx first tries to fetch the page from memcached using a key like “yourproject:Action:/some/uri”; then, if this approach fails (there is no page in the cache, or the cache is dead), it sends the request to the back-end server.
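You can check by hand what nginx will find for a given URI, for example from script/console. This is a minimal sketch assuming the memcache-client gem; the key is the example one from the config above:

require 'memcache'

cache = MemCache.new('localhost:11211')
# Read the value raw, so we see the exact bytes nginx would serve
puts cache.get('yourproject:Action:/posts/1', true)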

“But how do pages appear in the cache?”, you may ask. And you’d be right – we need to put them there from our application when the first request comes in; after that, all subsequent requests to that specific URI will be served from the cache (I’ll leave this task to my readers to implement, or you can take a look at Dmytro Shteflyuk’s blog – he is going to post some information about how it could be done).
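One possible way to fill the cache is a simple after_filter. This is a minimal sketch, assuming the memcache-client gem and Rails-style filters; the CACHE constant is a hypothetical name, and only the “yourproject:Action:” key format comes from the nginx config above:

require 'memcache'

# A connection to the same memcached server nginx talks to
CACHE = MemCache.new('localhost:11211')

class ApplicationController < ActionController::Base
  # Store the fully rendered page after the action runs,
  # so nginx can serve it straight from memcached next time
  after_filter :cache_rendered_page

  private

  def cache_rendered_page
    return unless request.get?
    key = "yourproject:Action:#{request.request_uri}"
    # Store raw: nginx needs plain HTML, not a marshaled Ruby object
    CACHE.set(key, response.body, 0, true)
  rescue MemCache::MemCacheError
    # If memcached is down, nginx falls back to mongrel anyway
  end
end

In a real application you’d also want to skip error responses and cache only the actions you’ve explicitly marked as cacheable.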

“But what if our page changes a bit later?” – of course, you’ll know when it happens, and you’ll remove the page from the cache ;-). For example, suppose you decide to implement this scheme for the /post/* URIs of your ultra-popular blog. Then you’ll need to remove the key “mycoolblog:Action:/post/X” from the cache whenever something changes on that page (like a comment being added or the post being edited).
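Expiring the key could be as simple as a model callback. Again, a minimal sketch assuming the hypothetical CACHE object from above; the Comment model and the /post/:id URI scheme are just examples:

class Comment < ActiveRecord::Base
  belongs_to :post

  # Drop the cached page whenever a comment is added or edited,
  # so the next request regenerates and re-caches it
  after_save :expire_cached_post_page

  private

  def expire_cached_post_page
    CACHE.delete("mycoolblog:Action:/post/#{post_id}")
  rescue MemCache::MemCacheError
    # Nothing to do: if memcached is down, the cache is empty anyway
  end
end

The same callback would go into the Post model itself (and into after_destroy) to cover edits and deletions.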

And the last, most interesting question you might ask: “What if we have absolutely dynamic parts on our pages, like the login field at the top of every page and the ‘comment’ form for logged-in users?” To solve this problem, we need to do the following:

  1. Separate the generation of these small page partials from the main page logic into separate URIs like /dynamic/login_field and /dynamic/comment_form (see the controller sketch after this list).
  2. Add the “ssi on” option to both locations in your nginx config (“location /” and “location /fallback”) so that nginx processes SSI tags in pages coming from memcached and in responses proxied from mongrel and Rails.
  3. Add one more location to your config (not mandatory, but it makes things faster):
    # This location is called only from SSI tags
    location /dynamic {
        # This means, that we can't get to this location from outside - only by internal redirect
        internal;
               
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;            
        proxy_redirect off;

        # Pass request to mongrel
        proxy_pass http://mongrel;
    }
  4. Replace dynamic parts in your templates with
    <!--# include virtual="full_url_to_your_partial" -->

    where full_url_to_your_partial is the URI of your partial, like /dynamic/login_field (nginx resolves the include with an internal subrequest, so a path is enough – no need for the http://domain.tld prefix).
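The back-end side of step 1 could look like this minimal sketch (the DynamicController name and the partials it renders are hypothetical; only the /dynamic/* URIs come from the steps above, and the default Rails :controller/:action route maps them to these actions):

# A controller serving only the tiny dynamic partials; nginx requests
# these URIs via SSI subrequests while the rest of the page comes
# straight from memcached
class DynamicController < ApplicationController
  # Renders the login field (or the current user's greeting)
  def login_field
    render :partial => 'shared/login_field'
  end

  # Renders the comment form for logged-in users
  def comment_form
    render :partial => 'shared/comment_form'
  end
end

Since these actions skip all the heavy page logic, each SSI subrequest stays cheap, and nginx issues them in parallel.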

So, what will we get with all these changes? Now that SSI is enabled in nginx and our dynamic partials are replaced with the appropriate SSI tags, nginx will get our pages from the backend or from the cache and find the SSI tags there. Then it will ask the back-end processes for these small partials (with parallel subrequests, which is really great for scalability). Your application will spend only a tiny amount of resources generating these partials, without any complicated logic. When nginx has all the pieces it needs, it will compose the final page and send it to the user. That’s it! Your site becomes really fast and can serve tons of requests where, before this modification, the same load would kill your brand new N-CPU server with M GB of RAM ;-)

If you try to implement this scheme, you’ll definitely come to the idea of putting all this stuff into a Rails plugin: a ssi_include some_url helper to output your partials, plus some simple code to mark which actions should be cached and which shouldn’t. And, of course, you’ll realize that you can do all this even without nginx and SSI, and without Rails – you can fetch these partials with AJAX requests on the client side, or even combine both approaches, falling back to AJAX when SSI is unavailable and switching between them with a simple configuration setting. But anyway, this approach is worth a try, because it gives you a huge performance boost at a pretty small implementation cost.
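Such a helper could be as small as the sketch below; the ssi_include name comes from the paragraph above, and the implementation is just one possible guess:

module ApplicationHelper
  # Emits the nginx SSI include tag; use it in templates instead of
  # rendering the partial inline
  def ssi_include(uri)
    %(<!--# include virtual="#{uri}" -->)
  end
end

In a template it would be used as <%= ssi_include '/dynamic/login_field' %>, so switching between SSI and an AJAX-based fallback stays a one-helper change.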