High-Performance Ruby On Rails Setups Test: mongrel vs lighttpd vs nginx
22 Aug 2006

Because the testing methodology used here was not entirely correct, the benchmark results below are not fully accurate. I have therefore decided to redo all the tests; the new benchmark results can be found in the Summary post of the “Looking For Optimal Solution” series.

This week we started a new project with Ruby on Rails as the primary framework. My first task was to prepare a runtime environment for it on one of our development servers. While researching how other people do this, I noticed that there is no information about deploying a Rails application with nginx as the frontend, or about the performance of such a solution. Before blindly making any decisions about our future platform, I decided to run some performance tests of the most popular Rails web servers/frontends. The results of these tests are below, along with configuration samples for all the software I used.

First of all, here is the hardware/software platform of our development server:

  • CPU: 4 x XEON CPUs
  • Memory: 4 GB of RAM
  • OS: Debian GNU/Linux Testing with recent 2.6 kernel

Below I describe each configuration I used and the results from the Apache benchmarking tool (ab). At the end of the article you will see the aggregated results and a performance comparison diagram for all solutions. All tests were performed with the following command:

$ ab -c 100 -n 10000 http://127.0.0.1:PORT/

where PORT is the specific port number chosen for each test (ab issues 10,000 requests with a concurrency of 100).

The first solution is the simple WEBrick server (a web server written entirely in Ruby and included in the Rails framework). To start it I used the following command:

$ ./script/server --port 8080 -d

The results of this test were quite surprising to me. I thought WEBrick would be really slow, but it showed non-zero results ;-). So, here are the results:

  • Web Server: WEBrick/1.3.1
  • Time taken for tests: 51.490 seconds
  • Time per request (mean): 514.896 ms
  • Time per request (mean, across all concurrent requests): 5.149 ms
  • Requests per second (mean): 194.21
  • Transfer rate (Kbytes/sec): 1478.88

As you can see, these results may be acceptable for some small projects or for development, but for a big, frequently visited site we need something faster. 🙂

The second test was performed with Mongrel:

Mongrel is a fast HTTP library and server for Ruby that is intended for hosting Ruby web applications of any kind using plain HTTP rather than FastCGI or SCGI. It is framework agnostic and already supports Ruby On Rails, Og+Nitro, and Camping frameworks.

The easiest way to get started with Mongrel is to install it via RubyGems and then run a Ruby on Rails application. You can do this easily:

$ sudo gem install mongrel
$ cd your_rails_app
$ mongrel_rails start -d

This runs Mongrel in the background. You can stop it with:

$ mongrel_rails stop

And you’re all set. There are quite a few options you can set for the start command; use mongrel_rails start -h to see them all.
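For example, a production-oriented start command bound to a specific interface might look roughly like this (a sketch: -d and --port appear elsewhere in this post, while --environment and --address are options as I recall them from mongrel_rails start -h, so double-check them on your version):

$ mongrel_rails start -d --port 3000 --environment production --address 127.0.0.1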

My Mongrel test was performed with a single Mongrel process, started with a simple:

$ mongrel_rails start -d --port 8081

The results were as follows:

  • Web Server: Mongrel 0.3.13.3 (single process)
  • Time taken for tests: 17.212 seconds
  • Time per request (mean): 172.117 ms
  • Time per request (mean, across all concurrent requests): 1.721 ms
  • Requests per second (mean): 581.00
  • Transfer rate (Kbytes/sec): 4398.28

As you can see, Mongrel is much faster than the simple WEBrick server, but let’s see what the other servers can show.

The third test was done with a simple TCP balancer (pen) and 5 Mongrel processes. Pen was installed from the Debian testing repository, so installation was as simple as apt-get install pen. The test was performed against a pen process started with the following command line:

$ pen 8082 127.0.0.1:3000 \
           127.0.0.1:3001 \
           127.0.0.1:3002 \
           127.0.0.1:3003 \
           127.0.0.1:3004

In my test the Mongrel servers were spawned by a simple shell script I wrote, but later I found mongrel_cluster, a GemPlugin that wraps the Mongrel HTTP server and simplifies the deployment of webapps using a cluster of Mongrel servers. You can simply use it to make your setup process easier.
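My script was nothing fancier than a loop over the five ports. A minimal sketch of that approach (the application path is hypothetical, and the -P per-instance PID file option is as I recall it from mongrel_rails start -h, so verify it on your version):

#!/bin/sh
# Spawn five Mongrel instances on ports 3000-3004 for the balancer to proxy to.
cd /path/to/your_rails_app || exit 1
for port in 3000 3001 3002 3003 3004; do
    mongrel_rails start -d --port $port -P log/mongrel.$port.pid
done

With mongrel_cluster you describe the same ports once in a YAML config and then start or stop all instances with a single command.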

The results of this test are better than in the first test, but not as good as with a single Mongrel process:

  • Web Server: Mongrel 0.3.13.3 (5 processes through Pen balancer)
  • Time taken for tests: 19.864 seconds
  • Time per request (mean): 198.642 ms
  • Time per request (mean, across all concurrent requests): 1.986 ms
  • Requests per second (mean): 503.42
  • Transfer rate (Kbytes/sec): 3810.98

The next two tests were performed with two popular reverse-proxy servers, lighttpd and nginx, in front of the same 5 Mongrel processes.

The first tested server was lighttpd, built with the following commands:

# ./configure --prefix=/opt/lighttpd
# make
# make install

The following config file was used to start lighttpd:

server.modules              = (
                                "mod_access",
                                "mod_proxy",
                                "mod_accesslog"
                              )

server.document-root        = "/opt/lighttpd/www/"
server.errorlog             = "/opt/lighttpd/logs/lighttpd.error.log"
index-file.names            = ( "index.htm" )

server.tag                 = "lighttpd"
accesslog.filename          = "/opt/lighttpd/logs/access.log"
static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

server.port                = 8083
server.pid-file            = "/var/run/lighttpd.pid"

proxy.balance = "fair"

proxy.server  = (
  "/" =>
     ( ( "host" => "127.0.0.1", "port" => 3000 ),
       ( "host" => "127.0.0.1", "port" => 3001 ),
       ( "host" => "127.0.0.1", "port" => 3002 ),
       ( "host" => "127.0.0.1", "port" => 3003 ),
       ( "host" => "127.0.0.1", "port" => 3004 )
     )
  )
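Since lighttpd was built into /opt/lighttpd, starting it with this config is just a matter of pointing the binary at the file, roughly like this (the config path is simply where I would keep it, adjust to your layout):

# /opt/lighttpd/sbin/lighttpd -f /opt/lighttpd/etc/lighttpd.conf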

The results were noticeably better than in the previous tests:

  • Web Server: Lighttpd 1.4.11 (proxy for 5 Mongrel processes)
  • Time taken for tests: 14.256 seconds
  • Time per request (mean): 142.570 ms
  • Time per request (mean, across all concurrent requests): 1.426 ms
  • Requests per second (mean): 701.41
  • Transfer rate (Kbytes/sec): 5321.53

Because of some strange problems, the authors of the Mongrel server do not recommend using lighttpd for proxying, but for me it works well.

The last test was performed with the nginx reverse proxy and web server. I decided to test it after all the other solutions because there are no documents about its performance, and I wanted to show how to set it up and check its performance with Ruby on Rails once I knew all the other results… So, I built it with the standard commands:

# ./configure --prefix=/opt/nginx
# make
# make install

and tested it with the following config file:

worker_processes  2;

error_log  logs/error.log notice;
pid        logs/nginx.pid;

events {
    worker_connections  16384;
}

http {
    include       conf/mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    tcp_nopush     on;

    keepalive_timeout  65;
    tcp_nodelay        on;

    upstream mongrel {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
        server 127.0.0.1:3004;
    }

    server {
        listen       8084;
        server_name  localhost;

        access_log  off;

        location / {
            proxy_pass  http://mongrel;
        }
    }
}
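Assuming the file above is saved as conf/nginx.conf under the install prefix (the default location nginx looks in), starting the server and later reloading the configuration looks roughly like this (a sketch, not a transcript of my exact session):

# /opt/nginx/sbin/nginx
# kill -HUP `cat /opt/nginx/logs/nginx.pid`

The HUP signal tells the master process to re-read the configuration and gracefully replace its worker processes.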

The results of this test are really impressive, so I want to say “Thanks” to Igor Sysoev for such a great product!

  • Web Server: nginx/0.3.60 (proxy for 5 Mongrel processes)
  • Time taken for tests: 10.449 seconds
  • Time per request (mean): 104.495 ms
  • Time per request (mean, across all concurrent requests): 1.045 ms
  • Requests per second (mean): 956.99
  • Transfer rate (Kbytes/sec): 7267.27

The aggregated test results can be seen in the following diagram:
Rails Performance with Different Access Methods
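In numbers, the mean requests per second from the runs above were:

  • WEBrick/1.3.1: 194.21
  • Mongrel 0.3.13.3 (single process): 581.00
  • Pen + 5 Mongrel processes: 503.42
  • Lighttpd 1.4.11 + 5 Mongrel processes: 701.41
  • nginx/0.3.60 + 5 Mongrel processes: 956.99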

So, I have made my choice: I will use nginx to proxy requests to a Mongrel cluster, because it is a really flexible solution with great performance. 🙂

If you have any suggestions about additional tests or about the results described above, post your comments here and I will answer all of you. Thanks and see you next time!