Our Unicorn Setup

Eric Wong normalperson at yhbt.net
Fri Oct 9 18:01:10 EDT 2009

Dusty Doris <unicorn at dusty.name> wrote:
> Thanks for this post Chris, it was very informative and has answered a
> few questions that I've had in my head over the last couple of days.
> I've been testing unicorn with a few apps for a couple days and
> actually already moved one over to it.
> I have a question for list.

First off, please don't top post, thanks :)

> We are currently setup with a load balancer that runs nginx and
> haproxy.  Nginx, simply proxies to haproxy, which then balances that
> across multiple mongrel or thin instances that span several servers.
> We simply include the public directory on our load balancer so nginx
> can serve static files right there.  We don't have nginx running on
> the app servers, they are just mongrel or thin.
> So, my question.  How would you do a Unicorn deployment when you have
> multiple app servers?

For me, it depends on the amount of static files you serve with nginx
and also on how much traffic you get.

Can I assume you're running Linux 2.6 (with epoll + awesome VFS layer)?

May I also assume your load balancer box is not very stressed right now?

> 1.  Simply use mongrels upstream and let it round-robin between all
> the unicorn instances on the different servers?  Or, perhaps use the
> fair-upstream plugin?
> nginx -> [unicorns]

Based on your description of your current setup, this would be the best
way to go.  I would configure a lowish listen() :backlog for the
Unicorns and fail_timeout=0 in nginx for every server.  This setup means
round-robin by default, but if one machine gets a :backlog overflow,
then nginx will automatically retry on a different backend.
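
As a rough sketch, a lowish :backlog in a Unicorn config could look
like the following (the port, worker count and backlog value are
made-up examples, tune them for your own boxes):

  # unicorn.conf.rb -- example values only
  worker_processes 4

  # keep the backlog small so an overloaded box overflows quickly and
  # nginx (with fail_timeout=0) retries a different backend instead
  listen "0.0.0.0:8080", :backlog => 64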

> 2.  Keep haproxy in the middle?
> nginx -> haproxy -> [unicorns]

This is probably not necessary, but it can't hurt a whole lot either.

It's also an option for balancing.  If you're uncomfortable with the
first approach, you can configure haproxy as a backup server:

  upstream unicorn_failover {
    # round-robin between unicorn app servers on the LAN
    # (the addresses below are placeholders, substitute your own):
    server 192.168.0.1:8080 fail_timeout=0;
    server 192.168.0.2:8080 fail_timeout=0;
    server 192.168.0.3:8080 fail_timeout=0;

    # haproxy, configured the same way as you do now
    # the "backup" parameter means nginx won't hit haproxy unless
    # all the direct unicorn connections have backlog overflows
    # or other issues
    server 192.168.0.9:8080 fail_timeout=0 backup; # haproxy backup
  }

So your traffic flow may look like the first option for the common case,
but you have a slightly more balanced/queueing solution in case you're
completely overloaded.

> 3.  Stick haproxy in front and have it balance between the app servers
> that run their own nginx?
> haproxy -> [nginxs] -> unicorn # could use socket instead of tcp in this case

This is probably only necessary if:

  1) you have a lot of static files that don't all fit in the VFS caches

  2) you handle a lot of large uploads/responses and nginx buffering will
     thrash one box

I know of some sites that run this (or a similar) config, but it's
mainly because it's what they've had for 5-10 years and they don't have
the time/resources to test new setups.
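
If you do run nginx on each app server like that, the "socket instead
of tcp" part is just a matter of pointing Unicorn's listen at a UNIX
socket.  A minimal sketch, with a made-up socket path:

  # unicorn.conf.rb -- socket path is only an example
  # nginx on the same box talks to this socket, skipping TCP
  # overhead for the local hop
  listen "/tmp/unicorn.sock", :backlog => 64

The matching nginx upstream entry on that box would then be something
like "server unix:/tmp/unicorn.sock fail_timeout=0;".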

> I would love to hear any opinions.

You can also try the following, which is similar to what I describe in:


Pretty much all the above setups are valid.  The important part is that
nginx must sit *somewhere* in between Unicorn and the rest of the world.

Eric Wong
