scaling unicorn

Jamie Wilkinson jamie at
Tue Jun 22 14:57:58 EDT 2010

>> Somewhat related -- I've been meaning to discuss the finer points of
>> backlog tuning.
>> I've been experimenting with the multi-server socket+TCP megaunicorn
>> configuration from your CDT:

On Jun 22, 2010, at 11:03 AM, snacktime wrote:

> Seems like you would have some type of 'reserve'
> cluster for requests that hit the listen backlog, and when you start
> seeing too much traffic going to the reserve, you add more servers to
> your main pool.  How else would you manage the configuration for
> something like this when you are working with 100 - 200 servers?  You
> can't be changing the nginx configs every time you add servers, that's
> just not practical.

We are using chef for machine configuration, which makes these kinds of numbers manageable.
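To illustrate the idea (not our actual recipes), a Chef-style approach renders the nginx upstream block from a server list instead of hand-editing configs when machines are added. A minimal sketch in plain Ruby/ERB, with hypothetical addresses; in a real recipe a node search would supply the lists and a template resource would write the file:

```ruby
require 'erb'

# Hypothetical backend lists; in a Chef recipe a node search
# would supply these and a template resource would write the file.
primaries = ["10.0.0.11:8080", "10.0.0.12:8080"]
backups   = ["10.0.0.21:8080"]

# ERB template producing an nginx upstream block; reserve
# machines are marked "backup" so nginx only uses them on failure.
template = ERB.new(<<~TMPL, trim_mode: "-")
  upstream app_cluster {
  <% primaries.each do |s| -%>
    server <%= s %>;
  <% end -%>
  <% backups.each do |s| -%>
    server <%= s %> backup;
  <% end -%>
  }
TMPL

config = template.result(binding)
puts config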

I would love to see an nginx module for distributed configuration management.

Right now we are running 6 frontend machines, 4 in use and 2 in reserve, as you described. We are doing about 5000 requests per minute with this, almost all dynamic; 10-30% of requests can be 'slow' (1s or more) depending on usage patterns.

To measure health I am using munin to watch system load, nginx requests, and nginx errors. In this configuration a 502 Bad Gateway from the frontend nginx indicates a busy unicorn socket, and thus a handoff of the request to the backups. We also measure the rails production.log for request counts and speed on each server, as well as using New Relic RPM.
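For reference, the socket-plus-TCP failover described above can be sketched as an nginx upstream along these lines (socket path and backup addresses are hypothetical, not our real config):

```nginx
upstream app_server {
  # Local unicorn master on a unix socket. When its listen
  # backlog is full, connect() fails fast, so nginx can move on.
  server unix:/var/run/unicorn.sock;

  # Reserve machines, only tried when the primary fails.
  server 10.0.0.21:8080 backup;
  server 10.0.0.22:8080 backup;
}

server {
  listen 80;
  location / {
    # Retry against the backups on connection errors/timeouts
    # instead of returning 502 to the client immediately.
    proxy_next_upstream error timeout;
    proxy_pass http://app_server;
  }
}
```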

monit also emails us when 502s show up.
In theory monit could automatically spin up another backup server, provision it with chef, then reprovision the rest of the cluster to start handing traffic over. Alternately, the new server could act as a backup only for the overloaded machine, which could make isolating performance issues easier.
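The 502 alerting can be driven by monit's file content test; something like the following sketch (log path hypothetical, and syntax may vary slightly between monit versions):

```
# Watch the nginx error log and alert when appended content
# mentions a 502, i.e. a handoff to the backup unicorns.
check file nginx_errors with path /var/log/nginx/error.log
  if content = "502" then alert
```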


More information about the mongrel-unicorn mailing list