Maintaining capacity during deploys

Devin Ben-Hur dbenhur at whitepages.com
Fri Nov 30 01:28:32 UTC 2012


On 11/29/12 3:34 PM, Lawrence Pit wrote:
>> Unfortunately, while the new workers are forking and begin processing
>> requests, we're still seeing significant spikes in our haproxy request
>> queue. It seems as if after we restart, the unwarmed workers get
>> swamped by the incoming requests.
>
> Perhaps it's possible to warm up the workers in the unicorn after_fork block?

I've successfully applied this methodology to a nasty Rails app 
that had a lot of latent initialization upon first request. Each 
worker gets a unique private secondary listen port, and in the 
after_fork hook each worker sends a warm-up request to the 
previously forked worker. In our environment the load balancer 
drains each host as it's being deployed, which does affect the 
length of a deployment across many hosts in a cluster, but the 
warm-up bucket brigade is effective at making sure workers on that 
host are responsive when they get added back to the available pool.
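Roughly, the relevant bits of the unicorn config look something 
like the sketch below. The base port, listener options, and warm-up 
path are illustrative assumptions rather than our exact production 
values:

  # unicorn.rb -- per-worker private listener plus bucket-brigade warm-up
  require 'net/http'
  require 'uri'

  worker_processes 8

  WARMUP_BASE_PORT = 9293  # illustrative; any free local port range works

  after_fork do |server, worker|
    # Give this worker a private listener in addition to the shared
    # socket, so a warm-up request can be aimed at one specific worker.
    server.listen("127.0.0.1:#{WARMUP_BASE_PORT + worker.nr}",
                  :tries => -1, :delay => 5)

    # Ask the previously forked worker to serve one request so it
    # finishes its lazy initialization before real traffic reaches it.
    if worker.nr > 0
      begin
        uri = URI.parse("http://127.0.0.1:#{WARMUP_BASE_PORT + worker.nr - 1}/")
        Net::HTTP.get(uri)
      rescue StandardError => e
        server.logger.warn("warm-up request failed: #{e.message}")
      end
    end
  end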

A better solution is to use a profiler to identify what extra work 
is being done when an unwarmed worker handles its first request, 
and to move that work into an initialization step that runs before 
the fork, with app preloading (preload_app true) enabled.
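For example, once the profiler shows that the first request is 
paying for something like loading a big lookup table, that work can 
be pulled into an initializer that runs in the unicorn master 
before workers are forked. The file names and the warm-up work 
below are hypothetical, just to show the shape of the change:

  # config/unicorn.rb
  preload_app true  # load the whole app in the master so workers fork warm

  after_fork do |server, worker|
    # Connections opened in the master during preload can't be shared;
    # re-establish them per worker.
    ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
  end

  # config/initializers/warmup.rb -- hypothetical first-request work
  # found by the profiler, moved here so the master pays the cost once
  # and forked workers inherit the warm state.
  require 'yaml'
  COUNTRY_CODES = YAML.load_file(Rails.root.join('config', 'country_codes.yml'))

With the expensive work done pre-fork, copy-on-write also keeps the 
per-worker memory cost of that warm state down.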
