Unicorn is killing our rainbows workers
samuel.kadolph at shopify.com
Thu Jul 19 14:29:02 UTC 2012
On Wed, Jul 18, 2012 at 8:26 PM, Eric Wong <normalperson at yhbt.net> wrote:
> Samuel Kadolph <samuel.kadolph at shopify.com> wrote:
>> On Wed, Jul 18, 2012 at 5:52 PM, Eric Wong <normalperson at yhbt.net> wrote:
>> > Samuel Kadolph <samuel.kadolph at jadedpixel.com> wrote:
>> >> Hey rainbows-talk,
>> >> We have 40 servers that each run rainbows with 2 workers with 100
>> >> threads using ThreadPool. We're having an issue where unicorn is
>> >> killing the worker process. We use ThreadTimeout (set to 70 seconds)
>> >> and originally had the unicorn timeout set to 150 seconds and we're
>> >> seeing unicorn eventually killing each worker. So we bumped the
>> >> timeout to 300 seconds and it took about 5 minutes but we started
>> >> seeing unicorn starting to kill workers again. You can see our stderr
>> >> log file (timeout at 300s) at
>> >> https://gist.github.com/9ec96922e55a59753997. Any insight into why
>> >> unicorn is killing our ThreadPool workers would help us greatly. If
>> >> you require additional info I would be happy to provide it.
> Also, are you using "preload_app true" ?
Yes we are using preload_app true.
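For reference, here is a sketch of the setup as described above — the numbers (2 workers, 100 threads, ThreadPool, 70s ThreadTimeout, 300s timeout) are from this thread, but the file layout and exact directives are my reconstruction, not our literal config:

```ruby
# config/rainbows.rb -- reconstructed sketch, not our literal config
Rainbows! do
  use :ThreadPool          # one OS thread per client connection
  worker_connections 100   # 100 threads per worker process
end

worker_processes 2
preload_app true
timeout 300   # unicorn-level timer; the master SIGKILLs workers past this

# ThreadTimeout is Rack middleware, so it lives in config.ru instead:
#   use Rainbows::ThreadTimeout, :timeout => 70
```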
> I'm a bit curious how these messages are happening, too:
> D, [2012-07-18T15:12:43.185808 #17213] DEBUG -- : waiting 151.5s after
> suspend/hibernation
They are strange. My current hunch is that the killing and that message
are symptoms of the same issue, since the message always follows a killing.
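For anyone following along: as I understand it, unicorn's kill mechanism is a heartbeat check — each worker bumps the ctime of a per-worker tempfile (via chmod) as it processes requests, and the master kills any worker whose file hasn't been touched within `timeout` seconds. A minimal stand-in (not unicorn's actual code):

```ruby
require "tempfile"

TIMEOUT = 2 # stand-in for the unicorn `timeout` directive

hb = Tempfile.new("heartbeat")

# workers use chmod as the heartbeat: it bumps the file's ctime
# without writing any data
beat = ->(f) { f.chmod(0o600) }

# the master's check: has the last heartbeat gone stale?
lazy = ->(f, now) { (now - f.stat.ctime) > TIMEOUT }

beat.call(hb)
lazy.call(hb, Time.now)     # => false, fresh heartbeat: worker is fine
lazy.call(hb, Time.now + 3) # => true, 3s with no beat: worker gets killed
```

So a worker only has to stall its heartbeat — not an individual request — to get murdered, which is why a busy-but-healthy thread pool shouldn't normally trip it.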
> Can you tell (from Rails logs) if the to-be-killed workers are still
> processing requests/responses the 300s before when the unicorn timeout
> hits it? AFAIK, Rails logs the PID of each worker processing the
> request.
Rails doesn't log the PID, but it would seem that after upgrading to
mysql2 0.2.18 it is no longer killing workers that are busy with
requests.
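(For what it's worth, on Rails >= 3.2 the PID could be prepended to every log line via tagged logging — a sketch, assuming 3.2's `log_tags` API:)

```ruby
# config/application.rb -- assumes Rails >= 3.2 tagged logging;
# the lambda receives the request and its result is prefixed to each line
config.log_tags = [ lambda { |request| "pid=#{Process.pid}" } ]
```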
> Also, what in your app takes 150s, or even 70s? I'm curious why the
> timeouts are so high. I wonder if there are bugs with unicorn/rainbows
> with huge timeout values, too...
> If anything, I'd lower the unicorn timeout to something low (maybe
> 5-10s) since that detects hard lockups at the VM level. Individual
> requests in Rainbows! _are_ allowed to take longer than the unicorn
> timeout.
We lowered the unicorn timeout to 5 seconds, but that did not stop the
killings; they do seem to be happening less often, though. I have some
of our stderr logs from after setting the timeout to 5 seconds at
> Can you reproduce this in a simulation environment or only with real
> traffic? If possible, can you setup an instance with a single worker
> process and get an strace ("strace -f") of all the threads when this
> happens?
We haven't been able to reproduce it locally. We have a staging
environment for this app, so I will see if I can use it to try to
reproduce the issue there.
>> We're running ruby 1.9.3-p125 with the performance patches at
> Can you reproduce this with an unpatched 1.9.3-p194? I'm not too
> familiar with the performance patches, but I'd like to reduce the amount
> of less-common/tested code to isolate the issue.
We cannot try p194 right now because one of our ops is on a trip but
once he's back I'm sure we'll try that and let you know.
>> I listed the gems we use and which
>> ones that have c extension at https://gist.github.com/3139226.
> Fortunately, I'm familiar with nearly all of these C gems.
> Newer versions of mysql2 should avoid potential issues with
> ThreadTimeout/Timeout (or anything that hits Thread#kill). I think
> mysql2 0.2.9 fixed a fairly important bug, and 0.2.18 fixed a very rare
> (but possibly related to your issue) bug,
Upgrading mysql2 seems to have stopped unicorn from killing workers
that are currently busy. We were stress testing it last night, and
after we upgraded to 0.2.18 we had no more 502s from the app, but this
could be a coincidence since the killings still happen.
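For anyone else hitting this, the fix on our end was just bumping the gem; in Gemfile terms (a hypothetical constraint, ours may differ):

```ruby
# Gemfile -- stay on the 0.2.x API while picking up the Thread#kill fixes
gem "mysql2", "~> 0.2.18"
```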
> Unrelated to your current issue, I strongly suggest Ruby 1.9.3-p194,
> previous versions had a nasty GC memory corruption bug triggered
> by Nokogiri (ref: https://github.com/tenderlove/nokogiri/issues/616)
> I also have no idea why mongrel is in there :x
I forgot to limit that list to the production bundle.
>> We'll try running without the ThreadTimeout. We don't think we're
>> having deadlock issues because our stress tests do not timeout but
>> they do 502 when the rainbows worker gets killed during a request.
> OK. I'm starting to believe ThreadTimeout isn't good for the majority
> of applications out there, and perhaps the only way is to have support
> for this tightly coupled with the VM. Even then, "ensure" clauses would
> still be tricky/ugly to deal with... So maybe forcing developers to use
> app/library-level timeouts for everything they do is the only way.
Our ops guys say we had this problem before we were using ThreadTimeout.
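As an example of the library-level approach you describe, mysql2 exposes its own connection timeouts, which raise Mysql2::Error instead of a thread being killed mid-request — a sketch; the option names are mysql2's, the host and values are made up:

```ruby
require "mysql2"

# Library-level timeouts (seconds): the driver itself gives up and
# raises, so no ensure block is ever interrupted by Thread#kill.
client = Mysql2::Client.new(
  :host            => "localhost",
  :username        => "app",
  :connect_timeout => 5,
  :read_timeout    => 10
)
```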