feature request - when_ready() hook
sunaku at gmail.com
Mon Nov 30 18:47:59 EST 2009
On Thu, Nov 26, 2009 at 11:53 AM, Eric Wong <normalperson at yhbt.net> wrote:
> Suraj Kurapati <sunaku at gmail.com> wrote:
>> the XML dataset loading
>> (see above) kept increasing the master's (and the new set of workers')
>> memory footprint by 1.5x every time Unicorn was restarted via SIGUSR2.
> Side problem, but another thing that makes me go "Huh?"
> Did the new master's footprint increase?
Yes, but this seems to have been my fault. I programmed the master
(via the Unicorn configuration file) to trap SIGPWR, upon which it
would (1) reload the XML dataset (i.e. add memory bloat) and (2)
send SIGUSR2 to itself (thereby providing the new XML data to the new
Unicorn master + workers). The idea was to cut down the time required
for loading the XML dataset. Unfortunately, the extra memory bloat
added by reloading the XML dataset carried over to the new Unicorn
generation as a side-effect.
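For reference, the relevant part of my (now abandoned) configuration
file looked roughly like this; load_xml_dataset is a hypothetical
stand-in name for my actual loader:

```ruby
# Sketch of the old SIGPWR handler in the Unicorn config (since removed);
# load_xml_dataset is a hypothetical stand-in for my actual XML loader.
trap(:PWR) do
  load_xml_dataset                  # (1) re-parse the XML dataset; bloats memory
  Process.kill(:USR2, Process.pid)  # (2) re-exec; the new master inherits the bloat
end
```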
> Are you mmap()-ing the XML dataset?
Nope, nothing fancy like that.
> Is RSS increasing or just VmSize?
Hmm, I did not pay attention to these individual stats. I just saw
that the memory% statistic in `ps xv` would increase by 10% every time
Unicorn was restarted through my SIGPWR handler. Again, this was my
fault for using such a non-standard approach.
> Unicorn sets FD_CLOEXEC on
> the first 1024 (non-listener) file descriptors, so combined with exec(),
> that should give the new master (and subsequent workers) a clean memory
> footprint.
Thanks. This is good to know, now that I'm using the standard approach.
>> > At this stage, maybe even implementing something as middleware and
>> > making it hook into request processing (that way you really know the
>> > worker is really responding to requests) is the way to go...
>> Hmm, but that would incur a penalty on each request (check if I've
>> already killed the old master and do it if necessary).
> I don't think a runtime condition would be any more expensive than all
> the routing/filters/checks that any Rails app already does and you can
> cache the result into a global variable.
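For what it's worth, that middleware idea could look something like the
following sketch (the class name, pid-file path, and global flag are
all my own inventions for illustration):

```ruby
# Hypothetical Rack middleware: on the first request served by the new
# generation, gracefully stop the old master, then cache the fact in a
# global so later requests skip the check entirely.
class KillOldMaster
  def initialize(app, pid_file)
    @app = app
    @pid_file = pid_file
  end

  def call(env)
    unless $old_master_killed
      $old_master_killed = true      # cached: the check runs once per worker
      oldbin = "#{@pid_file}.oldbin" # where Unicorn parks the old master's pid
      if File.exist?(oldbin)
        begin
          Process.kill(:QUIT, File.read(oldbin).to_i)
        rescue Errno::ESRCH
          # old master already exited; nothing to do
        end
      end
    end
    @app.call(env)
  end
end
```

The global variable caches the result, so only the first request in
each worker pays for the file-existence check.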
> As you may have noticed, I'm quite hesitant to add new features,
> especially for uncommon/rare cases. Things like supporting the
> "working_directory" directive and user-switching took *months* of
> debating with myself before they were finally added.
No problem. I ended up using a simple workaround for this whole
problem: from Capistrano, I send SIGUSR2 to the existing Unicorn
master (which will become the old Unicorn master), wait 90 seconds,
and then send SIGQUIT to the old Unicorn master. There's nothing
fancy in my Unicorn configuration file anymore --- no before/after
hooks at all; just a number of workers + listen directive.
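Concretely, the workaround boils down to very little code. Here is a
rough sketch in plain Ruby (rolling_restart and old_master_pid are
hypothetical helper names; the only Unicorn behavior assumed is that a
SIGUSR2 re-exec renames the old master's pid file to "<pid>.oldbin"):

```ruby
# Rough sketch of the deploy-time workaround; these helpers are
# hypothetical names, not part of Unicorn or Capistrano.

def old_master_pid(pid_file)
  # After a SIGUSR2 re-exec, Unicorn renames the running master's pid file
  # to "<pid_file>.oldbin", which is where the old master can be found.
  oldbin = "#{pid_file}.oldbin"
  File.exist?(oldbin) ? File.read(oldbin).to_i : nil
end

def rolling_restart(pid_file, grace = 90)
  Process.kill(:USR2, File.read(pid_file).to_i)  # spawn the new master
  sleep grace                                    # let the new workers warm up
  old = old_master_pid(pid_file)
  Process.kill(:QUIT, old) if old                # gracefully stop the old master
end
```

The fixed `grace` period is exactly the timeout I mentioned having to
tune by hand.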
This configuration is working out pretty well, and I have finally
achieved zero downtime deploys. (Yay! :-) The only thing I'm worried
about is that I'll have to keep adjusting this timeout as the
infrastructure my app depends upon becomes slower/faster. A
when_ready() hook would really do wonders for me, and I will implement
and try it as planned when I get some time.
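In the meantime, here is roughly what I have in mind, approximated
with the existing after_fork hook (a sketch only; it assumes Unicorn's
usual pid-file setup, where the old master's pid file is renamed to
"<pid>.oldbin" after SIGUSR2):

```ruby
# Sketch: approximate a when_ready() hook with the existing after_fork.
# Once the last worker has forked, the new generation is serving
# requests, so the old master left behind by SIGUSR2 can be retired.
after_fork do |server, worker|
  next unless worker.nr == server.worker_processes - 1  # last worker is up
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid)
    begin
      Process.kill(:QUIT, File.read(old_pid).to_i)      # retire the old master
    rescue Errno::ESRCH
      # old master already gone
    end
  end
end
```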
> let us know if it's the DB doing reverse DNS because
> I've seen that to be a problem in a lot of cases.
I'll ask about this and let you know.
Thanks for your consideration.
More information about the mongrel-unicorn mailing list