anything left before 1.0?

Eric Wong normalperson at
Tue Jun 15 22:27:27 EDT 2010

Andrew Grim <andrew at> wrote:
> > Let us know if there's anything else missing, pipe up within the next 24
> > hours or so...
> >
> Hey Eric,
> I was hoping to spend some more time debugging myself, but since you
> were going to release 1.0 I thought I'd get your thoughts on this.
> Quick overview, I work for, one of the larger rails
> sites, and one of the only large sites (that I know of personally)
> running ruby 1.9.  We are currently running mostly mongrels, but I've
> got one server testing Unicorn.  Things are mostly great, we are
> seeing nearly 20ms improvement in average response time, which is
> awesome.  Now to the issue.
> SYMPTOM: The master process is killing the workers fairly frequently
> based on the workers timing out.
> CAUSE: I've added some logging to get the backtrace when I send a
> SIGTERM, and it is always stuck on line 68 in http_response.rb:
>       body.each { |chunk| socket.write(chunk) }
> I ran some straces, and here's an example of the last few lines where
> it gets killed:
> 06:50:23.239931 clock_gettime(CLOCK_REALTIME, {1276523423, 239967000})
> = 0 <0.000172>
> 06:50:23.240213 write(12, "HTTP/1.1 200 OK\r\nDate: Mon, 14 J"...,
> 1896) = 1896 <0.000087>
> 06:50:23.242072 write(12, "<!DOCTYPE html PUBLIC \"-//W3C//D"...,
> 166842) = 128000 <0.000107>
> 06:50:23.242230 select(13, NULL, [12], NULL, NULL <unfinished ...>
> 06:51:22.167122 +++ killed by SIGKILL +++
> So it's writing and then (to my understanding) waiting on the socket
> to return, but you can see that for a full 60s it isn't.

Hi Andrew,

The timer starts when the request is initially dispatched to the app,
not when writing starts.  You can check the log output from the master
process (which usually goes to stderr_path) to see exactly how long the
worker ran before the master killed it.
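For reference, that timeout is set in the Unicorn config file.  A
minimal sketch (the values and paths below are illustrative, not from
Andrew's actual setup):

```ruby
# unicorn.rb -- illustrative values only, not Andrew's actual config.
worker_processes 4

# The master SIGKILLs any worker whose dispatch has not completed
# within this many seconds -- the clock covers the whole request,
# including the time spent writing the response body to the socket.
timeout 60

# The master logs worker timeouts (and other diagnostics) here.
stderr_path "/var/log/unicorn.stderr.log"
```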

Are you using nginx (or something else) to reverse proxy?  You should
be using nginx :)

> My best guess off-hand is that the large size of the string being
> written to the socket is causing an issue, and I have noticed that it
> is happening primarily on requests that return larger payloads.

That's unlikely.  I suspect the client you're using to hit Unicorn with
is not reading the other end of the socket, so once the kernel buffers
fill up, Unicorn blocks on the write.

Not a real solution, but you can probably hide the problem by increasing
the socket buffer sizes in the Linux kernel (the net.core.wmem_max and
net.ipv4.tcp_wmem sysctls), though the defaults are already very
generous.
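You can see the blocking-write behavior in isolation with a small
sketch (mine, not from the thread): fill a socketpair with
non-blocking writes until the kernel buffer is full, which is exactly
the point where a plain socket.write would hang the way the strace
shows.

```ruby
require 'socket'

# Demonstrate that writes stop succeeding once the peer isn't reading
# and the kernel send buffer fills.  Non-blocking writes let us find
# the limit without actually hanging.
reader, writer = UNIXSocket.pair
buffered = 0
chunk = 'x' * 16_384

begin
  loop do
    buffered += writer.write_nonblock(chunk)
  end
rescue IO::WaitWritable
  # Kernel buffer full: a blocking socket.write here would now wait
  # until `reader` drains data -- the select(13, NULL, [12], ...) seen
  # in the strace is Unicorn doing exactly that wait.
end

puts "kernel buffered #{buffered} bytes before blocking"
```

If the client (nginx) is reading normally, that buffer drains and the
write completes quickly; if it stops reading, Unicorn sits in that
select until the master's timeout fires.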

> At the same time, it isn't that much data, so I'm a little surprised
> it would be an issue.  I am planning on trying to split the body up
> into smaller chunks in a rack middleware or something.

I doubt the middleware would help at all.
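For reference, the kind of middleware Andrew describes might look like
the following (a hypothetical sketch; class and parameter names are
mine).  It re-slices the Rack body into smaller strings, which, per
the above, shouldn't help: the kernel buffer fills on total bytes
written, not on the size of any individual chunk.

```ruby
# Hypothetical Rack middleware: split each body string into
# at-most-`size`-byte pieces before they reach socket.write.
class RechunkBody
  def initialize(app, size = 8_192)
    @app  = app
    @size = size
  end

  def call(env)
    status, headers, body = @app.call(env)
    chunks = []
    body.each do |part|
      (0...part.bytesize).step(@size) do |off|
        chunks << part.byteslice(off, @size)
      end
    end
    [status, headers, chunks]
  end
end
```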

> Or I could be totally off.  Just wanted to see if you have any ideas,
> I'm not even sure this is a Unicorn issue, definitely could be ruby
> 1.9 bug too.

I would definitely look at your _client_ (which should be nginx).

You should isolate your client from other requests and strace that, too,
and see if it's reading off the socket at the same time.  I've used
1.9.1 pretty heavily with Rainbows! and large responses myself in
production.

nginx will freeze up badly when running poorly-written Perl code with
the embedded Perl support.  Other than that it's been very solid in my
experience.
> Sorry about the long email, but I appreciate any help you can give.

No problem, let us know what you find out.

Eric Wong

More information about the mongrel-unicorn mailing list