Unicorn hangs on POST request

Eric Wong normalperson at yhbt.net
Mon Mar 11 23:01:57 UTC 2013


Tom Pesman <tom at tnux.net> wrote:
> > Tom Pesman <tom at tnux.net> wrote:
> >> Eric Wong wrote:
> >> > Tom Pesman <tom at tnux.net> wrote:
> >> >> I'm trying to fix a problem I'm experiencing with my Rails
> >> >> application hosted at Heroku. I've one POST request which hangs and
> >> >> with the help of a customized rack-timeout gem
> >> >> (https://github.com/tompesman/rack-timeout) I managed to get a
> >> >> stacktrace: https://gist.github.com/tompesman/7b13e02d349aacc720e0
> >> >>
> >> >> How can I debug this further to get to the bottom of this and is
> >> >> this a rack or a unicorn problem?
> >> >
> >> > It's a client or proxy problem.
> >> >
> >> > The request was too large to be transferred within your configured
> >> > timeout, or the client or proxy layer was too slow at transferring the
> >> > POST to unicorn, or the host running unicorn was too overloaded/slow
> >> > to buffer the request.
> >> >
> >> > Factors:
> >> > 1) Disk/filesystem/memory speed on the (client|proxy) talking to unicorn
> >> > 2) Disk/filesystem/memory speed on the host running unicorn.
> >> > 3) The network link between the (client|proxy) <-> unicorn.
> >> >
> >> > I don't know about Heroku, but nginx will fully buffer the request body
> >> > before sending to unicorn, so all 3 factors are within your control.
> >> >
> >> > Does Heroku limit (or allow limiting of) the size of request bodies?
> >> >
> >> > Maybe a bad client sent a gigantic request.  nginx limits request bodies
> >> > to 1M by default (client_max_body_size config directive).
> >> >
> >> > [1] unicorn buffers request bodies to TMPDIR via TeeInput
> >> >
> >>
> >> I agree with you if the POST request has a file to upload, but the
> >> requests we're dealing with do not upload a file and are actually quite
> >> small.
> >
> > Do you have error logs from the proxy Heroku uses?
> 
> The only output the Heroku proxy (called the "router" at Heroku) gives is
> something like this:
> 2013-03-11 15:55:58+00:00 heroku router - - at=info method=POST
> path=/api/v1/games/2257907/move.json host=aaa.bbb.ccc
> fwd="xxx.xxx.xxx.xxx/NX" dyno=web.1 queue=0 wait=8ms connect=9ms
> service=5019ms status=500 bytes=0
> 
> > Even with small requests, clients/networks can fail to send the entire
> > request.  nginx will log prematurely aborted client requests; check
> > if whatever proxy Heroku uses does the same.
> 
> I think it's hard to debug with Heroku as the router doesn't give more
> output than this. I'm in contact with Heroku and I'll try to debug this
> with them. This issue causes two problems for me: the waiting affects
> other requests, and it happens frequently (multiple times per minute at
> 2k-4k rpm). How can I reduce the impact without aborting valid
> requests?

The best way is to use nginx.
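
A minimal sketch of such an nginx front-end, assuming unicorn listens on a
Unix socket (the socket path and limits below are illustrative, not from
this thread):

```nginx
# nginx reads and buffers the entire request body before proxying,
# so slow or stalled clients never occupy a unicorn worker.
upstream unicorn_app {
  # Placeholder socket path; match your unicorn "listen" directive.
  server unix:/tmp/unicorn.sock fail_timeout=0;
}

server {
  listen 80;

  # Reject oversized request bodies early (nginx default is 1m).
  client_max_body_size 1m;

  location / {
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://unicorn_app;
  }
}
```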

Otherwise, lowering your rack-timeout is probably your best option.

More below...

> >> Can I modify my customized rack-timeout gem to get more information to
> >> debug this problem?
> >> https://github.com/tompesman/rack-timeout/blob/master/lib/rack/timeout.rb
> >
> > Your env.inspect should show you @bytes_read in the Unicorn::TeeInput
> > object before the timeout was hit.
> 
> This is the output of env.inspect:
> https://gist.github.com/tompesman/1217d4f02aa33efcd873
> 
> It shows a CONTENT_LENGTH of 1351 and the "rack.input" =>
> Unicorn::TeeInput -> @bytes_read == 0.
> 
> So what is happening here?
> 
> 1. Heroku router/proxy sends the header to unicorn.
> 2. Unicorn receives the request and gives the request to Rack
> 3. Rack receives the request and asks Unicorn for the contents of the request
> 4. Unicorn should give the content to Rack. Now it gets interesting: how
> can I see the raw contents of the request? Or are we certain that it isn't
> there because @bytes_read == 0?

The request body doesn't seem to be there, presumably because Heroku
isn't sending it.

Does Heroku fully buffer the request body before sending it to unicorn?
nginx fully buffers, and this is why I can only recommend nginx for slow
clients.

The proxy -> unicorn transfer speed should _never_ be dependent on the
client -> proxy transfer speed.

1) client        ------------------ (slow) --------------> proxy
2) proxy (nginx) --- (as fast as the connection goes) ---> unicorn

With nginx, 1) and 2) are decoupled and serialized.  This adds latency,
but is the only way for multiprocess servers like unicorn to efficiently
handle slow clients.


More information about the mongrel-unicorn mailing list