Unicorn and streaming in Rails 3.1
fxn at hashref.com
Sat Jun 25 18:23:34 EDT 2011
On Sat, Jun 25, 2011 at 10:16 PM, Eric Wong <normalperson at yhbt.net> wrote:
> Basically the per-connection overhead of Unicorn is huge, an entire Ruby
> process (tens to several hundreds of megabytes). The per-connection
> overhead of nginx is tiny: maybe a few KB in userspace (including
> buffers), and a few KB in the kernel. You don't want to maintain
> connections to Unicorn for a long time because of that cost.
I see. I've read also the docs about design and philosophy in the website.
So if I understand it correctly, as far as memory consumption is
concerned the situation seems to be similar to the old days when
mongrel cluster was the standard for production, except perhaps for
setups with copy-on-write friendly interpreters, which weren't
common at the time.

So you configure only a few processes because of memory consumption,
and since there aren't many you want them to be ready to serve a new
request as soon as possible to handle some normal level of
concurrency. Hence the convenience of buffering in Nginx.
>> in the use case we have in mind in
>> Rails 3.1, which is to serve HEAD as soon as possible.
> Small nit: s/HEAD/the response header/ "HEAD" is a /request/ that only
> expects to receive the response header.
Oh yes, that was ambiguous. I actually meant the HEAD element of HTML
documents. The main use case in mind for adding streaming to Rails is
to be able to send the top of your layout (typically everything before
yielding to the view) so that the browser may issue requests for CSS
and JavaScript assets while the server is still generating the
costly dynamic response.
> nginx only sends HTTP/1.0 requests to unicorn, so Rack::Chunked won't
> actually send a chunked/streamed response. Rails 3.1 /could/ enable
> streaming without chunking for HTTP/1.0, but only if the client
> didn't set a non-standard HTTP/1.0 header to enable keepalive. This
> is because HTTP/1.0 (w/o keepalive) relies on the server to close
> the connection to signal the end of a response.
It's clear then. Also, Rails has code that prevents streaming from
being triggered if the request is HTTP/1.0.
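I don't have the exact source at hand, but the guard amounts to
something like this (the method name is an assumption, not the actual
Rails 3.1 code):

```ruby
# Illustrative sketch of the kind of guard Rails applies: chunked
# transfer encoding only exists in HTTP/1.1, so streaming must be
# disabled for HTTP/1.0 requests.
def streaming_allowed?(env)
  env["HTTP_VERSION"] != "HTTP/1.0"
end
```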
> You can use "X-Accel-Buffering: no" if you know your responses are small
> enough to fit into the kernel socket buffers. There are two kernel
> buffers (Unicorn + nginx), so you can get a little more space there. nginx
> shouldn't make another request to Unicorn if it's blocked writing a
> response to the client already, so an evil pipelining client should not
> hurt unicorn in this case.
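As a concrete illustration, a Rack endpoint can opt a single response
out of nginx buffering by setting that header (the header name is from
the nginx docs; the app itself is a made-up sketch):

```ruby
# Minimal Rack app sketch: X-Accel-Buffering is honored by nginx and
# disables proxy buffering for this one response only.
STREAMING_APP = lambda do |env|
  headers = {
    "Content-Type"      => "text/html",
    "X-Accel-Buffering" => "no"  # ask nginx not to buffer this response
  }
  [200, headers, ["<html><head></head><body>streamed</body></html>"]]
end
```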