Is a client uploading a file a slow client from unicorn's point of view?

Eric Wong normalperson at
Tue Oct 9 20:03:06 UTC 2012

Laas Toom <laas at> wrote:
> On 09.10.2012, at 4:58, Eric Wong <normalperson at> wrote:
> > I'm not familiar with the nginx upload module, but stock nginx will
> > already do full request buffering for you.  It looks like the nginx
> > upload module[1] is mostly meant for standalone apps written for
> nginx, and not when nginx is used as a proxy for a Rails app...
> AFAIK the upload module will give you two things:
> 1) handle the whole body parsing up to the point of storing the file
> to disk in the correct place. Then it strips the file from the POST
> request and replaces it with a reference to the location on disk.

That sounds awesome performance-wise.
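
Something like this is what I'd then expect on the Rails side -- only a
sketch, and the parameter names are made up, since the actual field
names are whatever your nginx config passes along:

  class MediaController < ApplicationController
    def create
      # nginx has already written the upload to disk; all Rails sees
      # is the path it was stored under (:tmp_path here is illustrative)
      path = params[:media][:tmp_path]

      # hand the path off to background processing (Resque/DelayedJob/
      # whatever) and return right away so the unicorn worker is freed
      MediaProcessingJob.enqueue(path)  # hypothetical job class

      head :created
    rescue
      # the app, not nginx/unicorn, is responsible for cleanup on failure
      File.unlink(path) if path && File.exist?(path)
      raise
    end
  end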

> 2) make the upload progress available, so e.g. AJAX-powered upload forms can show a progress bar, which is really neat. No need for Flash-based uploaders.

It does?  I'm not seeing it in the documentation, but I know there's
a separate upload progress module:

A side note on upload progress: I wrote upr[1] back in the day since I
wanted to share upload progress state via memcached for multi-machine
setups.
> I have a Rails app that accepts media uploads. All the processing happens in the background; the front-end handles only the actual upload and stores it to disk.
> But with uploads as large as 1.4 GB, I've seen Rails response times > 200 secs. This starts to cause timeouts in weird places.

Yikes.  I assume you're constrained by disk I/O there?

For some large-file situations under Linux, I find it beneficial to
lower the dirty_*ratio/*bytes settings drastically to avoid large,
sudden bursts of disk activity and instead favor smaller writes.  I get
lower throughput, but more consistent performance.
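
For reference, the knobs I mean are the vm.dirty_* sysctls.  The
numbers below are only an illustration (tune for your own hardware);
the *_bytes variants take over from their *_ratio counterparts:

  # /etc/sysctl.conf -- start background writeback early and cap the
  # amount of dirty memory so flushes stay small and frequent
  vm.dirty_background_bytes = 16777216   # 16MB
  vm.dirty_bytes = 67108864              # 64MB

(apply with "sysctl -p")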

> Eric, correct me if I'm wrong, but doesn't the Nginx-Unicorn-Rails stack
> write the whole file up to 3 times to disk:
> 1) Nginx buffers the body (in encoded state)


> 2) Unicorn parses the body and writes it to a TMP folder (as requested by Rails)

Rack does multipart parsing.  Unicorn itself doesn't do body parsing
other than handling Transfer-Encoding:chunked (which hardly anybody
uses).

> 3) if Rails accepts the file, it moves it to actual storage. But as /tmp is often a different device from storage, this is actually a full copy.

Depends on the Rack/Rails app, but usually this is the case.
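
For the common case, the hand-off looks roughly like this (controller
and param names are made up); note FileUtils.mv is only a cheap
rename(2) when source and destination share a filesystem, otherwise it
quietly falls back to copy + unlink, which is the extra full copy you
describe:

  require 'fileutils'

  class UploadsController < ApplicationController
    STORAGE_DIR = '/var/storage/media'  # hypothetical destination

    def create
      upload = params[:media]  # ActionDispatch::Http::UploadedFile
      dest = File.join(STORAGE_DIR, upload.original_filename)

      # Rack/Rails already wrote the multipart body to a Tempfile in
      # /tmp; this moves (or copies, across devices) it into place.
      FileUtils.mv(upload.tempfile.path, dest)

      head :created
    end
  end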

For my use, all uploads are PUT requests with "curl -T", so there's no
multipart parsing involved and it's much faster :)
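
A bare-bones Rack endpoint in that style looks something like the
following (paths and chunk size are arbitrary); since the PUT body is
the file itself, it streams straight from rack.input to disk with no
multipart decoding at all:

  # config.ru -- accepts "curl -T bigfile http://host/" style PUTs
  require 'securerandom'

  run lambda { |env|
    if env['REQUEST_METHOD'] == 'PUT'
      dest = "/var/storage/incoming/#{SecureRandom.hex(8)}"
      File.open(dest, 'wb') do |f|
        input = env['rack.input']
        # copy in 64K chunks to keep memory use flat
        while chunk = input.read(65536)
          f.write(chunk)
        end
      end
      [201, { 'Content-Type' => 'text/plain' }, ["stored as #{dest}\n"]]
    else
      [405, { 'Content-Type' => 'text/plain' }, ["PUT only\n"]]
    end
  }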

> In such a situation the upload module would help out, because instead
> of simply buffering the body on disk, it actually parses the body. And
> it is implemented in C, which should make it faster.


> Afterwards it will only hand out the file location and Rails can
> complete its work a lot faster, freeing up workers.
> Unicorn won't even see the file and Rails has the responsibility to
> delete the file if it's invalid.

I think the only problem with this approach is it won't work well on
setups where nginx is on separate machines from unicorn.  Shared
storage would be required, but that ends up adding to network I/O
anyway.
