negative timeout in Rainbows::Fiber::Base

Eric Wong normalperson at
Wed Sep 5 23:27:39 UTC 2012

"Lin Jen-Shin (godfat)" <godfat at> wrote:
> On Fri, Aug 31, 2012 at 9:37 AM, Eric Wong <normalperson at> wrote:
> > I seem to recall problems with some of the more esoteric test cases in
> > Rainbows! a few years ago.
> >
> > Now that I think more about it, it might've been related to client
> > pipelining.  If a client pipelines requests, I don't think using
> > EM.defer {} makes it easy to guarantee the servers responses are
> > returned in the correct order.
> >
> > This is made worse since (AFAIK) EM provides no easy way to
> > temporarily disable firing read callbacks for a socket, so
> > a client which pipelines aggressively becomes bad news.
> After some experiments, I now understand why it is hard. But I can't
> figure out from a few quick glimpses at the code how you solved this
> problem for the other concurrency models?

Simple: we don't read from the socket at all while processing a request.

Extra data the client sends gets buffered in the kernel; eventually TCP
backoff will kick in and the Internet stays usable :>
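That model can be sketched in a few lines.  This is a toy illustration,
not the Rainbows! source; `serve_sequentially` and the lambda "app" are
made up for the demo, with newline-delimited strings standing in for
HTTP requests:

```ruby
require "socket"

# Toy sketch of the one-request-at-a-time model (not the Rainbows!
# source): read exactly one request, run the app, write the response,
# and only then read from the socket again.  Whatever the client
# pipelines in the meantime just sits in the kernel receive buffer.
def serve_sequentially(sock, app)
  while (line = sock.gets)            # read exactly one "request"
    sock.write(app.call(line.chomp))  # no reads happen while app runs
  end
end

# Demo over a socketpair: the "client" pipelines two requests at once,
# yet responses come back in order because the server never reads ahead.
client, server = UNIXSocket.pair
client.write("a\nb\n")                # aggressive pipelining
client.close_write
serve_sequentially(server, ->(req) { "echo:#{req}\n" })
server.close
puts client.read                      # => "echo:a\necho:b\n"
```

Ordering comes for free here: since we never read the second request
before the first response is written, responses can't get out of order.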

With the inability to easily stop read callbacks via EM, the socket
buffers constantly get drained so the clients are able to keep sending
data.  The only option I've found for Rainbows! + EM was to issue
shutdown(SHUT_RD) if a client attempts to pipeline too much (which
never happens with legitimate clients AFAIK).
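The shutdown trick itself is a one-liner; here is a rough sketch, where
the byte limit and the helper name are made up for illustration:

```ruby
require "socket"

# Sketch of the workaround above: once a client has pipelined more
# data than we're willing to queue, stop reading from it entirely.
# shutdown(SHUT_RD) makes further reads return EOF without closing
# the write side, so responses already queued can still be sent.
MAX_PIPELINED_BYTES = 16_384 # arbitrary limit, for illustration only

def too_much_pipelining?(sock, buffered_bytes)
  return false if buffered_bytes <= MAX_PIPELINED_BYTES
  sock.shutdown(Socket::SHUT_RD) # reads now hit EOF; writes still work
  true
end
```

After the shutdown, the abusive client's extra input is discarded by the
kernel while the server can still flush any pending responses.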

> One possible and simple way would be... just make pipelined requests
> sequential, but wouldn't this greatly reduce concurrency?
> At least the Puma server runs quite poorly whenever I am
> testing pipelined requests.

Yes, pipelining is handled sequentially.  It's the easiest and safest
way to implement since HTTP/1.1 requires the responses to be returned in
the same order they were sent (as you mention below).

> My test command is:
> httperf --hog --server localhost --port 8080 --uri /cpu --num-calls 4
> --burst-length 2 --num-conn 2 --rate 8 --print-reply
> But Zbatery runs quite smoothly with ThreadPool and ThreadSpawn.
> I assume that's because Zbatery handles pipelined requests concurrently,
> collects the responses, and returns them in the correct order, though
> I cannot tell from the code, at least from some quick glimpses.

Rainbows!/Zbatery handles pipelined requests sequentially.  They only
have concurrency on a per-socket level.

> At this point I am more confident to say that the Unicorn family are
> the best Ruby application servers. :)

Good to hear :)


> To address the ordering issue, I guess we could remember the
> index of each request, and if there's a request being
> processed which has a lower index, the response shouldn't
> be written back before the lower one has been written.
> Not sure if this is worth the effort though... This must touch
> Rainbows!' internals, and it cannot be easily handled by
> simply extending the client class.

I don't think it is worth the effort.  I'm not even sure how often
pipelining is used in the real world; all I know is Rainbows! can
handle it without falling over.

> > Maybe disabling keepalive/persistent connections will make this work
> > correctly (but you obviously lose latency benefits, too).
> >
> > I also don't think it's possible to say "no pipelining" to a client if
> > we support persistent connections at all.
> I wonder if we always run Nginx or something similar in front of
> Rainbows, does it still matter?

It shouldn't matter for nginx, I don't think nginx will (ever) pipeline
to a backend.  Nowadays nginx can do persistent connections to backends,
though I'm not sure how much of a benefit it is for local sockets.

> I see. I never thought that EM might buffer a lot of large
> responses in memory. As for loading large amounts of data
> into memory, I guess I can't tell. As far as I know, no, but who knows :P
> It must be accidental if there is one...

I think your comment is unfortunately representative of a lot of
software development nowadays.

Embrace pessimism and let it be your guide :)

> > Ruby 1.9 sets stack sizes to 512K regardless of ulimit -s.  At least on
> > Linux, memory defaults to being overcommited and is lazily allocated in
> > increments of PAGE_SIZE (4K on x86*).  It's likely the actual RSS overhead
> > of a native thread stack is <64K.
> >
> > VMSize overhead becomes important on 32-bit with many native threads,
> > though.  In comparison, Fibers use only a 4K stack and have no extra
> > overhead in the kernel.
> I see, thanks for the explanation. I guess that does matter a bit, but only
> if we're using thousands of threads/fibers, which should be quite rare
> in a web app, I guess.
> Using fibers also risks system stack overflow, especially in
> a Rails app with a lot of plugins, I guess... Umm, but I also heard that
> the fiber stack was increased a bit in newer Ruby?

You're right, fiber stacks got bigger.  They're 64K across the board in
1.9.3, and 128K on 64-bit for 2.0.0dev.  So there's even less benefit in
using Fibers nowadays for memory concerns.
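For the curious, MRI 2.0 exposes these defaults at runtime; the exact
values are version- and platform-dependent, so treat this as a probe
rather than gospel:

```ruby
# RubyVM::DEFAULT_PARAMS (MRI 2.0+) reports the stack sizes the VM
# starts with; they can also be tuned via environment variables such
# as RUBY_FIBER_VM_STACK_SIZE before the interpreter boots.
params = RubyVM::DEFAULT_PARAMS
puts "thread VM stack:     #{params[:thread_vm_stack_size]} bytes"
puts "fiber VM stack:      #{params[:fiber_vm_stack_size]} bytes"
puts "fiber machine stack: #{params[:fiber_machine_stack_size]} bytes"
```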

> >> Though I really doubt if threads are really that heavy comparing to fibers.
> >> At least in some simple tests, threads are fine and efficient enough.
> >
> > I agree native threads are light enough for most cases (especially since
> > you're already running Ruby :).
> Speaking of this and green threads, I wonder if it's worth the effort to
> implement m:n threading for Ruby? Or could we just compile and
> link against a threading library which supports m:n threading?
> Goroutine? :P

*shrug*  Not worth _my_ effort for m:n threads.

More information about the rainbows-talk mailing list