negative timeout in Rainbows::Fiber::Base

Lin Jen-Shin (godfat) godfat at
Wed Sep 5 20:06:04 UTC 2012

On Fri, Aug 31, 2012 at 9:37 AM, Eric Wong <normalperson at> wrote:
> I seem to recall problems with some of the more esoteric test cases in
> Rainbows! a few years ago.
> Now that I think more about it, it might've been related to client
> pipelining.  If a client pipelines requests, I don't think using
> EM.defer {} makes it easy to guarantee the servers responses are
> returned in the correct order.
> This is made worse since (AFAIK) EM provides no easy way to
> temporarily disable firing read callbacks for a socket, so
> a client which pipelines aggressively becomes bad news.

After some experiments, I now understand why it is hard. But from
a few quick glances at the code, I couldn't figure out how you
solved this problem for the other concurrency models.

One possible and simple way would be to just handle pipelined
requests sequentially, but that would greatly reduce concurrency,
right? At least Puma performs quite poorly whenever I test
pipelined requests.

My test command is:

httperf --hog --server localhost --port 8080 --uri /cpu --num-calls 4
--burst-length 2 --num-conn 2 --rate 8 --print-reply

But Zbatery runs quite smoothly with ThreadPool and ThreadSpawn.
I assume that is because Zbatery handles pipelined requests
concurrently, collects the responses, and writes them back in the
correct order, though I can't confirm that from the code, at least
not from a few quick glances.

At this point I am more confident in saying that the Unicorn family
is the best of the Ruby application servers. :)

> Thank you, your code makes it clear.  I think your approach will work
> with most HTTP clients.
> However, I think pipelined requests will hit the same problems as
> EM.defer, too.  Can you try with pipelining?

Honestly, I didn't know much about keep-alive and pipelined requests;
I only just learned about them from trying httperf, which seems to be
a very good tool for exercising web servers. Puma and Thin performed
poorly in my tests with the httperf command above, while Zbatery
worked perfectly fine (except with my hack for adding fibers/threads
on top of EventMachine, which was raising errors).
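For anyone else new to this: pipelining just means the client writes
several requests back-to-back on one connection without waiting for
the responses, e.g.:

```ruby
# Two pipelined GET requests written back-to-back on one connection;
# the server must return the two responses in this same order.
pipelined =
  "GET /cpu HTTP/1.1\r\nHost: localhost\r\n\r\n" \
  "GET /cpu HTTP/1.1\r\nHost: localhost\r\n\r\n"

request_count = pipelined.scan("GET ").size
```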

After pondering and reading the Rainbows! code for a while, I managed
to make it work without errors, but I believe it still suffers from
the ordering issue: there is no guarantee responses go out in request
order.

Here's the new code. It's for fibers, but I think the same applies
to EM.defer.

def app_call input
  # [...] as before
  Fiber.new{
    status, headers, body = catch(:async) {!(RACK_DEFAULTS))
    }
    if nil == status || -1 == status
      @deferred = true # app threw :async; response will come later
    else
      @deferred = nil # response is ready, no more @deferred
      ev_write_response(status, headers, body,
                        @hp.next?) # keep-alive flag (truncated in the original)
    end
  }.resume
  @deferred = true # we're always deferring
end

To address the ordering issue, I guess we could remember the index
of each request, and hold back any response whose lower-indexed
predecessor has not been written yet.

Not sure if this is worth the effort though... It would have to
touch Rainbows!' internals, and it cannot be handled simply by
extending the client class.

> Maybe disabling keepalive/persistent connections will make this work
> correctly (but you obviously lose latency benefits, too).
> I also don't think it's possible to say "no pipelining" to a client if
> we support persistent connections at all.

I wonder: if we always run nginx or something similar in front of
Rainbows!, does this still matter?

Nevertheless, I guess it's good enough for us right now.
Many thanks for your review. On the other hand, I would still be
very interested to see this addressed. Last time I tried to
replicate how the other concurrency models solved this, but failed
to see how.

> It's likely some corner case in your code.  Do you generate potentially
> large responses or read in large amounts of data?  (e.g. SELECT
> statements without a LIMIT, large files (uploads?)).
> A slow client which triggers large server responses (which EM may
> buffer even if the Rack app streams it out) can hit this, too.
> I don't think EM can be configured to buffer writes to the file
> system (nginx will automatically do this, though).

I see. I never thought of EM buffering a lot of large responses in
memory. As for loading large amounts of data into memory, I can't
really tell. As far as I know we don't, but who knows :P
If it happens, it must be accidental...

Anyway, we don't see that often nowadays. It could be that Ruby
1.9.3 fixed some memory leak issues, or that some third-party
libraries we're using did.

> Ruby 1.9 sets stack sizes to 512K regardless of ulimit -s.  At least on
> Linux, memory defaults to being overcommited and is lazily allocated in
> increments of PAGE_SIZE (4K on x86*).  It's likely the actual RSS overhead
> of a native thread stack is <64K.
> VMSize overhead becomes important on 32-bit with many native threads,
> though.  In comparison, Fibers use only 4K stack and has no extra
> overhead in the kernel.

I see, thanks for the explanation. I guess that does matter a bit,
but only if we're using thousands of threads/fibers, which should
be quite rare in a web app, I guess.

Using fibers also risks stack overflow, especially in a Rails app
with a lot of plugins, I guess... Umm, but I also heard that the
fiber stack size was increased a bit in newer Ruby versions?

>> Though I really doubt if threads are really that heavy comparing to fibers.
>> At least in some simple tests, threads are fine and efficient enough.
> I agree native threads are light enough for most cases (especially since
> you're already running Ruby :).

Speaking of this and green threads, I wonder if it's worth the
effort to implement m:n threading for Ruby? Or could we just
compile and link against a threading library that supports m:n
threading? Goroutines? :P

>> EventMachine is still a lot faster than regular sockets (net/http) though,
>> so I'll still keep EventMachine for a while even if I switched to threads.
> I think part of that is the HTTP parser and I/O buffering being
> implemented in C/C++ vs Ruby.  Things like the net-http-persistent gem
> should help with pure-Ruby performance, though (and performance is
> likely to be better with upcoming Ruby releases).

I haven't had a chance to try net-http-persistent, but it seems I
should. (Or try keep-alive with em-http-request, which seems to
support it.)

Or if it's all about HTTP parsing, the [http][] gem should help too.
It uses [http_parser.rb] underneath, which is based on Node.js'
HTTP parser.

Sometimes I feel it's all about throwing away EventMachine...
I've heard that EM is bad, but not bad enough to be rewritten...


> I enjoy my near-anonymity and want as little reputation/recognition as
> possible.  I'm always happy if people talk about software, but I prefer
> software stand on its own and not on the reputation of its authors.
> (The only reason I use my name is for potential GPL enforcement)

I see. Thanks for explaining. I'll then avoid talking about authors :)

More information about the rainbows-talk mailing list