[Backgroundrb-devel] [ANN] BackgrounDRb 1.0 pre-release available now

Stevie Clifton stevie at slowbicycle.com
Wed Jan 9 14:17:32 EST 2008

Hi Hemant,

Sorry for referring to an old thread, but I'm confused about the appropriate
times to use thread_pool vs. putting sockets in the select loop (or whether
the latter has been deprecated).

In your thread_pool example, you have this:

def fetch_url(url)
  puts "fetching url #{url}"
  thread_pool.defer(url) do |url|
    begin
      data = Net::HTTP.get(url, '/')
      File.open("#{RAILS_ROOT}/log/pages.txt", "w") { |fl| fl.write(data) }
    rescue
      logger.info "Error downloading page"
    end
  end
end
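
(For what it's worth, on the Rails side I've been assuming the method above
still gets triggered with something like the call below. ask_work is my guess
at the right API for the 1.0 pre-release, and :url_worker is just a
placeholder for whatever worker defines fetch_url, so correct me if I have
that wrong:)

  MiddleMan.ask_work(:worker => :url_worker,
                     :worker_method => :fetch_url,
                     :data => "www.example.com")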

But in this thread (from Dec 11) you mentioned that it is much better to put
connections in the select loop via Backgroundrb::HTTP.open to stay with the
evented paradigm instead of directly calling Net::HTTP.  In my case, I want
to open up a bunch of lightweight connections to various servers, but I'd
like to have at most 20 open at a time.  In this case, would it make sense
to use a thread_pool of size 20 to manage the size of the connection pool,
but also use Backgroundrb::HTTP.open for better socket concurrency? Just
trying to get this all straight.
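
To make that concrete, here is roughly what I'm picturing for the plain
thread_pool version. set_worker_name I took from the README, but the
pool_size line is just my guess at how you'd cap the pool at 20 threads, so
please correct me if that isn't the right knob:

require 'net/http'

class PageWorker < BackgrounDRb::MetaWorker
  set_worker_name :page_worker
  pool_size 20   # my guess at capping the pool at 20 threads

  def fetch_pages(hosts)
    hosts.each do |host|
      # each fetch blocks inside one of the (at most 20) pool threads
      thread_pool.defer(host) do |h|
        data = Net::HTTP.get(h, '/')
        File.open("#{RAILS_ROOT}/log/pages.txt", "a") { |fl| fl.write(data) }
      end
    end
  end
end

The alternative I'm imagining is the same loop, but handing each host to
Backgroundrb::HTTP.open so the sockets live in the select loop, and that's
where I don't see how to keep an upper bound of 20 open connections.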



On 12/11/07, hemant kumar <gethemant at gmail.com> wrote:
> Hi
> On Tue, 2007-12-11 at 10:53 +0100, Emil Marceta wrote:
> > Does this mean that slave/daemons are not a dependency anymore?
> Yes, it's gone. bdrb no longer depends on slave and daemons.
> > By 'not encouraged' do you mean that 1.0 does not support multiple
> > threads in the worker, or is it just general guidance?
> >
> > Could you please comment on how you would approach the following
> > scenario with 1.0. Currently, we have a worker that creates threads
> > to process financial payment transactions. An HTTP request sends
> > several tens or hundreds of payment transaction records. They are
> > handled by a single worker instance. Within the worker, a pool of
> > threads is created, sized according to the number of transactions.
> > For example, for 200 transactions there will be 20 threads, where
> > each thread handles 10 requests in sequence. Each transaction takes
> > about 3-5 seconds, so our throughput is significantly improved by
> > internal worker parallelization with a thread pool. The worker
> > periodically updates a custom background-job database record, so
> > that a subsequent AJAX request from the client can read the status
> > of the worker process. The job is identified by the worker key that
> > is stored in the session.
> It's not encouraged, that's all. You can still have threads in your
> workers. However, I am planning to add a thread pool feature in bdrb
> itself, which should simplify things a bit.
> Also, ideally, when using event-driven network programming, you want all
> your sockets within the select loop for efficiency. So, you wouldn't need
> any damn threads if you can use an HTTP handler that works in an evented
> manner. What I mean to say is, you don't do this:
> a = Net::HTTP.get(URI.parse("http://www.google.com"))
> but you do,
> Backgroundrb::HTTP.open("http://www.google.com") do |data|
>   process_data(data)
> end
> What I am trying to illustrate is, when you ask to open the google.com
> page, the evented model allows you to attach a callback (the block in this
> case), which will be called when data arrives from google.com, rather
> than waiting for it in a thread. So, BackgrounDRb::HTTP.open() returns
> immediately. And you are concurrent as hell.
> But this is not possible, because if you are charging cards, then you
> are probably using ActiveMerchant, which uses Net::HTTP and which
> blocks when you make a request. But trust me, writing a simple HTTP client
> is not that difficult; there is already connect() available in all
> workers.
> > How does this work with FastCGI or multiple Mongrel-based engines,
> > where it is not guaranteed that the next request hits the same process?
> > We are using custom database tables and code for sharing the status
> > information now, but I was wondering whether the plumbing includes
> > something to address this.
> That's no problem at all. BackgrounDRb is a TCP server, so if you have
> followed the README file, then no matter which machine you are making
> the request from, if you are specifying worker X, it is guaranteed to hit
> the same worker (with an optional job_key if you are starting your worker
> dynamically).