[Backgroundrb-devel] Upper limit to number of jobs queued by BDRb?

Raghu Srinivasan raghu.srinivasan at gmail.com
Fri May 30 13:25:40 EDT 2008


Hi Hemant - no, I am not using thread_pool right now. I do have two separate
workers, as you and Ryan Leavengood suggested - one for the batch process
and the other for the live/web user-initiated process - which, by the way,
works out great, thanks!

How are other folks handling 1000s of RSS refreshes? Via BDRb - or something
else? Is BDRb really the best tool for what I am trying to do? I'd really
appreciate it if others could share their experiences.

Thanks in advance

On Fri, May 30, 2008 at 9:17 AM, hemant <gethemant at gmail.com> wrote:

> On Fri, May 30, 2008 at 9:35 AM, Raghu Srinivasan
> <raghu.srinivasan at gmail.com> wrote:
> > I use BDRb to process RSS feeds for users on my site (http://feedflix.com).
> >
> > I have a batch job that queries the DB for records that haven't been
> > updated in the last so many hours and kicks off a background job for
> > each of them. If N records are returned by the DB, N background jobs
> > get queued and are processed serially. As long as N is 255 or under,
> > everything works like a charm. I've noticed that whenever N >= 256
> > (2^8), BDRb stops processing any more users at the 257th job. I can
> > work around it by limiting the DB query to return no more than 255
> > records, and then all is fine. No problems at all. But over that, I
> > see this issue. Repeatedly.
> >
>
> How are you queuing the jobs? Are you using thread_pool? I am afraid
> it could be caused by the limit on the number of open file
> descriptors, which is typically 1024.
>
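For anyone landing here from the archives: hemant's file-descriptor hypothesis is easy to check from Ruby itself. A minimal sketch (plain stdlib Ruby; the actual limit values are system-dependent, and whether each queued job really holds a descriptor open is an assumption, not something confirmed in this thread):

```ruby
# Inspect the per-process open-file-descriptor limits. On many Linux
# systems the soft limit defaults to 1024, but a lower value (e.g. 256)
# would match the "stops at the 257th job" behaviour described above.
soft, hard = Process.getrlimit(Process::RLIMIT_NOFILE)
puts "soft fd limit: #{soft}, hard fd limit: #{hard}"

# The soft limit can be raised (up to the hard limit) for the current
# process before queuing the batch, e.g.:
# Process.setrlimit(Process::RLIMIT_NOFILE, [4096, hard].min, hard)
```

If the soft limit printed here is near 256, raising it (or via `ulimit -n` in the shell that starts the BDRb server) would be a quick way to test the hypothesis.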


More information about the Backgroundrb-devel mailing list