[Backgroundrb-devel] Upper limit to number of jobs queued by BDRb?

hemant gethemant at gmail.com
Sat May 31 08:17:25 EDT 2008

On Sat, May 31, 2008 at 1:08 AM, Raghu Srinivasan
<raghu.srinivasan at gmail.com> wrote:
> On Fri, May 30, 2008 at 12:01 PM, hemant kumar <gethemant at gmail.com> wrote:
>> On Fri, 2008-05-30 at 10:25 -0700, Raghu Srinivasan wrote:
>> > Hi Hemant - no, I am not using thread_pool right now. I do have two
>> > separate workers like you and Ryan Leavengood suggested - one for the
>> > batch process and the other for the live/web user initiated process -
>> > which by the way works out great, thanks!
>> >
>> > How are other folks handling 1000s of RSS refreshes? Via BDRb - or
>> > something else? Is BDRb really the best tool for what I am trying to
>> > do? I'd really appreciate if others could share their experiences.
>> >
>> okay, then it shouldn't be a problem. Can you try the git version of
>> backgroundrb as suggested in the following link and report back to us?
>> http://gnufied.org/2008/05/21/bleeding-edge-version-of-backgroundrb-for-better-memory-usage/
> Hemant - Thanks for your help. I'll try this first on development and then
> report back. With this, will I be able to queue up 256+ or 1024+ b/g jobs?

The reason the newer version fixes many of these problems is that it
uses fork and exec, rather than plain fork(). That means each worker
process is completely independent of the others: even if your workers
open lots of files and network resources, those resources stay isolated
per worker. The Ruby runtime is not copy-on-write friendly, so with a
plain fork the child ends up copying all of the parent's memory, which
causes the memory issues you've been seeing.
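As a rough sketch of the difference (the worker command here is a
stand-in, not BackgroundRb's actual internals): with a plain fork the
child starts out sharing the parent's pages copy-on-write, but MRI's GC
writes to every live object while marking, so those pages get copied
almost immediately. With fork + exec the child replaces itself with a
fresh interpreter and shares nothing with the parent.

```ruby
# Plain fork: the child inherits the parent's entire heap. MRI's GC
# touches every object during marking, so copy-on-write pages are
# quickly duplicated and memory usage grows with each worker.
pid = fork { sleep 0.1 }   # child still carries the parent's heap
Process.wait(pid)

# fork + exec: the child immediately replaces its image with a brand-new
# Ruby process, fully independent of the parent's memory and open
# file/network resources. ("ruby -e 'exit 0'" stands in for a real
# worker script.)
pid = fork { exec("ruby", "-e", "exit 0") }
Process.wait(pid)
puts $?.exitstatus   # 0 when the exec'd worker exits cleanly
```

(fork is available on Unix-like platforms only, which is where
BackgroundRb runs anyway.)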


