[Backgroundrb-devel] Thread_pool bug?

Dave Dupre gobigdave at gmail.com
Thu Jan 3 16:40:55 EST 2008


I posted a question earlier about long-running tasks and the thread_pool (sorry
for the dup; I didn't see the first one go through).  To try to track things
down, I made a change based on a suggestion I found in the archives: I moved
my import contacts worker to its own file and set the pool_size to 1.

class ImportContactsWorker < BackgrounDRb::MetaWorker
  set_worker_name :import_contacts_worker
  pool_size(1)

  def create(args = nil)
    # Restart any import jobs that didn't complete or start
    ImportJob.process_all_ready
  end

  def import_contacts(args = nil)
    # Hand the job off to the worker's thread pool so this method returns quickly
    thread_pool.defer(args) do |job_id|
      begin
        job = ImportJob.find(job_id)
        job.process_job
      rescue => err
        logger.error "ImportContactsWorker(#{job_id}) failed! #{err.class}: #{err}"
      end
    end
  end
end

I started one long import job with no problem. Then I started another one,
expecting the thread_pool to queue it and execute it when the thread became
available.  At least, that's what the archived post says should happen. However,
my call to ask_work blocked until the first job completed.  My ask_work
call:

MiddleMan.ask_work(:worker => :import_contacts_worker,
                   :worker_method => :import_contacts,
                   :data => job_id)

Shouldn't the thread_pool have queued the request and import_contacts
returned immediately?
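
For what it's worth, a quick way to check whether the dispatch itself is
blocking would be to time the ask_work call from script/console. This is just
a diagnostic sketch, assuming job_id is the id of an existing ImportJob:

require 'benchmark'

# If ask_work only queues the request, this should return almost
# immediately even while another import is still running.
elapsed = Benchmark.realtime do
  MiddleMan.ask_work(:worker => :import_contacts_worker,
                     :worker_method => :import_contacts,
                     :data => job_id)
end
puts "ask_work returned after #{elapsed} seconds"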

These import jobs can take 10 minutes, so it's not fun for the user to hang
around and wait.
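
The flow I'm after is for the controller to just create the job record, hand
the id to the worker, and redirect right away, with the user checking the
job's status later. A rough sketch of that pattern (the controller, action,
route helper, and status column below are hypothetical, not my actual code):

class ImportJobsController < ApplicationController
  def create
    # Create the job record first so the worker can find it by id.
    job = ImportJob.create!(:status => 'ready')

    # Queue the import and return immediately; the actual work runs in
    # the worker's thread pool, not in this request.
    MiddleMan.ask_work(:worker => :import_contacts_worker,
                       :worker_method => :import_contacts,
                       :data => job.id)

    flash[:notice] = "Import started; check back for progress."
    redirect_to import_jobs_path
  end
end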

BTW, I'm still seeing bad memory growth with the backgroundrb tasks.  If I run
the same methods outside of backgroundrb, memory does not grow at all.

As it stands now, I need to come up with another plan or go back to the old
version of backgroundrb.

Dave
