Forking off the unicorn master process to create a background worker

Eric Wong normalperson at
Tue Jun 15 20:06:11 EDT 2010

Russell Branca <chewbranca at> wrote:
> On Tue, Jun 15, 2010 at 3:14 PM, Eric Wong <normalperson at> wrote:
> > Russell Branca <chewbranca at> wrote:
> >> Hello Eric,
> >>
> >> Sorry for the delayed response, with the combination of being sick and
> >> heading out of town for a while, this project got put on the
> >> backburner. I really appreciate your response and think it's a clean
> >> solution for what I'm trying to do. I've started back in getting the
> >> job queue working this week, and will hopefully have a working
> >> solution in the next day or two. A little more information about what
> >> I'm doing, I'm trying to create a centralized resque job queue server
> >> that each of the different applications can queue work into, so I'll
> >> be using redis behind resque for storing jobs and what not, which
> >> brings me to an area I'm not sure of the best approach on. So when we hit
> >> the job queue endpoint in the rack app, it spawns the new worker, and
> >> then immediately returns the 200 ok started background job message,
> >> which cuts off communication back to the job queue. My plan is to save
> >> a status message of the result of the background task into redis, and
> >> have resque check that to verify the task was successful. Is there a
> >> better approach for returning the resulting status code with unicorn,
> >> or is this a reasonable approach? Thanks again for your help.
> >
> > Hi Russell, please don't top post, thanks.
> >
> > If you already have a queue server (and presumably a standalone app
> > processing the queue), I would probably forgo the background Unicorn
> > worker entirely.
> >
> > Based on my ancient (mid-2000s) knowledge of user-facing web
> > applications: the application should queue the job, return 200, and have
> > HTML meta refresh to constantly reload the page every few seconds.
> >
> > Hitting the reload endpoint would check the database (Redis in this
> > case) for completion, and return a new HTML page to stop the meta
> > refresh loop.
> >
> > This means you're no longer keeping a single Unicorn worker idle and
> > wasting it.  Nowadays you could do it with long-polling on
> > Rainbows!/Thin/Zbatery, too, but long-polling is less reliable for
> > people switching between WiFi access points.  The meta refresh method
> > can be a waste of power/bandwidth on the client side if the background
> > job takes a long time, though.
> >
> > I'm not familiar at all with Resque or Redis, but I suspect other folks
> > on this mailing list should be able to help you flesh out the details.
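The meta-refresh flow Eric describes above could be sketched as a minimal Rack endpoint. This is an illustrative sketch, not code from the thread: the `JOB_STATUS` hash is a hypothetical stand-in for the Redis completion check, and the names are made up.

```ruby
# Hypothetical sketch of the meta-refresh polling endpoint described above.
# JOB_STATUS stands in for the Redis lookup of a background job's status.
JOB_STATUS = {}

POLL_APP = lambda do |env|
  job_id = env["QUERY_STRING"].sub("id=", "")
  if JOB_STATUS[job_id] == "done"
    # Job finished: return a plain page with no refresh tag, which stops
    # the reload loop in the browser.
    body = "<html><body>Job #{job_id} finished.</body></html>"
  else
    # Still pending: tell the browser to reload this same URL in 3 seconds.
    body = "<html><head><meta http-equiv=\"refresh\" content=\"3\"></head>" \
           "<body>Job #{job_id} still running...</body></html>"
  end
  [200, { "Content-Type" => "text/html" }, [body]]
end
```

The application queues the job, returns this page immediately, and the browser keeps reloading it until the status check comes back done.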
> Hi Eric,
> I have a queue server, but I don't have a standalone app processing
> the jobs, because I have a large number of standalone applications on
> a single server. Right now I've got 12 separate apps running, so if I
> wanted to have a standalone app for each, that would be 12 additional
> applications in memory for handling background jobs. The whole reason
> I want to go with the unicorn worker approach for handling background
> jobs, is so I can fork off the master process as needed, avoid the
> spawning time for a normal rails instance, and only use workers as
> needed. This way I can have just a few workers running at any given
> time, rather than 1 worker for each app. The number of apps is only
> going to increase, but I want to keep the worker pool a constant. I'll
> probably just update status of completion with redis, these jobs won't
> be run by users, this is all background stuff like sending
> notifications, data analysis, feed parsing, etc etc, so I'm planning
> on just having resque initiate a request directly, and then use
> unicorn to process the task in the background.

Ah, so it's a single queue server but multiple queues?  I guess that's
where I got confused by your description.

> I didn't exactly follow what you meant when you were talking about a
> unicorn worker being idle, but from the example you responded with
> earlier, it looks like I can just spawn a new worker outside of the
> normal worker pool to handle the job. I'm pretty sure this will work.
> I was curious about the best approach for
> returning completion status, but I think just having the worker record
> its status and exit is better than having long polling connections
> open between the job queue and the new unicorn worker.

Yes, forking as I did in the example should work.  I haven't tested it,
of course :)  My instincts tell me recording the status and exiting ASAP
is better because it uses less memory.

You should test and experiment with it either way.  You know your apps,
requirements, and Redis/Resque far better than I do :)  Consider
software an evolutionary process, so whatever the "best approach" may
be, another one can usurp it eventually or be completely wrong in a
slightly different setting :)

Eric Wong

More information about the mongrel-unicorn mailing list