[Mongrel] Ferret and Mongrel. OSX vs. Linux

Zed A. Shaw zedshaw at zedshaw.com
Sun Apr 15 19:06:57 EDT 2007

On Sun, 15 Apr 2007 16:04:06 -0400
Erik Morton <eimorton at gmail.com> wrote:

> I'm having a strange problem accessing a 1.7GB Ferret index from  
> within Mongrel (1.0.1) on Linux. On OSX a Ferret search through Rails  
> takes a fraction of a second. From the command line, bypassing  
> Mongrel, the search takes about the same amount of time. On Fedora  
> Core 4 a Ferret search from the command line takes a fraction of a  
> second, but the same search through Mongrel never returns. The  
> mongrel just spins, using 50% or so of the CPU.
> Has anyone seen anything like this? Is there some kind of limit on  
> the size of a file that Mongrel allows Rails to access in Linux? Help  
> is greatly appreciated. I had no problem on Linux searching a Ferret  
> index of about 1.5GB.

No, mongrel doesn't set any limits, and I don't think it could set
those kinds of OS limits without more extensive Ruby support.  It's
also odd that you have no problems on OSX but do have them on Linux.
When it comes to file IO problems it's usually the other way around.

There are a couple of things you can do to get to the bottom of this.
Since you have a reproducible test scenario, you can simply run your
query, wait for the CPU to hit 50%, and then attach to the process with:

 strace -p PID

This will print out the system calls being made by that ruby process
and should give you, the ferret author, or myself an idea of what to
do next.
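In case it helps, here's a rough sketch of that strace session (the
PID 12345 is just a placeholder; the flags are standard strace options
for following threads and narrowing the output to file IO):

```shell
# Locate the spinning mongrel process first.
ps aux | grep mongrel_rails

# Attach to it; -f follows child threads, -tt adds timestamps,
# -o writes the trace to a file you can send along.
strace -p 12345 -f -tt -o trace.log

# If the full trace is too noisy, limit it to file-related syscalls,
# which is where a huge-index problem would likely show up:
strace -p 12345 -e trace=file,read,lseek,mmap
</imports>
```

If the process is stuck in a tight loop of read/lseek calls on the
index file, that output alone will probably tell the story.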

You may also have to delve into using gdb, but that's kind of complex.
Hit me up if strace doesn't help, as there's a way to attach to a ruby
process via gdb and then force it to throw a ruby exception.  At a
minimum you could attach and run the backtrace command to see where in
the C call stack it's stuck.
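For the record, the gdb route looks roughly like this (again with a
placeholder PID; the rb_raise call assumes an MRI ruby built with
symbols, since rb_raise is MRI's C-level function for raising a Ruby
exception):

```shell
# Attach to the stuck ruby process.
gdb -p 12345
# At the (gdb) prompt, dump the C call stack:
#   (gdb) backtrace
# To force a Ruby-level exception so the app logs a Ruby backtrace:
#   (gdb) call rb_raise(rb_eRuntimeError, "debug interrupt")
#   (gdb) continue

# Or grab a one-shot C backtrace non-interactively:
gdb -p 12345 -batch -ex backtrace
```

Even just the C backtrace usually narrows it down to whether the hang
is in ferret's C extension or somewhere in ruby itself.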

Zed A. Shaw, MUDCRAP-CE Master Black Belt Sifu
http://www.awprofessional.com/title/0321483502 -- The Mongrel Book
