[Mongrel] Debugging high CPU with Mongrel

kigsteronline at mac.com
Wed Dec 6 18:35:15 EST 2006


One more thing to consider.

I remember doing research on file system management for common OS  
choices a few years back.  There was a paper published by someone at  
NetApp R&D that analyzed the time it takes to open a file by name  
when there are thousands of files in the same directory.

As I understand it, opening a file by name requires scanning the
directory file until you find the matching entry, then determining its
i-node, and then reading the file's data blocks directly from that
i-node.
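
As a rough illustration (not the kernel's actual code), the lookup
amounts to a linear scan, as in this hypothetical Ruby sketch where
Dir.foreach stands in for reading the directory file and File.stat for
fetching the i-node:

    # Hypothetical sketch of a linear directory lookup, O(n) in the
    # number of entries; all names here are illustrative only.
    def lookup_by_name(dir, name)
      Dir.foreach(dir) do |entry|    # scan the directory entry by entry
        if entry == name
          stat = File.stat(File.join(dir, entry))
          return stat.ino            # the i-node we were after
        end
      end
      nil                            # no matching entry found
    end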

This should tell you that how the directory file is structured, and
how the search is conducted, will have a dramatic effect on
performance.  Their whole point was that in standard Linux file system
implementations (and most other UNIXes) the access time grows with the
number of files in the directory, since the scan is linear.  When you
reach 10-100K files in a single directory you may notice the slowdown
without a stopwatch.  NetApp's own file system was designed to have a
constant access time regardless of the number of files in the
directory (that was their claim to fame anyway).
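
If you want to confirm this on your own box, a quick (hypothetical)
Ruby benchmark along these lines will show whether open-by-name time
climbs with directory size; the scratch path and file counts below are
arbitrary:

    require 'benchmark'
    require 'fileutils'

    # Populate directories of increasing size, then time repeated
    # opens of one file by name.  Purely illustrative numbers/paths.
    [1_000, 10_000, 50_000].each do |n|
      dir = "/tmp/dirscan_#{n}"
      FileUtils.mkdir_p(dir)
      n.times { |i| FileUtils.touch(File.join(dir, "file#{i}")) }
      secs = Benchmark.realtime do
        1_000.times { File.open(File.join(dir, "file#{n - 1}")) { } }
      end
      puts "#{n} files: #{secs} s for 1000 opens"
      FileUtils.rm_rf(dir)
    end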

Bottom line: I wouldn't rule out a directory with 10K+ files as the
main reason for your performance degradation.
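
One common mitigation (roughly what "never too many in one directory"
in the quoted message below implies) is to shard cache files into
nested subdirectories keyed on a hash, so no single directory grows
unbounded.  A hedged Ruby sketch; the layout is an arbitrary choice,
not anything Rails' fragment caching does for you out of the box:

    require 'digest/md5'
    require 'fileutils'

    # Map a cache key to root/ab/cd/<digest>: two hex chars per level
    # gives 256-way fan-out at each of two levels (depth is arbitrary).
    def sharded_path(root, key)
      digest = Digest::MD5.hexdigest(key)
      File.join(root, digest[0, 2], digest[2, 2], digest)
    end

    path = sharded_path("/var/cache/fragments", "views/posts/123")
    FileUtils.mkdir_p(File.dirname(path))
    File.open(path, "w") { |f| f.write("cached fragment body") }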

Hope this helps,

On Dec 6, 2006, at 3:13 PM, Dallas DeVries wrote:

> Thanks Konstantin,
> We will give that a try (apache + mod_proxy).  We try to cache
> everything we can (using fragment caching).  It gets to be tens of
> thousands of files quickly (never too many in one directory).  It
> just seemed bizarre to us that load goes way up periodically (and
> only recently) and that removing our cache directory fixes this.
> Just restarting Mongrel seemingly has no effect until we do remove
> the cache.  Anyway, we will try with a clean install and use
> Apache.  Thanks for the info.
