Virtual Memory Usage
somers.ben at gmail.com
Fri Nov 11 19:13:36 EST 2011
Thank you! I did some pmapping, and it looks like they are grabbing a
big [anon] chunk, but it's shared between all the workers for a given
master (both the low- and high-memory workers). Still no swapping.
I'll proceed with my live testing (we're going to run just one
webserver on unicorn for a few weeks), and see how it goes.
On Thu, Nov 10, 2011 at 3:50 PM, Eric Wong <normalperson at yhbt.net> wrote:
> Ben Somers <somers.ben at gmail.com> wrote:
> > m1.xlarge running Ubuntu 9.10 (yes, we know it's out of date), with
> > 15GB RAM, 4x2GHz CPU.
> > The server is behaving fine, with most of the unicorn workers sitting
> > at about 280MB resident memory, comparable to passenger's workers. But
> > I've been testing how many workers to run with, and I've noticed that
> > once I get above three workers, a couple of them get ballooning
> > virtual memory, jumping up to a little over 2GB. I initially thought
> > that the server was overloaded and swapping, but swap usage is still
> > 0, and I get this behavior at over 50% free memory. It doesn't seem to
> > be causing performance problems, but I was wondering if anyone else
> > has observed this or similar behavior? I'm trying to decide if it's
> > normal or an item for concern.
> Since you're on Linux, you can run "pmap $PID" to see how chunks of
> virtual memory are used by any process.
> If a chunk is backed by a regular file, it hardly matters: the kernel
> will share that memory with other processes and won't swap it (since
> it's already backed by a regular file).
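[A quick sketch of the pmap inspection described above; $$ (the current
shell's PID) stands in for a unicorn worker PID so the commands are
self-contained, and the /proc fallback is an assumption for systems
without procps installed.]

```shell
# Inspect how a process's virtual memory is laid out. Substitute a
# unicorn worker PID for $$ (the current shell, used here so the
# example runs as-is).
PID=$$

# pmap ships with procps on most Linux distros; fall back to the raw
# kernel view if it is missing.
if command -v pmap >/dev/null 2>&1; then
  pmap "$PID"
else
  cat "/proc/$PID/maps"
fi

# Anonymous mappings have no pathname in the last column of
# /proc/PID/maps; these are the "[ anon ]" chunks pmap reports.
awk '$6 == "" { print $1 }' "/proc/$PID/maps"
```

File-backed segments (the ruby binary, shared libraries, mmap'd database
files) show a pathname and are shared between workers; the pathless
anonymous ones are the segments worth watching.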
> Big "[ anon ]" chunks you see in pmap output are usually not shared, so
> potentially more problematic. It depends on your application and the
> libraries it uses, of course. Some libraries will share anonymous
> memory between processes (raindrops does this for example, but only for
> small integer counters).
> I've seen Unicorn processes using TokyoCabinet and TDB with database
> files many gigabytes large with large VM sizes to match. I also know
> both of those database libraries cooperate with the VM subsystem so I
> had nothing to worry about as far as memory usage goes :)
> In case you (or somebody else) notice high RSS and you're using the
> stock glibc malloc, try setting MALLOC_MMAP_THRESHOLD_=131072 or
> similar. The author of jemalloc once blogged about this behavior with
> glibc malloc, the title was "Mr. Malloc gets schooled" or something
> along those lines. Making the malloc implementation use mmap() more
> to reduce memory usage may hurt performance, though.
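[A minimal sketch of applying the glibc tunable mentioned above; the
unicorn invocation and config path are placeholders, not from the
original mail.]

```shell
# MALLOC_MMAP_THRESHOLD_ (note the trailing underscore) makes glibc
# malloc serve allocations at or above this size via mmap() instead of
# growing the heap, so freed memory returns to the kernel immediately.
# 131072 bytes = 128 KiB, the value suggested above.
export MALLOC_MMAP_THRESHOLD_=131072

# Confirm the variable is in the environment child processes inherit:
env | grep '^MALLOC_MMAP_THRESHOLD_='

# Then start the server under it (placeholder config path):
# exec unicorn -c /path/to/unicorn.rb
```

Setting it in the environment before exec'ing the master means all
forked workers inherit it.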
> I always do incremental processing on large datasets so that helps
> me avoid issues with unchecked/unpredictable memory growth due to
> malloc behavior.
>  http://bogomips.org/ruby-tdb/
> Unicorn mailing list - mongrel-unicorn at rubyforge.org