workers not utilizing multiple CPUs

Nate Clark nate at pivotallabs.com
Wed Jun 1 02:51:38 EDT 2011


Thanks for the responses, all.

Eric, you were right: our load was not enough. We had just started load
testing our app, and I think we began with too many app servers and not
enough load. Once we cranked up the load and used fewer instances, we
definitely saw all CPU cores being utilized. I was not aware that the
kernel would optimize like you described.

Once we did start seeing heavier load, our collectd data and htop were
reporting usage on the virtual cores correctly.

Thanks again, very happy with the results so far,

Nate

> On Tue, May 31, 2011 at 11:55 PM, Clifton King <cliftonk at gmail.com> wrote:
>>
>> Thanks Eric, I had expected that to be the case (we are under light
>> load as of now).
>>
>> On Tue, May 31, 2011 at 10:48 AM, Eric Wong <normalperson at yhbt.net> wrote:
>> > Clifton King <cliftonk at gmail.com> wrote:
>> >> We experience the same problem. I believe the problem has more to do
>> >> with the kernel CPU scheduler than anything else. If you figure out a
>> >> reliable way to spread the load, I'd like to hear it.
>> >
>> > Load not being spread is /not/ a problem unless there are requests that
>> > get stuck in the listen queue.
>> >
>> > If no requests are actually stuck in the queue (light load), the kernel
>> > is right to put requests into the most recently used worker since it can
>> > get better CPU cache behavior this way.
>> >
>> >
>> > == The real problem
>> >
>> > Under high loads (many cores, fast responses), Unicorn currently uses
>> > more resources because of non-blocking accept() + select().  This isn't
>> > a noticeable problem for most machines (1-16 cores).
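>> >
>> > Very roughly, each worker does something like the sketch below. This
>> > is an illustration of the select() + accept_nonblock pattern, not
>> > Unicorn's actual code; the port and handler are made up:
>> >
>> >   require 'socket'
>> >
>> >   listeners = [TCPServer.new(8080)]  # stand-in for shared listen sockets
>> >
>> >   def handle(client)                 # trivial stand-in for app dispatch
>> >     client.write("HTTP/1.1 200 OK\r\n\r\n")
>> >     client.close
>> >   end
>> >
>> >   loop do
>> >     # select() wakes *every* idle worker when a connection arrives,
>> >     # but only one accept_nonblock() call wins
>> >     ready, = IO.select(listeners)
>> >     ready.each do |sock|
>> >       begin
>> >         handle(sock.accept_nonblock)
>> >       rescue Errno::EAGAIN, Errno::ECONNABORTED
>> >         # a sibling worker won the race; go back to select()
>> >       end
>> >     end
>> >   end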
>> >
>> > Future versions of Unicorn may take advantage of /blocking/ accept()
>> > optimizations under Linux.  Rainbows! already lets you take advantage
>> > of this behavior if you meet the following requirements:
>> >
>> > * Ruby 1.9.x under Linux
>> > * only one listen socket (if worker_connections == 1 under Rainbows!)
>> > * use ThreadPool|XEpollThreadPool|XEpollThreadSpawn|XEpoll [1]
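>> >
>> > For example, a Rainbows! config along these lines (a sketch; the
>> > model is just one of the options above and the process count is
>> > made up):
>> >
>> >   worker_processes 4
>> >
>> >   Rainbows! do
>> >     use :XEpollThreadPool
>> >   end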
>> >
>> > I haven't had a chance to benchmark any of this on very big machines so
>> > I have no idea how well it actually works compared to Unicorn, only how
>> > well it works in theory :)
>> >
>> >
>> > Blocking accept() under Ruby 1.9.x + Linux should distribute load evenly
>> > across workers in all situations, even in the non-busy cases where load
>> > distribution doesn't matter (your case :).
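>> >
>> > The blocking variant looks roughly like this (again just a sketch,
>> > reusing the hypothetical handle() from above):
>> >
>> >   require 'socket'
>> >
>> >   listener = TCPServer.new(8080)  # the single listen socket
>> >
>> >   loop do
>> >     # accept() blocks in the kernel; under Linux one sleeping worker
>> >     # is woken per connection, so no select() wakeups and no race
>> >     handle(listener.accept)
>> >   end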
>> >
>> > [1] - http://rainbows.rubyforge.org/Rainbows/XEpollThreadPool.html
>> >
>> > --
>> > Eric Wong