[Mongrel] performance observation on redhat
cremes.devlist at mac.com
Tue Oct 2 10:56:34 EDT 2007
On Sep 23, 2007, at 1:41 PM, barsalou wrote:
> Since you're goofing around with that, how about some of the other
> settings like maxtime and ttl values?
> It seems like you have delays when some garbage collection operations
> take place, so maybe tweaking that a little more will give you the
> performance you're looking for.
> I've never used this feature, but thought it might be interesting as well.
> Mike B.
> Quoting "Wayne E. Seguin" <wayneeseguin at gmail.com>:
>> On Sep 23, 2007, at 02:30 , armin roehrl wrote:
>>> I made an interesting observation using web servers (not just
>>> Mongrel) under Red Hat Enterprise Linux ES release 4 (Nahant
>>> Update 5). Maybe this is helpful, or somebody with deeper
>>> networking expertise can comment on it.
>>> echo 500000 > /proc/sys/net/ipv4/inet_peer_threshold
>>> There is a trade-off here: too small a value causes frequent delays
>>> from inet peer storage cleaning, while too large a value works well
>>> for a limited time, but when the cleanup does hit, it becomes
>>> really expensive.
>>> Did you ever see this?
>> We might put this in the documentation, will discuss with the dev team.
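For anyone wanting to poke at this knob, the echo above has an equivalent sysctl form (just a sketch; the 500000 value is the one quoted in the thread, not a recommendation):

```shell
# Read the current threshold for the kernel's inet peer storage:
sysctl net.ipv4.inet_peer_threshold

# Set it until the next reboot (requires root):
sysctl -w net.ipv4.inet_peer_threshold=500000

# To persist it across reboots, add to /etc/sysctl.conf:
#   net.ipv4.inet_peer_threshold = 500000
```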
I've searched all over the place to confirm this issue with RHEL 4
Update 5 and have come up empty. What's the original source of the
"fix"? Also, any suggestions on how to build a test harness to
confirm that new values actually *improve* the situation rather than
make it worse?
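One way I could imagine testing it (a rough sketch only; the local URL, request count, and use of curl are my assumptions, not anything from the thread) is to time a batch of requests before and after changing the threshold and compare summary statistics:

```shell
# Summarize newline-separated per-request timings (in seconds) read
# from stdin; assumes at least a handful of samples so the p95 index
# is nonzero.
summarize() {
  sort -n | awk '
    { t[NR] = $1; sum += $1 }
    END { printf "mean=%.4f p95=%.4f max=%.4f\n",
                 sum / NR, t[int(NR * 0.95)], t[NR] }'
}

# Collect timings against the app once per candidate
# inet_peer_threshold value, then compare the summaries, e.g.:
#   for i in $(seq 1000); do
#     curl -o /dev/null -s -w '%{time_total}\n' http://127.0.0.1:3000/
#   done | summarize

# Demo with synthetic timings (seconds):
printf '0.010\n0.012\n0.014\n0.200\n' | summarize
# prints: mean=0.0590 p95=0.0140 max=0.2000
```

Running the same batch under each candidate value, several times to smooth out noise, would at least show whether the tail (p95/max) moves in the right direction.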