[Mongrel] clients hang on large PUTs to Mongrel::HttpHandler-based web service
rf at ufl.edu
Sun Jul 13 19:03:54 EDT 2008
Follow up to an old problem, finally solved, in case anyone else
stumbles across the same problem.
> I have a problem with a storage web service our group wrote using
> Mongrel::HttpHandler. We have a consistent problem when using
> http PUT to this service when the data is larger than about 4 GB.
Well, it turns out I could only repeat it consistently between two
particular systems. There was some back and forth on this
list, and I threw out the red herring that the http11_parser.c code
used an unsigned int for the content size. Zed pointed out that
particular variable was just dead code:
> Instead you have this line in http_request.rb:
> content_length = @params[Const::CONTENT_LENGTH].to_i
> Now, that means it relies on Ruby's base integer type to store the
> content length:
Since @params[Const::CONTENT_LENGTH] is a string, Ruby's
to_i method handles it correctly, transparently promoting the result
beyond the Fixnum range when necessary - integer overflow was not
the issue.
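A quick sanity check (a minimal sketch, not Mongrel's actual code path -
the hash here just stands in for @params) shows that Ruby's to_i has no
trouble with content lengths past the 32-bit range:

```ruby
# Hypothetical header hash standing in for Mongrel's @params;
# the key mirrors what Const::CONTENT_LENGTH resolves to.
params = { "CONTENT_LENGTH" => "5368709120" }  # 5 GB, past 2**32

content_length = params["CONTENT_LENGTH"].to_i

# Ruby promotes integers past the machine-word range automatically,
# so the full value survives.
puts content_length           # => 5368709120
puts content_length > 2**32   # => true
```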
On Tue, Jun 3, 2008 at 8:30 PM, Michael D'Auria
<michael.dauria at gmail.com> wrote:
> Are you sure this is an issue with the size of the input and not the
> length of time that the connection is left open?
That turns out to be the correct answer, though I had initially
eliminated it by using curl's limit-bandwidth option to stretch
transfers out longer than my 4GB uploads took - those all worked.
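For reference, the throttled test transfer looked something like this
(the URL and file name are placeholders, not our actual service):

```shell
# Throttle the upload to ~1 MB/s so the connection stays open far
# longer than an unthrottled 4 GB PUT would; these transfers succeeded,
# which is what ruled out a simple wall-clock timeout.
curl --limit-rate 1M -T bigfile.dat http://storage.example.com/bucket/bigfile.dat
```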
What was causing the problem was the lag between the end of the
upload/request from the client, to the time when the server finally
sent a response after processing the request (the processing time was
entirely taken up with copying the upload as a temporary mongrel
file to its permanent disk file location).
Still, tcpdump showed that the response was making it back
from the server to the client intact.
What was timing out was the firewall on the client system, which
was using stateful packet filtering (iptables on an oldish Red Hat
system). The dead time in the http request/response had
exceeded the time to live for the state tables. Turning off the
keep-state flag in the firewall rules allowed the transfer to
complete. Now it's just a matter of tweaking the parameters so
we can get keep-state working again.
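On a Red Hat box of that vintage the relevant knob is the ip_conntrack
established-connection timeout. Something along these lines (the exact
sysctl name varies by kernel version, so treat this as an assumption to
verify locally) raises the timeout instead of disabling state tracking
outright:

```shell
# Old 2.4/2.6-era kernels expose the ip_conntrack timeout; newer
# kernels call it net.netfilter.nf_conntrack_tcp_timeout_established.
# The value is in seconds; set it above the longest expected dead time
# between the end of the PUT and the server's response.
sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=14400
```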
Thanks for all the help on this.