Re: don't honor redirect on PUT/POST
Date: Fri, 11 Oct 2002 14:10:58 +0200 (MET DST)
(Moving this discussion over from the bug tracker to the mailing list, so
that more people can see it and share their views.) Full background is
available here:
[ Quoting Jeff Lawson on why the current behavior is bad ]
> One related reason to explicitly make curl check and exit with an error
> code when this condition is detected is that its interactions with
> CURLOPT_READFUNCTION are very poor.
Well, that doesn't necessarily have to mean that the follow-location code is
bad; it could also mean that we lack a few mechanisms for curl to inform the
application about what happens with its data, or similar.
> When a read-function is used, curl calls the function to read the entire
> file (until 0 bytes are returned) when the first POST/PUT is performed.
That's not true. It has a limited buffer for this purpose, but if the file is
small enough, then yes, the whole file will be read at once. If the file is
big, it'll use multiple reads.
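To illustrate: a read callback is handed a bounded buffer on every call, so a large upload means libcurl invokes it repeatedly until it returns 0. A minimal sketch of such a callback, reading from a plain FILE* (the function name and userdata type are illustrative, not anything libcurl mandates):

```c
#include <stdio.h>

/* Sketch of a CURLOPT_READFUNCTION-style callback. libcurl hands it
   a buffer of at most size*nmemb bytes per call and keeps calling
   until the callback returns 0, so a big file is read in pieces. */
static size_t read_callback(char *buffer, size_t size, size_t nmemb,
                            void *userdata)
{
    FILE *fp = (FILE *)userdata;
    return fread(buffer, 1, size * nmemb, fp); /* 0 at EOF ends the upload */
}
```

An application would install this with CURLOPT_READFUNCTION and pass the FILE* via CURLOPT_READDATA.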
> However, when the redirect is encountered, curl begins calling the
> read-function again even though the end of the file has been reached.
I understand that this might make things weird. I'm not sure the best
behavior is to return an error code though.
> If the read-function continues to return 0 bytes, then curl hangs in a
> blocking condition (writing nothing and receiving nothing in the select()
> loop) until the timeout eventually occurs, 5 minutes or so later.
It does? Then there's a bug in there. Have you discovered where it hangs like
that?
> Is it really expected that read-functions know that they need to rewind and
> replay the contents of the file again after they have reached the end?
> Is it necessarily guaranteed that the read-function will always be called
> to read to the end of the file before needing to rewind for handling a
> redirected upload?
Certainly not. I think this points to a perfect reason why the "Expect:
100-continue" header is a good thing. If you set that header, I think that
libcurl won't call that read callback prematurely. Of course, that only goes
> In any case, it would be preferable to honor the intention of the RFC and
> ensure that applications cannot interact with curl in a way that would
> allow them to act in violation of the RFC. Citing examples of other common
> violators of the RFC should really not be an adequate reason to be in
> violation as well.
Saying that puts me between a rock and a hard place. One very large part of
the (lib)curl user base wants the library to perform like browsers do. If
they break standards, so does curl. You now say that you don't care about
what browsers do, that curl should follow the spec (even though the spec
itself details how clients commonly break the 302 rule).
I think the short-term work-around/fix for you is to disable FOLLOWLOCATION
and always deal with redirects yourself.
I think the long-term fix is for libcurl to have some kind of rewind or
restart command for the read.
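To make that concrete, such a rewind hook could be a callback the application registers, sketched here with purely hypothetical names (this is not an existing libcurl option in this form): before re-sending a redirected upload, the library would ask the application to reposition its data source.

```c
#include <stdio.h>

/* Hypothetical rewind callback: before re-sending a redirected
   upload, libcurl would ask the application to reposition its data
   source. Returning 0 means "rewound, go ahead"; non-zero means the
   source cannot be rewound and the transfer should fail cleanly. */
static int rewind_callback(void *userdata, long offset, int origin)
{
    FILE *fp = (FILE *)userdata;
    return fseek(fp, offset, origin) == 0 ? 0 : 1;
}
```

With something like this in place, the read callback would never need to guess that it should replay the file; the rewind would be an explicit, separate event.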
I am very interested in ideas and suggestions here.
-- Daniel Stenberg -- curl, cURL, Curl, CURL. Groks URLs.
Received on 2002-10-11