Re: http resume
Date: Wed, 25 Feb 2004 18:03:40 +0100
At 13:51 25/02/2004, you wrote:
>On Mon, 23 Feb 2004, Eduardo García wrote:
> > On the other hand, since I must download quite large files, I would
> > like to resume downloads when partial data is already on my local disk and
> > the server supports this operation. I am testing against a server which
> > supports resume (as far as I can see from the results, and as checked by
> > running the command-line version of curl verbosely). When I set
> > CURLOPT_RESUME_FROM I get OK from curl_easy_perform, but the downloaded
> > result is a buffer sized (totalFileSize - resumedBytes) that starts from
> > the beginning of the file!!!
>libcurl tries to detect that situation (and test case 38 verifies this). Can
>you show us exactly what headers your server returns and which libcurl
>options you use? Preferably, provide us with a complete test source using a
>public URL that makes the bug appear!
> Daniel Stenberg -- http://curl.haxx.se/ -- http://daniel.haxx.se/
> [[ Do not send mails to this email address. They won't reach me. ]]
I have extracted the calls to libcurl from my code and am attaching
a small test source that reproduces 'the bug'.
The test sample begins to download a file,
http://www.bitcomet.com/achive/BitComet_0.44.exe, and aborts the download
after at least 4000 bytes. If you run it again, it checks the partially
downloaded size, resumes the download, and aborts again once it has
received at least 8000 bytes. The sample dumps libcurl's verbose info
for both attempts, so you can check the headers and bytes downloaded.
The result is 'the bug' again: the header messages (in the dumps) show
correct resume requests, but the resumed bytes always come from the
beginning of the file, not from the resume point the headers indicate.
But I ran into another strange behaviour while running it over and
over to make sure it is a good sample for you:
once I have downloaded the complete file to another disk file ("Save as" in
a web browser) in order to compare the downloads, the second run of the
sample (the resume) no longer works; it cannot download even a single byte,
and libcurl (curl_easy_perform) returns CURLE_HTTP_RANGE_ERROR.
The same happens when I change the name of the remote file to download:
incorrect resumed downloads on second runs until I download the same
URL into another disk file, after which the second runs return the range error.
Could there be some kind of problem with a proxy or some other
agent on the way between my computer and the bitcomet site?
I am really astonished by this behaviour. I fear that I cannot rely on
resumed downloads from assorted servers while I do not understand why it is
happening, and so cannot detect or work around it...
- text/plain attachment: CurlLibResumeTest.cpp