
curl-library

Re: http resume

From: Eduardo García <garedagmad_at_telefonica.net>
Date: Wed, 03 Mar 2004 11:25:52 +0100

Hello again Daniel.

I have had no time at all in the last two or three days to send you this
info, but finally, here are the headers.

I knew from the curl documentation that I was always implicitly sending
"Pragma: no-cache" in the HTTP headers, so I added a small change to the
test program:
     /* appending "Pragma:" with no value removes libcurl's default
        "Pragma: no-cache" header */
     struct curl_slist* headers = NULL;
     headers = curl_slist_append(headers, "Pragma:");
     curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
and tested it again both with and without this no-cache Pragma header.

I am using libcurl 7.10.7 (as reported by the DLL version) on Windows XP SP1.
As you can see, I am programming in plain C (although compiled as C++) in
MS Visual Studio 7.0.
For this headers test I was downloading:
http://www.bitcomet.com/achive/BitComet_0.40.zip

This is what I tested; I am sending you the headers as captured by
VERBOSE + the debug function (I have shortened the binary part of the
debug dump, keeping only the very first bytes, in order to show whether
the downloaded bytes resume from the beginning of the file or from the
resume point):

1) Begin downloading the file and abort the download after 4 KB:
         curlDebug1.before.log
2) (Try to) resume the download with curl's default "Pragma: no-cache"
header, and abort the download after another 4 KB:
         curlDebug2_without_cache.before.log
         (I got the beginning of the file instead of the resumed bytes)
3) (Try to) resume the download (from the same point) having removed the
no-cache header, and abort the download after another 4 KB:
         curlDebug2_cacheAllowed.before.log
         (I got the beginning of the file instead of the resumed bytes)
4) I downloaded the entire file with another application. The filename it
was saved under on disk is not the same one we are using when testing, so
our test program is not trying to resume on the fully downloaded file: this
is not the reason for any CURLE_HTTP_RANGE_ERROR.
5) Having fully downloaded the file once, and having deleted any partial
downloads from the previous tests, begin downloading the file and abort
the download after 4 KB:
         curlDebug1.after.log
6) (Try to) resume the download with curl's default "Pragma: no-cache"
header, and abort the download after another 4 KB:
         curlDebug2_without_cache.after.log
         (I got no bytes at all!!! curl reports CURLE_HTTP_RANGE_ERROR)
7) (Try to) resume the download (from the same point) having removed the
no-cache header, and abort the download after another 4 KB:
         curlDebug2_cacheAllowed.after.log
         (I got the correctly resumed bytes!!! Remember that I must have
previously downloaded the full file in order for this to work :-( )
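(In standard HTTP terms, and paraphrasing rather than quoting the logs,
the three outcomes above would correspond to exchanges like this:)

    Request sent by libcurl when resuming at 4 KB:
        GET /achive/BitComet_0.40.zip HTTP/1.1
        Range: bytes=4096-

    Case 7 (resume works):        HTTP/1.1 206 Partial Content
                                  Content-Range: bytes 4096-...
    Cases 2 and 3 (restart):      HTTP/1.1 200 OK
                                  (full body from byte 0, Range ignored)
    Case 6 (range error):         no usable ranged reply; libcurl
                                  returns CURLE_HTTP_RANGE_ERROR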

Now you can see that allowing or denying the use of a cache is connected to
the final result. I have run this test several times, deleting every cache
and temporary file on my computer, to be sure there was no caching effect
on the machine after having fully downloaded the file (I never fully trust
the WinInet/IExplorer-connected architecture... you know). I have even run
the test on another computer on the same LAN, after having downloaded the
file on the first machine but not on this second one, and the results were
always the same. This LAN is connected to the Internet through a DSL router
with NO caching capabilities, but I cannot assure you that our DSL provider
does not have some cache-like artefact between us and the web server...

Let me know if you can deduce anything from this new info, or if you need
more data to understand it.

Thanks,
Eduardo García

At 11:13 01/03/2004, you wrote:
>On Wed, 25 Feb 2004, Eduardo García wrote:
>
> > I have extracted calls to libcurl from my code and I am
> attaching a
> > little test source reproducing 'the bug'. The test sample begins to
> download
> > a file: http://www.bitcomet.com/achive/BitComet_0.44.exe and aborts the
> > download after at least 4000 bytes. If you run it again it checks the
> > partially downloaded size and resumes the download after it has got 8000
> > bytes at least, and then aborts it again. The sample dumps libcurl verbose
> > info for both attempts, so you can check headers and bytes downloaded.
>
>Can you please provide such headers here in a mail instead? Your source code
>was severely windowsified and I would rather not sit and adjust it to build on
>my systems when you already have the info I could use to understand your
>problem!
>
> > The result is 'the bug' again: the header messages (in the dumps) show
> > correctly resumed download commands, but the resumed bytes are always from the
> > beginning of the file, and not from the resume point as the headers show.
>
>Can you show us?
>
> > But I have got another strange behaviour while running it one and
> > another in order to be sure of it as a good sample for you: Once I have
> > completely downloaded the file on other disk-file ("Save as" in a
> > web-explorer) in order to compare the downloads, the second run of the
> > sample (resume) doesn't work, it cannot download even a byte and libcurl
> > (curl_easy_perform) returns CURLE_HTTP_RANGE_ERROR.
>
>Trying to resume a download of an already downloaded file used to cause that
>bug, because you can't request a range that is beyond the file's size.
>What libcurl version are you using? What headers is this request sending and
>getting back?
>
> > May it be happenning some kind of problem with any proxy or
> somehow
> > agent in the way from my computer to/from bitcomet site ?
>
>I don't know, but I doubt it.
>
>--
> Daniel Stenberg -- http://curl.haxx.se/ -- http://daniel.haxx.se/
> [[ Do not send mails to this email address. They won't reach me. ]]

Received on 2004-03-03