Re: batched write without control over future data availability
Date: Sun, 23 Oct 2011 20:24:01 -0700 (PDT)
----- Original Message -----
From: Dan Fandrich
Sent: Wednesday, October 12, 2011 12:04 AM
Subject: Re: batched write without control over future data availability
On Tue, Oct 11, 2011 at 02:09:52PM -0700, Candid Shot wrote:
> (Summary: how to use libcurl when we do not have the complete data up front, and the read callback is not a viable place to fetch future data.)
> I am working on an Adapter pattern: one side with simple file read/write semantics without seek. The other side is libcurl based.
> In other words, writing a new 'file' starts with a simple create() call. After that,
> write() performs the actual data transfer: the first call writes at index 0, and subsequent calls are assumed to continue sequentially
> until a close() is issued. Please note, there is no file pointer to read data from. The data arrives in a buffer of uncertain size.
This sounds like the fopen.c example program, but in reverse (fopen.c only
handles reads/GETs, not writes/PUTs). But you can use the techniques in there
to handle writes as well (with bonus points if you contribute back to us
an enhanced version of fopen.c!)
The idea is to use the multi interface to perform the network operations
to make them non-blocking. Within the write() function (it would be called
url_fwrite in fopen.c), you'd fill a buffer with the data that has become
available to send, then call curl_multi_perform() until libcurl requests
that data by calling the read callback function. That callback would pass
in the buffered data before setting a flag to signal the
curl_multi_perform loop to stop looping and return to the url_fwrite()
caller. Eventually, the application would call url_fclose(), which would do
something similar to url_fwrite() except that, instead of passing in data, the
read callback function would signal the end of data. url_fclose() would then
loop until the URL operation was complete before returning.
Thanks Dan. However, I tried something different:
We have multiple threads, and started with a curl_easy handle in each thread.
We then tried to return a CURLPAUSE_* value when we ran out of data, on a per-thread basis.
But the thread essentially froze (i.e. it kept looping somewhere and never came out of curl_easy_perform()).
Looking at the call sequence curl_easy_perform() --> curl_perform() --> curl_do_perform(),
I am not sure we could ever break out of the infinite for loop in curl_do_perform().
List admin: http://cool.haxx.se/list/listinfo/curl-library
Received on 2011-10-24