cURL / Mailing Lists / curl-library / Single Mail


Re: batched write without control over future data availability

From: Dan Fandrich <>
Date: Wed, 12 Oct 2011 00:04:40 -0700

On Tue, Oct 11, 2011 at 02:09:52PM -0700, Candid Shot wrote:
> (Summary: how to use libcurl where we do not have the complete data upfront, and the read callback is not a place where we can fetch future data.)
> I am working on an Adapter pattern: one side has simple file read/write semantics without seek; the other side is libcurl-based.
> In other words, writing to a new 'file' starts with a simple create() call. Subsequently,
> Write() does the actual data transfer: the first call writes at index 0, and subsequent calls are assumed to continue sequentially
> until a close() is issued. Please note there is no file pointer to read data from; data arrives in a buffer of uncertain size.

This sounds like the fopen.c example program, but in reverse (fopen.c only
handles reads/GETs, not writes/PUTs). But you can use the techniques in there
to handle writes as well (with bonus points if you contribute back to us
an enhanced version of fopen.c!)

The idea is to use the multi interface to perform the network operations
to make them non-blocking. Within the write() function (it would be called
url_fwrite in fopen.c), you'd fill a buffer with the data that has become
available to send, then call curl_multi_perform() until libcurl requests
that data by calling the read callback function. That callback would pass
in the buffered data before setting a flag to signal the
curl_multi_perform loop to stop looping and return to the url_fwrite()
caller. Eventually, the application would call url_fclose(), which would do
something similar to url_fwrite(), except that instead of passing in data, the
read callback function would signal the end of data. url_fclose() would then
loop until the URL operation was complete before returning.

>>> Dan
Received on 2011-10-12